I am a PhD candidate in Information Science at Cornell Tech, Cornell University. As part of my PhD research, I am building Computational Healthcare, a system that advances medical research by enabling physicians and researchers to quickly search and analyze millions of healthcare records. I also have experience applying data mining methods to problems in security, gained during a summer 2016 internship with the Abuse Prevention team at Dropbox. More broadly, I am interested in machine learning and data mining applied to text, image, and video datasets. Previously, I completed a Master of Engineering in Computer Science at Cornell.
As part of my PhD research, I am developing Computational Healthcare, a search and aggregation engine. Computational Healthcare indexes and aggregates data from millions of patient visits, allowing physicians and researchers to instantly search through aggregate results and get answers to queries that would otherwise take multiple weeks.
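The core idea behind instant answers is to pre-aggregate counts at index time so that a query becomes a lookup rather than a scan over millions of records. The sketch below is a minimal illustration of that design with hypothetical toy records (the field names, codes, and data are illustrative, not the actual system's schema):

```python
from collections import Counter

# Hypothetical toy records: one (diagnosis_code, age_group) tuple per
# patient visit. The real system indexes millions of visits; these
# values are purely illustrative.
visits = [
    ("MS", "30-39"), ("MS", "40-49"), ("SARC", "30-39"),
    ("MS", "30-39"), ("SARC", "50-59"),
]

# Pre-aggregate counts per (diagnosis, age group) at index time,
# so answering a query is a constant-time dictionary lookup
# instead of a scan over all visit records.
index = Counter(visits)

def query(diagnosis, age_group):
    """Return the aggregate visit count for a diagnosis/age-group pair."""
    return index[(diagnosis, age_group)]

print(query("MS", "30-39"))  # 2
```

At scale the same pattern applies: the expensive aggregation pass runs once over the raw records, and every subsequent query reads only the small aggregate table.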
I am specifically interested in using Computational Healthcare to study uncommon diseases (fewer than 50,000 cases per year), such as Multiple Sclerosis and Sarcoidosis, as well as other rare diseases that cannot be studied without access to data on millions of patients.
Computational Healthcare is developed in collaboration with the Radiology and Anesthesiology departments at Weill Cornell Medical College. I am also a co-founder of Temporal Health, which aims to provide Computational Healthcare as a platform.
Videos recorded by dashcams in personal vehicles, taxis, government vehicles, and trucks provide information not only about the cars and drivers but also about roads and surroundings. Despite the ample availability of such videos, there are currently no libraries or frameworks for extracting information from them. I have started working on an app that allows operators of large fleets of dashcam-equipped vehicles to extract information with minimal manual annotation.
I am currently developing a simple visual indexing and search system that uses features derived from Google's Inception model trained on the ImageNet dataset, along with an approximate nearest neighbor query server. I am also developing a Swift app that computes the index vector on a smartphone and retrieves results from the server. Images are indexed efficiently (about $2 per 400,000 images) using AWS spot GPU instances.
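The retrieval side of such a system reduces to ranking stored feature vectors by similarity to a query vector. The sketch below illustrates this with tiny made-up 4-dimensional vectors standing in for CNN embeddings and exact brute-force cosine ranking; the real system uses high-dimensional Inception features and an approximate (rather than exhaustive) nearest neighbor index, and all names here are hypothetical:

```python
import math

# Toy 4-dim feature vectors standing in for Inception embeddings;
# in a real system these would come from a pretrained CNN.
index_vectors = {
    "cat.jpg": [0.9, 0.1, 0.0, 0.0],
    "dog.jpg": [0.8, 0.2, 0.1, 0.0],
    "car.jpg": [0.0, 0.1, 0.9, 0.3],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def search(query_vec, k=2):
    """Rank indexed images by cosine similarity to the query vector."""
    ranked = sorted(index_vectors.items(),
                    key=lambda kv: cosine_similarity(query_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]

print(search([0.85, 0.15, 0.05, 0.0]))  # ['cat.jpg', 'dog.jpg']
```

An approximate index (e.g. via locality-sensitive hashing or space partitioning) trades a small loss in recall for sublinear query time, which is what makes the query-server approach practical at scale.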
To experiment with building interactive computer vision applications in JavaScript, I built EraseImage.com, an online tool that lets users perform image segmentation and background removal entirely in client-side JavaScript. It is implemented using Angular, Fabric.js, and superpixel algorithms.
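Interactive background removal of this kind typically lets the user click a region, then grows a selection over connected pixels of similar color. The following Python sketch shows that core idea on a toy grayscale grid using a flood fill; it is a simplified stand-in for the superpixel-based selection (the image data and tolerance are made up for illustration):

```python
from collections import deque

# Toy 4x4 grayscale "image": 9s form the foreground, 1s the background.
image = [
    [1, 1, 9, 9],
    [1, 1, 9, 9],
    [1, 1, 1, 9],
    [1, 1, 1, 1],
]

def select_region(img, seed, tol=2):
    """Flood-fill from a seed pixel, collecting connected pixels whose
    value is within `tol` of the seed -- a simplified stand-in for
    clicking a region to mark it as foreground or background."""
    h, w = len(img), len(img[0])
    sy, sx = seed
    target = img[sy][sx]
    seen = {(sy, sx)}
    queue = deque([(sy, sx)])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in seen
                    and abs(img[ny][nx] - target) <= tol):
                seen.add((ny, nx))
                queue.append((ny, nx))
    return seen

background = select_region(image, (0, 0))
print(len(background))  # 11 background pixels selected
```

Superpixels speed this interaction up by pre-grouping pixels into perceptually coherent patches, so the user selects a handful of regions instead of thousands of individual pixels.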
Copyright Akshay Bhat, 2016. All rights reserved.