Last week I had the pleasure of judging Hack Cambridge. Hack Cambridge is a 24-hour hackathon with over 300 of the brightest students from around the world, who come together to learn, build, tinker, push technology to its limits and make new friends.
Team Neural Fall Detection -
Real-time Event-based Monitoring System for Seniors and Elderly using Neural Networks
It has been estimated that 33% of people aged 65 will fall; at around 80, that figure increases to 50%. In case of a fall, seniors who receive help within an hour have a better rate of survival, and the faster help arrives, the less likely an injury will lead to hospitalization or the need to move into a long-term care facility. In such cases, fast visual detection of abnormal motion patterns is crucial.
In this project we proposed the use of a novel embedded Dynamic Vision Sensor (eDVS) for the task of classifying/detecting falls. Unlike standard cameras, which provide a time-sequenced stream of frames, the eDVS reports only relative changes in a scene, as individual events at the pixel level. This different encoding scheme gives the eDVS several advantages over standard cameras.
First, there is no redundancy in the data received from the sensor: only changes are reported. Second, because only events are transmitted, the sensor's temporal resolution is very high. Third, the power consumption of the overall system is small: a low-end microcontroller suffices to fetch events from the sensor, so the system can ultimately run for long periods on battery power. This project investigates how we can exploit the eDVS's fast response time and low data redundancy when making decisions about elderly motion and falls.
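To make the encoding concrete, here is a minimal sketch (with hypothetical field names) of what an eDVS event stream looks like: each event carries only a pixel position, a polarity (brightness increase or decrease) and a microsecond timestamp, so a static scene produces almost no data.

```python
from collections import namedtuple

# Hypothetical representation of one eDVS event: no frames, no redundant pixels.
Event = namedtuple("Event", ["x", "y", "polarity", "t_us"])

def events_per_second(events):
    """Effective data rate of a stream: events are only emitted where brightness changes."""
    if len(events) < 2:
        return 0.0
    span_s = (events[-1].t_us - events[0].t_us) / 1e6
    return len(events) / span_s if span_s > 0 else 0.0

# Three events over one millisecond -- a moving edge; a static scene would emit none.
stream = [Event(10, 20, 1, 0), Event(11, 20, 1, 500), Event(12, 21, 0, 1000)]
print(events_per_second(stream))  # 3 events over 1 ms -> 3000.0
```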
The computation backend was realized with a neural network classifier used to detect falls and filter outliers. The data came from 2 stimuli (LEDs blinking at different frequencies, 200 Hz and 1000 Hz) that represent the actual position of the person wearing them (a hack to simplify the problem; otherwise the events generated by the edges of the body would be enough). The changes in the positions of the stimuli encode the possible postures corresponding to falls or normal cases (i.e. the angle determined by the relative positioning of the 2 stimuli).
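The angle idea can be sketched in a few lines. This is a hypothetical illustration (the function name and example coordinates are mine, not the project's): with one LED above the other on the T-shirt, the line joining them is near-vertical when standing and near-horizontal after a fall.

```python
import math

def torso_angle(upper, lower):
    """Angle of the line joining the two LED stimuli, measured from vertical.
    upper/lower are (x, y) positions in the sensor's field of view
    (y grows downward, as in image coordinates)."""
    dx = lower[0] - upper[0]
    dy = lower[1] - upper[1]
    # atan2 of horizontal over vertical separation: ~0 deg upright, ~90 deg lying flat
    return abs(math.degrees(math.atan2(dx, dy)))

print(torso_angle((60, 20), (62, 90)))   # LEDs stacked vertically: small angle (upright)
print(torso_angle((20, 80), (90, 84)))   # LEDs side by side: large angle (fallen)
```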
We used Microsoft Azure ML Studio to implement an MLP binary classifier for the 4-dimensional input (2 stimuli x 2 Cartesian coordinates, (x, y) in the field of view). We labelled the data as Fall (F) or No Fall (NF).
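A labelled training row therefore looks like four coordinates plus a class tag. The snippet below is a hypothetical sketch (column names and example rows are mine) of assembling such rows into the kind of CSV that Azure ML Studio accepts as a dataset.

```python
import csv
import io

# Hypothetical example rows: (x1, y1, x2, y2) positions of the two LED
# stimuli in the field of view, labelled Fall (F) / No Fall (NF).
samples = [
    (60, 20, 62, 90, "NF"),   # LEDs vertically aligned: standing
    (20, 80, 90, 84, "F"),    # LEDs horizontally aligned: on the ground
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["x1", "y1", "x2", "y2", "label"])
writer.writerows(samples)
print(buf.getvalue())
```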
LEDs attached to a T-shirt flash at particular frequencies and are picked up by the DVS sensor, as depicted in the following diagram. The high temporal resolution of the sensor (microseconds) allows efficient tracking of the preset frequencies and robustness to noise (e.g. lamp bulbs flickering at the 50 Hz powerline frequency) :D
Data from the DVS is processed in Python to extract the 4 values (the positions of the two stimuli), which are fed to the Azure ML service as shown in the following diagram.
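One way this frequency-based tracking can work (a minimal sketch with hypothetical helper names, not the project's actual code): events at a pixel lit by the 200 Hz LED arrive roughly 5 ms apart, while the 1 kHz LED produces events roughly 1 ms apart, so the mean inter-event interval separates the two stimuli; the centroid of each group then gives one (x, y) value per stimulus.

```python
from collections import defaultdict

def classify_pixels(events, f_low=200.0, f_high=1000.0, tol=0.3):
    """events: list of (x, y, t_us). Labels each pixel by the blink
    frequency implied by its mean inter-event interval."""
    times = defaultdict(list)
    for x, y, t_us in events:
        times[(x, y)].append(t_us)
    labels = {}
    for px, ts in times.items():
        if len(ts) < 2:
            continue  # not enough events to estimate a frequency
        dt_us = (ts[-1] - ts[0]) / (len(ts) - 1)   # mean inter-event interval
        freq = 1e6 / dt_us
        if abs(freq - f_low) / f_low < tol:
            labels[px] = "low"     # 200 Hz stimulus
        elif abs(freq - f_high) / f_high < tol:
            labels[px] = "high"    # 1 kHz stimulus
        # pixels matching neither frequency (noise, 50 Hz flicker) are dropped
    return labels

def stimulus_position(labels, which):
    """Centroid of all pixels assigned to one stimulus -> one (x, y) value."""
    pts = [px for px, lab in labels.items() if lab == which]
    return (sum(p[0] for p in pts) / len(pts),
            sum(p[1] for p in pts) / len(pts))
```

The two centroids together form the 4D input vector sent to the classifier.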
The processed data (i.e. single events reduced to tracked stimuli positions) is then fed into the Azure Machine Learning API via POST requests, where the MLP neural network (trained with supervised learning) takes the 4D input and produces a binary decision at the output (Fall/No Fall, F/NF).
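The POST request can be sketched as follows. The endpoint URL and API key here are placeholders (the real values come from the web service tab in Azure ML Studio), and the body follows the request/response format used by classic Azure ML Studio web services; treat the exact column names as assumptions.

```python
import json
import urllib.request

# Hypothetical endpoint and key -- substitute the values Azure ML Studio provides.
URL = "https://example.azureml.net/workspaces/WORKSPACE/services/SERVICE/execute?api-version=2.0"
API_KEY = "YOUR_API_KEY"

def build_request(x1, y1, x2, y2):
    """Wrap the 4D input in the classic Azure ML Studio request body."""
    payload = {
        "Inputs": {
            "input1": {
                "ColumnNames": ["x1", "y1", "x2", "y2"],
                "Values": [[x1, y1, x2, y2]],
            }
        },
        "GlobalParameters": {},
    }
    return urllib.request.Request(
        URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer " + API_KEY},
    )

# req = build_request(20, 80, 90, 84)
# with urllib.request.urlopen(req) as resp:
#     result = json.loads(resp.read())   # response carries the F/NF decision
```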
The actual neural network parametrization, along with the training infrastructure, was set up according to the following configuration.
Next, an evaluation of the results, with 3 minutes of training data fed to the neural network and 1 minute of training (sped up due to timing constraints).
The output of the trained network, when fed with new data from the DVS sensor (the operation phase), then decides the program flow: continue processing, or, in case someone has fallen, send a notification by SMS (using the Twilio REST client) and email (using Python's smtplib with MIME messages). The notification also communicates the location at which the fall took place.
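A sketch of that notification step, under stated assumptions: the addresses, SMTP host, credentials and phone numbers below are placeholders, and the Twilio call is shown commented out since it needs the `twilio` package and real account credentials.

```python
import smtplib
from email.mime.text import MIMEText
# from twilio.rest import Client   # pip install twilio

def build_alert(location):
    """Email alert carrying the location at which the fall took place."""
    msg = MIMEText(f"Fall detected at {location}. Please check immediately.")
    msg["Subject"] = "FALL ALERT"
    msg["From"] = "monitor@example.com"       # hypothetical addresses
    msg["To"] = "caregiver@example.com"
    return msg

def notify(location):
    msg = build_alert(location)
    with smtplib.SMTP("smtp.example.com", 587) as server:  # hypothetical SMTP host
        server.starttls()
        server.login("monitor@example.com", "password")
        server.send_message(msg)
    # SMS via the Twilio REST client (credentials from the Twilio console):
    # Client(ACCOUNT_SID, AUTH_TOKEN).messages.create(
    #     to="+44xxxxxxxxxx", from_="+44xxxxxxxxxx", body=msg.get_payload())
```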
This was realised using the web service infrastructure available in Azure.