This project empowers people to easily set up customized in-home cameras and sensors and to monitor the video and data via the web, whether to assist family and friends in need of care or to assist oneself (e.g., for the hearing or vision impaired). The project enables people to set up customized automated notifications (e.g., a text message or a blinking light) upon detection of critical situations, such as a person not arising in the morning, or to record data to detect longer-term critical trends, such as a person getting less exercise or frequently stumbling. The project's novel methods emphasize people customizing the system to meet their unique and changing needs and situations, including their privacy needs.
Assistive monitoring analyzes data from cameras and sensors for events of interest, and notifies the appropriate persons in response. The ability of the end-user to customize an assistive monitoring system is essential to the system's practical use.
Automated fall detection on privacy-enhanced video
Falls are detectable by algorithms that process raw video from in-home cameras, but raw video raises privacy concerns, especially when stored on a local computer or streamed to a remote computer for processing. We developed an algorithm for automated fall detection on both raw video and privacy-enhanced video. The key observation is that the minimum bounding rectangle (MBR) around the moving object has nearly the same height and width for raw video and all privacy-enhanced video.
Same fall with raw and privacy-enhanced video
We compared various features of the MBR for fall detection sensitivity and specificity. Sensitivity is the ratio of correct fall detections over actual falls, e.g., if 11 falls were correctly detected but there were 12 total falls, sensitivity is 11/12 = 0.92. Specificity is the ratio of correct non-fall reports over actual non-falls, e.g., if 10 non-falls were reported but there were 11 non-falls, specificity is 10/11 = 0.91.
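For concreteness, the two metrics can be computed directly from detection counts. This is a minimal sketch; the function names are illustrative, not taken from our implementation.

```python
def sensitivity(correct_fall_detections, actual_falls):
    """Fraction of actual falls that were correctly detected."""
    return correct_fall_detections / actual_falls

def specificity(correct_nonfall_reports, actual_nonfalls):
    """Fraction of actual non-falls that were correctly reported as non-falls."""
    return correct_nonfall_reports / actual_nonfalls

# The examples from the text: 11 of 12 falls detected, 10 of 11 non-falls reported.
print(round(sensitivity(11, 12), 2))  # 0.92
print(round(specificity(10, 11), 2))  # 0.91
```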
Width of MBR in pixels
Height of MBR in pixels
Height-to-width ratio of MBR
Width-to-height ratio of MBR
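As a sketch, the four MBR features above can be computed from a binary foreground (moving-object) mask. The mask representation and function name here are assumptions for illustration, not our actual implementation.

```python
def mbr_features(mask):
    """Compute MBR features from a binary foreground mask,
    given as a list of rows of 0/1 values. Returns None if no
    foreground pixels are present."""
    rows = [r for r, row in enumerate(mask) if any(row)]
    cols = [c for row in mask for c, v in enumerate(row) if v]
    if not rows:
        return None
    height = max(rows) - min(rows) + 1
    width = max(cols) - min(cols) + 1
    return {
        "width": width,                      # width of MBR in pixels
        "height": height,                    # height of MBR in pixels
        "height_to_width": height / width,   # tall/narrow when standing
        "width_to_height": width / height,   # wide/short after a fall
    }
```

A standing person yields a tall, narrow MBR (high height-to-width ratio), while a fallen person yields a wide, short MBR, which is why these simple features carry fall information.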
We compared the sensitivity and specificity for raw video and privacy-enhanced video, using the exact same fall detection algorithm based on the height of the MBR in pixels.
The automated fall detection algorithm performed well on the privacy-enhanced videos compared to raw video, except perhaps blur, which suffered when the colors of the person and the background blended together, making the moving object harder to identify.
Energy expenditure estimation from video
Automatically estimating a person's energy expenditure has numerous uses, including determining whether an elderly person living alone is achieving sufficient levels of daily activity. Sufficient activity has been shown to significantly delay the onset of dementia, to reduce the likelihood of falls, to improve mood, and more. Energy expenditure is also important for monitoring diabetic patients.
A key expected use of video-based energy expenditure estimation is to compare a person's activity levels across many days, to detect negative trends and thus introduce interventions. As such, a goal of estimation is not necessarily accurate calorie estimation, but rather correct relative estimation of energy expenditure across days, including correct ratios among low/medium/high activity days. Thus, our first experiments sought to determine the fidelity of our video-based energy estimation. We compared our video-based energy expenditure algorithm to a commercially available device, namely the BodyBugg.
Low activity day
Medium activity day
High activity day
The slope changes of the video-based approach are very similar to those of the BodyBugg. Notice that the video-based approach is consistently off by 230 Calories. We exploited this observation to improve Calorie prediction accuracy from 86.4% to 91.1%.
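Because the error is a constant offset rather than a varying one, it can be removed with a simple bias correction. The sketch below assumes the video-based estimate runs low by the observed 230 Calories; the direction of the correction and the function name are assumptions for illustration.

```python
BIAS_CALORIES = 230  # constant offset observed relative to the BodyBugg

def corrected_estimate(video_calories):
    """Apply the fixed bias correction to a raw video-based Calorie estimate."""
    return video_calories + BIAS_CALORIES

# e.g., a raw video-based estimate of 1000 Calories becomes 1230 after correction
print(corrected_estimate(1000))  # 1230
```

Note that a constant offset does not disturb day-to-day slope comparisons, which is why relative trend detection worked well even before the correction.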
A comprehensive list of the video recordings can be found at this link. Here's an example video recording:
Privacy perception and fall detection accuracy with privacy-enhanced video
Video of in-home activity provides valuable information for assistive monitoring but raises privacy concerns. Raw video can be privacy-enhanced by obscuring the appearance of a person. We considered raw video and five privacy enhancements.
We conducted an experiment with 376 non-engineering participants to determine whether there exists a privacy enhancement that provides sufficient perceived privacy while enabling accurate fall detection by humans.
The oval is the best trade-off between sufficient privacy and fall detection accuracy. However, the optimal privacy enhancement depends on the end-user's requirements.
Monitoring and Notification Flow Language (MNFL)
MNFL enables end-user customization of assistive monitoring systems. Data flows from monitoring devices on the left to notification methods on the right. Each graphical block is always-executing, intuitively analogous to objects in the physical world. The always-executing behavior gives instant feedback to the end-user when two blocks are connected, making development fast and rewarding.
Sensors typically output Boolean (on/off) or Integer (e.g., 78° F) data, while cameras output Video data. Cameras are integrated with sensors via feature extractors, which convert Video data to Boolean or Integer data. A feature extractor determines the amount of some physical phenomenon from Video data, e.g., the amount of rightward motion from the camera's perspective.
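A minimal sketch of one such feature extractor, assuming binary foreground masks (e.g., from frame differencing) are already available per frame; the mask representation and names are illustrative assumptions, not EasyNotify's actual code.

```python
def centroid_x(mask):
    """Horizontal centroid (column index) of foreground pixels in a
    binary mask, given as a list of rows of 0/1 values."""
    cols = [c for row in mask for c, v in enumerate(row) if v]
    return sum(cols) / len(cols) if cols else None

def rightward_motion(prev_mask, curr_mask):
    """Feature extractor: amount of rightward motion between two frames,
    as the positive shift (in pixels) of the foreground centroid.
    Returns 0 for leftward motion or when no foreground is present,
    so the output is Integer-like data a sensor block could consume."""
    a, b = centroid_x(prev_mask), centroid_x(curr_mask)
    if a is None or b is None:
        return 0
    return max(0, b - a)
```

For example, a foreground blob that shifts from the left edge to the right edge of a 3-pixel-wide frame yields a rightward motion of 2 pixels.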
We implemented MNFL as a web browser application called EasyNotify. This example video shows possible solutions to the problem of an Alzheimer's patient leaving home at night and not returning for a prolonged period:
We conducted an experiment with 51 non-engineering, non-science undergraduate participants. Participants spent less than 7 minutes per challenge problem.
Publications
A. Edgcomb, F. Vahid. Privacy Perception and Fall Detection Accuracy for In-Home Video Assistive Monitoring with Privacy Enhancements. ACM SIGHIT (Special Interest Group on Health Informatics) Record, 2012.
A. Edgcomb, F. Vahid. Automated Fall Detection on Privacy-Enhanced Video. IEEE Engineering in Medicine & Biology Society, 2012, 4 pages.
A. Edgcomb, F. Vahid. MNFL: The Monitoring and Notification Flow Language for Assistive Monitoring. ACM SIGHIT International Health Informatics Symposium (IHI), 2012.
A. Edgcomb, F. Vahid. Feature Extractors for Integration of Cameras and Sensors during End-User Programming of Assistive Monitoring Systems. Wireless Health, 2011.