An intelligent camera system for the Healthcare

Title: An intelligent camera system for the Healthcare
Author: Aertssen, A.J.J.
Contributor: Jonker, P.P. (mentor); Rudinac, M. (mentor)
Faculty: Mechanical, Maritime and Materials Engineering
Department: BioMechanical Engineering
Programme: BME
Date: 2011-08-22

Abstract

Objective: Injuries caused by falls of elderly people are a common worldwide problem, and the ageing of the population will further increase the related burdens and costs. Recent technology using active monitoring systems has proven successful in analyzing human actions. What is lacking in these studies is implementation in real elderly home environments: most healthcare research focuses on the detection of falls rather than on the detection of normal daily actions. We present a single camera with a fisheye lens that is capable of monitoring an entire room. Using only one camera reduces costs and simplifies the computational burden, resulting in a real-time system. While various studies have addressed the detection of such actions, none has used real data from elderly people in their own living environments. Using such data increases the difficulty of action recognition, because every living environment has different settings and noise factors.

Main: We developed an action detection system that monitors the actions of elderly people in their homes during normal daily activities, with the aim of raising an alarm in case of danger. Our system is equipped with a single wide-angle camera mounted on the ceiling of an elderly home. This gives a top-view image of the environment, resulting in a clear map of household objects without any occlusions. The main idea is to monitor the motion information of the elderly and to model actions as a change of motion or poses in time that leads to a specific action.
After background subtraction using Gaussian Mixture Models, the motion information is extracted using the Motion History Images method and analyzed to detect important actions. We propose to model actions as the shape deformations of the motion history image over time. Every action is defined at several moments in time, called "action peaks", using different features: holistic area, contour and location measurements, as well as Fourier shape descriptors. We combine all the measurements in a Bag of Words model and create unique action representations called action signatures. These action signatures are then transformed and combined using feature fusion in order to learn the optimal combination of features for each action. Learning the optimal feature fusion is performed using Support Vector Machines. The final trained system is used to classify each new action.

Results: The results are divided into two sections. First, scientific data is used that was recorded in a test room simulating an elderly home, with colleagues and students. We recorded and detected multiple actions: Bending, Walking, Falling, and Collapsing, all with very high accuracy rates, above 93%. Finally, real data was recorded in real elderly homes, observing 4 elderly people. Different actions were monitored: Walking, Sitting, Open Door, and Eating. The results in a real environment show high detection rates and prove that the system is able to detect multiple human actions using only a single camera.

Subject: Computer Vision; Healthcare Applications; Motion & Action Detection
To reference this document use: http://resolver.tudelft.nl/uuid:a7517678-7120-45f6-973e-248de17998be
Embargo date: 2011-09-22
Part of collection: Student theses
Document type: master thesis
Rights: (c) 2011 Aertssen, A.J.J.
Files: Total_Thesis_Jerry_Aertssen.pdf (PDF, 2.99 MB)
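The Motion History Image representation mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the thesis implementation: it assumes a per-frame binary motion mask (as would come from GMM background subtraction) and applies the standard MHI update rule, in which moving pixels are stamped with the maximum timestamp tau and all other pixels decay toward zero.

```python
import numpy as np

def update_mhi(mhi, motion_mask, tau):
    """One MHI update step.

    Pixels flagged as moving (e.g. by GMM background subtraction)
    are set to the maximum timestamp tau; all other pixels decay
    by one step, clipped at zero. Recent motion is therefore
    bright, older motion fades out.
    """
    return np.where(motion_mask, float(tau), np.maximum(mhi - 1.0, 0.0))

# Toy 4x4 example: motion in one corner pixel for a single frame,
# followed by two frames without motion.
tau = 10
mhi = np.zeros((4, 4))
mask = np.zeros((4, 4), dtype=bool)
mask[0, 0] = True
mhi = update_mhi(mhi, mask, tau)                 # corner pixel -> 10
mhi = update_mhi(mhi, np.zeros_like(mask), tau)  # decays to 9
mhi = update_mhi(mhi, np.zeros_like(mask), tau)  # decays to 8
print(mhi[0, 0])  # 8.0
```

The resulting grey-level surface is what the thesis analyzes as a deforming shape: its area, contour, and location at the "action peaks" become the per-action features.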
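The Fourier shape descriptors used among the peak features can be sketched like this. This is a generic textbook construction, assumed rather than taken from the thesis: the contour is read as a complex signal, the DC term is dropped to remove translation, the magnitudes are divided by the first harmonic to remove scale, and taking magnitudes discards rotation and starting-point dependence.

```python
import numpy as np

def fourier_descriptors(contour, n_coeffs=8):
    """Translation-, scale- and rotation-invariant contour descriptors.

    contour: (N, 2) array of boundary points, encoded as complex
    numbers x + iy before applying the FFT.
    """
    z = contour[:, 0] + 1j * contour[:, 1]
    mags = np.abs(np.fft.fft(z))
    # Skip the DC term (translation), normalize by the first
    # harmonic (scale); magnitudes ignore rotation/start point.
    return mags[1:n_coeffs + 1] / mags[1]

# Square contour sampled at its 4 corners (coarse but illustrative).
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
d1 = fourier_descriptors(square, n_coeffs=3)
# The same square translated and scaled yields identical descriptors.
d2 = fourier_descriptors(square * 3.0 + 7.0, n_coeffs=3)
print(np.allclose(d1, d2))  # True
```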
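The Bag of Words step that turns per-peak measurements into an "action signature" can be sketched as vector quantization against a codebook followed by a normalized histogram. The feature values and codebook below are hypothetical; the thesis then feeds such signatures, combined by learned feature fusion, into Support Vector Machines for classification.

```python
import numpy as np

def action_signature(peak_features, codebook):
    """Quantize per-action-peak feature vectors against a codebook
    and build a normalized word histogram (the action signature).

    peak_features: (P, D) array, one feature vector per action peak.
    codebook:      (K, D) array of visual-word centroids
                   (e.g. obtained by k-means on training features).
    """
    # Euclidean distance from every peak feature to every codeword.
    d = np.linalg.norm(peak_features[:, None, :] - codebook[None, :, :],
                       axis=2)
    words = d.argmin(axis=1)  # nearest codeword per peak
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

# Hypothetical 2-D features for 4 action peaks and a 3-word codebook.
feats = np.array([[0.1, 0.0], [0.0, 0.1], [1.0, 1.1], [5.0, 5.0]])
book = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
sig = action_signature(feats, book)
print(sig)  # [0.5  0.25 0.25]
```

Because the histogram is normalized, signatures of actions with different numbers of peaks remain directly comparable, which is what makes them usable as fixed-length inputs to an SVM classifier.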