Common Sense Based Joint Training of Human Activity Recognizers

Given sensors to detect object use, commonsense priors of object usage in activities can reduce the need for labeled data when learning activity models. It is often useful, however, to understand how an object is being used, i.e., the action performed on it. We show how to add personal sensor data (e.g., from accelerometers) to obtain this detail, with little labeling and feature-selection overhead. By synchronizing the personal sensor data with object-use data, it is possible to use easily specified commonsense models to minimize labeling overhead. Further, combining a generative commonsense model of activity with a discriminative model of actions can automate feature selection. On observed activity data, automatically trained action classifiers give 40/85% precision/recall on 10 actions. Adding actions to pure object-use improves precision/recall from 76/85% to 81/90% over 12 activities.
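The synchronization idea in the abstract can be sketched as follows: object-use detections that overlap an accelerometer window in time, combined with a commonsense prior over which actions are plausible for each object, yield weak labels for that window without hand annotation. This is an illustrative sketch only; the object-to-action table, function names, and data shapes are hypothetical, not the paper's actual pipeline.

```python
# Hypothetical commonsense prior: plausible actions per object.
# (Illustrative; the paper derives such priors rather than hard-coding them.)
OBJECT_ACTION_PRIOR = {
    "kettle": ["lift", "pour"],
    "toothbrush": ["brush"],
    "door": ["push", "pull"],
}

def weak_labels(object_events, accel_windows):
    """Weakly label accelerometer windows via synchronized object use.

    object_events: list of (start_time, end_time, object_name) detections.
    accel_windows: list of (timestamp, feature_vector) from a wearable sensor.
    Returns (feature_vector, candidate_actions) pairs for windows that
    fall inside an object-use event with a known prior.
    """
    labeled = []
    for start, end, obj in object_events:
        actions = OBJECT_ACTION_PRIOR.get(obj)
        if actions is None:
            continue  # no commonsense knowledge about this object
        for t, features in accel_windows:
            if start <= t <= end:
                # The window inherits the object's plausible actions
                # as a (noisy) candidate label set.
                labeled.append((features, actions))
    return labeled
```

The candidate label sets produced this way could then train a discriminative action classifier, with the generative activity model resolving which candidate action each window most likely corresponds to.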
Added: 29 Oct 2010
Updated: 29 Oct 2010
Type: Conference
Year: 2007
Where: IJCAI
Authors: Shiaokai Wang, William Pentney, Ana-Maria Popescu, Tanzeem Choudhury, Matthai Philipose