Dynamic sensor data segmentation for real-time knowledge-driven activity recognition

https://doi.org/10.1016/j.pmcj.2012.11.004

Abstract

Approaches and algorithms for activity recognition have recently made substantial progress due to advancements in pervasive and mobile computing, smart environments, and ambient assisted living. Nevertheless, real-time continuous activity recognition remains difficult to achieve because sensor data segmentation is still a challenge. This paper presents a novel approach to real-time sensor data segmentation for continuous activity recognition. Central to the approach is a dynamic segmentation model, based on the notion of varied time windows, which can shrink and expand the segmentation window by using temporal information of sensor data and activities as well as the state of activity recognition. The paper first analyzes the characteristics of activities of daily living, from which a segmentation model applicable to a wide range of activity recognition scenarios is motivated and developed. It then describes the working mechanism and relevant algorithms of the model in the context of knowledge-driven activity recognition based on ontologies. The presented approach has been implemented in a prototype system and evaluated in a number of experiments. Results show an average real-time recognition accuracy above 83% in all experiments, demonstrating the viability of the approach and the underlying model.

Introduction

Ambient Assisted Living (AAL) is motivated by the need to support independent living, whereby technology provides people with proactive services in their normal environments, e.g. at home. Smart Homes (SH) have emerged as a viable technology that can support individuals, such as the elderly and disabled, in independent and dignified living. To provide assistance to an individual inhabitant of an SH, activity recognition is required to identify the task that the individual is currently undertaking; it can also determine whether the individual has difficulty completing tasks.

To perform activity recognition, three important tasks are undertaken, namely activity modeling, activity monitoring, and pattern recognition. During activity modeling, suitable computational models of activities are created and presented in a format that can be automatically processed by computer systems. Existing literature provides a number of modeling approaches that fall into two main categories: data-driven and knowledge-driven activity modeling. In data-driven activity modeling [1], [2], [3], activity models are learned from pre-existing activity datasets. In knowledge-driven activity modeling [4], [5], [6], knowledge engineers and/or domain experts employ knowledge engineering techniques to specify activity models explicitly. The resulting knowledge bases capture and encode commonsense domain knowledge. The activity monitoring task captures an inhabitant’s contextual information, e.g. location, time, objects used, and previous tasks performed, which is then used to infer ongoing activities. Various monitoring techniques, such as dense sensing [7], [8], computer vision [9], [10], [11], and wearable sensors [12], [13], have been adopted for collecting contextual information. Finally, during pattern recognition, incoming sensor data are processed against the activity models to infer the ongoing activities. Analogous to activity modeling, pattern recognition can be performed through either a data-driven or a knowledge-driven approach. Data-driven approaches [13], [14], [15] use machine learning techniques, typically statistical and probabilistic methods, to process sensor data against the activity models. Conversely, knowledge-driven approaches [4], [5], [16], [17], [18] use knowledge-based inference to infer ongoing activities, typically taking the available sensor data as input and processing them against predefined explicit activity models.

While vision-based activity monitoring has been widely used in security surveillance, dense sensor based activity monitoring has gained currency in SH environments due to privacy and ethical considerations. In such environments, sensors are attached to objects in the environment (e.g. fridges, cupboards, etc.) and an inhabitant’s interactions with these objects are monitored and used to identify the ongoing activities of daily living (ADLs). A key problem in dense sensor based activity recognition, where sensors are activated along a timeline, is how to segment the sensor data so that each set of sensor interactions corresponds to exactly one activity.

Recently, the ontology-based knowledge-driven approach to activity recognition has attracted increasing attention. An ontology is essentially a formal, explicit specification of a shared conceptualization of a domain [19]. It provides a vocabulary for modeling a domain by specifying the latter’s objects and/or concepts, properties, and relationships. In this way, domain and prior knowledge can be exploited to predefine activity models, i.e., the so-called activity ontologies. Whenever sensor data are obtained, the approach determines the likely ADL by reasoning against the model through ontological inference. Nevertheless, existing work on ontology-based activity recognition [8], [9], [20] and similar work on knowledge-driven activity recognition [16], [17], [21] does not clearly articulate how sensor data are selected from a live data stream for activity inference. In some research experiments that support online continuous activity recognition, the experiments are restarted manually each time an ADL is identified. For the approach to be applicable to real-world scenarios, after an ADL is identified the recognition process must continue on fresh sensor data and decide which data to exclude as already consumed by the previously identified ADL(s). Obviously, this is not a trivial task and requires a suitable discriminating strategy. To this end, we develop a segmentation approach that exploits the temporal information associated with sensor data and the temporal characteristics of activities for real-time activity recognition. The approach addresses two important issues: ‘segmentation and aggregation’ and ‘the conditions that trigger ontological reasoning’. Segmentation breaks a sensor data stream into fragments that can be mapped to activity descriptions, while aggregation combines the finite collection of sensor data items available in a segment for activity inference.
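As a toy illustration of the knowledge-driven style of inference described above, the fragment below matches a segment of observed object interactions against explicit activity descriptions. This is a simplified Python sketch, not the authors’ ontological reasoner; the activity names and required-object sets are hypothetical.

```python
# Hypothetical, simplified activity models: each ADL is described by the
# set of object interactions its definition requires.
ADL_MODELS = {
    "MakeTea":    {"kettle", "cup", "teabag"},
    "MakeCoffee": {"kettle", "cup", "coffee"},
    "WatchTV":    {"tv_remote", "sofa"},
}

def infer_adls(observed):
    """Return the ADLs whose required objects all appear in the segment.

    A stand-in for ontological subsumption reasoning: an activity is a
    candidate when the observed interactions satisfy its description.
    """
    return sorted(name for name, required in ADL_MODELS.items()
                  if required <= observed)
```

In the actual approach, such descriptions are encoded as OWL activity ontologies and the matching is performed by description-logic reasoning rather than simple set inclusion.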

The main purpose of this work is to develop a systematic approach to dynamic sensor data segmentation for real-time continuous activity recognition. The approach can dynamically decide an appropriate set of sensor data from a live sensor data stream for real-time activity recognition. It also supports continuous segmentation and aggregation along a timeline, thus allowing ongoing activities to be recognized in real time. As a result, this paper makes the following contributions. Firstly, we propose a time window based segmentation model that is applicable to a wide range of activity recognition scenarios. Secondly, we develop mechanisms for dynamic manipulation of model parameters during activity recognition, such as setting, shrinking, and expanding the time window’s length, thus adapting the segmentation model to the way activities are performed. Thirdly, we integrate the dynamic sensor data segmentation approach into an ontology-based algorithm for real-time, continuous activity recognition. This provides a basis for implementing reusable knowledge-driven algorithms and applications for real-time activity recognition. In addition, we develop a synthetic ADL data generator that can be used to quickly generate temporally-rich synthetic ADL data for evaluating activity recognition algorithms. Finally, we evaluate the performance of the proposed model and algorithms in supporting real-time activity recognition. We believe the time window based segmentation model and associated algorithms provide a realistically scalable, reusable approach that can continuously recognize activities of different complexities in a Smart Home context. The research is based on typical ADL activities that an inhabitant can perform in the kitchen, lounge, and bathroom of a Smart Home, e.g. cooking, watching television, and showering.
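To make the time-window idea concrete, the following is a minimal illustrative sketch of a segmenter whose window can shrink and expand. It is written in Python rather than the authors’ Java, and all names, default window lengths, and scaling factors are hypothetical: a segment is emitted when an incoming sensor event falls outside the current window, the window is expanded when inference suggests an activity is still incomplete, and shrunk after a confident recognition.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SensorEvent:
    sensor_id: str    # identifier of the activated sensor (hypothetical)
    timestamp: float  # seconds since the start of the data stream

@dataclass
class TimeWindowSegmenter:
    """Illustrative dynamic time-window segmenter (not the authors' code)."""
    default_length: float = 60.0   # assumed default window length, seconds
    min_length: float = 15.0
    max_length: float = 300.0
    length: float = 60.0
    window: List[SensorEvent] = field(default_factory=list)

    def add(self, event: SensorEvent) -> List[SensorEvent]:
        """Add an event; return and clear the segment once the window closes."""
        if self.window and event.timestamp - self.window[0].timestamp > self.length:
            segment, self.window = self.window, [event]
            return segment
        self.window.append(event)
        return []

    def expand(self, factor: float = 1.5) -> None:
        """Grow the window when inference suggests the activity is incomplete."""
        self.length = min(self.length * factor, self.max_length)

    def shrink(self, factor: float = 0.5) -> None:
        """Shrink the window after an activity is recognized with confidence."""
        self.length = max(self.length * factor, self.min_length)
```

In the paper’s approach, the decisions to expand or shrink are driven by the state of ontological activity inference and by temporal knowledge about the activities themselves, rather than by fixed factors as in this sketch.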

The remainder of the paper is organized as follows. Section 2 outlines related work. Section 3 describes the proposed approach, including ontological activity modeling. Sensor data stream segmentation and analysis is described in Section 4, which includes formal time window based modeling, recognition algorithms, and mechanisms used to dynamically vary the time windows. The implementation of various components and evaluation of the approach is presented in Section 5. Finally, Section 6 summarizes the results and discusses future work.

Section snippets

Related work

In this section we first briefly review related work in activity recognition. Second, since this work is motivated by the need to segment a sensor data stream using temporal information, we also review papers that use temporal segmentation for activity recognition.

Ontological activity modeling

Ontological modeling allows the creation of logical activity models to formally conceptualize the Smart Home domain. Activity models are based on objects, environmental elements, events, and interrelationships (e.g. “is-a” and “part-of” relations) between activities. Ontological activity modeling encodes activities as ADLs and uses ontologies to represent this knowledge for use in activity recognition. The resulting activity models can be processed by an automated system, through semantic

Characterization of segmentation and recognition

A key factor in continuous real-time activity recognition is how to select the set of activated sensor data to be aggregated for activity classification. In a typical Smart Home, sensors are continuously activated, and the resulting sensor data sequence needs to be broken down into fragments that can be mapped to specific ADL activities. To segment a sensor data stream, this work considers a number of scenarios and configurations. The scenarios are divided into two main

Implementation and evaluation

We have applied the proposed approach to develop an SH-based activity recognition system. The system is implemented in Java together with a raft of semantic technologies and tools. Specifically, we developed ADL ontologies based on OWL-DL [35] using the Protégé editor [41], as shown in Fig. 7. The ADL ontology captures information about ADLs such as ADL concepts, hierarchical relationships among concepts, property restrictions for ADLs and contextual information, and sensor-related concepts.

To

Conclusion and future work

This paper presented an approach based on dynamically varied time windows to support sensor data segmentation for use in continuous, real-time activity recognition. It characterizes activity recognition and sensor data segmentation from which it formally defines a time window based segmentation model. The paper has detailed the rationale and operation algorithms of the model in the context of knowledge-driven activity recognition. In addition, different scenarios regarding dynamic manipulation

References (46)

  • X. Hong et al., Evidential fusion of sensor data for activity recognition in smart homes, Pervasive and Mobile Computing (2009)
  • L. Bao, S.S. Intille, Activity recognition from user-annotated acceleration data, in: Second International Conference,...
  • O. Brdiczka et al., Learning situation models in a smart home, IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics (2009)
  • D. Sanchez et al., Activity recognition for the smart hospital, IEEE Intelligent Systems (2008)
  • L. Chen et al., A knowledge-driven approach to activity recognition in smart homes, IEEE Transactions on Knowledge and Data Engineering (2011)
  • S. Knox, L. Coyle, S. Dobson, Using ontologies in case-based activity recognition, in: 23rd International Florida...
  • H. Storf, M. Becker, M. Riedl, Rule-based activity recognition framework: challenges, technique and learning, in: 2009...
  • D.J. Patterson, D. Fox, H. Kautz, M. Philipose, Fine-grained activity recognition by aggregating abstract object usage,...
  • N. Yamada et al., Applying ontology and probabilistic model to human activity recognition from surrounding things, Transactions of the Information Processing Society of Japan (2007)
  • U. Akdemir, P. Turaga, R. Chellappa, An ontology based approach for activity recognition from video, in: 16th ACM...
  • P. Turaga et al., Machine recognition of human activities: a survey, IEEE Transactions on Circuits and Systems for Video Technology (2008)
  • Y. Jie, C. Jian, L. Hanqing, Human activity recognition based on the blob features, in: 2009 IEEE international...
  • T. Huynh, U. Blanke, B. Schiele, Scalable recognition of daily activities with wearable sensors, in: Third...
  • E.M. Tapia, S.S. Intille, W. Haskell, K. Larson, J. Wright, A. King, R. Friedman, Real-time recognition of physical...
  • T. Huynh, B. Schiele, Unsupervised discovery of structure in activity data using multiple eigenspaces, in: 2nd...
  • J. Liao, Y. Bi, C. Nugent, Evidence fusion for activity recognition using the Dempster–Shafer theory of evidence, in:...
  • B. Bouchard et al., A smart home agent for plan recognition of cognitively-impaired patients, Journal of Computers (2006)
  • S. Chua, S. Marsland, H.W. Guesgen, Spatio-temporal and context reasoning in smart homes, in: International Conference...
  • D. Riboni, L. Pareschi, L. Radaelli, C. Bettini, Is ontology-based activity recognition really effective? in:...
  • T.R. Gruber, A translation approach to portable ontology specifications, Knowledge Acquisition (1993)
  • L. Chen et al., Ontology-based activity recognition in intelligent pervasive environments, International Journal of Web Information Systems (2009)
  • D. Lymberopoulos, A. Bamis, T. Teixeira, A. Savvides, BehaviorScope: real-time remote human monitoring using sensor...
  • L. Chen et al., Activity recognition: approaches, practices and trends
