Browsing by Subject "Activity recognition"
Now showing 1 - 5 of 5
Item  Automating Cloud Services Lifecycle through Semantic Technologies (IEEE, 2014-01-01)
Joshi, Karuna Pande; Yesha, Yelena; Finin, Tim
Managing virtualized services efficiently over the cloud is an open challenge. Traditional models of software development are not appropriate for the cloud computing domain, where software (and other) services are acquired on demand. In this paper, we describe a new integrated methodology for the lifecycle of IT services delivered on the cloud, and demonstrate how it can be used to represent and reason about services and service requirements, and so automate service acquisition and consumption from the cloud. We have divided the IT service lifecycle into five phases: requirements, discovery, negotiation, composition, and consumption. We detail each phase and describe the ontologies that we have developed to represent the concepts and relationships for each phase. To show how this lifecycle can automate the usage of cloud services, we describe a cloud storage prototype that we have developed. This methodology complements previous work on ontologies for service descriptions in that it is focused on supporting negotiation for the particulars of a service and going beyond simple matchmaking.

Item  Context-Aware Multi-Inhabitant Functional and Physiological Health Assessment in Smart Home Environment (2017-01-01)
Alam, Mohammad Arif Ul; Roy, Nirmalya; Information Systems
Recognizing human activity, behavior, and physiological symptoms in smart home environments is of utmost importance for the functional, physiological, and cognitive health assessment of older adults.
Unprecedented data from everyday devices such as smart wristbands, smart ornaments, smartphones, and ambient sensors provide opportunities for activity mining and inference, but pose fundamental research challenges in data processing, physiological feature extraction, activity labeling, and learning and inference in the presence of multiple inhabitants. In this thesis, we develop micro-activity-driven macro-activity recognition approaches that consider the underpinning spatiotemporal constraints and correlations across multiple inhabitants. We postulate an activity recognition framework that helps recognize unseen activities by exploiting the underlying taxonomical structure. We also design novel signal processing and machine learning algorithms to detect fine-grained physiological symptoms such as stress, depression, and agitation. We combine these activity recognition methodologies with the physiological health assessment approaches to quantify the functional, behavioral, and cognitive health of older adults. We collected data from a continuing care retirement community center using our smart home sensor setup. Finally, we evaluate, compare, and benchmark our proposed computational approaches against the clinical tools used extensively in practice for functional and cognitive health assessment.

Item  Cross-Domain Scalable Activity Recognition Models in Smart Environments (2019-01-01)
Khan, Md Abdullah Al Hafiz; Roy, Nirmalya; Information Systems
The success of Activity Recognition (AR) methodology largely depends on the availability of labeled training samples and on the adaptability of activity recognition models across domains such as diverse users, heterogeneous devices, and different smart environments. The availability of a new era of Internet-of-Things (IoT) devices, ranging from smartphones and smartwatches to micro-radars and the Amazon Echo, in users' everyday environments eases the recognition of human activities, behaviors, and occupancy.
Nevertheless, variabilities across emerging sensors, heterogeneities in consumer devices, and inherent variations in users' activities hinder the design and development of scalable activity recognition models. Motivated by this, in this thesis we investigate the problem of making human activity recognition scalable, i.e., allowing AR classifiers trained in one context to be readily adapted to a different contextual domain. To allow such adaptation without requiring the onerous step of collecting large volumes of labeled training data, we propose a transfer learning model that is specifically tuned to the properties of convolutional neural networks (CNNs). We design different variants of this Heterogeneous Deep Convolutional Neural Network (HDCNN) model that help the model automatically adapt and learn across different domains, such as different users, device types, and device instances, in the presence of completely or partially overlapping activities in the source and target domains. We also extend the above cross-domain activity recognition models to learn unseen activities using a deep-feature transfer learning technique while aggregating domain knowledge from both the source and target domains. Evaluation on real-world datasets attests that our proposed cross-domain activity recognition models achieve high accuracy even without any labeled training data in the target domain, and often offer higher accuracy (compared to shallow and deep classifiers) even with a modest amount of labeled training data.

Item  Scaling Human Activity Recognition via Deep Learning-based Domain Adaptation (IEEE, 2018-08-23)
Khan, Md Abdullah Al Hafiz; Roy, Nirmalya; Misra, Archan
We investigate the problem of making human activity recognition (AR) scalable, i.e., allowing AR classifiers trained in one context to be readily adapted to a different contextual domain.
This is important because AR technologies can achieve high accuracy if the classifiers are trained for a specific individual or device, but show significant degradation when the same classifier is applied in a different context, e.g., to a different device located at a different on-body position. To allow such adaptation without requiring the onerous step of collecting large volumes of labeled training data in the target domain, we propose a transductive transfer learning model that is specifically tuned to the properties of convolutional neural networks (CNNs). Our model, called HDCNN, assumes that the relative distribution of weights in the different CNN layers will remain invariant, as long as the set of activities being monitored does not change. Evaluation on real-world data shows that HDCNN is able to achieve high accuracy even without any labeled training data in the target domain, and offers even higher accuracy (significantly outperforming competitive shallow and deep classifiers) when even a modest amount of labeled training data is available.

Item  UnTran: Recognizing Unseen Activities with Unlabeled Data using Transfer Learning (IEEE, 2018)
Khan, Md Abdullah Al Hafiz; Roy, Nirmalya
The success and impact of activity recognition algorithms largely depend on the availability of labeled training samples and the adaptability of activity recognition models across various domains. In a new environment, pre-trained activity recognition models face challenges in the presence of sensing biases, device heterogeneities, and inherent variabilities in human behaviors and activities. An Activity Recognition (AR) system built in one environment does not scale well to another if it has to learn new activities and annotated activity samples are scarce. Indeed, building a new activity recognition model and training it with a large set of annotated samples often helps overcome this challenging problem.
However, collecting annotated samples is costly, and learning an activity model in the wild is computationally expensive. In this work, we propose an activity recognition framework, UnTran, that utilizes a pre-trained, autoencoder-enabled activity model from the source domain and transfers two layers of this network to generate a common feature space for both source- and target-domain activities. We postulate a hybrid AR framework that fuses the decisions from a trained model in the source domain with two activity models (a raw and a deep-feature-based activity model) in the target domain, reducing the demand for annotated activity samples needed to recognize unseen activities. We evaluated our framework on three real-world data traces comprising 41 users and 26 activities in total. Our proposed UnTran AR framework achieves ≈75% F1 score in recognizing unseen new activities using only 10% labeled activity data in the target domain. UnTran attains ≈98% F1 score while recognizing seen activities in the presence of only 2-3% of labeled activity samples.
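The common thread in the UnTran and HDCNN abstracts above is layer transfer: reuse the lower layers of a source-domain network as a frozen feature extractor, and fit only a small classifier head on the scarce labeled data in the target domain. The following is a minimal NumPy sketch of that idea only; the layer sizes, toy data, and softmax head are all hypothetical stand-ins for the real CNN/autoencoder models described in these works, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pre-trained source-domain encoder: two frozen layers,
# standing in for the lower network layers that layer transfer reuses
# to build a common feature space across domains.
W1 = rng.normal(size=(6, 16))   # raw sensor features -> hidden
W2 = rng.normal(size=(16, 8))   # hidden -> shared feature space

def encode(x):
    """Frozen feature extractor: forward passes only, no weight updates."""
    h = np.tanh(x @ W1)
    return np.tanh(h @ W2)

# Target domain: a small labeled set (the "modest amount" of labels).
X_target = rng.normal(size=(20, 6))
y_target = rng.integers(0, 3, size=20)   # 3 hypothetical activity classes

# Train only a lightweight softmax head on the shared features;
# W1 and W2 stay fixed throughout.
W_head = np.zeros((8, 3))
feats = encode(X_target)
for _ in range(200):
    logits = feats @ W_head
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    p[np.arange(20), y_target] -= 1.0        # softmax cross-entropy gradient
    W_head -= 0.1 * feats.T @ p / 20

pred = (encode(X_target) @ W_head).argmax(axis=1)
print("training accuracy:", (pred == y_target).mean())
```

Because only the small head is trained, a handful of target-domain labels can suffice, which is the intuition behind the reported gains from even modest labeled data.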