Master's Thesis on Human Activity Segmentation for ADL Classification in SCI Individuals

This project investigates how Human Activity Segmentation (HAS) can be integrated into ADL classification, replacing the conventional fixed-size sliding-window approach to time-series segmentation.
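For reference, the conventional baseline that HAS replaces can be sketched as a fixed-size sliding window over a time series (a minimal illustration, not code from this repository):

```python
import numpy as np

def sliding_windows(signal: np.ndarray, size: int, step: int) -> list:
    """Split a time series into fixed-size (possibly overlapping) windows.

    Window boundaries are placed blindly, regardless of where activities
    actually change -- the limitation that HAS addresses.
    """
    return [signal[start:start + size]
            for start in range(0, len(signal) - size + 1, step)]

windows = sliding_windows(np.arange(10), size=4, step=2)
# windows start at indices 0, 2, 4, 6
```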

Features

  • Data synchronization & anonymization (data_alignment/)
  • Label completeness analysis (label_completness_analysis/)
  • Human activity segmentation (HAS/)
    • HAS analysis on two public datasets, Kaggle and HAR-Reyes (HAS/HAS_public_datasets.py)
    • Comparison of stacked vs. original dataset (HAS/stacked_sensei_analysis/)
    • HAS analysis of the stacked OutSense dataset (HAS/HAS_Outsense.py)
  • Human activity recognition (HAR) / ADL classification (HAS/HAR/)

Installation

The packages required to run HAR are not compatible with the packages needed for the HAS methods ClaSP and FLUSS. Since only the Ruptures library is used for segmentation in the HAR pipeline, this is not a problem, but separate environments are required depending on whether you want to reproduce the results of the HAS benchmark (including the ClaSP and FLUSS methods) or run the HAR pipeline (with Ruptures).

1. Clone the repository:

git clone https://github.com/SCAI-Lab/andre_muff_master_thesis_adl_segmentation

To run the HAR pipeline

# 2. create a virtual env with micromamba using Python 3.10.19
micromamba create -n har_env python=3.10.19

# 3. activate the virtual env
micromamba activate har_env

# 4. install packages from the requirements_HAR.txt file
pip install -r Master_thesis/requirements_HAR.txt

To run the HAS benchmark

# 2. create a virtual env with micromamba using Python 3.9.18
micromamba create -n has_env python=3.9.18

# 3. activate the virtual env
micromamba activate has_env

# 4. install packages from the requirements_HAS.txt file
pip install -r Master_thesis/requirements_HAS.txt

# 5. to run the benchmark on the Kaggle dataset, download it with:
cd HAS
git clone https://github.com/patrickzib/human_activity_segmentation_challenge

This creates a folder inside HAS/ containing the dataset and its loading functionality. See Usage -> 1. Public dataset for how to run the benchmark.

Usage

The main features of the code are described below:

Human activity segmentation (benchmark):

  1. Public dataset: To run the benchmark on the public datasets with all methods except the supervised EventDetector, run "HAS/HAS_public_datasets.py". Inside the file you can specify which dataset to use and which methods to test. More details are explained inside the .py file.

  2. OutSense dataset: To run the benchmark on the OutSense data (except the EventDetector method), run "HAS/HAS_Outsense.py". The preprocessing of the OutSense data is located in "HAS/preprocessing_synced_data.py". Inside the file you can specify which subjects and which methods to test (the default is all methods and all subjects, which takes several hours to process).

  3. Event Detector: To run the EventDetector method, run "HAS/event_detector/EventDetector.py" and select the desired dataset.

  4. All the code for the analysis of the stacked SENSEI is found in "HAS/stacked_sensei_analysis/". The file "HAS/stacked_sensei_analysis/sensei_stacked.py" generates different stacking methods (different combinations of filter types and filter window sizes). These results are compared against those from "HAS/HAS_original_SENSEI", which produces the results for the original (unstacked) SENSEI data. The comparison is visualized in "HAS/stacked_sensei_analysis/comparsion_stacked_vs_sensei.ipynb".
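To illustrate what one "filter type / window size" combination means in the stacking sweep above, a moving-average filter applied to a single sensor channel could look like this (a hypothetical sketch; the actual stacking code lives in HAS/stacked_sensei_analysis/sensei_stacked.py):

```python
import numpy as np

def moving_average(channel: np.ndarray, window: int) -> np.ndarray:
    """Smooth one sensor channel with a moving-average filter.

    sensei_stacked.py sweeps many (filter type, window size) combinations;
    this shows a single one: a uniform kernel of the given window size.
    """
    kernel = np.ones(window) / window
    return np.convolve(channel, kernel, mode="same")

smoothed = moving_average(np.array([0.0, 0.0, 3.0, 0.0, 0.0]), window=3)
# the isolated spike of 3 is spread into [0, 1, 1, 1, 0]
```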

Human Activity Recognition:

All the functionality is implemented in "HAS/HAR/HAR.ipynb". This file can be used to generate the HAR results for the sliding window, HAS (using PELT Multivariate), or the optimal segmentation (ground-truth segments). Make sure to activate the correct virtual env (har_env) to run this code.

Project Structure

The project was originally run on a server with the following file structure:

/home/muff_an/
├── micromamba/envs
└── Master_thesis/
    ├── data_alignment/
    ├── HAS/
    ├── label_completness_analysis/
    ├── parameters/
    ├── public_dataset/
    ├── README.md
    └── Requierements.txt

Make sure to use the same folder structure, as some paths are defined absolutely and will not automatically adapt to a different naming scheme!

Configuration

Parameters are defined in the /parameters folder:

Activity_Mapping_v2: Stores the mapping between the labeled ADLs and the ADL categories used in our study.

Final_Labels_corrected: A backup of the labels created from video data; these labels were anonymized and stored on the server. In the code, the labels are loaded automatically from the server.

Label_corrections: A list of bad labels, created based on a visual analysis of the data.

subject_id_mapping: The mapping from original to anonymized subject names.

Sync_Events_Times: A list of the ground-truth synchronization events.

Sync_Parameters_andre: A list of time shifts per sensor per subject. These parameters were determined manually based on a visual comparison of the ground-truth sync events and the events appearing in the sensor data.

SyncTimesVideos: A list derived from the camera sensor, storing the time shift and drift between the video time (from the camera metadata) and the clock visible in the video.

About

A pipeline for ADL data stacking and ADL segmentation of the OutSense dataset.
