This repository contains the implementation and experiments for the project “Activity Classification using Motion History Images (MHI)”. The project explores a lightweight approach to human activity recognition by extracting temporal features (MHI and MEI) from video frames and classifying them using traditional machine learning models such as KNN, SVM, and Random Forest. It includes scripts for feature generation, model training, evaluation, and reproduction of results discussed in the accompanying report. The experiments use the KTH human action dataset — place the extracted KTH videos under the `input_videos/` directory with one folder per class (for example `input_videos/boxing/`, `input_videos/walking/`, etc.), which is the structure expected by the scripts.
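For orientation, the MHI/MEI idea can be sketched in a few lines of NumPy. This is an illustrative sketch, not the repository's implementation: the helper names `update_mhi` and `mei_from_mhi` and the values of `tau` and `threshold` are assumptions, not the tuned parameters from the report.

```python
import numpy as np

def update_mhi(mhi, prev_frame, frame, tau=30, threshold=25):
    """Update a Motion History Image with one new grayscale frame.

    Pixels where motion is detected are set to tau (the history length);
    all other pixels decay by 1 toward 0. tau and threshold are
    illustrative values only.
    """
    # Binary motion mask from the absolute frame difference
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    motion = diff > threshold
    # Recent motion -> tau; otherwise linear decay, clamped at 0
    return np.where(motion, tau, np.maximum(mhi - 1, 0))

def mei_from_mhi(mhi):
    """Motion Energy Image: binary union of all recently moving pixels."""
    return (mhi > 0).astype(np.uint8)
```

The MHI encodes *when* motion happened (brighter = more recent), while the MEI derived from it encodes only *where* motion happened; both are then summarized into feature vectors for the classifiers.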
Follow the steps below to run the code:
- Create the conda environment using the `environment.yml` file:

  ```
  conda env create -f environment.yml
  ```

- Activate the conda environment:

  ```
  conda activate cv_proj
  ```

- Prepare the dataset. Inside the `input_videos` folder, extract the KTH dataset so that each class is in a separate folder. The folder structure should look like this:
  ```
  input_videos/
  ├── boxing
  │   ├── person01_boxing_d1_uncomp.avi
  │   ...
  ├── walking
  │   ├── person01_walking_d1_uncomp.avi
  │   ...
  ├── running
  ...
  ```

- `cd` to the `src` directory, then run the `experiment.py` script. The script will extract the features from the videos and run the experiments. The experiment images are saved in the `output_images` folder, whereas temporary data is saved in the `localdata` folder. The code runs to completion in under 3 minutes.

  ```
  cd src
  python experiment.py
  ```

The code has been tested on Windows 11 and Ubuntu 20.04. The environment file was generated using the following command:
```
conda env export --no-builds --from-history > environment.yml
```
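As a quick sanity check on the dataset layout described above, a small helper can enumerate the class folders and their videos before running the experiments. This is a hypothetical snippet, not part of the repository; `list_dataset` is an assumed name.

```python
from pathlib import Path

def list_dataset(root="input_videos"):
    """Map each class name (one sub-folder per class) to its .avi files.

    Assumes the layout shown above: input_videos/<class>/<video>.avi
    """
    root = Path(root)
    return {
        class_dir.name: sorted(class_dir.glob("*.avi"))
        for class_dir in sorted(root.iterdir())
        if class_dir.is_dir()
    }
```

If a class maps to an empty list, the corresponding KTH archive was likely not extracted into its folder.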