To train the models,
- Clone the repository
- Create a folder called datasets in the root directory of the repository
- To train on the FER2013 dataset, create a folder called fer2013 inside the datasets folder and place the dataset files there
- To train on the CK+ dataset, create a folder called CKPlus inside the datasets folder and place the dataset files there
- Finally, run the train_emotion_classifer.py file (double-check the paths set in the code)
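The dataset layout described above can be sketched in Python (the folder names come from the steps; the repository root here is simulated with a temporary directory for illustration, and the actual paths should match those configured in train_emotion_classifer.py):

```python
import os
import tempfile

# A temporary directory stands in for the cloned repository root;
# in practice, run this layout inside the actual repository.
repo_root = tempfile.mkdtemp()

# datasets/fer2013 holds the FER2013 files, datasets/CKPlus the CK+ files
for name in ("fer2013", "CKPlus"):
    os.makedirs(os.path.join(repo_root, "datasets", name), exist_ok=True)
```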
To see a demo,
- Give the name of the model to load in the demo.py file
- Place the image to be tested in the images directory
- Run the demo.py file
- A new file called 'predicted_result' will be generated containing the prediction
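As a rough sketch of the demo's final step (the placeholder label and file handling here are assumptions; demo.py's actual logic may differ), the prediction ends up in a plain-text file named 'predicted_result':

```python
# Hypothetical sketch: the prediction is written to 'predicted_result'.
# "happy" is a placeholder standing in for the model's output label.
prediction = "happy"

with open("predicted_result", "w") as f:
    f.write(prediction)

# Reading the file back gives the predicted label
with open("predicted_result") as f:
    result = f.read()
```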
To generate key-points,
- Place all images for which keypoints have to be generated in a folder
- Create a folder where the keypoint-annotated images will be saved
- Set the path variables in the keypointgenerator.py file to these folders
- Run the keypointgenerator.py file
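The path setup for the keypoint step might look like the following sketch (the variable names input_dir and output_dir are assumptions; check keypointgenerator.py for the actual names, and replace the temporary directory with your real folders):

```python
import os
import tempfile

# Assumed path variables; keypointgenerator.py may use different names.
base = tempfile.mkdtemp()
input_dir = os.path.join(base, "images_in")    # images needing keypoints
output_dir = os.path.join(base, "images_out")  # keypoint-annotated output

os.makedirs(input_dir, exist_ok=True)
os.makedirs(output_dir, exist_ok=True)

# Collect the image files the generator would iterate over
image_files = [f for f in os.listdir(input_dir)
               if f.lower().endswith((".png", ".jpg", ".jpeg"))]
```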
Datasets,
- CK and CK+: http://www.pitt.edu/~emotion/ck-spread.htm
- FER2013: https://www.kaggle.com/c/challenges-in-representation-learning-facial-expression-recognition-challenge/data
- FER-plus: https://github.com/Microsoft/FERPlus

We have also created one-hot encodings for all the ground-truth labels in the dataset directory; use them if needed.
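One-hot encoding of the ground-truth labels can be illustrated as follows (the emotion list uses the usual FER2013 label order, which is an assumption here; the provided encoding files may order labels differently):

```python
# FER2013 emotion labels (order assumed; verify against the dataset files)
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def one_hot(label):
    """Return a one-hot vector with a 1 at the label's index."""
    vec = [0] * len(EMOTIONS)
    vec[EMOTIONS.index(label)] = 1
    return vec
```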
Environment Setup
- Install Miniconda or Anaconda
- In the command prompt, type conda env create -f environment.yml
- This will create the environment with all the dependencies needed for the project
- Activate the environment by typing conda activate gpu_env in the command prompt
- Deactivate the environment by typing conda deactivate
Acknowledgements,
We sincerely thank Octavio Arriaga et al. for their code, published on GitHub at https://github.com/oarriaga/face_classification