
RockPaperScissor-Game

Introduction

This is a model that detects rock, paper, and scissors hand signs for playing the game. The model was trained on a single GPU with around 6,000 images.

How to

This project was built with Python 3.12.2.

  1. Create a virtual environment, for example with the Anaconda command
conda create -n rps python=3.12
  2. Install the requirements
pip install -r requirements.txt
  3. Download one of the pretrained models given below and put it under the pretrained folder
  4. Run inference using your webcam
cd RockPaperScissor-Game
python -m rps_detection.infer_webcam
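The webcam inference step above can be sketched roughly as follows. This is an illustrative outline, not the repository's actual `infer_webcam` code: the weight path and class names are assumptions, and the Ultralytics `YOLO` API from requirements.txt is used for loading and prediction.

```python
CLASS_NAMES = ("paper", "rock", "scissors")  # assumed label set for this model

def run_webcam(weights="pretrained/yolov8n_rps.pt", conf=0.5):
    """Open the default webcam and draw RPS detections until 'q' is pressed.

    The weight filename is illustrative; use whichever checkpoint you
    downloaded into the pretrained folder.
    """
    import cv2
    from ultralytics import YOLO

    model = YOLO(weights)          # load the pretrained detector
    cap = cv2.VideoCapture(0)      # default webcam
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = model(frame, conf=conf, verbose=False)
        annotated = results[0].plot()  # draw boxes and class labels
        cv2.imshow("RPS", annotated)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
```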

Inference Snippet


https://youtu.be/jIjMTfd7xmA

Results

The method used is YOLOv8, with the following results:

| model   | size (pixels) | mAP50 val | mAP50-95 val | FLOPs (B) | Link     |
|---------|---------------|-----------|--------------|-----------|----------|
| YOLOv8n | 640           | 0.956     | 0.78         | 8.7       | download |
| YOLOv8x | 640           | 0.962     | 0.792        | 257.8     | download |

Note: I trained two backbone options you can choose from, the smallest (YOLOv8n) or the largest (YOLOv8x).
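Whichever backbone you pick, the detector's output is one of the same three class labels, which can feed straight into the game's win/lose logic. A hypothetical helper (not part of this repo) for judging a round from two detected labels:

```python
# Standard rock-paper-scissors rules: each sign beats exactly one other.
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def judge(player_a: str, player_b: str) -> str:
    """Return 'A', 'B', or 'draw' given each player's detected hand sign."""
    if player_a == player_b:
        return "draw"
    return "A" if BEATS[player_a] == player_b else "B"
```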

Contact

If you have any feature requests, please open a GitHub issue.

More

Hopefully this pretrained model benefits those who need it. If this project helps you, please give me a high five by assigning it a STAR.
