This project is an advanced anti-cheating system designed to monitor users during online exams or proctored sessions. It leverages computer vision and machine learning to detect suspicious behaviors such as looking away, turning the head, using hands, having multiple faces in the frame, or using unauthorized devices (e.g., phones, laptops). The system captures photos of suspicious activities with detailed reasons, ensuring secure and fair monitoring.
- Gaze Detection: Tracks eye movements to detect if the user is looking away from the center (e.g., left, right, up, down).
- Head Orientation Detection: Monitors head movements to identify if the user is turning away (e.g., left, right, up, down).
- Hand Detection: Detects hand presence, which might indicate writing or device usage.
- Multiple Faces Detection: Identifies if more than one person is in the frame, suggesting potential collaboration.
- Device Detection: Uses YOLOv5 to detect unauthorized devices like phones, laptops, or tablets.
- Identity Verification: Ensures only the authorized user is present by matching their face against a reference image.
- Photo Capture: Captures photos with timestamps and reasons when suspicious behavior is detected for more than 3 seconds.
- Calibration: Includes calibration steps for gaze and head orientation to adapt to individual users.
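The calibration and detection features above share a common pattern: record a short baseline while the user looks straight at the camera, then flag frames whose head angles deviate from that baseline by more than a threshold. A minimal sketch of that logic, with illustrative assumptions (the 15-degree threshold and the yaw/pitch representation are placeholders, not the project's actual parameters):

```python
import numpy as np

def calibrate_baseline(samples):
    """Average (yaw, pitch) samples collected while the user looks straight ahead."""
    return np.mean(np.asarray(samples, dtype=float), axis=0)

def detect_deviation(baseline, current, threshold=15.0):
    """Return a human-readable reason if the current head angles deviate
    from the calibrated baseline by more than `threshold` degrees."""
    dyaw, dpitch = np.asarray(current, dtype=float) - baseline
    if dyaw > threshold:
        return "looking right"
    if dyaw < -threshold:
        return "looking left"
    if dpitch > threshold:
        return "looking up"
    if dpitch < -threshold:
        return "looking down"
    return None

# Example: baseline from five calibration frames, then one deviating frame
baseline = calibrate_baseline([(0.5, -0.2), (0.1, 0.0), (-0.3, 0.1), (0.2, -0.1), (0.0, 0.2)])
print(detect_deviation(baseline, (20.0, 0.0)))  # prints "looking right"
```

In the actual script the angles come from Mediapipe face-mesh landmarks (for head orientation) and from the pre-trained TensorFlow model (for gaze), but the baseline-then-threshold flow is the same.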
- Python: Core programming language for the project.
- Mediapipe: For face mesh and hand landmark detection.
- YOLOv5 (Ultralytics): For device detection (e.g., phones, laptops).
- TensorFlow/Keras: For the pre-trained gaze detection model.
- face_recognition: For identity verification and multiple faces detection.
- OpenCV (cv2): For video capture, image processing, and visualization.
- NumPy: For numerical computations and feature processing.
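For device detection, YOLOv5 returns one row per detection with a class name and a confidence score; the system only needs the subset of classes it treats as unauthorized. A hedged sketch of that filtering step (the class names follow the COCO labels YOLOv5 ships with; the exact banned set and the 0.5 confidence cutoff are illustrative assumptions):

```python
# COCO class names the system could treat as unauthorized devices (assumption)
BANNED_DEVICES = {"cell phone", "laptop", "tv", "remote", "keyboard"}

def suspicious_devices(detections, min_confidence=0.5):
    """Filter YOLOv5-style (name, confidence) detections down to
    banned devices at or above the confidence cutoff."""
    return [
        (name, conf)
        for name, conf in detections
        if name in BANNED_DEVICES and conf >= min_confidence
    ]

# Example with mock detections; in practice these come from the YOLOv5 results object
dets = [("person", 0.92), ("cell phone", 0.81), ("laptop", 0.33)]
print(suspicious_devices(dets))  # prints [('cell phone', 0.81)]
```

The low-confidence laptop is dropped, so a brief misdetection does not immediately trigger a capture.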
- anti_cheating.py: Main script that runs the anti-cheating system.
- gaze_model_with_head.h5: Pre-trained TensorFlow model for gaze detection.
- scaler.pkl: Scaler for normalizing gaze detection features.
- label_encoder.pkl: Label encoder for mapping gaze predictions to directions.
- yolov5s.pt: YOLOv5 small model for device detection (download from Ultralytics).
- suspicious_photos/: Directory where photos of suspicious behavior are saved (created automatically).
- reference.jpg: Reference image for identity verification (user-provided).
- Python 3.8 or higher
- A webcam for video input
- A reference image (reference.jpg) of the authorized user for identity verification
- Set Up a Virtual Environment (Optional but Recommended):
python -m venv venv_tf
source venv_tf/bin/activate  # On Windows: venv_tf\Scripts\activate
- Install Dependencies:
pip install opencv-python mediapipe numpy tensorflow face_recognition ultralytics
- On macOS, install additional build dependencies for dlib if needed (on Linux, install cmake and OpenBLAS through your distribution's package manager):
brew install cmake
brew install libopenblas
pip install dlib
- Prepare the Reference Image:
- Place a clear photo of the authorized user named reference.jpg in the project directory.
- Activate the Virtual Environment (if created):
source venv_tf/bin/activate # On Windows: venv_tf\Scripts\activate
- Run the Script:
python anti_cheating.py
- Follow the Prompts:
- Identity Verification: Enter the path to reference.jpg when prompted and look at the camera for 5 seconds to verify your identity.
- Head Orientation Calibration: Look straight at the center of the camera for 5 seconds to calibrate head orientation.
- Gaze Detection Calibration: Look straight at the center of the camera for another 5 seconds to calibrate gaze detection.
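Under the hood, face_recognition represents each face as a 128-dimensional encoding and treats two faces as the same person when the Euclidean distance between encodings falls below a tolerance (0.6 by default). The verification step can be sketched as follows; the encodings here are mock vectors, whereas the real script would obtain them from face_recognition.face_encodings:

```python
import numpy as np

def is_same_person(reference_encoding, live_encoding, tolerance=0.6):
    """Match two 128-d face encodings the way face_recognition.compare_faces does:
    Euclidean distance at or below the tolerance counts as the same person."""
    distance = np.linalg.norm(np.asarray(reference_encoding) - np.asarray(live_encoding))
    return distance <= tolerance

# Mock encodings: an identical vector trivially matches, a distant one does not
ref = np.zeros(128)
print(is_same_person(ref, np.zeros(128)))      # prints True
print(is_same_person(ref, np.full(128, 0.1)))  # distance ~1.13, prints False
```

During the 5-second verification window the script can run this check on each frame and accept the user once a match is found.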
- Monitoring Phase:
- The system will monitor your behavior in real time.
- Photos will be saved in the suspicious_photos/ directory if suspicious behavior (e.g., looking away, hand movement, device detection) is detected for more than 3 seconds.
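The capture rule above — save a photo only after a suspicious condition has persisted for more than 3 seconds — can be sketched as a small timer keyed by the reason string. The directory name matches the project layout; the filename pattern is an illustrative assumption:

```python
import os
import time

PHOTO_DIR = "suspicious_photos"
HOLD_SECONDS = 3.0

_start_times = {}  # reason -> time the condition was first seen

def should_capture(reason, now=None):
    """Return True once `reason` has been continuously active for HOLD_SECONDS."""
    now = time.time() if now is None else now
    started = _start_times.setdefault(reason, now)
    return now - started > HOLD_SECONDS

def clear(reason):
    """Reset the timer when the condition stops."""
    _start_times.pop(reason, None)

def photo_path(reason, now=None):
    """Build a timestamped filename that encodes the reason,
    e.g. suspicious_photos/looking_left_1700000000.jpg"""
    os.makedirs(PHOTO_DIR, exist_ok=True)
    now = time.time() if now is None else now
    return os.path.join(PHOTO_DIR, f"{reason.replace(' ', '_')}_{int(now)}.jpg")

# Example: a condition first seen at t=0 is only captured once 3 seconds pass
print(should_capture("looking left", now=0.0))  # prints False (just started)
print(should_capture("looking left", now=3.5))  # prints True
```

In the monitoring loop, a True result would trigger something like cv2.imwrite(photo_path(reason), frame), and clear(reason) would run whenever the behavior returns to normal.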