This project analyzes videos using common OpenCV techniques: color masking (e.g. isolate red/orange objects), frame differencing (movement vs. a reference frame), and blob detection (locate simple objects like a ball and get their coordinates). It reads a video from AWS S3, runs these three analyses in parallel, and writes three annotated output videos back to S3 (mask-only, difference, and keypoints with a circle around the detected object). It can run locally or as an AWS Lambda function triggered when a new video is uploaded to S3, with optional transcoding via Elastic Transcoder.
Part of a startup project at 4Geeks for image identification and video analysis. The implementation and concepts were documented in the 4Geeks blog: Analyzing Videos With Python.
Video analysis is used in security, sports, cinema, transport, health, and home automation. This code focuses on:
- Movement detection — What changed between the first frame and the rest (frame differencing).
- Color-based filtering — Isolate objects by color (HSV mask) and ignore the rest.
- Object detection and position — Find blob-like objects (e.g. a ball), get (x, y) and size, and draw them on the video.
Defining what you want to analyze (detect vs. track, how many objects, entry/exit rules) drives how you implement it; this project demonstrates a simple pipeline for detection and masking.
- Mask: Convert each frame to HSV, apply `cv2.inRange` with lower/upper bounds (e.g. `[0, 130, 75]`–`[255, 255, 255]` for red), then `cv2.bitwise_and` so only the colored region is visible.
- Frame difference: Store the first frame, then compute `cv2.absdiff(first_frame, frame)` for each subsequent frame to highlight what moved (e.g. a ball against a static background).
- Blob detection: `cv2.SimpleBlobDetector` with a color filter (`blobColor = 255` to match light blobs). The detector returns keypoints (position and size); the code draws a circle around the first keypoint and writes a "keypoints" video.
- Pipeline: One video is read from an S3 URL; three `cv2.VideoWriter`s produce `red_*`, `diff_*`, and `keypoints_*` outputs. These are uploaded to S3 and optionally sent to AWS Elastic Transcoder for transcoding.
Local
- Create a virtual environment and install dependencies: `pip install -r requirements.txt`
- AWS credentials: set `ACCESS_KEY_ID` and `SECRET_ACCESS_KEY` in the environment, or use a local `settings` module with those variables.
- Run the analyzer (example): `python analyzer.py`

The default in `__main__` uses a fixed S3 bucket and key (`blog4geeks`, `original/vid_test.mp4`); change those in `analyzer.py` for your bucket and object key.
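If you prefer the local `settings` module over environment variables, a minimal sketch looks like this (the variable names follow this README; the values are placeholders, never commit real keys):

```python
# settings.py -- local credentials module, an alternative to env vars.
ACCESS_KEY_ID = "YOUR_ACCESS_KEY_ID"
SECRET_ACCESS_KEY = "YOUR_SECRET_ACCESS_KEY"
```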
AWS Lambda
- Package: `./make.sh` (or `zip analyzer.zip analyzer.py lambda_function.py`).
- Deploy the zip as a Lambda function and set the handler to `lambda_function.lambda_handler`.
- Configure an S3 trigger so that when a video is uploaded to the bucket, the event passes the `bucket` and object `key` to the handler; the handler creates a `Detector` and runs `detect_movement()`.
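The handler wiring can be sketched as follows. In the real package `Detector` comes from `analyzer.py`; here a stub stands in so the sketch is self-contained, and the `Detector(bucket, key)` constructor signature is an assumption. The event parsing follows the standard S3 put-event shape:

```python
class Detector:
    """Stand-in for analyzer.Detector (constructor signature assumed)."""
    def __init__(self, bucket, key):
        self.bucket, self.key = bucket, key

    def detect_movement(self):
        # Real implementation reads the video from S3, runs the three
        # analyses, and uploads the annotated outputs.
        pass


def lambda_handler(event, context):
    # Standard S3 event: one record per uploaded object.
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]
    Detector(bucket, key).detect_movement()
    return {"bucket": bucket, "key": key}


# Example invocation with an event trimmed to the fields used above.
event = {"Records": [{"s3": {"bucket": {"name": "blog4geeks"},
                             "object": {"key": "original/vid_test.mp4"}}}]}
result = lambda_handler(event, None)
```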
Dependencies: Python 3, NumPy, OpenCV (`opencv-python`, `opencv-contrib-python`). For Lambda, include these in the deployment package (or use a layer). For AWS features: `boto3` (add it to `requirements.txt` if missing).
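Taken together, a `requirements.txt` covering the above would look like this (versions unpinned here; pin them for reproducible Lambda packages):

```
numpy
opencv-python
opencv-contrib-python
boto3
```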
Python, OpenCV (cv2), NumPy, AWS (S3, Lambda, Elastic Transcoder), boto3.
Startup project at 4Geeks for image identification and video analysis. See Analyzing Videos With Python for the accompanying blog post.