Experience the power of state-of-the-art video object detection with YOLO's user-friendly commands. This article walks you through the process, from installation to execution, in just three simple steps.
Video Object Detection Made Easy
Object detection in videos has become an essential task in various applications, from surveillance to autonomous driving. Thanks to advancements in deep learning and the powerful libraries available, what once required extensive code and computational resources can now be accomplished with minimal effort. In this article, I’ll demonstrate how to perform video object detection using the YOLO (You Only Look Once) model with just three lines of code. This is especially useful if you are a beginner and want to explore the capabilities of computer vision technology. Let’s dive into the process!
Step 1: Install the YOLO Library
First, we need to install the ultralytics package, which provides the tools necessary for using YOLO models. This package simplifies the implementation of YOLO, letting you focus on the application rather than the underlying complexities. I ran this project in Google Colab, but you can use any environment you prefer.
!pip install ultralytics -q
The -q flag stands for "quiet mode"; it suppresses detailed installation logs, keeping the notebook output clean. (The leading ! tells Colab to run the line as a shell command.)
Step 2: Run YOLO to Detect Objects in a Video
Next, we use the YOLO model to detect objects in a specified video file. The following command processes the video frame by frame and outputs a new video with the detections highlighted. Note that processing time scales with the number of frames, so if you just want to test the project, I suggest using a video shorter than 20 or 30 seconds.
!yolo detect predict model=yolov8m.pt source='/content/drive/MyDrive/ComputerVision/project_videos/7515858-hd_1080_1920_30fps.mp4'
Here,
model=yolov8m.pt: Specifies the pre-trained YOLOv8 medium model.
source='/content/drive/MyDrive/ComputerVision/project_videos/7515858-hd_1080_1920_30fps.mp4': Points to the input video file stored in Google Drive.
You can use your own recorded video as input or download a free one from the internet. I downloaded the video for this project from Pexels; credit to the RDNE Stock project for the footage.
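If you prefer to stay in Python rather than use the CLI command above, the same detection can be run through the ultralytics Python API. The sketch below is a minimal example, assuming the ultralytics package from Step 1 is installed; the function name detect_objects and the video path argument are my own placeholders.

```python
def detect_objects(video_path: str, weights: str = "yolov8m.pt"):
    """Run YOLOv8 detection on a video file.

    With save=True, the annotated video is written under runs/detect/predict/,
    matching the output location used by the CLI command above.
    """
    from ultralytics import YOLO  # lazy import: assumes ultralytics is installed

    model = YOLO(weights)  # downloads the pre-trained weights on first use
    return model.predict(source=video_path, save=True)
```

You would call it with your own file, for example detect_objects('/content/drive/MyDrive/ComputerVision/project_videos/7515858-hd_1080_1920_30fps.mp4').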
Step 3: Convert the Output Video Format
Finally, we convert the format of the output video using ffmpeg. YOLO saves its result as an AVI file, which many browsers and players cannot display; re-encoding it to MP4 (H.264) produces a widely supported file that is easy to share and view.
!ffmpeg -i '/content/runs/detect/predict/7515858-hd_1080_1920_30fps.avi' -vcodec libx264 final.mp4
Here,
-i '/content/runs/detect/predict/7515858-hd_1080_1920_30fps.avi': Specifies the input video file generated by YOLO.
-vcodec libx264 final.mp4: Encodes the video using the H.264 codec and saves it as final.mp4.
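The same conversion can also be issued from Python with the standard-library subprocess module, which keeps the whole workflow in one language. This is a sketch under the assumption that ffmpeg is on the PATH (it is preinstalled on Google Colab); the helper names are my own.

```python
import subprocess

def build_ffmpeg_command(avi_path: str, out_path: str = "final.mp4") -> list[str]:
    # Mirror the shell command: ffmpeg -i <input>.avi -vcodec libx264 final.mp4
    return ["ffmpeg", "-i", avi_path, "-vcodec", "libx264", out_path]

def convert_to_mp4(avi_path: str, out_path: str = "final.mp4") -> None:
    # check=True raises CalledProcessError if ffmpeg exits with an error
    subprocess.run(build_ffmpeg_command(avi_path, out_path), check=True)
```

For example, convert_to_mp4('/content/runs/detect/predict/7515858-hd_1080_1920_30fps.avi') would produce final.mp4 in the current directory, just like the shell command above.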
Complete Code
Here's the complete code to perform video object detection using the YOLO model in just three lines:
!pip install ultralytics -q
!yolo detect predict model=yolov8m.pt source='/content/drive/MyDrive/ComputerVision/project_videos/7515858-hd_1080_1920_30fps.mp4'
!ffmpeg -i '/content/runs/detect/predict/7515858-hd_1080_1920_30fps.avi' -vcodec libx264 final.mp4
You can find the full code and additional resources on my GitHub profile. The repository includes detailed instructions and a demo video to help you get started with video object detection using YOLO.
Conclusion
With just three lines of code, we’ve successfully performed video object detection using the YOLO model. This streamlined approach demonstrates the power and simplicity of modern deep learning tools, making advanced computer vision techniques accessible to everyone.