🎯 Custom YOLOv8 Object Detection Pipeline

End-to-End Deep Learning for Real-World Applications

A complete object detection solution using Ultralytics YOLOv8 with custom dataset support, training, and evaluation.



πŸ“œ Project Overview

This pipeline provides a complete workflow for custom object detection:

  1. Dataset Collection: Smartphone-captured images
  2. Annotation: Label Studio for precise bounding boxes
  3. Preparation: Auto-conversion to YOLO format
  4. Training: Configurable YOLOv8 model training
  5. Evaluation: Comprehensive metrics (mAP, PR curves)
  6. Deployment: Ready-to-use model output
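
For orientation, the six steps above reduce to a handful of calls to the Ultralytics Python API. The sketch below is illustrative only: it assumes the prepared dataset config lives at data/data.yaml (created in the Dataset Preparation step), while the repository's training/train.py and evaluate.py provide the configurable versions of the same flow.

# Minimal end-to-end sketch (illustrative; see training/train.py and evaluate.py for the real scripts)
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                                 # start from a pretrained checkpoint
model.train(data="data/data.yaml", epochs=50, imgsz=640)   # steps 3-4: train on the custom dataset
metrics = model.val()                                      # step 5: mAP, PR curves, confusion matrix
model.predict("test.jpg", save=True)                       # step 6: run the trained model on a new image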

πŸ“Έ Sample Outputs

Sample Image


🌟 Key Features

  • πŸ“Έ Custom Dataset Support - Use your own images
  • 🏷️ Label Studio Integration - Streamlined annotation
  • βš™️ Configurable Training - Edit config.yaml for different models
  • πŸ“Š Visual Evaluation - PR curves, confusion matrices
  • πŸ”„ Reproducible - Version controlled with requirements

πŸ“‚ Project Structure

graph TD
    A[Data Collection] --> B[Label Studio Annotation]
    B --> C[prepare_yolo_dataset.py]
    C --> D[YOLO-formatted Dataset]
    D --> E[train.py]
    E --> F[Model Training]
    F --> G[evaluate.py]
    G --> H[Performance Metrics]

project-root/
β”œβ”€β”€ config.yaml                  # Training configuration
β”œβ”€β”€ prepare_yolo_dataset.py      # Dataset preparation
β”œβ”€β”€ training/
β”‚   β”œβ”€β”€ train.py                 # Training script
β”‚   └── my_model.pt              # Output model
β”œβ”€β”€ evaluate.py                  # Evaluation script
β”œβ”€β”€ data/                        # Organized dataset
β”‚   β”œβ”€β”€ train/images/            # Training images
β”‚   β”œβ”€β”€ train/labels/            # Training labels
β”‚   β”œβ”€β”€ val/images/              # Validation images
β”‚   β”œβ”€β”€ val/labels/              # Validation labels
β”‚   └── data.yaml                # Dataset config
└── runs/                        # Training outputs
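
data/data.yaml is the standard Ultralytics dataset descriptor. A minimal example matching the layout above (class names are placeholders to replace with your own):

# data/data.yaml -- illustrative example
path: ./data           # dataset root
train: train/images    # training images, relative to path
val: val/images        # validation images, relative to path

names:
  0: class_a           # replace with your own class names
  1: class_b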

πŸ› ️ Tech Stack

| Component       | Technology       |
|-----------------|------------------|
| Framework       | YOLOv8 (PyTorch) |
| Annotation      | Label Studio     |
| Data Processing | OpenCV, Pandas   |
| Configuration   | YAML             |
| Version Control | Git              |

To set up a Python environment (a conda environment named yolo-env1 is assumed) and install the core dependencies:

conda activate yolo-env1
pip install ultralytics opencv-python numpy
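
To confirm that the installation picked up a working PyTorch (and, where available, CUDA) setup, Ultralytics includes a built-in environment check:

yolo checks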

πŸ“Œ Script Arguments

The yolo_detect.py inference script accepts the following arguments:

| Argument     | Required | Description |
|--------------|----------|-------------|
| --model      | βœ… Yes   | Path to YOLO model file (.pt), e.g., runs/detect/train/weights/best.pt |
| --source     | βœ… Yes   | Input source: image file (test.jpg), folder (./images/), video file (video.mp4), or USB webcam (usb0) |
| --thresh     | ❌ No    | Minimum confidence threshold for detections (default: 0.5) |
| --resolution | ❌ No    | Output resolution in WxH format (e.g., 640x480). Default is the source resolution. |
| --record     | ❌ No    | Record results (video or webcam only). Saves to demo1.avi. Requires --resolution. |
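
The inference script itself is not reproduced in this README; the sketch below shows an argument parser consistent with the table, purely to make the expected interface concrete (argument names and defaults mirror the table, everything else is illustrative):

# Illustrative argument parsing for yolo_detect.py-style usage
import argparse

parser = argparse.ArgumentParser(description="Run YOLOv8 inference on images, video, or a webcam")
parser.add_argument("--model", required=True, help="Path to YOLO model file (.pt)")
parser.add_argument("--source", required=True, help="Image file, folder, video file, or webcam (e.g. usb0)")
parser.add_argument("--thresh", type=float, default=0.5, help="Minimum confidence threshold")
parser.add_argument("--resolution", default=None, help="Output resolution as WxH, e.g. 640x480")
parser.add_argument("--record", action="store_true", help="Record results to demo1.avi (requires --resolution)")
args = parser.parse_args()

if args.record and not args.resolution:
    parser.error("--record requires --resolution")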

πŸ“Œ Usage Examples

1. Run on a single image

python yolo_detect.py --model runs/detect/train/weights/best.pt --source test.jpg

2. Run on a folder of images

python yolo_detect.py --model runs/detect/train/weights/best.pt --source ./images/

3. Run on a video file

python yolo_detect.py --model runs/detect/train/weights/best.pt --source video.mp4

4. Run on a USB webcam

python yolo_detect.py --model runs/detect/train/weights/best.pt --source usb0 --resolution 640x480

5. Record results from webcam

python yolo_detect.py --model runs/detect/train/weights/best.pt --source usb0 --resolution 640x480 --record

πŸ“Œ Controls During Inference

  • Press q → Quit
  • Press s → Pause inference
  • Press p → Save current frame as capture.png
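
Assuming the display loop is built on OpenCV (part of the tech stack above), the controls can be implemented with a key-handling pattern like the following sketch; the actual yolo_detect.py may differ in detail:

# Illustrative OpenCV key handling for the controls above (not the repository's exact code)
import cv2

cap = cv2.VideoCapture(0)              # e.g. a USB webcam
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # ... run YOLO inference here and draw boxes on `frame` ...
    cv2.imshow("detections", frame)

    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):                # q -> quit
        break
    elif key == ord("s"):              # s -> pause until any key is pressed
        cv2.waitKey(0)
    elif key == ord("p"):              # p -> save the current frame
        cv2.imwrite("capture.png", frame)

cap.release()
cv2.destroyAllWindows()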

πŸš€ Getting Started

βœ… Prerequisites

  • Python 3.8+
  • Ultralytics YOLOv8 (pip install ultralytics)
  • Label Studio (for annotation)

βš™οΈ Setup

git clone https://github.com/shivamprasad1001/yolo-project.git
cd yolo-project
pip install -r requirements.txt

πŸ“¦ Dataset Preparation

  1. Annotate images in Label Studio (YOLO format)
  2. Export as data.zip
  3. Run:
python prepare_yolo_dataset.py
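
For reference, a YOLO-format label file (the format Label Studio exports and prepare_yolo_dataset.py consumes) is a plain-text .txt per image with one line per bounding box: class index followed by the normalized x-center, y-center, width, and height. For example (values are illustrative):

0 0.512 0.437 0.210 0.345
2 0.135 0.720 0.090 0.160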

πŸ‹οΈ Training

Edit config.yaml then:

python training/train.py
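
Training is driven by config.yaml (documented below). A minimal sketch of what a config-driven training script can look like, using those keys; this is illustrative and not necessarily the repository's exact code:

# Illustrative config-driven training, using the config.yaml keys documented below
import yaml                      # PyYAML, installed alongside ultralytics
from ultralytics import YOLO

with open("config.yaml") as f:
    cfg = yaml.safe_load(f)

model = YOLO(cfg["model"])       # e.g. yolov8n.pt
model.train(
    data=cfg["data"],            # e.g. data/data.yaml
    epochs=cfg["epochs"],
    imgsz=cfg["imgsz"],
    batch=cfg["batch"],
    project=cfg["project"],
    name=cfg["name"],
)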

πŸ“Š Evaluation

python evaluate.py
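
Evaluation ultimately calls val() on the trained weights; a minimal sketch with illustrative paths:

# Minimal evaluation sketch (paths are illustrative)
from ultralytics import YOLO

model = YOLO("training/my_model.pt")            # trained weights
metrics = model.val(data="data/data.yaml")      # evaluates on the validation split
print(f"mAP50-95: {metrics.box.map:.3f}")
print(f"mAP50:    {metrics.box.map50:.3f}")
# PR curve and confusion matrix plots are written to the run directory under runs/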

βš™οΈ Configuration (config.yaml)

# Model configuration
model: yolov8n.pt        # yolov8n/s/m/l/x
data: data/data.yaml      # Dataset config
epochs: 50               # Training epochs
imgsz: 640               # Image size
batch: 16                # Batch size
project: runs/train      # Output directory
name: custom             # Run name
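
These keys map directly onto Ultralytics training arguments, so an equivalent run can also be launched from the Ultralytics CLI:

yolo detect train model=yolov8n.pt data=data/data.yaml epochs=50 imgsz=640 batch=16 project=runs/train name=custom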

πŸ” Security & Best Practices

  • All training data remains local
  • Model weights can be encrypted for deployment
  • Git ignores sensitive training outputs
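
For the last point, a minimal .gitignore along these lines keeps training outputs and raw data out of version control (illustrative; the repository's actual ignore rules may differ):

# Training outputs: weights, plots, logs
runs/

# Local dataset images and labels (data.yaml stays tracked)
data/train/
data/val/

# Raw Label Studio exports such as data.zip
*.zip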

🚧 Future Roadmap

  • TensorRT optimization for deployment
  • Web-based annotation interface
  • Automated hyperparameter tuning
  • Docker support for easy setup

πŸ‘¨‍πŸ’» Author

Shivam Prasad
GitHub | LinkedIn


πŸ“ License

MIT License - See LICENSE for details.
