Train a custom object detector using YOLOv7


HOW TO BEGIN?

  • First, ✅ Subscribe to my YouTube channel: https://www.youtube.com/c/techzizou
  • Open my Colab notebook on your browser.
  • Click on File in the menu bar and click on Save a copy in Drive. This opens a copy of my Colab notebook in your browser, which you can now use.
  • Next, once you have opened the copy of my notebook and are connected to the Google Colab VM, click on Runtime in the menu bar and click on Change runtime type. Select GPU and click on Save. You can verify the GPU allocation with the quick check below.
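Once connected to a GPU runtime, this optional cell confirms that a GPU was actually allocated:

# check the allocated GPU (optional)
!nvidia-smi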

In this tutorial, we will be using the yolov7 model. You can use any of the official variants: yolov7, yolov7x, yolov7-w6, yolov7-e6, yolov7-d6, yolov7-e6e, or yolov7-tiny.

Follow these 7 steps to train your custom YOLOv7 object detector:

NOTE: If you get disconnected or lose your session for some reason, you have to run steps 1, 3, and 5 again.


1) Mount the drive

#mount drive
%cd ..
from google.colab import drive
drive.mount('/content/gdrive')

# this creates a symbolic link so that now the path /content/gdrive/My\ Drive/ is equal to /mydrive
!ln -s /content/gdrive/My\ Drive/ /mydrive

# list the contents of /mydrive
!ls /mydrive

2) Clone the official YOLOv7 git repository

# clone the repo into your drive so it persists across Colab sessions
%cd /mydrive
!git clone https://github.com/WongKinYiu/yolov7.git

3) Navigate to the yolov7 folder and install PyTorch and all the required libraries and dependencies

#Navigate to /mydrive/yolov7
%cd /mydrive/yolov7

!pip install -r requirements.txt
#Check if PyTorch is installed and whether a GPU is available
import torch
import os
from IPython.display import Image, clear_output # to display images

print(f"Setup complete. Using torch {torch.__version__} ({torch.cuda.get_device_properties(0).name if torch.cuda.is_available() else 'CPU'})")

4) Create & upload the following files which we need for training a custom detector

a. Labeled Custom Dataset

b. process_yolov7.py file (to split dataset into train-test-val folders for training)

c. data.yaml file

d. config file (custom_yolov7s.yaml)

e. YOLOv7 Pre-trained weights

I have uploaded my custom files on GitHub. I am working with 2 classes, i.e., with_mask and without_mask.


4(a) Upload the Labeled custom dataset obj.zip file to the yolov7 folder on your drive and unzip it

Create the zip file obj.zip from the obj folder, which contains two subfolders: images and labels. The images folder has all the input “.jpg” images, and the labels folder has their corresponding YOLO-format “.txt” label files.

Upload the zip file to the yolov7 folder on your drive.

Labeling your Dataset

Input image example (Image1.jpg)

Original Photo by Ali Pazani from Pexels

You can use any software for labeling like the labelImg tool.

labelImg GUI for Image1.jpg


NOTE: Garbage In = Garbage Out. Choosing and labeling images is the most important part. Try to find good-quality images. The quality of the data goes a long way toward determining the quality of the result.

The output YOLO format TXT label file looks as shown below.

Image1.txt
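Each line in the label file describes one object as “class_id x_center y_center width height”, with coordinates normalized to the image dimensions. A hypothetical Image1.txt with one with_mask face (class 0) and one without_mask face (class 1) could look like this:

0 0.481250 0.394531 0.362500 0.523438
1 0.759375 0.505469 0.237500 0.376562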

Unzip the obj.zip dataset and its contents

!unzip -q /mydrive/yolov7/obj.zip -d /mydrive/yolov7

Note: You can also use other methods to get your dataset like the curl command to download the dataset from Roboflow. Visit Roboflow and go to the Public Datasets tab for more datasets.

curl -L "https://public.roboflow.ai/ds/YOUR-API-KEY-HERE" > roboflow.zip; unzip roboflow.zip; rm roboflow.zip

Since I already have a simple dataset archive ready, I will be using that.


4(b) Split the dataset images and labels into train-test-val. Run the “process_yolov7.py” Python script to create the train, test & val folders inside the yolov7/obj directory

Here the train folder will have 80% of the dataset images and their labels while the test & val folders will each have 10% of the dataset images and their labels.

The split folders will have the following structure:

obj
├── train
│   ├── images
│   └── labels
├── val
│   ├── images
│   └── labels
└── test
    ├── images
    └── labels
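The actual process_yolov7.py script is available with my custom files on GitHub. In case you want to write your own, a minimal sketch of an 80/10/10 split could look like this (it assumes the unzipped obj folder contains the images/ and labels/ subfolders described in step 4(a)):

# process_yolov7.py - minimal sketch of an 80/10/10 train/val/test split
# (hypothetical re-implementation; assumes obj/images/*.jpg with matching obj/labels/*.txt)
import os
import random
import shutil

random.seed(42)  # make the split reproducible

SRC_IMAGES = 'obj/images'
SRC_LABELS = 'obj/labels'

# collect image stems that have a matching label file
stems = [os.path.splitext(f)[0] for f in os.listdir(SRC_IMAGES) if f.endswith('.jpg')]
stems = [s for s in stems if os.path.exists(os.path.join(SRC_LABELS, s + '.txt'))]
random.shuffle(stems)

n = len(stems)
splits = {
    'train': stems[:int(0.8 * n)],              # 80%
    'val':   stems[int(0.8 * n):int(0.9 * n)],  # 10%
    'test':  stems[int(0.9 * n):],              # 10%
}

for split, names in splits.items():
    img_dir = os.path.join('obj', split, 'images')
    lbl_dir = os.path.join('obj', split, 'labels')
    os.makedirs(img_dir, exist_ok=True)
    os.makedirs(lbl_dir, exist_ok=True)
    for s in names:
        shutil.copy(os.path.join(SRC_IMAGES, s + '.jpg'), img_dir)
        shutil.copy(os.path.join(SRC_LABELS, s + '.txt'), lbl_dir)
    print(f'{split}: {len(names)} images')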

Run the process script

# run process_yolov7.py (this creates the train, test and val folders, each containing images and labels subfolders)
!python process_yolov7.py

# list the contents of the obj folder to check that the train, test and val folders have been created
!ls obj

4(c) Create your data.yaml file and upload it to the yolov7 folder in your drive

This file contains the paths to your train and val images, which we created in the previous step.
It also contains the number of classes and their names.

data.yaml file is shown below:

# train and val data as 1) directory: path/images/, 2) file: path/images.txt, or 3) list: [path1/images/, path2/images/]

train: /mydrive/yolov7/obj/train/images/
val: /mydrive/yolov7/obj/val/images/
test: /mydrive/yolov7/obj/test/images/ #optional

nc: 2
names: ['with_mask', 'without_mask']
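If you prefer not to create and upload data.yaml by hand, you can also write it straight from a notebook cell using the %%writefile magic (an equivalent alternative):

%%writefile /mydrive/yolov7/data.yaml
train: /mydrive/yolov7/obj/train/images/
val: /mydrive/yolov7/obj/val/images/
test: /mydrive/yolov7/obj/test/images/ # optional

nc: 2
names: ['with_mask', 'without_mask']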

4(d) Set the config file

(Define YOLOv7 Model Configuration and Architecture)

Next, we write a model configuration file for our custom object detector. For this tutorial, we will be using YOLOv7. You have the option to pick from all the YOLOv7 model configs mentioned below:

  • yolov7.yaml
  • yolov7x.yaml
  • yolov7-w6.yaml
  • yolov7-e6.yaml
  • yolov7-d6.yaml
  • yolov7-e6e.yaml
  • yolov7-tiny.yaml

You will find these inside the cfg/training directory. You can also edit the structure of the network in this step, though you will rarely need to do this. Here is the YOLOv7 model configuration file, which I am naming custom_yolov7.yaml (I edited the yolov7.yaml file in the cfg/training folder and renamed it to custom_yolov7.yaml).

The default yolov7.yaml config file is shown below:

# parameters
nc: 80 # number of classes
depth_multiple: 1.0 # model depth multiple
width_multiple: 1.0 # layer channel multiple

# anchors
anchors:
- [12,16, 19,36, 40,28] # P3/8
- [36,75, 76,55, 72,146] # P4/16
- [142,110, 192,243, 459,401] # P5/32

# yolov7 backbone
backbone:
# [from, number, module, args]
[[-1, 1, Conv, [32, 3, 1]], # 0

[-1, 1, Conv, [64, 3, 2]], # 1-P1/2
[-1, 1, Conv, [64, 3, 1]],

[-1, 1, Conv, [128, 3, 2]], # 3-P2/4
[-1, 1, Conv, [64, 1, 1]],
[-2, 1, Conv, [64, 1, 1]],
[-1, 1, Conv, [64, 3, 1]],
[-1, 1, Conv, [64, 3, 1]],
[-1, 1, Conv, [64, 3, 1]],
[-1, 1, Conv, [64, 3, 1]],
[[-1, -3, -5, -6], 1, Concat, [1]],
[-1, 1, Conv, [256, 1, 1]], # 11

[-1, 1, MP, []],
[-1, 1, Conv, [128, 1, 1]],
[-3, 1, Conv, [128, 1, 1]],
[-1, 1, Conv, [128, 3, 2]],
[[-1, -3], 1, Concat, [1]], # 16-P3/8
[-1, 1, Conv, [128, 1, 1]],
[-2, 1, Conv, [128, 1, 1]],
[-1, 1, Conv, [128, 3, 1]],
[-1, 1, Conv, [128, 3, 1]],
[-1, 1, Conv, [128, 3, 1]],
[-1, 1, Conv, [128, 3, 1]],
[[-1, -3, -5, -6], 1, Concat, [1]],
[-1, 1, Conv, [512, 1, 1]], # 24

[-1, 1, MP, []],
[-1, 1, Conv, [256, 1, 1]],
[-3, 1, Conv, [256, 1, 1]],
[-1, 1, Conv, [256, 3, 2]],
[[-1, -3], 1, Concat, [1]], # 29-P4/16
[-1, 1, Conv, [256, 1, 1]],
[-2, 1, Conv, [256, 1, 1]],
[-1, 1, Conv, [256, 3, 1]],
[-1, 1, Conv, [256, 3, 1]],
[-1, 1, Conv, [256, 3, 1]],
[-1, 1, Conv, [256, 3, 1]],
[[-1, -3, -5, -6], 1, Concat, [1]],
[-1, 1, Conv, [1024, 1, 1]], # 37

[-1, 1, MP, []],
[-1, 1, Conv, [512, 1, 1]],
[-3, 1, Conv, [512, 1, 1]],
[-1, 1, Conv, [512, 3, 2]],
[[-1, -3], 1, Concat, [1]], # 42-P5/32
[-1, 1, Conv, [256, 1, 1]],
[-2, 1, Conv, [256, 1, 1]],
[-1, 1, Conv, [256, 3, 1]],
[-1, 1, Conv, [256, 3, 1]],
[-1, 1, Conv, [256, 3, 1]],
[-1, 1, Conv, [256, 3, 1]],
[[-1, -3, -5, -6], 1, Concat, [1]],
[-1, 1, Conv, [1024, 1, 1]], # 50
]

# yolov7 head
head:
[[-1, 1, SPPCSPC, [512]], # 51

[-1, 1, Conv, [256, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[37, 1, Conv, [256, 1, 1]], # route backbone P4
[[-1, -2], 1, Concat, [1]],

[-1, 1, Conv, [256, 1, 1]],
[-2, 1, Conv, [256, 1, 1]],
[-1, 1, Conv, [128, 3, 1]],
[-1, 1, Conv, [128, 3, 1]],
[-1, 1, Conv, [128, 3, 1]],
[-1, 1, Conv, [128, 3, 1]],
[[-1, -2, -3, -4, -5, -6], 1, Concat, [1]],
[-1, 1, Conv, [256, 1, 1]], # 63

[-1, 1, Conv, [128, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[24, 1, Conv, [128, 1, 1]], # route backbone P3
[[-1, -2], 1, Concat, [1]],

[-1, 1, Conv, [128, 1, 1]],
[-2, 1, Conv, [128, 1, 1]],
[-1, 1, Conv, [64, 3, 1]],
[-1, 1, Conv, [64, 3, 1]],
[-1, 1, Conv, [64, 3, 1]],
[-1, 1, Conv, [64, 3, 1]],
[[-1, -2, -3, -4, -5, -6], 1, Concat, [1]],
[-1, 1, Conv, [128, 1, 1]], # 75

[-1, 1, MP, []],
[-1, 1, Conv, [128, 1, 1]],
[-3, 1, Conv, [128, 1, 1]],
[-1, 1, Conv, [128, 3, 2]],
[[-1, -3, 63], 1, Concat, [1]],

[-1, 1, Conv, [256, 1, 1]],
[-2, 1, Conv, [256, 1, 1]],
[-1, 1, Conv, [128, 3, 1]],
[-1, 1, Conv, [128, 3, 1]],
[-1, 1, Conv, [128, 3, 1]],
[-1, 1, Conv, [128, 3, 1]],
[[-1, -2, -3, -4, -5, -6], 1, Concat, [1]],
[-1, 1, Conv, [256, 1, 1]], # 88

[-1, 1, MP, []],
[-1, 1, Conv, [256, 1, 1]],
[-3, 1, Conv, [256, 1, 1]],
[-1, 1, Conv, [256, 3, 2]],
[[-1, -3, 51], 1, Concat, [1]],

[-1, 1, Conv, [512, 1, 1]],
[-2, 1, Conv, [512, 1, 1]],
[-1, 1, Conv, [256, 3, 1]],
[-1, 1, Conv, [256, 3, 1]],
[-1, 1, Conv, [256, 3, 1]],
[-1, 1, Conv, [256, 3, 1]],
[[-1, -2, -3, -4, -5, -6], 1, Concat, [1]],
[-1, 1, Conv, [512, 1, 1]], # 101

[75, 1, RepConv, [256, 3, 1]],
[88, 1, RepConv, [512, 3, 1]],
[101, 1, RepConv, [1024, 3, 1]],

[[102,103,104], 1, IDetect, [nc, anchors]], # Detect(P3, P4, P5)
]

Copy the yaml file above to the yolov7 root folder, rename it to custom_yolov7.yaml, and lastly change the number of classes (nc) in the custom_yolov7.yaml file.

  • Set the filenames below according to the model you are using. I am training YOLOv7, therefore I’m using the filenames with yolov7.
  • Change the number of classes (nc) to what you have.
#Copy the yolov7.yaml config file to your drive inside the yolov7 folder
!cp cfg/training/yolov7.yaml /mydrive/yolov7/

#Rename the yolov7.yaml file to custom_yolov7.yaml
!mv yolov7.yaml custom_yolov7.yaml

#Change number of classes in the custom_yolov7.yaml
!sed -i 's/nc: 80/nc: 2/' custom_yolov7.yaml
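To confirm the change took effect, you can print the class-count line from the file:

# optional check: this should print nc: 2
!grep 'nc:' custom_yolov7.yaml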

4(e) Download the YOLOv7 pre-trained weights

For training, download the pre-trained weights checkpoint for your selected model. Since I am using the yolov7 model, I will download the pre-trained weights for yolov7 which is “yolov7_training.pt”. You can download the weights for your model.

!wget https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7_training.pt

#!wget https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7x_training.pt

#!wget https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7-w6_training.pt

#!wget https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7-e6_training.pt

#!wget https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7-d6_training.pt

#!wget https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7-e6e_training.pt

#!wget https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7-tiny.pt

Note: yolov7-tiny does not have a separate pre-trained checkpoint with a _training suffix. You can simply use the yolov7-tiny.pt pre-trained weights file for it.

5) Load Tensorboard to visualize training metrics

#@title Select YOLOv7 logger {run: 'auto'}
logger = 'TensorBoard' #@param ['ClearML', 'Comet', 'TensorBoard']

if logger == 'ClearML':
  %pip install -q clearml
  import clearml; clearml.browser_login()
elif logger == 'Comet':
  %pip install -q comet_ml
  import comet_ml; comet_ml.init()
elif logger == 'TensorBoard':
  %load_ext tensorboard
  %tensorboard --logdir runs/train

6) Train the model

We can pass the following arguments in the training command:

  • img: define input image size. The default is 640.
  • batch-size: determine batch size
  • epochs: define the number of training epochs.
  • data: set the path to our yaml file (data.yaml file)
  • cfg: specify our model configuration file
  • conf-thres: minimum confidence threshold for detections (used by test.py and detect.py rather than train.py)
  • hyp: hyperparameters path (default value: 'data/hyp.scratch.p5.yaml')
  • workers: how many subprocesses to parallelize during training
  • name: name of the output directory
  • weights: specify the path to the pre-trained weights checkpoint for training. You can use the weights of whichever model you want to train.
     — yolov7_training.pt
     — yolov7x_training.pt
     — yolov7-w6_training.pt
     — yolov7-e6_training.pt
     — yolov7-d6_training.pt
     — yolov7-e6e_training.pt
     — yolov7-tiny.pt

We downloaded the above specified pre-trained weights in step 4(e).


There are more parameters we can pass here. Read Roboflow’s post (linked in the References) for more.

Train using the train.py script in the yolov7 root folder.

# Train on single GPU
!python train.py --workers 1 --device 0 --batch-size 16 --data data.yaml --img-size 640 640 --cfg custom_yolov7.yaml --weights yolov7_training.pt --name custom-detector --hyp data/hyp.scratch.custom.yaml --epochs 50
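After training finishes, the best and last weights along with the training plots are saved under runs/train/custom-detector. Assuming the default results.png plot was written there, you can view the training curves inline:

# display the training metrics plot saved by train.py
from IPython.display import Image
Image(filename='/mydrive/yolov7/runs/train/custom-detector/results.png', width=800)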

Retraining from the last saved checkpoint

# resume training from the last saved checkpoint in your drive
!python train.py --workers 1 --device 0 --batch-size 16 --data data.yaml --img-size 640 640 --cfg custom_yolov7.yaml --weights /mydrive/yolov7/runs/train/custom-detector/weights/last.pt --name custom-detector --hyp data/hyp.scratch.custom.yaml --epochs 100

7) Test the trained model

This evaluates the model on the images in the test folder we created in step 4(b), whose path we set in the data.yaml file.

#Use either of the 2 commands

!python test.py --weights /mydrive/yolov7/runs/train/custom-detector/weights/best.pt --data data.yaml --task test

#!python test.py --data data.yaml --img 640 --batch 16 --conf-thres 0.001 --iou 0.65 --device 0 --weights /mydrive/yolov7/runs/train/custom-detector/weights/best.pt

DETECTION ON IMAGES

Run Detector

!python detect.py --source /mydrive/mask_test_images --weights /mydrive/yolov7/runs/train/custom-detector/weights/best.pt --img-size 640 --conf-thres 0.2 

Display the image outputs

#display inference on ALL test images
import glob
from IPython.display import Image, display

for imageName in glob.glob('/mydrive/yolov7/runs/detect/exp/*.jpg'): #assuming JPG
    display(Image(filename=imageName))
    print("\n")

DETECTION ON VIDEOS

Run Detector

!python detect.py --source /mydrive/mask_test_videos/test.mp4 --weights /mydrive/yolov7/runs/train/custom-detector/weights/best.pt --img-size 640 --conf-thres 0.2 
# 0 # webcam
# img.jpg # image
# vid.mp4 # video
# path/ # directory
# path/*.jpg # glob
# 'https://youtu.be/Zgi9g1ksQHc' # YouTube
# 'rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP stream
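To play the detection output inline in Colab, you can embed the saved mp4 as base64 (a convenience sketch; adjust the exp folder to match your run, and re-encode with ffmpeg first if the browser cannot play the codec detect.py used):

# play the output video inline in Colab
from base64 import b64encode
from IPython.display import HTML

video_path = '/mydrive/yolov7/runs/detect/exp2/test.mp4'  # adjust to your run's output folder
mp4 = open(video_path, 'rb').read()
data_url = 'data:video/mp4;base64,' + b64encode(mp4).decode()
HTML(f'<video width=640 controls><source src="{data_url}" type="video/mp4"></video>')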


(OPTIONAL STEP) CONVERSION

Convert to ONNX, TorchScript, TorchScript Lite

(Using the export.py script from YOLOv7’s repository)

Install dependencies

#Requirements:
# $ pip install -r requirements.txt coremltools onnx onnx-simplifier onnxruntime openvino-dev tensorflow-cpu # CPU
# $ pip install -r requirements.txt coremltools onnx onnx-simplifier onnxruntime-gpu openvino-dev tensorflow # GPU

!pip install -r requirements.txt coremltools onnx onnx-simplifier onnxruntime-gpu

Convert using the export.py script (read more on exporting to other formats in the YOLOv7 repository README).

# ONNX WITH NMS
!python export.py --weights /mydrive/yolov7/runs/train/custom-detector/weights/best.pt --grid --end2end --simplify \
--topk-all 100 --iou-thres 0.65 --conf-thres 0.35 --img-size 640 640 --max-wh 640
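Once exported, a minimal sketch of running the end2end ONNX model with onnxruntime could look like the following. It assumes the (N, 7) output rows [batch_id, x0, y0, x1, y1, class_id, score] that the repo's onnxruntime-targeted end2end export produces, and uses a plain resize instead of letterboxing for brevity:

# minimal onnxruntime inference sketch for the end2end export above
import cv2
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession('best.onnx', providers=['CPUExecutionProvider'])
input_name = session.get_inputs()[0].name

img = cv2.imread('/mydrive/mask_test_images/image1.jpg')  # example path
blob = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
blob = cv2.resize(blob, (640, 640))                       # matches --img-size 640 640
blob = blob.transpose(2, 0, 1)[None].astype(np.float32) / 255.0

# with NMS baked into the graph, each output row is one final detection
detections = session.run(None, {input_name: blob})[0]

names = ['with_mask', 'without_mask']
for batch_id, x0, y0, x1, y1, cls_id, score in detections:
    print(f'{names[int(cls_id)]}: {score:.2f} at ({x0:.0f},{y0:.0f})-({x1:.0f},{y1:.0f})')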

That’s it for YOLOv7!


References

https://github.com/WongKinYiu/yolov7

https://blog.roboflow.com/yolov7-custom-dataset-training-tutorial/#training-the-yolov7-with-custom-data
