YOLOv8 Tutorial


AI Made Simple: User-Friendly, Step-by-Step YOLOv8 Tutorials.
This article provides a comprehensive guide to setting up and using YOLOv8, taking you from beginner to expert. With meticulously detailed steps and well-structured sections, it not only ensures a smooth learning curve but also makes the instructions applicable to future versions.

Install Anaconda

Download the installer from the official Anaconda website to your computer.

Please choose the appropriate installer based on your operating system.

All installers provide a graphical installation interface.

Setting Environment Variables

1. Search for Environment Variables in your system settings.
2. Click on Environment Variables.
3. Under System Variables, select Path and click Edit.


4. Click New and add the following environment variable paths:

Environment Variable Paths

C:\ProgramData\Anaconda3
C:\ProgramData\Anaconda3\Library\bin
C:\ProgramData\Anaconda3\Scripts
C:\ProgramData\Anaconda3\condabin

Creating a YOLOv8 Environment

1. Launch Anaconda Navigator.


2. At the bottom of the interface, click Create to add a new environment.


3. In the Create New Environment dialog, configure the environment as follows:
  • Name: YOLOv8
  • Location: C:\Users\%username%\envs\YOLOv8
  • Package: Python 3.10.16

Activating the YOLOv8 Environment

1. Select the newly created YOLOv8 environment and click Open Terminal.


2. You should now see the terminal window for the environment.

Advanced Usage

  • Open the Windows Command Prompt (on Windows 11 it has been folded into Windows Terminal; we will refer to it as the Windows Command Prompt here)
  • Create the environment with the following command:
conda create --name <environment_name> python=<python_version>

Example:

conda create --name YOLOv8 python=3.10.16

  • To activate the environment, use:
conda activate YOLOv8
  • Once the environment is activated successfully, your terminal should look like this:



Install NVIDIA Driver

Installing the NVIDIA CUDA Toolkit

Download and install the CUDA Toolkit (Version 11.8).


🖊️Note
Be sure to select the correct version of your operating system.
This guide uses Windows 11 as an example.

Installing the NVIDIA cuDNN

1. Go to the cuDNN Archive and download the version highlighted in the red box.

2. Open the section titled Download cuDNN v8.9.7 (December 5th, 2023), for CUDA 11.x, and select the file as shown in the red box.


3. You must sign in to your NVIDIA account to download. If you don't have one, register for one first.


4. After downloading, you will get the file: 
cudnn-windows-x86_64-8.9.7.29_cuda11-archive.zip


5. Locate the following path:

C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8


6. Extract the contents of the ZIP file and copy the bin, include, and lib folders into the CUDA directory above, merging them into the corresponding folders inside:

C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8

Verifying the NVIDIA Driver Installation


Open the Windows Command Prompt and type the following command to check if the GPU is detected by the driver:

nvidia-smi

If successful, you will see an output like the following:


Then, enter the following command to confirm the CUDA compiler (nvcc) is correctly installed:

nvcc -V
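You can also run the same check from Python by testing whether these tools are visible on the PATH. A small sketch using only the standard library:

```python
import shutil

def tool_on_path(name):
    """Return True if an executable with this name can be found on the PATH."""
    return shutil.which(name) is not None

if __name__ == "__main__":
    for tool in ("nvidia-smi", "nvcc"):
        status = "found" if tool_on_path(tool) else "NOT found"
        print(f"{tool}: {status}")
```

If either tool is reported as NOT found, revisit the driver or CUDA Toolkit installation steps above.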

Installing YOLOv8

1. Download the required files: 7-Zip, the YOLOv8 repository, and the pretrained YOLOv8n.pt model.

❤️ Reminder
Please download the following files from the provided links before proceeding.

  • Download the 7-Zip.
  • Clone or download the YOLOv8 from the GitHub repository.
  • Download the YOLOv8n.pt model (weights).

2. First, install 7-Zip. To verify the installation, right-click any file and check whether the 7-Zip menu appears as shown below:


3. Extract the YOLOv8-main.zip file using 7-Zip or another archive tool.


4. After extraction, you may rename the folder to YOLOv8 and place it in a directory of your choice.
For example:


💡Tip
In this tutorial, we use D:\ as the example directory path.


5. Move the yolov8n.pt model file into the YOLOv8 folder.

6. In the command line inside the YOLOv8 folder, run the following command to install the Ultralytics package: 

pip install ultralytics

Installing PyTorch

1. Visit the official PyTorch website to find the appropriate version for your environment.

2. In the Windows Command Prompt, change the directory to your YOLOv8 project folder:
cd /d D:\YOLOv8


3. Install PyTorch using the following command:

pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118

4. If this version does not work for your system, visit the Previous PyTorch Versions page to find a suitable one.
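After installation, you can confirm that PyTorch actually sees your GPU. A defensive sketch (it also reports when PyTorch is missing, so it runs in any environment):

```python
def cuda_status():
    """Return a short description of PyTorch's CUDA visibility."""
    try:
        import torch
    except ImportError:
        return "PyTorch is not installed in this environment"
    if torch.cuda.is_available():
        return f"CUDA available: {torch.cuda.get_device_name(0)}"
    return "PyTorch is installed, but CUDA is not available (CPU only)"

if __name__ == "__main__":
    print(cuda_status())
```

If this reports CPU only on a machine with an NVIDIA GPU, the CUDA-enabled wheel was likely not installed; see the note in the next section.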


...

Using YOLOv8

Please complete the steps above before following this section.

1. Run this command to test that the environment is correct.

yolo predict model=yolov8n.pt source='https://ultralytics.com/images/bus.jpg'

2. If the command in step 1 runs without any errors, you've done everything correctly.


🖊️Note

If your computer has a GPU but YOLOv8 is only using the CPU, the wrong version of PyTorch may have been installed. In that case, first uninstall PyTorch and its dependencies from the environment, then install the correct version. If that doesn't work, delete the environment and recreate it.


3. For reference, the author's directory structure is shown below.


├─dataset
│  ├─test
│  │  ├─images
│  │  │  ├─test_img001.jpg
│  │  │  ├─test_img002.jpeg
│  │  │  ├─test_img003.png
│  │  │  └─test_img004.tif
│  │  └─labels
│  │     ├─test_img001.txt
│  │     ├─test_img002.txt
│  │     ├─test_img003.txt
│  │     └─test_img004.txt
│  ├─train
│  │  ├─images
│  │  │  ├─train_img001.jpg
│  │  │  ├─train_img002.jpeg
│  │  │  ├─train_img003.png
│  │  │  └─train_img004.tif
│  │  └─labels
│  │     ├─train_img001.txt
│  │     ├─train_img002.txt
│  │     ├─train_img003.txt
│  │     └─train_img004.txt
│  └─valid
│     ├─images
│     │  ├─valid_img001.jpg
│     │  ├─valid_img002.jpeg
│     │  ├─valid_img003.png
│     │  └─valid_img004.tif
│     └─labels
│        ├─valid_img001.txt
│        ├─valid_img002.txt
│        ├─valid_img003.txt
│        └─valid_img004.txt
└─data.yaml
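A layout like this can be checked automatically. The sketch below (standard library only, assuming the images/labels naming convention shown above) lists any image that has no matching label file:

```python
from pathlib import Path

IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".tif"}

def unlabeled_images(dataset_root):
    """Return images under <split>/images with no matching <split>/labels/<stem>.txt."""
    root = Path(dataset_root)
    missing = []
    for split in ("train", "valid", "test"):
        images_dir = root / split / "images"
        labels_dir = root / split / "labels"
        if not images_dir.is_dir():
            continue
        for img in sorted(images_dir.iterdir()):
            if img.suffix.lower() in IMAGE_EXTS:
                # Each image needs a same-named .txt label in the labels folder
                if not (labels_dir / f"{img.stem}.txt").is_file():
                    missing.append(img)
    return missing

if __name__ == "__main__":
    for img in unlabeled_images("dataset"):
        print("Missing label for:", img)
```

Running this before training can save a failed run caused by incomplete annotations.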

Training the YOLOv8 with Custom Dataset

1. Create the data.yaml file: inside your YOLOv8 directory, create a file named data.yaml.

The file contains the paths to the image and label folders for the training, validation, and test sets, along with the class definitions. For example:

train: dataset/train
val: dataset/valid
test: dataset/test
nc: 2
names: ['Helmet', 'Coverall']

2. Parameter Descriptions
  • train: Path to the training set.
  • val: Path to the validation set.
  • test: Path to the test set.
  • nc : Number of classes.
  • names: Class names in order: 0 corresponds to Helmet, 1 to Coverall.
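Since the file is small, it can also be generated from a script. A minimal sketch (plain text output, no YAML library needed) that reproduces the example above:

```python
def make_data_yaml(train, val, test, names):
    """Build the contents of a minimal Ultralytics-style data.yaml file."""
    lines = [
        f"train: {train}",
        f"val: {val}",
        f"test: {test}",
        f"nc: {len(names)}",  # nc is derived from the class list, so they never disagree
        "names: [" + ", ".join(f"'{n}'" for n in names) + "]",
    ]
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    content = make_data_yaml("dataset/train", "dataset/valid", "dataset/test",
                             ["Helmet", "Coverall"])
    print(content)
```

Deriving nc from the names list avoids the common mistake of updating one but not the other.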

3. Check your Label format
The label file will contain content similar to the example below. Please check it carefully.

15 0.402734 0.114583 0.099219 0.159722
16 0.417969 0.538889 0.178125 0.744444


4. The format is as follows:
<class_id> <x_center> <y_center> <width> <height>
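The four box values are normalized to the image size, so they can be converted back to pixel coordinates for inspection. A sketch of that conversion (the 640x480 image size in the usage example is just an assumption for illustration):

```python
def parse_label_line(line, img_w, img_h):
    """Parse one YOLO label line into a class id and a pixel-space (x1, y1, x2, y2) box."""
    class_id, xc, yc, w, h = line.split()
    class_id = int(class_id)
    xc, yc, w, h = (float(v) for v in (xc, yc, w, h))
    # Center/size (normalized) -> top-left/bottom-right corners (pixels)
    x1 = (xc - w / 2) * img_w
    y1 = (yc - h / 2) * img_h
    x2 = (xc + w / 2) * img_w
    y2 = (yc + h / 2) * img_h
    return class_id, (x1, y1, x2, y2)

if __name__ == "__main__":
    cid, box = parse_label_line("15 0.402734 0.114583 0.099219 0.159722", 640, 480)
    print(cid, [round(v, 1) for v in box])
```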

5. If you find that the class_id is different from what you defined, or if there are other errors, you can use my tool, YOLO-Class-ID-Replacer, to fix the incorrect class_id.

This tool is now open source on GitHub. Feel free to use it and give me a star ⭐, thank you!
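The idea behind such a fix is straightforward. A minimal sketch (an illustration only, not the author's tool) that rewrites the leading class ID of each label line according to a mapping:

```python
def remap_class_ids(lines, id_map):
    """Rewrite the leading class id of each YOLO label line using id_map."""
    fixed = []
    for line in lines:
        parts = line.split()
        if not parts:
            continue  # skip empty lines
        old_id = int(parts[0])
        # IDs not in the map are left unchanged
        parts[0] = str(id_map.get(old_id, old_id))
        fixed.append(" ".join(parts))
    return fixed
```

For instance, remap_class_ids(lines, {15: 0, 16: 1}) would turn the class IDs 15 and 16 from the example above into 0 and 1, matching the data.yaml definition.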

6. To start training the YOLOv8 model, run the following command:

yolo detect train data=data.yaml model=yolov8n.pt epochs=100 imgsz=640 conf=0 device=0


The training log will: scan your train, val, and test folders; display the GPU and dataset configuration; and run training from epoch 1/100 through 100/100.

How to track the training metrics?
  • Epoch: Current training round. 
  • GPU mem: GPU memory usage.
  • box loss: Bounding box regression loss.
  • cls loss: Classification loss.
  • dfl loss: Distribution focal loss (YOLOv8-specific).
  • Instances: Number of objects processed in the current batch.

How to track the validation metrics?
  • Size: Input image size. 
  • Class: The class being evaluated (all means every class).
  • Images: Number of pictures in validation batch.
  • Box (P): Precision score.
  • R: Recall score.
  • mAP50: Mean Average Precision at IoU 0.50.
  • mAP50-95: Average precision across IoU thresholds from 0.50 to 0.95.
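The IoU threshold behind mAP50 and mAP50-95 is simply the overlap ratio of two boxes. A sketch computing it for boxes in (x1, y1, x2, y2) form:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (may be empty)
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

At mAP50, a prediction counts as a correct detection when iou(prediction, ground_truth) >= 0.5; mAP50-95 averages this over thresholds from 0.50 to 0.95.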

Model Checkpoints
  • best.pt: The checkpoint with the highest performance on the validation set. Use this for final prediction or deployment.
  • last.pt: The checkpoint from the final epoch. Useful for further fine-tuning, not necessarily the best performing.

Run Inference with Custom Model
Run this command: 

yolo detect predict model=best.pt epochs=100 imgsz=640 conf=0 device=0 source=0 save=True show=True

Parameter Descriptions
  • model: Path to model checkpoint. Use pre-trained (yolov8n.pt) or your custom model (best.pt or last.pt)
  • epochs: Epoch count for further training (ignored in prediction mode)
  • imgsz: Image resolution (e.g., 640, 1280)
  • conf: Confidence threshold. conf=0.25 will filter out predictions below 25% confidence.
  • device: Device to use for prediction
    • 0 = first GPU
    • cpu = CPU
    • mps = Apple Silicon GPU
  • source: Input source
    • 0 = webcam
    • images/sample.jpg = single image
    • videos/sample.mp4 = video file
    • Folder path or livestream URL also accepted.
  • save: Whether to save output images/videos
    • True = save output to /runs/
    • False = do not save
  • show: Display results in a pop-up window
    • True = show prediction
    • False = headless mode
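The conf parameter acts as a simple filter over the raw detections. A sketch of the idea (using hypothetical detection tuples for illustration, not Ultralytics' internal code):

```python
def filter_by_confidence(detections, conf=0.25):
    """Keep only (class_name, confidence, box) detections at or above the threshold."""
    return [d for d in detections if d[1] >= conf]

if __name__ == "__main__":
    # Hypothetical raw detections: (class_name, confidence, box)
    detections = [
        ("Helmet", 0.91, (120, 40, 180, 110)),
        ("Coverall", 0.18, (60, 90, 200, 400)),
    ]
    # With conf=0.25, only the Helmet detection survives
    print(filter_by_confidence(detections, conf=0.25))
```

This is why conf=0 shows every raw prediction, while a higher threshold keeps only confident ones.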

My Environment

Platform & Tools
  • System: Windows 11 Pro
  • CPU: Intel Core i9-12900K
  • RAM: 16GB * 2 (32GB)
  • GPU: NVIDIA GeForce RTX 3090
  • Visual Studio Code: Sep 2024 (version 1.94)
  • Environment management: Conda 24.9.0
  • Python: Version 3.10.16
