YOLOv8 Tutorial
Install Anaconda
Download the installer from the official Anaconda website to your computer.
Please choose the appropriate installer based on your operating system.
All versions provide a graphical user interface for installation.
Setting Environment Variables
1. Open System Properties by searching for "Edit the system environment variables" in the Start menu.
2. Click on Environment Variables.
3. Under System Variables, select Path and click Edit.
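4. The exact entries depend on your install location; for a default per-user Anaconda install, the typical Path additions look like this (the paths below are assumptions, so adjust them to your machine):

```shell
C:\Users\%username%\anaconda3
C:\Users\%username%\anaconda3\Scripts
C:\Users\%username%\anaconda3\Library\bin
```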
- Name: YOLOv8
- Location: C:\Users\%username%\envs\YOLOv8
- Package: Python 3.10.16
Activating the YOLOv8 Environment
Advanced Usage
- Open the Windows Command Prompt (on Windows 11 it has been replaced by Windows Terminal; we will refer to it as the Command Prompt here)
- Create the environment with the following command:
Example:
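Based on the environment details listed above (name YOLOv8, Python 3.10), the command would look like this:

```shell
REM Create a conda environment named "YOLOv8" with Python 3.10,
REM matching the environment details listed above
conda create --name YOLOv8 python=3.10

REM Activate it before installing any packages
conda activate YOLOv8
```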
Install NVIDIA Driver
Installing the NVIDIA CUDA Toolkit
🖊️Note
Be sure to select the correct version of your operating system.
This guide uses Windows 11 as an example.
Installing NVIDIA cuDNN
1. Go to the cuDNN Archive and download the version highlighted in the red box.
2. Open the section titled Download cuDNN v8.9.7 (December 5th, 2023), for CUDA 11.x, and select the file as shown in the red box.
6. Extract the contents of the ZIP file and copy the bin, include, and lib folders into the CUDA installation directory above, merging them into the corresponding folders there.
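As a sketch, assuming CUDA 11.8 installed at its default location (adjust the version folder to match your installation), the merge can be done from the Command Prompt in the folder where the cuDNN ZIP was extracted:

```shell
REM Copy each extracted cuDNN folder into the matching CUDA folder.
REM The target path is the default CUDA install location; adjust v11.8 to your version.
xcopy /E /I /Y bin "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin"
xcopy /E /I /Y include "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\include"
xcopy /E /I /Y lib "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\lib"
```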
Verifying the NVIDIA Driver Installation
Open the Windows Command Prompt and type the following command to check if the GPU is detected by the driver:
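The command is NVIDIA's standard driver utility:

```shell
nvidia-smi
```

If the driver is installed correctly, this prints a table listing your GPU model, driver version, and the highest CUDA version the driver supports.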
Then, enter the following command to confirm the CUDA compiler (nvcc) is correctly installed:
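```shell
nvcc --version
```

This prints the CUDA compiler release, which should match the toolkit version you installed.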
Installing YOLOv8
❤️ Reminder
Please download the following files from the provided links before proceeding.
- Download 7-Zip.
- Clone or download YOLOv8 from its GitHub repository.
- Download the yolov8n.pt model weights.
💡Tip
In this tutorial, we use D:\ as the example directory path.
5. Move the yolov8n.pt model file into the YOLOv8 folder.
Installing PyTorch
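The exact command depends on your CUDA version; as a sketch, assuming a CUDA 11.x setup like the one above, a matching GPU build can be installed with the selector command from the official PyTorch "Get Started" page, e.g.:

```shell
REM Install a CUDA 11.8 build of PyTorch (check pytorch.org "Get Started"
REM for the command matching your exact CUDA version)
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
```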
Using YOLOv8
Please finish the steps above, then follow this section.
🖊️Note
If your computer has a GPU but YOLOv8 runs only on the CPU, the wrong build of PyTorch may have been installed. In that case, first uninstall PyTorch and its dependencies from the environment and install the correct version. If that doesn't work, delete the environment and recreate it.
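A quick way to check which device PyTorch will use (run this inside the activated YOLOv8 environment):

```shell
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```

If this prints False on a GPU machine, or the version string ends in +cpu, the CPU-only build is installed.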
Training YOLOv8 with a Custom Dataset
train: dataset/train
val: dataset/val
test: dataset/test
nc: 2
names: ['Helmet', 'Coverall']
- train: Path to the training set.
- val: Path to the validation set.
- test: Path to the test set.
- nc: Number of classes.
- names: Class names in order: 0 corresponds to Helmet, 1 to Coverall.
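The relationship between nc and names can be sanity-checked programmatically. This is a minimal sketch using an inline dict standing in for the parsed data.yaml above (in practice you would load the file with a YAML parser):

```python
# Inline stand-in for the parsed contents of data.yaml above
config = {
    "train": "dataset/train",
    "val": "dataset/val",
    "test": "dataset/test",
    "nc": 2,
    "names": ["Helmet", "Coverall"],
}

# nc must equal the number of entries in names,
# and each class_id maps to names[class_id]
assert config["nc"] == len(config["names"])
for class_id, name in enumerate(config["names"]):
    print(class_id, "->", name)
```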
3. Check your Label format
The label file will contain content similar to the example below. Please check it carefully.
15 0.402734 0.114583 0.099219 0.159722
4. The format is as follows:
<class_id> <x_center> <y_center> <width> <height>
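To make the format concrete, here is a minimal sketch that parses one label line and converts the normalized values back to pixel coordinates (the function name and the 640x640 image size are assumptions for illustration):

```python
def parse_yolo_label(line, img_w, img_h):
    """Parse one YOLO label line into pixel-space values.

    All four box values in the label file are normalized to [0, 1]
    relative to the image width and height.
    """
    parts = line.split()
    class_id = int(parts[0])
    xc, yc, w, h = (float(v) for v in parts[1:])
    return {
        "class_id": class_id,
        "x_center": xc * img_w,
        "y_center": yc * img_h,
        "width": w * img_w,
        "height": h * img_h,
    }

# The example line above, assuming a 640x640 image
box = parse_yolo_label("15 0.402734 0.114583 0.099219 0.159722", 640, 640)
print(box)
```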
5. If you find that the class_id is different from what you defined, or if there are other errors, you can use my tool, YOLO-Class-ID-Replacer, to fix the incorrect class_id.
This tool is now open source on GitHub. Feel free to use it and give me a star ⭐, thank you!
6. To start training the YOLOv8 model, run the following command:
yolo detect train data=data.yaml model=yolov8n.pt epochs=100 imgsz=640 conf=0 device=0
The training log will scan your train, val, and test folders, display the GPU and dataset configuration, and then run training from epoch 1/100 through 100/100.
How to track the training metrics?
- Epoch: Current training round.
- GPU mem: GPU memory usage.
- box loss: Bounding box regression loss.
- cls loss: Classification loss.
- dfl loss: Distribution focal loss (YOLOv8-specific).
- Instances: Number of objects processed in the current batch.
How to track the validation metrics?
- Size: Input image size.
- Class: The class being evaluated ("all", or an individual class name).
- Images: Number of images in the validation set.
- Box (P): Precision score.
- R: Recall score.
- mAP50: Mean Average Precision at IoU 0.50.
- mAP50-95: Mean Average Precision averaged across IoU thresholds from 0.50 to 0.95.
Model Checkpoints
- best.pt: The checkpoint with the highest performance on the validation set. Use this for final prediction or deployment.
- last.pt: The checkpoint from the final epoch. Useful for further fine-tuning, not necessarily the best performing.
Run Inference with Custom Model
Run this command:
yolo detect predict model=best.pt epochs=100 imgsz=640 conf=0 device=0 source=0 save=True show=True
Command Parameters
- model: Path to model checkpoint. Use pre-trained (yolov8n.pt) or your custom model (best.pt or last.pt)
- epochs: Epoch count for further training (ignored in prediction mode)
- imgsz: Image resolution (e.g., 640, 1280)
- conf: Confidence threshold. conf=0.25 will filter out predictions below 25% confidence.
- device: Device to use for prediction
- 0 = first GPU, cpu = CPU, mps = Apple Silicon GPU
- source: Input source
- 0 = webcam
- images/sample.jpg = single image
- videos/sample.mp4 = video file
- Folder path or livestream URL also accepted.
- save: Whether to save output images/videos
- True = save output to /runs/
- False = do not save
- show: Display results in a pop-up window
- True = show prediction
- False = headless mode
My Environment
- System: Windows 11 Pro
- CPU: Intel Core i9-12900K
- RAM: 16GB * 2 (32GB)
- GPU: NVIDIA GeForce RTX 3090
- Visual Studio Code: Sep 2024 (version 1.94)
- Environment management: Conda 24.9.0
- Python: Version 3.10.16