Setup

  1. pip install -r requirements.txt (mediapipe, cvzone, ultralytics)
  2. Install PyTorch with CUDA support for better FPS during inference (see the example command below).
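
  For example, assuming a CUDA 11.8 setup, PyTorch with GPU support can be installed from the official wheel index (adjust the cu118 tag to match your CUDA version):

    pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118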

Steps

1. Collect the dataset for training by keeping only faces that pass thresholds on blurriness and detection score computed with Mediapipe (see the sketch after this list):

  1. Find faces in real time using the lightweight model provided by the Mediapipe library.

  2. Installing the cvzone library gives us FaceDetectionModule.py, which is built on top of Mediapipe. It contains a function named findFaces that returns img, bboxs:

    import cv2
    from cvzone.FaceDetectionModule import FaceDetector

    cap = cv2.VideoCapture(0)
    detector = FaceDetector()  # create the detector once, outside the loop
    while True:
        success, img = cap.read()
        imgOut = img.copy()  # keep an unannotated copy for saving to the dataset
        img, bboxs = detector.findFaces(img, draw=False)
    
  3. Collect the Fake class dataset, labeled 0.

  4. Collect the Real class dataset, labeled 1.

  5. Merge all of it into the All folder and run the training on it using the YOLOv8 nano version.

  6. The All folder contains 20 fake images + 20 real images (80 files = 40 images + 40 corresponding text files).
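
  A minimal sketch of the filtering step from point 1, assuming a minimum detection score of 0.8 and a Laplacian-variance blur threshold of 35 (both values and the saving logic are placeholders, not necessarily what this repo uses):

    import cv2
    from cvzone.FaceDetectionModule import FaceDetector

    confidence = 0.8      # assumed minimum detection score to keep a face
    blurThreshold = 35    # assumed minimum Laplacian variance (higher = sharper)

    cap = cv2.VideoCapture(0)
    detector = FaceDetector()

    while True:
        success, img = cap.read()
        imgOut = img.copy()                               # unannotated copy to save
        img, bboxs = detector.findFaces(img, draw=False)
        for bbox in bboxs:
            x, y, w, h = bbox["bbox"]
            score = bbox["score"][0]
            if score > confidence:
                # blurriness = variance of the Laplacian of the face crop
                face = imgOut[y:y + h, x:x + w]
                blurValue = int(cv2.Laplacian(face, cv2.CV_64F).var())
                if blurValue > blurThreshold:
                    # sharp, confident detection -> keep this frame for the dataset
                    pass  # save imgOut and its YOLO label here (see step 2)
        cv2.imshow("Image", img)
        cv2.waitKey(1)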

2. Save the labels in .txt files to be used for training (YOLO format):

listInfo.append(f"{classID} {xcn} {ycn} {wn} {hn}\n")
# -> Example of a resulting label line:
# 0 0.485156 0.482292 0.417187 0.789583
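
A sketch of how the normalized values can be computed from the pixel bounding box returned by findFaces. The helper name yoloLabel, the rounding to six decimals, and the label file name are illustrative, not taken from this repo:

    def yoloLabel(classID, bbox, imgShape):
        """Build a YOLO label line: class x_center y_center width height, normalized to [0, 1]."""
        ih, iw = imgShape[:2]              # image height and width in pixels
        x, y, w, h = bbox                  # pixel bbox as returned by findFaces
        xcn = round((x + w / 2) / iw, 6)   # normalized x of the box center
        ycn = round((y + h / 2) / ih, 6)   # normalized y of the box center
        wn = round(w / iw, 6)              # normalized width
        hn = round(h / ih, 6)              # normalized height
        return f"{classID} {xcn} {ycn} {wn} {hn}\n"

    # usage inside the collection loop:
    # listInfo.append(yoloLabel(1, bbox["bbox"], img.shape))
    # with open("Image_0001.txt", "w") as f:      # one .txt per image, same base name
    #     f.writelines(listInfo)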

3. Prepare the train, val, and test folders from the All folder for training: use SplitData.py (a rough sketch of the idea is shown below).
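
SplitData.py in this repo does the actual split; the sketch below only illustrates the idea, assuming a 70/20/10 ratio, .jpg images, and that every image in All has a matching .txt label (the paths and ratios are placeholders):

    import os
    import random
    import shutil

    inputFolder = "All"        # merged dataset (placeholder path)
    outputFolder = "Data"      # output root (placeholder path)
    splitRatio = {"train": 0.7, "val": 0.2, "test": 0.1}   # assumed ratios

    # create Data/<split>/images and Data/<split>/labels
    for split in splitRatio:
        os.makedirs(os.path.join(outputFolder, split, "images"), exist_ok=True)
        os.makedirs(os.path.join(outputFolder, split, "labels"), exist_ok=True)

    # unique base names; each image is assumed to have a matching .txt label
    names = [os.path.splitext(f)[0] for f in os.listdir(inputFolder) if f.endswith(".jpg")]
    random.shuffle(names)

    nTrain = int(len(names) * splitRatio["train"])
    nVal = int(len(names) * splitRatio["val"])
    splits = {"train": names[:nTrain],
              "val": names[nTrain:nTrain + nVal],
              "test": names[nTrain + nVal:]}

    # copy each image and its label into the corresponding split folder
    for split, baseNames in splits.items():
        for name in baseNames:
            shutil.copy(os.path.join(inputFolder, f"{name}.jpg"),
                        os.path.join(outputFolder, split, "images"))
            shutil.copy(os.path.join(inputFolder, f"{name}.txt"),
                        os.path.join(outputFolder, split, "labels"))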

4. Prepare the YAML file for the training (YOLO requires it):

  1. Put the test, val, and train folders in the same folder and zip it to upload to Google Colab.

  2. Unzip it:

    !unzip data.zip -d /content/Data

  3. Train the model

    !yolo task=detect mode=train  data=/content/Data/data.yaml model=yolov8l.pt epochs=100 imgsz=640 patience=25
    

    With data.yaml:

    path: ./Data
    train: train/images
    val: val/images
    test: test/images
    
    nc: 2
    names: ["fake", "real"]
    

    ❗ For the training procedure, check my Kaggle notebook.

5. Select a pretrained model for inference instead of the weights from my training run: use main.py (a rough inference sketch is shown below).
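
main.py in this repo handles the actual inference; the following is only a rough sketch of such a loop, assuming a weights file named best.pt, a 0.6 confidence threshold, and the fake/real class order from data.yaml (the path and threshold are placeholders):

    import cv2
    from ultralytics import YOLO

    confidence = 0.6                 # assumed confidence threshold
    classNames = ["fake", "real"]    # must match the order in data.yaml

    model = YOLO("best.pt")          # placeholder path to the chosen weights
    cap = cv2.VideoCapture(0)

    while True:
        success, img = cap.read()
        if not success:
            break
        results = model(img, stream=True, verbose=False)
        for r in results:
            for box in r.boxes:
                conf = float(box.conf[0])
                if conf < confidence:
                    continue
                x1, y1, x2, y2 = map(int, box.xyxy[0])
                cls = int(box.cls[0])
                # green box for real faces, red box for fake ones
                color = (0, 255, 0) if classNames[cls] == "real" else (0, 0, 255)
                cv2.rectangle(img, (x1, y1), (x2, y2), color, 2)
                cv2.putText(img, f"{classNames[cls]} {conf:.2f}", (x1, y1 - 10),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.7, color, 2)
        cv2.imshow("Image", img)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break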