
Face Recognition using OpenCV

Face recognition is a machine learning technique in which a human face in a digital image, or in a frame from a video, is matched against a database of known faces to predict the person's identity. Face recognition involves two steps: detecting the faces and then predicting whose face each one is.

LBPH algorithm

Local Binary Patterns (LBP) was first proposed in 1994. Later, in 2006, it was found that combining LBP with HOG descriptors gives much better results on some datasets. LBP is widely used in facial recognition because of its computational simplicity and discriminative power, and it can recognize both frontal and side faces.

If we have an image of 30x30 pixels, we divide it into cells of equal dimensions. In each cell of 9 pixels (a 3x3 neighbourhood), thresholding is performed based on the intensity of the middle pixel.

Pixels with an intensity less than that of the middle pixel are set to zero, and pixels with an intensity greater than that of the middle pixel are set to one. This binary operation is performed across the whole image, and the results are then represented as histograms.
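To make the thresholding concrete, here is a minimal NumPy sketch; the 3x3 cell values are made up purely for illustration. It binarizes a cell against its middle pixel and packs the resulting bits into a single LBP value:

import numpy as np

# a made-up 3x3 cell of grayscale intensities
cell = np.array([[90, 32, 58],
                 [10, 50, 70],
                 [45, 60, 12]], dtype=np.uint8)

center = cell[1, 1]  # the middle pixel, intensity 50

# the 8 neighbours, read clockwise from the top-left corner
neighbours = [cell[0, 0], cell[0, 1], cell[0, 2], cell[1, 2],
              cell[2, 2], cell[2, 1], cell[2, 0], cell[1, 0]]

# 1 where the neighbour is >= the middle pixel, 0 otherwise
bits = [1 if p >= center else 0 for p in neighbours]

# the 8 bits form one binary number: the LBP value for this cell
lbp_value = sum(bit << i for i, bit in enumerate(bits))
print(bits, lbp_value)  # [1, 0, 1, 1, 0, 1, 0, 0] 45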


Loading Face Dataset

In this post, we use the LBPH algorithm to recognize the faces in the Yale dataset, which can be downloaded as a zip file from the following link: https://drive.google.com/file/d/19rJrpVbZfUqgQRZA_UXBq_C_dO1W_fy8/view?usp=sharing.

As we are using a Google Colab notebook, we need to store the dataset in Google Drive. Once the dataset is uploaded to Google Drive, we can mount the drive in our Colab notebook.

from google.colab import drive
drive.mount('/content/drive')

When you run the above code, it redirects you to the Google account you would like to connect and returns a security code that needs to be copied and pasted into the notebook. After successfully mounting Google Drive, it returns a message.

Mounted at /content/drive

Now we load the dataset by unzipping the file; we can import the zipfile library to perform this task. We open the zip file in read mode and extract it into the current working directory of the Colab session (/content) using zip_object.extractall('./').

Once all the files have been extracted, the zip_object needs to be closed to save all the changes made.

import zipfile

path = '/content/drive/MyDrive/Colab Notebooks/database-2021.zip'

#opening file in read only mode
zip_object = zipfile.ZipFile(file = path, mode = 'r')

#extract the zip file into the current working directory
zip_object.extractall('./')

#saving all the changes made
zip_object.close()

Pre-processing the images

Pre-processing the images refers to performing all transformations on the raw data before feeding it to the machine learning algorithm. Image pre-processing involves resizing, changing the format, orientation, or color scale, cleaning file names, and so on.

If we skip this step, there is a good chance the model will produce poor results.

In this step, we clean the names of the images and assign an "id" to the images belonging to a particular person, so that they are easy to classify and handle. The name of each file comes in as a full path, and we strip the unnecessary parts of the path using os.path.split(path)[1].split('.')[0].

Note:- While training the classifier we need to pass labels of the same type. Since the classifier expects integer labels, we assign the images of the same person the same integer id, such as 1, 2, 3 and so on.
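As a quick illustration of the filename cleaning, the expression below strips the directory and the extension from a path (the path shown is a hypothetical example from the dataset):

import os

path = '/content/drive/MyDrive/Colab Notebooks/database/adam1.png'

# keep only the file name, then drop the extension
print(os.path.split(path)[1].split('.')[0])  # adam1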

import os
import numpy as np
from PIL import Image

database_path = '/content/drive/MyDrive/Colab Notebooks/database'

def get_data():
    paths = [os.path.join(database_path, f) for f in os.listdir(database_path)]
    faces = []
    ids = []
    count = 3
    for path in paths:
        #storing the pixels of the grayscale image in an array
        image = Image.open(path).convert('L')
        image_np = np.array(image, 'uint8')

        #file names with "adam" are given id :- '1'
        if "adam" in path:
            id = 1

        #file names with "mathew" are given id :- '2'
        elif "mathew" in path:
            id = 2

        #any other face is given 'count' as its id
        else:
            id = count
            count += 1

        ids += [id]

        #append the array of pixels of this face
        faces += [image_np]

    return np.array(ids), faces

ids, faces = get_data()
print(ids, faces)

Output:-

[3 4 5 6 7 8 2 2 2 2 2 2 1 1 1 1 1 1] [array([[106, 113, 112, ...,  57,  53,  48],
       [115, 112, 107, ...,  50,  49,  50],
       [117, 107, 102, ...,  48,  46,  49],
       ...,
       [122, 119, 118, ...,  16,   5,   3],
       [121, 117, 118, ...,  19,   6,   4],
       [121, 117, 118, ...,  19,   6,   4]], dtype=uint8), array([[248, 248, 248, ...,  50,  55,  51],
       [248, 248, 248, ...,  48,  51,  47],
       [248, 248, 248, ...,  52,  52,  50],
       ..., # output cropped because of space constraints in the article

Training the LBPH classifier

The Local Binary Patterns Histograms classifier, as the name itself suggests, recognizes faces by comparing the histograms of the trained dataset with the histograms of the test image.

By default, the classifier divides an image into 8 rows and 8 columns, and the pixels in each cell are summarized as a histogram. So in total, each image in the training dataset has 64 histograms to be compared with the 64 histograms of the test image.

lbph_classifier = cv2.face.LBPHFaceRecognizer_create(radius = 1, neighbors=8, grid_x = 8, grid_y = 8)
lbph_classifier.train(faces, ids)

All these histograms are stored as a ‘yml’ file.

lbph_classifier.write('lbph_classifier.yml')

Code:-

import cv2

# radius: 1
# neighbors: 8
# grid_x: 8, as 8 columns
# grid_y: 8, as 8 rows

lbph_classifier = cv2.face.LBPHFaceRecognizer_create(radius = 1, neighbors = 8, grid_x = 8, grid_y = 8)
lbph_classifier.train(faces, ids)
lbph_classifier.write('lbph_classifier.yml')

Recognizing the Faces

We create the classifier using cv2.face.LBPHFaceRecognizer_create(). Using this object, we can load the trained classifier that we previously stored as a yml file.

lbph_face_classifier = cv2.face.LBPHFaceRecognizer_create()

#loading the classifier into recognizer object
lbph_face_classifier.read('/content/lbph_classifier.yml')

To recognize the faces, we can use the test video https://drive.google.com/file/d/1OHIral0AparSTcu3nP8FWH6Vh1Sg3NhI/view?usp=sharing, which contains the faces of the people we used in the training data.

We pass the test image as a NumPy array into lbph_face_classifier.predict() to predict the face. Remember that the classifier expects a single-channel image, so we need to convert the image to grayscale first.

image_np = np.array(image,'uint8')
prediction = lbph_face_classifier.predict(image_np)
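As a fuller sketch of the prediction step, the snippet below reads a single frame from the test video, converts it to grayscale, and runs the classifier. The video path here is an assumption (wherever you saved the downloaded test video in your drive). Note that predict() returns a tuple of the predicted id and a confidence score; the confidence is a distance, so lower values mean a closer match.

import cv2

lbph_face_classifier = cv2.face.LBPHFaceRecognizer_create()
lbph_face_classifier.read('/content/lbph_classifier.yml')

# hypothetical path to the downloaded test video in Google Drive
capture = cv2.VideoCapture('/content/drive/MyDrive/Colab Notebooks/test_video.mp4')
connected, frame = capture.read()

if connected:
    # the classifier only accepts single-channel images
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # predict() returns (id, confidence); the confidence is a distance,
    # so a lower value means a closer match
    label, confidence = lbph_face_classifier.predict(gray)
    print(label, confidence)

capture.release()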

Parameters in LBPH

The parameters of LBPH tune the predictive capacity of the classifier. The different parameters that we can pass are:-

  • Radius:- The radius around the central pixel from which the neighbouring pixels are sampled when building each local binary pattern. With a radius of 1, the central pixel is compared against its immediate neighbours.
  • Neighbours:- The number of sample points taken around the central pixel when binarizing each neighbourhood. The default of 8 compares the central pixel with 8 surrounding pixels.
  • grid_x:- The number of cells the image is divided into along the X-axis, i.e. the number of columns of the grid.
  • grid_y:- The number of cells the image is divided into along the Y-axis, i.e. the number of rows of the grid.
  • Thresholding:- The threshold determines how confident the classifier must be to accept a match: if the distance between the test histogram and the closest training histogram is above the threshold, predict() returns -1, so a lower threshold makes the classifier stricter (see the sketch after this list).
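Here is a minimal sketch of how the threshold behaves; the threshold value of 50 is purely illustrative, and the test image path simply reuses a file from the dataset. When the distance of the best match exceeds the threshold, predict() returns -1 as the label, which is a convenient way to flag unknown faces:

import cv2

strict_classifier = cv2.face.LBPHFaceRecognizer_create()
strict_classifier.read('/content/lbph_classifier.yml')

# reject any match whose distance is above 50 (illustrative value only)
strict_classifier.setThreshold(50)

# predict() needs a single-channel (grayscale) image
test = cv2.imread('/content/drive/MyDrive/Colab Notebooks/database/adam6.PNG', cv2.IMREAD_GRAYSCALE)
label, confidence = strict_classifier.predict(test)

if label == -1:
    print('unknown face: the best match was above the threshold')
else:
    print('recognized id:', label, 'distance:', confidence)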

Detecting Facial points

Detecting facial points involves a pre-trained model recognizing the 68 facial landmark points (x, y) on a human face. These points localize the regions around the eyes, nose, mouth, and other features that lie inside the bounding box.

For detecting the facial points, we can use the shape_predictor_68_face_landmarks.dat file, which can be downloaded from the GitHub link. Once the file is downloaded and saved into your Google Drive, we can load it into our Colab notebook.

import dlib
import cv2

points_detector = dlib.shape_predictor('/content/drive/MyDrive/weights/shape_predictor_68_face_landmarks.dat')

After loading the 68-point detector, we can detect the face and draw a bounding box around it using dlib.get_frontal_face_detector().

face_detector = dlib.get_frontal_face_detector()
detections = face_detector(image, 1)

While drawing bounding boxes around the faces, we can detect all the facial points inside them. The whole source code goes as follows:-

import dlib
import cv2
from google.colab.patches import cv2_imshow

face_detector = dlib.get_frontal_face_detector()
points_detector = dlib.shape_predictor('/content/drive/MyDrive/weights/shape_predictor_68_face_landmarks.dat')

image = cv2.imread('/content/drive/MyDrive/Colab Notebooks/database/adam6.PNG')
face_detection = face_detector(image, 1)

for face in face_detection:
  #detecting the 68 landmark points inside the bounding box
  points = points_detector(image, face)

  #drawing small circles around all the points detected
  for point in points.parts():
    cv2.circle(image, (point.x, point.y), 2, (0,255,0), 1)

  #drawing the bounding box around the detected face
  l, t, r, b = face.left(), face.top(), face.right(), face.bottom()
  cv2.rectangle(image, (l, t), (r, b), (0,255,255), 2)

cv2_imshow(image)

Output:-

(Image: the test face with the 68 landmark points drawn as small green circles and a yellow bounding box around the face.)