Swap Faces using OpenCV and Dlib

Face swapping involves extracting the face from the first image and pasting it onto the face in the second image. If we perform this operation directly, the swapped image doesn't look real. To make it look real we need to make a few adjustments, such as matching the face color, rotation, and size.

For a familiar example of face swapping done well, think of Snapchat filters: the swapped face matches the expressions of the original face, so the result closely resembles the source.

Detect face points using Dlib and OpenCV

To analyze the faces in a digital image, the computer uses 68 points as a reference. For this purpose, Dlib provides a pre-trained model that can detect all 68 landmark points on a face.
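The 68 landmarks follow a fixed annotation scheme, so a given index always refers to the same facial region. As a quick reference, here is the commonly used iBUG 300-W grouping of those indices (this map is provided for orientation only; it is not part of the article's code):

```python
# Commonly used grouping of the 68 dlib landmarks by facial region
# (iBUG 300-W annotation scheme); each value is a range of point indices.
FACIAL_LANDMARKS_68 = {
    "jaw":           range(0, 17),
    "right_eyebrow": range(17, 22),
    "left_eyebrow":  range(22, 27),
    "nose":          range(27, 36),
    "right_eye":     range(36, 42),
    "left_eye":      range(42, 48),
    "mouth":         range(48, 68),
}

# The regions together account for all 68 points
total = sum(len(r) for r in FACIAL_LANDMARKS_68.values())
```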

In this article we will cover the following:

  • Extracting the face from an image
  • Performing triangulation to match the facial expressions
  • Resizing the face
  • Face swapping
  • Matching the color scheme while swapping

All these operations manipulate the face using the 68 detected points. To understand how dlib detects points on a face, you can check out our previous tutorial on face recognition.

Extracting the Face from the Image

To extract the face from the image, we first plot all 68 points on the face and then crop the face region along its convex hull. For this, OpenCV provides a built-in function, cv2.fillConvexPoly().

Just as a reminder, we are developing the code on Google Colab, and the dlib file for shape prediction can be downloaded from the link shape_predictor_68_face_landmarks.dat

import dlib
import cv2
import numpy as np
from google.colab.patches import cv2_imshow
face_detector = dlib.get_frontal_face_detector()
points_detector = dlib.shape_predictor('/content/drive/MyDrive/weights/shape_predictor_68_face_landmarks.dat')
image = cv2.imread('/content/drive/MyDrive/Colab Notebooks/database/adam6.PNG')
face_detection = face_detector(image, 1)
for face in face_detection:
  points = points_detector(image, face)

Since the points we have detected are in dlib's encoded format, we need to convert them into an array of (x, y) coordinates.

points_list = []
for n in range(0, 68):
    x = points.part(n).x
    y = points.part(n).y
    points_list.append((x, y))
points = np.array(points_list, np.int32)

Once we detect all the face points, the next thing we need is the boundary of those points, known as the convex hull. The convex hull of a shape or a group of points is a tight-fitting convex boundary around the points or the shape. It has several applications in mathematics, statistics, economics, and geometric modeling.
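For intuition about what cv2.convexHull computes, here is a stdlib-only sketch using Andrew's monotone-chain algorithm on a toy point set (not the real landmarks; OpenCV uses its own internal implementation):

```python
# Toy convex-hull computation (Andrew's monotone chain): returns the
# subset of points that form the tight convex boundary, in order.
def cross(o, a, b):
    # z-component of the cross product of vectors OA and OB
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(pts):
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:                      # build lower boundary
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper boundary
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # endpoints are shared, drop duplicates

hull = convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)])
```

The interior point (1, 1) is dropped, leaving only the four corners that form the tight convex boundary, which is exactly the role the hull plays around the face landmarks.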

convexhull = cv2.convexHull(points)

To draw a polygon with the exact shape of the convex hull, we first create a blank image with the same dimensions as the original.

mask = np.zeros_like(img_gray)

Using the points on the boundary of the face, we can extract the pixels inside the face with cv2.fillConvexPoly(), which fills the enclosed polygon when its vertex coordinates are provided. These boundary coordinates come from the convex hull, and we draw the filled polygon on the mask we created in the previous step.

cv2.fillConvexPoly(mask, convexhull, 255)

Now the final step for extracting the face is to apply cv2.bitwise_and() to the image, using the mask that contains the filled polygon with the same shape as the face.

face_image_1 = cv2.bitwise_and(img, img, mask=mask)

Putting all the steps together:

import cv2
import numpy as np
import dlib
from google.colab.patches import cv2_imshow
img = cv2.imread("/content/drive/MyDrive/charles.jpg")
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
mask = np.zeros_like(img_gray)
face_detector = dlib.get_frontal_face_detector()
points_detector = dlib.shape_predictor("/content/drive/MyDrive/weights/shape_predictor_68_face_landmarks.dat")
faces = face_detector(img_gray)
for face in faces:
    points = points_detector(img_gray, face)
    points_list = []
    for n in range(0, 68):
        x = points.part(n).x
        y = points.part(n).y
        points_list.append((x, y))

    points = np.array(points_list, np.int32)
    convexhull = cv2.convexHull(points)
    #cv2.polylines(img, [convexhull], True, (255, 0, 0), 3)
    cv2.fillConvexPoly(mask, convexhull, 255)
    face_image_1 = cv2.bitwise_and(img, img, mask=mask)


Drawing Triangles on Faces

To draw triangles on a face we need to join the 68 landmark points. OpenCV provides the cv2.Subdiv2D class for this; its getTriangleList() method returns the triangles of a Delaunay subdivision built from the points we insert into it.

We don't insert arbitrary points; the subdivision is restricted to the points that lie inside the bounding rectangle of the face. To get that rectangle, we compute the bounding box of the convex hull created in the previous step.

Once the bounding box is created, we insert the landmark points into the subdivision and draw the resulting triangles.

rect = cv2.boundingRect(convexhull)
subdiv = cv2.Subdiv2D(rect)
subdiv.insert(points_list)
triangles = subdiv.getTriangleList()
triangles = np.array(triangles, dtype=np.int32)
for t in triangles:
  pt1 = (t[0], t[1])
  pt2 = (t[2], t[3])
  pt3 = (t[4], t[5])
  cv2.line(img, pt1, pt2, (0, 0, 255), 1)
  cv2.line(img, pt2, pt3, (0, 0, 255), 1)
  cv2.line(img, pt1, pt3, (0, 0, 255), 1)
[Image 195: Delaunay triangles drawn on the face]

Next, we create a list to store the triples of point indices used to draw the triangles on the face. Note that these are not the coordinates of the points; they are the indices of the 68 landmarks that are joined to form each triangle.

The reason for storing indices rather than coordinates is that the same 68-point indices can be reused on the second image to draw the corresponding triangles on that face.

rect = cv2.boundingRect(convexhull)
subdiv = cv2.Subdiv2D(rect)
subdiv.insert(points_list)
triangles = subdiv.getTriangleList()
triangles = np.array(triangles, dtype=np.int32)

triangles_id = []
def index_nparray(nparray):
    index = None
    for num in nparray[0]:
        index = num
    return index

for t in triangles:
    pt1 = (t[0], t[1])
    pt2 = (t[2], t[3])
    pt3 = (t[4], t[5])

    id_pt1 = np.where((points == pt1).all(axis=1))
    id_pt1 = index_nparray(id_pt1)
    id_pt2 = np.where((points == pt2).all(axis=1))
    id_pt2 = index_nparray(id_pt2)
    id_pt3 = np.where((points == pt3).all(axis=1))
    id_pt3 = index_nparray(id_pt3)

    if id_pt1 is not None and id_pt2 is not None and id_pt3 is not None:
        triangle = [id_pt1, id_pt2, id_pt3]
        triangles_id.append(triangle)
#print the list of index triples used for drawing the triangles

>>> [[39, 20, 21], [20, 39, 38], [1, 0, 31], ...]
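To see why storing indices is useful, here is a small numpy sketch (with hypothetical triples and stand-in landmark coordinates, not real dlib output) showing how the same triples pick out the corresponding triangle vertices on a second set of landmarks:

```python
import numpy as np

# Hypothetical index triples, as produced for the first face
triangles_id = [[39, 20, 21], [20, 39, 38], [1, 0, 31]]

# Stand-in 68-point landmark array for a second face
points2 = np.array([(10 * n, 5 * n) for n in range(68)], np.int32)

# The same indices select the matching triangle vertices on the second face
second_face_triangles = [
    [tuple(points2[i]) for i in tri] for tri in triangles_id
]
```

Each inner list holds the three vertex coordinates of one triangle on the second face, ready to be warped to in the next step.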

Detecting the Face in the Second Image

In the previous step we obtained the index triples that make up each triangle. Using the same indices we can draw the triangles on the second face, but first we need to detect its landmarks and compute the convex hull that bounds the face.

img2 = cv2.imread("/content/drive/MyDrive/pedram.jpg")

img2_gray = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)

faces2 = face_detector(img2_gray)
for face in faces2:
    points_predict2 = points_detector(img2_gray, face)
    points_list2 = []
    for n in range(0, 68):
        x = points_predict2.part(n).x
        y = points_predict2.part(n).y
        points_list2.append((x, y))
    points2 = np.array(points_list2, np.int32)
    convexhull2 = cv2.convexHull(points2)

[Image 196: convex hull of the second face]

Delaunay Triangulation

Delaunay triangulation divides a set of points into triangles that together cover the region of interest. In our case that region is the face, and the triangles partition it into small patches that can be warped independently.

Since the landmark indices are the same in both images, we can extract each triangle from the first image, warp it to match the corresponding triangle in the second face, and apply the same technique to all triangles to construct a face that fits the face being replaced.

For this process, we loop over all the triangle index triples. For each triple we draw a bounding box around the triangle in each image; this rectangle contains the pixels that lie under the triangle's area.

Once bounding boxes are drawn around the triangles of both faces, we need to crop each triangle. Since a triangle does not fill its bounding rectangle exactly, the extra area outside the triangle but inside the rectangle has to be removed, so we crop the triangle out of the rectangle.

We can crop a triangle by performing a bitwise_and operation between the image and a mask. The mask has to have the same dimensions as the bounding box, and it can be created using np.zeros().

Once the mask is created, a triangle with the same shape and coordinates is drawn on it, filling the interior of the polygon with white using cv2.fillConvexPoly().

Now the mask is ready and can be used to crop the triangle. The same process applies to the corresponding triangle in the second image, so at the end of each iteration we can apply an affine transformation to the first triangle to make its shape match the second triangle.
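The affine transformation here is a 2x3 matrix that maps the three vertices of the first triangle exactly onto the three vertices of the second. The following numpy-only sketch (toy coordinates, hypothetical values) computes the same matrix that cv2.getAffineTransform would return for these points:

```python
import numpy as np

# Toy triangles: dst is src scaled by 2 and shifted by (2, 1)
src = np.float32([[0, 0], [10, 0], [0, 10]])
dst = np.float32([[2, 1], [22, 1], [2, 21]])

# Solve A @ X = dst, where each row of A is a vertex in homogeneous
# form [x, y, 1]; M = X.T is then the 2x3 affine matrix.
A = np.hstack([src, np.ones((3, 1), np.float32)])
M = np.linalg.solve(A, dst).T

# Applying M to a source vertex (homogeneous form) lands on the dst vertex
mapped = M @ np.array([10, 0, 1], np.float32)
```

cv2.warpAffine then applies this matrix to every pixel of the cropped triangle, resizing and rotating it to fit the destination triangle.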

In the same loop we also use a blank replica of the second image (all pixel intensities zero, same dimensions). The triangle produced by the affine transformation is called a warped triangle, and it is added onto this blank image.

Keep in mind that each warped triangle has to be pasted at the exact coordinates of the corresponding triangle in the second image. We get those coordinates from the bounding rectangle of the second image's triangle in the current iteration.

As we loop over all triangle pairs, we gradually construct a face that matches the geometry of the second image: the same cheek size, the same nose length, and so on.

img2_new_face = np.zeros_like(img2, np.uint8)
for triangle_index in triangles_id:

    tr1_pt1 = points_list[triangle_index[0]]
    tr1_pt2 = points_list[triangle_index[1]]
    tr1_pt3 = points_list[triangle_index[2]]
    triangle1 = np.array([tr1_pt1, tr1_pt2, tr1_pt3], np.int32)
    rect1 = cv2.boundingRect(triangle1)
    (x1, y1, w1, h1) = rect1
    cropped_triangle = img[y1: y1 + h1, x1: x1 + w1]
    cropped_tr1_mask = np.zeros((h1, w1), np.uint8)
    points = np.array([[tr1_pt1[0] - x1, tr1_pt1[1] - y1],
                      [tr1_pt2[0] - x1, tr1_pt2[1] - y1],
                      [tr1_pt3[0] - x1, tr1_pt3[1] - y1]], np.int32)
    cv2.fillConvexPoly(cropped_tr1_mask, points, 255)
    cropped_triangle = cv2.bitwise_and(cropped_triangle, cropped_triangle,
                                       mask=cropped_tr1_mask)

    tr2_pt1 = points_list2[triangle_index[0]]
    tr2_pt2 = points_list2[triangle_index[1]]
    tr2_pt3 = points_list2[triangle_index[2]]
    triangle2 = np.array([tr2_pt1, tr2_pt2, tr2_pt3], np.int32)
    rect2 = cv2.boundingRect(triangle2)
    (x2, y2, w2, h2) = rect2
    cropped_triangle2 = img2[y2: y2 + h2, x2: x2 + w2]
    cropped_tr2_mask = np.zeros((h2, w2), np.uint8)
    points2 = np.array([[tr2_pt1[0] - x2, tr2_pt1[1] - y2],
                       [tr2_pt2[0] - x2, tr2_pt2[1] - y2],
                       [tr2_pt3[0] - x2, tr2_pt3[1] - y2]], np.int32)
    cv2.fillConvexPoly(cropped_tr2_mask, points2, 255)
    cropped_triangle2 = cv2.bitwise_and(cropped_triangle2, cropped_triangle2,
                                        mask=cropped_tr2_mask)

    points = np.float32(points)
    points2 = np.float32(points2)
    M = cv2.getAffineTransform(points, points2)
    warped_triangle = cv2.warpAffine(cropped_triangle, M, (w2, h2))
    warped_triangle = cv2.bitwise_and(warped_triangle, warped_triangle, mask=cropped_tr2_mask)

    img2_new_face_rect_area = img2_new_face[y2: y2 + h2, x2: x2 + w2]
    img2_new_face_rect_area_gray = cv2.cvtColor(img2_new_face_rect_area, cv2.COLOR_BGR2GRAY)
    _, mask_triangles_designed = cv2.threshold(img2_new_face_rect_area_gray, 1, 255, cv2.THRESH_BINARY_INV)
    warped_triangle = cv2.bitwise_and(warped_triangle, warped_triangle, mask=mask_triangles_designed)

    img2_new_face_rect_area = cv2.add(img2_new_face_rect_area, warped_triangle)
    img2_new_face[y2: y2 + h2, x2: x2 + w2] = img2_new_face_rect_area

The triangles drawn on the new blank image look like the image below:

[Image 197: warped triangles assembled into the new face]

Swapping the Face into the Second Image

To swap the source face into the second image, we need a mask so that the original face in the second image can be removed and replaced with the newly constructed face. To create the mask, we start with a blank image with the same dimensions as image_2.

img2_face_mask = np.zeros_like(img2_gray)

To extract the face region in the image we can reuse the convex hull we created previously. Filling this convex polygon, which covers the boundary of the face, gives us the mask.

img2_head_mask = cv2.fillConvexPoly(img2_face_mask, convexhull2, 255)

Applying cv2.bitwise_not() to img2_head_mask inverts it: the face region becomes black and the pixels outside it become white.

img2_face_mask = cv2.bitwise_not(img2_head_mask)
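For uint8 masks, bitwise NOT is the same as subtracting each pixel value from 255, which is a quick way to see what the inversion does. A numpy-only sketch with a tiny toy mask:

```python
import numpy as np

# Toy 4x4 mask: white (255) marks the face region
mask = np.zeros((4, 4), np.uint8)
mask[1:3, 1:3] = 255

# Equivalent to cv2.bitwise_not for uint8 data: face goes black,
# everything outside the face goes white
inverted = 255 - mask
```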
[Image 198: inverted face mask of the second image]

Now if we apply cv2.bitwise_and() between the second image and img2_face_mask, we get an image with the face removed. We can then use arithmetic operations on this image to add the swapped face. Let's check how the image looks without the face.

img2_noface = cv2.bitwise_and(img2, img2, mask=img2_face_mask)
[Image 199: second image with the face region removed]

To add the swapped face into the destination image (i.e., image_2) we can perform an arithmetic addition using cv2.add(). Since cv2.add() only works on two images of the same dimensions, this is why we created img2_new_face with the same dimensions as image_2.

The addition also places the face exactly in the empty (no-face) region, since img2_new_face was built from warped triangles pasted at their exact positions.
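The reason a plain addition composites the two images correctly is that their nonzero regions do not overlap: the background is zero exactly where the new face is nonzero, and vice versa. A tiny numpy sketch (toy 2x2 values) of this idea:

```python
import numpy as np

# Stand-ins: 'background' has the face region zeroed out,
# 'new_face' is black everywhere except the face region
background = np.array([[50, 0], [0, 0]], np.uint8)
new_face   = np.array([[0, 0], [0, 90]], np.uint8)

# Nonzero regions don't overlap, so addition composites the two images
combined = background + new_face
```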

result = cv2.add(img2_noface, img2_new_face)
[Image 200: result of cv2.add before seamless cloning]

But the final image after swapping does not look fully blended into the destination image. To overcome this, OpenCV provides a built-in function that performs seamless cloning: cv2.seamlessClone().


Image editing tasks concern either global changes (color/intensity corrections, filters, deformations) or local changes concerned with a selection. Here we are interested in achieving local changes, ones that are restricted to a region manually selected (ROI), in a seamless and effortless manner. The extent of the changes ranges from slight distortions to complete replacement by source content.

The parameters passed into cv2.seamlessClone() are:

cv2.seamlessClone(src, dst, mask, center, flags)

  • Source:- An 8-bit image with 3 channels that needs to be cloned into image_2.
  • Destination:- An 8-bit image with 3 channels into which the source is cloned.
  • Mask:- An 8-bit image with 1 or 3 channels. The mask should have the same dimensions as the source image, since it marks the region of the source to be blended.
  • Center:- The point in the destination image at which the center of the source region is placed; here, the center of the face in image_2.
  • Flags:- The cloning method, set depending on the requirement: NORMAL_CLONE, MIXED_CLONE, or MONOCHROME_TRANSFER.

cv2.NORMAL_CLONE:- The power of the method is fully expressed when inserting objects with complex outlines into a new background.

cv2.MIXED_CLONE:- The classic method, color-based selection, and alpha masking might be time-consuming and often leaves an undesirable halo. Seamless cloning even averaged with the original image, is not effective. Mixed seamless cloning based on a loose selection proves effective.

cv2.MONOCHROME_TRANSFER:- Monochrome transfer allows the user to easily replace certain features of one object with alternative features.

Calculating the Center:- To calculate the center of the face in image_2, we again use the convex hull that covers the boundary points of the face. From this convex hull we can compute a boundingRect, which returns the coordinates of the enclosing rectangle.

The center of this rectangle gives the center of the face, which is used when seamlessly cloning the face.

(x3, y3, w3, h3) = cv2.boundingRect(convexhull2)
center_face = (int((x3 + x3 + w3) / 2), int((y3 + y3 + h3) / 2))
#removing the face from the img2
img2_face_mask = np.zeros_like(img2_gray)
img2_head_mask = cv2.fillConvexPoly(img2_face_mask, convexhull2, 255)
img2_face_mask = cv2.bitwise_not(img2_head_mask)
img2_noface = cv2.bitwise_and(img2, img2, mask=img2_face_mask)

result = cv2.add(img2_noface, img2_new_face)

#cloning face into the img2
(x3, y3, w3, h3) = cv2.boundingRect(convexhull2)
center_face = (int((x3 + x3 + w3) / 2), int((y3 + y3 + h3) / 2))
seamlessclone = cv2.seamlessClone(result, img2, img2_head_mask, center_face, cv2.MONOCHROME_TRANSFER)

[Image 202: final result after seamless cloning]

We can now check the result of the swap. Our program adjusts not only the size of the face, but also its color and orientation.

To download the whole source code you can click the link face_swap.ipynb. This is a Google Colab notebook; remember to change the paths to your images and to the dlib model.