Face Recognition and Image Analysis using Python


By Nishank Biswas

Face recognition and analysis is a domain that has attracted substantial attention because of how naturally it pairs with modern, powerful computing technology; in combination, the two can handle tasks such as crowd surveillance, identity verification, the design of human-computer interfaces and content-based image database management. One crucial aspect of a face recognition system is image processing, since the images in such a system serve as the origin for feature extraction, whose output is in turn propagated for further analysis. Depending on the field and the objective of the image processing, various transformations of the image, such as thresholding, Gaussian filtering and watershed segmentation, can come in handy, especially in domains like image-based medical diagnosis. It is therefore an advantage to have standard packages that serve this purpose, such as scikit-image and mahotas for classical image processing, or pretrained models such as Inception-v3 for higher-level image classification.
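To make this concrete, here is a minimal sketch, assuming scikit-image is installed, of two of the transformations mentioned above applied to a sample image bundled with the library:

from skimage import data, filters

image = data.camera()                       # sample grayscale image shipped with scikit-image
blurred = filters.gaussian(image, sigma=2)  # Gaussian filtering suppresses high-frequency noise
thresh = filters.threshold_otsu(image)      # Otsu's method picks a global threshold automatically
binary = image > thresh                     # thresholding yields a binary mask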

The core of the analysis lies in the machine learning algorithms working behind the scenes, governing the framework from the initial stage of producing digestible data up to the stage where data sets can be intelligently classified into subgroups. For image processing and detection we will use the Haar feature-based cascade classifier proposed by Paul Viola and Michael Jones. Alongside the cascade classifier, we will use a machine learning algorithm called the Local Binary Patterns Histograms Face Recognizer (LBPH recognizer) to build a classifier model from the available data, which can ultimately be used to perform face recognition.

The analysis begins with the objective of extracting the pixels from the region of interest, which here is the face region, and the feature extraction for this task is done in terms of Haar features. Haar features are pre-designed rectangular patterns, analogous to convolution kernels, that are swept over the pixels of the image at varying sizes; the metrics obtained from this feature extraction are what allow the detector to make sense of the image. Appealing as this is, it is worth emphasising the computational expense involved, so to make the approach workable in a real-time environment, three methods or algorithms are generally deployed together: the integral image, the AdaBoost algorithm and the cascade classifier.

Evaluating the feature metrics naively requires a computation at every pixel of every rectangle, which takes a considerable amount of time; the concept of the integral image addresses this in the context of pixel summation and is far more computation-friendly. The other concern that needs attention is handling the enormous number of candidate features for classification, and this is tackled by selecting a small combination of key features via the AdaBoost machine learning algorithm. AdaBoost strives to extract the key features from all the others, and its great advantage can be attributed to the fact that most features, especially in image processing, are redundant. The key features thus extracted are often termed weak classifiers because of their individual inability to classify the data into subgroups efficiently; integrated into a final classifier, however, they show appreciable results, significantly increasing the overall efficiency. Finally, it would be better still to make decisions on the fly, i.e. to eliminate non-face regions as early as possible and avoid any further feature extraction on them. This is achieved by the cascade classifier, which uses AdaBoost to arrange chunks of features into a sequence of stages sophisticated enough to figure the odd one out at an early stage, thereby terminating further unnecessary extraction.
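Since the integral image is the piece that makes this computationally friendly, a minimal sketch in numpy (hypothetical array and rectangle coordinates) may help; any rectangle sum reduces to four array look-ups regardless of the rectangle's size:

import numpy as np

img = np.arange(36, dtype=np.int64).reshape(6, 6)  # hypothetical 6x6 "image"
# Integral image, padded with a leading row/column of zeros to simplify the corner formula
ii = np.pad(img, ((1, 0), (1, 0)), mode='constant').cumsum(axis=0).cumsum(axis=1)

# Sum over rows y1..y2 and columns x1..x2 via four look-ups
y1, y2, x1, x2 = 1, 4, 2, 5
rect_sum = ii[y2 + 1, x2 + 1] - ii[y1, x2 + 1] - ii[y2 + 1, x1] + ii[y1, x1]
assert rect_sum == img[y1:y2 + 1, x1:x2 + 1].sum()  # matches the direct summation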
At this stage, we can appreciate the potential of the Haar feature-based cascade classifier: its ability to pinpoint the facial region to be used for further analysis in a more or less real-time environment. From this point onward the baton is passed to the face recognition algorithm, which fits its parameters to the available data so that further images can be classified into the existing subgroups. The OpenCV module we will use for the analysis provides three recognizers: the Eigenface recognizer, the Fisherface recognizer and the LBPH face recognizer. The LBPH recognizer has proven more robust than the others, particularly when the availability of data is a concern; that is not a significant problem here, so any of the recognizer algorithms would do. We will nevertheless use the LBPH face recognizer, to emphasise its simplicity, particularly in the way it characterises the image data set locally.
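For reference, all three recognizers are exposed through factory functions in the OpenCV 2.4.x Python bindings (OpenCV 3 and later moved them into the cv2.face submodule), so swapping one for another is a one-line change; a brief sketch:

import cv2

eigen_rec = cv2.createEigenFaceRecognizer()    # PCA-based Eigenfaces
fisher_rec = cv2.createFisherFaceRecognizer()  # LDA-based Fisherfaces
lbph_rec = cv2.createLBPHFaceRecognizer()      # Local Binary Patterns Histograms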

Implementation and Analysis using Python

To demonstrate the application of a face recognition system in Python, let us consider a data set of various facial expressions across different people, which can be obtained from http://vision.ucsd.edu/content/yale-face-database; to be precise, it contains grayscale images of 15 individuals in 11 different facial expressions. To make the images accessible for processing, they have been named in the specific format subject<number>.<expression>, where number ranges from 01 to 15 and the expressions include happy, sad, normal, etc. We will begin by importing the modules required for this purpose. The first is numpy, as we will store the image data in the form of numpy arrays. Second, we import the cv2 module, which is the OpenCV module and contains the functions for face detection and recognition. We also import the os module in order to access the files from the respective directory. Lastly, the Image module from the PIL package is needed to read an image from file and convert it to grayscale, after which it can be converted into a numpy array. All of this is achieved by the commands

import cv2, os
import numpy as np
from PIL import Image
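Since the class labels live in the filenames, it may help to see the label-parsing step in isolation first; a minimal sketch with a hypothetical filename:

fname = "subject01.happy"                                # hypothetical name following the dataset convention
label = int(fname.split(".")[0].replace("subject", ""))  # "subject01" -> "01" -> 1
print label                                              # prints 1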

Now, having loaded the required modules, we can head straight for our first objective: extracting the features, which in this case are the frontal face and eye regions, with the help of the Haar feature-based cascade classifiers provided by OpenCV. To accomplish this, we first set up the cascade classifiers:

faceCascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
eyeCascade = cv2.CascadeClassifier("haarcascade_eye.xml")
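One caveat worth flagging: CascadeClassifier does not raise an error when the XML file cannot be found; it simply loads empty and then detects nothing. The paths above assume the cascade files sit in the working directory, so a quick sanity check is worthwhile:

# CascadeClassifier fails silently on a wrong path, so verify the cascades loaded
assert not faceCascade.empty(), "haarcascade_frontalface_default.xml not found"
assert not eyeCascade.empty(), "haarcascade_eye.xml not found"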

Once the setup is done, the next step is to walk through the data set, which is done with the os module, and let each image be processed by the cascade classifiers.

def get_images_eyes_and_labels(path):
    # Skip the .sad images; they are held out as the test set
    image_paths = [os.path.join(path, f) for f in os.listdir(path) if not f.endswith('.sad')]
    # images will contain the face images
    images = []
    # labels will contain the label assigned to each face image
    labels = []
    # eye will contain the images of eyes
    eye = []
    # eye_labels will contain the label assigned to each eye image; a separate
    # list is needed because a face may contribute zero, one or two eye images
    eye_labels = []
    for image_path in image_paths:
        # Read the image and convert it to grayscale
        image_pil = Image.open(image_path).convert('L')
        # Convert the image format into a numpy array
        image = np.array(image_pil, 'uint8')
        # Extract the label from the filename; it is used for subgrouping
        nbr = int(os.path.split(image_path)[1].split(".")[0].replace("subject", ""))
        # Detect faces by triggering the loaded face cascade
        # (scaleFactor=1.3, minNeighbors=5)
        faces = faceCascade.detectMultiScale(image, 1.3, 5)
        # If a face is detected, append the face to images, the eyes to eye
        # and the label to labels and eye_labels
        for (x, y, w, h) in faces:
            images.append(image[y: y + h, x: x + w])
            labels.append(nbr)
            roi = image[y: y + h, x: x + w]
            eyes = eyeCascade.detectMultiScale(roi)
            for (ex, ey, ew, eh) in eyes:
                eye.append(roi[ey: ey + eh, ex: ex + ew])
                eye_labels.append(nbr)
                cv2.imshow("Adding eyes to training set...", roi[ey: ey + eh, ex: ex + ew])
            cv2.imshow("Adding faces to training set...", image[y: y + h, x: x + w])
            cv2.waitKey(50)
    # Return the face images, the eye images and their label lists
    return images, eye, labels, eye_labels

To be precise, the function get_images_eyes_and_labels above extracts the regions of interest from each image found under the path supplied as an argument, returning lists of frontal face images and eye images along with their corresponding labels indicating the respective classes. Note that the eye images carry their own label list, since a single face can contribute more than one eye image and each recognizer requires exactly one label per sample. The returned lists are suitable for the image recognition process, and for the Yale Face data set this is achieved with the commands

path = './yalefaces'
images, eye, labels, eye_labels = get_images_eyes_and_labels(path)

cv2.destroyAllWindows()
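Before training, it is worth confirming that the sample and label counts line up, since the recognizer's train() method rejects mismatched lengths; a hypothetical check:

# Each recognizer needs exactly one label per training sample
print "faces: {}, eyes: {}".format(len(images), len(eye))
assert len(images) == len(labels) and len(eye) == len(eye_labels)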

Up to this point we have taken care of preparing a digestible data set; now we can invoke the machine learning algorithm encapsulated in the LBPH recognizer, which can be trusted to process the eye data set as well, to perform image recognition. To do that, we load one recognizer per feature into the environment (note that cv2.createLBPHFaceRecognizer() is the OpenCV 2.4.x API; in OpenCV 3 and later the factory lives in the cv2.face submodule):

recognizer = cv2.createLBPHFaceRecognizer()
erecognizer = cv2.createLBPHFaceRecognizer()

Having loaded a recognizer for each feature, we can train them for image recognition on the data set extracted above:

recognizer.train(images, np.array(labels))
erecognizer.train(eye, np.array(eye_labels))

At this stage we have the estimated model for image recognition in hand, and we can perform the testing, or cross-validation, on the data that was explicitly left out for this task, so as to get an idea of the model's performance.

# Gather the held-out images with the extension .sad into image_paths
image_paths = [os.path.join(path, f) for f in os.listdir(path) if f.endswith('.sad')]
for image_path in image_paths:
    predict_image_pil = Image.open(image_path).convert('L')
    predict_image = np.array(predict_image_pil, 'uint8')
    faces = faceCascade.detectMultiScale(predict_image)
    for (x, y, w, h) in faces:
        nbr_predicted, conf = recognizer.predict(predict_image[y: y + h, x: x + w])
        e_roi = predict_image[y: y + h, x: x + w]
        eyes = eyeCascade.detectMultiScale(e_roi)
        # Defaults in case no eye is detected inside this face region
        e_nbr_predicted, e_conf = -1, 0
        for (ex, ey, ew, eh) in eyes:
            e_nbr_predicted, e_conf = erecognizer.predict(e_roi[ey: ey + eh, ex: ex + ew])
        nbr_actual = int(os.path.split(image_path)[1].split(".")[0].replace("subject", ""))
        if nbr_actual == nbr_predicted:
            if nbr_predicted == e_nbr_predicted:
                print "{} is Completely Recognized with confidence {}".format(nbr_actual, conf + e_conf)
            else:
                print "{} is Partially Recognized with confidence {}".format(nbr_actual, conf)
        else:
            print "{} is Incorrectly Recognized as {}".format(nbr_actual, nbr_predicted)
        cv2.imshow("Recognizing Face", predict_image[y: y + h, x: x + w])
        cv2.waitKey(1000)

The segment of code above records the trained model's response to a set of unseen data and uses the results to gauge the model's performance. To make the recognition system more robust, it first takes the response of the frontal-face recognizer, then cross-validates it against the eye recognizer, and expresses the output as a confidence level based on the agreement between the two classifiers.
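If training becomes expensive enough to be worth caching, the fitted state can be persisted; a minimal sketch, assuming the save/load methods of the OpenCV 2.4.x FaceRecognizer bindings and hypothetical file names:

# Persist the trained models so training need not be repeated on every run
recognizer.save('face_model.yml')
erecognizer.save('eye_model.yml')

# Later, restore them into fresh recognizer objects
recognizer = cv2.createLBPHFaceRecognizer()
recognizer.load('face_model.yml')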


3 COMMENTS

  1. Why not use the opencv module alone to read the image and convert it to gray? By default it stores the image as a numpy ndarray. Is there any difference in computation time between PIL and OpenCV when loading an image and converting it to gray?

  2. Hi,
    Thanks for this tutorial. I am working on face recognition: I have to compare pictures from my camera against my bank of users, and the program has to return the name of the person in front of the camera.

    I get an error on the line: "erecognizer.update(eye, np.array(labels))"

    The error is:

    OpenCV Error: Bad argument (The number of samples (src) must equal the number of labels (labels). Was len(samples)=245, len(labels)=0.) in train, file /##/opencv_contrib/modules/face/src/lbph_faces.cpp, line 362
    Traceback (most recent call last):
    File "script.py", line 51, in
    erecognizer.update(eye, np.array(labels))
    cv2.error: /##/opencv_contrib/modules/face/src/lbph_faces.cpp:362: error: (-5) The number of samples (src) must equal the number of labels (labels). Was len(samples)=245, len(labels)=0. in function train

    Any idea?

  3. Traceback (most recent call last):
    File "E:\python files\ip\rec1.py", line 39, in
    images, labels = get_images_and_labels(path)
    File "E:\python files\ip\rec1.py", line 19, in get_images_and_labels
    image_pil = Image.open(image_path).convert('L')
    File "C:\Python27\lib\site-packages\PIL\Image.py", line 1980, in open
    raise IOError("cannot identify image file")
    IOError: cannot identify image file

    I am getting the above error. What should I write in place of image_path?
