AN EFFICIENT IMPLEMENTATION OF FPGA BASED FACE DETECTION AND FACE RECOGNITION SYSTEM USING HAAR CLASSIFIERS

Abstract
This paper introduces a novel technique to detect and recognize faces in real time at a very high rate. It is essentially a feature-based approach, in which a classifier is trained on Haar-like rectangular features selected by the AdaBoost algorithm, and histogram equalization is used as an efficient representation method to compensate for varying illumination in the image. The face detection system generates an integral image window so that a Haar feature classification can be performed during one clock cycle, and it then performs classification operations in parallel using Haar classifiers to detect a face in the image sequence. The classifiers at the beginning of the cascade are simpler and consist of smaller numbers of features; as one proceeds through the cascade, the classifiers become more complex. A region is reported as a detection only if it passes all the classifier stages in the cascade. If it is rejected at any stage, it is discarded and not processed further. If all stages are passed, the candidate face is concluded to be a recognized face. Although a face detection module is typically designed to deal with single images, its performance can be further improved if a video stream is available.
Keywords: AdaBoost algorithm, Haar features, histogram equalization, integral image.
I. Introduction
Computer vision is one of the foremost fields that has seen an increasing number of applications in recent years, across domains such as biomedical imaging, surveillance systems, and interactive systems such as gesture recognition and gaming. Detection of human faces is one of the key elements of computer vision applications in these domains. Face detection consists of identifying and locating a human face in an image regardless of its size, position, and condition. Numerous approaches have been proposed for face detection in images. Early research used simple features such as color, motion, and texture; however, these methods break down easily because of the complexity of the real world. The face detection method proposed by Viola and Jones [1] is the most popular among the statistically based approaches. It is a variant of the AdaBoost algorithm [2] that achieves rapid and robust face detection. The proposed face detection framework is based on the AdaBoost learning algorithm using Haar features. Handling varying illumination is considered one of the most difficult tasks in face detection: variation caused by illumination is highly non-linear and makes the task extremely complex. A well-known contrast enhancement algorithm, histogram equalization, is therefore applied to compensate for the illumination conditions. Over the past two decades, the problem of face detection has attracted substantial attention and witnessed impressive growth in basic and applied research, product development, and applications. The purpose of this paper is to implement, and thereby recreate, the face detection algorithm presented by Viola and Jones with a refinement using the histogram equalization technique. The algorithm should be capable of functioning in an unconstrained environment, meaning that it should detect all visible faces in any conceivable image. In order to guarantee optimum performance of the developed algorithm, the vast majority of images used for training, evaluation, and testing are either found on the internet or taken from private collections.
A. Overview
Facial feature detection methods generally model two types of information. The first is the local texture around a given feature, for example the pixel values in a small region around an eye. The second is the geometric configuration of a given set of facial features, e.g. eyes, nose, and mouth. This paper encloses four main contributions of our face detection framework. We introduce each of these ideas briefly below and describe them in detail in subsequent sections. Section 2 discusses the various components required for face detection: the integral image, Haar features, the AdaBoost algorithm, and histogram equalization. Section 3 specifically discusses the proposed hardware architecture. Section 4 deals with the implementation and experimental results.
II. COMPONENTS REQUIRED FOR FACE DETECTION
A. Integral Image
Integral images can be defined as two-dimensional lookup tables of the same size as the original image. They allow the sum of any rectangular area in the image, at any position or scale, to be computed using only four lookups, as in (1):

    Sum = pt4 - pt3 - pt2 + pt1     (1)

where the points pt1 to pt4 belong to the integral image. This new image representation, called the integral image, paves the way for fast feature evaluation [3]. The integral image at location (x, y) contains the sum of the pixels above and to the left of (x, y); the shaded region represents the sum of pixels up to position (x, y) of the image. Using the integral image, any rectangular sum can be computed in four array references [4]. Figure 2 illustrates the integral image sum generation: the sum of the pixels within rectangle D can be computed with four array references. The value of the integral image at location 1 is the sum of the pixels in rectangle A. The value at location 2 is A + B, at location 3 it is A + C, and at location 4 it is A + B + C + D. The sum within D can therefore be computed as 4 + 1 - (2 + 3).
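The following is a minimal software sketch of the integral image and the four-lookup rectangle sum of equation (1), written in Python for clarity; it is an illustration of the concept, not the FPGA architecture described in this paper.

    import numpy as np

    def integral_image(img):
        # ii(x, y) = sum of all pixels above and to the left of (x, y), inclusive
        return np.cumsum(np.cumsum(img.astype(np.int64), axis=0), axis=1)

    def rect_sum(ii, x, y, w, h):
        # Sum of the w-by-h rectangle with top-left corner (x, y),
        # computed from the four corner lookups pt1..pt4 of equation (1).
        pt4 = ii[y + h - 1, x + w - 1]
        pt2 = ii[y - 1, x + w - 1] if y > 0 else 0
        pt3 = ii[y + h - 1, x - 1] if x > 0 else 0
        pt1 = ii[y - 1, x - 1] if (x > 0 and y > 0) else 0
        return pt4 - pt3 - pt2 + pt1

Because each rectangle sum costs only four lookups, feature evaluation time is independent of the rectangle size, which is what makes the multi-scale scan practical.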
B. Haar Features
Haar-like features are digital image features used in face detection. They owe their name to their intuitive similarity with Haar wavelets and were used in the first real-time face detector [5]. A Haar-like feature considers adjacent rectangular regions at a specific location in a detection window, as indicated in figure 3; it sums up the pixel intensities in these regions and calculates the difference between them. This difference is then used to categorize subsections of an image. For example, suppose we have an image database of human faces. It is a common observation that among all faces the region of the eyes is darker than the region of the cheeks. Therefore a common Haar feature for face detection is a set of two adjacent rectangles that lie over the eye and the cheek regions. The position of these rectangles is defined relative to a detection window that acts like a bounding box for the target object (the face in this case).
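As a sketch of the eye/cheek example above, a single two-rectangle Haar-like feature can be evaluated as the difference between the sums of two vertically adjacent rectangles, reusing the integral_image/rect_sum helpers sketched in the previous subsection (the function name and parameters are illustrative, not from the paper).

    def two_rect_feature(ii, x, y, w, h):
        # Upper rectangle (e.g. the darker eye band) minus the lower
        # rectangle (e.g. the brighter cheek band) of equal size.
        top = rect_sum(ii, x, y, w, h)
        bottom = rect_sum(ii, x, y + h, w, h)
        return top - bottom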
C. Haar Feature Calculation
Haar features are composed of either two or three rectangles. Face candidates are scanned and searched for the Haar features of the current stage. The weight and size of each feature, and the features themselves, are generated by the AdaBoost machine learning algorithm [6]. Each Haar feature has a value that is calculated by taking the pixel sum of each rectangle area, multiplying each by its respective weight, and then summing the results. Several Haar features compose a stage. A stage comparator sums all the Haar feature values of a stage and compares this summation with a stage threshold. The threshold is a constant obtained from the AdaBoost algorithm [7]. The face detection algorithm eliminates face candidates quickly using a cascade of stages. The cascade eliminates candidates by imposing stricter requirements at each stage, with later stages being much more difficult for a candidate to pass. Candidates exit the cascade either when they pass all stages or when they fail any stage. A face is detected if a candidate passes all stages; this process is shown in the figure. A candidate must pass every stage in the cascade to be concluded to be a face.
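The following Python sketch illustrates the stage comparator and cascade logic described above. The data layout (lists of weighted rectangles, a dictionary per stage) is an assumption chosen for readability and is not the paper's data structure; rect_sum is the helper sketched earlier.

    def feature_value(ii, rects):
        # rects: list of (x, y, w, h, weight) tuples describing one Haar feature
        return sum(weight * rect_sum(ii, x, y, w, h)
                   for (x, y, w, h, weight) in rects)

    def passes_stage(ii, stage):
        # stage: {"features": [rects, ...], "threshold": float}
        total = sum(feature_value(ii, rects) for rects in stage["features"])
        return total >= stage["threshold"]

    def cascade_detect(ii, stages):
        # A candidate is rejected at the first stage it fails and is
        # reported as a face only if it passes every stage.
        return all(passes_stage(ii, stage) for stage in stages)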
D. AdaBoost Algorithm
AdaBoost, short for Adaptive Boosting, is a machine learning algorithm formulated by Yoav Freund and Robert Schapire [8]. The AdaBoost algorithm is based on the idea that a strong classifier can be created by linearly combining a number of weak classifiers. A weak classifier consists of a feature (f_j), a threshold (θ_j), and a polarity (p_j) indicating the direction of the inequality:

    h_j(x) = 1 if p_j f_j(x) < p_j θ_j, and 0 otherwise     (2)

In the boosting algorithm, T hypotheses are constructed, each using a single feature. The final hypothesis is a weighted linear combination of the T hypotheses, where the weights are inversely proportional to the training errors. At each iteration t, the algorithm trains the best weak classifier, i.e. the one that minimizes the training error. After T iterations, we obtain a strong classifier which is the linear combination of the T best weak classifiers multiplied by their weight values. The AdaBoost algorithm is thus used to select a set of features and to train a classifier. Locating such features is an important stage in many facial image interpretation tasks (such as face verification, face tracking, or facial expression recognition). We adopt the fast and efficient face finder described by Viola and Jones to locate the approximate position of each face in an image. The detector uses a cascade structure to reduce the number of features considered for each sub-window. We then use the same method, trained on regions around facial feature points, to locate interior points on the face. However, there is often insufficient local structure around each feature to train fully reliable feature finders, even when the thresholds are set low enough to locate the true position of each feature.
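A minimal sketch of the weak/strong classifier structure of equation (2) is given below, assuming the feature values, thresholds, polarities, and AdaBoost weights (alphas) have already been obtained from training; it illustrates the combination rule only, not the training procedure or the trained classifier of this paper.

    def weak_classifier(feature_val, theta, polarity):
        # Equation (2): h(x) = 1 if p * f(x) < p * theta, else 0
        return 1 if polarity * feature_val < polarity * theta else 0

    def strong_classifier(feature_vals, weak_params, alphas):
        # weak_params: list of (theta, polarity) pairs; alphas: weights from
        # AdaBoost, inversely related to each weak classifier's training error.
        votes = sum(a * weak_classifier(f, t, p)
                    for f, (t, p), a in zip(feature_vals, weak_params, alphas))
        # Standard AdaBoost decision rule: accept if the weighted vote
        # exceeds half of the total weight.
        return 1 if votes >= 0.5 * sum(alphas) else 0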
E. Histogram Equalization
Often the grey levels in an image are confined to a narrow range: the image may appear extremely grey and lack detail because its values are limited to mid-grey levels, i.e. it lacks contrast. Contrast enhancement is performed with histogram equalization, as shown in figures 5(a) and 5(b), to expand the range of grey levels within the image. To do this, we need to calculate the cumulative frequencies within the image. The cumulative frequency for grey level g is defined as the sum of the histogram values from 0 to g [9]. If the cumulative frequency is stored in an array, histogram equalization can be written as:

    alpha = 255 / numberOfPixels     (3)
    g(x, y) = cumulativeFrequency[f(x, y)] * alpha     (4)

where f(x, y) is the original grey level of pixel (x, y), cumulativeFrequency[·] is the cumulative histogram, and g(x, y) is the equalized grey level.
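A short Python sketch of equations (3) and (4) for an 8-bit greyscale image is given below; it is a software illustration of the preprocessing step, not the hardware module described in this paper.

    import numpy as np

    def histogram_equalize(img):
        # img: 2-D uint8 greyscale image
        hist = np.bincount(img.ravel(), minlength=256)  # histogram of grey levels
        cum_freq = np.cumsum(hist)                      # cumulative frequency 0..g
        alpha = 255.0 / img.size                        # equation (3)
        lut = np.round(cum_freq * alpha).astype(np.uint8)
        return lut[img]                                 # equation (4): g = cumFreq[f] * alpha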
III. PROPOSED FACE DETECTION
The entire process can be split into two parts: a) training and b) recognition.

References:

[1] Paul Viola and Michael J. Jones, "Robust real-time face detection," International Journal of Computer Vision, pp. 137-154, 2004.
[2] Yoav Freund and Robert E. Schapire, "A decision-theoretic generalization of on-line learning and an application to boosting," Journal of Computer and System Sciences, pp. 119-139, 1997.
[3] Konstantinos G. Derpanis, "Integral image-based representations," Department of Computer Science and Engineering, York University, July 14, 2007.
[4] Takeshi Mita, Toshimitsu Kaneko, and Osamu Hori, "Joint Haar-like features for face detection," Multimedia Laboratory, Corporate Research & Development Center, Toshiba Corporation, Kanagawa 212-8582, Japan, in Proceedings of the Tenth IEEE International Conference on Computer Vision (ICCV'05), IEEE, 2005.
[5] Junguk Cho, Shahnam Mirzaei, Jason Oberg, and Ryan Kastner, "FPGA-based face detection system using Haar classifiers," FPGA'09, February 22-24, 2009.
[6] Shian-Ru Alex Ke, "Face detection based on AdaBoost," Department of Electrical Engineering, University of Washington, U.S.A.
[7] Ben Weber, "Generic object detection using AdaBoost," Department of Computer Science, University of California, Santa Cruz.
[8] David Cristinacce and Tim Cootes, "Facial feature detection using AdaBoost with shape constraints," Dept. of Imaging Science and Biomedical Engineering, University of Manchester, Manchester, U.K.