### Data and Code

• Updated code: mp6code.zip. This version makes the notation more consistent and avoids overwriting Matlab's built-in function feature.m.
• Contents:
• jpg/*.jpeg: 168 image files
• rects/allrects.txt: 168 lines, one per image. Each line describes 3 classes (LIPS, FACE, and OTHER); each class contains 4 rectangles, and each rectangle is specified by 4 integers, [XMIN, YMIN, WIDTH, HEIGHT], so there are 3*4*4=48 integers per line. For more information about the rectangle specification, see the Matlab documentation.
• mp6code: useful programs.
• showrects.m: Run this program to display some of the rectangles, so you can see what they look like.
• plotrects.m: A utility function used by showrects.m.
• integralimage.m: Use this to compute the integral image for each image.
• differimage.m: Not required, but you can use it to invert integralimage.m, to verify that the integral image was computed correctly.
• rectfeature.m: Compute a Viola-Jones feature.
• bestthreshold.m: For any given one-dimensional feature, that has been computed over the entire training database, this function finds the best classification threshold, polarity, and error rate.
• rectsmatch.m: Not needed.
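The idea behind integralimage.m is the standard integral-image trick: precompute cumulative sums once per image, after which any rectangle sum costs only four array lookups. The following Python/NumPy sketch illustrates the computation (the function names integral_image and rect_sum are mine; in the MP itself you should use the provided Matlab code):

```python
import numpy as np

def integral_image(img):
    """Cumulative sum over rows then columns, zero-padded on the top and
    left so that rectangle sums need no boundary checks."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of all pixels in the rectangle [x, y, w, h], via four lookups."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]
```

Keeping all 168 integral images in RAM, as suggested below, means each candidate feature over the whole database is just a handful of vectorized lookups.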

In this MP, you will create a face detector using the Viola-Jones features, combined with Adaboost. Steps are as follows.

1. DATA: There are 168 images, and 168 corresponding lines in allrects.txt. Use the first 126 images as training data (four faces/image, four nonface rectangles/image), and the last 42 as test data.
• Load each JPG image, and extract its integral image. I recommend keeping all of the integral images in RAM, so you can rapidly compute features for the whole database.
• Load allrects.txt. This file contains 12 rectangles per image. The first 4 rectangles per image are lip rectangles; ignore these. The next 4 are face rectangles: these are the positive examples. The last 4 are randomly generated negative examples. You can see what they look like using showrects.m; that file also gives some hints about how to load the images and the rectangles.
2. FEATURES: The function rectfeature.m calculates one type of feature, for the entire database. It takes three parameters: RECTF is the sub-rectangle, VERT tells whether the feature should be oriented vertically or horizontally within the sub-rectangle, and ORDER tells whether there should be 1, 2, 3, or 4 sub-sub-rectangles extracted within the sub-rectangle.
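One way to picture what such a feature computes (a hedged reading; consult rectfeature.m for the exact definition used in the MP): split the sub-rectangle into ORDER adjacent strips along the chosen axis, and take the alternating sum of the strip sums. A Python/NumPy sketch, assuming strip dimensions divide evenly and using hypothetical helper names:

```python
import numpy as np

def integral_image(img):
    """Zero-padded cumulative sums, so rectangle sums cost four lookups."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, x, y, w, h):
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_feature(ii, x, y, w, h, vert, order):
    """Alternating sum of `order` adjacent strips inside the rectangle.
    vert=1 stacks the strips vertically, vert=0 places them side by side.
    Assumes w (or h) is divisible by order."""
    total = 0.0
    for k in range(order):
        if vert:
            s = rect_sum(ii, x, y + k * (h // order), w, h // order)
        else:
            s = rect_sum(ii, x + k * (w // order), y, w // order, h)
        total += s if k % 2 == 0 else -s
    return total
```

With order=2 and vert=1 this is the classic Viola-Jones "top strip minus bottom strip" feature; order=1 is just the rectangle's mean brightness (up to scale).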
3. TRAINING: You will use the Adaboost algorithm, which combines the outputs of T=40 different weak classifiers in order to make one strong classifier. You will need to create a weight vector or matrix that has as many weights as there are training rectangles (126*8). You will also need to specify the correct label (0 or 1) of each of those (126*8) training rectangles. The weights should initially all be equal. Train each weak classifier separately (for t=1:40):
1. To create a weak classifier, first renormalize the training weights: W=W/sum(sum(W));.
2. Next, search through every possible one-dimensional feature to see which one gives the lowest weighted classification error. "Every possible feature" means: for xmin=[0:(1/6):(5/6)],
• for ymin=[0:(1/6):(5/6)],
• for wid=[(1/6):(1/6):(1-xmin)],
• for hgt=[(1/6):(1/6):(1-ymin)], rectf=[xmin,ymin,wid,hgt],
• for vert=[0,1],
• for order=1:4,
• run rectfeature.m
• run bestthreshold.m to find the best error rate possible using this feature.
• If the best error rate using this feature is better than your current best, then save this feature definition as your current best.
3. Once you've found the best feature for the current set of weights, reduce the weight of each training rectangle that it classified correctly during the current iteration: multiply each such weight by beta=err/(1-err), where err is the weighted error rate of the best feature. Save these beta values; you'll need them during testing.
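The two ingredients of each boosting round can be sketched as follows (a Python/NumPy illustration, not the course's bestthreshold.m; best_threshold and adaboost_update are hypothetical names, and the brute-force threshold search here trades speed for clarity). A weak classifier labels rectangle i as a face when p*f[i] < p*theta, for some polarity p and threshold theta:

```python
import numpy as np

def best_threshold(f, labels, w):
    """For one 1-D feature f over the whole training set, find the
    (weighted error, threshold, polarity) minimizing the weighted error,
    trying each observed feature value as a candidate threshold."""
    best = (np.inf, 0.0, 1)
    for theta in f:
        for p in (1, -1):
            pred = (p * f < p * theta).astype(int)
            err = w[pred != labels].sum()
            if err < best[0]:
                best = (err, theta, p)
    return best

def adaboost_update(preds, labels, w):
    """Step 3 of the round: down-weight correctly classified rectangles
    by beta = err/(1-err). w is assumed already normalized (step 1);
    renormalization happens at the start of the next round."""
    err = w[preds != labels].sum()
    beta = err / (1.0 - err)
    w_new = w.copy()
    w_new[preds == labels] *= beta
    return w_new, beta
```

Because correctly classified rectangles shrink by the factor beta < 1/2 whenever err < 1/2, each round forces the next weak classifier to concentrate on the examples the current one got wrong.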
4. TESTING: Run your 40 weak classifiers on the test data. This time, instead of generating 0/1 as output, generate +/-1 as output of the classifier. Multiply each +/-1 label by the corresponding classifier's alpha=-log(beta), and add them all together. If the result is greater than zero, call that test rectangle a face; if not, call it a non-face.
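The strong-classifier vote described above can be sketched in a few lines (a Python/NumPy illustration with a hypothetical function name; it follows this MP's convention of +/-1 votes with a zero decision threshold):

```python
import numpy as np

def strong_classify(weak_preds01, betas):
    """Combine T weak classifiers into the strong decision.
    weak_preds01: (T, N) array of 0/1 weak-classifier outputs for N
    test rectangles; betas: length-T array saved during training."""
    alphas = -np.log(np.asarray(betas, dtype=float))  # alpha_t = -log(beta_t)
    votes = 2.0 * np.asarray(weak_preds01) - 1.0      # map {0,1} -> {-1,+1}
    score = alphas @ votes                            # weighted vote per rectangle
    return (score > 0).astype(int)                    # 1 = face, 0 = non-face
```

Note that alpha = -log(beta) is positive exactly when the weak classifier's weighted error was below 1/2, so more accurate weak classifiers get larger votes.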
5. RESULTS TO HAND IN: As part of your standard narrative report, hand in a figure showing
1. The total unweighted test-corpus error rate of the strong classifier, as a function of the number of weak classifiers it includes (abscissa is numbered from T=1:40, ordinate is total error rate).
2. The total unweighted error rate of the t'th weak classifier, for t=1:40.
3. The weighted error rate (=beta/(1+beta)) of the t'th weak classifier.
Some of these curves go up, some go down. Why?
6. EXTRA CREDIT: For extra credit, figure out how to run your trained face classifier as a face detector. Divide each image into four quadrants and, in each quadrant, test every possible rectangle to see which one has the highest face-detection score. (This will be most efficient if you rewrite rectfeature.m so that it can either test all 40 of your features at once, very quickly, for one rectangle, or compute one feature, very quickly, for every possible candidate rectangle in an image.) Hand in code that generates and then plots (using plotrects.m) the four detected face rectangles and, in a different color, the four true face rectangles from the same image.