openmv/usr/examples/07-Face-Detection/face_tracking.py
Kwabena W. Agyeman 0abd5d3688 Working on scripts...
Moved feature detection scripts into their own folders and added an explicit
frame_skip value per Ibrahim's request.

Finished working on the snapshot and video recording scripts for the next
release.

... From CMUcam4 work I learned that people will just want examples that
do "X" thing. So, in general, our examples should include a simple
script showing off a feature and then a more complex script that does "X",
where "X" is some app that a person would want. For example, we'll get
requests for face tracking with servos, and movement detection with
servos. So, instead of answering this question a million times with an
example script, we'll just have examples for all kinds of things people
will want.

Gotta automate dealing with help support at the end of the day...
2016-04-02 11:18:20 -04:00


import sensor, time, image
# Keypoint normalization. Normalized keypoints are not rotation invariant.
NORMALIZED=False
# Keypoint extractor threshold, range from 0 to any number.
# This threshold is used when extracting keypoints, the lower
# the threshold the higher the number of keypoints extracted.
KEYPOINTS_THRESH=32
# Keypoint-level threshold, range from 0 to 100.
# This threshold is used when matching two keypoint descriptors, it's the
# percentage of the distance between two descriptors to the max distance.
# In other words, the minimum matching percentage between 2 keypoints.
MATCHING_THRESH=80
# Reset sensor
sensor.reset()
# Sensor settings
sensor.set_contrast(1)
sensor.set_gainceiling(16)
sensor.set_framesize(sensor.QQVGA)
sensor.set_pixformat(sensor.GRAYSCALE)
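# Note: QQVGA grayscale keeps the frames small, so Haar detection and keypoint extraction stay fast.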
# Skip a few frames to allow the sensor to settle down.
# Note: This takes more time when executed from the IDE.
for i in range(0, 10):
    img = sensor.snapshot()
    img.draw_string(0, 0, "Please wait...")
# Load Haar Cascade
# By default this will use all stages; using fewer stages is faster but less accurate.
face_cascade = image.HaarCascade("frontalface", stages=25)
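# Print the cascade's parameters (e.g. its resolution and the number of stages loaded).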
print(face_cascade)
# First set of keypoints
kpts1 = None
# Find a face!
while (kpts1 == None):
    img = sensor.snapshot()
    img.draw_string(0, 0, "Looking for a face...")
    # Find faces
    objects = img.find_features(face_cascade, threshold=0.5, scale=1.5)
    if objects:
        # Expand the detected face ROI by 22 pixels in each direction
        face = (objects[0][0]-22, objects[0][1]-22, objects[0][2]+22*2, objects[0][3]+22*2)
        # Extract keypoints using the detected face area as the ROI
        kpts1 = img.find_keypoints(threshold=KEYPOINTS_THRESH, normalized=NORMALIZED, roi=face)
        # Draw a rectangle around the first face
        img.draw_rectangle(objects[0])
# Draw keypoints
print(kpts1)
img.draw_keypoints(kpts1, size=12)
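# Pause so the drawn keypoints stay visible before tracking starts.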
time.sleep(1000)
# FPS clock
clock = time.clock()
while (True):
    clock.tick()
    img = sensor.snapshot()
    # Extract keypoints from the whole frame (no ROI this time)
    kpts2 = img.find_keypoints(threshold=KEYPOINTS_THRESH, normalized=NORMALIZED)
    if (kpts2):
        # Match the first set of keypoints with the second one
        c = image.match_descriptor(image.FREAK, kpts1, kpts2, threshold=MATCHING_THRESH)
        # If more than 25% of the keypoints match, draw the match location
        if (c[2] > 25):
            img.draw_cross(c[0], c[1], size=5)
            img.draw_string(0, 10, "Match %d%%"%(c[2]))

    # Draw FPS
    img.draw_string(0, 0, "FPS:%.2f"%(clock.fps()))