There aren't many actual functionality changes from the last commit.
However, switching the basic wrapper library to just longjmp() on
failure, and moving all the state info into structs, required changes to
all the base functions from the last commit. The rest of the changes
link in the new functionality and get the code to compile (usbdbg.c
edits).
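A minimal sketch of the longjmp-on-failure pattern described above, assuming
the error jump buffer and message live in the state struct; the names
(fs_state_t, ff_fail, ff_read, load_something) are hypothetical and just
illustrate the shape of the wrapper layer, not the firmware's actual API.

    #include <setjmp.h>
    #include <stdint.h>
    #include <stdio.h>

    // Hypothetical state struct: everything the wrappers need, including
    // the error jump buffer, lives here instead of in globals.
    typedef struct {
        jmp_buf error_jmp;   // where to longjmp on any failure
        const char *error;   // human-readable failure reason
        FILE *fp;            // underlying file handle
    } fs_state_t;

    // Wrapper helpers no longer return error codes; they just jump.
    static void ff_fail(fs_state_t *st, const char *msg)
    {
        st->error = msg;
        longjmp(st->error_jmp, 1);
    }

    static void ff_read(fs_state_t *st, void *buf, size_t len)
    {
        if (fread(buf, 1, len, st->fp) != len) {
            ff_fail(st, "short read");
        }
    }

    // The caller establishes the recovery point once, then calls the
    // wrappers without checking return values.
    int load_something(fs_state_t *st, const char *path)
    {
        st->fp = NULL;
        if (setjmp(st->error_jmp)) {
            // Any ff_* failure lands here.
            printf("error: %s\n", st->error);
            if (st->fp) fclose(st->fp);
            return -1;
        }
        st->fp = fopen(path, "rb");
        if (st->fp == NULL) ff_fail(st, "open failed");
        uint8_t header[2];
        ff_read(st, header, sizeof(header));
        fclose(st->fp);
        return 0;
    }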
Next I'll work on a function that abstracts the problem of opening an
image and executing a line-by-line operation on it. I've already worked
out the code for that, but it's not in this commit, to keep things
streamlined.
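Something like the following is what I have in mind; this is only a sketch
under assumed names (line_op_t, for_each_line), with header parsing elided
and an 8-bit grayscale image of known dimensions assumed for clarity.

    #include <stdint.h>
    #include <stdio.h>

    // Hypothetical callback type: called once per image row.
    typedef void (*line_op_t)(uint8_t *row, int width, int y, void *arg);

    // Open the file, stream it one line at a time through the callback,
    // then close it.
    static int for_each_line(const char *path, int width, int height,
                             line_op_t op, void *arg)
    {
        FILE *fp = fopen(path, "rb");
        if (fp == NULL) {
            return -1;
        }
        uint8_t row[640];   // enough for one VGA-width grayscale line
        if (width > (int)sizeof(row)) {
            fclose(fp);
            return -1;
        }
        for (int y = 0; y < height; y++) {
            if (fread(row, 1, width, fp) != (size_t)width) {
                fclose(fp);
                return -1;
            }
            op(row, width, y, arg);
        }
        fclose(fp);
        return 0;
    }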
* With the new integral moving window we can support face detection,
keypoints, and template matching on QVGA frames. However, it has only
been implemented and tested for face detection (see the sketch after
this list).
* Increasing the max integral frame size for now to make testing easier.
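A rough sketch of the moving-window idea, assuming a circular buffer of
integral-image rows (WIN_H, ii_win, and the function names are all
illustrative): each row holds the usual cumulative sums from the top of the
frame, but only the last WIN_H rows are kept, which is enough for any box
sum whose height plus one guard row fits in the window.

    #include <stdint.h>

    #define WIN_H 32   // number of integral rows kept (assumed window size)
    #define MAX_W 320  // QVGA width

    // Circular buffer of integral rows; frame row y lives at ii_win[y % WIN_H].
    static uint32_t ii_win[WIN_H][MAX_W];

    // Fold source row y into the window using the previous integral row:
    // ii(y, x) = ii(y - 1, x) + sum of src[0..x].
    static void ii_update_row(const uint8_t *src, int w, int y)
    {
        uint32_t *cur = ii_win[y % WIN_H];
        const uint32_t *prev = (y > 0) ? ii_win[(y - 1) % WIN_H] : NULL;
        uint32_t row_sum = 0;
        for (int x = 0; x < w; x++) {
            row_sum += src[x];
            cur[x] = row_sum + (prev ? prev[x] : 0);
        }
    }

    // Sum of the box (x1,y1)..(x2,y2) inclusive; rows y1-1..y2 must still
    // be inside the window, i.e. y2 - y1 + 2 <= WIN_H.
    static uint32_t ii_box_sum(int x1, int y1, int x2, int y2)
    {
        uint32_t a = (x1 > 0 && y1 > 0) ? ii_win[(y1 - 1) % WIN_H][x1 - 1] : 0;
        uint32_t b = (y1 > 0) ? ii_win[(y1 - 1) % WIN_H][x2] : 0;
        uint32_t c = (x1 > 0) ? ii_win[y2 % WIN_H][x1 - 1] : 0;
        uint32_t d = ii_win[y2 % WIN_H][x2];
        return d - b - c + a;
    }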
Firmware will now automatically detect the appropriate file type and read
that file type in correctly.
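The detection can key off the magic bytes at the start of the file; the
dispatcher below is only an illustration (the enum and function names are
made up), but the magic values themselves are standard: "BM" for BMP, "P6"
for binary PPM, "P5" for binary PGM.

    #include <stdio.h>
    #include <string.h>

    typedef enum { IMG_UNKNOWN, IMG_BMP, IMG_PPM, IMG_PGM } img_type_t;

    // Peek at the first two bytes and classify by magic number.
    static img_type_t detect_image_type(const char *path)
    {
        char magic[2] = {0};
        FILE *fp = fopen(path, "rb");
        if (fp == NULL) {
            return IMG_UNKNOWN;
        }
        size_t n = fread(magic, 1, 2, fp);
        fclose(fp);
        if (n != 2) {
            return IMG_UNKNOWN;
        }
        if (memcmp(magic, "BM", 2) == 0) return IMG_BMP;
        if (memcmp(magic, "P6", 2) == 0) return IMG_PPM;
        if (memcmp(magic, "P5", 2) == 0) return IMG_PGM;
        return IMG_UNKNOWN;
    }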
Working on tying all of this stuff together next. It's getting a little
too complicated to deal with error cases; I need to add an error-message
function layer.
RGB565 reading and writing is going to be slow, but grayscale will go as
fast as the system can go.
If Omnivision had just reversed the byte order of the data the camera
sends, we wouldn't have this problem for RGB565.
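To spell out the assumption behind that complaint: the sensor delivers
RGB565 pixels with their two bytes swapped relative to the CPU's native
order, so every RGB565 pixel that crosses the file layer pays for a swap,
while grayscale is one byte per pixel and needs nothing. A sketch of that
per-pixel fixup (helper names are mine, not the firmware's):

    #include <stdint.h>

    static inline uint16_t rgb565_swap(uint16_t p)
    {
        return (uint16_t)((p << 8) | (p >> 8));
    }

    // After reading a row into the frame buffer, RGB565 needs the
    // per-pixel byte swap; grayscale rows can be used as-is.
    static void fix_rgb565_row(uint16_t *row, int width)
    {
        for (int x = 0; x < width; x++) {
            row[x] = rgb565_swap(row[x]);
        }
    }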
Added BMP file format reading and writing support code and modified the
ppm code to match. Upper level glue code has been left intact to be
altered in future commits.
Tested that save() and ppm writing still work. More comprehensive tests
coming soon.
... Kinda concerned that standard image file formats might not cut it for
the speed we'd like to have when using image files in function calls. I
think only grayscale is going to be fast; all the other formats require a
lot of prep work.
I think I may modify some of this low-level stuff in the future to
autodetect whether an entire grayscale image can be read in or written
out in one go, to speed that up.
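The "one go" idea amounts to the following; this is a hypothetical
illustration using plain stdio rather than the firmware's file layer, and
it assumes the header has already been parsed and the destination buffer
is contiguous 8-bit grayscale.

    #include <stdint.h>
    #include <stdio.h>

    // If the file payload is plain 8-bit grayscale with no per-pixel
    // conversion needed, pull the whole thing in with a single read
    // instead of a per-line or per-pixel loop.
    static int read_grayscale_fast(FILE *fp, uint8_t *pixels, int w, int h, int bpp)
    {
        if (bpp != 1) {
            return -1;  // not plain grayscale; fall back to the slow path
        }
        size_t total = (size_t)w * (size_t)h;
        if (fread(pixels, 1, total, fp) != total) {
            return -1;
        }
        return 0;
    }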
The negate function gives you the ability to negate an image before
running difference on it. The difference function subtracts two images
from each other and returns the abs() of the result.
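For 8-bit grayscale the two operations look roughly like this; the image
struct and function names are illustrative, not the firmware's actual API.

    #include <stdint.h>
    #include <stdlib.h>

    typedef struct {
        int w;
        int h;
        uint8_t *pixels;   // w * h grayscale samples
    } gray_image_t;

    // Negate: invert every pixel in place (255 - p).
    static void image_negate(gray_image_t *img)
    {
        int n = img->w * img->h;
        for (int i = 0; i < n; i++) {
            img->pixels[i] = 255 - img->pixels[i];
        }
    }

    // Difference: dst = |dst - src|, pixel by pixel.
    static void image_difference(gray_image_t *dst, const gray_image_t *src)
    {
        int n = dst->w * dst->h;
        for (int i = 0; i < n; i++) {
            dst->pixels[i] = (uint8_t)abs((int)dst->pixels[i] - (int)src->pixels[i]);
        }
    }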
I believe it would have been optimal to work on the RGB565 image in the
LAB color space. However, since we don't have an inverse LAB LUT, this
isn't possible. If we could replace LAB with YUV, that would free up
space for an inverse YUV table (YUV->RGB).
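For reference, what an inverse YUV table would encode is the standard
(BT.601-style) YCbCr -> RGB conversion packed back down to RGB565; the
fixed-point sketch below is just an assumption of how that conversion
would look, with the arithmetic that a LUT version would precompute.

    #include <stdint.h>

    static uint16_t yuv_to_rgb565(uint8_t y, uint8_t u, uint8_t v)
    {
        int c = (int)y;
        int d = (int)u - 128;
        int e = (int)v - 128;

        int r = c + ((91881 * e) >> 16);               // R = Y + 1.402 * V
        int g = c - ((22554 * d + 46802 * e) >> 16);   // G = Y - 0.344*U - 0.714*V
        int b = c + ((116130 * d) >> 16);              // B = Y + 1.772 * U

        if (r < 0) r = 0; else if (r > 255) r = 255;
        if (g < 0) g = 0; else if (g > 255) g = 255;
        if (b < 0) b = 0; else if (b > 255) b = 255;

        return (uint16_t)(((r & 0xF8) << 8) | ((g & 0xFC) << 3) | (b >> 3));
    }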