The morph function lets you convolve the image with a kernel. It's
decently fast right now, but in the future we'll have to optimize it
heavily (unrolling loops, using SIMD instructions, etc.).
Anyway, along with morph I added an edge detection test script showing
how you can use a high-pass filter on an image to pull out all the edges
in it. This is not as good as Canny edge detection, but it's close and
fast enough.
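For reference, here's a minimal sketch of what such a script might look
like, assuming a morph(size, kernel) method that takes a flat list for a
(2*size+1)x(2*size+1) kernel (the exact signature may differ):

    import sensor

    sensor.reset()
    sensor.set_pixformat(sensor.GRAYSCALE)
    sensor.set_framesize(sensor.QQVGA)

    # Classic 3x3 high-pass kernel: the taps sum to zero, so flat regions
    # go to black and only edges (rapid intensity changes) survive.
    kernel = [-1, -1, -1,
              -1, +8, -1,
              -1, -1, -1]

    while True:
        img = sensor.snapshot()
        img.morph(1, kernel)  # size=1 -> 3x3 kernel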
We'll need a Hough Transform system in the future to make edge detection
useful. Not sure how that will be implemented, so that's going to be a
ways off for now.
The old code did not actually implement the erode and dilate kernels
correctly. However, it might have been a little faster because it avoided
the boundary problem.
In the future we can optimize all the kernel code to use different loops
for the edges of the image versus the center. But for now, this is good
enough. QVGA color tracking with kernels will be slow, but the speed can
be improved at QQVGA resolution. A 3x3 kernel is plenty fast; larger
ones are slower.
I also added the ability for you to set the threshold for erode and
dilate. This makes the kernel a little smarter: it won't erode or dilate
a pixel unless the threshold is met. Meaning, you'll be able to use
erode to thin an image down to 1-pixel-wide lines.
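A rough sketch of the usage (the threshold argument name and the exact
neighbor-count semantics here are assumptions on my part):

    # Hypothetical usage on a binary image: a pixel is only eroded when
    # the set-neighbor count fails the threshold, so repeated passes thin
    # blobs toward 1-pixel-wide skeletons instead of erasing them.
    img.erode(1, threshold=2)   # 3x3 kernel; erode if <= 2 neighbors set
    img.dilate(1, threshold=2)  # the mirror operation for dilate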
All the work previously has been more or less leading up to supporting
this function. The line op function opens a file and executes a function
pointer on each line of the opened file to modify the frame buffer.
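Conceptually (the firmware version is C and takes a function pointer),
the flow looks like this runnable sketch, where the "file" is any
iterable of rows and the "frame buffer" is a list of rows:

    def line_op(rows, fb, op):
        for y, line in enumerate(rows):
            fb[y] = op(fb[y], line)  # callback combines fb row with file row

    # Example op: absolute difference of two rows, pixel by pixel.
    def abs_diff(fb_row, file_row):
        return [abs(a - b) for a, b in zip(fb_row, file_row)]

    fb = [[10, 20, 30], [40, 50, 60]]
    line_op([[5, 25, 30], [45, 45, 60]], fb, abs_diff)
    print(fb)  # [[5, 5, 0], [5, 5, 0]]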
It now figures out the file type from the file extension. If no
extension is given, it saves the file as BMP unless it's a JPEG image,
in which case it saves it as JPEG. If you specify an extension and the
file is not of that type, then it will give you an error.
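In sketch form (the names and defaults here are illustrative, not the
actual firmware code):

    def pick_format(path, is_jpeg_image):
        # Figure out the save format from the extension, with defaults.
        ext = path.rsplit(".", 1)[-1].lower() if "." in path else None
        if ext is None:
            return "jpeg" if is_jpeg_image else "bmp"
        if (ext in ("jpg", "jpeg")) != is_jpeg_image:
            raise ValueError("extension does not match the image type")
        return ext

    print(pick_format("/snapshot", False))      # 'bmp'
    print(pick_format("/snapshot.ppm", False))  # 'ppm'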
The new test_save.py should run until it reaches the JPEG image part,
where it quits due to the lack of native JPEG support on OV7725 boards.
Maybe JPEG mode should be supported by just compressing pictures?
There aren't a lot of actual functionality changes from the last commit.
However, switching the basic wrapper library to just long_jump on
failure and moving all the state info into structs required changes to
all the base functions from the last commit. The rest of the changes
link in the new functionality and get the code to compile (usbdbg.c
edits).
Next I'll work on a function which abstracts the problem of opening an
image and executing a line-by-line function op on it. I've already
worked out the code for that, but it's not in this commit to keep things
streamlined.
* With the new integral moving window we can support face detection,
keypoints, and template matching on QVGA frames. However, it has only
been implemented and tested for face detection.
* Increased the max integral frame for easier testing.
Firmware will now automatically detect the appropriate file type and
read that file type in correctly.
Working on tying all of this stuff together next. It's getting a little
too complicated to deal with error cases. Need to add an error message
function layer.
RGB565 reading and writing is going to be slow, but grayscale will go as
fast as the system can.
If Omnivision had just reversed the byte order of the RGB565 data coming
from the camera, we wouldn't have this problem.
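To illustrate the cost: every RGB565 pixel needs a byte swap on the way
in or out, while grayscale bytes can be streamed untouched. Roughly:

    def swap_rgb565(buf):
        # Swap the two bytes of each 16-bit RGB565 pixel in place.
        for i in range(0, len(buf), 2):
            buf[i], buf[i + 1] = buf[i + 1], buf[i]

    px = bytearray(b"\x1f\x00")  # 0x001F stored little-endian
    swap_rgb565(px)              # now the byte order is flipped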
Added BMP file format reading and writing support code and modified the
PPM code to match. Upper-level glue code has been left intact, to be
altered in future commits.
Tested that save() and PPM writing still work. More comprehensive tests
coming soon.
... Kinda concerned that standard image file formats might not cut it
for the speed we'd like to have when using image files in function
calls. I think only grayscale is going to be fast. All other formats
require a lot of prep work.
I think I may modify some of this low-level stuff in the future to
autodetect whether an entire grayscale image can be read in or written
out in one go, to speed that stuff up.
The negate function gives you the ability to negate an image before
running difference on it. The difference function subtracts two images
from each other and returns the abs() of the result.
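Hypothetical usage (the method names are from this commit, but the
argument forms and the file path are assumptions):

    import sensor

    sensor.reset()
    sensor.set_pixformat(sensor.GRAYSCALE)
    sensor.set_framesize(sensor.QQVGA)

    img = sensor.snapshot()
    img.negate()                # px -> 255 - px (optional inversion first)
    img.difference("/ref.pgm")  # px -> abs(px - ref_px), ref read from file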
I believe it would have been optimal to work on the RGB565 image in the
LAB color space. However, since we don't have an inverse LAB LUT, this
is not possible. If we could replace LAB with YUV, that would free up
space for an inverse YUV table (YUV->RGB).
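For reference, the transform such an inverse table would encode is
roughly the following (assuming U and V are centered at zero; the exact
coefficients depend on the YUV variant):

    def yuv_to_rgb(y, u, v):
        # BT.601-style inverse transform, u/v in [-128, 127].
        r = y + 1.402 * v
        g = y - 0.344 * u - 0.714 * v
        b = y + 1.772 * u
        clamp = lambda c: max(0, min(255, int(round(c))))
        return clamp(r), clamp(g), clamp(b)

    print(yuv_to_rgb(128, 0, 0))  # gray stays gray: (128, 128, 128)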
* Filter functions bypass the default line processing in sensor.c and pre-process lines.
* Processing is done on the fly, i.e. filters are called after each line is received.
All the drawing functions have been updated to handle automatic clipping
when drawing offscreen, and they work with both grayscale and RGB565.
Additionally, all functions now accept color arguments.
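A sketch of what the updated calls might look like (the signatures here
are assumptions; the point is that offscreen coordinates are now safe):

    import sensor

    sensor.reset()
    sensor.set_pixformat(sensor.RGB565)
    sensor.set_framesize(sensor.QQVGA)

    img = sensor.snapshot()
    img.draw_line(0, 0, 200, 60, color=(255, 0, 0))          # clipped at border
    img.draw_rectangle(-10, -10, 50, 50, color=(0, 255, 0))  # partly offscreen
    img.draw_circle(80, 60, 30, color=(0, 0, 255))
    img.draw_string(5, 5, "hello", color=(255, 255, 255))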
I've also updated the example scripts with the new functions and tested
them to make sure they work. Additionally, I wrote a test suite for the
drawing functions.
* Use a scanning factor proportional to the current scale.
* Use the new integral moving window to allow two integral images
(sum and sum squared) for fast mean, variance, and standard deviation
(see the sketch after this list).
* Higher FPS and more accurate detection.
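The standard-deviation trick the two integral images enable, in sketch
form (assumes each integral image is padded with a zero row and column,
so ii[y][x] is the sum over the y-by-x top-left rectangle):

    def window_stats(ii, ii_sq, x, y, w, h):
        # Window sum and sum-of-squares, 4 lookups each.
        s  = ii[y+h][x+w] - ii[y][x+w] - ii[y+h][x] + ii[y][x]
        s2 = ii_sq[y+h][x+w] - ii_sq[y][x+w] - ii_sq[y+h][x] + ii_sq[y][x]
        n = w * h
        mean = s / n
        var = s2 / n - mean * mean   # E[x^2] - E[x]^2
        return mean, var ** 0.5      # mean and standard deviation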
* A new integral image implementation that uses a moving window.
* Integral image is computed in steps; each shift computes n new lines
(see the sketch after this list).
* This only requires (image_width * (feature_height+1) * 4) bytes.
* Allows Haar detector to run on QVGA, and allows a second squared
integral image for standard deviation calculations.
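A Python sketch of the moving-window idea (the firmware is C and the
names here are illustrative). Only feature_height+1 integral rows live
in a ring buffer, and each shift absorbs new image lines:

    class IntegralWindow:
        def __init__(self, width, feature_height):
            self.w = width
            self.n_rows = feature_height + 1  # ring buffer height
            self.rows = [[0] * width for _ in range(self.n_rows)]
            self.y = 0                        # next image row to absorb

        def shift(self, image, n):
            # Absorb n new lines; memory stays w*(feature_height+1)*4
            # bytes in the C version (32-bit sums).
            for _ in range(n):
                prev = self.rows[(self.y - 1) % self.n_rows]
                cur = self.rows[self.y % self.n_rows]
                acc = 0
                for x in range(self.w):
                    acc += image[self.y][x]  # running row sum
                    cur[x] = prev[x] + acc   # ii[y][x] = ii[y-1][x] + rowsum
                self.y += 1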
The alloc functions allow you to use the framebuffer as a storage space.
It's very simple but effective: alloc pushes some memory onto a stack,
and when you're done, free pops the stack. Pops (frees) must be done in
the reverse order of pushes (allocs).
In general, functions should call the init code before using the stack,
since it could be in a bad state.
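A conceptual sketch of the scheme (the firmware version is C; the names
here are illustrative):

    class FBStack:
        def __init__(self, size):
            self.mem = bytearray(size)  # the framebuffer region
            self.marks = []             # offsets of live allocations (LIFO)
            self.top = 0

        def init(self):
            # Reset to a known-good state; call this before first use.
            self.marks.clear()
            self.top = 0

        def alloc(self, n):
            if self.top + n > len(self.mem):
                raise MemoryError("fb stack overflow")
            self.marks.append(self.top)
            buf = memoryview(self.mem)[self.top:self.top + n]
            self.top += n
            return buf

        def free(self):
            self.top = self.marks.pop()  # pops must mirror allocs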
Also, I added some wrappers for file system functions to make that stuff
easier. This will be used in the future.
With new RGB565<->RGB888 scaling. This included redoing the LAB/YUV/XYZ
tables. I also translated the table-gen code to Python and added
comments on where the math came from.
And yes, I tested and compared the tables to make sure they weren't
broken. The tables are slightly different, but if you look loosely at
the progression of values you'll see the triplets are very close to each
other when compared. This is to be expected, given I used a slightly
better scaling algorithm.
I also modified the rainbow table so that the RGB888-to-RGB565
translation is done using rounding rather than a hard floor. The same
technique is used for the RGB565<->RGB888 LUTs.
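In the Python table-gen style, the rounding conversion looks roughly
like this (a sketch, not the actual generator code):

    def rgb888_to_rgb565(r, g, b):
        # Round to the nearest representable level instead of flooring,
        # which halves the worst-case quantization error per channel.
        r5 = min(31, int(round(r * 31 / 255)))
        g6 = min(63, int(round(g * 63 / 255)))
        b5 = min(31, int(round(b * 31 / 255)))
        return (r5 << 11) | (g6 << 5) | b5

    def rgb565_to_rgb888(px):
        # Scale each channel back up so full scale maps to 255 exactly.
        r, g, b = (px >> 11) & 0x1F, (px >> 5) & 0x3F, px & 0x1F
        return (int(round(r * 255 / 31)),
                int(round(g * 255 / 63)),
                int(round(b * 255 / 31)))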
Additionally, I added a bunch of stuff to the image library to make
working with images easier. I will be using these helpers in the future.
Finally, I cleaned up trailing whitespace in the font stuff (pet peeve).
Point didn't need many changes. However, for rect I made the merge
function a lot better so it won't alloc while merging, just free.
Additionally, I added a function to get the rectangle intersecting an
image. This will be used by all functions that accept a subimg argument.
It allows the user to pass basically any wild and crazy rect they want,
and the function will find the intersecting area (if it exists) and
return just that to operate on. This is good for "do what I mean"
functionality versus "do what I say".