Blob tracking has now been updated to work without requiring prior
segmentation of the image. You can still run it on a segmented image,
but that is no longer required.
Use the copy color feature of the OpenMV IDE to grab a color from the
image. Once you have that, you can pass the color to find_blobs, which
will output a tuple of lists of blobs, one list per color. By default,
all blobs smaller than 1/1000th of the image are filtered out; however,
you can add a custom filter function which gets the image and the blob
about to be added to the list and decides whether to filter it or not.
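Here's a minimal sketch of what that looks like from Python. The filter
keyword name and the callback shape are assumptions based on the
description above, not confirmed API:

import sensor

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QQVGA)

red = (255, 0, 0)  # value grabbed with the IDE's copy color feature

def my_filter(image, blob):
    # Gets the image and the candidate blob; return True to keep the blob.
    return True

img = sensor.snapshot()
blob_lists = img.find_blobs([red], filter=my_filter)  # one list per color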
For marker tracking, we now have a function called find_markers which
basically merges all the blobs found by find_blobs into one list of
blobs. Each merged blob has a color code value which tells you what
colors are part of that blob. We support tracking up to 30 unique
colors this way.
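Continuing the sketch above (the find_markers call shape and the color
code layout are assumptions based on this description):

blue = (0, 0, 255)  # second tracked color, also grabbed from the IDE

markers = img.find_markers(img.find_blobs([red, blue]))
for m in markers:
    # Each merged blob carries a color code value saying which of the
    # tracked colors (up to 30) are part of it.
    print(m)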
Mean filter -> Fast and easy to use. This will likely be the only filter
that gets a lot of action on the M4.
Median filter -> Works really well, but slow. On grayscale at 160x120
you can get about 10 FPS with a 3x3 kernel. That said, it's still
slow. Also, the code only works for 3x3 and 5x5 kernels.
About the previous histogram filter... technically, that filter should be
better. However, it suffers from a startup cost: finding the median
point in the histogram costs too much to compute, which is what makes
it slow. On very large kernels it would be faster than the sorting
median algorithm I put up... but large kernels will be too slow for
anyone to use anyway. The paper Ibrahim linked to about it showed it
being used for 7x7 kernels and up... so I think the researcher who came
up with the idea was really thinking about large kernels.
Mode filter -> Works great on grayscale. Not so much on color. I think it
needs to be run in the LAB color space instead of the RGB color space; I
say this because it causes pretty strong artifacts around edges. When we
get more flash we'll be able to have a reverse lookup table for LAB to
make the mode filter better. Until then...
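For reference, here's roughly how these filters get called from Python.
The method names and the size argument convention (size=1 for a 3x3
kernel, size=2 for 5x5) match the current API and are assumptions for
this commit:

import sensor

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)  # 160x120 keeps these filters usable

img = sensor.snapshot()
img.mean(1)      # mean filter, 3x3 kernel; runs in place
# img.median(1)  # median filter; only 3x3 and 5x5 kernels are implemented
# img.mode(1)    # mode filter; best on grayscale per the notes above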
So I'm just adding a function to do it cleanly and efficiently. Call
skip_frames() after changing any camera settings to let the sensor
settle. 10 frames by default works fine. Tested it.
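Usage is just:

import sensor

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames()    # defaults to 10 frames
sensor.skip_frames(30)  # or pass a count to wait longer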
Has a bias value that lets you control whether it's really a midpoint,
min, or max filter, or something in between. Run at 160x120 or lower;
320x240 is slow (which seems to be the case for all convolutions at
that resolution).
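A quick sketch of the bias behavior (the method and keyword names match
the current midpoint API and are assumptions for this commit):

img = sensor.snapshot()
img.midpoint(1, bias=0.5)  # true midpoint: (min + max) / 2
# bias=0.0 behaves like a min filter, bias=1.0 like a max filter,
# and values in between blend toward either extreme.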
Added setters for these camera settings. Being able to disable AWB is
necessary for color tracking to work correctly. AGC still runs, which
causes lighting shifts; it may need to be disabled too. Not sure if I
want to do that or not, however, because without it lighting won't get
normalized to remain at a certain level. So, turning AGC off may cause
issues in other ways.
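Something like this, for color tracking (the setter names here follow
the current sensor API and are assumptions for this commit):

import sensor

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.set_auto_whitebal(False)  # stop AWB from shifting tracked colors
# sensor.set_auto_gain(False)    # AGC left on so lighting stays normalized
sensor.skip_frames(10)           # let the settings settle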
First, a few things:
The MLX 16x4 sensor just has too low a resolution for mass appeal at
the price. The product is not going to sell very well. We need to look
into supporting sensors with a better res, like the FLIR 1. The MLX
module was renamed to the "flir" module with this idea in mind.
The flir code now takes care of doing the scaling and blending itself. I
did this to get rid of the user having to scale and blend the image
themselves. It's too easy to run out of memory given our current
ultra-small heap. In general, anything that requires multiple images in
RAM has got to go. When we do another OpenMV Cam with external RAM in
the MB range then maybe such functions will be safe. But right now they
are definitely not.
Anyway, moving on, I fixed a few bugs in the MLX math code, but for
the most part it was correct. I also added the recommended polling code
for brownouts as required by the datasheet.
Last, I designed this code like the LCD code to support a type value
when initialized. This will allow the system to use a different sensor
in the future without any API changes for the user.
I will add test scripts for this next. Basic usage follows:
import flir
flir.init()
flir.display_ir(sensor.snapshot())
And that's it. Super easy. If the user wants the raw temp values they
can use flir.read_ir() to get the ta and to values. The display function
has hidden alpha and scale arguments for controlling the blending and
the min/max scaling.
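For example (the alpha and scale keyword names and value ranges are
guesses based on the description above):

import sensor, flir

flir.init()
img = sensor.snapshot()
flir.display_ir(img, alpha=128, scale=(20, 40))  # 50% blend, 20-40 C range
ta, to = flir.read_ir()  # ambient temp (ta) and object temps (to)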
The previous way we worked out scaling kinda sucked... it was a good
shot, but controllable mins and maxes that autoscale by default just
work better. If the user knows the temp range then they can just set the
min and max.
Anyway, longest commit ever done.
File reading is running ultra fast now. We're getting that SD card speed
the STM32 promised. The file buffer commands have been updated to
allocate as much available memory as possible so as much of a file as
possible can be read in at once. This works really great.
Note, however, that while the file buffer is active you have to use the
file buffer versions of tell and size. I spent a few hours tracking down
an error related to not using the buffered versions.
All file write functions now use fb_alloc to go much faster. Writes are
redirected to the extra frame buffer RAM and are grouped until they can
be written in a massive multi-block write to the SD card. We get the
best SD card write speed by doing things this way.
Ideally we'd want to buffer the whole file... but this is about as good
as we're going to get for now.
Going to fix reading functions to use the same buffer next.
The built-in mjpeg module allows you to record videos seamlessly. It
will automatically compress the frame buffer using the extra space in
main RAM, so you don't have to pass it JPEG images. It gets 7 FPS at
320x240 while connected to the computer too (it has to compress the
frame twice in this situation).
Anyway, the module works like Gif.
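A minimal recording sketch, mirroring the Gif-module pattern (the
constructor and method names follow the current mjpeg module and are
assumptions for this commit):

import sensor, mjpeg

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)

m = mjpeg.Mjpeg("test.mjpeg")
for i in range(70):
    m.add_frame(sensor.snapshot())  # raw frames; the module compresses them
m.close(7)  # finalize the file; the fps argument here is an assumption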
Now you can just grab all the free RAM in the frame buffer in one go.
This fixes problems with figuring out how many lines to allocate. Will
update the line op code with this new info later.
Color gifs look very good for how bad you'd expect them to be with just
7 bits of color (rgb232) - quite amazing. Also, I hardened the gif
module to make it "user ready".
You can now get the color stats for an area of the image. The stats
function returns the mean, median, mode, min, max, st_dev,
lower_quartile, and upper_quartile.
This function allows you to automate binary and threshold functions
based on what's in the image.
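For instance, something like this could auto-pick a binary threshold
from the quartiles (the function and accessor names here are
assumptions; only the returned fields come from the description above):

img = sensor.snapshot()
s = img.get_statistics()  # mean, median, mode, min, max, st_dev, quartiles
# binarize around the interquartile range instead of hand-tuning a threshold
img.binary([(s.lq(), s.uq())])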
The morph function lets you convolve the image with a kernel. It's
decently fast right now, but in the future we'll have to optimize it by
a lot (unrolling loops, using SIMD instructions, etc.).
Anyway, along with morph I added an edge detection test script showing
how you can use a high-pass filter on an image to get all the edges in
it. This is not as good as Canny edge detection... but it's about the
same and fast enough.
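The idea is just a morph() call with a high-pass kernel, roughly like
this (the size argument convention is an assumption; the kernel values
are the standard Laplacian-style high pass):

import sensor

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)

kernel = [-1, -1, -1,
          -1, +8, -1,
          -1, -1, -1]  # high-pass kernel: flat areas go to 0, edges stay

img = sensor.snapshot()
img.morph(1, kernel)  # size=1 -> 3x3 kernel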
We'll need a Hough transform system in the future to make edge detection
useful. Not sure how that will be implemented... so that's going to be
far away for now.
The old code did not actually implement the erode and dilate kernels
correctly. However, it might have been a little faster because it
avoided the boundary problem.
In the future we can optimize all the kernel code to have different loops
for doing the edges of the image versus the center. But for now, this is
good enough. QVGA color tracking with kernels will be slow, but the
speed can be improved with QQVGA resolution. Using a 3x3 kernel is
plenty fast; larger ones are slower.
I also added the ability for you to set the threshold for erode and
dilate. This lets you make the kernel a little bit smarter so that it
won't erode or dilate a pixel unless the threshold is met. Meaning,
you'll be able to use erode to erode an image down to 1-pixel-wide
lines.
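Roughly (the threshold keyword name is an assumption; the semantics
follow the description above):

img = sensor.snapshot()
img.binary([(200, 255)])   # erode/dilate work on a binarized image
img.erode(1, threshold=2)  # 3x3 kernel; only erodes a pixel when the
                           # neighbor-count threshold is met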
All the previous work has more or less been leading up to supporting
this function. The line op function will open a file and execute a
function pointer on each line of the opened file to modify the frame
buffer.
It now figures out the file type from the file extension. If no file
extension is given, it just saves the file as a BMP if it's not a JPEG
image, or as a JPEG if it is. If you specify an extension and the file
is not of that type, then it will give you an error.
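In practice (the save call shape follows the current API and is an
assumption for this commit):

img = sensor.snapshot()
img.save("shot")      # no extension: saved as a BMP (JPEG if already JPEG)
img.save("shot.bmp")  # explicit extension must match the image type,
                      # otherwise you get an error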
The new test_save.py should run until you reach the JPEG image part,
where it quits due to the lack of native JPEG support on OV7725 boards.
Maybe JPEG mode should be supported by just compressing pictures?
There aren't a lot of actual functionality changes from the last commit.
However, switching the basic wrapper library to just long_jump on
failure and moving all the state info into structs required changes to
all the base functions in the last commit. The rest of the changes link
in the new functionality and get the code to compile (usbdbg.c edits).
Next I'll work on a function which abstracts the problem of opening an
image and executing a line-by-line function op on it. I already worked
out the code for that, but it's not in this commit to keep things
streamlined.
* With the new integral moving window we can support face detection,
keypoints, and template matching on QVGA frames. However, it was only
implemented and tested for face detection.
* Increasing the max integral frame size now for easier testing.