Now you can find circles with your OpenMV Cam! The algorithm can eke out
about 7 FPS on a 160x120 image, which is quite impressive given how
computationally expensive circle finding is...
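A minimal usage sketch (the threshold value here is illustrative, not a
recommendation):

    import sensor, time

    sensor.reset()
    sensor.set_pixformat(sensor.GRAYSCALE)
    sensor.set_framesize(sensor.QQVGA)  # 160x120
    clock = time.clock()

    while True:
        clock.tick()
        img = sensor.snapshot()
        for c in img.find_circles(threshold=2000):  # raise to reject weak circles
            img.draw_circle(c.x(), c.y(), c.r())
        print(clock.fps())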
This is mainly for easy line following. In non-robust mode the line is
computed using least squares. In robust mode the line is computed using
the Theil-Sen median-of-slopes method. We do not use the Siegel
median-of-medians method because it costs more CPU time... but, more
importantly, there's no way to improve the centroid estimate, so even if
the slope is more robust the line will still be drawn in the wrong place.
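A usage sketch, assuming this ships as get_regression with a robust
keyword (the grayscale threshold is illustrative, and the usual sensor
setup is assumed):

    img = sensor.snapshot()
    line = img.get_regression([(200, 255)], robust=True)  # Theil-Sen when robust=True
    if line:
        img.draw_line(line.line(), color=127)  # line.line() is an (x0, y0, x1, y1) tuple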
These two new classes allow you to record image data for later viewing
at the same speed the image data was recorded. Unlike GIF/MJPEG, the
image data is stored on the file system completely uncompressed in
native frame buffer format, making super fast reading and writing
possible. Recording VGA grayscale at ~13 FPS is possible, along with
playing it back. (That's about 30 Mb/s, folks.)
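A rough sketch of both directions, assuming the ImageWriter/ImageReader
names and the method signatures shown here:

    import sensor, image

    sensor.reset()
    sensor.set_pixformat(sensor.GRAYSCALE)
    sensor.set_framesize(sensor.VGA)

    writer = image.ImageWriter("/stream.bin")  # raw frames in native FB format
    for i in range(100):
        writer.add_frame(sensor.snapshot())
    writer.close()

    reader = image.ImageReader("/stream.bin")
    while True:
        # next_frame paces playback at the recorded frame rate
        img = reader.next_frame(copy_to_fb=True, loop=True)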
...
The motivation for writing these scripts is so that you can record video
of something like a line following track, take that video home, and work
on computer vision algorithms for that data.
These classes should make it a lot easier to use the camera at home now.
The frame rate can now hit 30 FPS when JPEG compression is off. Merging
of lines has been perfected too, which greatly reduces the noise in the
output. Also, lines are now objects, so you can get their values in an
easy way.
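For example (the threshold value is illustrative; the usual sensor setup
is assumed):

    for l in img.find_lines(threshold=1000):
        img.draw_line(l.line())
        print(l.theta(), l.rho(), l.length())  # named accessors on the line objects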
Everything works. The out-of-memory issue is fixed and the rotation
value is valid now. For 320x240 operation on the STM32H7 we're going to
need on the order of 1 MB for the entire frame buffer. The code is
designed to handle getting that amount of memory without any new changes
for 320x240 support.
Sped up the algorithm by fixing the abs() issue. Do not use that
function in any of your code. It by itself cut the speed of the code in
half. I don't know what's in that function, but I'm guessing it does ABS
of a float using ints or something.
I made the zoom parameter functional now too, so you can use lens_corr
to zoom in on the image. Argument parsing is handled too. Finally, I
updated the only script where this is used.
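For example (the strength and zoom values are illustrative):

    img = sensor.snapshot().lens_corr(strength=1.8, zoom=1.2)  # undistort, then zoom in 1.2x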
Note that I'm able to get more than 10 FPS at 160x120 on the M4 and 15
FPS at 160x120 on the M7. Previously this was about 5 FPS and 7.5 FPS
respectively.
The new API is backwards compatible with the previous one except for
advanced features. The new blob code uses a flood-fill algorithm that is
3x faster at filling out blobs than the previous code. On the M7 the
performance cap of 30 FPS is usually reached.
Additionally, blobs are objects with named attributes now, so you don't
have to access them by index anymore. However, index access is still
supported.
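For example (the LAB threshold tuple is illustrative, and the index
positions assume the old 10-value tuple layout):

    for b in img.find_blobs([(30, 100, 15, 127, 15, 127)]):
        print(b.cx(), b.cy(), b.pixels())  # named accessors
        print(b[5], b[6], b[4])            # the same values via index access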
* Added pooling functions to make getting small images easy. set_binning
works too... but it zooms in way too much. The pooling functions allow
you to shrink the image without zooming in.
* To make the pooling functions easy to use I created a version that
pools the image out of place and one that pools the image in place. The
in-place pooling function can work on the frame buffer (see edits to
sensor.c, and the first sketch after this list).
* I added the code to do Hann windowing to the FFT lib. However, I
commented it out after it improved performance by basically zero.
Specialized windowing stuff will only come in handy for folks trying to
tune their algorithm... not in general for everything.
* I added subpixel resolution to the phase correlation code. You can now
track image movement really precisely. Additionally, I fixed up the
displacement outputs to give expected results. I also added a QoR
(quality of result) output for the displacement code so that you can
know when the results are bad (see the second sketch after this list).
* Finally, an example script has been added to show off the features.
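First sketch, for the pooling functions (assuming mean_pool/mean_pooled
naming for the in-place and out-of-place versions):

    img = sensor.snapshot()        # e.g. 160x120
    small = img.mean_pooled(4, 4)  # out of place: returns a new 40x30 copy
    img.mean_pool(4, 4)            # in place: shrinks the frame buffer image itself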
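Second sketch, for the subpixel displacement output (the response()
quality-of-result accessor and the 0.1 cutoff are assumptions):

    prev = sensor.snapshot().copy()
    while True:
        img = sensor.snapshot()
        disp = img.find_displacement(prev)  # phase correlation vs. previous frame
        if disp.response() > 0.1:           # QoR check: low response = bad result
            print("x: %f y: %f" % (disp.x_translation(), disp.y_translation()))
        prev = img.copy()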
Finished going through imlib.c.
-> Histeq uses fb_alloc now and has a hook for RGB histeq for when the
reverse YUV LUT is added (coming soon in the next PR).
Cleaned up py_helper.c/h
-> No functional changes. Just added some header info.
Finished going through py_image.c
* 1 - Finished general code cleanup and updated everything to use the
new library functions. In particular, I updated the remaining find_*
functions with the new ROI clipping code when they accept ROIs.
* 2 - Made the blob functions return a list when nothing is found so you
don't have to do an if on the returned value anymore (see the sketch
after this list).
* 3 - img subscript is more powerful now, allowing image reading and
writing. I updated this because I had to use it to find a previous bug
with socket.send() for the WINC driver.
* 4 - Renamed find_eyes to find_eye, because it just finds one eye.
* 5 - Other than that, just general code cleanup to make functions look
consistent.
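A quick sketch of items 2 and 3 (my_color is a placeholder, and the
linear-index pixel access shown assumes a grayscale image):

    blobs = img.find_blobs([my_color])
    for b in blobs:      # an empty list comes back when nothing is found,
        pass             # so no None check is needed anymore

    pixel = img[0]       # read a pixel through the subscript operator
    img[0] = 255         # write a pixel through the subscript operator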
And yes, the changes have been tested. Face tracking, eye tracking,
keypoints, etc. all still work.
Future things to do before release:
1 - Change all LAB stuff to YUV.
2 - Add in the reverse YUV->RGB LUT and update functions like Mode() to
use it so they don't generate messed-up outputs; histeq() too.
3 - Add any remaining sensor control functions, like AGC control.
* Added the ability to control the quality on JPEG functions... However,
due to our JPEG implementation this doesn't seem to help. Images at 90%
JPEG quality should look about the same as regular images, but you can
still see heavy degradation at 90%. E.g. text is unreadable. Not exactly
sure why this is happening, but it can be fixed later.
* Changed the compress() function to compressed(). Also, it now
compresses using fb_alloc to prevent realloc issues when compressing.
* Added a new compress() function. This function compresses an image in
place, and if that image is the frame buffer then it will update the
frame buffer bpp value to reflect that the image was compressed. Users
can use this function to basically finalize the frame buffer and then
pass the FB to functions that need to send image bytes. The benefit of
using this function is that it should allow higher quality JPEGs and let
everything run at a faster speed while connected to the IDE.
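In script form the split looks something like this (the quality value is
illustrative):

    jpg = img.compressed(quality=90)  # out of place: returns a new JPEG image
    img.compress(quality=90)          # in place: img now holds JPEG bytes, and the
                                      # FB bpp is updated if img is the frame buffer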
I made this function to speed up WiFi. However, I encountered a bug with
the winc.send() method. It appears to zero the bytes it sends. I didn't
debug further except to verify that the image data became zero after
calling send.
Everything except the DAC script works. That has to be fixed. Anyway, we
have a ton of examples for launch. So, hopefully, questions about how to
do stuff should be limited.
That said, the PYB module is still in a poor state. Stuff kinda works
and kinda doesn't.
One day... There won't be any fires to put out on this project and I can
stop working so hard.
* Filled in all the board control examples. Everything works except for
DAC.
* Moved the test drawing scripts to the drawing dir, renamed them, and
added comments.
* Filled in all the image filter stuff. There are still some tests that
could be renamed, commented, and added to this folder, but I will do
that later.
* Fixed motion detection thresholds.
* Fixed LCD script comments.
* Fixed BLE return value.
Removed MicroPython code from the image library. Also, blobs are now
10-value tuples by default. The multilist thing has been removed from
blobs: find_blobs now returns just a list of blobs instead of a tree of
lists.
Filter functions still work too.
Pixels, centroid, and orientation are calculated in the blob code now.
As for threshold, it is no longer needed (plus, it required storing a
secondary image in RAM, which isn't really something we can handle).
Blob tracking has now been updated to work without requiring prior
segmentation of the image. You can still run it on a segmented image,
but that is not needed anymore.
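Sketch of the new default return shape, assuming the 10 values are
(x, y, w, h, pixels, cx, cy, rotation, code, count):

    for blob in img.find_blobs([my_color]):
        x, y, w, h, pixels, cx, cy, rotation, code, count = blob
        img.draw_rectangle((x, y, w, h))
        img.draw_cross(cx, cy)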
Use the copy color feature of the OpenMV IDE to get a color in the
image. Once you have that, you can pass the color to find_blobs, which
will output a tuple of lists of blobs, one list per color. By default,
all blobs smaller than 1/1000th of the image are filtered out; however,
you can add a custom filter function, which gets the image and the blob
about to be added to the list, and decide whether to filter it or not
(sketched below).
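Roughly like this; the filter keyword name and the callback signature
are assumptions based on the description above:

    def blob_filter(img, blob):
        # the blob about to be added; return True to keep it, False to drop it
        return blob[4] > 50  # e.g. require more than 50 pixels

    blob_lists = img.find_blobs((red_color, blue_color), filter=blob_filter)
    for blobs in blob_lists:  # one list of blobs per color
        for blob in blobs:
            img.draw_rectangle(blob[0:4])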
For marker tracking, we now have a function called find_markers which
basically merges all the blobs found by find_blobs into one list of
blobs. Each new blob has a color code value which tells you what colors
are part of that blob. We support tracking up to 30 unique colors this
way.
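A hedged sketch (the exact find_markers call pattern and the bitmask
reading of the color code are assumptions):

    blobs = img.find_blobs((red_color, blue_color))
    for m in img.find_markers(blobs):
        # color code: e.g. bit 0 set if red is in the blob, bit 1 if blue is
        print("marker code %d at (%d, %d)" % (m[8], m[5], m[6]))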