Redid the phase correlation code again so it's one method call now. This
method call can either do log-polar phase correlation to get rotation/
scale or plain phase correlation to get translation (x/y). Eventually it
will also be able to do both at once, but I don't have that quite working yet.
I've updated the example scripts to reflect the new code too.
Finally, I had to fix a bug in the rotation correction code.
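For reference, here's a minimal sketch of how the single call could be used in both modes, assuming it's exposed as img.find_displacement() with a logpolar flag and a displacement result object (treat the exact names as assumptions and check the example scripts):

import sensor, image, time

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.B64X64)     # power-of-2 resolution (assumed constant name)
sensor.skip_frames(time=2000)

template = sensor.snapshot().copy()     # reference frame kept on the heap

while True:
    img = sensor.snapshot()
    t = img.find_displacement(template)                 # translation (x/y) mode
    r = img.find_displacement(template, logpolar=True)  # rotation/scale mode
    print("x %0.2f y %0.2f rot %0.2f scale %0.2f" % (
        t.x_translation(), t.y_translation(), r.rotation(), r.scale()))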
...
Once I've got the full pipeline working I will post scripts for that. All
the code is in there and it's been somewhat debugged... However, I
can't get a useful phase correlation lock out of the log-polar FFT magnitude.
I plan to look into noise filtering and spectral whitening to fix this.
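For background, the spectral-whitening idea is just normalizing the cross-power spectrum to unit magnitude so only phase (i.e. shift) information survives the noise in the FFT magnitudes. A desktop-side NumPy sketch of the standard technique (not the camera firmware code):

import numpy as np

def phase_correlate(a, b, eps=1e-9):
    # a, b: 2D float arrays of the same power-of-2 size.
    A = np.fft.fft2(a)
    B = np.fft.fft2(b)
    R = A * np.conj(B)
    R /= (np.abs(R) + eps)           # spectral whitening: keep phase, drop magnitude
    corr = np.real(np.fft.ifft2(R))  # sharp peak at the translation offset
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrap-around peak indices to signed shifts.
    if dy > a.shape[0] // 2: dy -= a.shape[0]
    if dx > a.shape[1] // 2: dx -= a.shape[1]
    return dx, dy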
Just doing one big commit/PR here since I noticed that breaking it up
causes issues.
Anyway, these fixes give us GOOD/WORKING/FAST optical flow now on the
OpenMV Cam M7. A number of changes were made to the optical flow
scripts. You now have absolute and differential estimation example
scripts, plus the ability to measure rotation and scale changes.
Linear/log-polar conversion was added for this, and users may use the
new code for generic image manipulation too. Finally, I updated the
power-of-2 resolutions, since you actually HAVE to use them with optical
flow for the phase correlation code to work correctly.
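A hedged sketch of the differential flavor (comparing each frame against the previous one instead of a fixed reference), assuming the power-of-2 resolution constants and the find_displacement() call from above; the shipped example scripts are the reference:

import sensor, image, time

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.B64X64)   # power-of-2 frame size required for phase correlation
sensor.skip_frames(time=2000)

prev = sensor.alloc_extra_fb(sensor.width(), sensor.height(), sensor.GRAYSCALE)
prev.replace(sensor.snapshot())

clock = time.clock()
while True:
    clock.tick()
    img = sensor.snapshot()
    d = img.find_displacement(prev)   # frame-to-frame (differential) shift
    prev.replace(img)                 # the next comparison is against this frame
    print("dx %0.2f dy %0.2f fps %0.1f" % (d.x_translation(), d.y_translation(), clock.fps()))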
I have some more advanced scripts coming after this. But this commit is
already getting kinda large so I'm stopping it here.
Image comparison using SSIM. It can be used to detect image
differences... but the algorithm was designed to compare image quality
and look at compression artifacts. Anyway, it works okay for detecting
frame differences.
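A hedged sketch of using it for frame differencing, assuming the SSIM comparison is exposed as something like img.get_similarity() returning per-block SSIM statistics (exact names may differ from the shipped scripts):

import sensor, image, time

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)
sensor.skip_frames(time=2000)

ref = sensor.alloc_extra_fb(sensor.width(), sensor.height(), sensor.GRAYSCALE)
ref.replace(sensor.snapshot())      # reference frame to diff against

while True:
    img = sensor.snapshot()
    sim = img.get_similarity(ref)   # SSIM-based comparison (assumed API)
    # Values near 1.0 mean "very similar"; a low minimum flags a changed region.
    print("mean %0.3f min %0.3f" % (sim.mean(), sim.min()))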
Both algorithms were tested on the OpenMV Cam using images loaded from a
file and work correctly. However, shot noise from sensor.snapshot()
makes the output value somewhat worthless unless you've controlled for
it. Anyway, the illuminvar method works best when the image is
constrained to a very particular viewpoint: looking at a flat scene
without shadows, and then a shadow enters.
(Not adding demos for these methods since the output looks like crap
unless you've put some work into constraining the scene... I need to add
HDR code and other stuff to the sensor module to get better images.)
Now you can find circles with your OpenMV Cam! The algorithm can eke out
about 7 FPS on a 160x120 image, which is quite impressive given how
computationally expensive circle finding is...
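A hedged sketch at 160x120, assuming the Hough-style img.find_circles() call with a threshold argument (tune the threshold for your scene):

import sensor, image, time

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)   # 160x120, as benchmarked above
sensor.skip_frames(time=2000)
clock = time.clock()

while True:
    clock.tick()
    img = sensor.snapshot()
    for c in img.find_circles(threshold=2000):
        img.draw_circle(c.x(), c.y(), c.r(), color=255)
    print(clock.fps())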
For easy line following, mainly. In non-robust mode the line is computed
using least squares. In robust mode the line is computed using the
Theil-Sen median-of-slopes method. We do not use the Siegel
median-of-medians estimator because it costs more CPU time... but, more
importantly, there's no way to improve the centroid estimate, so even if
the slope is more robust the line will be drawn in the wrong place.
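A hedged line-following sketch, assuming the fit is exposed as img.get_regression() with a robust flag selecting Theil-Sen; the threshold below is a placeholder for a dark line on a light floor:

import sensor, image, time

LINE_THRESHOLD = (0, 60)   # grayscale range for the line -- tune for your track

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)
sensor.skip_frames(time=2000)

while True:
    img = sensor.snapshot()
    line = img.get_regression([LINE_THRESHOLD], robust=True)   # Theil-Sen fit
    if line:
        img.draw_line(line.x1(), line.y1(), line.x2(), line.y2(), color=127)
        print("theta %d rho %d mag %d" % (line.theta(), line.rho(), line.magnitude()))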
These two new classes allow you to record image data for later viewing
at the same speed it was recorded. Unlike GIF/MJPEG, the image data is
stored on the file system completely uncompressed, in native frame
buffer format, making super fast reading and writing possible.
Recording VGA grayscale at ~13 FPS is possible, along with playing it
back. (That's about 30 Mb/s, folks.)
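A hedged record/playback sketch, assuming the two classes are exposed roughly as image.ImageWriter / image.ImageReader with add_frame() and next_frame(); check the shipped scripts for the exact interface:

import sensor, image, time

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.VGA)
sensor.skip_frames(time=2000)

# Record ~5 seconds of raw frame-buffer data to the SD card.
writer = image.ImageWriter("/stream.bin")
start = time.ticks_ms()
while time.ticks_diff(time.ticks_ms(), start) < 5000:
    writer.add_frame(sensor.snapshot())
writer.close()

# Play it back at the recorded frame rate.
reader = image.ImageReader("/stream.bin")
while True:
    img = reader.next_frame(copy_to_fb=True, loop=False)
    if not img:
        break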
...
The motivation for writing these scripts is so that you can record video
of something like a line following track, take that video home, and work
on computer vision algorithms for that data.
These classes should make it a lot easier to use the camera at home now.
Frame rate can now hit 30 FPS when JPEG compression is off. Merging of
lines has been much improved too, which greatly reduces noisy output.
Also, lines are now objects, so you can get their values in an easy way.
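A hedged sketch of the line-object interface, assuming img.find_lines() takes merge-margin arguments and the returned objects expose theta/rho and endpoint accessors:

import sensor, image, time

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)
sensor.skip_frames(time=2000)
clock = time.clock()

while True:
    clock.tick()
    img = sensor.snapshot()
    for l in img.find_lines(threshold=1000, theta_margin=25, rho_margin=25):
        img.draw_line(l.x1(), l.y1(), l.x2(), l.y2(), color=255)
        print("theta %d rho %d" % (l.theta(), l.rho()))
    print(clock.fps())   # around 30 FPS with JPEG compression off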
The user can now call compressed_for_ide() and compress_for_ide() on an
image to make a JPEG-compressed image formatted for transmission over a
data link other than USB. Note that OpenMV IDE will automatically handle
one of these compressed images ending up in the frame buffer and display
it like normal.
To send the image data the user can do:
print(img.compress_for_ide(), end='')
print(img.compressed_for_ide(), end='')
uart.write(img.compress_for_ide())
uart.write(img.compressed_for_ide())
and so on. As mentioned above, compress_for_ide() compresses the image
in place (just like compress()), and that in-place compressed image will
then end up in the JPEG buffer. OpenMV IDE will automatically handle
decoding these special compressed images when this happens.
All variations of the above code have been tested and are working.
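For completeness, a hedged end-to-end sketch of streaming IDE-formatted JPEGs over a UART (the UART number and baud rate are placeholders for whatever data link you're using):

import sensor, image
from pyb import UART

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)

uart = UART(3, 115200)   # placeholder UART/baud -- match your link

while True:
    img = sensor.snapshot()
    uart.write(img.compressed_for_ide())   # send a compressed copy; the frame buffer stays raw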