All of our argument parsing code has now been updated to handle
positional as well as keyword arguments in our Python libraries.
Basically, Python allows you to pass some number of positional arguments
to functions/methods followed by keyword arguments (you cannot have more
positional arguments after keyword arguments). Previously, our code
would only look for keyword arguments. Now, it grabs as many positional
arguments as it can and then processes keyword arguments. Note: In the
case of a positional argument value being passed for a parameter
followed by a keyword value for that same parameter, the keyword value
is taken (since it comes afterward).
Because arguments were only passed in keyword form before, this update
has no effect on current code. However, moving forward, argument
positions are now locked and cannot be moved around.
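For example, here's what this means at a call site, using blend() as
the guinea pig (its (image, alpha, mask) signature is assumed here for
illustration):

    import sensor

    sensor.reset()
    sensor.set_pixformat(sensor.GRAYSCALE)
    sensor.set_framesize(sensor.QQVGA)

    img = sensor.snapshot()
    other = img.copy()  # heap copy, just to have a second image

    img.blend(other, 128)            # positional alpha now works
    img.blend(other, alpha=128)      # keyword form, as before
    img.blend(other, 64, alpha=128)  # both given: keyword wins -> 128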
Added binary image support to the math operations and updated them to
support masks. replace() now also supports mirroring operations.
Finally, added the missing basic math ops like add/sub/mul/div. The
operations are designed to work as image blending operations, so they
take care of scaling their output accordingly.
binary() can now zero things so you can remove bright lights. All the
bitwise ops (and/or/xor/etc.) accept masks. Erode and dilate now accept
masks. And finally, you can now pass positional arguments instead of
keywords for folks who don't read the documentation. Also, the binary
image type is now supported for these methods.
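A quick sketch of the new knobs, assuming zero and mask work as keyword
arguments the way they're described above:

    import sensor

    sensor.reset()
    sensor.set_pixformat(sensor.GRAYSCALE)
    sensor.set_framesize(sensor.QQVGA)

    img = sensor.snapshot()
    mask = img.copy().binary([(128, 255)])  # non-zero pixels = masked in

    # Zero out bright pixels (e.g. bright lights) instead of
    # thresholding the whole image to black and white.
    img.binary([(200, 255)], zero=True)

    # Morphology ops can now be restricted to the mask region.
    img.erode(1, mask=mask)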
I'm putting in all this work because I saw the need for it when I was
doing shadow removal.
Note: Some effort needs to be put into optimizing the py_image.c code
soon. This is on the todo list before the next release.
All the notes about how to implement wifi programming are in the code.
Steps:
1. Get wifi_apply_settings() working first and make sure you can turn
the wifi shield on in the right mode. Then add the necessary hooks into
the network code so that previous user wifi code still works. Also,
make sure to handle startup and shutdown gracefully. Basically, get all
the lifecycle code working first before moving to the next step so
nothing gets in a weird state and bugs creep in...
2. Get the beacon method working. Once this works, OpenMV IDE should
see the camera when you hit the connect button.
3. Do the code to turn off the regular usbdbg interface and switch to
having the data come from wifi_dbg. This isn't a lot of code... but, it
will be tricky since you no longer will have USB frames to work with.
All bytes are just going to come randomly and in bursts so you have to
handle the serial stream yourself... (Kwabena can help with writing a
state machine for dealing with this type of stuff if you want. I do it
all the time. A rough sketch follows.)
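Here's the kind of byte-stream state machine step 3 needs. The two-byte
sync header, the one-byte length field, and the handler are all made up
for illustration; the point is that the parser keeps its state across
reads, so packets split over many bursts still reassemble:

    # Hypothetical framing: [0x55, 0xAA, len, payload...]
    SYNC0, SYNC1, LEN, PAYLOAD = range(4)

    state = SYNC0
    length = 0
    payload = bytearray()

    def handle_packet(p):  # hypothetical handler
        print("packet:", bytes(p))

    def feed(data):  # call with each burst of bytes off the wifi link
        global state, length, payload
        for b in data:
            if state == SYNC0:
                state = SYNC1 if b == 0x55 else SYNC0
            elif state == SYNC1:
                state = LEN if b == 0xAA else SYNC0
            elif state == LEN:
                length = b
                payload = bytearray()
                state = PAYLOAD if length else SYNC0
            elif state == PAYLOAD:
                payload.append(b)
                if len(payload) == length:
                    handle_packet(payload)
                    state = SYNC0

    feed(b"\x55\xaa\x03he")  # header + first two payload bytes...
    feed(b"l")               # ...rest arrives later; completes here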
Calling remove_shadows() on an image without a background
source-of-truth image now works. However, that said, the shadow remover
isn't suitable for anything other than removing shadows from an image
of a concrete floor or something of the like. In general, it can only
remove shadows from a scene that has nothing else in it except for a
hard edge shadow.
Improving this to work for anything is about a month of work. I've
researched enough about shadow removal to now know the optimal way to
do it. However, it requires many steps and a large amount of RAM. On
the H7 I may revisit this since it should be possible there.
...
In order to get the shadow remover working well I had to add a few
features to the image library and fix some of the convolution code.
These fixes will likely be more useful than the shadow removal code.
Note the addition of the new get_threshold() method. This computes
Otsu's threshold on a histogram, allowing you to pick the optimal color
bounds.
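For example, something like this should pick the optimal grayscale cut
automatically (assuming get_threshold() hangs off the histogram
object):

    import sensor

    sensor.reset()
    sensor.set_pixformat(sensor.GRAYSCALE)
    sensor.set_framesize(sensor.QVGA)

    img = sensor.snapshot()
    # Otsu's method picks the threshold that best separates the
    # histogram into two classes (background vs. foreground).
    t = img.get_histogram().get_threshold()
    img.binary([(t.value(), 255)])  # keep everything above the cut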
OpenMV IDE includes an ini file generator which will let you set board
settings easily from the IDE. Currently, the IDE has support for
setting up the WiFi shield along with adding a REPL UART.
Anyway, this commit adds support for the OpenMV Cam to parse an ini
file on startup to configure things before starting main.py. WiFi
support is not yet implemented. However, we now have the ability to
turn on the UART and put the REPL terminal on it at startup, given a
setting in the ini file.
(Why not use boot.py like normal MP? While that is more flexible, it's
much harder for the IDE to easily write out settings for you, which is
what most users will want to do versus coding this up.)
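To make the idea concrete, a hypothetical ini along these lines (the
section and key names are invented for illustration; the real schema is
whatever the IDE generator writes out):

    ; parsed on startup, before main.py runs
    [uart]
    repl = 1          ; put the REPL terminal on the UART
    baudrate = 115200

    [wifi]            ; not implemented yet
    enabled = 0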
...
The motivation for adding REPL UART support in particular is so that
the OpenMV Cam can be used as a slave processor to IoT-type processors
like the ESP32/ESP8266/ParticlePhoton/ElectricImp. In particular, a
processor like the ParticlePhoton can control the OpenMV Cam's reset
wire. Wake the camera up by releasing reset, then send a script to it
over the UART after it powers on. The camera will then run the script,
do computer vision, and report results back over the UART to the
ParticlePhoton. Users can then push new scripts to the OpenMV Cam from
the cloud, allowing for semi-flexible firmware fixes for the OpenMV Cam
over low-data-rate networks.
By setting this feature up, the need for OpenMV to offer a WiFi IoT
system is reduced, as we can now just be the best camera for
everything.
...
Due to... I don't know... ctrl-c doesn't work on the duplicated UART.
https://github.com/micropython/micropython/issues/1568
Not sure how to handle this. I don't want to fix it since it needs to
be fixed by MP upstream. Right now the workaround is for the master MCU
to just reset the OpenMV Cam when it's done with the system.
That said, this does mean that once you start a script using the Open
Terminal command line system you won't be able to stop the script.
Add in support for shadow removal from the current image using a
shadow-free background image. Test results show the algorithm works
similarly to max() while still keeping dark objects around. The
performance impact of the algorithm is not too high: an in-memory
example can achieve 30 FPS.
Redid the phase correlation code again so it's one method call now.
This method call can do either log-polar phase correlation to get
rotation/scale, or translation (x/y). Additionally, it will be able to
do both at once; however, I don't have that quite working yet.
I've updated the example scripts to reflect the new code too.
Finally, I had to fix a bug in the rotation correction code.
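A sketch of the single-call usage. I'm assuming the method is
find_displacement() with a logpolar flag and the displacement accessors
below; the exact signature may differ:

    import sensor

    sensor.reset()
    sensor.set_pixformat(sensor.GRAYSCALE)
    sensor.set_framesize(sensor.B64X64)  # power-of-2 size for the FFT

    ref = sensor.alloc_extra_fb(sensor.width(), sensor.height(),
                                sensor.GRAYSCALE)
    ref.replace(sensor.snapshot())

    img = sensor.snapshot()

    d = img.find_displacement(ref)  # translation (x/y)
    print(d.x_translation(), d.y_translation())

    d = img.find_displacement(ref, logpolar=True)  # rotation/scale
    print(d.rotation(), d.scale())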
...
Once I've got the full pipeline working I will post scripts for that. I
have all the code in there and it's been somewhat debugged... However, I
can't get a useful phase correlation lock out of the log polar fft mag.
I plan to look into noise filtering and spectral whitening solutions for
this.
This commit updates the shadow-free invariant image from just grayscale
to 2 colors.
If we need to save ROM room in the future we'll just disable the LUT and
have the algorithm run with the regular C code. Right now this is not an
issue.
Someone asked me about doing a field of receptors before. These scripts
show how to do that. Also, added example scripts for calling the linear
polar and log polar methods added previously, which power
find_rotscale().
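For reference, calling the conversions directly looks something like
this (assuming the methods are linpolar()/logpolar() with a reverse
flag):

    import sensor

    sensor.reset()
    sensor.set_pixformat(sensor.RGB565)
    sensor.set_framesize(sensor.B64X64)

    while True:
        img = sensor.snapshot()
        # Remap (x, y) -> (angle, log(radius)): rotation and scale
        # changes in the source become simple translations here,
        # which is what find_rotscale() exploits.
        img.logpolar(reverse=False)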
Just doing one big commit/PR here since I noticed that breaking it up
causes issues.
Anyway, these fixes give us GOOD/WORKING/FAST optical flow now on the
OpenMV Cam M7. A number of changes were made to the optical flow
scripts. You have absolute and differential estimation example scripts.
Additionally, you also have the ability to measure rotation and scale
changes too. Linear/log polar conversion was added for this. Users may
use the new code for generic image manipulation too. Finally, I updated
the power-of-2 resolutions since you actually HAVE to use them with
optical flow for the phase correlation code to work correctly.
I have some more advanced scripts coming after this. But this commit is
already getting kinda large so I'm stopping it here.
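The absolute vs. differential distinction in a nutshell (a sketch
reusing the assumed find_displacement()/alloc_extra_fb() calls from
above):

    import sensor

    sensor.reset()
    sensor.set_pixformat(sensor.GRAYSCALE)
    sensor.set_framesize(sensor.B64X64)  # power-of-2 is required here

    ref = sensor.alloc_extra_fb(sensor.width(), sensor.height(),
                                sensor.GRAYSCALE)
    ref.replace(sensor.snapshot())

    DIFFERENTIAL = True  # True: motion since the last frame.
                         # False: motion since the first frame.

    while True:
        img = sensor.snapshot()
        d = img.find_displacement(ref)
        print("dx %f dy %f" % (d.x_translation(), d.y_translation()))
        if DIFFERENTIAL:
            ref.replace(img)  # re-anchor on every frame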
* Added hmirror and vflip support to the MT9V034 and example scripts.
* Moved sensor example scripts to one place.
* Added a delay to these scripts for register settling time.
* Textual register cleanup.
No functional changes.
* Add exposure control support.
You can now set the exposure for the camera in microseconds (versus an
opaque unknown value previously). First, we have a new method called
get_exposure_us() which will get the exposure time in microseconds.
This lets you determine what the auto exposure algorithm set the
exposure time to. Second, the previously implemented
set_auto_exposure() method, which allows you to turn AEC off and on,
accepts an exposure_us keyword argument when you turn AEC off to
manually control the exposure.
The next commit will add support for other sensor types.
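The intended flow, per the method names above (the halving is just an
example):

    import sensor

    sensor.reset()
    sensor.set_pixformat(sensor.GRAYSCALE)
    sensor.set_framesize(sensor.QVGA)
    sensor.skip_frames(time=2000)  # let auto exposure settle

    auto_us = sensor.get_exposure_us()  # what AEC converged to
    print("auto exposure:", auto_us, "us")

    # Turn AEC off and lock in half the auto value manually.
    sensor.set_auto_exposure(False, exposure_us=auto_us // 2)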
* Cleanup register formatting.
No functional changes.
* Add exposure control support for the OV2640.
Register access for this chip is a PITA.
* Formatting Cleanup.
No functional changes.
* Add exposure control for ov9650.
Just doing it for all sensors.
* Added missing factor of 2.
* Added exposure control for the MT9V034.
* Add exposure control example.
Works well on the OV7725.
Just updating the code with the same style as other methods. I have
another new sister method for histeq() coming up next which I'll push
as soon as this PR is done. Didn't want to merge the two into one PR.
This fix allows "copy_to_fb" to work with a resolution different from
the current frame buffer. It also allows the frame buffer to be
resized, etc. In particular, the pooling methods I added for optical
flow work again... you'll also be able to scale the frame buffer too.
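For instance (the file name is made up):

    import sensor, image

    sensor.reset()
    sensor.set_pixformat(sensor.RGB565)
    sensor.set_framesize(sensor.QQVGA)

    # Load a file straight into the frame buffer even though its
    # resolution differs from the current QQVGA frame buffer; the
    # frame buffer is resized to fit.
    img = image.Image("/example.bmp", copy_to_fb=True)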
You can now allocate an extra frame buffer for storing images. However,
this takes memory from the main frame buffer. In particular, this
reduces the RAM available to many methods that do image processing,
making memory errors more likely to happen. Note that you may allocate
as many extra fb's as you like. Dealloc happens in reverse order.
Anyway, you can use this method to store things like difference images
in RAM, allowing for MUCH faster frame difference image processing.
Moving on, to keep memory management sane... the second fb looks just
like an image and you can use all the image methods to load and update
it, etc. That said, if users deallocate the second FB they need to
*NOT* use the image pointer anymore. There's no way for me to delete
the image pointer in python right now, so this is just something that
has to be manually managed (even if I did set up a destructor, the
second FB is on a stack... so, things wouldn't work so easily with
that).
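The frame differencing use case, sketched (assuming the calls are
sensor.alloc_extra_fb()/sensor.dealloc_extra_fb()):

    import sensor

    sensor.reset()
    sensor.set_pixformat(sensor.GRAYSCALE)
    sensor.set_framesize(sensor.QVGA)

    # Carved out of the main frame buffer; usable like any image.
    bg = sensor.alloc_extra_fb(sensor.width(), sensor.height(),
                               sensor.GRAYSCALE)
    bg.replace(sensor.snapshot())  # capture the background once

    while True:
        img = sensor.snapshot()
        img.difference(bg)  # in-RAM diff, no SD card round trip

    # When done: sensor.dealloc_extra_fb() (reverse order if several),
    # and do NOT touch the bg image object afterwards.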
It's now fast enough to be more useful.
Need to work on HDR for the sensor and on making the sensor output
better.
I fixed some issues with the illuminvar() method going crazy when it
gets colors with values near 0... but, the shot noise from the sensor
adds a lot of noise to everything. Fixing this will likely solve a lot
of algorithm problems.
Image comparison using SSIM. It can be used to detect image
differences... but, the algorithm was designed to compare image quality
and look at compression artifacts. Anyway, it works kinda okay for
detecting frame differences.
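Usage would look something like this; I'm assuming the SSIM comparison
is exposed as get_similarity() returning block statistics:

    import sensor

    sensor.reset()
    sensor.set_pixformat(sensor.GRAYSCALE)
    sensor.set_framesize(sensor.QQVGA)

    ref = sensor.alloc_extra_fb(sensor.width(), sensor.height(),
                                sensor.GRAYSCALE)
    ref.replace(sensor.snapshot())

    while True:
        img = sensor.snapshot()
        s = img.get_similarity(ref)
        # mean() near 1.0 -> frames match; min() flags the worst block.
        print("mean %f min %f" % (s.mean(), s.min()))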
Both algorithms were tested on the OpenMV Cam using images loaded from
a file and work correctly. However, shot noise from sensor.snapshot()
makes the output value somewhat worthless unless you've controlled for
it. Anyway, illuminvar() works best when the image is constrained to a
very particular viewpoint looking at a flat scene without shadow, and
then a shadow enters.
(Not adding demos for these methods since the output looks like crap
unless you've put some work into constraining the scene... need to add
HDR code and other stuff to the sensor module to get better images.)
Regression code for racing.
No more memcpys all over the place. Not sure why I was doing that.
... code must have been written by an idiot before :) (me).
* The following issues still need fixing:
* All fb_alloc nlr hooks are DISABLED.
* modnetwork causes the cam to hardfault.
* Had to reduce heap by 1K (vfs buffer had to be moved to bss/data).
* self-tests are disabled (cam gets stuck after executing).
Now you can find circles with your OpenMV Cam! The algorithm can eke
out about 7 FPS on a 160x120 image, which is quite impressive given how
computationally expensive circle finding is...
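Typical usage, sketched (the threshold is a per-scene tuning value):

    import sensor

    sensor.reset()
    sensor.set_pixformat(sensor.RGB565)
    sensor.set_framesize(sensor.QQVGA)  # 160x120, as benchmarked above

    while True:
        img = sensor.snapshot()
        # Higher threshold -> fewer but stronger circle detections.
        for c in img.find_circles(threshold=2000):
            img.draw_circle(c.x(), c.y(), c.r(), color=(255, 0, 0))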
For easy line following, mainly. In non-robust mode the line is
computed using least squares. In robust mode the line is computed using
the Theil-Sen median-of-slopes method. We do not use the Siegel
median-of-medians operation because it costs more CPU time... but, more
importantly, there's no way to improve the centroid estimate, so even
if the slope is more robust the line will be drawn in the wrong place.
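Both modes in a sketch, assuming the method is get_regression() with a
robust flag:

    import sensor

    sensor.reset()
    sensor.set_pixformat(sensor.GRAYSCALE)
    sensor.set_framesize(sensor.QQVGA)

    while True:
        img = sensor.snapshot().binary([(200, 255)])  # isolate line
        # robust=False -> least squares;
        # robust=True  -> Theil-Sen median of slopes.
        line = img.get_regression([(255, 255)], robust=True)
        if line:
            img.draw_line(line.line(), color=127)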
These two new classes allow you to record image data for later viewing
at the same speed the image data was recorded. Unlike GIF/MJPEG, the
image data is stored on the file system completely uncompressed in
native frame buffer format, making super fast reading and writing
possible. Recording VGA grayscale at ~13 FPS is possible, along with
playing it back. (That's about 30 Mb/s, folks.)
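Assuming the classes are image.ImageWriter/image.ImageReader (the
stream file name is made up):

    import sensor, image

    sensor.reset()
    sensor.set_pixformat(sensor.GRAYSCALE)
    sensor.set_framesize(sensor.QQVGA)

    # Record raw frames to the SD card...
    writer = image.ImageWriter("/stream.bin")
    for i in range(100):
        writer.add_frame(sensor.snapshot())
    writer.close()

    # ...and play them back at the recorded speed.
    reader = image.ImageReader("/stream.bin")
    while True:
        img = reader.next_frame(copy_to_fb=True, loop=True)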
...
The motivation for writing these scripts is so that you can record video
of something like a line following track, take that video home, and work
on computer vision algorithms for that data.
These classes should make it a lot easier to use the camera at home now.
Moved structs along with the image copying code from sensor into
framebuffer.c so that the image library can use the new
copy_fb_to_jpeg_fb() function for methods with "copy_to_fb", making
them update the IDE preview when called.
Also, I noticed that the MAIN_FB_SIZE() value is not calculated
correctly in all cases. Will fix later. Trying to keep this commit clean
for just the refactoring.
All changes have been tested, too.