All the notes about how to implement wifi programming are in the code.
Steps:
1. Get wifi_apply_settings() working first and make sure you can turn
the wifi shield on in the right mode. Then add the necessary hooks into
the network code so that previous user wifi code still works. Also, make
sure to handle startup and shutdown gracefully. Basically, get all the
lifecycle code working before moving to the next step so nothing gets
into a weird state and bugs creep in...
2. Get the beacon method working. Once this works, OpenMV IDE should see
the camera when you hit the connect button.
3. Write the code to turn off the regular usbdbg interface and switch to
having the data come from wifi_dbg. This isn't a lot of code... but it
will be tricky since you no longer have USB frames to work with. All
bytes are just going to arrive randomly and in bursts, so you have to
handle the serial stream yourself... (Kwabena can help with writing a
state machine for dealing with this type of stuff if you want. I do it
all the time. See the sketch right after this list.)
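Here's a minimal sketch of the kind of state machine I mean, written in
Python for readability. The frame format (sync byte, then length, then
payload) and every name below are made up for illustration - the real
wifi_dbg stream will have its own framing:

WAIT_SYNC, WAIT_LEN, WAIT_DATA = range(3)
SYNC = 0x7E  # hypothetical frame start marker

state = WAIT_SYNC
length = 0
payload = bytearray()

def handle_packet(data):
    print("packet:", bytes(data))  # stand-in for real packet handling

def feed(byte):
    # Call this with each byte as it trickles in. Bytes may arrive one
    # at a time or in bursts - the state machine doesn't care.
    global state, length, payload
    if state == WAIT_SYNC:
        if byte == SYNC:
            state = WAIT_LEN
    elif state == WAIT_LEN:
        length = byte
        payload = bytearray()
        state = WAIT_DATA if length else WAIT_SYNC
    elif state == WAIT_DATA:
        payload.append(byte)
        if len(payload) == length:
            handle_packet(payload)
            state = WAIT_SYNC

Since feed() carries all the parsing state, you can call it from
wherever the bytes land without caring about frame boundaries.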
Calling remove_shadows() on an image without a background
source-of-truth image now works. That said, the shadow remover isn't
suitable for anything other than removing shadows from an image of a
concrete floor or something of the like. In general, it can only remove
shadows from a scene that has nothing else in it except for a hard-edge
shadow. Improving this to work on anything is about a month of work.
I've researched enough about shadow removal to now know the optimal way
to do it. However, it requires many steps and a large amount of RAM. On
the H7 I may revisit this since it may be possible there.
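For reference, the no-background form is just this (sensor setup
included so it runs standalone; the exact method arguments are from
memory):

import sensor
sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
img = sensor.snapshot()
img.remove_shadows()  # no background source-of-truth image needed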
...
In order to get the shadow remover working well I had to add a few
features to the image library and fix some of the convolution code.
These fixes will likely be more useful than the shadow removal code
itself.
Note the addition of the new get_threshold() method. This computes
Otsu's threshold on a histogram, allowing you to pick the optimal color
bounds.
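A quick sketch of the intended use (the get_histogram()/binary()
plumbing around it is my assumption):

import sensor
sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)
img = sensor.snapshot()
t = img.get_histogram().get_threshold()  # Otsu's method on the histogram
img.binary([(t.value(), 255)])           # keep pixels above the optimal bound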
OpenMV IDE includes an ini file generator which lets you set board
settings easily from the IDE. Currently, the IDE has support for
setting up the WiFi shield along with adding a REPL UART.
Anyway, this commit adds support for the OpenMV Cam to parse an ini file
on startup to configure things before starting main.py. WiFi support is
not yet implemented. However, we now have the ability to turn on the
UART and put the REPL terminal on it at startup given a setting in the
ini file.
(Why not use boot.py like normal MP? While that is more flexible, it's
much harder for the IDE to easily write out settings for you, which is
what most users will want versus coding this up themselves.)
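Roughly, the file looks something like this - note that the section and
key names here are invented for illustration, the IDE generator emits
the real ones:

; hypothetical example - names are made up for illustration
[BoardSettings]
REPLUart = true        ; put the REPL on the UART at startup
UartBaudRate = 115200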
...
The motivation for adding REPL UART support in particular is so that the
OpenMV Cam can be used as a slave processor for IoT-type processors like
the ESP32/ESP8266/ParticlePhoton/ElectricImp. In particular, a processor
like the ParticlePhoton can control the OpenMV Cam's reset wire. Wake
the camera up by releasing reset, then send a script to it over the UART
after it powers on. The camera will then run the script, do computer
vision, and report results back over the UART to the ParticlePhoton.
Users can then push new scripts to the OpenMV Cam from the cloud,
allowing for semi-flexible firmware fixes for the OpenMV Cam over
low-data-rate networks.
By setting this feature up, the need for OpenMV to offer a WiFi IoT
system is reduced, as we can now just be the best camera for everything.
...
Due to... I don't know what... ctrl-c doesn't work on the duplicated
UART.
https://github.com/micropython/micropython/issues/1568
Not sure how to handle this. I don't want to fix it since it needs to be
fixed upstream in MP. Right now the workaround is for the mastering MCU
to just reset the OpenMV Cam when it's done with the system.
That said, this does mean that once you start a script using the Open
Terminal command-line system you won't be able to stop the script.
It's now faster, which makes it more useful.
Need to work on HDR for the sensor and on making the sensor output
better. I fixed some issues with the illuminvar() method going crazy
when it gets colors with values near 0... but the shot noise from the
sensor adds a lot of noise to everything. Fixing this will likely solve
a lot of algorithm problems.
Moved structs along with the image copying code from sensor.c into
framebuffer.c so that we can use the new copy_fb_to_jpeg_fb() function
in the image library for methods with "copy_to_fb", so that they update
the IDE preview when called.
Also, I noticed that the MAIN_FB_SIZE() value is not calculated
correctly in all cases. Will fix later. Trying to keep this commit
clean, covering just the refactoring.
All changes have been tested, too.
We now have a nice and fast malloc system that easily offers 300KB+ of
dynamic memory... No need to use xalloc anymore except when we're
transferring objects to MP memory space.
* Added pooling functions to make getting small images easy.
set_binning works too... but it zooms in way too much. The pooling
functions allow you to shrink the image without zooming in.
* To make the pooling functions easy to use I created a version that
pools the image out of place and one that pools the image in place. The
in-place pooling function can work on the frame buffer (see the edits to
sensor.c).
* I added code to do Hann windowing to the FFT lib. However, I
commented it out after it improved performance by basically zero.
Specialized windowing stuff will only come in handy for folks trying to
tune their algorithm... not in general for everything.
* I added subpixel resolution to the phase correlation code. You can
now track image movement really precisely. Additionally, I fixed up
the displacement outputs to give expected results. I also added a QoR
output for the displacement code so that you can know when the results
are bad.
* Finally, an example script has been added to show off the features (a
usage sketch follows below).
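The gist of the example script, as I'd sketch it - exact method names
(find_displacement(), response(), etc.) are from memory and may differ:

import sensor
sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.B64X64)  # 2D FFTs cap out at 64x64
prev = sensor.snapshot().copy()      # keep the previous frame around
while True:
    img = sensor.snapshot()
    d = img.find_displacement(prev)  # phase correlation vs. last frame
    if d.response() > 0.1:           # QoR check - skip bad results
        print(d.x_translation(), d.y_translation())  # subpixel shifts
    prev = img.copy()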
The heart of the 1D FFT works. I tested this on the PC. However, 2D FFTs
may have issues and the phase correlation algorithm does not generate
the expected results. That said, most of the work is done. Stuff just
needs to be debugged.
The FFT lib is designed to handle up to 1024-point real FFTs and
512-point complex FFTs. As for 2D FFTs, we can do up to 64x64 pixels.
Beyond that, we don't have enough RAM to handle them because they use up
about 128KB each.
Things to do... the 2D FFT needs to be verified. So, we need to run an
image through it and back again to verify that there are no problems.
Then we need to compare the 2D FFT output with another 2D FFT algorithm
on the PC...
Once the FFTs are known to be good, we then need to make sure the phase
correlation algorithm outputs the correct results. We need to test that
with multiple shifted images, etc.
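For the PC-side check, something like this plain NumPy sketch is what I
have in mind - phase-correlate an image against a copy with a known
shift and confirm the recovered offset:

import numpy as np

img = np.random.rand(64, 64)                  # stand-in test image
dy, dx = 5, 9                                 # known shift to recover
shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)

F1 = np.fft.fft2(img)
F2 = np.fft.fft2(shifted)
cross = np.conj(F1) * F2                      # cross-power spectrum
cross /= np.abs(cross) + 1e-12                # keep only the phase
corr = np.fft.ifft2(cross)
peak = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
print(peak)                                   # should print (5, 9)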
Mean filter -> Fast and easy to use. This will likely be the only filter
that gets a lot of action on the M4.
Median filter -> Works really well, but slow. On grayscale at 160x120
you can get about 10 FPS with it for a 3x3 kernel. That said, it's still
slow. Also, the code only works for 3x3 and 5x5 kernels.
About the previous histogram filter... technically, that filter should
be better. However, it suffers from a startup cost: finding the median
point in the histogram costs too much to compute. This is what causes it
to be slow. On very large kernels it will be faster than the sorting
median algorithm I put up... but large kernels will be too slow for
anyone to use anyway. The paper Ibrahim linked to about it showed it
being used for 7x7 kernels and up... so I think the researcher who
thought of the idea was really targeting large kernels.
Mode filter -> Works great on grayscale. Not so much on color. I think
it needs to be run in the LAB color space instead of the RGB color
space. I say this because it causes pretty strong artifacts around
edges. When we get more flash we'll be able to have a reverse lookup
table for LAB to make the mode filter better. Until then...
Midpoint filter -> Has a bias value that allows you to control whether
it's really a midpoint, min, or max filter, or something in between. Run
at 160x120 or lower. 320x240 is slow (this seems to be the case for all
convolutions at that resolution). A usage sketch for all of these
follows below.
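Usage is a one-liner per filter on the frame buffer - the size argument
is the kernel radius, so 1 means 3x3 (method names from memory):

import sensor
sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)    # 160x120 - keep it small for speed
while True:
    img = sensor.snapshot()
    img.mean(1)                 # 3x3 mean filter - the fast one
    # img.median(1)             # 3x3 sorting median - good but slow
    # img.mode(1)               # 3x3 mode filter - grayscale only for now
    # img.midpoint(1, bias=0.5) # 0.0 = min, 1.0 = max, 0.5 = midpoint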
First, a few things:
The MLX 16x4 sensor just has too low of a resolution for mass appeal at
the price. The product is not going to sell very well. We need to look
into supporting sensors with a better resolution, like the FLIR One. The
MLX module was renamed to the "flir" module with this idea in mind.
The flir code now takes care of doing the scaling and blending itself. I
did this so the user doesn't have to scale and blend the image
themselves. It's too easy to run out of memory given our current
ultra-small heap. In general, anything that requires multiple images in
RAM has got to go. When we do another OpenMV Cam with external RAM in
the MB range then maybe such functions will be safe. But right now they
are definitely not.
Anyway, moving on, I fixed a few bugs in the MLX math code. But, for the
most part, it was correct. I also added the recommended polling code for
brownouts as required by the datasheet.
Last, I designed this code like the LCD code to support a type value
when inited. This will allow the system to use a different sensor in the
future without any API changes for the user.
I will add test scripts for this next. Basic usage follows:
import flir
flir.init()
flir.display_ir(sensor.snapshot())
And that's it. Super easy. If the user wants the raw temperature values
they can use flir.read_ir() to get the ta and to values. The display
function has hidden alpha and scale arguments for controlling the
blending and the min/max scaling.
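So manual control looks something like this - the argument names,
values, and units here are my assumption:

import sensor, flir
flir.init()
flir.display_ir(sensor.snapshot(), alpha=128, scale=(20, 40))  # 50% blend, fixed 20-40 range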
The previous way we worked out scaling kinda sucked... it was a good
shot, but controllable mins and maxes that autoscale by default just
work better. If the user knows the temp range then they can just set the
min and max.
Anyway, longest commit ever.
The built-in mjpeg module allows you to record videos seamlessly. It
will automatically compress the frame buffer using the extra space in
the main RAM. So... you don't have to pass it jpeg images. It gets 7 FPS
at 320x240 while connected to the computer too (it has to compress the
frame twice in this situation).
Anyway, the module works like the Gif module:
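A minimal usage sketch (frame count and fps picked arbitrarily):

import sensor, mjpeg
sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
m = mjpeg.Mjpeg("example.mjpeg")
for i in range(100):
    m.add_frame(sensor.snapshot())  # raw frames - compression is automatic
m.close(7)  # playback FPS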
You can now get the color stats for an area in the image. The stats
function returns the mean, median, mode, min, max, st_dev,
lower_quartile, and upper_quartile.
This function allows you to automate binary and threshold functions
based on what's in the image.
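For example, something like this to auto-pick a binary threshold (the
exact stats method names are from memory):

import sensor
sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)
img = sensor.snapshot()
s = img.get_statistics(roi=(0, 0, 80, 60))  # stats for the top-left quadrant
img.binary([(s.mean(), s.max())])           # threshold from measured stats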
The morph function lets you convolve the image with a kernel. It's
decently fast right now. But in the future we'll have to optimize it a
lot (unrolling loops, using SIMD instructions, etc.).
Anyway, along with morph I added an edge detection test script showing
how you can use a high-pass filter on an image to get all the edges in
it. This is not as good as Canny edge detection... but it's about the
same and fast enough.
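The trick is just morph() with a high-pass kernel, along these lines:

import sensor
sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)
kernel = [-1, -1, -1,
          -1, +8, -1,
          -1, -1, -1]   # 3x3 high pass: center minus its neighbors
while True:
    img = sensor.snapshot()
    img.morph(1, kernel)  # size 1 means a 3x3 kernel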
We'll need a Hough Transform system in the future to make edge detection
useful. Not sure how that will be implemented... so that's going to be
far away for now.
Added BMP file format reading and writing support code and modified the
PPM code to match. Upper-level glue code has been left intact to be
altered in future commits.
Tested that save() and PPM writing functionality still work. More
comprehensive tests coming soon.
... Kinda concerned that standard image file formats might not cut it
for the speed we'd like to have when using image files in function
calls. I think only grayscale is going to be fast. All other formats
require a lot of prep work.
I think I may modify some of this low-level stuff in the future to
autodetect whether an entire grayscale image can be read in or written
out in one go to speed that stuff up.
* Filter functions bypass the default line processing in sensor.c and
pre-process lines.
* Processing is done on the fly, i.e. filters are called after each line
is received.
* A new integral image implementation that uses a moving window.
* The integral image is computed in steps; each shift computes n new
lines.
* This only requires (image_width * (feature_height+1) * 4) bytes (see
the worked example after this list).
* This allows the Haar detector to run on QVGA, and allows a second
squared integral image for standard deviation calculations.
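For a concrete sense of the savings: at QVGA with, say, a 24-pixel-tall
feature (a height picked just for illustration), the moving window needs
320 * (24+1) * 4 = 32,000 bytes, whereas a full QVGA integral image
would need 320 * 240 * 4 = 307,200 bytes.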
The alloc functions allow you to use the frame buffer as a storage
space. It's very simple but effective. You can alloc, which pushes some
memory onto a stack... and then when you're done you can free, which
pops the stack. Pops (frees) must be done in the reverse order of the
pushes (allocs).
In general, functions should call the init code before using the stack,
since it could be in a bad state.
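A tiny Python model of the discipline - the real code is C, this just
illustrates the LIFO rule:

class FBStack:
    # Illustrative model of a stack allocator: alloc pushes, free pops,
    # and frees must come back in the reverse order of the allocs.
    def __init__(self, size):
        self.buf = bytearray(size)
        self.marks = []   # offsets of outstanding allocations
        self.top = 0
    def alloc(self, n):
        if self.top + n > len(self.buf):
            raise MemoryError("frame buffer stack overflow")
        self.marks.append(self.top)
        view = memoryview(self.buf)[self.top:self.top + n]
        self.top += n
        return view
    def free(self):
        self.top = self.marks.pop()  # only the newest alloc can go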
Also, I added some wrappers for file system functions to make that stuff
easier. This will be used in the future.
Also modified the rainbow table so that the RGB888 to RGB565 translation
is done using a rounding technique versus a hard floor. The same is done
for the RGB565<->RGB888 LUTs.
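The difference, sketched in Python for illustration:

def rgb888_to_rgb565_floor(r, g, b):
    # hard floor: just chop off the low bits
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

def rgb888_to_rgb565_round(r, g, b):
    # round to the nearest representable 5/6/5-bit value instead
    r5 = (r * 31 + 127) // 255
    g6 = (g * 63 + 127) // 255
    b5 = (b * 31 + 127) // 255
    return (r5 << 11) | (g6 << 5) | b5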
Additionally, I added a bunch of stuff to the image library to make
working with images easier. I will be using these helpers in the future.
Finally, I cleaned up the trailing spaces in the font stuff (pet peeve).