This commit fixes the following:
* Adds new methods to allow jpeg images to be encoded for transmission
to the IDE.
* Automatically calls these methods to send the image to the IDE when
there's not enough space in the JPEG buffer. This isn't the fastest approach,
but it's better than fielding support requests about why it doesn't work at all.
* In JPEG mode the cacheable framebuffer memory is used directly by the DMA, instead of the line buffer.
Cache maintenance must be performed before the CPU accesses the framebuffer memory.
* Images are right side up now.
* Picture quality is acceptable (not as good as the OV7725)
* All auto functions work now (the OV2640 ignores exposure control
however)
* Added XGA frame size.
* JPEG mode is enabled but needs work still (not sure if the H7 hardware
can capture the packet stream fast enough not to drop bytes).
* Initialize all members of DMA structs for H7.
* Always reset and configure the H7 DMA peripheral.
* Add SPI IRQ priority.
* Enable SPI IRQ for H7 MCUs.
This fix creates a flag that prevents fb_alloc_free_till_mark() from
doing anything unless there was a previous fb_alloc_mark(). Once
fb_alloc_free_till_mark() is called it will no longer do anything until
there's another fb_alloc_mark().
This means that if an exception is triggered while in code that
previously did fb_alloc_mark() the stack will be cleaned up.
If the fb_alloc_mark() method is not called then the stack will not be
cleaned up and the fb_alloc()'ed memory will remain until a soft reset.
All OpenMV Cam library code is designed to fb_alloc_mark() before using
the fb stack and then fb_alloc_free_till_mark() when complete. However,
in the case of py_sensor_alloc_extra_fb() it doesn't mark first such
that the RAM it allocates stays across exceptions and is only free'd via
py_sensor_dealloc_extra_fb() or via a soft reset.
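To make the intended behavior concrete, here's a tiny Python model of the
semantics described above (purely illustrative; the real implementation is
the C code in fb_alloc.c):
# Illustrative model only - NOT the firmware code.
_stack = []        # stands in for the fb_alloc stack
_mark_set = False  # the "semaphore" flag
def fb_alloc_mark():
    global _mark_set
    _stack.append("MARK")
    _mark_set = True
def fb_alloc(size):
    _stack.append(size)
    return size
def fb_alloc_free_till_mark():
    global _mark_set
    if not _mark_set:          # no prior mark -> do nothing
        return
    while _stack and _stack[-1] != "MARK":
        _stack.pop()
    if _stack:
        _stack.pop()           # remove the mark itself
    _mark_set = "MARK" in _stack   # nested marks keep the flag set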
...
Summary of changes:
fb_alloc.c -> Added a semaphore lock to prevent
fb_alloc_free_till_mark() from doing anything unless fb_alloc_mark() was
called first.
py_sensor.c -> Removed calling fb_alloc_mark() and
fb_alloc_free_till_mark() and re-arranged code calls to prevent a trivial
leak situation on heap exhaustion.
py_image.c, py_fir.c, py_lcd.c py_tv.c -> Added fb_alloc_mark() and
fb_alloc_free_till_mark() to methods originally coded without using it.
...
Note - I coded the mark semaphore lock in such a way that things work even if
fb_alloc_mark() and fb_alloc_free_till_mark() calls are nested. This
allows the find_blobs() call-back methods to call py_image.c methods
still and also allows us to add more call-backs in the future without
worry if we need to.
...
Finally, if you have an exception in an interrupt handler all this above
breaks terribly. Given MP already breaks if you try to allocate memory
in an exception this is a "won't fix" problem. Don't call code that can
have exceptions or needs memory in an interrupt handler.
The FLIR Lepton 3.5 sometimes doesn't start up. Our previous code just
waited forever. The new code will now time out and will also try to
recover the FLIR Lepton 3.5, if possible, when the video doesn't start in a
timely manner.
Measurement mode allows you to set a target temperature for the FLIR
Lepton so that you can actually use it to measure object temperatures
and do useful things. We try to make the mode work on non-radiometric
FLIR Leptons, however it will not be accurate.
These bounds checks were incorrect if sourceX/Y rounds up. In this situation, the unrounded source will be smaller than the limit by a fractional amount (C will promote the int bounds to a float for comparison), but the post-rounded version will equal the limit.
E.g.,
sourceX = 10.5
img->w = 11 (i.e., valid indices are [0-10])
sourceX2 = 11, which is invalid memory
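A small Python illustration of the failure mode and one possible fix (the
real code is C in the image library; names here are only for illustration):
img_w = 11                       # valid x indices are 0..10
sourceX = 10.5
if sourceX < img_w:              # old check passes: 10.5 < 11
    sourceX2 = int(sourceX + 0.5)    # rounds to 11 -> out-of-bounds index
print(sourceX2)                  # 11
sourceX2 = min(int(sourceX + 0.5), img_w - 1)   # check/clamp the rounded value instead
print(sourceX2)                  # 10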
* Fix exception if the order of functions call is swapped (set_framesize before set_pixformat)
* The order of functions shouldn't matter, if necessary this check should be done in snapshot.
* This fixes issue #444
* This format is for use in the Image Lib module since sensor is where
we put the image types.
Will work on the lepton and global shutter drivers next.
While this shouldn't happen, roundf() seems to sometimes round up past
limits. For example, 1 * 2.0 could come out as 3 because the 2.0 might
really be 2.0000000000001.
So, avoid using roundf(). There are other methods this needs to be
switched out in too. But, I'll do these ones for now.
Note that not all roundf values must be removed... just areas where
there's a clear limit on the max value returned from roundf.
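A minimal Python sketch of the safer pattern (clamp against the known limit
instead of trusting the rounded result; illustrative, not the firmware C):
def scale_index(i, scale, limit):
    # Round, then clamp to the known maximum so floating point error
    # can never push the result past the limit.
    return min(int((i * scale) + 0.5), limit)
print(scale_index(1, 2.5000000001, 2))   # would round to 3; clamped to the limit 2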
* Now OpenMV Cam's can be discovered in the wild by OpenMV IDE without
hardcoding the OpenMV Cam WiFi IP address and port.
* The firmware reads settings from OpenMV IDE for STA and AP mode.
* Broadcast in both modes works and OpenMV IDE can find the cam.
* AP mode works (albeit the driver needs help).
* Station mode only connects every now and then. There's a bug in
the WiFi module that prevents this from working right. The same code
executes on the cam and in the IDE for both modes but station mode has
issues...
Both CIFAR and LENET still work.
The smile network... I couldn't really get to work before or afterwards.
I noticed the Haar one has trouble finding my face. Maybe fix via using
the contrast settings of the previous Haar scripts?
It's not as good as mean shift filtering but can approximate it if you
heavily control the image lighting conditions. That said, it's a
lot faster and uses less memory than mean shift filtering.
Runs faster than median filtering with a large kernel size. That said,
if sigma is set too low for the particular scene you can get corrupted
pixels if there's too much change in a particular kernel area. Tried a
few things to filter this out but was not successful. Not sure how to
fix... but, turning the sigma up hides the issue. It has something to do
with zeros in the luts used to speed the algorithm up causing
instability.
This brings all our basic operations code up to par with what other image
libraries offer.
Anyway, you can now pass a "color" value as the parameter to a basic op
method and it will apply that value to all pixels in the image.
Binary images are now handled. Cleaned up and optimized code. Some speed
gains after shifting to multiplies and not using int8_t.
Added a sharpen and unsharp mask feature. Fixed up gaussian. Added a
laplacian operation for edge detection.
Heavily upgraded the drawing features onboard the OpenMV Cam. We now
have all the basic drawing methods folks expect along with all the
parameters you need. Finally! You can make big text fonts.
* Added line thickness support
* Added shape fills
* Added text scaling
* Added draw arrow.
All of our argument parsing code has now been updated to handle
positional as well as keyword arguments in our python libraries.
Basically, python allows you to pass some number of positional arguments
to functions/methods followed by keyword arguments (you cannot have more
positional arguments after keyword arguments). Previously, our code
would only look for keyword arguments. Now, it works better and will
grab as many positional arguments as it can followed by processing
keyword arguments. Note: In the case of a positional argument value for
a parameter being passed followed by a keyword for that same parameter,
the keyword value is taken (since it comes afterward).
Because arguments were passed in keyword form before, this update has no
effect on current code. However, moving forward, argument positions are
now locked and cannot be moved around.
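For example, assuming drawing methods like draw_rectangle()/draw_string()
with keyword names as in the current docs (hedged), both styles now work:
import sensor
sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QVGA)
img = sensor.snapshot()
# Positional arguments first, then keywords; if a parameter is given both
# ways the keyword value wins.
img.draw_rectangle(10, 20, 40, 30, color=127, thickness=2)
img.draw_string(10, 60, "Hello", color=255, scale=2)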
Added binary image support to the math operations and updated them to
support masks. Replace now also supports mirroring operations. Finally,
added missing basic math ops like add/sub/mul/div. The operations are
designed to work as image blending operations so they take care of
scaling their output accordingly.
Binary() can now zero things so you can remove bright lights. All the
line ops (and/or/xor/etc.) accept masks. Erode and dilate now accept
masks. And finally, you can now pass positional arguments instead of
keywords for folks who don't read the documentation. Also, the binary
image type is now supported for these methods.
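Hedged usage sketch (method and keyword names follow the current OpenMV docs
and the file paths are hypothetical):
import sensor, image
sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)
img = sensor.snapshot()
bg = image.Image("/background.bmp")    # hypothetical reference frame on the SD card
mask = image.Image("/mask.bmp")        # hypothetical mask image (non-zero = apply op)
img.sub(bg, mask=mask)                 # blend-style subtract, only where the mask is set
img.binary([(200, 255)], zero=True)    # zero out bright pixels instead of binarizing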
I'm putting in all this work because I saw the need for it when I was
doing shadow removal.
Note: Some effort needs to be put into optimizing the py_image.c code
soon. This is on the todo list before the next release.
All the notes about how to implement wifi programming are in the code.
Steps:
1. Get wifi_apply_settings() working first and make sure you can turn
the wifi shield on in the right mode. Then add the necessary hooks into
the network code to make it such that previous user wifi code still
works. Also, make sure to handle start and shutdown gracefully.
Basically, get all the lifecycle code working first before moving to the
next step so nothing gets in a weird state and bugs creep in...
2. Get the beacon method working. Once this works OpenMV IDE should see
the camera when you hit the connect button.
3. Do the code to turn off the regular usbdbg interface and switch to
having the data come from wifi_dbg. This isn't a lot of code... but,
will be tricky since you no longer will have USB frames to work with.
All bytes are just going to come randomly and in bursts so you have to
handle the serial stream yourself... (Kwabena can help writing a
state machine for dealing with this type of stuff if you want. I do it
all the time).
Calling remove_shadows() on an image without a background source of
truth image now works. However, that said, the shadow remover isn't
suitable for anything other than removing shadows from an image of a concrete
floor or something of the like. In general, it can only remove shadows
from a scene that has nothing else in it except for a hard edge shadow.
Improving this to work for anything is about a month of work. I've
researched enough about shadow removal to now know the optimal way to do
it. However, it requires many steps and a large amount of RAM. On the H7
I may revisit this as being possible.
...
In order to get the shadow remover working well I had to add a few
features to the image library and fix some of the convolution code.
These fixes will likely be more useful than the shadow removal code.
Note the addition of the new get_threshold() method. This computes
Otsu's threshold on a histogram allowing you to pick the optimal color
bounds.
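Hedged usage sketch (get_histogram()/get_threshold() per the current OpenMV
docs; the exact spelling at this commit may differ):
import sensor
sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)
img = sensor.snapshot()
t = img.get_histogram().get_threshold()   # Otsu's method on the histogram
img.binary([(t.value(), 255)])            # segment at the computed threshold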
OpenMV IDE includes an ini file generator which will let you set board
settings easily from the IDE. Currently, the IDE has support for setting
the WiFi shield up along with adding a REPL Uart.
Anyway, this commit adds support for the OpenMV Cam to parse an ini file
on startup to configure things before starting main.py. WiFi support is
not yet implemented. However, we now have the ability to turn on the UART
and put the REPL terminal on it at startup given a setting in the ini
file.
(Why not use boot.py like normal MP? While that is more flexible it's
much harder for the IDE to easily write out settings for you which is
what most users will want to do versus coding this up).
...
The motivation for adding REPL UART support in particular is so that the
OpenMV Cam can be used as a slave processor to IoT type processors like
the ESP32/ESP8266/ParticlePhoton/ElectricImp. In particular, a processor
like the ParticlePhoton can control the OpenMV Cam's reset wire. Wake
the camera up by releasing reset, then send a script to it after it
powers on over the UART. The camera will then run the script, do
computer vision, and report results back over the UART to the
ParticlePhoton. Users can then push new scripts to the OpenMV Cam from
the cloud allowing for semi-flexible firmware fixes for the OpenMV Cam
over low data rate networks.
By setting this feature up the need for OpenMV to offer a WiFi IoT
system is reduced as we can now just be the best camera for everything.
...
Due to... I don't know... ctrl-c doesn't work on the duplicated UART.
https://github.com/micropython/micropython/issues/1568
Not sure how to handle this. I don't want to fix it since it needs to be
fixed by MP upstream. Right now the workaround is for the master MCU
to just reset the OpenMV Cam when it's done with the system.
That said, this does mean that once you start a script using the Open
Terminal command line system you won't be able to stop the script.
Add in support for shadow removal from the current image using a shadow
free background image. Test results show the algorithm works similarly to
max() while still keeping dark objects around. The performance impact of
the algorithm is not too high. An in-memory example can achieve 30 FPS.
Redid the phase correlation code again so it's one method call now. This
method call can either do logpolar phase correlation to get rotation/
scale or translation (x/y). Additionally, it will be able to also do both
at once. However, I don't have that quite working yet.
I've updated the example scripts to reflect the new code too.
Finally, I had to fix a bug in the rotation correction code.
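Rough usage sketch (assuming the merged method is find_displacement() with a
logpolar keyword, as in the current docs; names may have differed at this commit):
import sensor
sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.B64X64)        # power-of-2 size for the FFT code
ref = sensor.snapshot().copy()             # keep a reference frame on the heap
img = sensor.snapshot()
d = img.find_displacement(ref)             # x/y translation via phase correlation
print(d.x_translation(), d.y_translation(), d.response())
img.find_displacement(ref, logpolar=True)  # pass logpolar=True for rotation/scale instead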
...
Once I've got the full pipeline working I will post scripts for that. I
have all the code in there and it's been somewhat debugged... However, I
can't get a useful phase correlation lock out of the log polar fft mag.
I plan to look into noise filtering and spectral whitening solutions for
this.
This commit updates the shadow free invariant image to 2 colors from
just grayscale.
If we need to save ROM room in the future we'll just disable the LUT and
have the algorithm run with the regular C code. Right now this is not an
issue.
Someone asked me about doing a field of receptors before. These scripts
show how to do that. Also, added example scripts for calling the linear
polar and log polar methods added previously which power
find_rotscale().
Just doing one big commit/PR here since I noticed that breaking it up
causes issues.
Anyway, these fixes give us GOOD/WORKING/FAST optical flow now on the
OpenMV Cam M7. A number of changes were made to the optical flow
scripts. You have absolute and differential estimation example
scripts. Additionally, you also have the ability to measure rotation and
scale changes too. Linear/Log Polar conversion was added for this. Users
may use the new code for generic image manipulation too. Finally, I
updated the power of 2 resolutions since you actually HAVE to use them
with optical flow for the phasecorrelation code to work correctly.
I have some more advanced scripts coming after this. But, this commit is
already getting kinda large so I'm stopping it here.
* Added hmirror and vflip support to the MT9V034 and example scripts.
* Moved sensor example scripts to one place.
* Add delay to these scripts for register settling time.
* Textual register cleanup.
No functional changes.
* Add exposure control support.
You can now set the exposure for the camera in microseconds (versus an
opaque unknown value previously). First, we have a new method called
get_exposure_us() which will get the exposure time in microseconds. This
lets you determine what the auto exposure algorithm set the exposure
time to. Second, the previously implemented set_auto_exposure() method,
which allows you to turn AEC off and on, accepts an exposure_us keyword
argument when you turn AEC off to manually control the exposure.
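Usage sketch (get_exposure_us() and set_auto_exposure() are the methods named
above; the rest is standard setup):
import sensor
sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(30)                               # let AEC settle first
print(sensor.get_exposure_us())                      # what AEC chose, in microseconds
sensor.set_auto_exposure(False, exposure_us=10000)   # lock exposure at 10 ms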
The next commit will add support for other sensor types.
* Cleanup register formatting.
No functional changes.
* Add exposure control support for the OV2640.
Register access for this chip is a PITA.
* Formatting Cleanup.
No functional changes.
* Add exposure control for ov9650.
Just doing it for all sensors.
* Add missing 2 factor.
* Added exposure control for the MT9V034.
* Add exposure control example.
Works well on the OV7725.
Just updating the code with the same style as other methods. I have
another new sister method for histeq() coming up next which I'll push
as soon as this PR is done. Didn't want to merge the two into one PR.
This fix allows "copy_to_fb" with a different resolution than the
current frame buffer to work. It also allows the frame buffer to be
resized, etc. In particular, the pooling methods I added for optical
flow work again... you'll also be able to scale the frame buffer too.
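Hedged example (assuming the copy_to_fb keyword on the Image constructor, as
in the current docs, and a hypothetical file path):
import sensor, image
sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
# Load an image whose resolution differs from the current frame buffer
# directly into the frame buffer (this also resizes the frame buffer).
img = image.Image("/example.bmp", copy_to_fb=True)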
You can now allocate an extra frame buffer for storing images. However,
this takes memory from the main frame buffer. In particular this reduces
the RAM for many methods that do image processing making memory errors
more likely to happen. Note that you may allocate as many extra fb's as
you like. Dealloc happens in reverse order.
Anyway, you can use this method to store things like difference
images in RAM allowing for MUCH faster frame difference image
processing.
Moving on, to keep memory management sane... the second fb looks just
like an image and you can use all the image methods to load and update
it, etc. That said, if users deallocate the second FB they need to *NOT*
use the image pointer anymore. There's no way for me to delete the image
pointer in python right now so this is just something that has to be
manually managed (even if I did set up a destructor, the second FB is
on a stack... so, things wouldn't work so easily with that).
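Usage sketch (alloc_extra_fb()/dealloc_extra_fb() per the sensor module; the
frame differencing below is just one way to use the extra fb):
import sensor
sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)
extra = sensor.alloc_extra_fb(sensor.width(), sensor.height(), sensor.GRAYSCALE)
extra.replace(sensor.snapshot())   # save a background frame
img = sensor.snapshot()
img.difference(extra)              # fast frame differencing against the stored frame
sensor.dealloc_extra_fb()          # frees extra; do NOT use the object after this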
It's now faster, so it's more useful.
Need to work on HDR for the sensor and making the sensor output better.
I fixed some issues with the illuminvar() method going crazy when it
gets colors with values near 0... but, the shot noise from the sensor
adds a lot of noise to everything. Fixing this will likely solve a lot
of algorithm problems.
Image comparison using SSIM. It can be used to detect image
differences... but, the algorithm was designed to compare image quality
and look at compression artifacts. Anyway, it works kinda okay for
detecting frame differences.
Both algorithms were tested on the OpenMV Cam using images loaded from a
file and work correctly. However, shot noise from sensor.snapshot()
makes the output value somewhat worthless except in situations where
you've controlled for it. Anyway, the illuminvar output works best when the
image is constrained to a very particular view point looking at a flat
scene without shadow and then a shadow enters.
(Not adding demos for these methods since the output looks like crap
unless you've put some work into constraining the scene... need to add
HDR code and other stuff to the sensor module to get better images).
regression code for racing.
No more memcpys all over the place. Not sure why I was doing that.
... code must have been written by an idiot before :) (me).
* The following issues still need fixing:
* All fb_alloc nlr hooks are DISABLED.
* modnetwork causes cam to hardfault.
* Had to reduce heap by 1K (vfs buffer had to be moved to bss/data).
* self-tests are disabled (cam gets stuck after executing).
Now you can find circles with your OpenMV Cam! The algorithm can eke out
about 7 FPS on a 160x120 image which is quite impressive given how
computationally expensive circle finding is...
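Usage sketch (find_circles() and the circle object getters per the current
docs; the threshold value is just a starting point):
import sensor
sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QQVGA)    # 160x120, as benchmarked above
img = sensor.snapshot()
for c in img.find_circles(threshold=2000):
    img.draw_circle(c.x(), c.y(), c.r(), color=(255, 0, 0))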
For easy line following mainly. In non-robust mode the line is computed
using least squares. In robust mode the line is computed using the
Theil-Sen median of slopes method. We do not use the Siegel Median of
Medians operation because it costs more CPU time... but, more
importantly there's no way to improve the centroid estimate so even if
the slope is more robust the line will be drawn in the wrong place.
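Usage sketch (assuming the method is get_regression() with a robust keyword,
as in the current docs):
import sensor
sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)
img = sensor.snapshot().binary([(200, 255)])           # isolate the line first
line = img.get_regression([(255, 255)], robust=True)   # Theil-Sen fit
if line:
    img.draw_line(line.line(), color=127)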
These two new classes allow you to record image data for later viewing
at the same speed the image data was recorded. Unlike GIF/MJPEG the
image data is stored on the file system completely uncompressed in
native frame buffer format making super fast reading and writing
possible. Recording VGA Grayscale at ~13 FPS is possible along with
playing it back. (That's about 30 Mb/s folks).
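Rough usage sketch (class and method names as I recall them from the example
scripts of that era; treat them as approximate):
import sensor, image
sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)
writer = image.ImageWriter("/stream.bin")
for i in range(100):
    writer.add_frame(sensor.snapshot())   # raw frame buffer format, no compression
writer.close()
reader = image.ImageReader("/stream.bin")
img = reader.next_frame(copy_to_fb=True, loop=False)
reader.close()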
...
The motivation for writing these scripts is so that you can record video
of something like a line following track, take that video home, and work
on computer vision algorithms for that data.
These classes should make it a lot easier to use the camera at home now.
Moved structs along with image copying code from sensor into
framebuffer.c so that we can use the new copy_fb_to_jpeg_fb() function
in the image library for methods with "copy_to_fb" so that they update
the IDE preview when called.
Also, I noticed that the MAIN_FB_SIZE() value is not calculated
correctly in all cases. Will fix later. Trying to keep this commit clean
for just the refactoring.
All changes have been tested.
With the new frame rate speed increase folks will be asking for smaller
resolutions to get 85 FPS or so when running an algorithm. This commit
adds all scaled modes of frame sizes we already support. We should be
good now on frame sizes for the present and future now.
Todo - skip frames does not run long enough anymore for auto white
balance and gain to stabilize before they are turned off in some scripts.
This needs to be adjusted.
Frame rate now can hit 30 FPS when JPEG compression is off. Merging of
lines is perfected too which greatly reduces the noise output. Also,
lines are now objects so you can get their values in an easy way.
We now have a nice and fast malloc system that easily offers 300KB+
dynamic memory... No need to use xalloc anymore except when we're
transferring objects to MP memory space.
The user can now call compressed_for_ide() and compress_for_ide() on an
image to make a jpeg compressed image formatted for transmission over a
data link other than USB. Note that OpenMV IDE will automatically handle
one of these compressed images ending up in the frame buffer and display
it like normal.
To send the image data the user can do:
print(img.compress_for_ide(), end='')
print(img.compressed_for_ide(), end='')
uart.write(img.compress_for_ide())
uart.write(img.compressed_for_ide())
and etc. As mentioned above, compress() compresses the image in place.
And that in place compressed image will then end up in the jpeg buffer.
OpenMV IDE will automatically handle decoding these special compressed
images when this happens.
All variations of the above code have been tested and are working.
Main ZBar code, breaking the commit up because the main file is big.
I will refactor UMM alloc out of apriltag.c and zbar.c once I'm
finished with this commit stream.
ZBar integration gives us support for basically all 1D linear barcodes.
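Usage sketch (find_barcodes() per the current docs; the windowing values are
just an example of giving 1D codes enough horizontal resolution):
import sensor
sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.VGA)
sensor.set_windowing((640, 80))        # a wide strip is enough for 1D barcodes
img = sensor.snapshot()
for code in img.find_barcodes():
    print(code.payload(), code.type())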
* Delay the FB size check and corrections to snapshot(). If the frame doesn't
fit FB it gets cropped for GS, or the sensor is switched to bayer for RGB.
Everything works. Running out of memory is fixed and the rotation value
is valid now. For 320x240 operation on the STM32H7 we're going to need
on the order of 1 MB in the entire frame buffer. The code is designed to
handle us getting this amount of memory without any new changes for
320x240 support.
This file includes all of the relevant header/source files from the
april tag library merged into one big file. Additionally, it also
includes heap/quicksort code. I've done the work of going through
the april tag library line by line and fixing it to use fb_alloc,
floats, and our fast math functions.
Anyway, I'm sending this massive file by itself first since it's so
big. Note that we might in the future want to pull things out of this
file for our own use later if we need linear algebra support.
I also tested the firmware for about an hour to make sure there was no
stack leak.
Note that I prefer for fb_free() to still be called versus
fb_free_till_mark() doing that for you in the code.
For functions without this fix they will just free the entire fb_alloc
stack when an exception happens. For functions with this fix they will
only free up to and including the mark. Since there are no places in the
firmware where you could start building a second fb_alloc stack when one
is already in place this point is moot currently. But, if we do
something like that in the future the problem will have already been
solved.
Any new code or re-worked code should use the mark function.
Speed up the algorithm by fixing the abs() issue. Do not use that
function in any of your code. It by itself cut the speed of the code
in half. I don't know what's in that function but I'm guessing it does
ABS of a float using ints or something.
I made the zoom parameter functional now too so you can use lens_corr to
zoom in on the image. Argument parsing is handled too. Finally, I
updated the only script where this is used.
Note that I'm able to get more than 10 FPS at 160x120 on the M4 and 15
FPS at 160x120 on the M7. Previously this was at about 5 FPS and 7.5 FPS
respectively.
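Usage sketch (lens_corr() with the zoom keyword described above; values are
illustrative):
import sensor
sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QQVGA)
img = sensor.snapshot()
img.lens_corr(strength=1.8, zoom=1.2)   # un-fisheye the image and zoom in slightly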
* Detect when VBUS is connected and wait for enumeration, the IDE
timeout is only started after enumeration.
* A 2s timeout for enumeration is used so the cam doesn't get stuck
if it's connected to a charger or a power bank.
We now have a method to get the normalized histogram of an image
patch. The histogram is returned as an object with methods too. You can
then get the stats off of the histogram or just get the CDF of it. The
CDF is particularly useful for automatically changing the color
tracking bounds.
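Hedged usage sketch (get_histogram()/get_percentile() per the current docs;
the commit-era names may differ slightly):
import sensor
sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)
img = sensor.snapshot()
hist = img.get_histogram(roi=(40, 30, 80, 60))
print(hist.get_statistics().mean())
# Use the CDF to auto-tune tracking bounds: keep the middle 80% of pixels.
lo = hist.get_percentile(0.1).value()
hi = hist.get_percentile(0.9).value()
print(lo, hi)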
* This function filters keypoints far from the centroid, it's very useful for finding an accurate bounding box for an object.
If a bounding box for the object is not needed, the centroid can be used instead since it's not affected too much by outliers.
* The filter finds the centroid of all the previously cross-matched keypoints, then finds the mean, variance and standard deviation,
and filters out keypoints whose distance from the centroid is higher than the standard deviation.
The new API is backwards compatible with the previous one except for
advanced features. The new blob code uses a flood fill algorithm that is
3x faster at filling out blobs than the previous code. On the M7 the
performance cap of 30 FPS is usually reached.
Additionally, blobs are objects with named attributes now so you don't
have to index access them anymore. However, index access is still
supported.
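Usage sketch showing the named attributes (the LAB thresholds are just an
example):
import sensor
sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
img = sensor.snapshot()
for blob in img.find_blobs([(30, 100, 15, 127, 15, 127)], pixels_threshold=100):
    img.draw_rectangle(blob.rect())
    img.draw_cross(blob.cx(), blob.cy())   # named attributes instead of indexing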
* Added pooling functions to make getting small images easy. set_binning
works too... but, it zooms in way too much. Pooling functions allow you to
shrink the image while not zooming in (see the usage sketch after this list).
* To make the pooling functions easy to use I created a version that
pools the image out of place and one that pools the image in place. The
in-place pooling function can work on the frame buffer (see edits to
sensor.c)
* I added the code to do hann windowing to the FFT lib. However, I
commented it out after it improved performance by basically zero.
Specialized windowing stuff will only come in handy for folks trying to
tune their algorithm... not in general for everything.
* I added subpixel resolution for the phase correlation code. You can
now track the image movement really precisely. Additionally, I fixed up
the displacement outputs to give expected results. I also added a QoR
output for the displacement code so that you can know when the results
are bad.
* Finally, an example script has been added to show off the features.
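Usage sketch for the pooling functions mentioned in the first bullet
(mean_pool()/mean_pooled() as in the current docs; hedged):
import sensor
sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QVGA)
img = sensor.snapshot()
small = img.mean_pooled(4, 4)   # out-of-place: new 80x60 image on the heap
img.mean_pool(4, 4)             # in-place: shrinks the frame buffer itself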
There were some mistakes, they are fixed now. FFT 1D and 2D work
flawlessly. No problems with that code anymore.
As for phase correlation I need to study how to interpret the output
better. The function generates noisy results once you move the image too
far and I'm not quite sure if I have the code right for detecting
positive and negative displacements.
The heart of the 1D FFT works. I tested this on the PC. However, 2D FFTs
may have issues and the phase correlation algorithm does not generate
the expected results. That said, most of the work is done. Stuff just
needs to be debugged.
The FFT lib is designed to handle up to 1024 point real FFTs and 512
complex FFTs. As for 2D FFTs, we can do up to 64x64 pixels. After which,
we don't have enough RAM to handle them because they use up about 128KB
each.
Things to do... the 2D FFT needs to be verified. So, we need to run an
image through it and then back again to verify that there are no
problems. Then we need to compare the 2D FFT output with another 2D FFT
algorithm on the PC...
Once the FFTs are known to be good we then need to make sure the phase
correlation algorithm outputs the correct results. We need to test that with
multiple shifted images, etc.
Added the ability to turn AGC off. Kinda will need the ability to restore
AGC settings back to user specified ones in the future... but, this will
do for now.
Added the ability to turn AEC off. Objectively this function probably
won't be used. But, in low light situations it can help.
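Usage sketch (set_auto_gain()/set_auto_exposure() per the sensor module):
import sensor
sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(30)            # let AGC/AEC settle on something reasonable
sensor.set_auto_gain(False)       # then freeze gain
sensor.set_auto_exposure(False)   # and exposure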
Added get_fb() to allow you to get the last image snapshot returned.
There was some old exposure function in the code that was getting
optimized out. So, I deleted the unused methods that didn't have any code
in them and commented out the only method that did.
* Removed some unused descriptors, but mainly set the CDC interface number to (1)
same as MP, as Windows doesn't like different interface numbers for the same device.
Finished going through imlib.c.
-> Histeq uses fb_alloc now and has a hook for RGB histeq for when the
reverse YUV LUT is added (coming soon in the next PR).
Cleaned up py_helper.c/h
-> No functional changes. Just added some header info.
Finished going through py_image.c
* 1 - Finished general code cleanup and updating everything to using new
library functions. In particular, I updated the remaining find_*
functions with the new roi clipping code when they accept rois.
* 2 - Made blob stuff return an empty list when nothing is found so you
don't have to do an if on the returned value anymore.
* 3 - img subscr is more powerful now allowing image reading and
writing. I updated this because I had to use it to find a previous bug
with socket.send() for the WINC driver.
* 4 - Renamed find_eyes to find_eye. Because it just finds one eye.
* 5 - Other than that just general code cleanup to make functions look
consistent.
And yes, the changes have been tested. Face tracking, eye tracking, keypoints,
etc. all still work.
Future things todo before release:
1 - Change all LAB stuff to YUV.
2 - Add in reverse YUV->RGB LUT and update functions like Mode() to use
this so they don't generate messed up outputs, also histeq() too.
3 - Add any remaining sensor control functions like agc control.
* Added the ability to control the quality on JPEG functions... However,
due to our JPEG implementation this doesn't seem to help. 90% JPEG
quality images and regular images should be about equal. But, you can
see heavy degradation with 90% still. E.g. text is unreadable. Not
exactly sure why this is happening but it can be fixed later.
* Changed the compress() function to compressed(). Also, it now
compresses using FB_Alloc to prevent realloc issues when compressing.
* Added new compress() function. This function compresses an image in
place and if that image is the frame buffer then it will update the
frame buffer bpp value to reflect that the image was compressed. Users can use
this function to basically finalize the frame buffer and then pass the FB
to functions that need to send image bytes. The benefit of using this
function is that it should allow higher quality JPEGs and let everything
run at a faster speed while connected to the IDE.
I made this function to speed up WiFi. However, I encountered a bug with
the winc.send() method. It appears to zero the bytes it sends. I didn't
debug further except to verify that the image data became zero after
calling send.
* Changed subimg to copy.
* Made blend work the same way as all our other double image argument
functions.
* Changed blit to replace (the name of blit is way too esoteric). Replace
gives you the basic assignment op.
* Removed scale/scaled. I removed this code because we don't want to
encourage people to scale things and allocate additional images in
memory. I decided to keep copy() for completeness' sake... but, I don't
see anyone using it. (By completeness' sake I mean that we now have the
assignment op, copy op, etc. for an image object).
* Removed rainbow. This feature is built into the FIR module now.
Moving on, compress needs to be renamed to compressed and a new compress
function will need to be added.
The compress() function will compress the image (or frame buffer, etc)
and not return a new object. The compressed() function will return a new
object and not compress the original image.
The compress function will make it easier for users to compress images
once they are done working on them before sending the image some where.
I don't see compressed() being used much after adding the
compress() function. Since the compress() function won't use up heap
space this makes it very good.
Removed micropython code from the image library. Also, blobs are now 10
tuple values by default. The multilist thing has been removed from
blobs and it will return just a list of blobs instead of a tree of
lists.
Filter functions still work too.
Pixels, centroid, and orientation are calculated in the blob code now.
As for threshold, it is no longer needed (plus, it required storing a
secondary image in RAM which isn't really something we can handle).
Blob tracking has now been updated to work without requiring prior
segmentation of the image. You can still run it on a segmented image,
but, that is not needed anymore.
Use the copy color feature of the OpenMV IDE to get a color in the
image. Once you have that you can then pass the color to find_blobs which
will output a tuple of lists of blobs for each color. By default, all
blobs less than 1/1000th of the image are filtered out, however, you can
add a custom filter function which gets the image and the blob about to
be added to the list and you can decide to filter it or not.
For marker tracking, we now have a function called find markers which
basically merges all the blobs found by find blobs into one list of
blobs. Each new blob will have a color code value which will tell you
what colors are part of that blob. We support tracking up to 30 unique
colors this way.
Mean filter -> Fast and easy to use. This will likely be the only filter
that gets a lot of action on the M4.
Median filter -> Works really well, but, slow. On grayscale at 160x120
you can get about 10 FPS with it for a 3x3 kernel. That said, it's still
slow. Also, the code only works for 3x3 and 5x5 kernels.
About the previous histogram filter... technically, that filter should be
better. However, it suffers from a startup cost. The operation of finding
the median point in the histogram costs too much to compute. This is
what causes it to be slow. On very large kernels it will be faster than
the sorting median algorithm I put up... but, large kernels will be too
slow for anyone to use anyway. The paper Ibrahim linked to about it
showed it being used for like 7x7 kernels and up... so, I think the
researcher who thought of the idea was really thinking about the
algorithm for large kernels.
Mode filter -> Works great on grayscale. Not so much on color. I think it
needs to be run on the LAB color space instead of the RGB color space. I
say this because it causes pretty strong artifacts around edges. When we
get more flash we'll be able to have a reverse lookup table for LAB to
make the mode filter better. Until then...
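Usage sketch (mean()/median()/mode() take a kernel-size parameter per the
current docs, where ksize=1 means a 3x3 kernel; hedged):
import sensor
sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)   # 160x120, as benchmarked above
img = sensor.snapshot()
img.mean(1)      # 3x3 mean filter (fast)
img = sensor.snapshot()
img.median(1)    # 3x3 median filter (slower, ~10 FPS above)
img = sensor.snapshot()
img.mode(1)      # 3x3 mode filter (best on grayscale)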
So I'm just adding a function to do it cleanly and efficiently. Call
skip_frames() after changing any camera settings to let them settle.
10 frames by default works fine. Tested it.
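Usage sketch (straight from the behavior described above):
import sensor
sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames()      # default: skip 10 frames so the settings settle
sensor.skip_frames(30)    # or skip a specific number of frames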
Has a bias value that allows you to control if it's really a midpoint,
min, max filter, or something in between. Run at 160x120 or lower. 320x240
is slow (seems to be the case for all convolutions at that res).
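Usage sketch (midpoint() with a bias keyword per the current docs; hedged):
import sensor
sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)
img = sensor.snapshot()
img.midpoint(1, bias=0.5)   # 3x3 kernel; bias=0.0 -> min filter, 1.0 -> max filter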
Added setters for these camera settings. AWB is necessary for color
tracking to work correctly. AGC still runs, which causes lighting
shifts. It may need to be disabled too. Not sure... if I want to do that
or not however, because without it lighting won't get normalized to
remain at a certain level. So, turning AGC off may cause issues in other
ways.
First, a few things:
The MLX 16x4 sensor has just too low of a resolution for mass appeal for
the price. The product is not going to sell very well. We need to look
into supporting sensors with a better res. Like the FLIR 1. The MLX
module was renamed to the "flir" module with this idea in mind.
The flir code now takes care of doing scaling and blending itself. I did
this to get rid of the user having to scale the image themselves and
blend it themselves. It's too easy to run out of memory given our current
ultra small heap. In general, anything that requires multiple images in
RAM has got to go. When we do another OpenMV Cam with external RAM in
the MB range then maybe such functions will be safe. But, right now they
are definitely not.
Anyway, moving on, I fixed a few bugs with the MLX math code. But, for
the most part it was correct. I also added recommended polling code for
brownouts as required by the datasheet.
Last, I designed this code like the LCD code to support a type value
when inited. This will allow the system to use a different sensor in the
future without any API changes to the user.
I will add test scripts for this next. Basic usage follows:
import flir
flir.init()
flir.display_ir(sensor.snapshot())
And that's it. Super easy. If the user wants the raw temp values they
can use flir.read_ir() to get the ta and to values. The display function
has hidden alpha and scale arguments for controlling blending and the
min/max scaling.
The previous way we worked out scaling kinda sucked... it was a good
shot, but, controllable min and maxes that autoscale by default just
work better. If the user knows the temp range then they can just set the
min and max.
Anyway, longest commit ever done.
File reading is running ultra fast now. We're getting that SD card speed
the STM32 promised now. The file buffer commands have been updated to
alloc as much available memory as possible so that as much of a file as
possible is read in at once to speed things up. This works really great.
Note however, while the file buffer is active you have to use the file
buffer versions of tell and size. Spent a few hours on tracking down an
error related to not using the buffered versions.
All file write functions now use fb_alloc to go much faster. Writes are
re-directed to the extra frame buffer RAM and are grouped until they can
be written in a massive multi-block write to the SD card. We get the
best SD card write speed by doing things this way.
Ideally we'd want to buffer the whole file... but, this is about as good
as we're going to get for now.
Going to fix reading functions to use the same buffer next.
The built-in mjpeg module allows you to record videos seamlessly. It
will automatically compress the frame buffer using the extra space in the
main ram. So... you don't have to pass it jpeg images. Gets 7 FPS at
320x240 while connected to the computer too (it has to compress the
frame twice in this situation).
Anyway, the module works like Gif.
Now you can just grab all the free ram in the frame buffer in one go.
This fixes problems figuring out how many lines to alloc. Will update line
op code with this new info later.
Color gifs look very good for how bad you'd expect them to be with just
7 bits of color (rgb232) - quite amazing. Also, I hardened the gif
module to make it "user ready".
You can now get the color stats for an area in the image. The stats
function returns the mean, median, mode, min, max, st_dev,
lower_quartile, and upper_quartile.
This function allows you to automate binary and threshold functions
based on what's in the image.
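Hedged usage sketch (the stats method is get_statistics() in the current
docs; the commit-era name may differ):
import sensor
sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)
img = sensor.snapshot()
s = img.get_statistics(roi=(40, 30, 80, 60))
print(s.mean(), s.median(), s.mode(), s.min(), s.max(), s.stdev())
img.binary([(s.mean(), 255)])   # e.g. auto-threshold at the mean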
The morph function lets you convolve the image with a kernel. It's
decently fast right now. But, in the future we'll have to optimize it by
a lot (unrolling loops, using SIMD instructions, etc.).
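Usage sketch of morph() with a 3x3 high-pass (edge detection) kernel (the
kernel values are just an example):
import sensor
sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)
kernel = [-1, -1, -1,
          -1, +8, -1,
          -1, -1, -1]      # high-pass / edge detection kernel
img = sensor.snapshot()
img.morph(1, kernel)       # ksize=1 means a 3x3 kernel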
Anyway, along with morph I added an edge detection test script showing
how you can use a high pass filter on an image to get all the edges in
it. This is not as good as canny edge detection... but, it's about the
same and fast enough.
We'll need a Hough Transform system in the future to make edge detection
useful. Not sure how that will be implemented... so, that's going to be
far away for now.
The old code did not actually implement the erode and dilate kernels
correctly. However, it might have been a little faster because it avoided
the boundary problem.
In the future we can optimize all the kernel code to have different loops
for doing the edges of image versus the center. But, for now, this is
good enough. QVGA color tracking with kernels will be slow, but, the
speed can be improved with QQVGA resolution. Using a 3x3 kernel is
plenty fast. Larger ones are slower.
I also added the ability for you to set the threshold for erode and
dilate. This lets you make the kernel a little bit smarter so that it
won't erode or dilate a pixel unless the threshold is met. Meaning,
you'll be able to use erode to erode an image down to 1 pixel wide
lines.
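Usage sketch (erode()/dilate() with the threshold keyword described above;
the values are illustrative):
import sensor
sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)
img = sensor.snapshot().binary([(200, 255)])
img.erode(1, threshold=2)    # threshold tunes how aggressively pixels are eroded
img.dilate(1, threshold=2)   # and dilated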
All the work previously has been more or less leading up to supporting
this function. The line op function will open a file and execute a
function pointer on each line of the file opened to modify the frame
buffer.
It now figures out the file type from the file extension. If no file
extension is given it just saves the file as BMP if it's not a JPEG image
or JPEG if it's a JPEG image. If you specify an extension and the file is
not of that type then it will give you an error.
The new test_save.py should run until you reach the JPEG image part
where it quits due to lack of JPEG support natively on OV7725 boards.
Maybe JPEG mode should be supported by just compressing pictures?
There's not a lot of actual functionality changes from the last commit.
However, switching the basic wrapper library to just long_jump on
failure and moving all the state info to structs required changes to all
the base functions in the last commit. The rest of the changes are to
link in the new functionality and to get the code to compile (usbdbg.c
edits).
Next I'll work on a function which abstracts the problem of opening an
image up and executing a line by line function op on it. I already
worked the code out for that. But, it's not in this commit to keep
things streamlined.
* With the new integral moving window we can support face detection,
keypoints and template matching on QVGA frames. However, it was only
implemented and tested for face detection.
* Increasing the max integral frame now for easier testing.
Firmware will now automatically detect the appropriate file type and read
in that file type correctly.
Working on tying all of this stuff together next. It's getting a little
bit too complicated to deal with error cases. Need to add an error message
function layer.
RGB565 reading and writing is going to be slow. But, grayscale is going
to be going as fast as the system can go.
If Omnivision had just reversed the byte order of data sent to the
camera we wouldn't have this problem for RGB565.
Added BMP file format reading and writing support code and modified the
ppm code to match. Upper level glue code has been left intact to be
altered in future commits.
Tested save() and ppm writing functionality still works. More
comprehensive tests coming soon.
... Kinda concerned that standard image file formats might not cut it for
the speed we'd like to have when using image files in function calls. I
think only grayscale is going to be fast. All other formats require a
lot of prep work.
I think I may modify some of this low level stuff in the future to
autodetect if an entire grayscale image can be read in or written out
in one go to speed that stuff up.
The negate function gives you the ability to negate an image before
running difference on it. The difference function will subtract two images
from each other and return the abs() of the result.
I believe it would have been optimal to work on the RGB565 image in the
LAB color space. However, since we don't have an inverse LAB lut this is
not possible. If we could replace LAB with YUV then that would free up
space to have an inverse YUV table (YUV->RGB).
* Filter functions bypass the default line processing in sensor.c, and pre-process lines.
* Processing is done on the fly, i.e. filters are called after each line is received.
All the drawing functions have been updated to handle automatic clipping
when drawing offscreen and work with both grayscale and RGB565.
Additionally, all functions now accept color arguments.
I've also updated the example scripts with the new functions and tested
them out to make sure they work.
Additionally, I wrote a test suite for the drawing functions to make
sure they work.
* Use a scanning factor proportional to the current scale.
* Use the new integral moving window to allow two integral images
(sum and sum squared) for fast mean, variance and standard deviation.
* Higher FPS and more accurate detection.
* A new integral image implementation that uses a moving window.
* Integral image is computed in steps, each shift computes n new lines.
* This only requires (image_width * (feature_height+1) * 4) bytes.
* Allows Haar detector to run on QVGA, and allows a second squared
integral image for standard deviation calculations.
The alloc functions allow you to use the framebuffer as a storage space.
It's very simple but effective. You can alloc which puts some memory on a
stack... and then when you're done you can free which pops the stack.
Pops (frees) must be done in reverse order of pushes (allocs).
In general, functions should call the init code before using the stack.
It could be in a bad state.
Also, I added some wrappers for file system functions to make that stuff
easier. This will be used in the future.
With new RGB565<->RGB888 scaling. This included redoing the LAB/YUV/XYZ
tables. I translated the table gen code to python also and added
comments as to where the math came from.
And yes, I tested and compared the tables to make sure they weren't
broken. The tables are slightly different... but, if you look at the
progression of values loosely you'll see the triplets are very close to
each other when doing a compare. This is to be expected given I used a
slightly better scaling algo.
And modified the rainbow table so that the RGB888 to RGB565 translation
is done using a rounding technique versus hard floor. This is also used
for the RGB565<->RGB888 LUTs.
Additionally, I added a bunch of stuff to the image library to make
working with images easier. I will be using these helpers in the future.
Finally, I cleaned up trailing space in the font stuff (pet peeve).
Point didn't need many changes. However, for rect I made the merge
function a lot better so it won't alloc while merging, just free.
Additionally, I added a function to get the intersecting rectangle of an
image. This will be used for all functions that accept a subimg
argument. This function allows the user to basically pass any wild and
crazy rect they want and the function will find the intersecting area (if
it exists) and return just that to operate on. This is good for "do what
I mean" functionality versus "do what I say".
There were a lot of missing features in the array module. I added
quicksort based on the MP sort function and I expanded the array code so
you can do stuff like take() which lets you get an object from an array
and easily put it into another array.
I also fixed the "struct array" problems in the code. Anonymous structs
have to go.
It was previously set to 10 seconds... since the timeout is in ms. Now
it's at 1 second. This represents 100 clocks at 100KHz I2C. Also, I
noticed general call mode was being set for the I2C which is not at all
something we want (the ability to address multiple devices at once).
I tested the changes with all my cameras. No problems. This was 4 units
(2 being the original protos).
0 bytes and don't fail if you do that. Additionally, I added some
comments on behavior. (I studied what the gc functions did extensively
to know the behavior of this stuff). All changes have been tested with
code that does memory allocs.
* Add HAL_DCMI_Start_DMA_MB to allow line by line transfers for
raw frames using DMA double buffering feature.
* This means bigger grayscale resolutions that would not otherwise
fit into RAM.
* YUV to Grayscale conversion on the fly (as the frame is being read).
* It's possible to perform differencing (and maybe JPEG) on the fly.
* Additionally, FPS for grayscale should be exactly like RGB
(since there's no additional step after capturing the frame)
* Set the address of the DMA transfer to addr + offset to allow JPEG
Compression of the framebuffer without overwriting image pixels.
* This saves 1KB of stack and conditionals in jpeg_put_bytes/char.