Just doing one big commit/PR here since I noticed that breaking it up
causes issues.
Anyway, these fixes give us GOOD/WORKING/FAST optical flow now on the
OpenMV Cam M7. A number of changes were made to the optical flow
scripts. You now have absolute and differential estimation example
scripts. Additionally, you have the ability to measure rotation and
scale changes too. Linear/Log Polar conversion was added for this. Users
may use the new code for generic image manipulation too. Finally, I
updated the power-of-2 resolutions since you actually HAVE to use them
with optical flow for the phase correlation code to work correctly.
I have some more advanced scripts coming after this. But this commit is
already getting kinda large, so I'm stopping it here.
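For reference, here's a minimal sketch of what the new phase correlation
flow looks like from a script, assuming a find_displacement() call that
returns an object with x_translation()/y_translation()/rotation()/scale()
methods, a logpolar keyword for rotation/scale, and the B64X64 power-of-2
frame size; exact names may differ slightly in the tree:

    import sensor, time

    sensor.reset()
    sensor.set_pixformat(sensor.GRAYSCALE)
    sensor.set_framesize(sensor.B64X64)   # power-of-2 resolution: required for phase correlation
    sensor.skip_frames(time=2000)
    clock = time.clock()

    ref = sensor.snapshot().copy()        # reference frame for absolute estimation

    while True:
        clock.tick()
        img = sensor.snapshot()
        d = img.find_displacement(ref)                 # sub-pixel translation vs. the reference
        r = img.find_displacement(ref, logpolar=True)  # rotation/scale via the log polar conversion
        print("dx=%0.2f dy=%0.2f rot=%0.2f scale=%0.2f fps=%0.1f" %
              (d.x_translation(), d.y_translation(),
               r.rotation(), r.scale(), clock.fps()))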
* Textual register cleanup.
No functional changes.
* Add exposure control support.
You can now set the exposure for the camera in microseconds (versus an
opaque unknown value previously). First, we have a new method called
get_exposure_us() which gets the exposure time in microseconds. This
lets you determine what the auto exposure algorithm set the exposure
time to. Second, the previously implemented set_auto_exposure() method,
which allows you to turn AEC on and off, accepts an exposure_us keyword
argument when you turn AEC off to manually control the exposure.
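For example, a minimal sketch of the intended usage (parameter values
are arbitrary, and the time-based skip_frames is assumed; use a frame
count on older firmware):

    import sensor

    sensor.reset()
    sensor.set_pixformat(sensor.RGB565)
    sensor.set_framesize(sensor.QVGA)
    sensor.skip_frames(time=2000)                 # let auto exposure settle first

    us = sensor.get_exposure_us()                 # what AEC decided on
    print("AEC exposure:", us, "us")

    # Turn AEC off and lock the exposure at half that value (arbitrary example).
    sensor.set_auto_exposure(False, exposure_us=us // 2)

    while True:
        sensor.snapshot()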
The next commit will add support for other sensor types.
* Cleanup register formatting.
No functional changes.
* Add exposure control support for the OV2640.
Register access for this chip is a PITA.
* Formatting Cleanup.
No functional changes.
* Add exposure control for ov9650.
Just doing it for all sensors.
* Add missing factor of 2.
* Added exposure control for the MT9V034.
* Add exposure control example.
Works well on the OV7725.
This fix allows "copy_to_fb" with a different resolution than the
current frame buffer to work. It also allows the frame buffer to be
resized, etc. In particular, the pooling methods I added for optical
flow work again... and you'll be able to scale the frame buffer too.
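As a hedged example of what this enables (the copy_to_fb keyword on the
image.Image() constructor and the file path are assumptions for
illustration):

    import image

    # Load an image whose resolution differs from the current frame buffer straight
    # into the frame buffer; the IDE preview then shows it via copy_fb_to_jpeg_fb().
    # "/example.bmp" is a placeholder path.
    img = image.Image("/example.bmp", copy_to_fb=True)
    print(img.width(), img.height())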
Moved structs along with the image copying code from sensor into
framebuffer.c so that we can use the new copy_fb_to_jpeg_fb() function
in the image library for methods with "copy_to_fb", letting them update
the IDE preview when called.
Also, I noticed that the MAIN_FB_SIZE() value is not calculated
correctly in all cases. Will fix later. Trying to keep this commit clean
for just the refactoring.
All changes have been tested, too.
With the new frame rate speed increase folks will be asking for smaller
resolutions to get 85 FPS or so when running an algorithm. This commit
adds all scaled modes of the frame sizes we already support. We should
be good on frame sizes for the present and the future now.
Todo - skip_frames does not run long enough anymore for auto white
balance and gain to stabilize before they are turned off in some
scripts. This needs to be adjusted.
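For instance, a minimal sketch of grabbing one of the smaller frame
sizes and checking the frame rate (QQVGA is just a representative
choice, and a time-based skip_frames is assumed to sidestep the todo
above):

    import sensor, time

    sensor.reset()
    sensor.set_pixformat(sensor.RGB565)
    sensor.set_framesize(sensor.QQVGA)   # one of the smaller/scaled frame sizes
    sensor.skip_frames(time=2000)        # time-based skip so AWB/gain can settle regardless of FPS
    clock = time.clock()

    while True:
        clock.tick()
        sensor.snapshot()
        print(clock.fps())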
* Delay the FB size check and corrections to snapshot(). If the frame doesn't
fit the FB, it gets cropped for grayscale, or the sensor is switched to bayer for RGB.
* Added pooling functions to make getting small images easy. set_binning
works too... but it zooms in way too much. The pooling functions allow
you to shrink the image without zooming in.
* To make the pooling functions easy to use I created a version that
pools the image out of place and one that pools the image in place. The
in-place pooling function can work on the frame buffer (see edits to
sensor.c).
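A minimal sketch of the two flavors, assuming the mean_pool()/mean_pooled()
names (the midpoint variants work the same way):

    import sensor

    sensor.reset()
    sensor.set_pixformat(sensor.GRAYSCALE)
    sensor.set_framesize(sensor.QVGA)    # 320x240
    sensor.skip_frames(time=2000)

    img = sensor.snapshot()

    small = img.mean_pooled(4, 4)        # out of place: returns a new 80x60 image
    img.mean_pool(4, 4)                  # in place: shrinks the frame buffer itself to 80x60
    print(small.width(), small.height(), img.width(), img.height())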
* I added the code to do Hann windowing to the FFT lib. However, I
commented it out after it improved performance by basically zero.
Specialized windowing will only come in handy for folks trying to
tune their algorithm... not in general for everything.
* I added subpixel resolution for the phase correlation code. You can
now track image movement really precisely. Additionally, I fixed up
the displacement outputs to give expected results. I also added a QoR
(quality of results) output for the displacement code so that you can
know when the results are bad.
* Finally, an example script has been added to show off the features.
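For example, a differential (frame-to-frame) sketch using the QoR
output to reject bad results (the 0.1 threshold is arbitrary and the
API names are assumed as in the earlier sketch):

    import sensor

    sensor.reset()
    sensor.set_pixformat(sensor.GRAYSCALE)
    sensor.set_framesize(sensor.B64X64)
    sensor.skip_frames(time=2000)

    prev = sensor.snapshot().copy()

    while True:
        img = sensor.snapshot()
        d = img.find_displacement(prev)
        if d.response() > 0.1:           # QoR check: low response means the result is untrustworthy
            print("dx=%0.2f dy=%0.2f" % (d.x_translation(), d.y_translation()))
        prev = img.copy()                # keep this frame (heap copy) as the next reference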
Added the ability to turn AGC off. We'll kinda need the ability to restore
AGC settings back to user-specified ones in the future... but this will
do for now.
Added the ability to turn AEC off. Objectively this function probably
won't be used. But in low-light situations it can help.
Added get_fb() to allow you to get the last image snapshot returned.
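A short sketch of these three additions together (the
set_auto_gain()/set_auto_exposure() names are assumed to match the
current sensor module):

    import sensor

    sensor.reset()
    sensor.set_pixformat(sensor.RGB565)
    sensor.set_framesize(sensor.QVGA)
    sensor.skip_frames(time=2000)

    sensor.set_auto_gain(False)        # AGC off
    sensor.set_auto_exposure(False)    # AEC off (mostly useful in low light)

    sensor.snapshot()
    img = sensor.get_fb()              # the last snapshot that was taken (or None)
    if img:
        print(img.width(), img.height())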
There was some old exposure code that was getting optimized out. So, I
deleted the unused methods that didn't have any code in them and
commented out the only method that did.
* Added the ability to control the quality on JPEG functions... However,
due to our JPEG implementation this doesn't seem to help. 90% JPEG
quality images and regular images should be about equal. But, you can
still see heavy degradation at 90%. E.g. text is unreadable. Not
exactly sure why this is happening but it can be fixed later.
* Changed the compress() function to compressed(). Also, it now
compresses using FB_Alloc to prevent realloc issues when compressing.
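Usage sketch (the 90% value just mirrors the example above):

    import sensor

    sensor.reset()
    sensor.set_pixformat(sensor.RGB565)
    sensor.set_framesize(sensor.QVGA)
    sensor.skip_frames(time=2000)

    img = sensor.snapshot()
    jpg = img.compressed(quality=90)   # out of place: returns a new JPEG image, img is untouched
    print(jpg.size())                  # compressed size in bytes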
* Added new compress() function. This function compresses an image in
place and, if that image is the frame buffer, it will update the
frame buffer bpp value to reflect that the image was compressed. Users
can use this function to basically finalize the frame buffer and then
pass the FB to functions that need to send image bytes. The benefit of
using this function is that it should allow higher quality JPEGs and let
everything run at a faster speed while connected to the IDE.
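A minimal sketch of the intended flow (the quality value is arbitrary):

    import sensor

    sensor.reset()
    sensor.set_pixformat(sensor.RGB565)
    sensor.set_framesize(sensor.QVGA)
    sensor.skip_frames(time=2000)

    img = sensor.snapshot()
    img.compress(quality=90)   # in place: the frame buffer now holds JPEG data and its bpp is updated
    # img can now be handed to whatever needs to send image bytes (e.g. a socket send),
    # and the IDE preview streams the already-compressed buffer instead of re-compressing it.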
I made this function to speed up WiFi. However, I encountered a bug with
the winc.send() method. It appears to zero the bytes it sends. I didn't
debug further except to verify that the image data became zero after
calling send.
Added setters for these camera settings. Being able to disable AWB is
necessary for color tracking to work correctly. AGC still runs, which
causes lighting shifts. It may need to be disabled too. Not sure if I
want to do that or not, however, because without it lighting won't get
normalized to remain at a certain level. So, turning AGC off may cause
issues in other ways.
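Sketch of the intended use before color tracking (the thresholds are
placeholders, and the find_blobs() LAB-tuple signature shown is an
assumption based on the current API):

    import sensor

    sensor.reset()
    sensor.set_pixformat(sensor.RGB565)
    sensor.set_framesize(sensor.QVGA)
    sensor.skip_frames(time=2000)          # let AWB/AGC settle on the scene first
    sensor.set_auto_whitebal(False)        # then lock white balance so colors stay stable
    # sensor.set_auto_gain(False)          # optionally lock gain too (see the trade-off above)

    while True:
        img = sensor.snapshot()
        # (30, 100, 15, 127, 15, 127) is a placeholder LAB threshold, not a tuned value.
        for blob in img.find_blobs([(30, 100, 15, 127, 15, 127)]):
            img.draw_rectangle(blob.rect())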
* Filter functions bypass the default line processing in sensor.c and pre-process lines.
* Processing is done on the fly, i.e. filters are called after each line is received.
* Add HAL_DCMI_Start_DMA_MB to allow line by line transfers for
raw frames using DMA double buffering feature.
* This means bigger grayscale resolutions that would not otherwise
fit into RAM.
* YUV to grayscale conversion on the fly (as the frame is being read).
* It's possible to perform differencing (and maybe JPEG) on the fly.
* Additionally, FPS for grayscale should be exactly the same as RGB
(since there's no additional step after capturing the frame).
* Set the address of the DMA transfer to addr + offset to allow JPEG
compression of the framebuffer without overwriting image pixels.
* This saves 1KB of stack and conditionals in jpeg_put_bytes/char.
* Add slave address to sensor struct.
* Pass slave address to every SCCB_Read/Write function.
* Pass a pointer to the sensor struct to sensor functions.