Has a bias value that allows you to control whether it's really a midpoint, min, or max filter, or something in between. Run at 160x120 or lower; 320x240
is slow (which seems to be the case for all convolutions at that resolution).
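For example, assuming the filter ends up exposed on the image object as midpoint() with a bias keyword (the method and parameter names here are my guess, not confirmed API), usage would look roughly like:

import sensor

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)  # 160x120 -- 320x240 is too slow for this

while True:
    img = sensor.snapshot()
    # bias=0.0 behaves like a min filter, bias=1.0 like a max filter,
    # and bias=0.5 is a true midpoint filter.
    img.midpoint(1, bias=0.5)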
Added setters for these camera settings. Being able to control AWB is necessary for color
tracking to work correctly. AGC still runs, which causes lighting
shifts, so it may need to be disabled too. I'm not sure if I want to do that,
however, because without it lighting won't get normalized to
a consistent level. So, turning AGC off may cause issues in other
ways.
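A minimal sketch of using the new setters for this, assuming they're exposed as sensor.set_auto_whitebal() and sensor.set_auto_gain() (names are my guess at the API):

import sensor

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QQVGA)
# Lock white balance so color thresholds stay stable for color tracking.
sensor.set_auto_whitebal(False)
# Optionally lock gain too -- stops lighting shifts, but also gives up
# automatic brightness normalization.
sensor.set_auto_gain(False)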
First, a few things:
The MLX 16x4 sensor's resolution is just too low for mass appeal at
the price; the product is not going to sell very well. We need to look
into supporting sensors with better resolution, like the FLIR One. The MLX
module was renamed to the "flir" module with this idea in mind.
The flir code now takes care of scaling and blending itself. I did
this so the user doesn't have to scale and blend the image
themselves. It's too easy to run out of memory given our current
ultra-small heap. In general, anything that requires multiple images in
RAM has got to go. When we do another OpenMV Cam with external RAM in
the MB range, then maybe such functions will be safe. But, right now they
are definitely not.
Anyway, moving on, I fixed a few bugs in the MLX math code, but for
the most part it was correct. I also added the recommended polling code for
brownouts, as required by the datasheet.
Last, I designed this code like the LCD code to support a type value
at init. This will allow the system to use a different sensor in the
future without any API changes for the user.
I will add test scripts for this next. Basic usage follows:
import sensor, flir
sensor.reset()
sensor.set_pixformat(sensor.RGB565)
flir.init()
flir.display_ir(sensor.snapshot())
And that's it. Super easy. If the user wants the raw temperature values, they
can use flir.read_ir() to get the ta and to values. The display function
has hidden alpha and scale arguments for controlling blending and the
min/max scaling.
The previous way we worked out scaling kind of sucked... it was a good
shot, but controllable min and max values that autoscale by default just
work better. If the user knows the temperature range, they can just set the
min and max.
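So the fuller usage would be shaped something like the sketch below. Note that the alpha/scale keyword names, the values passed to them, and read_ir() returning a (ta, to) pair are assumptions based on the description above, not confirmed signatures:

import sensor, flir

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
flir.init()

ta, to = flir.read_ir()  # raw ambient (ta) and object (to) temperatures

img = sensor.snapshot()
# alpha controls the blend strength; scale pins the min/max temperatures
# used for the color mapping instead of letting them autoscale.
flir.display_ir(img, alpha=128, scale=(20, 40))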
Anyway, longest commit ever done.
This is a Python module driver for the BLE module. It puts the module into
a mode that's good for machine interfacing and handles parsing commands
for you. Additionally, it gives you access to the low-level serial
port.
Users who want to use this driver will need to read and understand the
TruConnect API to know what commands they can execute. This driver simply
makes executing commands easy: the user just calls the
command() function with the strings listed in the TruConnect API and
gets the response from the command back as a bytes object.
Once the user has executed the necessary commands to set up the BLE
connection, they can then do:
ble.command("str")
to put the BLE module into streaming mode, and then they can just
directly access the serial port via:
ble.uart().write(<data>)
and so on.
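Putting it together, a typical session might look like the sketch below. The init() call and the specific command strings are placeholders I'm assuming for illustration -- check the TruConnect docs for the real setup commands your application needs:

import ble

ble.init()  # assumed entry point, mirroring the other module drivers

# Run whatever TruConnect setup commands your connection needs; each call
# returns the module's response as a bytes object.
print(ble.command("ver"))  # placeholder command string

# Switch to streaming mode, then talk over the serial port directly.
ble.command("str")
ble.uart().write("Hello over BLE!\n")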
File reading is running ultra fast now. We're getting the SD card speed
the STM32 promised. The file buffer commands have been updated to
allocate as much available memory as possible so that as much of the file
as possible is read in at once, which speeds things up. This works really great.
Note, however, that while the file buffer is active you have to use the file
buffer versions of tell and size. I spent a few hours tracking down an
error related to not using the buffered versions.
All file write functions now use fb_alloc to go much faster. Writes are
redirected to the extra frame buffer RAM and grouped until they can
be written in one massive multi-block write to the SD card. We get the
best SD card write speed by doing things this way.
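The actual change lives in the C firmware, but the strategy is simple enough to sketch in Python (a conceptual illustration only, not the real code):

class BufferedWriter:
    # Accumulate small writes in one big RAM buffer (standing in for the
    # fb_alloc'd frame buffer space) and flush them as a single large write.
    def __init__(self, f, bufsize):
        self.f = f
        self.buf = bytearray(bufsize)
        self.used = 0

    def write(self, data):
        data = memoryview(data)
        while len(data):
            n = min(len(data), len(self.buf) - self.used)
            self.buf[self.used:self.used + n] = data[:n]
            self.used += n
            data = data[n:]
            if self.used == len(self.buf):
                self.flush()

    def flush(self):
        # One multi-block write instead of many small ones gives the best
        # SD card throughput.
        self.f.write(self.buf[:self.used])
        self.used = 0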
Ideally we'd want to buffer the whole file... but, this is about as good
as we're going to get for now.
Going to fix reading functions to use the same buffer next.