* Filled in all the board control examples. Everything works except for
the DAC.
* Moved the test drawing scripts to the drawing dir, renamed them, and
added comments.
* Filled in all the image filter stuff. There are still some tests that
could be renamed, commented, and added to this folder, but I will do
that later.
* Fixed motion detection thresholds.
* Fixed LCD script comments.
* Fixed BLE return value.
Removed the MicroPython code from the image library. Also, blobs are now
10-value tuples by default. The multi-list structure has been removed
from blobs: you get a flat list of blobs instead of a tree of lists.
Filter functions still work too.
Pixels, centroid, and orientation are now calculated in the blob code.
As for threshold, it is no longer needed (plus, it required storing a
secondary image in RAM, which isn't really something we can handle).
Blob tracking has been updated to work without requiring prior
segmentation of the image. You can still run it on a segmented image,
but that is no longer necessary.
Use the copy color feature of the OpenMV IDE to grab a color from the
image. Once you have it, pass the color to find_blobs, which outputs a
tuple of lists of blobs, one list per color. By default, all blobs
smaller than 1/1000th of the image are filtered out; however, you can
add a custom filter function, which receives the image and the blob
about to be added to the list, and decide whether to keep it.
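A minimal sketch of that flow (the color value, the feature_filter
keyword name, and the blob tuple field order are all placeholders here;
see the example scripts for the real names):

import sensor

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)

# Color grabbed with the IDE's copy color feature (placeholder value).
RED = (255, 32, 32)

# Custom filter: gets the image and the candidate blob tuple and returns
# True to keep it. Here we keep blobs over 50 pixels (assuming the pixel
# count is field 4 of the 10-value tuple).
def blob_filter(img, blob):
    return blob[4] > 50

while True:
    img = sensor.snapshot()
    # One list of 10-value blob tuples per color passed in.
    for blobs in img.find_blobs([RED], feature_filter=blob_filter):
        for blob in blobs:
            img.draw_rectangle(blob[0:4])  # assumed: x, y, w, h fields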
For marker tracking, we now have a function called find_markers, which
basically merges all the blobs found by find_blobs into one list of
blobs. Each merged blob has a color code value telling you which colors
are part of that blob. We support tracking up to 30 unique colors this
way.
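A hedged sketch (again, the color values and the position of the color
code field in the blob tuple are assumptions):

import sensor

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)

RED = (255, 32, 32)   # placeholder colors from the IDE's copy color feature
BLUE = (32, 32, 255)

while True:
    img = sensor.snapshot()
    # find_markers merges the per-color lists from find_blobs into one list.
    for marker in img.find_markers(img.find_blobs([RED, BLUE])):
        code = marker[8]  # assumed: color code bitmask field
        if code & 0b11 == 0b11:
            pass  # this merged blob contains both RED and BLUE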
Moved the feature detection scripts into their own folders and added an
explicit frame_skip value per Ibrahim's request.
Finished working on the snapshot and video recording scripts for the
next release.
... From the CMUcam4 work I learned that people just want examples that
do "X" thing. So, in general, our examples should include a simple
script showing off a feature and then a more complex script that does
"X", where "X" is some app that a person would actually want. For
example, we'll get requests for face tracking with servos and movement
detection with servos. So, instead of answering this question a million
times with an example script, we'll just have examples for all kinds of
things people will want.
Gotta automate dealing with help support at the end of the day...
Tried to emulate Arduino's 11 folders... I'd prefer to have all the
shield scripts in one folder... but, that might not make sense. I don't
really want one script per folder, however. So, I might merge some more
stuff in the future. I have a grand idea here that will become evident
as I work through the examples.
Anyway, the current structure is not final. It will be in flux for a
little while.
As for Git history, folder history is the best we're going to get. Git
and GitHub don't seem to handle moves too well.
This is a Python module driver for the BLE module. It puts the module
into a mode that's good for machine interfacing and handles parsing
commands for you. Additionally, it gives you access to the low-level
serial port.
Users who want to use this driver will need to read and understand the
TruConnect API to know what commands they can execute. This driver
simply makes executing commands easy. The user just needs to call the
"command()" function with the strings listed in the TruConnect API, and
they will get the response back as a bytes object.
Once the user has executed the necessary commands to set up the BLE
connection, they can then do:
ble.command("str")
to put the BLE module into streaming mode, and then directly access the
serial port via:
ble.uart().write(<data>)
And so on.
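Putting it together, a hedged end-to-end sketch (the module and
constructor names are assumptions, and the command string is a
hypothetical TruConnect command):

import ble                         # assumed module name for this driver

b = ble.BLE()                      # assumed constructor
print(b.command("ver"))            # hypothetical TruConnect command string;
                                   # the response comes back as bytes
b.command("str")                   # switch the module into streaming mode
b.uart().write("Hello World!\n")   # then talk to the raw serial port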
The built-in mjpeg module allows you to record videos seamlessly. It
automatically compresses the frame buffer using the extra space in main
RAM, so you don't have to pass it JPEG images. It gets 7 FPS at 320x240
even while connected to the computer (where it has to compress each
frame twice). Anyway, the module works like Gif.
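A minimal recording sketch, assuming an Mjpeg class with add_frame() and
close() methods (mirroring how the Gif module is used):

import sensor, mjpeg, time

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)

m = mjpeg.Mjpeg("example.mjpeg")    # assumed class name
clock = time.clock()
for i in range(150):
    clock.tick()
    m.add_frame(sensor.snapshot())  # raw frame; compression happens internally
m.close(clock.fps())                # assumed: playback FPS passed at close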
Color gifs look very good for how bad you'd expect them to be with just
7 bits of color (rgb232) - quite amazing. Also, I hardened the gif
module to make it "user ready".
The morph function lets you convolve the image with a kernel. It's
decently fast right now, but in the future we'll have to optimize it a
lot (unrolling loops, using SIMD instructions, etc.).
Anyway, along with morph I added an edge detection test script showing
how you can use a high-pass filter on an image to get all the edges in
it. This is not as good as Canny edge detection... but, it's about the
same and fast enough.
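A sketch of the high-pass trick, assuming a morph(size, kernel)
signature where size=1 means a 3x3 kernel:

import sensor

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)

# 3x3 high-pass kernel: the center weight cancels the neighborhood sum,
# so flat regions go to zero and only edges survive.
kernel = [-1, -1, -1,
          -1, +8, -1,
          -1, -1, -1]

while True:
    img = sensor.snapshot()
    img.morph(1, kernel)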
We'll need a Hough transform system in the future to make edge detection
useful. Not sure how that will be implemented... so, that's going to be
far away for now.
The old code did not actually implement the erode and dilate kernels
correctly. However, it might have been a little faster because it
avoided the boundary problem.
In the future we can optimize all the kernel code to use different loops
for the edges of the image versus the center. But, for now, this is
good enough. QVGA color tracking with kernels will be slow, but the
speed can be improved by dropping to QQVGA resolution. A 3x3 kernel is
plenty fast; larger ones are slower.
I also added the ability for you to set the threshold for erode and
dilate. This lets you make the kernel a little bit smarter so that it
won't erode or dilate a pixel unless the threshold is met. Meaning,
you'll be able to use erode to thin an image down to 1-pixel-wide
lines.
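A hedged sketch of thresholded erode (the binary() call and the
erode(size, threshold=...) signature are assumptions):

import sensor

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)

img = sensor.snapshot()
img.binary([(200, 255)])   # segment first so erode works on set pixels
# With a 3x3 kernel (size=1), a set pixel survives only if more than 2 of
# its 8 neighbors are set, which thins thick regions toward 1-pixel lines.
img.erode(1, threshold=2)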