Everything except the DAC script works. That has to be fixed. Anyway, we
have a ton of examples for launch, so hopefully questions about how to
do things will be limited.
That said, the PYB module is still in a poor state. Some of it works and
some of it doesn't.
One day there won't be any fires to put out on this project and I can
stop working so hard.
* Filled in all the board control examples. Everything works except for
DAC.
* Moved test drawing scripts to the drawing dir, renamed them, and added
comments.
* Filled in all the image filter stuff. There are still some tests that
can be renamed, commented, and added to this folder, but I will do that
later.
* Fixed motion detection thresholds.
* Fixed LCD script comments.
* Fixed BLE return value.
* Changed subimg to copy.
* Made blend work the same way as all our other double-image-argument
functions.
* Changed blit to replace (the name blit is way too esoteric). Replace
gives you the basic assignment op (see the sketch after this list).
* Removed scale/scaled. I removed this code because we don't want to
encourage people to scale things and allocate additional images in
memory. I decided to keep copy() for completeness' sake... but I don't
see anyone using it. (By completeness' sake I mean that we now have the
assignment op, copy op, etc. for an image object.)
* Removed rainbow. This feature is built into the FIR module now.
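As a rough sketch of how the renamed calls might read in a script (the
method names come from the notes above, but the blend alpha keyword and
the exact signatures are assumptions):

    import sensor

    sensor.reset()
    sensor.set_pixformat(sensor.GRAYSCALE)
    sensor.set_framesize(sensor.QQVGA)

    img = sensor.snapshot()
    backup = img.copy()           # copy op - allocates a second image on the heap
    # ... do some work on img ...
    img.replace(backup)           # basic assignment op (formerly blit)
    img.blend(backup, alpha=128)  # same two-image argument style as the other
                                  # double-image functions (alpha kwarg assumed)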
Moving on, compress needs to be renamed to compressed, and a new compress
function will need to be added.
The compress() function will compress the image (or frame buffer, etc.)
in place and not return a new object. The compressed() function will
return a new object and not touch the original image.
The compress() function will make it easier for users to compress images
once they are done working on them, before sending the image somewhere.
I don't see compressed() being used much once compress() is added. Since
compress() won't use up heap space, that makes it very attractive.
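A minimal sketch of the intended difference (the quality keyword is an
assumption):

    import sensor

    sensor.reset()
    sensor.set_pixformat(sensor.RGB565)
    sensor.set_framesize(sensor.QQVGA)

    img = sensor.snapshot()

    # Out of place: leaves img untouched and returns a new JPEG image object,
    # which does take heap space.
    jpeg = img.compressed(quality=90)

    # In place: re-encodes img (or the frame buffer) as JPEG without allocating
    # a new object, so no extra heap is used.
    img.compress(quality=90)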
Removed the MicroPython code from the image library. Also, blobs are now
10-value tuples by default. The multi-list handling has been removed from
blobs, so find_blobs returns just a list of blobs instead of a tree of
lists.
Filter functions still work too.
Pixels, centroid, and orientation are calculated in the blob code now.
As for threshold, it is no longer needed (plus, it required storing a
secondary image in RAM which isn't really something we can handle).
Blob tracking has now been updated to work without requiring prior
segmentation of the image. You can still run it on a segmented image,
but that is not needed anymore.
Use the copy color feature of the OpenMV IDE to grab a color from the
image. Once you have that, you can pass the color to find_blobs, which
will output a tuple of lists of blobs, one list per color. By default,
all blobs smaller than 1/1000th of the image are filtered out; however,
you can add a custom filter function, which gets the image and the blob
about to be added to the list and lets you decide whether to keep it.
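Roughly, something like the sketch below, with the caveats that the filter
callback keyword name and the blob tuple indices used here are assumptions,
and the threshold values are placeholders:

    import sensor

    sensor.reset()
    sensor.set_pixformat(sensor.RGB565)
    sensor.set_framesize(sensor.QQVGA)

    # Color grabbed with the copy color feature of the IDE (placeholder values).
    red = (50, 70, 40, 70, 10, 50)

    def keep_blob(img, blob):
        # Called with the image and the blob about to be added to the list.
        # Return True to keep it - here, anything over 1/500th of the image.
        return blob[4] > (img.width() * img.height()) / 500  # blob[4]: pixel count (assumed index)

    img = sensor.snapshot()
    blobs = img.find_blobs([red], feature_filter=keep_blob)  # keyword name assumed
    for blob in blobs[0]:                 # one list of blobs per color passed in
        img.draw_cross(blob[5], blob[6])  # centroid x/y (assumed indices)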
For marker tracking, we now have a function called find_markers, which
basically merges all the blobs found by find_blobs into one list of
blobs. Each merged blob will have a color code value telling you which
colors are part of that blob. We support tracking up to 30 unique colors
this way.
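Continuing from the setup in the previous sketch, marker tracking might
look like this; whether find_markers consumes the find_blobs output
directly, and where the color code sits in the blob tuple, are both
assumptions:

    blue = (20, 40, -10, 10, -50, -20)   # second placeholder color

    img = sensor.snapshot()
    blobs = img.find_blobs([red, blue])  # one list of blobs per color
    markers = img.find_markers(blobs)    # merged into a single list of blobs
    for m in markers:
        print(m[9])  # color code saying which colors are part of this blob (index assumed)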
Mean filter -> Fast and easy to use. This will likely be the only filter
that gets a lot of use on the M4.
Median filter -> Works really well, but slow. On grayscale at 160x120
you can get about 10 FPS with a 3x3 kernel. That said, it's still slow.
Also, the code only works for 3x3 and 5x5 kernels.
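For reference, assuming the filters take a kernel-radius argument
(1 -> 3x3, 2 -> 5x5), which is an assumption about the final parameter
form:

    import sensor

    sensor.reset()
    sensor.set_pixformat(sensor.GRAYSCALE)
    sensor.set_framesize(sensor.QQVGA)  # 160x120

    img = sensor.snapshot()
    img.mean(1)      # 3x3 mean filter - the cheap option on the M4
    img.median(1)    # 3x3 sorting median - roughly 10 FPS at this resolution
    # img.median(2)  # 5x5 is the largest kernel the median code handles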
About the previous histogram-based median filter... technically, that
filter should be better. However, it suffers from a startup cost: finding
the median point in the histogram is too expensive to compute, and that
is what makes it slow. On very large kernels it will be faster than the
sorting median algorithm I put up... but large kernels will be too slow
for anyone to use anyway. The paper Ibrahim linked to about it showed it
being used for something like 7x7 kernels and up... so I think the
researcher who came up with the idea was really thinking about large
kernels.
Mode filter -> Works great on grayscale, not so much on color. I think it
needs to be run in the LAB color space instead of the RGB color space,
because right now it causes pretty strong artifacts around edges. When we
get more flash we'll be able to keep a reverse lookup table for LAB and
make the mode filter better. Until then...
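The same assumed kernel-radius convention would apply to the mode filter
(continuing the snippet above):

    img.mode(1)  # 3x3 mode filter - fine on grayscale, edge artifacts on RGB for now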
Moved feature detection scripts into their own folders and added an
explicit frame_skip value per Ibrahim's request.
Finished working on snapshot and video recording scripts for next
release.
... From the CMUcam4 work I learned that people just want examples that
do "X". So, in general, our examples should include a simple script
showing off a feature and then a more complex script that does "X", where
"X" is some app a person would want. For example, we'll get requests for
face tracking with servos and movement detection with servos. So, instead
of answering this question a million times with an example script, we'll
just have examples for all kinds of things people will want.
Gotta automate dealing with support requests at the end of the day...