* Wrap up more CDC functions. Note these were moved to common code
upstream, so in the future we'll only need to wrap one or two functions.
* Recover from text ringbuf overflow by resetting it.
* More efficient text ringbuf read/write.
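As a sketch of the overflow handling and block read/write (using a hypothetical
ringbuf_t, not the firmware's actual ring buffer API): on overflow the buffer is
simply reset, and data is moved with at most two memcpy() calls instead of one
byte at a time.

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    // Hypothetical text ring buffer; the power-of-2 size lets indexes wrap with a mask.
    typedef struct {
        uint8_t buf[256];
        size_t head;    // write index
        size_t tail;    // read index
    } ringbuf_t;

    #define RB_MASK(rb)  (sizeof((rb)->buf) - 1)
    #define RB_USED(rb)  (((rb)->head - (rb)->tail) & RB_MASK(rb))

    static void ringbuf_write(ringbuf_t *rb, const uint8_t *data, size_t len) {
        if (len > RB_MASK(rb)) {
            len = RB_MASK(rb);              // clamp oversized writes
        }
        if (len > RB_MASK(rb) - RB_USED(rb)) {
            rb->head = rb->tail = 0;        // overflow: reset and start clean
        }
        size_t idx = rb->head & RB_MASK(rb);
        size_t first = sizeof(rb->buf) - idx;
        if (first > len) {
            first = len;
        }
        memcpy(&rb->buf[idx], data, first);             // copy up to the end of the buffer
        memcpy(&rb->buf[0], data + first, len - first); // then wrap around
        rb->head = (rb->head + len) & RB_MASK(rb);
    }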
The GET_STATE command returns flags, the frame width, height, and size, and
the text buffer (up to 40 bytes) in a single 64-byte packet, to reduce the
protocol's bandwidth overhead.
The packet format is:
    word       word       word       word       2 words       40 bytes
    <flags>    <width>    <height>   <size>     <reserved>    <null-terminated text>
The flags are mostly reserved; only the following bits are defined:
    0x001  script running
    0x010  text buffer valid
    0x100  JPEG frame buffer ready
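For illustration, assuming 32-bit words (which matches the 64-byte total), the
packet maps onto a packed C struct like this sketch; the field layout follows
the description above, but the struct name and exact definition in the firmware
may differ.

    #include <stdint.h>

    typedef struct __attribute__((packed)) {
        uint32_t flags;         // 0x001 script running, 0x010 text buffer valid,
                                // 0x100 JPEG frame buffer ready
        uint32_t width;         // frame width
        uint32_t height;        // frame height
        uint32_t size;          // frame size in bytes
        uint32_t reserved[2];   // reserved words
        char     text[40];      // null-terminated text buffer
    } get_state_packet_t;       // hypothetical name

    _Static_assert(sizeof(get_state_packet_t) == 64, "GET_STATE packet must be 64 bytes");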
* DMA buffer regions are automatically rounded up to the next power of 2 via
the linker script. This ensures that these buffers, once rounded up, still fit
into their respective memories. It also ensures that when/if the MPU is used to
configure these regions, it does not have to round up the region sizes itself,
which could make an MPU region bigger than the DMA buffer it covers (see the
MPU sketch after this list).
* GC blocks can be rearranged in any order, including the main heap/first block.
This is very important for boards with limited RAM, as it avoids fragmenting the
large contiguous heap before it is actually needed (see the split-heap sketch
after this list).
* Moved VOSPI memory to its own section. The offset is no longer required, and the
linker script can detect overlaps.
* Renamed the GC heap memory section so that more than one heap can exist, and
added support for it in the common linker script. This change makes it easy to
add a second heap for malloc/libc if needed.
* For STM32 boards, the domain-specific DMA buffers can now be located anywhere
within their memory regions, as their MPU regions' base addresses and sizes are
all set via linker script variables. Previously, these were defined in headers,
and sections could easily have overlapped without warning.
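As a sketch of how the rounding and the linker-script variables fit together,
an ARMv7-M MPU region for a DMA buffer could be programmed from linker symbols
roughly as follows. The symbol names are hypothetical, and the CMSIS device
header is assumed to be a Cortex-M7 STM32 one.

    #include <stdint.h>
    #include "stm32h7xx.h"  // CMSIS device header (assumed target)

    // Hypothetical linker-script symbols: region base and power-of-2 size.
    extern uint8_t  _dma_buffer_start[];
    extern uint32_t _dma_buffer_size;   // absolute symbol; its address is the size

    static void dma_buffer_mpu_config(uint32_t region) {
        uint32_t base = (uint32_t) _dma_buffer_start;
        uint32_t size = (uint32_t) &_dma_buffer_size;
        // RASR.SIZE encodes log2(size) - 1; the linker script already rounded
        // the size up to a power of 2, so no extra rounding happens here.
        uint32_t size_field = (31U - __CLZ(size)) - 1U;

        ARM_MPU_Disable();
        MPU->RNR  = region;
        MPU->RBAR = base & MPU_RBAR_ADDR_Msk;
        MPU->RASR = (size_field << MPU_RASR_SIZE_Pos)
                  | (3U << MPU_RASR_AP_Pos)       // full access
                  | (1U << MPU_RASR_TEX_Pos)      // normal memory,
                  | MPU_RASR_S_Msk                // non-cacheable, shareable
                  | MPU_RASR_ENABLE_Msk;
        ARM_MPU_Enable(MPU_CTRL_PRIVDEFENA_Msk);
    }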
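And a sketch of registering GC blocks in an arbitrary order using MicroPython's
split-heap support (this assumes MICROPY_GC_SPLIT_HEAP is enabled and uses
hypothetical linker symbol names):

    #include "py/gc.h"  // MicroPython GC API

    // Hypothetical linker-provided bounds for two GC blocks in different memories.
    extern char _gc_block0_start[], _gc_block0_end[];   // small internal SRAM block
    extern char _gc_block1_start[], _gc_block1_end[];   // large contiguous heap

    void gc_heaps_init(void) {
        // Any block can be the first/main heap; here the small block goes first
        // so the large contiguous heap is not fragmented before it is needed.
        gc_init(_gc_block0_start, _gc_block0_end);
        gc_add(_gc_block1_start, _gc_block1_end);   // requires MICROPY_GC_SPLIT_HEAP
    }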
This patch decouples the MicroPython TF module from the TensorFlow library,
allowing support for more DL/ML libraries and engines in the future.
The ML backend has been completely redesigned; the model object can now be
passed directly to the backend, allowing it to initialize the model internally.
Additionally, the backend's state/memory is now persistent (surviving across
invocations), which improves inference speed by around 20% and supports models
that require persistent memory, such as LSTM.
Finally, the ML module has been mostly rewritten to handle model input/output
shapes and data properly, and to support models with multiple outputs.
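A rough sketch of what such a decoupled backend interface can look like; the
names and fields below are illustrative, not the module's actual definitions.

    #include <stddef.h>

    typedef struct ml_model ml_model_t;   // opaque model object owned by the ML module

    // Each backend (TFLM or another engine) implements this interface. The
    // model object is passed straight to the backend, which parses the model
    // data and keeps its own persistent state (tensor arena, interpreter, ...)
    // between invocations, so stateful models such as LSTMs keep working and
    // re-initialization costs are avoided.
    typedef struct {
        const char *name;
        int  (*init)(ml_model_t *model, const void *data, size_t size);
        int  (*invoke)(ml_model_t *model);
        void (*deinit)(ml_model_t *model);
    } ml_backend_t;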
The new built-in model system allows fine-grained control over which models
are built into the firmware image. This patch enables FOMO for all boards and
audio-processing models for boards with microphones.