Compare commits

...

14 Commits

Author SHA1 Message Date
Kwabena W. Agyeman
3a1aa26663
Merge 5482495ce3 into d038dc7ab8 2025-10-22 11:05:39 -07:00
Ibrahim Abdelkader
d038dc7ab8
Merge pull request #2874 from kwagyeman/kwabena/add_yolo_v2_v5_examples
scripts/examples: Add yolov2 and yolov5 template examples.
2025-10-22 19:36:55 +03:00
Ibrahim Abdelkader
7a46d0c82e
Merge pull request #2886 from kwagyeman/kwabena/fix_blazeface
scripts/examples: Fix blazeface and fomo examples.
2025-10-22 16:34:18 +03:00
Kwabena W. Agyeman
b897ec4f16 scripts/examples: Switch palm examples to static tuples. 2025-10-21 21:53:41 -07:00
Kwabena W. Agyeman
ce18e680b2 scripts/examples: Fix fomo example. 2025-10-21 21:48:18 -07:00
Kwabena W. Agyeman
74faef3a8e scripts/examples: Fix blazeface detector. 2025-10-21 21:48:08 -07:00
Kwabena W. Agyeman
781a7bf86f scripts/examples: Add yolov2 and yolov5 template examples. 2025-10-19 15:58:23 +04:00
Kwabena W. Agyeman
d2d1a9448f boards: Remove outdated YOLOV2 and YOLOV5 networks. 2025-10-19 15:58:23 +04:00
Kwabena W. Agyeman
c4b0b5a3dc scripts/libraries: Fix issue with using YOLOV2. 2025-10-19 15:58:23 +04:00
Ibrahim Abdelkader
7cbdb927da
Merge pull request #2873 from kwagyeman/kwabena/add_yolo_lc
scripts/examples: Add YOLO LC person tracking example.
2025-10-19 12:09:14 +03:00
Ibrahim Abdelkader
5ae4a02d41
Merge pull request #2889 from openmv/dependabot/github_actions/softprops/action-gh-release-2.4.1
build(deps): bump softprops/action-gh-release from 2.3.3 to 2.4.1
2025-10-19 12:05:39 +03:00
Kwabena W. Agyeman
3e22e0ea03 scripts/examples: Add YOLO LC person tracking example. 2025-10-18 21:29:43 -07:00
Kwabena W. Agyeman
30f499ea2d boards: Add YOLO LC model. 2025-10-18 21:27:14 -07:00
dependabot[bot]
cdb0d91d74
build(deps): bump softprops/action-gh-release from 2.3.3 to 2.4.1
Bumps [softprops/action-gh-release](https://github.com/softprops/action-gh-release) from 2.3.3 to 2.4.1.
- [Release notes](https://github.com/softprops/action-gh-release/releases)
- [Changelog](https://github.com/softprops/action-gh-release/blob/master/CHANGELOG.md)
- [Commits](6cbd405e2c...6da8fa9354)

---
updated-dependencies:
- dependency-name: softprops/action-gh-release
  dependency-version: 2.4.1
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-10-13 15:35:54 +00:00
24 changed files with 203 additions and 82 deletions

View File

@@ -223,7 +223,7 @@ jobs:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

       - name: '🔥 Create stable release'
-        uses: softprops/action-gh-release@6cbd405e2c4e67a21c47fa9e383d020e4e28b836
+        uses: softprops/action-gh-release@6da8fa9354ddfdc4aeace5fc48d7f679b5214090
         with:
           draft: true
           files: firmware_*.zip
@@ -275,7 +275,7 @@ jobs:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

       - name: '🔥 Create development release'
-        uses: softprops/action-gh-release@6cbd405e2c4e67a21c47fa9e383d020e4e28b836
+        uses: softprops/action-gh-release@6da8fa9354ddfdc4aeace5fc48d7f679b5214090
         with:
           draft: false
           name: Development Release

View File

@@ -14,12 +14,6 @@
         "alignment": 16,
         "optimize": "Performance"
     },
-    {
-        "type": "tflite",
-        "path": "{TOP}/lib/models/yolo_v5_224_nano.tflite",
-        "alignment": 16,
-        "optimize": "Performance"
-    },
     {
         "type": "tflite",
         "path": "{TOP}/lib/models/force_int_quant.tflite",
@@ -38,6 +32,12 @@
         "alignment": 16,
         "optimize": "Performance"
     },
+    {
+        "type": "tflite",
+        "path": "{TOP}/lib/models/yolo_lc_192.tflite",
+        "alignment": 16,
+        "optimize": "Performance"
+    },
     {
         "type": "tflite",
         "path": "{TOP}/lib/models/blazeface_front_128.tflite",

View File

@@ -32,6 +32,12 @@
         "alignment": 16,
         "optimize": "Performance"
     },
+    {
+        "type": "tflite",
+        "path": "{TOP}/lib/models/yolo_lc_192.tflite",
+        "alignment": 16,
+        "optimize": "Performance"
+    },
     {
         "type": "tflite",
         "path": "{TOP}/lib/models/blazeface_front_128.tflite",

View File

@@ -14,12 +14,6 @@
         "alignment": 16,
         "optimize": "Performance"
     },
-    {
-        "type": "tflite",
-        "path": "{TOP}/lib/models/yolo_v5_224_nano.tflite",
-        "alignment": 16,
-        "optimize": "Performance"
-    },
     {
         "type": "tflite",
         "path": "{TOP}/lib/models/force_int_quant.tflite",
@@ -38,6 +32,12 @@
         "alignment": 16,
         "optimize": "Performance"
     },
+    {
+        "type": "tflite",
+        "path": "{TOP}/lib/models/yolo_lc_192.tflite",
+        "alignment": 16,
+        "optimize": "Performance"
+    },
     {
         "type": "tflite",
         "path": "{TOP}/lib/models/blazeface_front_128.tflite",

View File

@@ -14,12 +14,6 @@
         "alignment": 16,
         "optimize": "Performance"
     },
-    {
-        "type": "tflite",
-        "path": "{TOP}/lib/models/yolo_v5_224_nano.tflite",
-        "alignment": 16,
-        "optimize": "Performance"
-    },
     {
         "type": "tflite",
         "path": "{TOP}/lib/models/force_int_quant.tflite",
@@ -38,6 +32,12 @@
         "alignment": 16,
         "optimize": "Performance"
     },
+    {
+        "type": "tflite",
+        "path": "{TOP}/lib/models/yolo_lc_192.tflite",
+        "alignment": 16,
+        "optimize": "Performance"
+    },
     {
         "type": "tflite",
         "path": "{TOP}/lib/models/blazeface_front_128.tflite",

View File

@@ -14,12 +14,6 @@
         "alignment": 16,
         "optimize": "Performance"
     },
-    {
-        "type": "tflite",
-        "path": "{TOP}/lib/models/yolo_v5_224_nano.tflite",
-        "alignment": 16,
-        "optimize": "Performance"
-    },
     {
         "type": "tflite",
         "path": "{TOP}/lib/models/force_int_quant.tflite",
@@ -38,6 +32,12 @@
         "alignment": 16,
         "optimize": "Performance"
     },
+    {
+        "type": "tflite",
+        "path": "{TOP}/lib/models/yolo_lc_192.tflite",
+        "alignment": 16,
+        "optimize": "Performance"
+    },
     {
         "type": "tflite",
         "path": "{TOP}/lib/models/blazeface_front_128.tflite",

View File

@@ -22,7 +22,7 @@
     },
     {
         "type": "tflite",
-        "path": "{TOP}/lib/models/yolo_v2_224_small.tflite",
+        "path": "{TOP}/lib/models/yolo_lc_192.tflite",
         "alignment": 16,
         "optimize": "Performance"
     },

View File

@@ -10,13 +10,7 @@
     },
     {
         "type": "tflite",
-        "path": "{TOP}/lib/models/yolo_v2_224_small.tflite",
-        "alignment": 32,
-        "profile": "default"
-    },
-    {
-        "type": "tflite",
-        "path": "{TOP}/lib/models/yolo_v5_224_nano.tflite",
+        "path": "{TOP}/lib/models/yolo_lc_192.tflite",
         "alignment": 32,
         "profile": "default"
     },

View File

@@ -14,12 +14,6 @@
         "alignment": 16,
         "optimize": "Performance"
     },
-    {
-        "type": "tflite",
-        "path": "{TOP}/lib/models/yolo_v5_224_nano.tflite",
-        "alignment": 16,
-        "optimize": "Performance"
-    },
     {
         "type": "tflite",
         "path": "{TOP}/lib/models/force_int_quant.tflite",
@@ -38,6 +32,12 @@
         "alignment": 16,
         "optimize": "Performance"
     },
+    {
+        "type": "tflite",
+        "path": "{TOP}/lib/models/yolo_lc_192.tflite",
+        "alignment": 16,
+        "optimize": "Performance"
+    },
     {
         "type": "tflite",
         "path": "{TOP}/lib/models/blazeface_front_128.tflite",

Binary file not shown.

View File

@@ -0,0 +1 @@
+person

Binary file not shown.

View File

@@ -1,2 +0,0 @@
-background
-person

Binary file not shown.

View File

@@ -1,2 +0,0 @@
-background
-person

View File

@@ -2,12 +2,12 @@
 # Copyright (c) 2013-2025 OpenMV LLC. All rights reserved.
 # https://github.com/openmv/openmv/blob/master/LICENSE
 #
-# This example shows off Google's MediaPipe BlazeFace face detection model.
+# This example shows off Google's MediaPipe Face Detection model.

 import csi
 import time
 import ml
-from ml.postprocessing import mediapipe_face_detection_postprocess
+from ml.postprocessing.mediapipe import BlazeFace

 # Initialize the sensor.
 csi0 = csi.CSI()
@@ -17,25 +17,22 @@ csi0.framesize(csi.VGA)
 csi0.window((400, 400))

 # Load built-in face detection model
-model = ml.Model("/rom/blazeface_front_128.tflite")
+model = ml.Model("/rom/blazeface_front_128.tflite", postprocess=BlazeFace(threshold=0.4))
 print(model)

-# Create the face detection post-processor. This post-processor dynamically
-# generates anchors for the model input size which should only be done once.
-face_detection_postprocess = mediapipe_face_detection_postprocess(threshold=0.6)
-
 clock = time.clock()
 while True:
     clock.tick()
     img = csi0.snapshot()

     # faces is a list of ((x, y, w, h), score, keypoints) tuples
-    faces = model.predict([img], callback=face_detection_postprocess)
+    faces = model.predict([img])

     # Draw bounding boxes around the detected faces and keypoints.
     if faces:
         for r, score, keypoints in faces[0]:
-            ml.utils.draw_predictions(img, [r], ["face"], [(0, 0, 255)], format=None)
+            ml.utils.draw_predictions(img, [r], ("face",), ((0, 0, 255),), format=None)

             # keypoints is a ndarray of shape (6, 2)
             # 0 - right eye (x, y)
             # 1 - left eye (x, y)
@@ -43,7 +40,6 @@ while True:
             # 3 - mouth (x, y)
             # 4 - right ear (x, y)
             # 5 - left ear (x, y)
-            for kp in keypoints.tolist():
-                img.draw_circle(int(kp[0]), int(kp[1]), 4, color=(255, 0, 0))
+            ml.utils.draw_keypoints(img, keypoints, color=(255, 0, 0))

     print(clock.fps(), "fps")

View File

@@ -25,12 +25,6 @@ print(model)
 # Line connections between hand joints for drawing the hand skeleton.
 palm_lines = ((0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 5), (5, 6))

-# Visualization parameters.
-palm_labels = ["palm"]
-palm_colors = [(0, 0, 255)]
-kp_color = (255, 0, 0)
-line_color = (0, 255, 0)
-
 clock = time.clock()
 while True:
     clock.tick()
@@ -42,7 +36,7 @@ while True:
     # Draw bounding boxes around the detected palms and keypoints.
     if palms:
         for r, score, keypoints in palms[0]:
-            ml.utils.draw_predictions(img, [r], palm_labels, palm_colors, format=None)
+            ml.utils.draw_predictions(img, [r], ("palm",), ((0, 0, 255),), format=None)

             # keypoints is a ndarray of shape (7, 2)
             # 0 - wrist (x, y)
@@ -55,6 +49,6 @@ while True:
             #
             # mcp = Metacarpophalangeal Joint - the knuckle
             # cmc = Carpometacarpal Joint - the base of the thumb
-            ml.utils.draw_skeleton(img, keypoints, palm_lines, kp_color=kp_color, line_color=line_color)
+            ml.utils.draw_skeleton(img, keypoints, palm_lines, kp_color=(255, 0, 0), line_color=(0, 255, 0))

     print(clock.fps(), "fps")

View File

@@ -33,11 +33,6 @@ hand_lines = ((0, 1), (1, 2), (2, 3), (3, 4), (0, 5), (5, 6), (6, 7), (7, 8),
               (5, 9), (9, 10), (10, 11), (11, 12), (9, 13), (13, 14), (14, 15), (15, 16),
               (13, 17), (17, 18), (18, 19), (19, 20), (0, 17))

-# Visualization parameters.
-palm_colors = [(0, 0, 255)]
-kp_color = (255, 0, 0)
-line_color = (0, 255, 0)
-
 clock = time.clock()
 while True:
     clock.tick()
@@ -61,7 +56,7 @@ while True:
     # Draw bounding boxes around the detected hands and keypoints.
     for i, detections in enumerate(hands):
         for r, score, keypoints in detections:
-            ml.utils.draw_predictions(img, [r], ["right" if i else "left"], palm_colors, format=None)
+            ml.utils.draw_predictions(img, [r], ("right",) if i else ("left",), ((0, 0, 255),), format=None)

             # keypoints: ndarray (21, 3) of hand joints (x, y, z)
             # Indices follow MediaPipe convention:
@@ -72,6 +67,6 @@ while True:
             # Ring: 13 mcp, 14 pip, 15 dip, 16 tip
             # Pinky: 17 mcp, 18 pip, 19 dip, 20 tip
             # (cmc=base, mcp=knuckle, pip=mid, dip=distal, ip=thumb joint, tip=fingertip)
-            ml.utils.draw_skeleton(img, keypoints, hand_lines, kp_color=kp_color, line_color=line_color)
+            ml.utils.draw_skeleton(img, keypoints, hand_lines, kp_color=(255, 0, 0), line_color=(0, 255, 0))

     print(clock.fps(), "fps")

View File

@@ -33,11 +33,6 @@ hand_lines = ((0, 1), (1, 2), (2, 3), (3, 4), (0, 5), (5, 6), (6, 7), (7, 8),
               (5, 9), (9, 10), (10, 11), (11, 12), (9, 13), (13, 14), (14, 15), (15, 16),
               (13, 17), (17, 18), (18, 19), (19, 20), (0, 17))

-# Visualization parameters.
-palm_colors = [(0, 0, 255)]
-kp_color = (255, 0, 0)
-line_color = (0, 255, 0)
-
 # Tracking vars.
 n = None

@@ -71,7 +66,7 @@ while True:
     # Draw bounding boxes around the detected hands and keypoints.
     for i, detections in enumerate(hands):
         for r, score, keypoints in detections:
-            ml.utils.draw_predictions(img, [r], ["right" if i else "left"], [(0, 0, 255)], format=None)
+            ml.utils.draw_predictions(img, [r], ("right",) if i else ("left",), ((0, 0, 255),), format=None)

             # keypoints: ndarray (21, 3) of hand joints (x, y, z)
             # Indices follow MediaPipe convention:
@@ -82,7 +77,7 @@ while True:
             # Ring: 13 mcp, 14 pip, 15 dip, 16 tip
             # Pinky: 17 mcp, 18 pip, 19 dip, 20 tip
             # (cmc=base, mcp=knuckle, pip=mid, dip=distal, ip=thumb joint, tip=fingertip)
-            ml.utils.draw_skeleton(img, keypoints, hand_lines, kp_color=kp_color, line_color=line_color)
+            ml.utils.draw_skeleton(img, keypoints, hand_lines, kp_color=(255, 0, 0), line_color=(0, 255, 0))

             # Center new_wider_rect on hand for tracking
             new_wider_rect = (r[0] + (r[2] // 2) - (wider_rect[2] // 2),

View File

@@ -9,7 +9,7 @@
 import sensor
 import time
 import ml
-from ml.postprocessing import fomo_postprocess
+from ml.postprocessing.edgeimpulse import Fomo
 import math

 sensor.reset()  # Reset and initialize the sensor.
@@ -19,7 +19,7 @@ sensor.set_windowing((240, 240))  # Set 240x240 window.
 sensor.skip_frames(time=2000)  # Let the camera adjust.

 # Load built-in FOMO face detection model
-model = ml.Model("/rom/fomo_face_detection.tflite")
+model = ml.Model("/rom/fomo_face_detection.tflite", postprocess=Fomo(threshold=0.4))
 print(model)

 # Alternatively, models can be loaded from the filesystem storage.
@@ -39,10 +39,9 @@ colors = [  # Add more colors if you are detecting more than 7 types of classes
 clock = time.clock()
 while True:
     clock.tick()
-
     img = sensor.snapshot()

-    for i, detection_list in enumerate(model.predict([img], callback=fomo_postprocess())):
+    for i, detection_list in enumerate(model.predict([img])):
         if i == 0:
             continue  # background class
         if len(detection_list) == 0:
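As with the face detector above, FOMO now receives its post-processor at construction time rather than per predict() call. A minimal consumption sketch, assuming the same ((x, y, w, h), score) tuple layout used by the other detection examples on this page, with index 0 being the background class:

# Sketch: iterate FOMO detections per class, skipping the background class.
import sensor
import ml
from ml.postprocessing.edgeimpulse import Fomo

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)

model = ml.Model("/rom/fomo_face_detection.tflite", postprocess=Fomo(threshold=0.4))
img = sensor.snapshot()
for i, detection_list in enumerate(model.predict([img])):
    if i == 0:
        continue  # background class, as in the example above
    for (x, y, w, h), score in detection_list:
        img.draw_circle(x + w // 2, y + h // 2, 8)  # FOMO localizes object centers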

View File

@@ -0,0 +1,44 @@
+# This work is licensed under the MIT license.
+# Copyright (c) 2013-2025 OpenMV LLC. All rights reserved.
+# https://github.com/openmv/openmv/blob/master/LICENSE
+#
+# TensorFlow Lite YOLO LC Person Detector Example
+#
+# YOLO LC is a variant of YOLOV2 that is fast enough to run on OpenMV Cams without NPUs.
+
+import csi
+import time
+import ml
+from ml.postprocessing.darknet import YoloLC
+
+# Initialize the sensor.
+csi0 = csi.CSI()
+csi0.reset()
+csi0.pixformat(csi.RGB565)
+csi0.framesize(csi.VGA)
+csi0.window((400, 400))
+
+# Load built-in person detection model
+model = ml.Model("/rom/yolo_lc_192.tflite", postprocess=YoloLC(threshold=0.4))
+print(model)
+
+# Visualization parameters.
+n = len(model.labels)
+model_class_colors = [(int(255 * i // n), int(255 * (n - i - 1) // n), 255) for i in range(n)]
+
+clock = time.clock()
+while True:
+    clock.tick()
+    img = csi0.snapshot()
+
+    # boxes is a list of list per class of ((x, y, w, h), score) tuples
+    boxes = model.predict([img])
+
+    # Draw bounding boxes around the detected objects
+    for i, class_detections in enumerate(boxes):
+        rects = [r for r, score in class_detections]
+        labels = [model.labels[i] for j in range(len(rects))]
+        colors = [model_class_colors[i] for j in range(len(rects))]
+        ml.utils.draw_predictions(img, rects, labels, colors, format=None)
+
+    print(clock.fps(), "fps")
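Because predict() returns one list per class, a flat, score-sorted view is often more convenient for downstream logic. A small sketch building on the loop in the example above; it only restructures the output format documented there:

# Flatten per-class detections into (label, rect, score) tuples, highest score first.
detections = [(model.labels[i], r, score)
              for i, class_detections in enumerate(boxes)
              for r, score in class_detections]
detections.sort(key=lambda d: d[2], reverse=True)
for label, (x, y, w, h), score in detections:
    print("%s at (%d, %d, %d, %d): %.2f" % (label, x, y, w, h, score))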

View File

@@ -0,0 +1,50 @@
+# This work is licensed under the MIT license.
+# Copyright (c) 2013-2025 OpenMV LLC. All rights reserved.
+# https://github.com/openmv/openmv/blob/master/LICENSE
+#
+# TensorFlow Lite YOLO V2 Example
+#
+# This example runs a YOLO V2 object detection model.
+# Please see OpenMV IDE's model zoo for example yolo v2 models.
+#
+# For more information on YOLO V2, please see:
+# https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/main/object_detection/tiny_yolo_v2
+#
+# NOTE: This example requires an OpenMV Cam with an NPU like the AE3 or N6 to run real-time.
+
+import csi
+import time
+import ml
+from ml.postprocessing.darknet import YoloV2
+
+# Initialize the sensor.
+csi0 = csi.CSI()
+csi0.reset()
+csi0.pixformat(csi.RGB565)
+csi0.framesize(csi.VGA)
+csi0.window((400, 400))
+
+# Load YOLO V2 model from ROM FS.
+model = ml.Model("/rom/<model_file_name>", postprocess=YoloV2(threshold=0.4))
+print(model)
+
+# Visualization parameters.
+n = len(model.labels)
+model_class_colors = [(int(255 * i // n), int(255 * (n - i - 1) // n), 255) for i in range(n)]
+
+clock = time.clock()
+while True:
+    clock.tick()
+    img = csi0.snapshot()
+
+    # boxes is a list of list per class of ((x, y, w, h), score) tuples
+    boxes = model.predict([img])
+
+    # Draw bounding boxes around the detected objects
+    for i, class_detections in enumerate(boxes):
+        rects = [r for r, score in class_detections]
+        labels = [model.labels[i] for j in range(len(rects))]
+        colors = [model_class_colors[i] for j in range(len(rects))]
+        ml.utils.draw_predictions(img, rects, labels, colors, format=None)
+
+    print(clock.fps(), "fps")
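The YoloV2 post-processor also takes an anchors argument (see the library change at the end of this compare), falling back to the default anchor set shown there when None. A sketch of passing custom anchors, assuming ulab's numpy module is available as in the OpenMV firmware; the anchor values here are placeholders for whatever (width, height) pairs your model was trained with:

# Hypothetical custom anchors; the defaults in the library diff below have the same shape.
from ulab import numpy as np
from ml.postprocessing.darknet import YoloV2

custom_anchors = np.array([[1.0, 1.5],
                           [2.5, 3.0],
                           [4.0, 5.5]])
postprocess = YoloV2(threshold=0.4, anchors=custom_anchors)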

View File

@@ -0,0 +1,50 @@
+# This work is licensed under the MIT license.
+# Copyright (c) 2013-2025 OpenMV LLC. All rights reserved.
+# https://github.com/openmv/openmv/blob/master/LICENSE
+#
+# TensorFlow Lite YOLO V5 Example
+#
+# This example runs a YOLO V5 object detection model.
+# Please see OpenMV IDE's model zoo for example yolo v5 models.
+#
+# You can train your own custom YOLOV5 models using Edge Impulse:
+# https://github.com/edgeimpulse/ml-block-yolov5
+#
+# NOTE: This example requires an OpenMV Cam with an NPU like the AE3 or N6 to run real-time.
+
+import csi
+import time
+import ml
+from ml.postprocessing.ultralytics import YoloV5
+
+# Initialize the sensor.
+csi0 = csi.CSI()
+csi0.reset()
+csi0.pixformat(csi.RGB565)
+csi0.framesize(csi.VGA)
+csi0.window((400, 400))
+
+# Load YOLO V5 model from ROM FS.
+model = ml.Model("/rom/<model_file_name>", postprocess=YoloV5(threshold=0.4))
+print(model)
+
+# Visualization parameters.
+n = len(model.labels)
+model_class_colors = [(int(255 * i // n), int(255 * (n - i - 1) // n), 255) for i in range(n)]
+
+clock = time.clock()
+while True:
+    clock.tick()
+    img = csi0.snapshot()
+
+    # boxes is a list of list per class of ((x, y, w, h), score) tuples
+    boxes = model.predict([img])
+
+    # Draw bounding boxes around the detected objects
+    for i, class_detections in enumerate(boxes):
+        rects = [r for r, score in class_detections]
+        labels = [model.labels[i] for j in range(len(rects))]
+        colors = [model_class_colors[i] for j in range(len(rects))]
+        ml.utils.draw_predictions(img, rects, labels, colors, format=None)
+
+    print(clock.fps(), "fps")

View File

@@ -47,7 +47,6 @@ class YoloV2:
     def __init__(self, threshold=0.6, anchors=None, nms_threshold=0.1, nms_sigma=0.1):
         self.threshold = threshold
         self.anchors = anchors
-        self.anchors_len = len(self.anchors)
         self.nms_threshold = nms_threshold
         self.nms_sigma = nms_sigma

@@ -58,6 +57,8 @@ class YoloV2:
                                         [5.55170, 9.30660],
                                         [9.72600, 11.1422]])
+        self.anchors_len = len(self.anchors)
+
     def __call__(self, model, inputs, outputs):
         def softmax(x):
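For context, this fix addresses an ordering bug: anchors_len was computed from self.anchors before the None default was substituted, so constructing YoloV2() without explicit anchors raised a TypeError on len(None). Moving the length computation after the default assignment fixes it. A minimal plain-Python illustration of the before/after pattern, condensed from the diff above:

# Illustration only; anchor values taken from the defaults visible in the diff.
class Broken:
    def __init__(self, anchors=None):
        self.anchors = anchors
        self.anchors_len = len(self.anchors)  # TypeError when anchors is None
        if self.anchors is None:
            self.anchors = [[5.55170, 9.30660], [9.72600, 11.1422]]

class Fixed:
    def __init__(self, anchors=None):
        self.anchors = anchors
        if self.anchors is None:
            self.anchors = [[5.55170, 9.30660], [9.72600, 11.1422]]
        self.anchors_len = len(self.anchors)  # safe: default applied first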