This commit is contained in:
Refound-445 2024-11-16 17:45:31 +08:00
parent b9b54d500e
commit 4f129a0be7
13 changed files with 608 additions and 226 deletions

View File

@ -187,41 +187,50 @@ pip install nonebot-plugin-nailongremove-base -U
Add the required configuration items from the table below to the `.env` file of your NoneBot2 project.
| Config Item | Required | Default | Description |
| :-: | :-: | :-: | :-- |
| **Global** | | | |
| `PROXY` | No | `None` | Proxy address used when downloading model files and the like |
| **Response** | | | |
| `NAILONG_BYPASS_SUPERUSER` | No | `True` | Whether to skip checking images sent by superusers |
| `NAILONG_BYPASS_ADMIN` | No | `True` | Whether to skip checking images sent by group admins |
| `NAILONG_NEED_ADMIN` | No | `False` | Whether to skip checking all images in a group when the bot itself is not a group admin |
| `NAILONG_LIST_SCENES` | No | `[]` | Black/whitelist of chat scene IDs<br />In single-level chats this is the chat ID (e.g. the QQ group number);<br />in multi-level chats it is the IDs of each level joined with `_` (e.g. a sub-channel of a channel, or a private chat under a channel) |
| `NAILONG_BLACKLIST` | No | `True` | Whether to use blacklist mode |
| `NAILONG_USER_BLACKLIST` | No | `[]` | Blacklist of user IDs |
| `NAILONG_PRIORITY` | No | `100` | Matcher priority |
| **Behavior** | | | |
| `NAILONG_RECALL` | No | `True` | Whether to recall the offending message |
| `NAILONG_MUTE_SECONDS` | No | `0` | Mute duration in seconds; the default `0` means no mute |
| `NAILONG_TIP` | No | `{"nailong": "本群禁止发奶龙!"}` | Tip sent on detection, using [Alconna's message template](https://nonebot.dev/docs/best-practice/alconna/uniseg#%E4%BD%BF%E7%94%A8%E6%B6%88%E6%81%AF%E6%A8%A1%E6%9D%BF); available variables are listed below. Values can be customized per label; labels without an entry fall back to `nailong` |
| `NAILONG_FAILED_TIP` | No | `{"nailong": "{:Reply($message_id)}呜,不要发奶龙了嘛 🥺 👉👈"}` | Tip sent when the recall fails or recalling is disabled; same rules as above |
| `NAILONG_CHECK_ALL_FRAMES` | No | `False` | Whether to check all frames of an image when using model 1; requires `NAILONG_CHECK_MODE` to be `0`. When enabled, the `$checked_result` template variable becomes animated whenever the original image is animated |
| `NAILONG_CHECK_MODE` | No | `0` | How GIF animations are checked:<br/>`0`: check all frames<br/>`1`: check only the first frame<br/>`2`: check one randomly sampled frame |
| **Similarity Check** | | | |
| `NAILONG_SIMILARITY_ON` | No | `False` | Whether to check an image against the locally stored records by similarity before processing it (this feature is still under development and may currently be resource-heavy and slow) |
| `NAILONG_SIMILARITY_MAX_STORAGE` | No | `10` | Maximum number of misdetected images kept in local storage; when the limit is reached, the previous records are compressed into an archive and deleted |
| `NAILONG_SIMILARITY_MAX_BATCH_SIZE` | No | `10` | Maximum batch size used by the local similarity check |
| **Model (Common)** | | | |
| `NAILONG_MODEL_DIR` | No | `./data/nailongremove` | Where the models are downloaded to |
| `NAILONG_MODEL` | No | `1` | Which model to load; see the available models below |
| `NAILONG_AUTO_UPDATE_MODEL` | No | `True` | Whether to update the models automatically |
| `NAILONG_CONCURRENCY` | No | `1` | Maximum number of frames checked concurrently when the image is animated |
| `NAILONG_ONNX_TRY_TO_USE_GPU` | No | `True` | Whether to try GPU execution providers (TensorRT / CUDA) when loading ONNX models, falling back to CPU if they are unavailable; see the installation docs above |
| **Model 1 Specific** | | | |
| `NAILONG_MODEL1_TYPE` | No | `tiny` | Model type used by model 1; `tiny` / `m` are available |
| `NAILONG_MODEL1_YOLOX_SIZE` | No | `None` | Custom model input size for model 1; the expected size may differ between model types |
| **Model 2 Specific** | | | |
| `NAILONG_MODEL2_ONLINE` | No | `False` | Whether to use online inference for model 2; this mode is currently unavailable when `NAILONG_CHECK_MODE` is `0` |
| **Model 1 & 2 Specific** | | | |
| `NAILONG_MODEL1_SCORE` | No | `{"nailong": 0.5}` | Confidence threshold for models 1 & 2, in the range `0` ~ `1`. Values can be customized per label: set a threshold for a label to detect it, or set it to `null` / omit it to ignore that label |
| **Misc** | | | |
| `NAILONG_GITHUB_TOKEN` | No | `None` | GitHub Access Token; try setting it when model downloads or updates fail |
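For example, a minimal `.env` that mutes offenders for 60 seconds, only checks the first frame of GIFs, and enables the local similarity check could look like this (the values are purely illustrative, not recommendations):

```ini
NAILONG_RECALL=true
NAILONG_MUTE_SECONDS=60
NAILONG_CHECK_MODE=1
NAILONG_SIMILARITY_ON=true
NAILONG_MODEL=1
```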
### Available Models
- `0`: Inference with a classifier trained on ResNet50; thanks to @spawner1145 for the model, original link: [spawner1145/NailongRecognize](https://github.com/spawner1145/NailongRecognize.git)
- `1`: Inference with an object-detection model trained on YOLOX; thanks to @NKXingXh for the model, original link: [nkxingxh/NailongDetection](https://github.com/nkxingxh/NailongDetection)
- `2`: Inference with an object-detection model trained on YOLOv11; thanks to @Hakureirm for the model, original link: [Hakureirm/NailongKiller](https://huggingface.co/spaces/Hakureirm/NailongKiller)
### Variables Available in Message Templates
@ -238,16 +247,23 @@ pip install nonebot-plugin-nailongremove-base -U
Whenever someone sends a nailong meme and it is recognized, the message is recalled and a reminder is sent.
Storing misdetected images locally (`SUPERUSERS` only): send "这是[label]" together with an image, e.g. "这是nailong" plus the image, and it is saved to local storage; with the similarity check enabled, subsequent checks will first try to match images against the stored records.
## 📞 Contact
- Nonebot2 official group: 768887710 (basic installation and deployment questions can be asked here)
- AI learning and discussion group: 949992679 (come here to discuss AI-related topics)
- Bot plugin learning and discussion group: 200980266 (report bot bugs, model accuracy issues and the like here)
Everyone is welcome to join the groups to learn and chat together~
## 📝 Changelog
### 2.3.2
- Added three frame-handling modes for GIF animations, selectable via `NAILONG_CHECK_MODE`
- Added a temporary workaround for misdetected images: set `NAILONG_SIMILARITY_ON` to enable similarity matching against local storage, and `SUPERUSERS` can send "这是[label]" together with an image to save that image to the local records
- Added model 2 to `NAILONG_MODEL`, a model trained on YOLOv11 that currently only supports nailong recognition
### 2.3.1
- Adjusted the plugin dependencies to avoid some issues; this affects the installation process, see the installation docs for details

View File

@ -9,7 +9,7 @@ require("nonebot_plugin_uninfo")
from . import handler as handler
from .config import Config
__version__ = "2.3.1.post1"
__version__ = "2.3.0"
__plugin_meta__ = PluginMetadata(
name="自动撤回奶龙",
description="一个基于图像分类模型的简单插件~",

View File

@ -13,6 +13,7 @@ DEFAULT_LABEL = "nailong"
class ModelType(int, Enum):
CLASSIFICATION = 0
TARGET_DETECTION = 1
HF_DETECTION = 2
class Model1Type(StrEnum):
@ -54,13 +55,18 @@ class Config(BaseModel):
nailong_model: ModelType = ModelType.TARGET_DETECTION
nailong_auto_update_model: bool = True
nailong_concurrency: int = 1
nailong_onnx_try_to_use_gpu: bool = True
nailong_model1_type: Model1Type = Model1Type.TINY
nailong_model1_yolox_size: Optional[Tuple[int, int]] = None
nailong_model1_score: Dict[str, Optional[float]] = {
DEFAULT_LABEL: 0.5,
}
nailong_model2_online: bool = False
nailong_check_mode: int = 0
nailong_similarity_on: bool = False
nailong_similarity_max_storage: int = 10
nailong_similarity_max_batch_size: int = 10
nailong_github_token: Optional[str] = None
@ -71,7 +77,9 @@ class Config(BaseModel):
mode="before",
)
def transform_to_dict(cls, v: Any): # noqa: N805
if not isinstance(v, dict):
return {DEFAULT_LABEL: v}
return v
@field_validator(
"nailong_tip",
@ -84,21 +92,5 @@ class Config(BaseModel):
raise ValueError(f"Please ensure default label {DEFAULT_LABEL} in dict")
return v
@field_validator("nailong_onnx_providers", mode="before")
def transform_to_list(cls, v: Any): # noqa: N805
return v if isinstance(v, list) else [v]
@field_validator("nailong_onnx_providers", mode="after")
def validate_provider_available(cls, v: Any): # noqa: N805
try:
from onnxruntime.capi import _pybind_state as c
except ImportError:
pass
else:
available_providers: List[str] = c.get_available_providers() # type: ignore
if any(p not in available_providers for p in v):
raise ValueError(f"Provider {v} not available in onnxruntime")
return v
config = get_plugin_config(Config)
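A minimal, standalone sketch of what the `transform_to_dict` validator above does (assuming only `pydantic` v2, outside of NoneBot):

```python
from typing import Any, Dict, Optional

from pydantic import BaseModel, field_validator

DEFAULT_LABEL = "nailong"


class Demo(BaseModel):
    tip: Dict[str, Optional[str]]

    @field_validator("tip", mode="before")
    def transform_to_dict(cls, v: Any):  # noqa: N805
        # A bare value is wrapped under the default label, so
        # NAILONG_TIP="hello" and NAILONG_TIP={"nailong": "hello"}
        # end up as the same config.
        if not isinstance(v, dict):
            return {DEFAULT_LABEL: v}
        return v


print(Demo(tip="hello").tip)  # {'nailong': 'hello'}
```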

View File

@ -128,8 +128,8 @@ async def _(source: PilImageFrameSource, frames: Iterator[np.ndarray]) -> Segmen
def repack_save(
source: FrameSource,
frames: Iterator[np.ndarray],
) -> Awaitable[Segment]:
if (k := type(source)) not in repack_savers:
raise NotImplementedError
@ -180,7 +180,7 @@ async def extract_source(seg: Segment) -> FrameSource:
async def iter_sources_in_message(
message: UniMessage,
) -> AsyncIterator[Tuple[FrameSource, Segment]]:
for seg in message:
try:

View File

@ -1,16 +1,17 @@
import re
from typing import Any, Awaitable, Callable, Iterable, List, TypeVar
from nonebot import logger, on_message
from nonebot.adapters import Bot as BaseBot, Event as BaseEvent
from nonebot.permission import SUPERUSER
from nonebot.rule import Rule
from nonebot_plugin_alconna.uniseg import UniMessage, UniMsg, Text
from nonebot_plugin_uninfo import QryItrface, Uninfo
from .config import DEFAULT_LABEL, config
from .frame_source import iter_sources_in_message, source_extractors
from .model import check
from .uniapi import mute, recall
from .model.utils.common import process_gif_and_save_jpgs
T = TypeVar("T")
@ -20,7 +21,7 @@ def judge_list(lst: Iterable[T], val: T, blacklist: bool) -> bool:
async def execute_functions_any_ok(
func: Iterable[Callable[[], Awaitable[Any]]],
) -> bool:
ok = False
for f in func:
@ -35,36 +36,36 @@ async def execute_functions_any_ok(
async def nailong_rule(
bot: BaseBot,
event: BaseEvent,
session: Uninfo,
ss_interface: QryItrface,
msg: UniMsg,
) -> bool:
return (
# check if it's a group chat
bool(session.member) # this prop only exists in group chats
# user blacklist
and (session.user.id not in config.nailong_user_blacklist)
# scene blacklist or whitelist
and judge_list(
config.nailong_list_scenes,
session.scene_path,
config.nailong_blacklist,
)
# bypass superuser
and ((not config.nailong_bypass_superuser) or (not await SUPERUSER(bot, event)))
# bypass group admin
and (
(not config.nailong_bypass_admin)
or ((not session.member.role) or session.member.role.level <= 1)
)
# msg has supported seg
and (any(True for x in msg if type(x) in source_extractors))
# self is admin
and (
(not config.nailong_need_admin)
or bool(
(
self_info := await ss_interface.get_member(
session.scene.type,
@ -75,41 +76,64 @@ async def nailong_rule(
and self_info.role
and self_info.role.level > 1,
)
)
)
)
nailong = on_message(rule=Rule(nailong_rule), priority=config.nailong_priority)
input_shape = config.nailong_model1_yolox_size or config.nailong_model1_type.yolox_size
@nailong.handle()
async def handle_function(bot: BaseBot, ev: BaseEvent, msg: UniMsg, session: Uninfo):
save_img = False
if await SUPERUSER(bot, ev):
for seg in msg:
if isinstance(seg, Text) and (m := re.search(r"这是(\S+)", seg.text.replace(" ", ""))):
save_img = True
label = m.group(1)
break
async for source, seg in iter_sources_in_message(msg):
if save_img:
frames = list(source)
zip_filename = process_gif_and_save_jpgs(frames, label, input_shape)
if zip_filename is None:
await nailong.finish(f"已保存数据到目录{config.nailong_model_dir}\\records\\{label},标签:{label}")
else:
await nailong.finish(
f"记录数据超过{config.nailong_similarity_max_storage},已清除原记录数据,压缩并保存至{zip_filename}\n已保存数据到目录{config.nailong_model_dir}\\records\\{label},标签:{label}")
else:
try:
check_res = await check(source)
except Exception:
logger.exception(f"Failed to check {seg!r}")
continue
if not check_res.ok or check_res.label not in config.nailong_tip:
continue
functions: List[Callable[[], Awaitable[Any]]] = []
if config.nailong_recall:
functions.append(lambda: recall(bot, ev))
if config.nailong_mute_seconds > 0:
functions.append(lambda: mute(bot, ev, config.nailong_mute_seconds))
punish_ok = functions and (await execute_functions_any_ok(functions))
template_dict = config.nailong_tip if punish_ok else config.nailong_failed_tip
template_str = template_dict[
check_res.label if (check_res.label in template_dict) else DEFAULT_LABEL
]
mapping = {
"$event": ev,
"$target": msg.get_target(),
"$message_id": msg.get_message_id(),
"$msg": msg,
"$ss": session,
**check_res.extra_vars,
}
await UniMessage.template(template_str).format_map(mapping).finish()
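The punish step above goes through `execute_functions_any_ok`, which runs a list of async actions and reports whether at least one of them succeeded (so the success tip is only used when the recall or mute actually worked). A standalone sketch of that pattern; the body shown here is an assumption based on the call sites, not the plugin's exact implementation:

```python
import asyncio
from typing import Any, Awaitable, Callable, Iterable


async def execute_functions_any_ok(
    funcs: Iterable[Callable[[], Awaitable[Any]]],
) -> bool:
    ok = False
    for f in funcs:
        try:
            await f()  # e.g. recall the message, mute the sender
        except Exception:
            pass  # one failing action must not stop the others
        else:
            ok = True
    return ok


async def main() -> None:
    async def mute() -> None: ...  # pretend this succeeds

    async def recall() -> None:
        raise RuntimeError("recall failed")

    print(await execute_functions_any_ok([recall, mute]))  # True


asyncio.run(main())
```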

View File

@ -22,17 +22,11 @@ if config.nailong_model is ModelType.CLASSIFICATION:
raise_extra_import_error(e, "model0")
elif config.nailong_model is ModelType.TARGET_DETECTION:
try:
from .target_detection import check as check
except ImportError as e:
pass
elif config.nailong_model is ModelType.HF_DETECTION:
from .hf_detection import check as check
else:
raise ValueError("Invalid model type")

View File

@ -9,8 +9,9 @@ from torchvision import transforms
from ..config import DEFAULT_LABEL
from ..frame_source import FrameSource
from .utils.common import CheckResult, CheckSingleResult, race_check, similarity_process
from .utils.update import GitHubRepoModelUpdater
from ..config import config
model_path = GitHubRepoModelUpdater(
"spawner1145",
@ -35,17 +36,23 @@ SIZE = 224
@run_sync
def check_single(image: np.ndarray, is_gif: bool = False) -> CheckSingleResult[None]:
if is_gif:
res = similarity_process(image, dsize=(SIZE, SIZE))
if res is not None:
return res
return CheckSingleResult.not_ok(None)
else:
if image.shape[0] < SIZE or image.shape[1] < SIZE:
return CheckSingleResult.not_ok(None)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
image = cv2.resize(image, (SIZE, SIZE))
image = transform(image)
image = image.unsqueeze(0) # type: ignore
with torch.no_grad():
output = model(image.to(device)) # type: ignore
_, pred = torch.max(output, 1)
return CheckSingleResult(ok=pred.item() == 1, label=DEFAULT_LABEL, extra=None)
async def check(source: FrameSource):

View File

@ -0,0 +1,186 @@
import asyncio
import datetime
import os
import cv2
from PIL import Image
import torch
import numpy as np
from cookit import with_semaphore
from nonebot.utils import run_sync
from ..config import config
from ..frame_source import FrameSource, repack_save
from .utils.common import CheckResult, CheckSingleResult, race_check, similarity_process
import itertools
from nonebot import logger
if config.nailong_model2_online:
from gradio_client import Client, handle_file
import base64
import io
import shutil
FILENAME = "nailong_yolo11.pt"
client = Client("Hakureirm/NailongKiller")
logger.info(f"Using model {FILENAME} online")
else:
from ultralytics import YOLO
from huggingface_hub import hf_hub_download, hf_api
REPO_ID = "Hakureirm/NailongKiller"
FILENAME = "nailong_yolo11.pt"
model_path = os.path.join(str(config.nailong_model_dir), FILENAME)
if config.nailong_auto_update_model or not os.path.exists(model_path):
api = hf_api.HfApi()
file_path = os.path.join(str(config.nailong_model_dir), FILENAME)
model_info = api.model_info(REPO_ID)
def get_file_last_modified_time(file_path):
try:
timestamp = os.path.getmtime(file_path)
last_modified_time = datetime.datetime.fromtimestamp(timestamp, tz=datetime.timezone.utc)
return last_modified_time
except FileNotFoundError:
return None
local_time = get_file_last_modified_time(file_path)
if local_time is None or model_info.last_modified >= local_time:
hf_hub_download(repo_id=REPO_ID, filename=FILENAME, local_dir=config.nailong_model_dir)
logger.info(f"Update model {FILENAME} successfully!")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = YOLO(model_path).to(device)
logger.info(f"Using model {FILENAME}")
input_shape = config.nailong_model1_yolox_size or config.nailong_model1_type.yolox_size
@run_sync
def _check_single(frame: np.ndarray, is_gif: bool = False) -> CheckSingleResult:
if is_gif:
res = similarity_process(frame, dsize=input_shape)
if res is not None:
return CheckSingleResult(ok=res.ok, label=res.label, extra=frame)
return CheckSingleResult(ok=False, label=None, extra=frame)
else:
if config.nailong_model2_online:
input_image = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
if not os.path.exists(os.path.join(str(config.nailong_model_dir), "online_temp")):
os.makedirs(os.path.join(str(config.nailong_model_dir), "online_temp"))
image_path = os.path.join(str(config.nailong_model_dir), "online_temp",
"temp_{}.jpg".format(datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%S")))
while os.path.exists(image_path):
basename = os.path.basename(image_path)
image_path = os.path.join(str(config.nailong_model_dir), "online_temp", f"exist-{basename}")
input_image.save(image_path, format='JPEG')
result_image, result_info = client.predict(
img=handle_file(image_path),
api_name="/predict"
)
os.remove(image_path)
if "检测到的目标数量: " in result_info and int(
result_info.split("检测到的目标数量: ")[1].split("\n")[0]) < 1:
return CheckSingleResult(ok=False, label=None, extra=frame)
if isinstance(result_image, str):
if result_image.startswith('data:image'):
img_data = base64.b64decode(result_image.split(',')[1])
img = Image.open(io.BytesIO(img_data))
result_image = np.array(img)
else:
img_data = result_image
img = Image.open(img_data)
result_image = np.array(img)
shutil.rmtree(os.path.dirname(os.path.dirname(img_data)))
result_image = cv2.cvtColor(result_image, cv2.COLOR_BGR2RGB)
return CheckSingleResult(ok=True, label="nailong", extra=result_image)
else:
input_image = Image.fromarray(frame)
original_size = input_image.size
max_size = max(original_size)
pad_w = max_size - original_size[0]
pad_h = max_size - original_size[1]
padded_img = Image.new('RGB', (max_size, max_size), (114, 114, 114))
padded_img.paste(input_image, (pad_w // 2, pad_h // 2))
img_array = np.array(padded_img)
results = model.predict(
img_array,
conf=config.nailong_model1_score['nailong'],
iou=0.5,
max_det=100,
verbose=False
)
cls = results[0].boxes.cls
if len(cls) < 1:
return CheckSingleResult(ok=False, label=None, extra=frame)
result_img = results[0].plot()
if pad_w > 0 or pad_h > 0:
result_img = result_img[pad_h // 2:pad_h // 2 + original_size[1],
pad_w // 2:pad_w // 2 + original_size[0]]
return CheckSingleResult(ok=True, label='nailong', extra=result_img)
async def check_single(frame: np.ndarray, is_gif: bool = False) -> CheckSingleResult:
res = await _check_single(frame, is_gif)
return CheckSingleResult(
ok=res.ok,
label=res.label,
extra=res.extra,
)
async def check(source: FrameSource) -> CheckResult:
label = None
extra_vars = {}
if config.nailong_check_all_frames and config.nailong_check_mode == 0:
if config.nailong_similarity_on:
tem_source = itertools.tee(source, 1)[0]
sem = asyncio.Semaphore(config.nailong_concurrency)
results = await asyncio.gather(
*(with_semaphore(sem)(check_single)(frame, True) for frame in tem_source),
)
ok = any(r.ok for r in results)
else:
ok = False
if not ok:
sem = asyncio.Semaphore(config.nailong_concurrency)
results = await asyncio.gather(
*(with_semaphore(sem)(check_single)(frame) for frame in source),
)
ok = any(r.ok for r in results)
if ok:
all_labels = {r.label for r in results if r.label}
label = next(
(x for x in config.nailong_model1_score if x in all_labels),
None,
)
extra_vars["$checked_result"] = await repack_save(
source,
(r.extra for r in results),
)
else:
res = await race_check(check_single, source)
ok = bool(res)
if res:
label = res.label
extra_vars["$checked_result"] = await repack_save(
source,
iter((res.extra,)),
)
return CheckResult(ok, label, extra_vars)
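The offline branch above letterboxes each frame onto a square canvas filled with gray `(114, 114, 114)` before YOLO inference, then crops the padding off the plotted result. A standalone sketch of that pad-and-crop round trip (NumPy/Pillow only; the sizes are illustrative):

```python
import numpy as np
from PIL import Image


def pad_to_square(frame: np.ndarray):
    """Letterbox an HxWx3 frame onto a gray square canvas."""
    img = Image.fromarray(frame)
    w, h = img.size
    side = max(w, h)
    pad_w, pad_h = side - w, side - h
    canvas = Image.new("RGB", (side, side), (114, 114, 114))
    canvas.paste(img, (pad_w // 2, pad_h // 2))
    return np.array(canvas), pad_w, pad_h


def crop_back(result: np.ndarray, pad_w: int, pad_h: int, w: int, h: int) -> np.ndarray:
    """Undo the letterboxing on a result image of the padded size."""
    return result[pad_h // 2 : pad_h // 2 + h, pad_w // 2 : pad_w // 2 + w]


frame = np.zeros((120, 200, 3), dtype=np.uint8)
square, pad_w, pad_h = pad_to_square(frame)
assert square.shape == (200, 200, 3)
assert crop_back(square, pad_w, pad_h, 200, 120).shape == frame.shape
```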

View File

@ -3,19 +3,19 @@ from dataclasses import dataclass
from typing import Optional
from typing_extensions import override
import numpy as np
import onnxruntime
from cookit import with_semaphore
from nonebot.utils import run_sync
from ..config import config
from ..frame_source import FrameSource, repack_save
from .utils.common import CheckResult, CheckSingleResult, race_check, similarity_process
from .utils.update import GitHubLatestReleaseModelUpdater, ModelInfo, UpdaterGroup
from .utils.yolox import demo_postprocess, multiclass_nms, preprocess, vis
import itertools
model_filename_sfx = f"_{config.nailong_model1_type.value}.onnx"
@ -47,7 +47,15 @@ labels = labels_path.read_text("u8").splitlines()
session = onnxruntime.InferenceSession(
model_path,
providers=(
[
"TensorrtExecutionProvider",
"CUDAExecutionProvider",
"CPUExecutionProvider",
]
if config.nailong_onnx_try_to_use_gpu
else ["CPUExecutionProvider"]
),
)
input_shape = config.nailong_model1_yolox_size or config.nailong_model1_type.yolox_size
@ -80,60 +88,84 @@ class FrameInfo:
@run_sync
def _check_single(frame: np.ndarray, is_gif: bool = False) -> CheckSingleResult[Optional[Detections]]:
if is_gif:
res = similarity_process(frame, dsize=input_shape)
if res is not None:
return res
return CheckSingleResult.not_ok(None)
else:
img, ratio = preprocess(frame, input_shape)
ort_inputs = {session.get_inputs()[0].name: img[None, :, :, :]}
output = session.run(None, ort_inputs)
predictions = demo_postprocess(output[0], input_shape)[0]
boxes = predictions[:, :4]
scores = predictions[:, 4:5] * predictions[:, 5:]
boxes_xyxy = np.ones_like(boxes)
boxes_xyxy[:, 0] = boxes[:, 0] - boxes[:, 2] / 2.0
boxes_xyxy[:, 1] = boxes[:, 1] - boxes[:, 3] / 2.0
boxes_xyxy[:, 2] = boxes[:, 0] + boxes[:, 2] / 2.0
boxes_xyxy[:, 3] = boxes[:, 1] + boxes[:, 3] / 2.0
boxes_xyxy /= ratio
dets = multiclass_nms(boxes_xyxy, scores, nms_thr=0.45, score_thr=0.1)
if dets is None:
return CheckSingleResult.not_ok(None)
final_boxes, final_scores, final_cls_ids = (
dets[:, :4], # type: ignore
dets[:, 4], # type: ignore
dets[:, 5], # type: ignore
)
for c, s in zip(final_cls_ids, final_scores):
label = labels[int(c)]
expected = config.nailong_model1_score.get(label)
if (expected is not None) and s >= expected:
return CheckSingleResult(
ok=True,
label=label,
extra=Detections(final_boxes, final_scores, final_cls_ids),
)
return CheckSingleResult.not_ok(None)
async def check_single(frame: np.ndarray, is_gif: bool = False) -> CheckSingleResult[FrameInfo]:
res = await _check_single(frame, is_gif)
return CheckSingleResult(
ok=res.ok,
label=res.label,
extra=FrameInfo(frame, res.extra),
)
async def check(source: FrameSource) -> CheckResult:
label = None
extra_vars = {}
if config.nailong_check_all_frames and config.nailong_check_mode == 0:
if config.nailong_similarity_on:
tem_source = itertools.tee(source, 1)[0]
sem = asyncio.Semaphore(config.nailong_concurrency)
results = await asyncio.gather(
*(with_semaphore(sem)(check_single)(frame, True) for frame in tem_source),
)
ok = any(r.ok for r in results)
else:
ok = False
if not ok:
sem = asyncio.Semaphore(config.nailong_concurrency)
results = await asyncio.gather(
*(with_semaphore(sem)(check_single)(frame) for frame in source),
)
ok = any(r.ok for r in results)
if ok:
all_labels = {r.label for r in results if r.label}
label = next(
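Note that with `nailong_onnx_try_to_use_gpu` the provider list passed to `InferenceSession` is now hard-coded rather than user-configurable, and recent onnxruntime versions can reject provider names that are not available in the installed build. A small sketch of filtering against onnxruntime's public provider query before creating a session:

```python
import onnxruntime

# Providers actually usable with the installed onnxruntime build.
available = onnxruntime.get_available_providers()

preferred = [
    "TensorrtExecutionProvider",
    "CUDAExecutionProvider",
    "CPUExecutionProvider",
]
# Keep the preferred order but drop anything unavailable; CPU is the fallback.
providers = [p for p in preferred if p in available] or ["CPUExecutionProvider"]
print(providers)
```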

View File

@ -1,7 +1,17 @@
import asyncio
import datetime
import glob
import os
import random
import shutil
from dataclasses import dataclass, field
from typing import Any, Awaitable, Callable, Dict, Generic, Optional, TypeVar
import itertools
import cv2
import torch
from PIL import Image
from typing_extensions import TypeAlias
import torch.nn.functional as F
import numpy as np
@ -10,6 +20,8 @@ from ...frame_source import FrameSource
T = TypeVar("T")
device = torch.device("cuda" if config.nailong_onnx_try_to_use_gpu and torch.cuda.is_available() else "cpu")
@dataclass
class CheckSingleResult(Generic[T]):
@ -33,36 +45,150 @@ class CheckResult:
return cls(ok=False, label=None, extra_vars={})
FrameChecker: TypeAlias = Callable[[np.ndarray, bool], Awaitable[CheckSingleResult[T]]]
async def race_check(
checker: FrameChecker[T],
frames: FrameSource,
concurrency: int = config.nailong_concurrency,
) -> Optional[CheckSingleResult[T]]:
iterator = iter(frames)
if config.nailong_similarity_on:
temp_frames = itertools.tee(frames, 1)[0]
async def worker() -> CheckSingleResult:
if config.nailong_similarity_on:
while True:
try:
frame = next(temp_frames)
except StopIteration:
break
res = await checker(frame, True)
if res.ok:
return res
while True:
try:
frame = next(iterator)
except StopIteration:
return CheckSingleResult.not_ok(None)
res = await checker(frame, False)
if res.ok:
return res
if config.nailong_check_mode == 0:
tasks = [asyncio.create_task(worker()) for _ in range(concurrency)]
while True:
if not tasks:
break
done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
for t in done:
if (res := t.result()).ok:
for pt in pending:
pt.cancel()
return res
tasks = pending
elif config.nailong_check_mode == 1:
frame = next(iterator)
if config.nailong_similarity_on:
res = await checker(frame, True)
if res.ok:
return res
res = await checker(frame, False)
if res.ok:
return res
elif config.nailong_check_mode == 2:
records = []
while True:
try:
frame = next(iterator)
records.append(frame)
except StopIteration:
break
frame = records[random.randint(0, len(records) - 1)]
if config.nailong_similarity_on:
res = await checker(frame, True)
if res.ok:
return res
res = await checker(frame, False)
if res.ok:
return res
return None
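In mode `0`, `race_check` above fans the frame iterator out to `concurrency` workers and cancels the stragglers as soon as one worker reports a hit. That first-hit-wins idiom in isolation (pure asyncio; the probe function is a stand-in for a frame check):

```python
import asyncio
from typing import Optional


async def probe(n: int) -> Optional[int]:
    await asyncio.sleep(n * 0.1)  # pretend to run a model on frame n
    return n if n == 2 else None  # pretend frame 2 is a hit


async def race(concurrency: int = 4) -> Optional[int]:
    tasks = [asyncio.create_task(probe(n)) for n in range(concurrency)]
    while tasks:
        done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
        for t in done:
            if (res := t.result()) is not None:
                for pt in pending:
                    pt.cancel()  # first hit wins; stop the remaining workers
                return res
        tasks = list(pending)
    return None


print(asyncio.run(race()))  # 2
```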
def similarity_process(image1: np.ndarray, dsize) -> Optional[CheckSingleResult]:
path = list(glob.glob(os.path.join(config.nailong_model_dir, 'records/*/*.jpg')))
if len(path) == 0:
return None
image1 = cv2.cvtColor(image1, cv2.COLOR_BGR2RGB)
image1 = cv2.resize(image1, dsize, interpolation=cv2.INTER_LINEAR)
image1_tensor = torch.tensor(image1, dtype=torch.float32).permute(2, 0, 1).unsqueeze(0)
image1_tensor = image1_tensor.reshape(1, -1).to(device)
for i in range(0, len(path), config.nailong_similarity_max_batch_size):
temp_paths = path[i : i + config.nailong_similarity_max_batch_size]  # slicing clamps at the end of the list
image2s = []
for image_path in temp_paths:
image2 = cv2.imread(image_path)
image2 = cv2.cvtColor(image2, cv2.COLOR_BGR2RGB)
image2 = cv2.resize(image2, dsize, interpolation=cv2.INTER_LINEAR)
image2s.append(image2)
image2_tensor = torch.tensor(np.array(image2s), dtype=torch.float32).permute(0, 3, 1, 2)
image2_tensor = image2_tensor.reshape(image2_tensor.shape[0], -1).to(device)
similarities = F.cosine_similarity(image1_tensor, image2_tensor)
indices = torch.nonzero(similarities > 0.99)
index = indices[0].item() if indices.numel() > 0 else None
if index is not None:
image_path = temp_paths[index]  # the index refers to the current batch, not the full list
label = os.path.basename(os.path.dirname(image_path))  # the record's parent directory is its label
return CheckSingleResult(ok=True, label=label, extra=None)
return None
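`similarity_process` flattens each image into a single row vector and compares it to the stored records with cosine similarity, treating anything above `0.99` as the same image. The core measurement in isolation (torch only; the threshold mirrors the code above):

```python
import torch
import torch.nn.functional as F

# Two fake 224x224 RGB images, flattened to (1, N) row vectors.
a = torch.rand(1, 3 * 224 * 224)
near_duplicate = a + 0.001 * torch.rand_like(a)
unrelated = torch.rand(1, 3 * 224 * 224)

print(F.cosine_similarity(a, near_duplicate))  # ~1.0, above the 0.99 cutoff
print(F.cosine_similarity(a, unrelated))       # clearly below the cutoff
```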
def process_gif_and_save_jpgs(frames, label, dsize, similarity_threshold=0.85):
if len(list(glob.glob(
os.path.join(str(config.nailong_model_dir), 'records/*/*.jpg')))) >= config.nailong_similarity_max_storage:
zip_filename = shutil.make_archive(os.path.join(str(config.nailong_model_dir), '{}_records'.format(
datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%S"))), 'zip',
os.path.join(str(config.nailong_model_dir), 'records'))
shutil.rmtree(os.path.join(str(config.nailong_model_dir), 'records'))
else:
zip_filename = None
output_dir = os.path.join(str(config.nailong_model_dir), 'records', label)
if not os.path.exists(output_dir):
os.makedirs(output_dir)
frame_count = list(range(len(frames)))
while len(frame_count) > 0:
frame_num1 = frame_count[0]
frame_count.remove(frame_num1)
frame1 = frames[frame_num1]
frame_filename = os.path.join(output_dir, "frame{}_{}.jpg".format(frame_num1,
datetime.datetime.now().strftime(
"%Y-%m-%d_%H-%M-%S")))
while os.path.exists(frame_filename):
frame_filename = os.path.join(output_dir, "exist-" + os.path.basename(frame_filename))
cv2.imwrite(frame_filename, frame1)
# frame1 = cv2.cvtColor(frame1, cv2.COLOR_BGR2RGB)
frame1 = cv2.resize(frame1, dsize)
image1_tensor = torch.tensor(frame1, dtype=torch.float32).permute(2, 0, 1).unsqueeze(0)
image1_tensor = image1_tensor.reshape(1, -1).to(device)
max_length = len(frame_count)
indexs = []
for i in range(0, max_length, config.nailong_similarity_max_batch_size):
frame2_num = frame_count[i : i + config.nailong_similarity_max_batch_size]
frame2 = [frames[i] for i in frame2_num]
# frame2 = cv2.cvtColor(frame2, cv2.COLOR_BGR2RGB)
frame2 = [cv2.resize(t, dsize) for t in frame2]
image2_tensor = torch.tensor(np.array(frame2), dtype=torch.float32).permute(0, 3, 1, 2)
image2_tensor = image2_tensor.reshape(image2_tensor.shape[0], -1).to(device)
similarities = F.cosine_similarity(image1_tensor, image2_tensor)
indices = torch.nonzero(similarities > similarity_threshold)
index = indices.squeeze().tolist() if indices.numel() > 0 else None
if type(index) is int:
index = [index]
if index is not None:
indexs.extend([frame2_num[i] for i in index])
frame_count = [i for i in frame_count if i not in indexs]
return zip_filename

View File

@ -56,10 +56,10 @@ def create_parent_dir(path: Path, create: bool = True):
def find_file(
path: Path,
checker: Union[Callable[[Path], bool], str, None] = None,
recursive: bool = False,
last_modified: bool = True,
) -> Optional[Path]:
if isinstance(checker, str) and checker:
if (p := path / checker).exists():
@ -99,10 +99,12 @@ class ModelInfo(Generic[T]):
class ModelUpdater(ABC):
@abstractmethod
def find_from_local(self) -> Optional[Path]: ...
@abstractmethod
def get_info(self) -> ModelInfo: ...
@property
def root_dir(self) -> Path:
@ -119,8 +121,8 @@ class ModelUpdater(ABC):
def check_local_ver(self, info: ModelInfo) -> Optional[str]:
if (
self.get_path(info.filename).exists()
and (ver_path := self.get_ver_path(info.filename)).exists()
):
return ver_path.read_text(encoding="u8").strip()
return None
@ -162,10 +164,10 @@ class ModelUpdater(ABC):
return
def validate_with_unlink(
self,
path: Path,
info: ModelInfo,
clear_ver: bool = True,
) -> Any:
try:
return self.validate(path, info)
@ -177,9 +179,9 @@ class ModelUpdater(ABC):
def _get(self, force_update: bool = False) -> Path:
if (
(not force_update)
and (not config.nailong_auto_update_model)
and (local := self.find_from_local())
):
logger.info("Update skipped")
return local
@ -299,10 +301,10 @@ class GitHubRepoModelUpdater(GitHubModelUpdater):
class GitHubLatestReleaseModelUpdater(GitHubModelUpdater):
def __init__(
self,
owner: str,
repo: str,
local_filename_checker: Optional[Callable[[str], bool]] = None,
) -> None:
super().__init__()
self.owner = owner

View File

@ -92,7 +92,7 @@ _COLORS = (
],
)
.astype(np.float32)
.reshape(-1, 3)
) # fmt: skip

View File

@ -13,13 +13,16 @@ dependencies = [
"nonebot-plugin-uninfo>=0.5.0",
"opencv-python>=4.5",
"numpy>=1.19",
"keras>=2.4",
"pillow>=9",
"cookit[pydantic]>=0.8.1",
"httpx>=0.27.2",
"githubkit>=0.11.14",
"yarl>=1.17.1",
"tqdm>=4.66.6",
"huggingface-hub>=0.26",
"ultralytics>=8.3",
"gradio_client>=1.4"
]
license = { text = "MIT" }
readme = "README.md"