
Refound-445 2025-01-17 17:13:51 +08:00
parent 483aa50a12
commit e3a296e68e
7 changed files with 110 additions and 73 deletions

View File

@@ -219,11 +219,12 @@ Add the required configurations from the table below to the `.env` file in your
| `NAILONG_USER_BLACKLIST` | No | `[]` | List of user IDs to be blacklisted. |
| `NAILONG_PRIORITY` | No | `100` | Matcher priority. |
| **Behavior Configuration** | | | |
| `NAILONG_RECALL` | No | `True` | Whether to recall the message. |
| `NAILONG_MUTE_SECONDS` | No | `0` | Time to mute, 0 means no mute (in seconds). |
| `NAILONG_RECALL` | No | `["nailong"]` | Whether to recall the message |
| `NAILONG_MUTE_SECONDS` | No | `{"nailong":0}` | Set the mute duration. If not set or the duration is 0, no mute will be applied.<br/>Unit: seconds | |
| `NAILONG_TIP` | No | `{"nailong": ["This group prohibits NaiLong images!"]}` | Message to send as a tip, using [Alconna message template](https://nonebot.dev/docs/best-practice/alconna/uniseg#%E4%BD%BF%E7%94%A8%E6%B6%88%E6%81%AF%E6%A8%A1%E6%9D%BF) with custom variables. |
| `NAILONG_FAILED_TIP` | No | `{"nailong": ["{:Reply($message_id)} Oh no, please don't send NaiLong images! 🥺 👉👈"]}` | Message sent when recalling fails or when recalling is disabled. |
| `NAILONG_CHECK_ALL_FRAMES` | No | `False` | Specifies whether to check all frames in the image when using model 1. Requires setting `NAILONG_CHECK_MODE` to 0. When enabled, the `$checked_result` variable in the message template will return a GIF if the original image is animated. |
| `NAILONG_CHECK_RATE` | No | `0.8` | When all frames of an image are checked, the image is only recalled or otherwise handled once the proportion of detected frames reaches this value. |
| `NAILONG_CHECK_MODE` | No | `0` | Selects the detection method for GIF animations.<br/>0. Check all frames<br/>1. Check only the first frame<br/>2. Random frame sampling |
| **Similarity Detection Configuration** | | | |
| `NAILONG_SIMILARITY_ON` | No | `False` | Specifies whether to enable similarity detection on local storage before processing images. |
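
For orientation, here is a minimal Python sketch, not taken from the plugin, of how the tag-keyed options above are described to behave: an unknown label falls back to `nailong` (the `DEFAULT_LABEL` seen in the `config.py` diff further down), one tip is chosen at random from the configured list, and an empty list means the image is still handled but no message is sent. The helper name `resolve_tip` and the label `some_label` are purely illustrative.

```python
import random
from typing import Dict, List, Optional

DEFAULT_LABEL = "nailong"  # fallback label, mirroring the config.py diff below


def resolve_tip(tips: Dict[str, List[str]], label: str) -> Optional[str]:
    """Pick one tip at random for `label`; unknown labels fall back to the default.
    An empty list means: handle the image but send no message."""
    candidates = tips.get(label, tips.get(DEFAULT_LABEL, []))
    return random.choice(candidates) if candidates else None


tips = {"nailong": ["This group prohibits NaiLong images!"]}
print(resolve_tip(tips, "nailong"))     # one of the configured messages
print(resolve_tip(tips, "some_label"))  # falls back to the "nailong" entry
```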
@@ -292,6 +293,14 @@ Welcome everyone to join the group for learning and exchange!
## 📝 Changelog
### 2.3.5
- Added per-label action selection, so muting and recalling can be chosen separately for different types of images.
- Added a new configuration option, `NAILONG_CHECK_RATE`: when all frames of an animated image are checked, detection succeeds once the proportion of frames containing NaiLong reaches the configured threshold (see the sketch below).
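
A minimal sketch of the ratio-based decision that `NAILONG_CHECK_RATE` enables; it mirrors the `ok = ...` change visible in the checker diffs further down, with illustrative names and an added guard for an empty frame list:

```python
from typing import Sequence


def frames_ok(frame_hits: Sequence[bool], check_rate: float = 0.8) -> bool:
    """Return True once the share of frames detected as NaiLong reaches check_rate."""
    if not frame_hits:
        return False  # no frames checked -> never a hit
    return sum(frame_hits) / len(frame_hits) >= check_rate


print(frames_ok([True, True, False, True]))        # 3/4 = 0.75 < 0.8  -> False
print(frames_ok([True, True, False, True], 0.75))  # 3/4 = 0.75 >= 0.75 -> True
```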
### 2.3.4
- Added `model3` to `NAILONG_MODEL`, a model trained based on YOLOv11. It is recommended to set `{"nailong": 0.78}` in
@@ -301,26 +310,38 @@ Welcome everyone to join the group for learning and exchange!
### 2.3.3
- Optimized temporary processing solutions to reduce performance pressure and improve speed (the vector library faiss also supports GPU processing, but it is not recommended for non-professionals to use GPU due to the complex installation process).
- Added `NAILONG_HF_TOKEN` to automatically upload errored images to the Hugging Face dataset.
- Changed the formats of the configuration items `NAILONG_TIP` and `NAILONG_FAILED_TIP`, allowing random response messages. When the corresponding value is an empty list `[]`, only the image will be checked (or the mute/revoke action will be performed) without returning a message.
### 2.3.2
- Updated the three frame processing modes for GIFs. You can choose through `NAILONG_CHECK_MODE`.
- Updated the temporary handling for errored images. By enabling `NAILONG_SIMILARITY_ON`, local storage similarity matching can be used. Additionally, by sending "This is [type]"+image through `SUPERUSERS`, errored images can be saved to local records.
- Added `model2` to `NAILONG_MODEL`, which is based on the YOLOv11-trained model. Currently, it only supports Nailong recognition.
### 2.3.1
- Modified plugin dependencies to avoid some issues that affected the installation process. Please refer to the installation documentation for more details.
- Corresponding configuration changes: Removed the `NAILONG_ONNX_TRY_TO_USE_GPU` configuration item and added the `NAILONG_ONNX_PROVIDERS` configuration item.
### 2.3.0
- Added support for checking all frames in a GIF and re-encapsulating the results into a new GIF. This is disabled by default. The `$checked_image` variable has been deprecated, and a new `$checked_result` variable has been added.
- The input size for model 1 can now be automatically configured based on the model type, but if specified in the configuration, it will be used as the priority.
- Supported processing of images containing other tags. Some configuration items now allow custom values based on the tags.
- Added a user blacklist.
- The default model has been changed to 1.
@@ -333,7 +354,8 @@ Welcome everyone to join the group for learning and exchange!
- Renamed the configuration item `NAILONG_YOLOX_SIZE` to `NAILONG_MODEL1_YOLOX_SIZE`.
- Model 1 can now automatically get the latest version, and you can also choose the model type through configuration.
- Model 1 can now control the confidence threshold for recognition via configuration.
- When loading the ONNX model, the system will attempt to use GPU by default. If it fails, a warning will be shown. If you don't want to see the warning, you can refer to the above to disable the corresponding configuration.
### 2.1.4

View File

@@ -69,7 +69,8 @@ NaiLongRemove is a NaiLong recognition plugin built on a simple AI model, which can
## 💿 Installation
**If you have never used NoneBot before, please read [this document](https://github.com/Refound-445/nonebot-plugin-nailongremove/blob/main/docs/tutorial.md)**
To avoid dependency issues, the GPU-inference installation of the plugin is provided separately from the regular installation, so users who need it can choose accordingly
@@ -207,11 +208,12 @@ pip install nonebot-plugin-nailongremove-base -U
| `NAILONG_USER_BLACKLIST` | No | `[]` | List of user IDs to blacklist |
| `NAILONG_PRIORITY` | No | `100` | Matcher priority |
| **Behavior Configuration** | | | |
| `NAILONG_RECALL` | No | `True` | Whether to recall the message |
| `NAILONG_MUTE_SECONDS` | No | `0` | Mute duration, 0 means no mute<br/>Unit: seconds |
| `NAILONG_RECALL` | No | `["nailong"]` | Labels whose messages will be recalled |
| `NAILONG_MUTE_SECONDS` | No | `{"nailong":0}` | Mute duration per label; if a label is not set or its duration is 0, no mute is applied<br/>Unit: seconds |
| `NAILONG_TIP` | No | `{"nailong": ["本群禁止发奶龙!"]}` | Tip to send, using the [Alconna message template](https://nonebot.dev/docs/best-practice/alconna/uniseg#%E4%BD%BF%E7%94%A8%E6%B6%88%E6%81%AF%E6%A8%A1%E6%9D%BF); available variables are listed below. Values can be customized per label, and one message is sent at random from the list (an example follows the variable table below).<br/>Labels not present fall back to `nailong`.<br/>If the value is an empty list `[]`, detection still runs but no message is sent |
| `NAILONG_FAILED_TIP` | No | `{"nailong": ["{:Reply($message_id)}呜,不要发奶龙了嘛 🥺 👉👈"]}` | Tip sent when recalling fails or recalling is disabled; same format as above |
| `NAILONG_CHECK_ALL_FRAMES` | No | `False` | Whether to check all frames of an image when using model 1; requires `NAILONG_CHECK_MODE` to be set to 0. When enabled, the `$checked_result` variable in the message template becomes an animated image if the original image is animated |
| `NAILONG_CHECK_RATE` | No | `0.8` | When all frames are checked, the image is only recalled or otherwise handled once the proportion of detected frames reaches this value |
| `NAILONG_CHECK_MODE` | No | `0` | How GIF animations are checked<br/>0. Check all frames<br/>1. Check only the first frame<br/>2. Random frame sampling |
| **Similarity Detection Configuration** | | | |
| `NAILONG_SIMILARITY_ON` | No | `False` | Whether to run similarity matching against local storage before processing images |
@@ -235,20 +237,25 @@ pip install nonebot-plugin-nailongremove-base -U
### Available Models
- `0`: Trained and inferred with a ResNet50 image-classification model. Thanks to [@spawner1145](https://github.com/spawner1145) for the model; original link: [spawner1145/NailongRecognize](https://github.com/spawner1145/NailongRecognize.git)
- `1`: Trained and inferred with a YOLOX object-detection model. Thanks to [@NKXingXh](https://github.com/nkxingxh) for the model; original link: [nkxingxh/NailongDetection](https://github.com/nkxingxh/NailongDetection)
- `2`: Trained and inferred with a YOLOv11 object-detection model. Thanks to [@Hakureirm](https://github.com/Hakureirm) for the model; original link: [Hakureirm/NailongKiller](https://huggingface.co/Hakureirm/NailongKiller)
- `3`: Trained and inferred with a YOLOv11 object-detection model. Thanks to [@Threkork](https://github.com/Threkork) for the model; original link: [Threkork/kovi-plugin-check-alllong](https://github.com/Threkork/kovi-plugin-check-alllong). It is recommended to set `{"nailong": 0.78}` in `NAILONG_MODEL1_SCORE` and `[640,640]` for `NAILONG_MODEL1_YOLOX_SIZE`
### Message Template Variables
| Variable | Type | Description |
|-------------------|------------------------------------------------------------------------------------------------------------------------------|-----------------------------|
| `$event` | [`Event`](https://nonebot.dev/docs/api/adapters/#Event) | Current event |
| `$target` | [`Target`](https://nonebot.dev/docs/best-practice/alconna/uniseg#%E6%B6%88%E6%81%AF%E5%8F%91%E9%80%81%E5%AF%B9%E8%B1%A1) | Event target |
| `$message_id` | `str` | Message ID |
| `$msg` | [`UniMessage`](https://nonebot.dev/docs/best-practice/alconna/uniseg#%E9%80%9A%E7%94%A8%E6%B6%88%E6%81%AF%E5%BA%8F%E5%88%97) | Current message |
| `$ss` | [`Session`](https://github.com/RF-Tar-Railt/nonebot-plugin-uninfo?tab=readme-ov-file#session) | Current session |
| `$checked_result` | [`Image`](https://nonebot.dev/docs/best-practice/alconna/uniseg#%E9%80%9A%E7%94%A8%E6%B6%88%E6%81%AF%E6%AE%B5) | Image with the detected target boxed; only available when the model is set to `1` |
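
Building on the variables above, a hypothetical `NAILONG_TIP` value, shown here as the Python literal it corresponds to; the two strings reuse the plugin's documented defaults, and the combination itself is only an example:

```python
# Hypothetical configuration value, illustrative only; the template syntax follows the
# Alconna message templates linked above, and one entry per label is sent at random.
NAILONG_TIP = {
    "nailong": [
        "{:Reply($message_id)}本群禁止发奶龙!",
        "{:Reply($message_id)}呜,不要发奶龙了嘛 🥺 👉👈",
    ],
}
```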
## 🎉 Usage
@@ -267,12 +274,17 @@ pip install nonebot-plugin-nailongremove-base -U
## 📝 Changelog
### 2.3.5
- Added per-label action selection: whether to mute and/or recall can now be chosen separately for different types of images
- Added the `NAILONG_CHECK_RATE` option: when all frames of an animation are checked, detection succeeds once the proportion of NaiLong frames reaches the configured ratio
### 2.3.4
- Added model3 to `NAILONG_MODEL`, a model trained with YOLOv11; it is recommended to set `{"nailong": 0.78}` in `NAILONG_MODEL1_SCORE` and `[640,640]` for `NAILONG_MODEL1_YOLOX_SIZE`
- Updated configuration defaults: `NAILONG_BYPASS_SUPERUSER` -> `False`, `NAILONG_BYPASS_ADMIN` -> `False`
### 2.3.3
- Optimized the temporary handling scheme to reduce performance pressure and improve speed (the vector library faiss also supports GPU processing, but GPU use is not recommended for non-experts because the installation process is complex)
@@ -282,13 +294,14 @@ pip install nonebot-plugin-nailongremove-base -U
### 2.3.2
- Added three frame-processing modes for GIF animations, selectable via `NAILONG_CHECK_MODE`
- Updated the temporary handling of mis-detected images: enable `NAILONG_SIMILARITY_ON` to match against locally stored images by similarity, and `SUPERUSERS` can send "这是[种类]" ("this is [type]") together with the image to save a mis-detected image to the local records
- Added model2 to `NAILONG_MODEL`, a model trained with YOLOv11; currently it only supports NaiLong recognition
### 2.3.1
- Changed the plugin dependencies to avoid some issues affecting the installation process; see the installation documentation for details
- Corresponding configuration changes: removed the `NAILONG_ONNX_TRY_TO_USE_GPU` option and added the `NAILONG_ONNX_PROVIDERS` option
### 2.3.0

View File

@@ -9,7 +9,7 @@ require("nonebot_plugin_uninfo")
from . import handler as handler
from .config import Config
__version__ = "2.3.4"
__version__ = "2.3.5"
__plugin_meta__ = PluginMetadata(
name="自动撤回奶龙",
description="一个基于图像分类模型的简单插件~",

View File

@@ -41,8 +41,8 @@ class Config(BaseModel):
nailong_user_blacklist: List[str] = Field(default_factory=list)
nailong_priority: int = 100
nailong_recall: bool = True
nailong_mute_seconds: int = 0
nailong_recall: List[str] = ["nailong"]
nailong_mute_seconds: Dict[str,int] = {"nailong":0}
nailong_tip: Dict[str, List[str]] = {
DEFAULT_LABEL: ["本群禁止发送奶龙!"],
}
@@ -50,6 +50,7 @@
DEFAULT_LABEL: ["{:Reply($message_id)}呜,不要发奶龙了嘛 🥺 👉👈"],
}
nailong_check_all_frames: bool = False
nailong_check_rate: float = 0.8
nailong_model_dir: Path = Field(
default_factory=lambda: Path.cwd() / "data" / "nailongremove",
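
A self-contained sketch of the reworked fields above (only `nailong_recall`, `nailong_mute_seconds`, and the new `nailong_check_rate` are reproduced; the surrounding model and the example values are illustrative), showing the per-label shapes they now accept:

```python
from typing import Dict, List

from pydantic import BaseModel


class ConfigSketch(BaseModel):
    # Mirrors only the fields changed in the hunk above, not the full plugin Config.
    nailong_recall: List[str] = ["nailong"]
    nailong_mute_seconds: Dict[str, int] = {"nailong": 0}
    nailong_check_rate: float = 0.8


cfg = ConfigSketch(nailong_mute_seconds={"nailong": 600})
# A label is muted only when it is present with a positive duration,
# matching the check in the handler diff below.
print(cfg.nailong_mute_seconds.get("nailong", 0) > 0)  # True
```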

View File

@@ -23,7 +23,7 @@ def judge_list(lst: Iterable[T], val: T, blacklist: bool) -> bool:
async def execute_functions_any_ok(
func: Iterable[Callable[[], Awaitable[Any]]],
) -> bool:
ok = False
for f in func:
@@ -38,36 +38,36 @@ async def execute_functions_any_ok(
async def nailong_rule(
bot: BaseBot,
event: BaseEvent,
session: Uninfo,
ss_interface: QryItrface,
msg: UniMsg,
) -> bool:
return (
# check if it's a group chat
bool(session.member) # this prop only exists in group chats
# user blacklist
and (session.user.id not in config.nailong_user_blacklist)
# scene blacklist or whitelist
and judge_list(
config.nailong_list_scenes,
session.scene_path,
config.nailong_blacklist,
)
# bypass superuser
and ((not config.nailong_bypass_superuser) or (not await SUPERUSER(bot, event)))
# bypass group admin
and (
(not config.nailong_bypass_admin)
or ((not session.member.role) or session.member.role.level <= 1)
)
# msg has supported seg
and (any(True for x in msg if type(x) in source_extractors))
# self is admin
and (
(not config.nailong_need_admin)
or bool(
(
self_info := await ss_interface.get_member(
session.scene.type,
@@ -78,7 +78,7 @@ async def nailong_rule(
and self_info.role
and self_info.role.level > 1,
)
)
)
@@ -124,10 +124,11 @@ async def handle_function(bot: BaseBot, ev: BaseEvent, msg: UniMsg, session: Uni
continue
functions: List[Callable[[], Awaitable[Any]]] = []
if config.nailong_recall:
if check_res.label in config.nailong_recall:
functions.append(lambda: recall(bot, ev))
if config.nailong_mute_seconds > 0:
functions.append(lambda: mute(bot, ev, config.nailong_mute_seconds))
if check_res.label in config.nailong_mute_seconds.keys() and config.nailong_mute_seconds[
check_res.label] > 0:
functions.append(lambda: mute(bot, ev, config.nailong_mute_seconds[check_res.label]))
punish_ok = functions and (await execute_functions_any_ok(functions))
template_dict = (
config.nailong_tip if punish_ok else config.nailong_failed_tip

View File

@@ -185,7 +185,7 @@ async def check(source: FrameSource) -> CheckResult:
for frame in tem_source
),
)
ok = any(r.ok for r in results)
ok = True if sum(1 for r in results if r.ok) / len(results) >= config.nailong_check_rate else False
else:
ok = False
if not ok:
@@ -193,7 +193,7 @@ async def check(source: FrameSource) -> CheckResult:
results = await asyncio.gather(
*(with_semaphore(sem)(check_single)(frame) for frame in source),
)
ok = any(r.ok for r in results)
ok = True if sum(1 for r in results if r.ok) / len(results) >= config.nailong_check_rate else False
if ok:
all_labels = {r.label for r in results if r.label}
label = next(

View File

@@ -83,8 +83,8 @@ class FrameInfo:
@run_sync
def _check_single(
frame: np.ndarray,
is_gif: bool = False,
) -> CheckSingleResult[Optional[Detections]]:
if is_gif:
res = similarity_process(frame)
@@ -127,8 +127,8 @@ def _check_single(
async def check_single(
frame: np.ndarray,
is_gif: bool = False,
) -> CheckSingleResult[FrameInfo]:
if is_gif:
res = await _check_single(frame, True)
@@ -158,7 +158,7 @@ async def check(source: FrameSource) -> CheckResult:
for frame in tem_source
),
)
ok = any(r.ok for r in results)
ok = True if sum(1 for r in results if r.ok) / len(results) >= config.nailong_check_rate else False
else:
ok = False
if not ok:
@@ -166,7 +166,7 @@ async def check(source: FrameSource) -> CheckResult:
results = await asyncio.gather(
*(with_semaphore(sem)(check_single)(frame) for frame in source),
)
ok = any(r.ok for r in results)
ok = True if sum(1 for r in results if r.ok) / len(results) >= config.nailong_check_rate else False
if ok:
all_labels = {r.label for r in results if r.label}
label = next(