Use NMS in the model itself; simplify and make postprocessing faster.

Make inference in batches, fix C# handling, add overlap handling
Alex Bezdieniezhnykh
2025-02-10 14:55:00 +02:00
parent ba3e3b4a55
commit c1b5b5fee2
19 changed files with 259 additions and 140 deletions
@@ -13,6 +13,17 @@ Results (file or annotations) are put to the other queue, or the same socket,
<h2>Installation</h2>
Prepare a correct ONNX model from YOLO:
```python
from ultralytics import YOLO
import netron
model = YOLO("azaion.pt")
model.export(format="onnx", imgsz=1280, nms=True, batch=4)
netron.start('azaion.onnx')
```
Read carefully about the [export arguments](https://docs.ultralytics.com/modes/export/): you must use `nms=True` and export with batching at a proper batch size.
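With `nms=True`, the exported model runs non-maximum suppression internally, so postprocessing reduces to a confidence filter over a fixed-size output. A minimal sketch of that filter, assuming the common `(batch, max_det, 6)` layout of `x1, y1, x2, y2, score, class_id` (the array here is synthetic; shapes and the threshold are illustrative):
```python
import numpy as np

# Simulated output of an nms=True export: (batch, max_det, 6) rows of
# x1, y1, x2, y2, score, class_id; unused slots are zero-padded.
detections = np.zeros((4, 300, 6), dtype=np.float32)
detections[0, 0] = [10, 10, 50, 50, 0.9, 0]
detections[0, 1] = [60, 60, 90, 90, 0.4, 2]

# Postprocess is just a confidence filter -- NMS already happened in-model.
conf_threshold = 0.25
results = [img[img[:, 4] > conf_threshold] for img in detections]
# image 0 keeps both boxes; the zero-padded images keep none
```
Because the suppression step no longer runs in Python (or C#) per image, the same filter applies uniformly to every image in the batch.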
<h3>Install libs</h3>
https://www.python.org/downloads/
@@ -45,7 +56,7 @@ This is crucial for the build because the build needs the Python.h header and other files
```
python -m pip install --upgrade pip
pip install opencv-python cython msgpack cryptography rstream pika zmq pyjwt pyinstaller tensorboard
pip install -r requirements.txt
```
In case of fbgemm.dll error (Windows specific):