[AZ-171] Enable dynamic batch size for ONNX, TensorRT, and CoreML exports

Made-with: Cursor
Oleksandr Bezdieniezhnykh
2026-03-28 17:25:15 +02:00
parent c1d27c7a47
commit 433e080a07
5 changed files with 126 additions and 6 deletions
@@ -111,8 +111,8 @@ Raw annotations (Queue) → /azaion/data-seed/ (unvalidated)
 | Format | Use | Export Details |
 |--------|-----|---------------|
 | `.pt` | Training checkpoint | YOLOv11 PyTorch weights |
-| `.onnx` | Cross-platform inference | 1280px, batch=4, NMS baked in |
-| `.engine` | GPU inference (production) | TensorRT FP16, batch=4, per-GPU architecture |
+| `.onnx` | Cross-platform inference | 1280px, dynamic batch (1–8), NMS baked in |
+| `.engine` | GPU inference (production) | TensorRT FP16, dynamic batch max 8, per-GPU architecture |
 | `.rknn` | Edge inference | RK3588 target (OrangePi5) |
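The changed rows move both exports from a fixed batch=4 to a dynamic batch profile with a maximum of 8. A minimal caller-side sketch of what that implies (the helper name and the 1–8 range are assumptions read off the table, not code from this commit): requests above the profile maximum must be clamped or chunked before being fed to the engine.

```python
# Hypothetical helper, not part of this commit: clamp a requested batch
# size into the dynamic-batch profile the exports are built with
# (assumed min=1, max=8 per the table above).
MIN_BATCH = 1
MAX_BATCH = 8

def resolve_batch(requested: int) -> int:
    """Return a batch size the dynamic engine can accept.

    Oversized requests are clamped to the profile maximum, so callers
    can split larger inputs into max-sized chunks.
    """
    if requested < MIN_BATCH:
        raise ValueError(f"batch size must be >= {MIN_BATCH}, got {requested}")
    return min(requested, MAX_BATCH)

print(resolve_batch(4))   # within the profile -> 4
print(resolve_batch(12))  # clamped to the profile maximum -> 8
```

A TensorRT engine built with a dynamic optimization profile rejects shapes outside that profile at runtime, which is why a guard like this belongs on the inference path rather than in the exporter.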
## Integration Points