mirror of
https://github.com/azaion/ai-training.git
synced 2026-04-22 10:46:35 +00:00
[AZ-171] Enable dynamic batch size for ONNX, TensorRT, and CoreML exports
Made-with: Cursor
@@ -111,8 +111,8 @@ Raw annotations (Queue) → /azaion/data-seed/ (unvalidated)
 | Format | Use | Export Details |
 |--------|-----|---------------|
 | `.pt` | Training checkpoint | YOLOv11 PyTorch weights |
-| `.onnx` | Cross-platform inference | 1280px, batch=4, NMS baked in |
-| `.engine` | GPU inference (production) | TensorRT FP16, batch=4, per-GPU architecture |
+| `.onnx` | Cross-platform inference | 1280px, dynamic batch (1–8), NMS baked in |
+| `.engine` | GPU inference (production) | TensorRT FP16, dynamic batch max 8, per-GPU architecture |
 | `.rknn` | Edge inference | RK3588 target (OrangePi5) |
## Integration Points
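One practical consequence of the change above: with a fixed `batch=4` export, callers had to pad every request up to exactly four images, whereas a dynamic-batch (1–8) engine accepts any chunk size in that range. The helper below is a minimal sketch of the client-side chunking this enables; `chunk_batches` is a hypothetical name, not part of this repository, and the 1–8 limit mirrors the export details in the table.

```python
def chunk_batches(items, max_batch=8):
    """Split a stream of inputs into batches for a dynamic-batch model.

    Because the exported engine accepts any batch size from 1 to
    max_batch, the final partial chunk is a valid input shape as-is
    and needs no padding (unlike a fixed batch=4 export).
    """
    if not 1 <= max_batch <= 8:
        raise ValueError("dynamic-batch exports accept batch sizes 1-8")
    for i in range(0, len(items), max_batch):
        yield items[i:i + max_batch]


# Example: 10 frames become one full batch of 8 plus a partial batch of 2.
batches = list(chunk_batches(list(range(10))))
```

Each yielded chunk can be stacked and fed to the `.onnx` or `.engine` model directly; with the old fixed-batch export the trailing 2-frame chunk would have required padding to 4.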