Oleksandr Bezdieniezhnykh 3984507221 [AZ-180] Fix INT8 conversion: set FP16 flag alongside INT8 for TensorRT 10.x
In TensorRT 10.x, INT8 conversion requires FP16 to be set as a fallback for
network layers (e.g. normalization ops in detection models) that have no INT8
kernel implementation. Without FP16, build_serialized_network can return None
on Jetson for YOLO-type models. The INT8 flag remains the primary precision;
FP16 serves only as a layer-level fallback within the same engine.
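A minimal sketch of the flag selection this commit describes. The `precision_flags` helper is hypothetical (not from the repo); the commented-out lines show where the real TensorRT calls (`trt.BuilderFlag`, `config.set_flag`, `builder.build_serialized_network`) would go — they require a GPU and the `tensorrt` package, so they are illustrative only.

```python
def precision_flags(mode: str) -> list[str]:
    """Hypothetical helper: builder flags to enable for a precision mode."""
    if mode == "int8":
        # INT8 is the primary precision; FP16 is the per-layer fallback
        # for ops (e.g. normalization) with no INT8 kernel in TensorRT 10.x.
        return ["INT8", "FP16"]
    if mode == "fp16":
        return ["FP16"]
    return []

# In the real build path (assumed usage, requires tensorrt + GPU):
# import tensorrt as trt
# for name in precision_flags("int8"):
#     config.set_flag(getattr(trt.BuilderFlag, name))
# engine = builder.build_serialized_network(network, config)
# if engine is None:  # observed on Jetson when the FP16 fallback is missing
#     raise RuntimeError("TensorRT engine build failed")
```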

Made-with: Cursor
2026-04-02 07:32:16 +03:00

Azaion.Detections

Cython/Python service for YOLO inference (TensorRT / ONNX Runtime). GPU-enabled container.

Languages
Python 61.4%
Cython 30.7%
Shell 7.4%
Dockerfile 0.5%