Mirror of https://github.com/azaion/detections.git, synced 2026-04-22 05:26:32 +00:00 at commit 3984507221d99b93138e4d6a9570e9b97cdf7316.
In TensorRT 10.x, INT8 conversion requires the FP16 flag to be set as a fallback for network layers (e.g. normalization ops in detection models) that have no INT8 kernel implementation. Without FP16, build_serialized_network can return None on Jetson for YOLO-type models. The INT8 flag is still the primary precision; FP16 is only the layer-level fallback within the same engine.

Made-with: Cursor
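The flag combination described above can be sketched against the `IBuilderConfig.set_flag` / `trt.BuilderFlag` interface of the TensorRT Python API. This is a hedged sketch, not code from this repo: the helper name is ours, and `config`/`flags` are duck-typed so the logic reads without a TensorRT install.

```python
def set_int8_with_fp16_fallback(config, flags):
    """Enable INT8 as the primary precision plus FP16 as the
    layer-level fallback within the same engine.

    `config` stands in for a TensorRT IBuilderConfig and `flags`
    for trt.BuilderFlag; with the real library this is
    config.set_flag(trt.BuilderFlag.INT8) followed by
    config.set_flag(trt.BuilderFlag.FP16).
    """
    # INT8 remains the primary precision for every layer that has
    # an INT8 kernel implementation.
    config.set_flag(flags.INT8)
    # FP16 covers layers (e.g. normalization ops) with no INT8
    # kernel; without it, build_serialized_network can return None
    # on Jetson for YOLO-type models.
    config.set_flag(flags.FP16)
```

With a real TensorRT 10.x config from `builder.create_builder_config()`, setting both flags before `builder.build_serialized_network(network, config)` is the whole fix; most layers still run in INT8.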
Azaion.Detections
A Cython/Python service for YOLO inference (TensorRT or ONNX Runtime backends), shipped in a GPU-enabled container.
Languages: Python 61.4%, Cython 30.7%, Shell 7.4%, Dockerfile 0.5%.