[AZ-180] Add Jetson Orin Nano support with INT8 TensorRT engine

- Dockerfile.jetson: JetPack 6.x L4T base image (aarch64), TensorRT and PyCUDA from apt
- requirements-jetson.txt: derived from requirements.txt, no pip tensorrt/pycuda
- docker-compose.jetson.yml: runtime: nvidia for NVIDIA Container Runtime
- tensorrt_engine.pyx: convert_from_source accepts optional calib_cache_path; INT8 is used when the cache is present, with FP16 as the fallback; get_engine_filename encodes a precision suffix so INT8 and FP16 engines never collide in the engine cache
- inference.pyx: init_ai tries INT8 engine then FP16 on lookup; downloads calibration cache before conversion thread; passes cache path through to convert_from_source
- constants_inf: add INT8_CALIB_CACHE_FILE constant
- Unit tests for AC-3 (INT8 flag set when cache provided) and AC-4 (FP16 when no cache)
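The INT8-with-FP16-fallback selection described above can be sketched in plain Python. This is a hypothetical simplification: `calib_cache_path`, `convert_from_source`, and `get_engine_filename` are names from the commit, but the function bodies below are assumptions, not the actual implementation (in the real code the chosen precision maps to `trt.BuilderFlag.INT8` / `trt.BuilderFlag.FP16` on the TensorRT builder config).

```python
import os

def select_precision(calib_cache_path):
    """Return "int8" only when an INT8 calibration cache file actually
    exists on disk; otherwise fall back to FP16 (AC-3 / AC-4)."""
    if calib_cache_path and os.path.isfile(calib_cache_path):
        return "int8"
    return "fp16"

def get_engine_filename(model_name, precision):
    # Encode the precision in the filename so cached INT8 and FP16
    # engines built from the same model never shadow each other.
    return f"{model_name}.{precision}.engine"
```

With this split, init_ai can probe for the "int8" engine filename first and fall back to the "fp16" variant when no INT8 engine was ever built.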

Made-with: Cursor
Author: Oleksandr Bezdieniezhnykh
Date: 2026-04-02 07:12:45 +03:00
Parent: 097811a67b
Commit: 2149cd6c08
12 changed files with 381 additions and 29 deletions
@@ -0,0 +1,23 @@
# JetPack 6.x L4T base image (aarch64). TensorRT and PyCUDA come from the
# JetPack apt repos, not pip, so they match the device's CUDA stack.
FROM nvcr.io/nvidia/l4t-base:r36.3.0

RUN apt-get update && apt-get install -y \
    python3 python3-pip python3-dev gcc \
    libgl1 libglib2.0-0 \
    python3-libnvinfer python3-libnvinfer-dev \
    python3-pycuda \
    && rm -rf /var/lib/apt/lists/*

# Fail the build early if the JetPack TensorRT Python bindings are missing.
RUN python3 -c "import tensorrt" || \
    (echo "TensorRT Python bindings not found; check PYTHONPATH for JetPack installation" && exit 1)

WORKDIR /app
COPY requirements-jetson.txt ./
RUN pip3 install --no-cache-dir -r requirements-jetson.txt

# Copy sources and compile the Cython extensions in place.
COPY . .
RUN python3 setup.py build_ext --inplace
ENV PYTHONPATH=/app/src

# Run as an unprivileged user.
RUN adduser --disabled-password --no-create-home --gecos "" appuser \
    && chown -R appuser /app
USER appuser

EXPOSE 8080
CMD ["python3", "-m", "uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8080"]
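The Dockerfile above is selected via docker-compose.jetson.yml. A minimal sketch of such a compose file follows; `runtime: nvidia` is from the commit, while the service name and port mapping are assumptions for illustration:

```yaml
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile.jetson
    runtime: nvidia        # NVIDIA Container Runtime exposes the Orin GPU
    ports:
      - "8080:8080"        # matches the EXPOSE/uvicorn port above
```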