- Dockerfile.jetson: JetPack 6.x L4T base image (aarch64); TensorRT and PyCUDA installed from apt
- requirements-jetson.txt: derived from requirements.txt, with the pip tensorrt/pycuda entries removed
- docker-compose.jetson.yml: `runtime: nvidia` for the NVIDIA Container Runtime
- tensorrt_engine.pyx: convert_from_source accepts an optional calib_cache_path; INT8 is used when the cache is present, with FP16 as the fallback; get_engine_filename encodes a precision suffix to avoid engine-cache confusion
- inference.pyx: init_ai tries the INT8 engine first, then FP16, on lookup; downloads the calibration cache before starting the conversion thread; passes the cache path through to convert_from_source
- constants_inf: add INT8_CALIB_CACHE_FILE constant
- Unit tests for AC-3 (INT8 flag set when a cache is provided) and AC-4 (FP16 when no cache)

Made-with: Cursor
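The precision-selection and filename-encoding behaviour summarised above could be sketched roughly as follows. This is a minimal pure-Python sketch, not the actual .pyx code; the real convert_from_source drives the TensorRT builder, and the helper names and filename scheme here are illustrative assumptions:

```python
import os


def choose_precision(calib_cache_path):
    """Pick INT8 when a calibration cache file is actually present on disk,
    otherwise fall back to FP16 (hypothetical helper mirroring the
    convert_from_source behaviour described above)."""
    if calib_cache_path and os.path.isfile(calib_cache_path):
        return "int8"
    return "fp16"


def get_engine_filename(model_name, precision):
    """Encode the precision in the engine filename so a cached FP16 engine is
    never mistaken for an INT8 one (suffix scheme is illustrative)."""
    return f"{model_name}.{precision}.engine"
```

A lookup that "tries INT8 then FP16" then reduces to checking for `get_engine_filename(name, "int8")` on disk before `get_engine_filename(name, "fp16")`, which is why the precision suffix matters: without it, both precisions would collide on one cache entry.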
Autopilot State
Current Step
flow: existing-code
step: 9
name: Implement
status: in_progress
sub_step: batch_01
retry_count: 0
Cycle Notes
AZ-178 cycle (steps 8–14) completed 2026-04-02.

- step 8 (New Task) — DONE (AZ-178 defined)
- step 9 (Implement) — DONE (implementation_report_streaming_video.md, 67/67 tests pass)
- step 10 (Run Tests) — DONE (67 passed, 0 failed)
- step 11 (Update Docs) — DONE (docs updated during step 9 implementation)
- step 12 (Security Audit) — DONE (Critical/High findings remediated 2026-04-01; 64/64 tests pass)
- step 13 (Performance Test) — SKIPPED (500 ms latency validated by real-video integration test)
- step 14 (Deploy) — DONE (all artifacts + 5 scripts created)
AZ-180 cycle started 2026-04-02.

- step 8 (New Task) — DONE (AZ-180: Jetson Orin Nano support + INT8)
- step 9 (Implement) — NOT STARTED