- Dockerfile.jetson: JetPack 6.x L4T base image (aarch64), TensorRT and PyCUDA from apt
- requirements-jetson.txt: derived from requirements.txt, no pip tensorrt/pycuda
- docker-compose.jetson.yml: runtime: nvidia for NVIDIA Container Runtime
- tensorrt_engine.pyx: convert_from_source accepts an optional calib_cache_path; INT8 is used when the cache is present, with FP16 as fallback; get_engine_filename encodes a precision suffix into the filename so INT8 and FP16 engines don't collide in the engine cache
- inference.pyx: init_ai tries INT8 engine then FP16 on lookup; downloads calibration cache before conversion thread; passes cache path through to convert_from_source
- constants_inf: add INT8_CALIB_CACHE_FILE constant
- Unit tests for AC-3 (INT8 flag set when cache provided) and AC-4 (FP16 when no cache)
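The precision-selection and filename logic described above can be sketched as plain Python. This is a minimal illustration, not the actual tensorrt_engine.pyx code: `select_precision` is a hypothetical helper name, and the exact `get_engine_filename` signature and suffix format are assumptions.

```python
import os


def select_precision(calib_cache_path):
    """Pick INT8 when a calibration cache file is present on disk,
    otherwise fall back to FP16 (mirrors AC-3 / AC-4 behavior)."""
    if calib_cache_path and os.path.exists(calib_cache_path):
        return "int8"
    return "fp16"


def get_engine_filename(model_name, precision):
    """Encode the precision as a filename suffix so cached INT8 and
    FP16 engines for the same model never shadow each other."""
    return f"{model_name}.{precision}.engine"
```

Encoding the precision in the filename means a later run with a calibration cache builds a fresh INT8 engine instead of silently loading a stale FP16 one.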
Made-with: Cursor
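The docker-compose.jetson.yml change boils down to selecting the NVIDIA Container Runtime. A minimal sketch, assuming an illustrative service name (the real file's service names and other settings are not shown here):

```yaml
# docker-compose.jetson.yml -- sketch; "inference" is a placeholder name
services:
  inference:
    build:
      context: .
      dockerfile: Dockerfile.jetson
    runtime: nvidia   # use the NVIDIA Container Runtime on JetPack 6.x (aarch64)
```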
- Changed the current step in the autopilot state from "Refactor" to "Implement", marking the transition to the next development phase.
- Updated the dependencies table: 11 tasks completed, 4 new tasks added for the distributed architecture.
- Removed task documentation for AZ-173, AZ-174, AZ-175, and AZ-176, now obsolete after the architectural changes.
- Reorganized the execution order for the new tasks into dependency-based batches.
These updates align the project documentation with the current development phase and clarify task status going forward.