[AZ-180] Refactor inference and engine factory for improved model handling

- Updated the autopilot state file to mark the current task as in progress.
- Refactored the inference module to streamline model download and conversion, replacing the download_model method with a more flexible load_source method.
- Introduced asynchronous model building in the inference module to keep conversion off the hot path.
- Enhanced the engine factory with a new method for building and caching models, with improved error handling and logging during upload.
- Added calibration cache handling in the Jetson TensorRT engine for better resource management.
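The load_source plus async build-and-cache flow described above can be sketched as follows. This is a minimal illustration only: the class and method names (`InferenceModel`, `load_source`, `build`, `_convert`) and the cache layout are assumptions for the sketch, not the actual API in this repository.

```python
import asyncio
from pathlib import Path


class InferenceModel:
    """Hypothetical sketch of resolving a model source, then building
    (converting) it asynchronously with a simple on-disk cache."""

    def __init__(self, cache_dir: str = ".cache/engines"):
        self.cache_dir = Path(cache_dir)
        self.cache_dir.mkdir(parents=True, exist_ok=True)

    def load_source(self, source: str) -> Path:
        # Resolve a model source (here: a local path) to a local file.
        path = Path(source)
        if not path.exists():
            raise FileNotFoundError(f"model source not found: {source}")
        return path

    async def build(self, source: str) -> Path:
        # Convert the model off the event loop, caching the result so
        # repeated builds of the same source are free.
        src = self.load_source(source)
        engine = self.cache_dir / (src.stem + ".engine")
        if engine.exists():
            return engine  # cache hit: skip conversion
        await asyncio.to_thread(self._convert, src, engine)
        return engine

    def _convert(self, src: Path, dst: Path) -> None:
        # Placeholder for the real conversion step (e.g. ONNX -> TensorRT).
        dst.write_bytes(src.read_bytes())
```

Running the blocking conversion via `asyncio.to_thread` keeps the event loop responsive, and checking the cache before converting mirrors the build-and-cache behaviour the commit describes.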

Made-with: Cursor
Author: Oleksandr Bezdieniezhnykh
Date: 2026-04-03 06:41:11 +03:00
Parent: 834f846dc8
Commit: 8116b55813
4 changed files with 64 additions and 54 deletions
@@ -4,8 +4,8 @@
 flow: existing-code
 step: 8
 name: New Task
-status: not_started
-sub_step: 0
+status: in_progress
+sub_step: 1 — Gather Feature Description
 retry_count: 0
 ## Cycle Notes