Mirror of https://github.com/azaion/ai-training.git (synced 2026-04-22 22:56:34 +00:00)
Module: start_inference
Purpose
Entry point for running inference on video files using a TensorRT engine. Downloads the encrypted model from the API/CDN, initializes the engine, and processes video.
Public Interface
| Function | Signature | Returns | Description |
|---|---|---|---|
| `get_engine_filename` | `(device_id=0) -> str \| None` | Engine filename | Generates the GPU-specific engine filename (duplicates `TensorRTEngine.get_engine_filename`) |
`__main__` block: Creates an `ApiClient`, downloads the encrypted TensorRT model (split big/small), initializes a `TensorRTEngine`, and runs `Inference` on a test video.
Internal Logic
- Model download flow: `ApiClient` → `load_big_small_resource` → reassembles from the local big part plus the API-downloaded small part → decrypts with the model encryption key → raw engine bytes.
- Inference setup: `TensorRTEngine` initialized from the decrypted bytes; `Inference` configured with `confidence_threshold=0.5`, `iou_threshold=0.3`.
- Video source: hardcoded to `tests/ForAI_test.mp4`.
- `get_engine_filename()`: duplicates `TensorRTEngine.get_engine_filename()` — generates `azaion.cc_{major}.{minor}_sm_{sm_count}.engine` based on the CUDA device's compute capability and SM count.
Dependencies
- `constants` — config file paths
- `api_client` — `ApiClient`, `ApiCredentials` for model download
- `cdn_manager` — `CDNManager`, `CDNCredentials` (imported, but the CDN is managed by `api_client`)
- `inference/inference` — `Inference` pipeline
- `inference/tensorrt_engine` — `TensorRTEngine`
- `security` — model encryption key
- `utils` — `Dotdict`
- `pycuda.driver` (external) — CUDA device queries
- `yaml` (external)
Consumers
None (entry point).
Data Models
None.
Configuration
- Confidence threshold: 0.5
- IoU threshold: 0.3
- Video path: `tests/ForAI_test.mp4` (hardcoded)
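For reference, the hardcoded values above could be bundled into a single object; a minimal sketch (`InferenceConfig` is a name introduced here, not part of the module's actual API):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class InferenceConfig:
    """Illustrative bundle of the hardcoded inference settings."""

    confidence_threshold: float = 0.5
    iou_threshold: float = 0.3
    video_path: str = "tests/ForAI_test.mp4"


cfg = InferenceConfig()
```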
External Integrations
- Azaion API + CDN for model download
- TensorRT GPU inference
- OpenCV video capture and display
Security
- Model is downloaded encrypted (split big/small) and decrypted locally
- Uses hardware-bound and model encryption keys
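As a rough illustration of the reassemble-then-decrypt step: the split layout and cipher are internal to the project, so XOR with a repeating key below is a stand-in, not the real scheme, and both function names are introduced here.

```python
from itertools import cycle


def reassemble_model(big_part: bytes, small_part: bytes) -> bytes:
    """Join the locally cached big part with the API-downloaded small part."""
    return big_part + small_part


def decrypt_model(encrypted: bytes, key: bytes) -> bytes:
    """Stand-in symmetric decryption (NOT the project's real cipher)."""
    return bytes(b ^ k for b, k in zip(encrypted, cycle(key)))
```

The decrypted output is then passed directly to `TensorRTEngine` as raw engine bytes.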
Tests
None.