Refactor inference and AI configuration handling

- Replaced the `get_onnx_engine_bytes` method on the `Inference` class with `download_model`, which loads a model dynamically by filename instead of assuming a fixed ONNX artifact.
- Changed `convert_and_upload_model` to accept `source_bytes` instead of `onnx_engine_bytes`, so the converter no longer assumes the input is an ONNX engine.
- Added an `engine_name` property to the `Inference` class for direct access to the active engine's name.
- Added a `from_dict` method pointer to the `AIRecognitionConfig` structure so configurations can be constructed from dictionaries.
- Updated test cases for the new model paths and timeout settings.
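The renamed methods above can be sketched in plain Python. This is a minimal illustration of the described interface, not the actual implementation; the method bodies, the in-memory model store, and the `"converted"` key are all hypothetical stand-ins.

```python
class Inference:
    """Sketch of the refactored interface; bodies are illustrative placeholders."""

    def __init__(self, engine_name):
        self._engine_name = engine_name
        self._models = {}  # stand-in for real model storage

    @property
    def engine_name(self):
        # New read-only accessor for engine details.
        return self._engine_name

    def download_model(self, filename):
        # Replaces get_onnx_engine_bytes: the model is selected
        # dynamically by filename rather than hard-coded to ONNX.
        return self._models.get(filename)

    def convert_and_upload_model(self, source_bytes):
        # Parameter renamed from onnx_engine_bytes: the source format
        # is no longer assumed to be ONNX. Returns the byte count here
        # purely for illustration.
        self._models["converted"] = source_bytes
        return len(source_bytes)
```

The key design point is that callers now name the file they want, so the same `Inference` surface can serve non-ONNX source formats.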
Author: Oleksandr Bezdieniezhnykh
Date:   2026-03-30 00:22:56 +03:00
Commit: 27f4aceb52 (parent 6269a7485c)
25 changed files with 40974 additions and 6172 deletions
@@ -88,7 +88,12 @@ cdef class TensorRTEngine(InferenceEngine):
         return None
 
     @staticmethod
-    def convert_from_onnx(bytes onnx_model):
+    def get_source_filename():
+        import constants_inf
+        return constants_inf.AI_ONNX_MODEL_FILE
+
+    @staticmethod
+    def convert_from_source(bytes onnx_model):
         workspace_bytes = int(TensorRTEngine.get_gpu_memory_bytes(0) * 0.9)
         explicit_batch_flag = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
 
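The hunk's pattern can be shown without TensorRT or Cython: the engine class publishes the name of its source artifact via one static method and converts that artifact via another. This is a plain-Python sketch; `AI_ONNX_MODEL_FILE` here is a local stand-in for the `constants_inf` value, and the converter body is a placeholder for the real TensorRT build.

```python
# Illustrative constant; in the real code this lives in constants_inf.
AI_ONNX_MODEL_FILE = "model.onnx"


class TensorRTEngineSketch:
    @staticmethod
    def get_source_filename():
        # Tells callers which file to download/convert for this engine.
        return AI_ONNX_MODEL_FILE

    @staticmethod
    def convert_from_source(source_bytes):
        # Placeholder for the TensorRT network build; simply reports
        # the size of the source artifact.
        return len(source_bytes)
```

Keeping both as static methods means callers can drive the download-then-convert flow from the class alone, without constructing an engine first.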