Refactor inference and AI configuration handling

- Updated the `Inference` class to replace the `get_onnx_engine_bytes` method with `download_model`, allowing for dynamic model loading based on a specified filename.
- Modified the `convert_and_upload_model` method to accept `source_bytes` instead of `onnx_engine_bytes`, enhancing flexibility in model conversion.
- Introduced a new property `engine_name` to the `Inference` class for better access to engine details.
- Adjusted the `AIRecognitionConfig` structure to include a new method pointer `from_dict`, improving configuration handling.
- Updated various test cases to reflect changes in model paths and timeout settings, ensuring consistency and reliability in testing.
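The renamed `Inference` methods described above might look roughly like the sketch below. This is an illustration only: the `ModelStore`-style backend, the constructor, and all method bodies are assumptions, not the actual implementation from this commit.

```python
class Inference:
    """Sketch of the refactored interface (names taken from the commit message)."""

    def __init__(self, engine_name, model_store):
        self._engine_name = engine_name
        self._store = model_store  # hypothetical storage backend

    @property
    def engine_name(self):
        # New property giving direct access to the engine's name.
        return self._engine_name

    def download_model(self, filename):
        # Replaces get_onnx_engine_bytes: loads any model file by name,
        # not just a fixed ONNX engine blob.
        return self._store.fetch(filename)

    def convert_and_upload_model(self, source_bytes, target_name):
        # Now accepts generic source_bytes instead of onnx_engine_bytes.
        converted = self._convert(source_bytes)
        self._store.put(target_name, converted)
        return target_name

    def _convert(self, source_bytes):
        # Placeholder for the real conversion step.
        return source_bytes
```

The key point of the refactor is that callers pass a filename (or raw bytes) rather than assuming an ONNX-specific payload.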
Author:  Oleksandr Bezdieniezhnykh
Date:    2026-03-30 00:22:56 +03:00
Parent:  6269a7485c
Commit:  27f4aceb52
25 changed files with 40974 additions and 6172 deletions
+2 -3
@@ -4,9 +4,8 @@ from engines.inference_engine cimport InferenceEngine
cdef class CoreMLEngine(InferenceEngine):
    cdef object model
    cdef str input_name
    cdef tuple input_shape
    cdef list _output_names
    cdef int img_width
    cdef int img_height
    cdef tuple get_input_shape(self)
    cdef int get_batch_size(self)
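In pure-Python terms, an engine exposing the two accessors declared above might behave like the following sketch. The class names, constructor, and attribute wiring are assumptions for illustration; only `get_input_shape`, `get_batch_size`, and the field names come from the declaration.

```python
class InferenceEngineSketch:
    """Python analogue of the Cython InferenceEngine base interface."""

    def get_input_shape(self):
        raise NotImplementedError

    def get_batch_size(self):
        raise NotImplementedError


class CoreMLEngineSketch(InferenceEngineSketch):
    def __init__(self, input_shape):
        # input_shape mirrors the `cdef tuple input_shape` field,
        # assumed here to be (batch, channels, height, width).
        self.input_shape = input_shape
        self.img_height = input_shape[2]
        self.img_width = input_shape[3]

    def get_input_shape(self):
        return self.input_shape

    def get_batch_size(self):
        # Batch size is taken as the leading dimension of the input shape.
        return self.input_shape[0]
```

The (batch, channels, height, width) layout is a common convention but is not confirmed by the diff; the real engine may order dimensions differently.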