mirror of
https://github.com/azaion/ai-training.git
synced 2026-04-23 04:26:35 +00:00
Refactor constants management to use Pydantic BaseModel for configuration
- Replaced module-level path variables in constants.py with a structured Pydantic Config class.
- Updated all relevant modules (train.py, augmentation.py, exports.py, dataset-visualiser.py, manual_run.py) to access paths through the new config structure.
- Fixed bugs related to image processing and model saving.
- Enhanced test infrastructure to accommodate the new configuration approach.

This refactor improves code maintainability and clarity by centralizing configuration management.
# ONNX Inference Tests

**Task**: AZ-161_test_onnx_inference

**Name**: ONNX Inference Tests

**Description**: Implement 4 tests for ONNX model loading, inference execution, postprocessing, and CPU latency

**Complexity**: 3 points

**Dependencies**: AZ-152_test_infrastructure

**Component**: Blackbox Tests

**Jira**: AZ-161

**Epic**: AZ-151

## Problem

The ONNX inference engine loads a model, runs detection on images, and postprocesses results. Tests must verify the full pipeline works on CPU (smoke test — no precision validation).

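The data shapes involved can be made concrete with a small numpy sketch. `preprocess` and `postprocess` below are hypothetical stand-ins for the project's engine and `Inference.postprocess()` code, shown only to illustrate the 1280×1280 input blob, the `[batch, N, 6+]` raw output, and the normalized detection fields the tests assert on; they are not the repository's implementation.

```python
import numpy as np

INPUT_SIZE = 1280  # model input is 1280x1280 per the constraints

def preprocess(image: np.ndarray) -> np.ndarray:
    """Nearest-neighbour resize to 1280x1280, scale to [0,1], HWC -> NCHW."""
    h, w, _ = image.shape
    ys = (np.arange(INPUT_SIZE) * h // INPUT_SIZE).clip(0, h - 1)
    xs = (np.arange(INPUT_SIZE) * w // INPUT_SIZE).clip(0, w - 1)
    resized = image[ys][:, xs].astype(np.float32) / 255.0
    return resized.transpose(2, 0, 1)[None, ...]  # shape (1, 3, 1280, 1280)

def postprocess(raw: np.ndarray, conf_threshold: float = 0.25):
    """Turn a [batch, N, 6] array (x, y, w, h, conf, cls) into detection dicts."""
    detections = []
    for x, y, w, h, conf, cls in raw[0]:
        if conf < conf_threshold:
            continue
        detections.append({
            "x": float(np.clip(x, 0.0, 1.0)),
            "y": float(np.clip(y, 0.0, 1.0)),
            "w": float(np.clip(w, 0.0, 1.0)),
            "h": float(np.clip(h, 0.0, 1.0)),
            "cls": int(cls),          # COCO-style class id, 0..79
            "confidence": float(conf),
        })
    return detections

# Fake raw output standing in for the first array of engine.run(input_blob)
blob = preprocess(np.zeros((720, 1280, 3), dtype=np.uint8))
raw = np.array([[[0.5, 0.5, 0.2, 0.3, 0.9, 17.0],
                 [0.1, 0.1, 0.05, 0.05, 0.1, 3.0]]], dtype=np.float32)
dets = postprocess(raw)  # low-confidence row is filtered out
```

The confidence threshold of 0.25 is an illustrative value, not one taken from the repository.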
## Outcome

- 4 passing pytest tests
- Blackbox tests in `tests/test_onnx_inference.py`
- Performance test in `tests/performance/test_inference_perf.py`

## Scope

### Included

- BT-INF-01: Model loads successfully (no exception, valid engine)
- BT-INF-02: Inference returns output (array shape [batch, N, 6+])
- BT-INF-03: Postprocessing returns valid detections (x,y,w,h ∈ [0,1], cls ∈ [0,79], conf ∈ [0,1])
- PT-INF-01: ONNX inference latency (single image ≤ 10s on CPU)

### Excluded

- TensorRT inference (requires NVIDIA GPU)
- Detection precision/recall validation (smoke-only per user decision)

## Acceptance Criteria

**AC-1: Model loads**

Given azaion.onnx bytes

When OnnxEngine(model_bytes) is constructed

Then no exception; engine has valid input_shape and batch_size

**AC-2: Inference output**

Given ONNX engine + 1 preprocessed image

When engine.run(input_blob) is called

Then returns list of numpy arrays; first array has shape [batch, N, 6+]

**AC-3: Valid detections**

Given ONNX engine output from real image

When Inference.postprocess() is called

Then returns list of Detection objects; each has x,y,w,h ∈ [0,1], cls ∈ [0,79], confidence ∈ [0,1]

**AC-4: CPU latency**

Given 1 preprocessed image + ONNX model

When single inference runs

Then completes within 10 seconds

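AC-1 and AC-2 translate into test shape roughly as sketched below. Since `OnnxEngine` itself is not available here, `FakeEngine` is a hypothetical stand-in exposing the same surface the criteria describe (`input_shape`, `batch_size`, `run()` returning a list of numpy arrays); the real tests would construct `OnnxEngine(model_bytes)` from the session fixture instead.

```python
import numpy as np

class FakeEngine:
    """Hypothetical stand-in for OnnxEngine, with the surface the ACs describe."""
    def __init__(self, model_bytes: bytes):
        if not model_bytes:
            raise ValueError("empty model bytes")
        self.input_shape = (1, 3, 1280, 1280)
        self.batch_size = 1

    def run(self, blob: np.ndarray):
        # Raw YOLO-style output: [batch, N, 6] = (x, y, w, h, conf, cls)
        out = np.zeros((self.batch_size, 5, 6), dtype=np.float32)
        out[..., 4] = 0.5  # fixed confidence for the sketch
        return [out]

def test_model_loads():  # AC-1
    engine = FakeEngine(b"\x08onnx-bytes")
    assert engine.input_shape is not None
    assert engine.batch_size >= 1

def test_inference_output():  # AC-2
    engine = FakeEngine(b"\x08onnx-bytes")
    outputs = engine.run(np.zeros(engine.input_shape, dtype=np.float32))
    assert isinstance(outputs, list)
    assert outputs[0].shape[0] == engine.batch_size
    assert outputs[0].shape[2] >= 6

test_model_loads()
test_inference_output()
```

Under pytest the final two calls would be unnecessary; they are included so the sketch runs standalone.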
## Constraints

- Uses onnxruntime (CPU), not onnxruntime-gpu
- ONNX model is 77 MB, loaded once (session-scoped fixture)
- Image preprocessing must match the model input size (1280×1280)
- Performance test marked `@pytest.mark.performance`
- This is a smoke test — validates structure, not detection accuracy
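The latency check (PT-INF-01 / AC-4) can be sketched as a timed single call; in the real suite this would live in `tests/performance/test_inference_perf.py` under `@pytest.mark.performance` and receive the session-scoped engine. `run_once` and the stand-in callable below are illustrative, not project code.

```python
import time
import numpy as np

LATENCY_BUDGET_S = 10.0  # PT-INF-01: one CPU inference must finish within 10 s

def run_once(engine_run, blob):
    """Time a single inference call; returns (outputs, elapsed seconds)."""
    start = time.perf_counter()
    outputs = engine_run(blob)
    return outputs, time.perf_counter() - start

# Stand-in for engine.run; the real test would pass the session-scoped engine.
fake_run = lambda blob: [np.zeros((1, 5, 6), dtype=np.float32)]
blob = np.zeros((1, 3, 1280, 1280), dtype=np.float32)
outputs, elapsed = run_once(fake_run, blob)
assert elapsed <= LATENCY_BUDGET_S
```

Using `time.perf_counter()` rather than `time.time()` avoids wall-clock adjustments skewing the measurement.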