Performance Test Scenarios
PT-AUG-01: Augmentation throughput
- Input: 10 images from fixture dataset
- Action: Run augment_annotations(), measure wall time
- Expected: Completes within 60 seconds (10 images × 8 outputs = 80 files)
- Traces: Restriction: Augmentation runs continuously
- Note: Threshold is generous; actual performance depends on CPU
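
A minimal pytest sketch of this scenario. The fixture path and the signature of augment_annotations() (input directory in, output directory out, flat output layout) are illustrative assumptions, not the project's actual API.

```python
# Hypothetical sketch of PT-AUG-01: fixture path and the signature of
# augment_annotations() are assumptions for illustration.
import time
from pathlib import Path

from augmentation import augment_annotations  # module named in the repo

FIXTURE_DIR = Path("tests/fixtures/images")  # assumed: holds the 10 images


def test_augmentation_throughput(tmp_path):
    start = time.perf_counter()
    augment_annotations(FIXTURE_DIR, tmp_path)  # 10 images -> 8 outputs each
    elapsed = time.perf_counter() - start

    produced = list(tmp_path.iterdir())
    assert len(produced) == 80, f"expected 80 files, got {len(produced)}"
    assert elapsed < 60.0, f"took {elapsed:.1f}s, budget is 60s"
```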
PT-AUG-02: Parallel augmentation speedup
- Input: 10 images from fixture dataset
- Action: Run with ThreadPoolExecutor vs sequential, compare times
- Expected: Parallel is ≥ 1.5× faster than sequential
- Traces: AC: Parallelized per-image processing
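
A sketch of the comparison, with process_image standing in for whatever per-image work the augmentation does; both the worker and the fixture list are hypothetical pytest fixtures. ThreadPoolExecutor only yields a real speedup when that work releases the GIL (image I/O, NumPy/PIL operations), which is what this test implicitly verifies.

```python
# Sketch of PT-AUG-02: process_image is a hypothetical stand-in for the
# per-image augmentation work; the timing comparison is the point here.
import time
from concurrent.futures import ThreadPoolExecutor


def timed_sequential(images, process_image):
    start = time.perf_counter()
    for image in images:
        process_image(image)
    return time.perf_counter() - start


def timed_parallel(images, process_image, workers=4):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(process_image, images))  # drain to wait for completion
    return time.perf_counter() - start


def test_parallel_speedup(fixture_images, process_image):
    t_seq = timed_sequential(fixture_images, process_image)
    t_par = timed_parallel(fixture_images, process_image)
    speedup = t_seq / t_par
    assert speedup >= 1.5, f"speedup {speedup:.2f}x is below the 1.5x bar"
```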
PT-DSF-01: Dataset formation throughput
- Input: 100 images + labels
- Action: Run form_dataset(), measure wall time
- Expected: Completes within 30 seconds
- Traces: Restriction: Dataset formation before training
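
A sketch under the assumption that form_dataset() takes a source directory holding the 100 images and labels plus a destination; the import location and signature are placeholders, since only the function name appears in the scenario.

```python
# Sketch of PT-DSF-01: form_dataset() is named in the scenario, but its
# module and signature here are assumptions.
import time
from pathlib import Path

from dataset import form_dataset  # hypothetical import location

SOURCE_DIR = Path("tests/fixtures/dataset_100")  # assumed: 100 images + labels


def test_dataset_formation_throughput(tmp_path):
    start = time.perf_counter()
    form_dataset(SOURCE_DIR, tmp_path)
    elapsed = time.perf_counter() - start
    assert elapsed < 30.0, f"took {elapsed:.1f}s, budget is 30s"
```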
PT-ENC-01: Encryption throughput
- Input: 10 MB of random bytes
- Action: Encrypt + decrypt roundtrip, measure wall time
- Expected: Completes within 5 seconds
- Traces: AC: Model encryption feasible for large models
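
The project's actual encryption routine isn't shown in this document, so the sketch below uses Fernet from the cryptography package as a stand-in cipher to demonstrate the roundtrip-and-time pattern; swap in the real encrypt/decrypt pair when wiring this up.

```python
# Sketch of PT-ENC-01 using Fernet as a stand-in for the project's cipher.
import os
import time

from cryptography.fernet import Fernet


def test_encryption_roundtrip_throughput():
    payload = os.urandom(10 * 1024 * 1024)  # 10 MB of random bytes
    cipher = Fernet(Fernet.generate_key())

    start = time.perf_counter()
    decrypted = cipher.decrypt(cipher.encrypt(payload))
    elapsed = time.perf_counter() - start

    assert decrypted == payload, "roundtrip corrupted the payload"
    assert elapsed < 5.0, f"roundtrip took {elapsed:.1f}s, budget is 5s"
```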
PT-INF-01: ONNX inference latency (single image)
- Input: 1 preprocessed image + ONNX model
- Action: Run single inference, measure wall time
- Expected: Completes within 10 seconds on CPU (no GPU requirement for test)
- Traces: AC: Inference capability
- Note: Production uses GPU; CPU is slower but validates correctness
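
A sketch using onnxruntime's CPU provider. The model path and the 1×3×640×640 input shape are assumptions; in practice, read the real shape from the exported model. Note the single timed run also absorbs session warm-up, which the generous 10-second CPU budget is meant to cover.

```python
# Sketch of PT-INF-01: model path and input shape are assumptions; read the
# real shape from session.get_inputs() when adapting this.
import time

import numpy as np
import onnxruntime as ort


def test_onnx_single_image_latency():
    session = ort.InferenceSession(
        "tests/fixtures/model.onnx",          # hypothetical fixture model
        providers=["CPUExecutionProvider"],   # CPU only, per the scenario
    )
    input_name = session.get_inputs()[0].name
    image = np.random.rand(1, 3, 640, 640).astype(np.float32)  # assumed shape

    start = time.perf_counter()
    session.run(None, {input_name: image})
    elapsed = time.perf_counter() - start

    assert elapsed < 10.0, f"inference took {elapsed:.1f}s, budget is 10s"
```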