Test Infrastructure
- Task: AZ-152_test_infrastructure
- Name: Test Infrastructure
- Description: Scaffold the test project — pytest configuration, fixtures, conftest, test data management, Docker test environment
- Complexity: 3 points
- Dependencies: None
- Component: Blackbox Tests
- Jira: AZ-152
- Epic: AZ-151
Test Project Folder Layout
tests/
├── conftest.py
├── test_augmentation.py
├── test_dataset_formation.py
├── test_label_validation.py
├── test_encryption.py
├── test_model_split.py
├── test_annotation_classes.py
├── test_hardware_hash.py
├── test_onnx_inference.py
├── test_nms.py
├── test_annotation_queue.py
├── performance/
│ ├── conftest.py
│ ├── test_augmentation_perf.py
│ ├── test_dataset_perf.py
│ ├── test_encryption_perf.py
│ └── test_inference_perf.py
├── resilience/
│ └── (resilience tests embedded in main test files via markers)
├── security/
│ └── (security tests embedded in main test files via markers)
└── resource_limits/
└── (resource limit tests embedded in main test files via markers)
Layout Rationale
Flat test file structure per functional area matches the existing codebase module layout. Performance tests are separated into a subdirectory so they can be run independently (slower, threshold-based). Resilience, security, and resource limit tests use pytest markers (@pytest.mark.resilience, @pytest.mark.security, @pytest.mark.resource_limit) within the main test files to avoid unnecessary file proliferation while allowing selective execution.
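As a hedged sketch of the marker approach described above, a resilience scenario might live in a regular test file like this (the test name and body are illustrative placeholders, not from the task spec; markers would also be registered in pytest.ini or pyproject.toml to avoid warnings):

```python
import pytest

# Registration (in pytest.ini / pyproject.toml), e.g.:
#   markers =
#       resilience: fault-injection scenarios
#       security: security-focused scenarios
#       resource_limit: resource-ceiling scenarios

@pytest.mark.resilience
def test_queue_parser_survives_truncated_message():
    # Illustrative placeholder: a resilience scenario embedded in a
    # main test file instead of a separate resilience/ test module.
    assert True

# Selective execution from the shell:
#   pytest -m resilience          # run only resilience tests
#   pytest -m "not resilience"    # skip them
```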
Mock Services
No mock services required. All 55 test scenarios operate offline against local code modules. External services (Azaion API, S3 CDN, RabbitMQ Streams, TensorRT) are excluded from the test scope per user decision.
Docker Test Environment
docker-compose.test.yml Structure
| Service | Image / Build | Purpose | Depends On |
|---|---|---|---|
| test-runner | Build from Dockerfile.test | Runs pytest suite | — |
Single-container setup: the system under test is a Python library (not a service), so tests import modules directly. No network services required.
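Under these assumptions, a minimal docker-compose.test.yml could look like the following sketch (the service name and mounts mirror the tables in this section; the exact file contents are illustrative, not confirmed project code):

```yaml
services:
  test-runner:
    build:
      context: .
      dockerfile: Dockerfile.test
    volumes:
      - ./test-results:/app/test-results
      - ./_docs/00_problem/input_data:/app/_docs/00_problem/input_data:ro
    command: >
      pytest tests/ --junitxml=/app/test-results/test-results.xml
```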
Volumes
| Volume Mount | Purpose |
|---|---|
| ./test-results:/app/test-results | JUnit XML output for CI parsing |
| ./_docs/00_problem/input_data:/app/_docs/00_problem/input_data:ro | Fixture images, labels, ONNX model (read-only) |
Test Runner Configuration
- Framework: pytest
- Plugins: none required (JUnit XML output is built into pytest via --junitxml)
- Entry point (local): scripts/run-tests-local.sh
- Entry point (Docker): docker compose -f docker-compose.test.yml up --build --abort-on-container-exit
Fixture Strategy
| Fixture | Scope | Purpose |
|---|---|---|
| fixture_images_dir | session | Path to 100 JPEG images from _docs/00_problem/input_data/dataset/images/ |
| fixture_labels_dir | session | Path to 100 YOLO labels from _docs/00_problem/input_data/dataset/labels/ |
| fixture_onnx_model | session | Bytes of _docs/00_problem/input_data/azaion.onnx |
| fixture_classes_json | session | Path to classes.json |
| work_dir | function | tmp_path-based working directory for filesystem tests |
| sample_image_label | function | Copies 1 image + label to tmp_path |
| sample_images_labels | function | Copies N images + labels to tmp_path (parameterizable) |
| corrupted_label | function | Generates a label file with coords > 1.0 in tmp_path |
| edge_bbox_label | function | Generates a label with a bbox near the image edge in tmp_path |
| empty_label | function | Generates an empty label file in tmp_path |
Test Data Fixtures
| Data Set | Source | Format | Used By |
|---|---|---|---|
| 100 annotated images | _docs/00_problem/input_data/dataset/images/ | JPEG | Augmentation, dataset formation, inference |
| 100 YOLO labels | _docs/00_problem/input_data/dataset/labels/ | TXT | Augmentation, dataset formation, label validation |
| ONNX model (77 MB) | _docs/00_problem/input_data/azaion.onnx | ONNX | Encryption roundtrip, inference |
| Class definitions | classes.json (project root) | JSON | Annotation class loading, YAML generation |
| Corrupted labels | Generated at test time | TXT | Label validation, dataset formation |
| Edge-case bboxes | Generated at test time | In-memory | Augmentation bbox correction |
| Detection objects | Generated at test time | In-memory | NMS overlap removal |
| Msgpack messages | Generated at test time | bytes | Annotation queue parsing |
| Random binary data | Generated at test time (os.urandom) | bytes | Encryption tests |
| Empty label files | Generated at test time | TXT | Augmentation edge case |
Data Isolation
Each test function receives an isolated tmp_path directory. Fixture files are copied (not symlinked) to tmp_path to prevent cross-test interference. Session-scoped fixtures (image dir, model bytes) are read-only references. No test modifies the source fixture data.
Test Reporting
- Format: JUnit XML
- Output path: test-results/test-results.xml (local), /app/test-results/test-results.xml (Docker)
- CI integration: standard JUnit XML parseable by GitHub Actions, Azure Pipelines, GitLab CI
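To sanity-check what CI parsers consume, the report can be read with the standard library alone; a sketch (the sample XML below is fabricated for illustration, mimicking pytest's testsuites/testsuite layout):

```python
import xml.etree.ElementTree as ET

SAMPLE = """<?xml version="1.0"?>
<testsuites>
  <testsuite name="pytest" tests="3" failures="1" errors="0" skipped="0">
    <testcase classname="test_nms" name="test_overlap_removed"/>
    <testcase classname="test_nms" name="test_threshold">
      <failure message="IoU mismatch"/>
    </testcase>
    <testcase classname="test_encryption" name="test_roundtrip"/>
  </testsuite>
</testsuites>"""

def summarize(xml_text: str) -> dict:
    """Extract the suite-level counters a CI system reads from JUnit XML."""
    root = ET.fromstring(xml_text)
    suite = root.find("testsuite")
    return {k: int(suite.get(k)) for k in ("tests", "failures", "errors", "skipped")}
```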
Constants Patching Strategy
The production code uses hardcoded paths from constants.py (e.g., /azaion/data/). Tests must override these paths to point to tmp_path directories. Strategy: use monkeypatch or unittest.mock.patch to override constants.* module attributes at test function scope.
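A hedged sketch of the patching approach (the constants module here is a stand-in built with types.ModuleType, and DATA_DIR is an assumed attribute name; in a real test, pytest's monkeypatch.setattr and the tmp_path fixture would replace mock.patch.object and the hardcoded temp path):

```python
import types
from pathlib import Path
from unittest import mock

# Stand-in for the project's constants module; the real code would do
# `import constants`. DATA_DIR is an assumed attribute name.
constants = types.ModuleType("constants")
constants.DATA_DIR = Path("/azaion/data")

def output_path(name: str) -> Path:
    # Production-style code that reads the constant at call time,
    # which is what makes attribute patching effective.
    return constants.DATA_DIR / name

def test_writes_under_tmp_path(tmp_path: Path = Path("/tmp/example")) -> None:
    with mock.patch.object(constants, "DATA_DIR", tmp_path):
        assert output_path("labels.txt") == tmp_path / "labels.txt"
    # The patch is reverted on context exit.
    assert constants.DATA_DIR == Path("/azaion/data")
```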
Acceptance Criteria
AC-1: Local test runner works
Given requirements-test.txt is installed
When scripts/run-tests-local.sh is executed
Then pytest discovers and runs tests, produces JUnit XML in test-results/
AC-2: Docker test runner works
Given Dockerfile.test and docker-compose.test.yml exist
When docker compose -f docker-compose.test.yml up --build is executed
Then test-runner container runs all tests, JUnit XML is written to mounted test-results/ volume
AC-3: Fixtures provide test data
Given conftest.py defines session and function-scoped fixtures
When a test requests fixture_images_dir
Then it receives a valid path to 100 JPEG images
AC-4: Constants are properly patched
Given a test patches constants.data_dir to tmp_path
When the test runs augmentation or dataset formation
Then all file operations target tmp_path, not /azaion/