# Test Infrastructure Task Template

Use this template for the test infrastructure bootstrap (Step 1t in tests-only mode). Save it as `TASKS_DIR/01_test_infrastructure.md` initially, then rename it to `TASKS_DIR/[JIRA-ID]_test_infrastructure.md` after Jira ticket creation.

---

```markdown
# Test Infrastructure

**Task**: [JIRA-ID]_test_infrastructure
**Name**: Test Infrastructure
**Description**: Scaffold the Blackbox test project — test runner, mock services, Docker test environment, test data fixtures, reporting
**Complexity**: [3|5] points
**Dependencies**: None
**Component**: Blackbox Tests
**Jira**: [TASK-ID]
**Epic**: [EPIC-ID]

## Test Project Folder Layout

```
e2e/
├── conftest.py
├── requirements.txt
├── Dockerfile
├── mocks/
│   ├── [mock_service_1]/
│   │   ├── Dockerfile
│   │   └── [entrypoint file]
│   └── [mock_service_2]/
│       ├── Dockerfile
│       └── [entrypoint file]
├── fixtures/
│   └── [test data files]
├── tests/
│   ├── test_[category_1].py
│   ├── test_[category_2].py
│   └── ...
└── docker-compose.test.yml
```

### Layout Rationale

[Brief explanation of directory structure choices — framework conventions, separation of mocks from tests, fixture management]

## Mock Services

| Mock Service | Replaces | Endpoints | Behavior |
|--------------|----------|-----------|----------|
| [name] | [external service] | [endpoints it serves] | [response behavior, configurable via control API] |

### Mock Control API

Each mock service exposes a `POST /mock/config` endpoint for test-time behavior control (e.g., simulate downtime, inject errors). A `GET /mock/[resource]` endpoint returns recorded interactions for assertion.
## Docker Test Environment

### docker-compose.test.yml Structure

| Service | Image / Build | Purpose | Depends On |
|---------|---------------|---------|------------|
| [system-under-test] | [build context] | Main system being tested | [mock services] |
| [mock-1] | [build context] | Mock for [external service] | — |
| [e2e-consumer] | [build from e2e/] | Test runner | [system-under-test] |

### Networks and Volumes

[Isolated test network, volume mounts for test data, model files, results output]

## Test Runner Configuration

**Framework**: [e.g., pytest]
**Plugins**: [e.g., pytest-csv, sseclient-py, requests]
**Entry point**: [e.g., pytest --csv=/results/report.csv]

### Fixture Strategy

| Fixture | Scope | Purpose |
|---------|-------|---------|
| [name] | [session/module/function] | [what it provides] |

## Test Data Fixtures

| Data Set | Source | Format | Used By |
|----------|--------|--------|---------|
| [name] | [volume mount / generated / API seed] | [format] | [test categories] |

### Data Isolation

[Strategy: fresh containers per run, volume cleanup, mock state reset]

## Test Reporting

**Format**: [e.g., CSV]
**Columns**: [e.g., Test ID, Test Name, Execution Time (ms), Result, Error Message]
**Output path**: [e.g., /results/report.csv → mounted to host]

## Acceptance Criteria

**AC-1: Test environment starts**
Given the docker-compose.test.yml
When `docker compose -f docker-compose.test.yml up` is executed
Then all services start and the system-under-test is reachable

**AC-2: Mock services respond**
Given the test environment is running
When the e2e-consumer sends requests to mock services
Then mock services respond with configured behavior

**AC-3: Test runner executes**
Given the test environment is running
When the e2e-consumer starts
Then the test runner discovers and executes test files

**AC-4: Test report generated**
Given tests have been executed
When the test run completes
Then a report file exists at the configured output path with correct columns
```
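The mock control API described in the template can be sketched with only the Python standard library. This is a minimal illustration, not a prescribed implementation: the `/mock/requests` path, the `MockHandler`/`start_mock` names, and the response shapes are assumptions (the template leaves the recorded-interactions endpoint as `GET /mock/[resource]`).

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer


class MockHandler(BaseHTTPRequestHandler):
    # Shared, deterministic state: the behavior config set by tests and the
    # list of recorded business interactions. Tests reset it via POST /mock/config.
    config = {"fail": False}
    recorded = []

    def do_POST(self):
        if self.path == "/mock/config":
            # Control endpoint: replace the behavior config for this mock.
            length = int(self.headers["Content-Length"])
            MockHandler.config = json.loads(self.rfile.read(length))
            self._reply(200, {"status": "ok"})
        else:
            # Business endpoint: record the interaction, then honor the config.
            MockHandler.recorded.append(self.path)
            if MockHandler.config.get("fail"):
                self._reply(503, {"error": "simulated downtime"})
            else:
                self._reply(200, {"echo": self.path})

    def do_GET(self):
        if self.path == "/mock/requests":  # assumed name for GET /mock/[resource]
            self._reply(200, {"recorded": MockHandler.recorded})
        else:
            self._reply(404, {"error": "not found"})

    def _reply(self, status, body):
        payload = json.dumps(body).encode()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):
        pass  # keep test output quiet


def start_mock(port=0):
    """Start the mock on an ephemeral port; returns the server object."""
    server = HTTPServer(("127.0.0.1", port), MockHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

In a test, the e2e-consumer would first `POST {"fail": true}` to `/mock/config` to simulate downtime, exercise the system-under-test, then assert on the interactions returned by the recorded-requests endpoint.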
---

## Guidance Notes

- This is a PLAN document, not code. The `/implement` skill executes it.
- Focus on test infrastructure decisions, not individual test implementations.
- Reference environment.md and test-data.md from the test specs — don't repeat everything.
- Mock services must be deterministic: the same input always produces the same output.
- The Docker environment must be self-contained: `docker compose up` must be sufficient to run everything.
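As a sketch of the self-contained environment the last note calls for, a minimal `docker-compose.test.yml` might look like the following. The service names (`mock-payment`), build contexts, and environment variable are placeholder assumptions; only the three-service shape (system-under-test, mocks, e2e-consumer) and the `pytest --csv` entry point come from the template.

```yaml
networks:
  test-net:                 # isolated network so tests never touch real services

services:
  system-under-test:
    build: ..               # assumed: the repo root, one level above e2e/
    networks: [test-net]
    depends_on: [mock-payment]
    environment:
      PAYMENT_URL: http://mock-payment:8080   # point the SUT at the mock

  mock-payment:             # hypothetical mock for an external payment service
    build: ./mocks/mock-payment
    networks: [test-net]

  e2e-consumer:             # test runner, built from e2e/
    build: .
    networks: [test-net]
    depends_on: [system-under-test]
    command: pytest --csv=/results/report.csv
    volumes:
      - ./results:/results  # report lands on the host after the run
```

With this shape, `docker compose -f docker-compose.test.yml up` alone brings up the mocks, the system-under-test, and the runner, satisfying AC-1 through AC-4 without any host-side setup.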