
# Test Infrastructure Task Template

Use this template for the test infrastructure bootstrap (Step 1t in tests-only mode). Save as `TASKS_DIR/01_test_infrastructure.md` initially, then rename to `TASKS_DIR/[TRACKER-ID]_test_infrastructure.md` once the work item ticket has been created.


# Test Infrastructure

**Task**: [TRACKER-ID]_test_infrastructure
**Name**: Test Infrastructure
**Description**: Scaffold the Blackbox test project — test runner, mock services, Docker test environment, test data fixtures, reporting
**Complexity**: [3|5] points
**Dependencies**: None
**Component**: Blackbox Tests
**Tracker**: [TASK-ID]
**Epic**: [EPIC-ID]

## Test Project Folder Layout

```
e2e/
├── conftest.py
├── requirements.txt
├── Dockerfile
├── mocks/
│   ├── [mock_service_1]/
│   │   ├── Dockerfile
│   │   └── [entrypoint file]
│   └── [mock_service_2]/
│       ├── Dockerfile
│       └── [entrypoint file]
├── fixtures/
│   └── [test data files]
├── tests/
│   ├── test_[category_1].py
│   ├── test_[category_2].py
│   └── ...
└── docker-compose.test.yml
```


### Layout Rationale

[Brief explanation of directory structure choices — framework conventions, separation of mocks from tests, fixture management]

## Mock Services

| Mock Service | Replaces | Endpoints | Behavior |
|-------------|----------|-----------|----------|
| [name] | [external service] | [endpoints it serves] | [response behavior, configurable via control API] |

### Mock Control API

Each mock service exposes a `POST /mock/config` endpoint for test-time behavior control (e.g., simulate downtime, inject errors). A `GET /mock/[resource]` endpoint returns recorded interactions for assertion.
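A minimal sketch of the mock-side state this control API implies. The class name `MockState` and the specific control fields (`fail_next`, `latency_ms`) are illustrative assumptions, not part of the template; the HTTP layer (Flask, FastAPI, or similar) that would expose `POST /mock/config` and `GET /mock/[resource]` is omitted.

```python
from dataclasses import dataclass, field


@dataclass
class MockState:
    """In-memory state behind POST /mock/config and GET /mock/[resource]."""
    fail_next: int = 0          # number of upcoming requests to fail (hypothetical knob)
    latency_ms: int = 0         # artificial delay to simulate slowness (hypothetical knob)
    recorded: list = field(default_factory=list)  # interactions, returned for assertions

    def configure(self, config: dict) -> None:
        """Apply a POST /mock/config payload."""
        self.fail_next = config.get("fail_next", self.fail_next)
        self.latency_ms = config.get("latency_ms", self.latency_ms)

    def handle(self, request: dict) -> dict:
        """Record the interaction, then answer deterministically."""
        self.recorded.append(request)
        if self.fail_next > 0:
            self.fail_next -= 1
            return {"status": 503, "body": "injected failure"}
        return {"status": 200, "body": f"echo:{request.get('path', '/')}"}

    def reset(self) -> None:
        """Restore default behavior between tests."""
        self.fail_next = 0
        self.latency_ms = 0
        self.recorded.clear()
```

Keeping all behavior in one state object makes the determinism requirement easy to honor: given the same configuration and the same request sequence, the responses are always the same.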

## Docker Test Environment

### docker-compose.test.yml Structure

| Service | Image / Build | Purpose | Depends On |
|---------|--------------|---------|------------|
| [system-under-test] | [build context] | Main system being tested | [mock services] |
| [mock-1] | [build context] | Mock for [external service] | — |
| [e2e-consumer] | [build from e2e/] | Test runner | [system-under-test] |
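A skeleton of what the table above maps to in `docker-compose.test.yml`. Service names follow the table; the build contexts, port, and `./results` mount are illustrative assumptions to be replaced per project:

```yaml
services:
  system-under-test:
    build: .                    # build context is project-specific
    depends_on: [mock-1]
    networks: [test-net]

  mock-1:
    build: ./e2e/mocks/mock_service_1
    networks: [test-net]

  e2e-consumer:
    build: ./e2e
    depends_on: [system-under-test]
    networks: [test-net]
    volumes:
      - ./results:/results      # report lands on the host

networks:
  test-net: {}                  # isolated network for the test run
```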

### Networks and Volumes

[Isolated test network, volume mounts for test data, model files, results output]

## Test Runner Configuration

**Framework**: [e.g., pytest]
**Plugins**: [e.g., pytest-csv, sseclient-py, requests]
**Entry point**: [e.g., pytest --csv=/results/report.csv]

### Fixture Strategy

| Fixture | Scope | Purpose |
|---------|-------|---------|
| [name] | [session/module/function] | [what it provides] |
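A `conftest.py` sketch showing how the fixture scopes in the table might be used. The fixture names, the service URL, and the `wait_until_ready` helper are hypothetical examples, not prescribed by the template:

```python
import time

import pytest


def wait_until_ready(probe, timeout_s: float = 30.0, interval_s: float = 0.5) -> bool:
    """Poll a zero-arg callable until it returns True or the timeout expires."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if probe():
            return True
        time.sleep(interval_s)
    return False


@pytest.fixture(scope="session")
def base_url():
    """Session-wide address of the system under test (compose service name)."""
    return "http://system-under-test:8000"


@pytest.fixture(autouse=True)
def reset_mocks():
    """Function-scoped: reset mock state before each test for isolation."""
    # e.g. POST {"fail_next": 0} to each mock's /mock/config endpoint
    yield
```

A session-scoped readiness check built on `wait_until_ready` keeps startup latency out of individual test timings.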

## Test Data Fixtures

| Data Set | Source | Format | Used By |
|----------|--------|--------|---------|
| [name] | [volume mount / generated / API seed] | [format] | [test categories] |

### Data Isolation

[Strategy: fresh containers per run, volume cleanup, mock state reset]

## Test Reporting

**Format**: [e.g., CSV]
**Columns**: [e.g., Test ID, Test Name, Execution Time (ms), Result, Error Message]
**Output path**: [e.g., /results/report.csv → mounted to host]
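In practice a plugin such as `pytest-csv` would emit this file; the sketch below just pins down the expected shape using the example columns above. The row values are made up for illustration:

```python
import csv
import io

# Column set from the template's example; adjust per project.
COLUMNS = ["Test ID", "Test Name", "Execution Time (ms)", "Result", "Error Message"]


def write_report(rows, stream) -> None:
    """Write test results as CSV with a header row."""
    writer = csv.DictWriter(stream, fieldnames=COLUMNS)
    writer.writeheader()
    for row in rows:
        writer.writerow(row)


# Example: one passing and one failing test (hypothetical data)
rows = [
    {"Test ID": "T-001", "Test Name": "health check", "Execution Time (ms)": 12,
     "Result": "PASS", "Error Message": ""},
    {"Test ID": "T-002", "Test Name": "order flow", "Execution Time (ms)": 840,
     "Result": "FAIL", "Error Message": "timeout waiting for event"},
]
buf = io.StringIO()
write_report(rows, buf)
```

In the container, `stream` would be a file opened at the configured output path (e.g. `/results/report.csv`), which the volume mount exposes to the host.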

## Acceptance Criteria

**AC-1: Test environment starts**
Given the docker-compose.test.yml
When `docker compose -f docker-compose.test.yml up` is executed
Then all services start and the system-under-test is reachable

**AC-2: Mock services respond**
Given the test environment is running
When the e2e-consumer sends requests to mock services
Then mock services respond with configured behavior

**AC-3: Test runner executes**
Given the test environment is running
When the e2e-consumer starts
Then the test runner discovers and executes test files

**AC-4: Test report generated**
Given tests have been executed
When the test run completes
Then a report file exists at the configured output path with correct columns

## Guidance Notes

- This is a PLAN document, not code. The /implement skill executes it.
- Focus on test infrastructure decisions, not individual test implementations.
- Reference environment.md and test-data.md from the test specs; don't repeat everything.
- Mock services must be deterministic: the same input always produces the same output.
- The Docker environment must be self-contained: `docker compose up` alone must be sufficient to run it.