loader/.cursor/skills/plan/templates/test-data.md
Oleksandr Bezdieniezhnykh b0a03d36d6 Add .cursor AI autodevelopment harness (agents, skills, rules)
2026-03-26 01:06:55 +02:00


Test Data Template

Save as `DOCUMENT_DIR/tests/test-data.md`.


# Test Data Management

## Seed Data Sets

| Data Set | Description | Used by Tests | How Loaded | Cleanup |
|----------|-------------|---------------|-----------|---------|
| [name] | [what it contains] | [test IDs] | [SQL script / API call / fixture file / volume mount] | [how removed after test] |
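The "How Loaded" and "Cleanup" columns can be backed by small helpers. Below is a minimal sketch using an in-memory SQLite database; the `baseline-users` seed set, the `users` table, and the test IDs are hypothetical examples, not part of this template.

```python
import sqlite3

def load_seed(conn, sql_script):
    # "How Loaded" column: apply a SQL script to seed the database.
    conn.executescript(sql_script)

def cleanup_seed(conn, tables):
    # "Cleanup" column: delete seeded rows so the next test starts clean.
    for table in tables:
        conn.execute(f"DELETE FROM {table}")

# Hypothetical 'baseline-users' seed set, used by tests T-001..T-003.
conn = sqlite3.connect(":memory:")
load_seed(conn, """
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    INSERT INTO users (name) VALUES ('alice'), ('bob');
""")
rows_seeded = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
cleanup_seed(conn, ["users"])
rows_left = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
```

Keeping load and cleanup as a matched pair makes each table row directly verifiable: seed, count, clean, count again.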

## Data Isolation Strategy

[e.g., each test run gets a fresh container restart, or transactions are rolled back, or namespaced data, or separate DB per test group]
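The "transactions are rolled back" strategy mentioned above can be sketched as follows. This is an illustrative example with SQLite; the `users` schema and `writes_a_row` test function are hypothetical.

```python
import sqlite3

def run_isolated(conn, test_fn):
    # Run test_fn inside an explicit transaction, then roll back,
    # so nothing the test writes can leak into other tests.
    conn.execute("BEGIN")
    try:
        return test_fn(conn)
    finally:
        conn.rollback()

conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # manage transactions explicitly
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

def writes_a_row(c):
    c.execute("INSERT INTO users (name) VALUES ('temp')")
    return c.execute("SELECT COUNT(*) FROM users").fetchone()[0]

seen_inside = run_isolated(conn, writes_a_row)  # row is visible inside the test
seen_after = conn.execute(
    "SELECT COUNT(*) FROM users"
).fetchone()[0]  # rolled back afterwards
```

The same shape applies to the other strategies (fresh container, namespaced data, DB per group): the test sees its own writes, and nothing survives the teardown.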

## Input Data Mapping

| Input Data File | Source Location | Description | Covers Scenarios |
|-----------------|----------------|-------------|-----------------|
| [filename] | `_docs/00_problem/input_data/[filename]` | [what it contains] | [test IDs that use this data] |

## Expected Results Mapping

| Test Scenario ID | Input Data | Expected Result | Comparison Method | Tolerance | Expected Result Source |
|-----------------|------------|-----------------|-------------------|-----------|----------------------|
| [test ID] | `input_data/[filename]` | [quantifiable expected output] | [exact / tolerance / pattern / threshold / file-diff] | [± value or N/A] | `input_data/expected_results/[filename]` or inline |
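The "Comparison Method" column can map onto one small dispatcher, so every row in the table is checked the same way. A minimal sketch, assuming three of the listed methods (exact, tolerance, pattern); the helper name and inputs are illustrative, not a prescribed API.

```python
import math
import re

def compare(actual, expected, method, tolerance=None):
    # One checker per "Comparison Method" value in the mapping table.
    if method == "exact":
        return actual == expected
    if method == "tolerance":
        # Numeric comparison within the "Tolerance" column's ± value.
        return math.isclose(actual, expected, abs_tol=tolerance)
    if method == "pattern":
        # Expected result given as a regular expression.
        return re.fullmatch(expected, actual) is not None
    raise ValueError(f"unknown comparison method: {method}")

compare(3.1416, 3.14159, "tolerance", tolerance=0.001)  # True
compare("run-42-ok", r"run-\d+-ok", "pattern")          # True
```

Threshold and file-diff methods extend the dispatcher the same way, which keeps "quantifiable expected result" an enforceable rule rather than a convention.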

## External Dependency Mocks

| External Service | Mock/Stub | How Provided | Behavior |
|-----------------|-----------|-------------|----------|
| [service name] | [mock type] | [Docker service / in-process stub / recorded responses] | [what it returns / simulates] |
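An in-process stub with recorded responses, one of the "How Provided" options, can be as simple as a lookup table, which makes determinism trivial to guarantee. The service path and payload below are hypothetical.

```python
class RecordedResponseStub:
    # Replays canned responses: the same input always yields the same output,
    # which is exactly the determinism requirement for external mocks.
    def __init__(self, responses):
        self._responses = responses  # request key -> canned response

    def get(self, path):
        try:
            return self._responses[path]
        except KeyError:
            # Fail loudly instead of inventing a response.
            raise LookupError(f"no recorded response for {path!r}") from None

# Hypothetical currency-rates service used by the system under test.
stub = RecordedResponseStub({"/rates/USD": {"rate": 1.0}})
stub.get("/rates/USD")  # always returns {"rate": 1.0}
```

Raising on unrecorded inputs is a deliberate choice: a silent default would hide gaps in the recorded behavior and make test outcomes depend on the mock rather than the system.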

## Data Validation Rules

| Data Type | Validation | Invalid Examples | Expected System Behavior |
|-----------|-----------|-----------------|------------------------|
| [type] | [rules] | [invalid input examples] | [how system should respond] |
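Each row of the table above translates into a validator plus its invalid examples. A sketch for a hypothetical email row (the regex and examples are illustrative, not a normative email grammar):

```python
import re

def validate_email(value):
    # Hypothetical validator for one "Data Type" row in the table.
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", value):
        raise ValueError(f"invalid email: {value!r}")
    return value

# "Invalid Examples" column: each entry must be rejected.
invalid_examples = ["no-at-sign", "a@b", "@missing-local.com"]
for bad in invalid_examples:
    try:
        validate_email(bad)
        rejected = False
    except ValueError:
        rejected = True
    assert rejected, f"system accepted invalid input: {bad!r}"
```

Driving the test loop directly from the table's "Invalid Examples" column keeps the document and the test suite from drifting apart.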

Guidance Notes

- Every seed data set should be traceable to specific test scenarios.
- Input data from `_docs/00_problem/input_data/` should be mapped to the test scenarios that use it.
- Every input data item MUST have a corresponding expected result in the Expected Results Mapping table.
- Expected results MUST be quantifiable: exact values, numeric tolerances, pattern matches, thresholds, or reference files. "Works correctly" is never acceptable.
- For complex expected outputs, provide machine-readable reference files (JSON, CSV) in `_docs/00_problem/input_data/expected_results/` and reference them in the mapping.
- External mocks must be deterministic: the same input always produces the same output.
- Data isolation must guarantee that no test can affect another test's outcome.
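The machine-readable reference files mentioned in the notes can be checked with a few lines. A sketch, assuming JSON references; the `check_against_reference` helper and the payload are hypothetical, and a throwaway temp file stands in for a file under `_docs/00_problem/input_data/expected_results/`.

```python
import json
import tempfile
from pathlib import Path

def check_against_reference(actual, reference_path):
    # Load the expected-results file and compare structurally.
    expected = json.loads(Path(reference_path).read_text())
    return actual == expected

# Throwaway file standing in for a reference under expected_results/.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump({"total": 120.5, "currency": "USD"}, f)
    ref = f.name

check_against_reference({"total": 120.5, "currency": "USD"}, ref)  # True
```

Because the reference is a plain file, the same artifact serves both the documentation table and the automated comparison.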