admin/.cursor/skills/plan/templates/test-spec.md
Oleksandr Bezdieniezhnykh d96971b050 Update .gitignore to include .env and .DS_Store files
Add .cursor autodevelopment system
2026-03-25 17:41:10 +02:00


# Test Specification Template

Use this template for each component's test spec. Save as `components/[##]_[name]/tests.md`.


# Test Specification — [Component Name]

## Acceptance Criteria Traceability

| AC ID | Acceptance Criterion | Test IDs | Coverage |
|-------|---------------------|----------|----------|
| AC-01 | [criterion from acceptance_criteria.md] | IT-01, AT-01 | Covered |
| AC-02 | [criterion] | PT-01 | Covered |
| AC-03 | [criterion] | — | NOT COVERED — [reason] |
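Because every row in this matrix has a fixed shape, coverage can be checked mechanically, e.g. in CI. A minimal sketch (the `uncovered_acs` helper and its parsing are illustrative, not part of any existing tooling):

```python
# Sketch: flag acceptance criteria whose "Test IDs" cell is the
# em-dash placeholder. Assumes the traceability table format above.

def uncovered_acs(table_markdown: str) -> list[str]:
    """Return AC IDs whose Test IDs column is empty or a dash."""
    uncovered = []
    for line in table_markdown.strip().splitlines():
        cells = [c.strip() for c in line.strip("|").split("|")]
        if len(cells) >= 3 and cells[0].startswith("AC-"):
            if cells[2] in ("", "—", "-"):
                uncovered.append(cells[0])
    return uncovered
```

Running this over the matrix in a pre-merge hook makes a NOT COVERED row impossible to miss.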

---

## Blackbox Tests

### IT-01: [Test Name]

**Summary**: [One sentence: what this test verifies]

**Traces to**: AC-01, AC-03

**Description**: [Detailed test scenario]

**Input data**:

[specific input data for this test]


**Expected result**:

[specific expected output or state]


**Max execution time**: [e.g., 5s]

**Dependencies**: [other components/services that must be running]
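A blackbox test written in this format maps directly onto an automated test. A hedged sketch, where `component_under_test`, the input data, the expected result, and the 5 s budget are placeholders for the values filled into the template:

```python
# Sketch of IT-01 as an automated blackbox test.
# component_under_test stands in for the real component boundary.
import time

MAX_EXECUTION_TIME_S = 5.0  # "Max execution time" from the spec

def component_under_test(data):
    # Placeholder behavior so the sketch is runnable.
    return sorted(data)

def test_it_01():
    start = time.monotonic()
    result = component_under_test([3, 1, 2])   # "Input data"
    elapsed = time.monotonic() - start
    assert result == [1, 2, 3]                 # "Expected result"
    assert elapsed <= MAX_EXECUTION_TIME_S     # enforce the time budget
```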

---

### IT-02: [Test Name]

(repeat structure)

---

## Performance Tests

### PT-01: [Test Name]

**Summary**: [One sentence: what performance aspect is tested]

**Traces to**: AC-02

**Load scenario**:
- Concurrent users: [N]
- Request rate: [N req/s]
- Duration: [N minutes]
- Ramp-up: [strategy]

**Expected results**:

| Metric | Target | Failure Threshold |
|--------|--------|-------------------|
| Latency (p50) | [target] | [max] |
| Latency (p95) | [target] | [max] |
| Latency (p99) | [target] | [max] |
| Throughput | [target req/s] | [min req/s] |
| Error rate | [target %] | [max %] |

**Resource limits**:
- CPU: [max %]
- Memory: [max MB/GB]
- Database connections: [max pool size]
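Once the load tool has produced raw latency samples, the percentile targets in the table above can be checked programmatically. A sketch using only the standard library (the sample data and threshold shape are illustrative; real samples would come from the load tool's output):

```python
# Sketch: compute p50/p95/p99 from latency samples and compare them
# to the failure thresholds from the PT expected-results table.
import statistics

def latency_percentiles(samples_ms: list[float]) -> dict[str, float]:
    qs = statistics.quantiles(samples_ms, n=100, method="inclusive")
    return {"p50": qs[49], "p95": qs[94], "p99": qs[98]}

def within_thresholds(pcts: dict[str, float],
                      max_ms: dict[str, float]) -> bool:
    """max_ms maps 'p50'/'p95'/'p99' to the failure threshold in ms."""
    return all(pcts[k] <= max_ms[k] for k in max_ms)
```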

---

### PT-02: [Test Name]

(repeat structure)

---

## Security Tests

### ST-01: [Test Name]

**Summary**: [One sentence: what security aspect is tested]

**Traces to**: AC-04

**Attack vector**: [e.g., SQL injection on search endpoint, privilege escalation via direct ID access]

**Test procedure**:
1. [Step 1]
2. [Step 2]

**Expected behavior**: [what the system should do — reject, sanitize, log, etc.]

**Pass criteria**: [specific measurable condition]

**Fail criteria**: [what constitutes a failure]
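A security test of this shape can often be automated as a probe that replays known-bad inputs and asserts the expected behavior. A sketch, where the `search` function is a stand-in for the component's real endpoint and the payload list is illustrative:

```python
# Sketch of ST-01 as an automated injection probe.
# In practice the request would go to the running component.
INJECTION_PAYLOADS = [
    "' OR '1'='1",
    "1; DROP TABLE users; --",
    '" OR ""="',
]

def search(term: str) -> int:
    """Stand-in for the search endpoint.
    Returns an HTTP-like status; 400 = input rejected."""
    if any(ch in term for ch in "';\""):
        return 400          # expected behavior: reject, never execute
    return 200

def test_st_01_rejects_injection():
    for payload in INJECTION_PAYLOADS:
        assert search(payload) == 400, f"not rejected: {payload!r}"
```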

---

### ST-02: [Test Name]

(repeat structure)

---

## Acceptance Tests

### AT-01: [Test Name]

**Summary**: [One sentence: what user-facing behavior is verified]

**Traces to**: AC-01

**Preconditions**:
- [Precondition 1]
- [Precondition 2]

**Steps**:

| Step | Action | Expected Result |
|------|--------|-----------------|
| 1 | [user action] | [expected outcome] |
| 2 | [user action] | [expected outcome] |
| 3 | [user action] | [expected outcome] |

---

### AT-02: [Test Name]

(repeat structure)

---

## Test Data Management

**Required test data**:

| Data Set | Description | Source | Size |
|----------|-------------|--------|------|
| [name] | [what it contains] | [generated / fixture / copy of prod subset] | [approx size] |

**Setup procedure**:
1. [How to prepare the test environment]
2. [How to load test data]

**Teardown procedure**:
1. [How to clean up after tests]
2. [How to restore initial state]

**Data isolation strategy**: [How tests are isolated from each other — separate DB, transactions, namespacing]
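One way to realize the "separate DB" isolation strategy is to give every test its own in-memory database, so setup loads the fixture data and teardown is simply closing the connection. A sketch using SQLite (the table and seed rows are illustrative):

```python
# Sketch: per-test data isolation via a fresh in-memory SQLite database.
import sqlite3

def make_isolated_db() -> sqlite3.Connection:
    """Each call is a full setup: fresh schema plus fixture data.
    Closing the connection is the teardown."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.executemany("INSERT INTO users (name) VALUES (?)",
                     [("alice",), ("bob",)])
    conn.commit()
    return conn
```

Because each connection owns its own database, a test that mutates or drops data cannot affect any other test's view.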

## Guidance Notes

- Every test MUST trace back to at least one acceptance criterion (AC-XX). If a test doesn't trace to any, question whether it's needed.
- If an acceptance criterion has no test covering it, mark it as NOT COVERED and explain why (e.g., "requires manual verification", "deferred to phase 2").
- Performance test targets should come from the NFR section in `architecture.md`.
- Security tests should cover at minimum: authentication bypass, authorization escalation, and the injection attacks relevant to this component.
- Not every component needs all four test types. A stateless utility component may only need blackbox tests.