Mirror of https://github.com/azaion/gps-denied-onboard.git (synced 2026-04-23 07:26:37 +00:00)
chore: import .claude command skills, CLAUDE.md, .gitignore, next_steps.md

- Vendor local .claude/ command skills (autopilot, plan, implement, etc.)
- Add CLAUDE.md pointing slash commands to .claude/commands/*/SKILL.md
- Untrack docs-Lokal/ and ignore .planning/ for local-only planning docs
- Include next_steps.md pulled from upstream

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
# Architecture Document Template

Use this template for the architecture document. Save as `_docs/02_document/architecture.md`.

---

```markdown
# [System Name] — Architecture

## 1. System Context

**Problem being solved**: [One paragraph summarizing the problem from problem.md]

**System boundaries**: [What is inside the system vs. external]

**External systems**:

| System | Integration Type | Direction | Purpose |
|--------|------------------|-----------|---------|
| [name] | REST / Queue / DB / File | Inbound / Outbound / Both | [why] |

## 2. Technology Stack

| Layer | Technology | Version | Rationale |
|-------|------------|---------|-----------|
| Language | | | |
| Framework | | | |
| Database | | | |
| Cache | | | |
| Message Queue | | | |
| Hosting | | | |
| CI/CD | | | |

**Key constraints from restrictions.md**:
- [Constraint 1 and how it affects technology choices]
- [Constraint 2]

## 3. Deployment Model

**Environments**: Development, Staging, Production

**Infrastructure**:
- [Cloud provider / On-prem / Hybrid]
- [Container orchestration if applicable]
- [Scaling strategy: horizontal / vertical / auto]

**Environment-specific configuration**:

| Config | Development | Production |
|--------|-------------|------------|
| Database | [local/docker] | [managed service] |
| Secrets | [.env file] | [secret manager] |
| Logging | [console] | [centralized] |

## 4. Data Model Overview

> High-level data model covering the entire system. Detailed per-component models go in component specs.

**Core entities**:

| Entity | Description | Owned By Component |
|--------|-------------|--------------------|
| [entity] | [what it represents] | [component ##] |

**Key relationships**:
- [Entity A] → [Entity B]: [relationship description]

**Data flow summary**:
- [Source] → [Transform] → [Destination]: [what data and why]

## 5. Integration Points

### Internal Communication

| From | To | Protocol | Pattern | Notes |
|------|----|----------|---------|-------|
| [component] | [component] | Sync REST / Async Queue / Direct call | Request-Response / Event / Command | |

### External Integrations

| External System | Protocol | Auth | Rate Limits | Failure Mode |
|-----------------|----------|------|-------------|--------------|
| [system] | [REST/gRPC/etc] | [API key/OAuth/etc] | [limits] | [retry/circuit breaker/fallback] |

## 6. Non-Functional Requirements

| Requirement | Target | Measurement | Priority |
|-------------|--------|-------------|----------|
| Availability | [e.g., 99.9%] | [how measured] | High/Medium/Low |
| Latency (p95) | [e.g., <200ms] | [endpoint/operation] | |
| Throughput | [e.g., 1000 req/s] | [peak/sustained] | |
| Data retention | [e.g., 90 days] | [which data] | |
| Recovery (RPO/RTO) | [e.g., RPO 1hr, RTO 4hr] | | |
| Scalability | [e.g., 10x current load] | [timeline] | |

## 7. Security Architecture

**Authentication**: [mechanism — JWT / session / API key]

**Authorization**: [RBAC / ABAC / per-resource]

**Data protection**:
- At rest: [encryption method]
- In transit: [TLS version]
- Secrets management: [tool/approach]

**Audit logging**: [what is logged, where, retention]

## 8. Key Architectural Decisions

Record significant decisions that shaped the architecture.

### ADR-001: [Decision Title]

**Context**: [Why this decision was needed]

**Decision**: [What was decided]

**Alternatives considered**:
1. [Alternative 1] — rejected because [reason]
2. [Alternative 2] — rejected because [reason]

**Consequences**: [Trade-offs accepted]

### ADR-002: [Decision Title]

...
```
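One quick sanity check when filling in availability targets like the `99.9%` example in the NFR table above is to convert the percentage into a concrete downtime budget. A minimal sketch (the 30-day window is an assumption):

```python
def downtime_budget_minutes(availability_pct: float, window_days: float) -> float:
    """Allowed downtime in minutes for an availability target over a window."""
    unavailable_fraction = 1.0 - availability_pct / 100.0
    return window_days * 24 * 60 * unavailable_fraction

# 99.9% over a 30-day month allows roughly 43.2 minutes of downtime
monthly_budget = downtime_budget_minutes(99.9, 30)
```

A target that sounds strict ("three nines") still permits over 40 minutes of monthly downtime; stating the budget next to the target makes the "how measured" column easier to fill in honestly.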
# Blackbox Tests Template

Save as `DOCUMENT_DIR/tests/blackbox-tests.md`.

---

```markdown
# Blackbox Tests

## Positive Scenarios

### FT-P-01: [Scenario Name]

**Summary**: [One sentence: what black-box use case this validates]
**Traces to**: AC-[ID], AC-[ID]
**Category**: [which AC category — e.g., Position Accuracy, Image Processing, etc.]

**Preconditions**:
- [System state required before test]

**Input data**: [reference to specific data set or file from test-data.md]

**Steps**:

| Step | Consumer Action | Expected System Response |
|------|-----------------|--------------------------|
| 1 | [call / send / provide input] | [response / event / output] |
| 2 | [call / send / provide input] | [response / event / output] |

**Expected outcome**: [specific, measurable result]
**Max execution time**: [e.g., 10s]

---

### FT-P-02: [Scenario Name]

(repeat structure)

---

## Negative Scenarios

### FT-N-01: [Scenario Name]

**Summary**: [One sentence: what invalid/edge input this tests]
**Traces to**: AC-[ID] (negative case), RESTRICT-[ID]
**Category**: [which AC/restriction category]

**Preconditions**:
- [System state required before test]

**Input data**: [reference to specific invalid data or edge case]

**Steps**:

| Step | Consumer Action | Expected System Response |
|------|-----------------|--------------------------|
| 1 | [provide invalid input / trigger edge case] | [error response / graceful degradation / fallback behavior] |

**Expected outcome**: [system rejects gracefully / falls back to X / returns error Y]
**Max execution time**: [e.g., 5s]

---

### FT-N-02: [Scenario Name]

(repeat structure)
```

---

## Guidance Notes

- Blackbox tests should typically trace to at least one acceptance criterion or restriction. Tests without a trace are allowed but should have a clear justification.
- Positive scenarios validate that the system does what it should.
- Negative scenarios validate that the system rejects, or handles gracefully, what it shouldn't accept.
- Expected outcomes must be specific and measurable — not "works correctly" but "returns position within 50m of ground truth."
- Input data references should point to specific entries in test-data.md.
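The traceability rule in the notes above can be checked mechanically. A minimal sketch, assuming scenarios use `### FT-...` headings and `**Traces to**:` lines exactly as in this template (the regexes and function name are assumptions, not part of the template):

```python
import re

def untraced_tests(markdown: str) -> list[str]:
    """Return IDs of '### FT-...' scenarios with no AC/RESTRICT trace line."""
    untraced = []
    # Each scenario starts at a line like '### FT-P-01: Scenario Name'
    for section in re.split(r"^### ", markdown, flags=re.MULTILINE)[1:]:
        test_id = section.split(":", 1)[0].strip()
        if not re.search(r"\*\*Traces to\*\*:.*\b(AC|RESTRICT)-", section):
            untraced.append(test_id)
    return untraced
```

Scenarios returned by such a check are the ones that need either a trace or a written justification.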
# Component Specification Template

Use this template for each component. Save as `components/[##]_[name]/description.md`.

---

````markdown
# [Component Name]

## 1. High-Level Overview

**Purpose**: [One sentence: what this component does and its role in the system]

**Architectural Pattern**: [e.g., Repository, Event-driven, Pipeline, Facade, etc.]

**Upstream dependencies**: [Components that this component calls or consumes from]

**Downstream consumers**: [Components that call or consume from this component]

## 2. Internal Interfaces

For each interface this component exposes internally:

### Interface: [InterfaceName]

| Method | Input | Output | Async | Error Types |
|--------|-------|--------|-------|-------------|
| `method_name` | `InputDTO` | `OutputDTO` | Yes/No | `ErrorType1`, `ErrorType2` |

**Input DTOs**:
```
[DTO name]:
  field_1: type (required/optional) — description
  field_2: type (required/optional) — description
```

**Output DTOs**:
```
[DTO name]:
  field_1: type — description
  field_2: type — description
```

## 3. External API Specification

> Include this section only if the component exposes an external HTTP/gRPC API.
> Skip if the component is internal-only.

| Endpoint | Method | Auth | Rate Limit | Description |
|----------|--------|------|------------|-------------|
| `/api/v1/...` | GET/POST/PUT/DELETE | Required/Public | X req/min | Brief description |

**Request/Response schemas**: define per endpoint using OpenAPI-style notation.

**Example request/response**:
```json
// Request
{ }

// Response
{ }
```

## 4. Data Access Patterns

### Queries

| Query | Frequency | Hot Path | Index Needed |
|-------|-----------|----------|--------------|
| [describe query] | High/Medium/Low | Yes/No | Yes/No |

### Caching Strategy

| Data | Cache Type | TTL | Invalidation |
|------|------------|-----|--------------|
| [data item] | In-memory / Redis / None | [duration] | [trigger] |

### Storage Estimates

| Table/Collection | Est. Row Count (1yr) | Row Size | Total Size | Growth Rate |
|------------------|----------------------|----------|------------|-------------|
| [table_name] | | | | /month |

### Data Management

**Seed data**: [Required seed data and how to load it]

**Rollback**: [Rollback procedure for this component's data changes]

## 5. Implementation Details

**Algorithmic Complexity**: [Big O for critical methods — only if non-trivial]

**State Management**: [Local state / Global state / Stateless — explain how state is handled]

**Key Dependencies**: [External libraries and their purpose]

| Library | Version | Purpose |
|---------|---------|---------|
| [name] | [version] | [why needed] |

**Error Handling Strategy**:
- [How errors are caught, propagated, and reported]
- [Retry policy if applicable]
- [Circuit breaker if applicable]

## 6. Extensions and Helpers

> List any shared utilities this component needs that should live in a `helpers/` folder.

| Helper | Purpose | Used By |
|--------|---------|---------|
| [helper_name] | [what it does] | [list of components] |

## 7. Caveats & Edge Cases

**Known limitations**:
- [Limitation 1]

**Potential race conditions**:
- [Race condition scenario, if any]

**Performance bottlenecks**:
- [Bottleneck description and mitigation approach]

## 8. Dependency Graph

**Must be implemented after**: [list of component numbers/names]

**Can be implemented in parallel with**: [list of component numbers/names]

**Blocks**: [list of components that depend on this one]

## 9. Logging Strategy

| Log Level | When | Example |
|-----------|------|---------|
| ERROR | Unrecoverable failures | `Failed to process order {id}: {error}` |
| WARN | Recoverable issues | `Retry attempt {n} for {operation}` |
| INFO | Key business events | `Order {id} created by user {uid}` |
| DEBUG | Development diagnostics | `Query returned {n} rows in {ms}ms` |

**Log format**: [structured JSON / plaintext — match system standard]

**Log storage**: [stdout / file / centralized logging service]
````

---

## Guidance Notes

- **Section 3 (External API)**: skip entirely for internal-only components. Include for any component that exposes HTTP endpoints, WebSocket connections, or gRPC services.
- **Section 4 (Storage Estimates)**: critical for components that manage persistent data. Skip for stateless components.
- **Section 5 (Algorithmic Complexity)**: only document if the algorithm is non-trivial (O(n^2) or worse, recursive, etc.). Simple CRUD operations don't need this.
- **Section 6 (Helpers)**: if the helper is used by only one component, keep it inside that component. Only extract to `helpers/` if shared by 2+ components.
- **Section 8 (Dependency Graph)**: this is essential for determining implementation order. Be precise about what "depends on" means — data dependency, API dependency, or shared infrastructure.
# Epic Template

Use this template for each epic. Create epics via the configured work item tracker (Jira MCP or Azure DevOps MCP).

---

```markdown
## Epic: [Component Name] — [Outcome]

**Example**: Data Ingestion — Near-real-time pipeline

### Epic Summary

[1-2 sentences: what we are building + why it matters]

### Problem / Context

[Current state, pain points, constraints, business opportunities.
Link to architecture.md and relevant component spec.]

### Scope

**In Scope**:
- [Capability 1 — describe what, not how]
- [Capability 2]
- [Capability 3]

**Out of Scope**:
- [Explicit exclusion 1 — prevents scope creep]
- [Explicit exclusion 2]

### Assumptions

- [System design assumption]
- [Data structure assumption]
- [Infrastructure assumption]

### Dependencies

**Epic dependencies** (must be completed first):
- [Epic name / ID]

**External dependencies**:
- [Services, hardware, environments, certificates, data sources]

### Effort Estimation

**T-shirt size**: S / M / L / XL
**Story points range**: [min]-[max]

### Users / Consumers

| Type | Who | Key Use Cases |
|------|-----|---------------|
| Internal | [team/role] | [use case] |
| External | [user type] | [use case] |
| System | [service name] | [integration point] |

### Requirements

**Functional**:
- [API expectations, events, data handling]
- [Idempotency, retry behavior]

**Non-functional**:
- [Availability, latency, throughput targets]
- [Scalability, processing limits, data retention]

**Security / Compliance**:
- [Authentication, encryption, secrets management]
- [Logging, audit trail]
- [SOC2 / ISO / GDPR if applicable]

### Design & Architecture

- Architecture doc: `_docs/02_document/architecture.md`
- Component spec: `_docs/02_document/components/[##]_[name]/description.md`
- System flows: `_docs/02_document/system-flows.md`

### Definition of Done

- [ ] All in-scope capabilities implemented
- [ ] Automated tests pass (unit + blackbox)
- [ ] Minimum coverage threshold met (75%)
- [ ] Runbooks written (if applicable)
- [ ] Documentation updated

### Acceptance Criteria

| # | Criterion | Measurable Condition |
|---|-----------|----------------------|
| 1 | [criterion] | [how to verify] |
| 2 | [criterion] | [how to verify] |

### Risks & Mitigations

| # | Risk | Mitigation | Owner |
|---|------|------------|-------|
| 1 | [top risk] | [mitigation] | [owner] |
| 2 | | | |
| 3 | | | |

### Labels

- `component:[name]`
- `env:prod` / `env:stg`
- `type:platform` / `type:data` / `type:integration`

### Child Issues

| Type | Title | Points |
|------|-------|--------|
| Spike | [research/investigation task] | [1-3] |
| Task | [implementation task] | [1-5] |
| Task | [implementation task] | [1-5] |
| Enabler | [infrastructure/setup task] | [1-3] |
```

---

## Guidance Notes

- Be concise. Fewer words with the same meaning = better epic.
- Capabilities in scope are "what", not "how" — avoid describing implementation details.
- Dependency order matters: epics that must be done first should be listed earlier in the backlog.
- Every epic maps to exactly one component. If a component is too large for one epic, split the component first.
- Complexity points for child issues follow the project standard: 1, 2, 3, 5, 8. Do not create issues above 5 points — split them.
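The point rules in the last note above are simple enough to enforce when creating child issues. A minimal sketch, assuming issues arrive as (title, points) pairs (the data shape and names are assumptions; a real check would read issues from the tracker):

```python
VALID_POINTS = {1, 2, 3, 5, 8}   # project point scale
MAX_CHILD_POINTS = 5             # anything larger must be split

def point_violations(issues: list[tuple[str, int]]) -> list[str]:
    """Flag child issues that are off-scale or too large to keep whole."""
    problems = []
    for title, points in issues:
        if points not in VALID_POINTS:
            problems.append(f"{title}: {points} is not on the 1/2/3/5/8 scale")
        elif points > MAX_CHILD_POINTS:
            problems.append(f"{title}: {points} points, split this issue")
    return problems
```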
# Final Planning Report Template

Use this template after completing all 6 steps and the quality checklist. Save as `_docs/02_document/FINAL_report.md`.

---

```markdown
# [System Name] — Planning Report

## Executive Summary

[2-3 sentences: what was planned, the core architectural approach, and the key outcome (number of components, epics, estimated effort)]

## Problem Statement

[Brief restatement from problem.md — transformed, not copy-pasted]

## Architecture Overview

[Key architectural decisions and technology stack summary. Reference `architecture.md` for full details.]

**Technology stack**: [language, framework, database, hosting — one line]

**Deployment**: [environment strategy — one line]

## Component Summary

| # | Component | Purpose | Dependencies | Epic |
|---|-----------|---------|--------------|------|
| 01 | [name] | [one-line purpose] | — | [Jira ID] |
| 02 | [name] | [one-line purpose] | 01 | [Jira ID] |
| ... | | | | |

**Implementation order** (based on dependency graph):
1. [Phase 1: components that can start immediately]
2. [Phase 2: components that depend on Phase 1]
3. [Phase 3: ...]

## System Flows

| Flow | Description | Key Components |
|------|-------------|----------------|
| [name] | [one-line summary] | [component list] |

[Reference `system-flows.md` for full diagrams and details.]

## Risk Summary

| Level | Count | Key Risks |
|-------|-------|-----------|
| Critical | [N] | [brief list] |
| High | [N] | [brief list] |
| Medium | [N] | — |
| Low | [N] | — |

**Iterations completed**: [N]
**All Critical/High risks mitigated**: Yes / No — [details if No]

[Reference `risk_mitigations.md` for the full register.]

## Test Coverage

| Component | Integration | Performance | Security | Acceptance | AC Coverage |
|-----------|-------------|-------------|----------|------------|-------------|
| [name] | [N tests] | [N tests] | [N tests] | [N tests] | [X/Y ACs] |
| ... | | | | | |

**Overall acceptance criteria coverage**: [X / Y total ACs covered] ([percentage]%)

## Epic Roadmap

| Order | Epic | Component | Effort | Dependencies |
|-------|------|-----------|--------|--------------|
| 1 | [Jira ID]: [name] | [component] | [S/M/L/XL] | — |
| 2 | [Jira ID]: [name] | [component] | [S/M/L/XL] | Epic 1 |
| ... | | | | |

**Total estimated effort**: [sum or range]

## Key Decisions Made

| # | Decision | Rationale | Alternatives Rejected |
|---|----------|-----------|-----------------------|
| 1 | [decision] | [why] | [what was rejected] |
| 2 | | | |

## Open Questions

| # | Question | Impact | Assigned To |
|---|----------|--------|-------------|
| 1 | [unresolved question] | [what it blocks or affects] | [who should answer] |

## Artifact Index

| File | Description |
|------|-------------|
| `architecture.md` | System architecture |
| `system-flows.md` | System flows and diagrams |
| `components/01_[name]/description.md` | Component spec |
| `components/01_[name]/tests.md` | Test spec |
| `risk_mitigations.md` | Risk register |
| `diagrams/components.drawio` | Component diagram |
| `diagrams/flows/flow_[name].md` | Flow diagrams |
```
# Performance Tests Template

Save as `DOCUMENT_DIR/tests/performance-tests.md`.

---

```markdown
# Performance Tests

### NFT-PERF-01: [Test Name]

**Summary**: [What performance characteristic this validates]
**Traces to**: AC-[ID]
**Metric**: [what is measured — latency, throughput, frame rate, etc.]

**Preconditions**:
- [System state, load profile, data volume]

**Steps**:

| Step | Consumer Action | Measurement |
|------|-----------------|-------------|
| 1 | [action] | [what to measure and how] |

**Pass criteria**: [specific threshold — e.g., p95 latency < 400ms]
**Duration**: [how long the test runs]
```

---

## Guidance Notes

- Performance tests should run long enough to capture steady-state behavior, not just cold-start.
- Define clear pass/fail thresholds with specific metrics (p50, p95, p99 latency, throughput, etc.).
- Include warm-up preconditions to separate initialization cost from steady-state performance.
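Thresholds like "p95 latency < 400ms" need an agreed computation. A minimal sketch of how a harness might evaluate them, discarding warm-up samples first (the nearest-rank method and the warm-up count are assumptions):

```python
def percentile(samples: list[float], pct: float) -> float:
    """Nearest-rank percentile of a non-empty sample list."""
    ordered = sorted(samples)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

def passes_latency_target(samples_ms: list[float], warmup: int, p95_limit_ms: float) -> bool:
    """Drop cold-start samples, then check the steady-state p95 against the limit."""
    steady_state = samples_ms[warmup:]
    return percentile(steady_state, 95) < p95_limit_ms
```

Separating warm-up this way is exactly why the guidance above asks for warm-up preconditions: a handful of slow cold-start samples can otherwise dominate the tail percentiles.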
# Resilience Tests Template

Save as `DOCUMENT_DIR/tests/resilience-tests.md`.

---

```markdown
# Resilience Tests

### NFT-RES-01: [Test Name]

**Summary**: [What failure/recovery scenario this validates]
**Traces to**: AC-[ID]

**Preconditions**:
- [System state before fault injection]

**Fault injection**:
- [What fault is introduced — process kill, network partition, invalid input sequence, etc.]

**Steps**:

| Step | Action | Expected Behavior |
|------|--------|-------------------|
| 1 | [inject fault] | [system behavior during fault] |
| 2 | [observe recovery] | [system behavior after recovery] |

**Pass criteria**: [recovery time, data integrity, continued operation]
```

---

## Guidance Notes

- Resilience tests must define both the fault and the expected recovery — not just "system should recover."
- Include specific recovery time expectations and data integrity checks.
- Test both graceful degradation (partial failure) and full recovery scenarios.
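A recovery-time expectation needs a measurement procedure, not just a number. A minimal sketch that polls a health probe after fault injection (the probe callable and default timings are assumptions; in practice the probe would call the system's health endpoint):

```python
import time

def measure_recovery_seconds(is_healthy, timeout_s=30.0, poll_s=0.5):
    """Poll a health probe until healthy; return elapsed seconds, or None on timeout."""
    start = time.monotonic()
    while time.monotonic() - start < timeout_s:
        if is_healthy():
            return time.monotonic() - start
        time.sleep(poll_s)
    return None  # no recovery within the timeout, so the test fails
```

The pass criterion then becomes a plain comparison: recovery was observed, and it took less than the stated bound.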
# Resource Limit Tests Template

Save as `DOCUMENT_DIR/tests/resource-limit-tests.md`.

---

```markdown
# Resource Limit Tests

### NFT-RES-LIM-01: [Test Name]

**Summary**: [What resource constraint this validates]
**Traces to**: AC-[ID], RESTRICT-[ID]

**Preconditions**:
- [System running under specified constraints]

**Monitoring**:
- [What resources to monitor — memory, CPU, GPU, disk, temperature]

**Duration**: [how long to run]
**Pass criteria**: [resource stays within limit — e.g., memory < 8GB throughout]
```

---

## Guidance Notes

- Resource limit tests must specify monitoring duration — short bursts don't prove sustained compliance.
- Define specific numeric limits that can be programmatically checked.
- Include both the monitoring method and the threshold in the pass criteria.
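Sustained monitoring against a numeric limit, as the guidance above requires, can be expressed as a small sampling loop. A minimal sketch, generic over how the reading is taken (the `read_bytes` callable is an assumption; in practice it might wrap `psutil` or cgroup statistics):

```python
import time

def stays_under_limit(read_bytes, limit_bytes, duration_s, interval_s):
    """Sample a resource reading for the whole duration; any excursion fails."""
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        if read_bytes() > limit_bytes:
            return False  # a single sample over the limit fails the test
        time.sleep(interval_s)
    return True
```

Choosing `interval_s` matters: too coarse a sampling interval can miss short spikes, so state it in the test spec alongside the duration.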
# Risk Register Template

Use this template for risk assessment. Save as `_docs/02_document/risk_mitigations.md`.
Subsequent iterations: `risk_mitigations_02.md`, `risk_mitigations_03.md`, etc.

---

```markdown
# Risk Assessment — [Topic] — Iteration [##]

## Risk Scoring Matrix

|  | Low Impact | Medium Impact | High Impact |
|--|------------|---------------|-------------|
| **High Probability** | Medium | High | Critical |
| **Medium Probability** | Low | Medium | High |
| **Low Probability** | Low | Low | Medium |

## Acceptance Criteria by Risk Level

| Level | Action Required |
|-------|-----------------|
| Low | Accepted, monitored quarterly |
| Medium | Mitigation plan required before implementation |
| High | Mitigation + contingency plan required, reviewed weekly |
| Critical | Must be resolved before proceeding to next planning step |

## Risk Register

| ID | Risk | Category | Probability | Impact | Score | Mitigation | Owner | Status |
|----|------|----------|-------------|--------|-------|------------|-------|--------|
| R01 | [risk description] | [category] | High/Med/Low | High/Med/Low | Critical/High/Med/Low | [mitigation strategy] | [owner] | Open/Mitigated/Accepted |
| R02 | | | | | | | | |

## Risk Categories

### Technical Risks
- Technology choices may not meet requirements
- Integration complexity underestimated
- Performance targets unachievable
- Security vulnerabilities in design
- Data model cannot support future requirements

### Schedule Risks
- Dependencies delayed
- Scope creep from ambiguous requirements
- Underestimated complexity

### Resource Risks
- Key person dependency
- Team lacks experience with chosen technology
- Infrastructure not available in time

### External Risks
- Third-party API changes or deprecation
- Vendor reliability or pricing changes
- Regulatory or compliance changes
- Data source availability

## Detailed Risk Analysis

### R01: [Risk Title]

**Description**: [Detailed description of the risk]

**Trigger conditions**: [What would cause this risk to materialize]

**Affected components**: [List of components impacted]

**Mitigation strategy**:
1. [Action 1]
2. [Action 2]

**Contingency plan**: [What to do if mitigation fails]

**Residual risk after mitigation**: [Low/Medium/High]

**Documents updated**: [List architecture/component docs that were updated to reflect this mitigation]

---

### R02: [Risk Title]

(repeat structure above)

## Architecture/Component Changes Applied

| Risk ID | Document Modified | Change Description |
|---------|-------------------|--------------------|
| R01 | `architecture.md` §3 | [what changed] |
| R01 | `components/02_[name]/description.md` §5 | [what changed] |

## Summary

**Total risks identified**: [N]
**Critical**: [N] | **High**: [N] | **Medium**: [N] | **Low**: [N]
**Risks mitigated this iteration**: [N]
**Risks requiring user decision**: [list]
```
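The scoring matrix in the template above is a pure lookup, so it can be encoded once and reused to score register entries consistently. A minimal sketch:

```python
# (probability, impact) -> score; mirrors the Risk Scoring Matrix in the template
RISK_MATRIX = {
    ("High", "Low"): "Medium",  ("High", "Medium"): "High",     ("High", "High"): "Critical",
    ("Medium", "Low"): "Low",   ("Medium", "Medium"): "Medium", ("Medium", "High"): "High",
    ("Low", "Low"): "Low",      ("Low", "Medium"): "Low",       ("Low", "High"): "Medium",
}

def risk_score(probability: str, impact: str) -> str:
    """Look up the risk level for a probability/impact pair."""
    return RISK_MATRIX[(probability, impact)]
```

Computing the Score column this way keeps it consistent with the Probability and Impact columns, which is easy to get wrong when scoring by hand.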
# Security Tests Template

Save as `DOCUMENT_DIR/tests/security-tests.md`.

---

```markdown
# Security Tests

### NFT-SEC-01: [Test Name]

**Summary**: [What security property this validates]
**Traces to**: AC-[ID], RESTRICT-[ID]

**Steps**:

| Step | Consumer Action | Expected Response |
|------|-----------------|-------------------|
| 1 | [attempt unauthorized access / injection / etc.] | [rejection / no data leak / etc.] |

**Pass criteria**: [specific security outcome]
```

---

## Guidance Notes

- Security tests at this level focus on black-box attacks (unauthorized API calls, malformed input), not code-level vulnerabilities.
- Verify that the system remains operational after security-related edge cases (no crash, no hang).
- Test authentication/authorization boundaries from the consumer's perspective.
@@ -0,0 +1,108 @@
|
||||
# System Flows Template
|
||||
|
||||
Use this template for the system flows document. Save as `_docs/02_document/system-flows.md`.
|
||||
Individual flow diagrams go in `_docs/02_document/diagrams/flows/flow_[name].md`.
|
||||
|
||||
---
|
||||
|
||||
```markdown
# [System Name] — System Flows

## Flow Inventory

| # | Flow Name | Trigger | Primary Components | Criticality |
|---|-----------|---------|-------------------|-------------|
| F1 | [name] | [user action / scheduled / event] | [component list] | High/Medium/Low |
| F2 | [name] | | | |
| ... | | | | |

## Flow Dependencies

| Flow | Depends On | Shares Data With |
|------|-----------|-----------------|
| F1 | — | F2 (via [entity]) |
| F2 | F1 must complete first | F3 |

---

## Flow F1: [Flow Name]

### Description

[1-2 sentences: what this flow does, who triggers it, what the outcome is]

### Preconditions

- [Condition 1]
- [Condition 2]

### Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant ComponentA
    participant ComponentB
    participant Database

    User->>ComponentA: [action]
    ComponentA->>ComponentB: [call with params]
    ComponentB->>Database: [query/write]
    Database-->>ComponentB: [result]
    ComponentB-->>ComponentA: [response]
    ComponentA-->>User: [result]
```

### Flowchart

```mermaid
flowchart TD
    Start([Trigger]) --> Step1[Step description]
    Step1 --> Decision{Condition?}
    Decision -->|Yes| Step2[Step description]
    Decision -->|No| Step3[Step description]
    Step2 --> EndNode([Result])
    Step3 --> EndNode
```

### Data Flow

| Step | From | To | Data | Format |
|------|------|----|------|--------|
| 1 | [source] | [destination] | [what data] | [DTO/event/etc] |
| 2 | | | | |

### Error Scenarios

| Error | Where | Detection | Recovery |
|-------|-------|-----------|----------|
| [error type] | [which step] | [how detected] | [what happens] |

### Performance Expectations

| Metric | Target | Notes |
|--------|--------|-------|
| End-to-end latency | [target] | [conditions] |
| Throughput | [target] | [peak/sustained] |

---

## Flow F2: [Flow Name]

(repeat structure above)
```

---

## Mermaid Diagram Conventions

Follow these conventions for consistency across all flow diagrams:

- **Participants**: use component names matching `components/[##]_[name]`
- **Node IDs**: camelCase, no spaces (e.g., `validateInput`, `saveOrder`)
- **Decision nodes**: use `{Question?}` format
- **Start/End**: use `([label])` stadium shape
- **External systems**: use `[[label]]` subroutine shape
- **Subgraphs**: group by component or bounded context
- **No styling**: do not add colors or CSS classes — let the renderer theme handle it
- **Edge labels**: wrap special characters in quotes (e.g., `-->|"O(n) check"|`)
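A minimal flowchart that follows all of these conventions might look like this (the flow and node names are illustrative, not part of any real component):

```mermaid
flowchart TD
    Start([Order submitted]) --> validateInput[Validate input]
    validateInput --> isValid{Valid?}
    isValid -->|Yes| saveOrder[Save order]
    isValid -->|No| rejectOrder[Reject with error]
    saveOrder --> notifyService[[Notification service]]
    notifyService --> EndNode([Order accepted])
    rejectOrder --> EndNode
```

Note the camelCase node IDs, stadium-shaped start/end nodes, `{Question?}` decision node, subroutine shape for the external system, and the absence of styling directives.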
@@ -0,0 +1,55 @@
# Test Data Template

Save as `DOCUMENT_DIR/tests/test-data.md`.

---

```markdown
# Test Data Management

## Seed Data Sets

| Data Set | Description | Used by Tests | How Loaded | Cleanup |
|----------|-------------|---------------|-----------|---------|
| [name] | [what it contains] | [test IDs] | [SQL script / API call / fixture file / volume mount] | [how removed after test] |

## Data Isolation Strategy

[e.g., each test run gets a fresh container restart, or transactions are rolled back, or namespaced data, or separate DB per test group]

## Input Data Mapping

| Input Data File | Source Location | Description | Covers Scenarios |
|-----------------|----------------|-------------|-----------------|
| [filename] | `_docs/00_problem/input_data/[filename]` | [what it contains] | [test IDs that use this data] |

## Expected Results Mapping

| Test Scenario ID | Input Data | Expected Result | Comparison Method | Tolerance | Expected Result Source |
|-----------------|------------|-----------------|-------------------|-----------|----------------------|
| [test ID] | `input_data/[filename]` | [quantifiable expected output] | [exact / tolerance / pattern / threshold / file-diff] | [± value or N/A] | `input_data/expected_results/[filename]` or inline |

## External Dependency Mocks

| External Service | Mock/Stub | How Provided | Behavior |
|-----------------|-----------|-------------|----------|
| [service name] | [mock type] | [Docker service / in-process stub / recorded responses] | [what it returns / simulates] |

## Data Validation Rules

| Data Type | Validation | Invalid Examples | Expected System Behavior |
|-----------|-----------|-----------------|------------------------|
| [type] | [rules] | [invalid input examples] | [how system should respond] |
```

---

## Guidance Notes

- Every seed data set should be traceable to specific test scenarios.
- Input data from `_docs/00_problem/input_data/` should be mapped to the test scenarios that use it.
- Every input data item MUST have a corresponding expected result in the Expected Results Mapping table.
- Expected results MUST be quantifiable: exact values, numeric tolerances, pattern matches, thresholds, or reference files. "Works correctly" is never acceptable.
- For complex expected outputs, provide machine-readable reference files (JSON, CSV) in `_docs/00_problem/input_data/expected_results/` and reference them in the mapping.
- External mocks must be deterministic — same input always produces same output.
- Data isolation must guarantee no test can affect another test's outcome.
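As a sketch of how a test harness could apply the comparison methods from the Expected Results Mapping table (exact / tolerance / pattern / threshold — `file-diff` is omitted for brevity), assuming Python as the harness language; the `compare` helper is hypothetical, not part of the template:

```python
import math
import re

def compare(actual, expected, method, tolerance=None):
    """Apply one of the comparison methods from the Expected Results
    Mapping table. Returns True if `actual` satisfies `expected`."""
    if method == "exact":
        # byte-for-byte / value equality
        return actual == expected
    if method == "tolerance":
        # numeric comparison within +/- tolerance
        return math.isclose(actual, expected, abs_tol=tolerance)
    if method == "pattern":
        # `expected` is a regex the full actual output must match
        return re.fullmatch(expected, actual) is not None
    if method == "threshold":
        # `actual` must not exceed the upper bound `expected`
        return actual <= expected
    raise ValueError(f"unknown comparison method: {method}")
```

This keeps the pass/fail decision deterministic and makes the tolerance column of the mapping table directly executable.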
@@ -0,0 +1,90 @@
# Test Environment Template

Save as `DOCUMENT_DIR/tests/environment.md`.

---

```markdown
# Test Environment

## Overview

**System under test**: [main system name and entry points — API URLs, message queues, serial ports, etc.]
**Consumer app purpose**: Standalone application that exercises the main system through its public interfaces, validating black-box use cases without access to internals.

## Docker Environment

### Services

| Service | Image / Build | Purpose | Ports |
|---------|--------------|---------|-------|
| system-under-test | [main app image or build context] | The main system being tested | [ports] |
| test-db | [postgres/mysql/etc.] | Database for the main system | [ports] |
| e2e-consumer | [build context for consumer app] | Black-box test runner | — |
| [dependency] | [image] | [purpose — cache, queue, mock, etc.] | [ports] |

### Networks

| Network | Services | Purpose |
|---------|----------|---------|
| e2e-net | all | Isolated test network |

### Volumes

| Volume | Mounted to | Purpose |
|--------|-----------|---------|
| [name] | [service:path] | [test data, DB persistence, etc.] |

### docker-compose structure

```yaml
# Outline only — not runnable code
services:
  system-under-test:
    # main system
  test-db:
    # database
  e2e-consumer:
    # consumer test app
    depends_on:
      - system-under-test
```

## Consumer Application

**Tech stack**: [language, framework, test runner]
**Entry point**: [how it starts — e.g., pytest, jest, custom runner]

### Communication with system under test

| Interface | Protocol | Endpoint / Topic | Authentication |
|-----------|----------|-----------------|----------------|
| [API name] | [HTTP/gRPC/AMQP/etc.] | [URL or topic] | [method] |

### What the consumer does NOT have access to

- No direct database access to the main system
- No internal module imports
- No shared memory or file system with the main system

## CI/CD Integration

**When to run**: [e.g., on PR merge to dev, nightly, before production deploy]
**Pipeline stage**: [where in the CI pipeline this fits]
**Gate behavior**: [block merge / warning only / manual approval]
**Timeout**: [max total suite duration before considered failed]

## Reporting

**Format**: CSV
**Columns**: Test ID, Test Name, Execution Time (ms), Result (PASS/FAIL/SKIP), Error Message (if FAIL)
**Output path**: [where the CSV is written — e.g., ./e2e-results/report.csv]
```
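Under the reporting format above, a report file might look like the following (rows are illustrative placeholders, not real test results):

```csv
Test ID,Test Name,Execution Time (ms),Result,Error Message
AT-01,Submit valid order,1240,PASS,
AT-02,Reject malformed payload,310,FAIL,"422 expected, got 500"
PT-01,Sustained load p95 latency,60210,SKIP,
```

Keeping the error message quoted allows commas inside failure text without breaking the CSV columns.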
---

## Guidance Notes

- The consumer app must treat the main system as a true black box — no internal imports, no direct DB queries against the main system's database.
- Docker environment should be self-contained — `docker compose up` must be sufficient to run the full suite.
- If the main system requires external services (payment gateways, third-party APIs), define mock/stub services in the Docker environment.
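Fleshing out the compose outline above, a self-contained environment could gate startup on database health so that `docker compose up` alone runs the suite; image names and paths here are placeholders, not prescribed values:

```yaml
services:
  system-under-test:
    build: .                     # main system build context (placeholder)
    depends_on:
      test-db:
        condition: service_healthy
    networks: [e2e-net]
  test-db:
    image: postgres:16           # placeholder database image
    environment:
      POSTGRES_PASSWORD: test
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 2s
      retries: 15
    networks: [e2e-net]
  e2e-consumer:
    build: ./e2e                 # consumer test app build context (placeholder)
    depends_on:
      system-under-test:
        condition: service_started
    networks: [e2e-net]
networks:
  e2e-net: {}
```

The `condition: service_healthy` dependency keeps the main system from starting before its database accepts connections, which removes a common source of flaky first-run failures.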
@@ -0,0 +1,172 @@
# Test Specification Template

Use this template for each component's test spec. Save as `components/[##]_[name]/tests.md`.

---

```markdown
# Test Specification — [Component Name]

## Acceptance Criteria Traceability

| AC ID | Acceptance Criterion | Test IDs | Coverage |
|-------|---------------------|----------|----------|
| AC-01 | [criterion from acceptance_criteria.md] | IT-01, AT-01 | Covered |
| AC-02 | [criterion] | PT-01 | Covered |
| AC-03 | [criterion] | — | NOT COVERED — [reason] |

---

## Blackbox Tests

### IT-01: [Test Name]

**Summary**: [One sentence: what this test verifies]

**Traces to**: AC-01, AC-03

**Description**: [Detailed test scenario]

**Input data**:
```
[specific input data for this test]
```

**Expected result**:
```
[specific expected output or state]
```

**Max execution time**: [e.g., 5s]

**Dependencies**: [other components/services that must be running]

---

### IT-02: [Test Name]

(repeat structure)

---

## Performance Tests

### PT-01: [Test Name]

**Summary**: [One sentence: what performance aspect is tested]

**Traces to**: AC-02

**Load scenario**:
- Concurrent users: [N]
- Request rate: [N req/s]
- Duration: [N minutes]
- Ramp-up: [strategy]

**Expected results**:

| Metric | Target | Failure Threshold |
|--------|--------|-------------------|
| Latency (p50) | [target] | [max] |
| Latency (p95) | [target] | [max] |
| Latency (p99) | [target] | [max] |
| Throughput | [target req/s] | [min req/s] |
| Error rate | [target %] | [max %] |

**Resource limits**:
- CPU: [max %]
- Memory: [max MB/GB]
- Database connections: [max pool size]

---

### PT-02: [Test Name]

(repeat structure)

---

## Security Tests

### ST-01: [Test Name]

**Summary**: [One sentence: what security aspect is tested]

**Traces to**: AC-04

**Attack vector**: [e.g., SQL injection on search endpoint, privilege escalation via direct ID access]

**Test procedure**:
1. [Step 1]
2. [Step 2]

**Expected behavior**: [what the system should do — reject, sanitize, log, etc.]

**Pass criteria**: [specific measurable condition]

**Fail criteria**: [what constitutes a failure]

---

### ST-02: [Test Name]

(repeat structure)

---

## Acceptance Tests

### AT-01: [Test Name]

**Summary**: [One sentence: what user-facing behavior is verified]

**Traces to**: AC-01

**Preconditions**:
- [Precondition 1]
- [Precondition 2]

**Steps**:

| Step | Action | Expected Result |
|------|--------|-----------------|
| 1 | [user action] | [expected outcome] |
| 2 | [user action] | [expected outcome] |
| 3 | [user action] | [expected outcome] |

---

### AT-02: [Test Name]

(repeat structure)

---

## Test Data Management

**Required test data**:

| Data Set | Description | Source | Size |
|----------|-------------|--------|------|
| [name] | [what it contains] | [generated / fixture / copy of prod subset] | [approx size] |

**Setup procedure**:
1. [How to prepare the test environment]
2. [How to load test data]

**Teardown procedure**:
1. [How to clean up after tests]
2. [How to restore initial state]

**Data isolation strategy**: [How tests are isolated from each other — separate DB, transactions, namespacing]
```

---

## Guidance Notes

- Every test MUST trace back to at least one acceptance criterion (AC-XX). If a test doesn't trace to any, question whether it's needed.
- If an acceptance criterion has no test covering it, mark it as NOT COVERED and explain why (e.g., "requires manual verification", "deferred to phase 2").
- Performance test targets should come from the NFR section in `architecture.md`.
- Security tests should cover at minimum: authentication bypass, authorization escalation, injection attacks relevant to this component.
- Not every component needs all 4 test types. A stateless utility component may only need blackbox tests.
@@ -0,0 +1,47 @@
# Traceability Matrix Template

Save as `DOCUMENT_DIR/tests/traceability-matrix.md`.

---

```markdown
# Traceability Matrix

## Acceptance Criteria Coverage

| AC ID | Acceptance Criterion | Test IDs | Coverage |
|-------|---------------------|----------|----------|
| AC-01 | [criterion text] | FT-P-01, NFT-PERF-01 | Covered |
| AC-02 | [criterion text] | FT-P-02, FT-N-01 | Covered |
| AC-03 | [criterion text] | — | NOT COVERED — [reason and mitigation] |

## Restrictions Coverage

| Restriction ID | Restriction | Test IDs | Coverage |
|---------------|-------------|----------|----------|
| RESTRICT-01 | [restriction text] | FT-N-02, NFT-RES-LIM-01 | Covered |
| RESTRICT-02 | [restriction text] | — | NOT COVERED — [reason and mitigation] |

## Coverage Summary

| Category | Total Items | Covered | Not Covered | Coverage % |
|----------|-----------|---------|-------------|-----------|
| Acceptance Criteria | [N] | [N] | [N] | [%] |
| Restrictions | [N] | [N] | [N] | [%] |
| **Total** | [N] | [N] | [N] | [%] |

## Uncovered Items Analysis

| Item | Reason Not Covered | Risk | Mitigation |
|------|-------------------|------|-----------|
| [AC/Restriction ID] | [why it cannot be tested at blackbox level] | [what could go wrong] | [how risk is addressed — e.g., covered by component tests in Step 5] |
```

---

## Guidance Notes

- Every acceptance criterion must appear in the matrix — either covered or explicitly marked as not covered with a reason.
- Every restriction must appear in the matrix.
- NOT COVERED items must have a reason and a mitigation strategy (e.g., "covered at component test level" or "requires real hardware").
- Coverage percentage should be at least 75% for acceptance criteria at the blackbox test level.