mirror of
https://github.com/azaion/gps-denied-desktop.git
synced 2026-04-23 01:26:36 +00:00
Update decomposition skill documentation and templates; remove obsolete feature specification and initial structure templates. Enhance task decomposition descriptions and adjust directory paths for project documentation.
+55
-293
@@ -1,19 +1,21 @@
---
name: plan
description: |
-Decompose a solution into architecture, system flows, components, tests, and Jira epics.
-Systematic 5-step planning workflow with BLOCKING gates, self-verification, and structured artifact management.
-Supports project mode (_docs/ + _docs/02_plans/ structure) and standalone mode (@file.md).
+Decompose a solution into architecture, data model, deployment plan, system flows, components, tests, and Jira epics.
+Systematic 6-step planning workflow with BLOCKING gates, self-verification, and structured artifact management.
+Uses _docs/ + _docs/02_document/ structure.
Trigger phrases:
- "plan", "decompose solution", "architecture planning"
- "break down the solution", "create planning documents"
- "component decomposition", "solution analysis"
category: build
tags: [planning, architecture, components, testing, jira, epics]
disable-model-invocation: true
---

# Solution Planning

-Decompose a problem and solution into architecture, system flows, components, tests, and Jira epics through a systematic 5-step workflow.
+Decompose a problem and solution into architecture, data model, deployment plan, system flows, components, tests, and Jira epics through a systematic 6-step workflow.

## Core Principles
@@ -25,329 +27,83 @@ Decompose a problem and solution into architecture, system flows, components, te

## Context Resolution

-Determine the operating mode based on invocation before any other logic runs.
+Fixed paths — no mode detection needed:

-**Project mode** (no explicit input file provided):
-- PROBLEM_FILE: `_docs/00_problem/problem.md`
-- SOLUTION_FILE: `_docs/01_solution/solution.md`
-- PLANS_DIR: `_docs/02_plans/`
-- All existing guardrails apply as-is.
+- DOCUMENT_DIR: `_docs/02_document/`

-**Standalone mode** (explicit input file provided, e.g. `/plan @some_doc.md`):
-- INPUT_FILE: the provided file (treated as combined problem + solution context)
-- Derive `<topic>` from the input filename (without extension)
-- PLANS_DIR: `_standalone/<topic>/plans/`
-- Guardrails relaxed: only INPUT_FILE must exist and be non-empty
-- `acceptance_criteria.md` and `restrictions.md` are optional — warn if absent
+Announce the resolved paths to the user before proceeding.

-Announce the detected mode and resolved paths to the user before proceeding.

-## Input Specification

-### Required Files

-**Project mode:**
+## Required Files

| File | Purpose |
|------|---------|
-| PROBLEM_FILE (`_docs/00_problem/problem.md`) | Problem description and context |
-| `_docs/00_problem/input_data/` | Reference data examples (if available) |
-| `_docs/00_problem/restrictions.md` | Constraints and limitations (if available) |
-| `_docs/00_problem/acceptance_criteria.md` | Measurable acceptance criteria (if available) |
-| SOLUTION_FILE (`_docs/01_solution/solution.md`) | Solution draft to decompose |
+| `_docs/00_problem/problem.md` | Problem description and context |
+| `_docs/00_problem/acceptance_criteria.md` | Measurable acceptance criteria |
+| `_docs/00_problem/restrictions.md` | Constraints and limitations |
+| `_docs/00_problem/input_data/` | Reference data examples |
+| `_docs/01_solution/solution.md` | Finalized solution to decompose |

-**Standalone mode:**
+## Prerequisites

-| File | Purpose |
-|------|---------|
-| INPUT_FILE (the provided file) | Combined problem + solution context |

-### Prerequisite Checks (BLOCKING)

-**Project mode:**
-1. PROBLEM_FILE exists and is non-empty — **STOP if missing**
-2. SOLUTION_FILE exists and is non-empty — **STOP if missing**
-3. Create PLANS_DIR if it does not exist
-4. If `PLANS_DIR/<topic>/` already exists, ask user: **resume from last checkpoint or start fresh?**

-**Standalone mode:**
-1. INPUT_FILE exists and is non-empty — **STOP if missing**
-2. Warn if no `restrictions.md` or `acceptance_criteria.md` provided alongside INPUT_FILE
-3. Create PLANS_DIR if it does not exist
-4. If `PLANS_DIR/<topic>/` already exists, ask user: **resume from last checkpoint or start fresh?**
+Read and follow `steps/00_prerequisites.md`. All three prerequisite checks are **BLOCKING** — do not start the workflow until they pass.

-## Artifact Management

-### Directory Structure

-At the start of planning, create a topic-named working directory under PLANS_DIR:

-```
-PLANS_DIR/<topic>/
-├── architecture.md
-├── system-flows.md
-├── risk_mitigations.md
-├── risk_mitigations_02.md (iterative, ## as sequence)
-├── components/
-│ ├── 01_[name]/
-│ │ ├── description.md
-│ │ └── tests.md
-│ ├── 02_[name]/
-│ │ ├── description.md
-│ │ └── tests.md
-│ └── ...
-├── common-helpers/
-│ ├── 01_helper_[name]/
-│ ├── 02_helper_[name]/
-│ └── ...
-├── e2e_test_infrastructure.md
-├── diagrams/
-│ ├── components.drawio
-│ └── flows/
-│ ├── flow_[name].md (Mermaid)
-│ └── ...
-└── FINAL_report.md
-```

-### Save Timing

-| Step | Save immediately after | Filename |
-|------|------------------------|----------|
-| Step 1 | Architecture analysis complete | `architecture.md` |
-| Step 1 | System flows documented | `system-flows.md` |
-| Step 2 | Each component analyzed | `components/[##]_[name]/description.md` |
-| Step 2 | Common helpers generated | `common-helpers/[##]_helper_[name].md` |
-| Step 2 | Diagrams generated | `diagrams/` |
-| Step 3 | Risk assessment complete | `risk_mitigations.md` |
-| Step 4 | Tests written per component | `components/[##]_[name]/tests.md` |
-| Step 4b | E2E test infrastructure spec | `e2e_test_infrastructure.md` |
-| Step 5 | Epics created in Jira | Jira via MCP |
-| Final | All steps complete | `FINAL_report.md` |

-### Save Principles

-1. **Save immediately**: write to disk as soon as a step completes; do not wait until the end
-2. **Incremental updates**: same file can be updated multiple times; append or replace
-3. **Preserve process**: keep all intermediate files even after integration into final report
-4. **Enable recovery**: if interrupted, resume from the last saved artifact (see Resumability)

-### Resumability

-If `PLANS_DIR/<topic>/` already contains artifacts:

-1. List existing files and match them to the save timing table above
-2. Identify the last completed step based on which artifacts exist
-3. Resume from the next incomplete step
-4. Inform the user which steps are being skipped
+Read `steps/01_artifact-management.md` for directory structure, save timing, save principles, and resumability rules. Refer to it throughout the workflow.

## Progress Tracking

-At the start of execution, create a TodoWrite with all steps (1 through 5, including 4b). Update status as each step completes.
+At the start of execution, create a TodoWrite with all steps (1 through 6 plus Final). Update status as each step completes.

## Workflow

-### Step 1: Solution Analysis
+### Step 1: Blackbox Tests

-**Role**: Professional software architect
-**Goal**: Produce `architecture.md` and `system-flows.md` from the solution draft
-**Constraints**: No code, no component-level detail yet; focus on system-level view
+Read and execute `.cursor/skills/test-spec/SKILL.md`.

-1. Read all input files thoroughly
-2. Research unknown or questionable topics via internet; ask user about ambiguities
-3. Document architecture using `templates/architecture.md` as structure
-4. Document system flows using `templates/system-flows.md` as structure

-**Self-verification**:
-- [ ] Architecture covers all capabilities mentioned in solution.md
-- [ ] System flows cover all main user/system interactions
-- [ ] No contradictions with problem.md or restrictions.md
-- [ ] Technology choices are justified

-**Save action**: Write `architecture.md` and `system-flows.md`

-**BLOCKING**: Present architecture summary to user. Do NOT proceed until user confirms.
+Capture any new questions, findings, or insights that arise during test specification — these feed forward into Steps 2 and 3.

---

-### Step 2: Component Decomposition
+### Step 2: Solution Analysis

-**Role**: Professional software architect
-**Goal**: Decompose the architecture into components with detailed specs
-**Constraints**: No code; only names, interfaces, inputs/outputs. Follow SRP strictly.

-1. Identify components from the architecture; think about separation, reusability, and communication patterns
-2. If additional components are needed (data preparation, shared helpers), create them
-3. For each component, write a spec using `templates/component-spec.md` as structure
-4. Generate diagrams:
-- draw.io component diagram showing relations (minimize line intersections, group semantically coherent components, place external users near their components)
-- Mermaid flowchart per main control flow
-5. Multiple components can share and reuse the same common logic; such shared logic is specified in the common-helpers folder.

-**Self-verification**:
-- [ ] Each component has a single, clear responsibility
-- [ ] No functionality is spread across multiple components
-- [ ] All inter-component interfaces are defined (who calls whom, with what)
-- [ ] Component dependency graph has no circular dependencies
-- [ ] All components from architecture.md are accounted for

-**Save action**: Write:
-- each component `components/[##]_[name]/description.md`
-- common helper `common-helpers/[##]_helper_[name].md`
-- diagrams `diagrams/`

-**BLOCKING**: Present component list with one-line summaries to user. Do NOT proceed until user confirms.
+Read and follow `steps/02_solution-analysis.md`.

---

-### Step 3: Architecture Review & Risk Assessment
+### Step 3: Component Decomposition

-**Role**: Professional software architect and analyst
-**Goal**: Validate all artifacts for consistency, then identify and mitigate risks
-**Constraints**: This is a review step — fix problems found, do not add new features

-#### 3a. Evaluator Pass (re-read ALL artifacts)

-Review checklist:
-- [ ] All components follow Single Responsibility Principle
-- [ ] All components follow dumb code / smart data principle
-- [ ] Inter-component interfaces are consistent (caller's output matches callee's input)
-- [ ] No circular dependencies in the dependency graph
-- [ ] No missing interactions between components
-- [ ] No over-engineering — is there a simpler decomposition?
-- [ ] Security considerations addressed in component design
-- [ ] Performance bottlenecks identified
-- [ ] API contracts are consistent across components

-Fix any issues found before proceeding to risk identification.

-#### 3b. Risk Identification

-1. Identify technical and project risks
-2. Assess probability and impact using `templates/risk-register.md`
-3. Define mitigation strategies
-4. Apply mitigations to architecture, flows, and component documents where applicable
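The probability-and-impact assessment in item 2 above could be ranked with a small helper like this sketch (the level names and numeric weights are assumptions, not part of the risk-register template):

```python
# Hypothetical risk-ranking helper: score each risk as probability x impact
# so High/Critical items surface first for mitigation planning.
LEVELS = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def rank_risks(risks):
    """risks: list of (name, probability, impact); returns names, worst first."""
    return [name for name, prob, impact in
            sorted(risks, key=lambda r: LEVELS[r[1]] * LEVELS[r[2]],
                   reverse=True)]
```

Any scoring scheme works as long as it orders the register consistently between iterations.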

-**Self-verification**:
-- [ ] Every High/Critical risk has a concrete mitigation strategy
-- [ ] Mitigations are reflected in the relevant component or architecture docs
-- [ ] No new risks introduced by the mitigations themselves

-**Save action**: Write `risk_mitigations.md`

-**BLOCKING**: Present risk summary to user. Ask whether assessment is sufficient.

-**Iterative**: If user requests another round, repeat Step 3 and write `risk_mitigations_##.md` (## as sequence number). Continue until user confirms.
+Read and follow `steps/03_component-decomposition.md`.

---

-### Step 4: Test Specifications
+### Step 4: Architecture Review & Risk Assessment

-**Role**: Professional Quality Assurance Engineer
-**Goal**: Write test specs for each component achieving minimum 75% acceptance criteria coverage
-**Constraints**: Test specs only — no test code. Each test must trace to an acceptance criterion.

-1. For each component, write tests using `templates/test-spec.md` as structure
-2. Cover all 4 types: integration, performance, security, acceptance
-3. Include test data management (setup, teardown, isolation)
-4. Verify traceability: every acceptance criterion from `acceptance_criteria.md` must be covered by at least one test
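The traceability rule in item 4 amounts to a set-coverage check; a minimal sketch (the AC IDs and test names are illustrative):

```python
# Hypothetical traceability check: every acceptance criterion ID must be
# referenced by at least one test spec.
def uncovered_criteria(criteria, tests):
    """criteria: list of AC IDs; tests: mapping test name -> AC IDs it covers.
    Returns the criteria no test references."""
    covered = {ac for ac_ids in tests.values() for ac in ac_ids}
    return [ac for ac in criteria if ac not in covered]
```

An empty result means every criterion is covered; anything else names the gaps to close before moving on.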

-**Self-verification**:
-- [ ] Every acceptance criterion has at least one test covering it
-- [ ] Test inputs are realistic and well-defined
-- [ ] Expected results are specific and measurable
-- [ ] No component is left without tests

-**Save action**: Write each `components/[##]_[name]/tests.md`
+Read and follow `steps/04_review-risk.md`.

---

-### Step 4b: E2E Black-Box Test Infrastructure
+### Step 5: Test Specifications

-**Role**: Professional Quality Assurance Engineer
-**Goal**: Specify a separate consumer application and Docker environment for black-box end-to-end testing of the main system
-**Constraints**: Spec only — no test code. Consumer must treat the main system as a black box (no internal imports, no direct DB access).

-1. Define Docker environment: services (system under test, test DB, consumer app, dependencies), networks, volumes
-2. Specify consumer application: tech stack, entry point, communication interfaces with the main system
-3. Define E2E test scenarios from acceptance criteria — focus on critical end-to-end use cases that cross component boundaries
-4. Specify test data management: seed data, isolation strategy, external dependency mocks
-5. Define CI/CD integration: when to run, gate behavior, timeout
-6. Define reporting format (CSV: test ID, name, execution time, result, error message)

-Use `templates/e2e-test-infrastructure.md` as structure.
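A report in the CSV shape named in item 6 could be produced like this sketch (the exact column headers are assumptions; the spec only lists the five fields):

```python
import csv
import io

# Hypothetical E2E report writer for the columns listed in item 6:
# test ID, name, execution time, result, error message.
FIELDS = ["test_id", "name", "execution_time_s", "result", "error_message"]

def write_report(rows):
    """rows: list of dicts keyed by FIELDS; returns the CSV text."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```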

-**Self-verification**:
-- [ ] Critical acceptance criteria are covered by at least one E2E scenario
-- [ ] Consumer app has no direct access to system internals
-- [ ] Docker environment is self-contained (`docker compose up` sufficient)
-- [ ] External dependencies have mock/stub services defined

-**Save action**: Write `e2e_test_infrastructure.md`
+Read and follow `steps/05_test-specifications.md`.

---

-### Step 5: Jira Epics
+### Step 6: Jira Epics

-**Role**: Professional product manager
-**Goal**: Create Jira epics from components, ordered by dependency
-**Constraints**: Be concise — fewer words with the same meaning is better

-1. Generate Jira Epics from components using Jira MCP, structured per `templates/epic-spec.md`
-2. Order epics by dependency (which must be done first)
-3. Include effort estimation per epic (T-shirt size or story points range)
-4. Ensure each epic has clear acceptance criteria cross-referenced with component specs
-5. Generate updated draw.io diagram showing component-to-epic mapping
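The dependency ordering in item 2 is a topological sort; a minimal sketch using the standard library (the epic names are illustrative):

```python
from graphlib import TopologicalSorter

# Hypothetical ordering helper: an epic is scheduled only after every epic it
# depends on. TopologicalSorter raises CycleError on circular dependencies,
# which matches the "no circular dependencies" checks earlier in the workflow.
def epic_order(deps):
    """deps: mapping epic -> set of epics it depends on; returns a valid order."""
    return list(TopologicalSorter(deps).static_order())
```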

-**Self-verification**:
-- [ ] Every component maps to exactly one epic
-- [ ] Dependency order is respected (no epic depends on a later one)
-- [ ] Acceptance criteria are measurable
-- [ ] Effort estimates are realistic

-**Save action**: Epics created in Jira via MCP
+Read and follow `steps/06_jira-epics.md`.

---
-## Quality Checklist (before FINAL_report.md)
+### Final: Quality Checklist

-Before writing the final report, verify ALL of the following:

-### Architecture
-- [ ] Covers all capabilities from solution.md
-- [ ] Technology choices are justified
-- [ ] Deployment model is defined

-### Components
-- [ ] Every component follows SRP
-- [ ] No circular dependencies
-- [ ] All inter-component interfaces are defined and consistent
-- [ ] No orphan components (unused by any flow)

-### Risks
-- [ ] All High/Critical risks have mitigations
-- [ ] Mitigations are reflected in component/architecture docs
-- [ ] User has confirmed risk assessment is sufficient

-### Tests
-- [ ] Every acceptance criterion is covered by at least one test
-- [ ] All 4 test types are represented per component (where applicable)
-- [ ] Test data management is defined

-### E2E Test Infrastructure
-- [ ] Critical use cases covered by E2E scenarios
-- [ ] Docker environment is self-contained
-- [ ] Consumer app treats main system as black box
-- [ ] CI/CD integration and reporting defined

-### Epics
-- [ ] Every component maps to an epic
-- [ ] Dependency order is correct
-- [ ] Acceptance criteria are measurable

-**Save action**: Write `FINAL_report.md` using `templates/final-report.md` as structure
+Read and follow `steps/07_quality-checklist.md`.

## Common Mistakes

+- **Proceeding without input data**: all three data gate items (acceptance_criteria, restrictions, input_data) must be present before any planning begins
- **Coding during planning**: this workflow produces documents, never code
- **Multi-responsibility components**: if a component does two things, split it
- **Skipping BLOCKING gates**: never proceed past a BLOCKING marker without user confirmation
@@ -355,13 +111,15 @@ Before writing the final report, verify ALL of the following:

- **Copy-pasting problem.md**: the architecture doc should analyze and transform, not repeat the input
- **Vague interfaces**: "component A talks to component B" is not enough; define the method, input, output
- **Ignoring restrictions.md**: every constraint must be traceable in the architecture or risk register
+- **Ignoring blackbox test findings**: insights from Step 1 must feed into architecture (Step 2) and component decomposition (Step 3)

## Escalation Rules

| Situation | Action |
|-----------|--------|
+| Missing acceptance_criteria.md, restrictions.md, or input_data/ | **STOP** — planning cannot proceed |
| Ambiguous requirements | ASK user |
| Missing acceptance criteria | ASK user |
| Input data coverage below 70% | Search internet for supplementary data, ASK user to validate |
| Technology choice with multiple valid options | ASK user |
| Component naming | PROCEED, confirm at next BLOCKING gate |
| File structure within templates | PROCEED |
@@ -372,22 +130,26 @@ Before writing the final report, verify ALL of the following:

```
┌────────────────────────────────────────────────────────────────┐
-│ Solution Planning (5-Step Method) │
+│ Solution Planning (6-Step + Final) │
├────────────────────────────────────────────────────────────────┤
-│ CONTEXT: Resolve mode (project vs standalone) + set paths │
-│ 1. Solution Analysis → architecture.md, system-flows.md │
+│ PREREQ: Data Gate (BLOCKING) │
+│ → verify AC, restrictions, input_data, solution exist │
│ │
+│ 1. Blackbox Tests → test-spec/SKILL.md │
+│ [BLOCKING: user confirms test coverage] │
+│ 2. Solution Analysis → architecture, data model, deployment │
│ [BLOCKING: user confirms architecture] │
-│ 2. Component Decompose → components/[##]_[name]/description │
-│ [BLOCKING: user confirms decomposition] │
-│ 3. Review & Risk Assess → risk_mitigations.md │
-│ [BLOCKING: user confirms risks, iterative] │
-│ 4. Test Specifications → components/[##]_[name]/tests.md │
-│ 4b.E2E Test Infra → e2e_test_infrastructure.md │
-│ 5. Jira Epics → Jira via MCP │
+│ 3. Component Decomp → component specs + interfaces │
+│ [BLOCKING: user confirms components] │
+│ 4. Review & Risk → risk register, iterations │
+│ [BLOCKING: user confirms mitigations] │
+│ 5. Test Specifications → per-component test specs │
+│ 6. Jira Epics → epic per component + bootstrap │
│ ───────────────────────────────────────────────── │
-│ Quality Checklist → FINAL_report.md │
+│ Final: Quality Checklist → FINAL_report.md │
├────────────────────────────────────────────────────────────────┤
-│ Principles: SRP · Dumb code/smart data · Save immediately │
-│ Ask don't assume · Plan don't code │
+│ Principles: Single Responsibility · Dumb code, smart data │
+│ Save immediately · Ask don't assume │
+│ Plan don't code │
└────────────────────────────────────────────────────────────────┘
```
@@ -0,0 +1,27 @@
+## Prerequisite Checks (BLOCKING)

+Run sequentially before any planning step:

+### Prereq 1: Data Gate

+1. `_docs/00_problem/acceptance_criteria.md` exists and is non-empty — **STOP if missing**
+2. `_docs/00_problem/restrictions.md` exists and is non-empty — **STOP if missing**
+3. `_docs/00_problem/input_data/` exists and contains at least one data file — **STOP if missing**
+4. `_docs/00_problem/problem.md` exists and is non-empty — **STOP if missing**

+All four are mandatory. If any is missing or empty, STOP and ask the user to provide them. If the user cannot provide the required data, planning cannot proceed — just stop.

+### Prereq 2: Finalize Solution Draft

+Only runs after the Data Gate passes:

+1. Scan `_docs/01_solution/` for files matching `solution_draft*.md`
+2. Identify the highest-numbered draft (e.g. `solution_draft06.md`)
+3. **Rename** it to `_docs/01_solution/solution.md`
+4. If `solution.md` already exists, ask the user whether to overwrite or keep existing
+5. Verify `solution.md` is non-empty — **STOP if missing or empty**

+### Prereq 3: Workspace Setup

+1. Create DOCUMENT_DIR if it does not exist
+2. If DOCUMENT_DIR already contains artifacts, ask user: **resume from last checkpoint or start fresh?**
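The Data Gate above can be sketched as a single check that collects every failure before stopping (the paths come from the spec; the helper name is an assumption):

```python
from pathlib import Path

# Hypothetical BLOCKING data gate: every required input must exist and be
# non-empty before any planning step runs; any failure stops the workflow.
REQUIRED_FILES = [
    "_docs/00_problem/acceptance_criteria.md",
    "_docs/00_problem/restrictions.md",
    "_docs/00_problem/problem.md",
]

def data_gate(root):
    """Return a list of failures; an empty list means the gate passes."""
    root = Path(root)
    failures = []
    for rel in REQUIRED_FILES:
        f = root / rel
        if not f.is_file() or f.stat().st_size == 0:
            failures.append(f"missing or empty: {rel}")
    input_dir = root / "_docs/00_problem/input_data"
    if not input_dir.is_dir() or not any(input_dir.iterdir()):
        failures.append("missing or empty: _docs/00_problem/input_data/")
    return failures
```

Reporting all failures at once lets the user fix every missing input in a single round instead of hitting the gate repeatedly.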

@@ -0,0 +1,87 @@
+## Artifact Management

+### Directory Structure

+All artifacts are written directly under DOCUMENT_DIR:

+```
+DOCUMENT_DIR/
+├── tests/
+│ ├── environment.md
+│ ├── test-data.md
+│ ├── blackbox-tests.md
+│ ├── performance-tests.md
+│ ├── resilience-tests.md
+│ ├── security-tests.md
+│ ├── resource-limit-tests.md
+│ └── traceability-matrix.md
+├── architecture.md
+├── system-flows.md
+├── data_model.md
+├── deployment/
+│ ├── containerization.md
+│ ├── ci_cd_pipeline.md
+│ ├── environment_strategy.md
+│ ├── observability.md
+│ └── deployment_procedures.md
+├── risk_mitigations.md
+├── risk_mitigations_02.md (iterative, ## as sequence)
+├── components/
+│ ├── 01_[name]/
+│ │ ├── description.md
+│ │ └── tests.md
+│ ├── 02_[name]/
+│ │ ├── description.md
+│ │ └── tests.md
+│ └── ...
+├── common-helpers/
+│ ├── 01_helper_[name]/
+│ ├── 02_helper_[name]/
+│ └── ...
+├── diagrams/
+│ ├── components.drawio
+│ └── flows/
+│ ├── flow_[name].md (Mermaid)
+│ └── ...
+└── FINAL_report.md
+```
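The fixed layout above can be pre-created with a small helper; a sketch (the subdirectory list mirrors the tree, and the function name is an assumption):

```python
from pathlib import Path

# Hypothetical helper that pre-creates the fixed DOCUMENT_DIR skeleton shown
# above; the leaf .md files are written later, as each step completes.
SUBDIRS = ["tests", "deployment", "components", "common-helpers",
           "diagrams/flows"]

def create_skeleton(document_dir):
    """Create every subdirectory; returns the created paths."""
    created = []
    for sub in SUBDIRS:
        path = Path(document_dir) / sub
        path.mkdir(parents=True, exist_ok=True)
        created.append(path)
    return created
```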

+### Save Timing

+| Step | Save immediately after | Filename |
+|------|------------------------|----------|
+| Step 1 | Blackbox test environment spec | `tests/environment.md` |
+| Step 1 | Blackbox test data spec | `tests/test-data.md` |
+| Step 1 | Blackbox tests | `tests/blackbox-tests.md` |
+| Step 1 | Blackbox performance tests | `tests/performance-tests.md` |
+| Step 1 | Blackbox resilience tests | `tests/resilience-tests.md` |
+| Step 1 | Blackbox security tests | `tests/security-tests.md` |
+| Step 1 | Blackbox resource limit tests | `tests/resource-limit-tests.md` |
+| Step 1 | Blackbox traceability matrix | `tests/traceability-matrix.md` |
+| Step 2 | Architecture analysis complete | `architecture.md` |
+| Step 2 | System flows documented | `system-flows.md` |
+| Step 2 | Data model documented | `data_model.md` |
+| Step 2 | Deployment plan complete | `deployment/` (5 files) |
+| Step 3 | Each component analyzed | `components/[##]_[name]/description.md` |
+| Step 3 | Common helpers generated | `common-helpers/[##]_helper_[name].md` |
+| Step 3 | Diagrams generated | `diagrams/` |
+| Step 4 | Risk assessment complete | `risk_mitigations.md` |
+| Step 5 | Tests written per component | `components/[##]_[name]/tests.md` |
+| Step 6 | Epics created in Jira | Jira via MCP |
+| Final | All steps complete | `FINAL_report.md` |

+### Save Principles

+1. **Save immediately**: write to disk as soon as a step completes; do not wait until the end
+2. **Incremental updates**: same file can be updated multiple times; append or replace
+3. **Preserve process**: keep all intermediate files even after integration into final report
+4. **Enable recovery**: if interrupted, resume from the last saved artifact (see Resumability)

+### Resumability

+If DOCUMENT_DIR already contains artifacts:

+1. List existing files and match them to the save timing table above
+2. Identify the last completed step based on which artifacts exist
+3. Resume from the next incomplete step
+4. Inform the user which steps are being skipped
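The resumability rule above can be sketched as: map the save-timing table to an ordered list of marker artifacts, then resume at the first step whose marker is missing (the choice of one marker per step is an assumption; any step artifact would do):

```python
# Hypothetical resume-point finder: infer the last completed step from which
# marker artifacts already exist in DOCUMENT_DIR, then resume at the next one.
SAVE_TIMING = [  # (step, marker artifact relative to DOCUMENT_DIR)
    ("Step 1", "tests/traceability-matrix.md"),
    ("Step 2", "architecture.md"),
    ("Step 3", "components"),
    ("Step 4", "risk_mitigations.md"),
    ("Final", "FINAL_report.md"),
]

def resume_point(existing):
    """existing: set of artifact paths present; returns first incomplete step."""
    for step, marker in SAVE_TIMING:
        if marker not in existing:
            return step
    return "done"
```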

@@ -0,0 +1,74 @@
+## Step 2: Solution Analysis

+**Role**: Professional software architect
+**Goal**: Produce `architecture.md`, `system-flows.md`, `data_model.md`, and `deployment/` from the solution draft
+**Constraints**: No code, no component-level detail yet; focus on system-level view

+### Phase 2a: Architecture & Flows

+1. Read all input files thoroughly
+2. Incorporate findings, questions, and insights discovered during Step 1 (blackbox tests)
+3. Research unknown or questionable topics via internet; ask user about ambiguities
+4. Document architecture using `templates/architecture.md` as structure
+5. Document system flows using `templates/system-flows.md` as structure

+**Self-verification**:
+- [ ] Architecture covers all capabilities mentioned in solution.md
+- [ ] System flows cover all main user/system interactions
+- [ ] No contradictions with problem.md or restrictions.md
+- [ ] Technology choices are justified
+- [ ] Blackbox test findings are reflected in architecture decisions

+**Save action**: Write `architecture.md` and `system-flows.md`

+**BLOCKING**: Present architecture summary to user. Do NOT proceed until user confirms.

+### Phase 2b: Data Model

+**Role**: Professional software architect
+**Goal**: Produce a detailed data model document covering entities, relationships, and migration strategy

+1. Extract core entities from architecture.md and solution.md
+2. Define entity attributes, types, and constraints
+3. Define relationships between entities (Mermaid ERD)
+4. Define migration strategy: versioning tool (EF Core migrations / Alembic / sql-migrate), reversibility requirement, naming convention
+5. Define seed data requirements per environment (dev, staging)
+6. Define backward compatibility approach for schema changes (additive-only by default)
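The additive-only rule in item 6 can be checked mechanically; a sketch (the column-to-type mappings are illustrative, not part of the template):

```python
# Hypothetical backward-compatibility check for the additive-only rule:
# a new schema version may add columns, but never drop or retype existing ones.
def breaking_changes(old, new):
    """old/new: mapping column name -> type for one entity; returns violations."""
    problems = [f"dropped column: {col}" for col in old if col not in new]
    problems += [f"retyped column: {col}"
                 for col in old if col in new and old[col] != new[col]]
    return problems
```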

+**Self-verification**:
+- [ ] Every entity mentioned in architecture.md is defined
+- [ ] Relationships are explicit with cardinality
+- [ ] Migration strategy specifies reversibility requirement
+- [ ] Seed data requirements defined
+- [ ] Backward compatibility approach documented

+**Save action**: Write `data_model.md`

+### Phase 2c: Deployment Planning

+**Role**: DevOps / Platform engineer
+**Goal**: Produce deployment plan covering containerization, CI/CD, environment strategy, observability, and deployment procedures

+Use the `/deploy` skill's templates as structure for each artifact:

+1. Read architecture.md and restrictions.md for infrastructure constraints
+2. Research Docker best practices for the project's tech stack
+3. Define containerization plan: Dockerfile per component, docker-compose for dev and tests
+4. Define CI/CD pipeline: stages, quality gates, caching, parallelization
+5. Define environment strategy: dev, staging, production with secrets management
+6. Define observability: structured logging, metrics, tracing, alerting
+7. Define deployment procedures: strategy, health checks, rollback, checklist

+**Self-verification**:
+- [ ] Every component has a Docker specification
+- [ ] CI/CD pipeline covers lint, test, security, build, deploy
+- [ ] Environment strategy covers dev, staging, production
+- [ ] Observability covers logging, metrics, tracing, alerting
+- [ ] Deployment procedures include rollback and health checks

+**Save action**: Write all 5 files under `deployment/`:
+- `containerization.md`
+- `ci_cd_pipeline.md`
+- `environment_strategy.md`
+- `observability.md`
+- `deployment_procedures.md`
@@ -0,0 +1,29 @@
|
||||
## Step 3: Component Decomposition

**Role**: Professional software architect
**Goal**: Decompose the architecture into components with detailed specs
**Constraints**: No code; only names, interfaces, inputs/outputs. Follow SRP strictly.

1. Identify components from the architecture; think about separation, reusability, and communication patterns
2. Use blackbox test scenarios from Step 1 to validate component boundaries
3. If additional components are needed (data preparation, shared helpers), create them
4. For each component, write a spec using `templates/component-spec.md` as structure
5. Generate diagrams:
   - draw.io component diagram showing relations (minimize line intersections, group semantically coherent components, place external users near their components)
   - Mermaid flowchart per main control flow
6. Components may share and reuse the same common logic; extract each such occurrence into the `common-helpers/` folder rather than duplicating it across components
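The per-flow Mermaid flowchart asked for in step 5 can be small. A minimal sketch with placeholder component names — the real names come from the architecture:

```mermaid
flowchart LR
    Ingest[01_ingest] --> Match[02_matcher]
    Match --> Fuse[03_fusion]
    Fuse --> Out[04_output_api]
    Helpers[common-helpers] -.-> Match
    Helpers -.-> Fuse
```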
**Self-verification**:

- [ ] Each component has a single, clear responsibility
- [ ] No functionality is spread across multiple components
- [ ] All inter-component interfaces are defined (who calls whom, with what)
- [ ] Component dependency graph has no circular dependencies
- [ ] All components from architecture.md are accounted for
- [ ] Every blackbox test scenario can be traced through component interactions

**Save action**: Write:

- each component `components/[##]_[name]/description.md`
- common helper `common-helpers/[##]_helper_[name].md`
- diagrams `diagrams/`

**BLOCKING**: Present component list with one-line summaries to user. Do NOT proceed until user confirms.
@@ -0,0 +1,38 @@
## Step 4: Architecture Review & Risk Assessment

**Role**: Professional software architect and analyst
**Goal**: Validate all artifacts for consistency, then identify and mitigate risks
**Constraints**: This is a review step — fix problems found, do not add new features

### 4a. Evaluator Pass (re-read ALL artifacts)

Review checklist:
- [ ] All components follow Single Responsibility Principle
- [ ] All components follow dumb code / smart data principle
- [ ] Inter-component interfaces are consistent (caller's output matches callee's input)
- [ ] No circular dependencies in the dependency graph
- [ ] No missing interactions between components
- [ ] No over-engineering — is there a simpler decomposition?
- [ ] Security considerations addressed in component design
- [ ] Performance bottlenecks identified
- [ ] API contracts are consistent across components

Fix any issues found before proceeding to risk identification.

### 4b. Risk Identification

1. Identify technical and project risks
2. Assess probability and impact using `templates/risk-register.md`
3. Define mitigation strategies
4. Apply mitigations to architecture, flows, and component documents where applicable

**Self-verification**:

- [ ] Every High/Critical risk has a concrete mitigation strategy
- [ ] Mitigations are reflected in the relevant component or architecture docs
- [ ] No new risks introduced by the mitigations themselves

**Save action**: Write `risk_mitigations.md`

**BLOCKING**: Present risk summary to user. Ask whether assessment is sufficient.

**Iterative**: If user requests another round, repeat Step 4 and write `risk_mitigations_##.md` (## as sequence number). Continue until user confirms.
@@ -0,0 +1,20 @@
## Step 5: Test Specifications

**Role**: Professional Quality Assurance Engineer

**Goal**: Write test specs for each component achieving minimum 75% acceptance criteria coverage

**Constraints**: Test specs only — no test code. Each test must trace to an acceptance criterion.

1. For each component, write tests using `templates/test-spec.md` as structure
2. Cover all 4 types: integration, performance, security, acceptance
3. Include test data management (setup, teardown, isolation)
4. Verify traceability: every acceptance criterion from `acceptance_criteria.md` must be covered by at least one test

**Self-verification**:

- [ ] Every acceptance criterion has at least one test covering it
- [ ] Test inputs are realistic and well-defined
- [ ] Expected results are specific and measurable
- [ ] No component is left without tests

**Save action**: Write each `components/[##]_[name]/tests.md`
@@ -0,0 +1,48 @@
## Step 6: Work Item Epics

**Role**: Professional product manager

**Goal**: Create epics from components, ordered by dependency

**Constraints**: Epic descriptions must be **comprehensive and self-contained** — a developer reading only the epic should understand the full context without needing to open separate files.

1. **Create "Bootstrap & Initial Structure" epic first** — this epic will parent the `01_initial_structure` task created by the decompose skill. It covers project scaffolding: folder structure, shared models, interfaces, stubs, CI/CD config, DB migrations setup, test structure.
2. **Create "Blackbox Tests" epic** — this epic will parent the blackbox test tasks created by the `/decompose` skill. It covers implementing the test scenarios defined in `tests/`.
3. Generate epics for each component using the configured work item tracker (Jira MCP or Azure DevOps MCP — see `autopilot/protocols.md`), structured per `templates/epic-spec.md`
4. Order epics by dependency (Bootstrap epic is always first, then components based on their dependency graph)
5. Include effort estimation per epic (T-shirt size or story points range)
6. Ensure each epic has clear acceptance criteria cross-referenced with component specs
7. Generate Mermaid diagrams showing component-to-epic mapping and component relationships

**CRITICAL — Epic description richness requirements**:

Each epic description MUST include ALL of the following sections with substantial content:
- **System context**: where this component fits in the overall architecture (include Mermaid diagram showing this component's position and connections)
- **Problem / Context**: what problem this component solves, why it exists, current pain points
- **Scope**: detailed in-scope and out-of-scope lists
- **Architecture notes**: relevant ADRs, technology choices, patterns used, key design decisions
- **Interface specification**: full method signatures, input/output types, error types (from component description.md)
- **Data flow**: how data enters and exits this component (include Mermaid sequence or flowchart diagram)
- **Dependencies**: epic dependencies (with Jira IDs) and external dependencies (libraries, hardware, services)
- **Acceptance criteria**: measurable criteria with specific thresholds (from component tests.md)
- **Non-functional requirements**: latency, memory, throughput targets with failure thresholds
- **Risks & mitigations**: relevant risks from risk_mitigations.md with concrete mitigation strategies
- **Effort estimation**: T-shirt size and story points range
- **Child issues**: planned task breakdown with complexity points
- **Key constraints**: from restrictions.md that affect this component
- **Testing strategy**: summary of test types and coverage from tests.md

Do NOT create minimal epics with just a summary and short description. The epic is the primary reference document for the implementation team.

**Self-verification**:

- [ ] "Bootstrap & Initial Structure" epic exists and is first in order
- [ ] "Blackbox Tests" epic exists
- [ ] Every component maps to exactly one epic
- [ ] Dependency order is respected (no epic depends on a later one)
- [ ] Acceptance criteria are measurable
- [ ] Effort estimates are realistic
- [ ] Every epic description includes architecture diagram, interface spec, data flow, risks, and NFRs
- [ ] Epic descriptions are self-contained — readable without opening other files

**Save action**: Epics created via the configured tracker MCP. Also saved locally in `epics.md` with ticket IDs. If `tracker: local`, save locally only.
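The component-to-epic mapping diagram can be a simple Mermaid graph; a minimal sketch in which all epic and component names are placeholders:

```mermaid
flowchart TD
    subgraph EP0["Epic: Bootstrap & Initial Structure"]
        S[project scaffolding]
    end
    subgraph EP1["Epic: 01_ingest"]
        C1[01_ingest component]
    end
    subgraph EP2["Epic: 02_matcher"]
        C2[02_matcher component]
    end
    EP0 --> EP1
    EP1 --> EP2
```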
@@ -0,0 +1,57 @@
## Quality Checklist (before FINAL_report.md)

Before writing the final report, verify ALL of the following:

### Blackbox Tests
- [ ] Every acceptance criterion is covered in traceability-matrix.md
- [ ] Every restriction is verified by at least one test
- [ ] Positive and negative scenarios are balanced
- [ ] Docker environment is self-contained
- [ ] Consumer app treats main system as black box
- [ ] CI/CD integration and reporting defined

### Architecture
- [ ] Covers all capabilities from solution.md
- [ ] Technology choices are justified
- [ ] Deployment model is defined
- [ ] Blackbox test findings are reflected in architecture decisions

### Data Model
- [ ] Every entity from architecture.md is defined
- [ ] Relationships have explicit cardinality
- [ ] Migration strategy with reversibility requirement
- [ ] Seed data requirements defined
- [ ] Backward compatibility approach documented

### Deployment
- [ ] Containerization plan covers all components
- [ ] CI/CD pipeline includes lint, test, security, build, deploy stages
- [ ] Environment strategy covers dev, staging, production
- [ ] Observability covers logging, metrics, tracing, alerting
- [ ] Deployment procedures include rollback and health checks

### Components
- [ ] Every component follows SRP
- [ ] No circular dependencies
- [ ] All inter-component interfaces are defined and consistent
- [ ] No orphan components (unused by any flow)
- [ ] Every blackbox test scenario can be traced through component interactions

### Risks
- [ ] All High/Critical risks have mitigations
- [ ] Mitigations are reflected in component/architecture docs
- [ ] User has confirmed risk assessment is sufficient

### Tests
- [ ] Every acceptance criterion is covered by at least one test
- [ ] All 4 test types are represented per component (where applicable)
- [ ] Test data management is defined

### Epics
- [ ] "Bootstrap & Initial Structure" epic exists
- [ ] "Blackbox Tests" epic exists
- [ ] Every component maps to an epic
- [ ] Dependency order is correct
- [ ] Acceptance criteria are measurable

**Save action**: Write `FINAL_report.md` using `templates/final-report.md` as structure
@@ -1,6 +1,6 @@
# Architecture Document Template

Use this template for the architecture document. Save as `_docs/02_plans/<topic>/architecture.md`.
Use this template for the architecture document. Save as `_docs/02_document/architecture.md`.

---
@@ -0,0 +1,78 @@
# Blackbox Tests Template

Save as `DOCUMENT_DIR/tests/blackbox-tests.md`.

---

```markdown
# Blackbox Tests

## Positive Scenarios

### FT-P-01: [Scenario Name]

**Summary**: [One sentence: what black-box use case this validates]
**Traces to**: AC-[ID], AC-[ID]
**Category**: [which AC category — e.g., Position Accuracy, Image Processing, etc.]

**Preconditions**:
- [System state required before test]

**Input data**: [reference to specific data set or file from test-data.md]

**Steps**:

| Step | Consumer Action | Expected System Response |
|------|----------------|------------------------|
| 1 | [call / send / provide input] | [response / event / output] |
| 2 | [call / send / provide input] | [response / event / output] |

**Expected outcome**: [specific, measurable result]
**Max execution time**: [e.g., 10s]

---

### FT-P-02: [Scenario Name]

(repeat structure)

---

## Negative Scenarios

### FT-N-01: [Scenario Name]

**Summary**: [One sentence: what invalid/edge input this tests]
**Traces to**: AC-[ID] (negative case), RESTRICT-[ID]
**Category**: [which AC/restriction category]

**Preconditions**:
- [System state required before test]

**Input data**: [reference to specific invalid data or edge case]

**Steps**:

| Step | Consumer Action | Expected System Response |
|------|----------------|------------------------|
| 1 | [provide invalid input / trigger edge case] | [error response / graceful degradation / fallback behavior] |

**Expected outcome**: [system rejects gracefully / falls back to X / returns error Y]
**Max execution time**: [e.g., 5s]

---

### FT-N-02: [Scenario Name]

(repeat structure)
```

---

## Guidance Notes

- Blackbox tests should typically trace to at least one acceptance criterion or restriction. Tests without a trace are allowed but should have a clear justification.
- Positive scenarios validate the system does what it should.
- Negative scenarios validate the system rejects or handles gracefully what it shouldn't accept.
- Expected outcomes must be specific and measurable — not "works correctly" but "returns position within 50m of ground truth."
- Input data references should point to specific entries in test-data.md.
@@ -1,6 +1,6 @@
# Jira Epic Template
# Epic Template

Use this template for each Jira epic. Create epics via Jira MCP.
Use this template for each epic. Create epics via the configured work item tracker (Jira MCP or Azure DevOps MCP).

---
@@ -73,14 +73,14 @@ Link to architecture.md and relevant component spec.]
### Design & Architecture

- Architecture doc: `_docs/02_plans/<topic>/architecture.md`
- Component spec: `_docs/02_plans/<topic>/components/[##]_[name]/description.md`
- System flows: `_docs/02_plans/<topic>/system-flows.md`
- Architecture doc: `_docs/02_document/architecture.md`
- Component spec: `_docs/02_document/components/[##]_[name]/description.md`
- System flows: `_docs/02_document/system-flows.md`

### Definition of Done

- [ ] All in-scope capabilities implemented
- [ ] Automated tests pass (unit + integration + e2e)
- [ ] Automated tests pass (unit + blackbox)
- [ ] Minimum coverage threshold met (75%)
- [ ] Runbooks written (if applicable)
- [ ] Documentation updated
@@ -1,6 +1,6 @@
# Final Planning Report Template

Use this template after completing all 5 steps and the quality checklist. Save as `_docs/02_plans/<topic>/FINAL_report.md`.
Use this template after completing all 6 steps and the quality checklist. Save as `_docs/02_document/FINAL_report.md`.

---
@@ -0,0 +1,35 @@
# Performance Tests Template

Save as `DOCUMENT_DIR/tests/performance-tests.md`.

---

```markdown
# Performance Tests

### NFT-PERF-01: [Test Name]

**Summary**: [What performance characteristic this validates]
**Traces to**: AC-[ID]
**Metric**: [what is measured — latency, throughput, frame rate, etc.]

**Preconditions**:
- [System state, load profile, data volume]

**Steps**:

| Step | Consumer Action | Measurement |
|------|----------------|-------------|
| 1 | [action] | [what to measure and how] |

**Pass criteria**: [specific threshold — e.g., p95 latency < 400ms]
**Duration**: [how long the test runs]
```

---

## Guidance Notes

- Performance tests should run long enough to capture steady-state behavior, not just cold-start.
- Define clear pass/fail thresholds with specific metrics (p50, p95, p99 latency, throughput, etc.).
- Include warm-up preconditions to separate initialization cost from steady-state performance.
@@ -0,0 +1,37 @@
# Resilience Tests Template

Save as `DOCUMENT_DIR/tests/resilience-tests.md`.

---

```markdown
# Resilience Tests

### NFT-RES-01: [Test Name]

**Summary**: [What failure/recovery scenario this validates]
**Traces to**: AC-[ID]

**Preconditions**:
- [System state before fault injection]

**Fault injection**:
- [What fault is introduced — process kill, network partition, invalid input sequence, etc.]

**Steps**:

| Step | Action | Expected Behavior |
|------|--------|------------------|
| 1 | [inject fault] | [system behavior during fault] |
| 2 | [observe recovery] | [system behavior after recovery] |

**Pass criteria**: [recovery time, data integrity, continued operation]
```

---

## Guidance Notes

- Resilience tests must define both the fault and the expected recovery — not just "system should recover."
- Include specific recovery time expectations and data integrity checks.
- Test both graceful degradation (partial failure) and full recovery scenarios.
@@ -0,0 +1,31 @@
# Resource Limit Tests Template

Save as `DOCUMENT_DIR/tests/resource-limit-tests.md`.

---

```markdown
# Resource Limit Tests

### NFT-RES-LIM-01: [Test Name]

**Summary**: [What resource constraint this validates]
**Traces to**: AC-[ID], RESTRICT-[ID]

**Preconditions**:
- [System running under specified constraints]

**Monitoring**:
- [What resources to monitor — memory, CPU, GPU, disk, temperature]

**Duration**: [how long to run]
**Pass criteria**: [resource stays within limit — e.g., memory < 8GB throughout]
```

---

## Guidance Notes

- Resource limit tests must specify monitoring duration — short bursts don't prove sustained compliance.
- Define specific numeric limits that can be programmatically checked.
- Include both the monitoring method and the threshold in the pass criteria.
@@ -1,6 +1,6 @@
# Risk Register Template

Use this template for risk assessment. Save as `_docs/02_plans/<topic>/risk_mitigations.md`.
Use this template for risk assessment. Save as `_docs/02_document/risk_mitigations.md`.
Subsequent iterations: `risk_mitigations_02.md`, `risk_mitigations_03.md`, etc.

---
@@ -0,0 +1,30 @@
# Security Tests Template

Save as `DOCUMENT_DIR/tests/security-tests.md`.

---

```markdown
# Security Tests

### NFT-SEC-01: [Test Name]

**Summary**: [What security property this validates]
**Traces to**: AC-[ID], RESTRICT-[ID]

**Steps**:

| Step | Consumer Action | Expected Response |
|------|----------------|------------------|
| 1 | [attempt unauthorized access / injection / etc.] | [rejection / no data leak / etc.] |

**Pass criteria**: [specific security outcome]
```

---

## Guidance Notes

- Security tests at the blackbox level focus on black-box attacks (unauthorized API calls, malformed input), not code-level vulnerabilities.
- Verify the system remains operational after security-related edge cases (no crash, no hang).
- Test authentication/authorization boundaries from the consumer's perspective.
@@ -1,7 +1,7 @@
# System Flows Template

Use this template for the system flows document. Save as `_docs/02_plans/<topic>/system-flows.md`.
Individual flow diagrams go in `_docs/02_plans/<topic>/diagrams/flows/flow_[name].md`.
Use this template for the system flows document. Save as `_docs/02_document/system-flows.md`.
Individual flow diagrams go in `_docs/02_document/diagrams/flows/flow_[name].md`.

---
@@ -0,0 +1,55 @@
# Test Data Template

Save as `DOCUMENT_DIR/tests/test-data.md`.

---

```markdown
# Test Data Management

## Seed Data Sets

| Data Set | Description | Used by Tests | How Loaded | Cleanup |
|----------|-------------|---------------|-----------|---------|
| [name] | [what it contains] | [test IDs] | [SQL script / API call / fixture file / volume mount] | [how removed after test] |

## Data Isolation Strategy

[e.g., each test run gets a fresh container restart, or transactions are rolled back, or namespaced data, or separate DB per test group]

## Input Data Mapping

| Input Data File | Source Location | Description | Covers Scenarios |
|-----------------|----------------|-------------|-----------------|
| [filename] | `_docs/00_problem/input_data/[filename]` | [what it contains] | [test IDs that use this data] |

## Expected Results Mapping

| Test Scenario ID | Input Data | Expected Result | Comparison Method | Tolerance | Expected Result Source |
|-----------------|------------|-----------------|-------------------|-----------|----------------------|
| [test ID] | `input_data/[filename]` | [quantifiable expected output] | [exact / tolerance / pattern / threshold / file-diff] | [± value or N/A] | `input_data/expected_results/[filename]` or inline |

## External Dependency Mocks

| External Service | Mock/Stub | How Provided | Behavior |
|-----------------|-----------|-------------|----------|
| [service name] | [mock type] | [Docker service / in-process stub / recorded responses] | [what it returns / simulates] |

## Data Validation Rules

| Data Type | Validation | Invalid Examples | Expected System Behavior |
|-----------|-----------|-----------------|------------------------|
| [type] | [rules] | [invalid input examples] | [how system should respond] |
```

---

## Guidance Notes

- Every seed data set should be traceable to specific test scenarios.
- Input data from `_docs/00_problem/input_data/` should be mapped to test scenarios that use it.
- Every input data item MUST have a corresponding expected result in the Expected Results Mapping table.
- Expected results MUST be quantifiable: exact values, numeric tolerances, pattern matches, thresholds, or reference files. "Works correctly" is never acceptable.
- For complex expected outputs, provide machine-readable reference files (JSON, CSV) in `_docs/00_problem/input_data/expected_results/` and reference them in the mapping.
- External mocks must be deterministic — same input always produces same output.
- Data isolation must guarantee no test can affect another test's outcome.
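The comparison methods named in the Expected Results Mapping (exact / tolerance / threshold) can be implemented mechanically. A minimal sketch, assuming numeric outputs; the function name and method labels are illustrative, not part of this skill's contract:

```python
def compare(actual, expected, method="exact", tolerance=0.0):
    """Compare a test's actual output against its expected result.

    method: "exact"     -> values must match exactly
            "tolerance" -> |actual - expected| <= tolerance
            "threshold" -> actual must not exceed expected
    """
    if method == "exact":
        return actual == expected
    if method == "tolerance":
        return abs(actual - expected) <= tolerance
    if method == "threshold":
        return actual <= expected
    raise ValueError(f"unknown comparison method: {method}")
```

Pattern and file-diff comparisons would follow the same shape, dispatching on the method column of the mapping table.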
+6
-57
@@ -1,17 +1,16 @@
# E2E Black-Box Test Infrastructure Template
# Test Environment Template

Describes a separate consumer application that tests the main system as a black box.
Save as `PLANS_DIR/<topic>/e2e_test_infrastructure.md`.
Save as `DOCUMENT_DIR/tests/environment.md`.

---

```markdown
# E2E Test Infrastructure
# Test Environment

## Overview

**System under test**: [main system name and entry points — API URLs, message queues, etc.]
**Consumer app purpose**: Standalone application that exercises the main system through its public interfaces, validating end-to-end use cases without access to internals.
**System under test**: [main system name and entry points — API URLs, message queues, serial ports, etc.]
**Consumer app purpose**: Standalone application that exercises the main system through its public interfaces, validating black-box use cases without access to internals.

## Docker Environment

@@ -22,7 +21,7 @@ Save as `PLANS_DIR/<topic>/e2e_test_infrastructure.md`.
| system-under-test | [main app image or build context] | The main system being tested | [ports] |
| test-db | [postgres/mysql/etc.] | Database for the main system | [ports] |
| e2e-consumer | [build context for consumer app] | Black-box test runner | — |
| [dependency] | [image] | [purpose — cache, queue, etc.] | [ports] |
| [dependency] | [image] | [purpose — cache, queue, mock, etc.] | [ports] |

### Networks

@@ -68,54 +67,6 @@ services:
- No internal module imports
- No shared memory or file system with the main system

## E2E Test Scenarios

### Acceptance Criteria Traceability

| AC ID | Acceptance Criterion | E2E Test IDs | Coverage |
|-------|---------------------|-------------|----------|
| AC-01 | [criterion] | E2E-01 | Covered |
| AC-02 | [criterion] | E2E-02, E2E-03 | Covered |
| AC-03 | [criterion] | — | NOT COVERED — [reason] |

### E2E-01: [Scenario Name]

**Summary**: [One sentence: what end-to-end use case this validates]

**Traces to**: AC-01

**Preconditions**:
- [System state required before test]

**Steps**:

| Step | Consumer Action | Expected System Response |
|------|----------------|------------------------|
| 1 | [call / send] | [response / event] |
| 2 | [call / send] | [response / event] |

**Max execution time**: [e.g., 10s]

---

### E2E-02: [Scenario Name]

(repeat structure)

---

## Test Data Management

**Seed data**:

| Data Set | Description | How Loaded | Cleanup |
|----------|-------------|-----------|---------|
| [name] | [what it contains] | [SQL script / API call / fixture file] | [how removed after test] |

**Isolation strategy**: [e.g., each test run gets a fresh DB via container restart, or transactions are rolled back, or namespaced data]

**External dependencies**: [any external APIs that need mocking or sandbox environments]

## CI/CD Integration

**When to run**: [e.g., on PR merge to dev, nightly, before production deploy]

@@ -134,8 +85,6 @@ services:

## Guidance Notes

- Every E2E test MUST trace to at least one acceptance criterion. If it doesn't, question whether it's needed.
- The consumer app must treat the main system as a true black box — no internal imports, no direct DB queries against the main system's database.
- Keep the number of E2E tests focused on critical use cases. Exhaustive testing belongs in per-component tests (Step 5).
- Docker environment should be self-contained — `docker compose up` must be sufficient to run the full suite.
- If the main system requires external services (payment gateways, third-party APIs), define mock/stub services in the Docker environment.
@@ -17,7 +17,7 @@ Use this template for each component's test spec. Save as `components/[##]_[name
---

## Integration Tests
## Blackbox Tests

### IT-01: [Test Name]

@@ -169,4 +169,4 @@ Use this template for each component's test spec. Save as `components/[##]_[name]
- If an acceptance criterion has no test covering it, mark it as NOT COVERED and explain why (e.g., "requires manual verification", "deferred to phase 2").
- Performance test targets should come from the NFR section in `architecture.md`.
- Security tests should cover at minimum: authentication bypass, authorization escalation, injection attacks relevant to this component.
- Not every component needs all 4 test types. A stateless utility component may only need integration tests.
- Not every component needs all 4 test types. A stateless utility component may only need blackbox tests.
@@ -0,0 +1,47 @@
# Traceability Matrix Template

Save as `DOCUMENT_DIR/tests/traceability-matrix.md`.

---

```markdown
# Traceability Matrix

## Acceptance Criteria Coverage

| AC ID | Acceptance Criterion | Test IDs | Coverage |
|-------|---------------------|----------|----------|
| AC-01 | [criterion text] | FT-P-01, NFT-PERF-01 | Covered |
| AC-02 | [criterion text] | FT-P-02, FT-N-01 | Covered |
| AC-03 | [criterion text] | — | NOT COVERED — [reason and mitigation] |

## Restrictions Coverage

| Restriction ID | Restriction | Test IDs | Coverage |
|---------------|-------------|----------|----------|
| RESTRICT-01 | [restriction text] | FT-N-02, NFT-RES-LIM-01 | Covered |
| RESTRICT-02 | [restriction text] | — | NOT COVERED — [reason and mitigation] |

## Coverage Summary

| Category | Total Items | Covered | Not Covered | Coverage % |
|----------|-----------|---------|-------------|-----------|
| Acceptance Criteria | [N] | [N] | [N] | [%] |
| Restrictions | [N] | [N] | [N] | [%] |
| **Total** | [N] | [N] | [N] | [%] |

## Uncovered Items Analysis

| Item | Reason Not Covered | Risk | Mitigation |
|------|-------------------|------|-----------|
| [AC/Restriction ID] | [why it cannot be tested at blackbox level] | [what could go wrong] | [how risk is addressed — e.g., covered by component tests in Step 5] |
```

---

## Guidance Notes

- Every acceptance criterion must appear in the matrix — either covered or explicitly marked as not covered with a reason.
- Every restriction must appear in the matrix.
- NOT COVERED items must have a reason and a mitigation strategy (e.g., "covered at component test level" or "requires real hardware").
- Coverage percentage should be at least 75% for acceptance criteria at the blackbox test level.
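The Coverage Summary numbers follow directly from the matrix rows. A minimal sketch, assuming the matrix is available as (item_id, covering_test_ids) pairs — the function and field names are illustrative:

```python
def coverage_summary(matrix):
    """Compute coverage stats from traceability rows.

    matrix: iterable of (item_id, list_of_covering_test_ids) pairs.
    Returns (total, covered, not_covered, coverage_percent).
    """
    rows = list(matrix)
    total = len(rows)
    # An item counts as covered when at least one test traces to it.
    covered = sum(1 for _, tests in rows if tests)
    pct = 100.0 * covered / total if total else 0.0
    return total, covered, total - covered, pct
```

A result below the 75% gate for acceptance criteria would fail the quality checklist above.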