---
name: plan
description: >
  Decompose a solution into architecture, data model, deployment plan, system
  flows, components, tests, and Jira epics. Systematic 6-step planning workflow
  with BLOCKING gates, self-verification, and structured artifact management.
  Uses _docs/ + _docs/02_document/ structure. Trigger phrases: "plan",
  "decompose solution", "architecture planning", "break down the solution",
  "create planning documents", "component decomposition", "solution analysis".
category: build
tags: []
disable-model-invocation: true
---
# Solution Planning

Decompose a problem and solution into architecture, data model, deployment plan, system flows, components, tests, and Jira epics through a systematic 6-step workflow.
## Core Principles
- Single Responsibility: each component does one thing well; do not spread related logic across components
- Dumb code, smart data: keep logic simple, push complexity into data structures and configuration
- Save immediately: write artifacts to disk after each step; never accumulate unsaved work
- Ask, don't assume: when requirements are ambiguous, ask the user before proceeding
- Plan, don't code: this workflow produces documents and specs, never implementation code
## Context Resolution

Fixed paths — no mode detection needed:

- PROBLEM_FILE: `_docs/00_problem/problem.md`
- SOLUTION_FILE: `_docs/01_solution/solution.md`
- DOCUMENT_DIR: `_docs/02_document/`
Announce the resolved paths to the user before proceeding.
## Input Specification

### Required Files

| File | Purpose |
|---|---|
| `_docs/00_problem/problem.md` | Problem description and context |
| `_docs/00_problem/acceptance_criteria.md` | Measurable acceptance criteria |
| `_docs/00_problem/restrictions.md` | Constraints and limitations |
| `_docs/00_problem/input_data/` | Reference data examples |
| `_docs/01_solution/solution.md` | Finalized solution to decompose |
## Prerequisite Checks (BLOCKING)

Run sequentially before any planning step:

### Prereq 1: Data Gate

- `_docs/00_problem/acceptance_criteria.md` exists and is non-empty — STOP if missing
- `_docs/00_problem/restrictions.md` exists and is non-empty — STOP if missing
- `_docs/00_problem/input_data/` exists and contains at least one data file — STOP if missing
- `_docs/00_problem/problem.md` exists and is non-empty — STOP if missing

All four are mandatory. If any is missing or empty, STOP and ask the user to provide it. If the user cannot provide the required data, planning cannot proceed — just stop.
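For illustration, the four-item gate can be sketched as a small check. The paths are the fixed ones from Context Resolution; the helper name is an assumption, not part of the spec:

```python
from pathlib import Path

# Fixed paths from Context Resolution; helper name is illustrative.
REQUIRED_FILES = [
    "_docs/00_problem/acceptance_criteria.md",
    "_docs/00_problem/restrictions.md",
    "_docs/00_problem/problem.md",
]
INPUT_DATA_DIR = "_docs/00_problem/input_data"

def data_gate_failures(root: Path) -> list[str]:
    """Return every gate item that is missing or empty (empty list = gate passes)."""
    failures = [
        rel for rel in REQUIRED_FILES
        if not (root / rel).is_file() or (root / rel).stat().st_size == 0
    ]
    data_dir = root / INPUT_DATA_DIR
    # input_data/ must exist and contain at least one data file
    if not data_dir.is_dir() or not any(p.is_file() for p in data_dir.iterdir()):
        failures.append(INPUT_DATA_DIR)
    return failures
```

An empty return means all four items are present and non-empty; any non-empty result is a STOP.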
### Prereq 2: Finalize Solution Draft

Only runs after the Data Gate passes:

- Scan `_docs/01_solution/` for files matching `solution_draft*.md`
- Identify the highest-numbered draft (e.g. `solution_draft06.md`)
- Rename it to `_docs/01_solution/solution.md`
- If `solution.md` already exists, ask the user whether to overwrite or keep the existing file
- Verify `solution.md` is non-empty — STOP if missing or empty
### Prereq 3: Workspace Setup
- Create DOCUMENT_DIR if it does not exist
- If DOCUMENT_DIR already contains artifacts, ask user: resume from last checkpoint or start fresh?
## Artifact Management

### Directory Structure
All artifacts are written directly under DOCUMENT_DIR:

```
DOCUMENT_DIR/
├── integration_tests/
│   ├── environment.md
│   ├── test_data.md
│   ├── functional_tests.md
│   ├── non_functional_tests.md
│   └── traceability_matrix.md
├── architecture.md
├── system-flows.md
├── data_model.md
├── deployment/
│   ├── containerization.md
│   ├── ci_cd_pipeline.md
│   ├── environment_strategy.md
│   ├── observability.md
│   └── deployment_procedures.md
├── risk_mitigations.md
├── risk_mitigations_02.md      (iterative, ## as sequence)
├── components/
│   ├── 01_[name]/
│   │   ├── description.md
│   │   └── tests.md
│   ├── 02_[name]/
│   │   ├── description.md
│   │   └── tests.md
│   └── ...
├── common-helpers/
│   ├── 01_helper_[name]/
│   ├── 02_helper_[name]/
│   └── ...
├── diagrams/
│   ├── components.drawio
│   └── flows/
│       ├── flow_[name].md      (Mermaid)
│       └── ...
└── FINAL_report.md
```
### Save Timing

| Step | Save immediately after | Filename |
|---|---|---|
| Step 1 | Integration test environment spec | integration_tests/environment.md |
| Step 1 | Integration test data spec | integration_tests/test_data.md |
| Step 1 | Integration functional tests | integration_tests/functional_tests.md |
| Step 1 | Integration non-functional tests | integration_tests/non_functional_tests.md |
| Step 1 | Integration traceability matrix | integration_tests/traceability_matrix.md |
| Step 2 | Architecture analysis complete | architecture.md |
| Step 2 | System flows documented | system-flows.md |
| Step 2 | Data model documented | data_model.md |
| Step 2 | Deployment plan complete | deployment/ (5 files) |
| Step 3 | Each component analyzed | components/[##]_[name]/description.md |
| Step 3 | Common helpers generated | common-helpers/[##]_helper_[name].md |
| Step 3 | Diagrams generated | diagrams/ |
| Step 4 | Risk assessment complete | risk_mitigations.md |
| Step 5 | Tests written per component | components/[##]_[name]/tests.md |
| Step 6 | Epics created in Jira | Jira via MCP |
| Final | All steps complete | FINAL_report.md |
### Save Principles
- Save immediately: write to disk as soon as a step completes; do not wait until the end
- Incremental updates: same file can be updated multiple times; append or replace
- Preserve process: keep all intermediate files even after integration into final report
- Enable recovery: if interrupted, resume from the last saved artifact (see Resumability)
### Resumability
If DOCUMENT_DIR already contains artifacts:
- List existing files and match them to the save timing table above
- Identify the last completed step based on which artifacts exist
- Resume from the next incomplete step
- Inform the user which steps are being skipped
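Step detection can key off the last artifact each step saves, per the save timing table. A sketch with illustrative sentinel choices; Steps 3 and 5 save per-component files, so they are checked with a glob:

```python
from pathlib import Path

def last_completed_step(document_dir: Path) -> int:
    """Return the highest fully-completed step (0 = start fresh), keyed off
    the final artifact each step writes per the save timing table."""
    checks = [
        (1, lambda d: (d / "integration_tests" / "traceability_matrix.md").is_file()),
        (2, lambda d: (d / "deployment" / "deployment_procedures.md").is_file()),
        (3, lambda d: any(d.glob("components/*/description.md"))),
        (4, lambda d: (d / "risk_mitigations.md").is_file()),
        (5, lambda d: any(d.glob("components/*/tests.md"))),
    ]
    done = 0
    for step, is_complete in checks:
        if not is_complete(document_dir):
            break  # resume from this step
        done = step
    return done
```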
### Progress Tracking
At the start of execution, create a TodoWrite with all steps (1 through 6). Update status as each step completes.
## Workflow

### Step 1: Integration Tests

Role: Professional Quality Assurance Engineer
Goal: Analyze input data completeness and produce detailed black-box integration test specifications
Constraints: Spec only — no test code. Tests describe what the system should do given specific inputs, not how the system is built.
#### Phase 1a: Input Data Completeness Analysis

- Read `_docs/01_solution/solution.md` (finalized in Prereq 2)
- Read `acceptance_criteria.md` and `restrictions.md`
- Read the testing strategy from solution.md
- Analyze `input_data/` contents against:
  - Coverage of acceptance criteria scenarios
  - Coverage of restriction edge cases
  - Coverage of testing strategy requirements
  - Threshold: at least 70% coverage of the scenarios
- If coverage is low, search the internet for supplementary data, assess its quality with the user, and, if the user agrees, add it to `input_data/`
- Present the coverage assessment to the user

BLOCKING: Do NOT proceed until the user confirms the input data coverage is sufficient.
#### Phase 1b: Black-Box Test Scenario Specification

Based on all acquired data, acceptance criteria, and restrictions, write detailed test scenarios:

- Define the test environment using `templates/integration-environment.md` as structure
- Define test data management using `templates/integration-test-data.md` as structure
- Write functional test scenarios (positive + negative) using `templates/integration-functional-tests.md` as structure
- Write non-functional test scenarios (performance, resilience, security, edge cases) using `templates/integration-non-functional-tests.md` as structure
- Build the traceability matrix using `templates/integration-traceability-matrix.md` as structure
Self-verification:

- Every acceptance criterion is covered by at least one test scenario
- Every restriction is verified by at least one test scenario
- Positive and negative scenarios are balanced
- Consumer app has no direct access to system internals
- Docker environment is self-contained (`docker compose up` is sufficient)
- External dependencies have mock/stub services defined
- Traceability matrix has no uncovered acceptance criteria or restrictions

Save action: Write all files under `integration_tests/`:

- `environment.md`
- `test_data.md`
- `functional_tests.md`
- `non_functional_tests.md`
- `traceability_matrix.md`
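Self-contained here means one command brings up the system under test, its backing services, and mocks for every external dependency. A hypothetical compose sketch; the service names and images are illustrative assumptions, not part of the spec:

```yaml
# Illustrative only: a single `docker compose up` starts everything the tests need.
services:
  system-under-test:
    build: .
    depends_on: [db, external-api-mock]
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: test        # test-only credential
  external-api-mock:
    image: wiremock/wiremock         # stub for the real external service
  test-runner:
    build: ./integration_tests       # black-box consumer of system-under-test
    depends_on: [system-under-test]
```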
BLOCKING: Present test coverage summary (from traceability_matrix.md) to user. Do NOT proceed until confirmed.
Capture any new questions, findings, or insights that arise during test specification — these feed forward into Steps 2 and 3.
### Step 2: Solution Analysis
Role: Professional software architect
Goal: Produce architecture.md, system-flows.md, data_model.md, and deployment/ from the solution draft
Constraints: No code, no component-level detail yet; focus on system-level view
#### Phase 2a: Architecture & Flows

- Read all input files thoroughly
- Incorporate findings, questions, and insights discovered during Step 1 (integration tests)
- Research unknown or questionable topics via the internet; ask the user about ambiguities
- Document the architecture using `templates/architecture.md` as structure
- Document system flows using `templates/system-flows.md` as structure
Self-verification:
- Architecture covers all capabilities mentioned in solution.md
- System flows cover all main user/system interactions
- No contradictions with problem.md or restrictions.md
- Technology choices are justified
- Integration test findings are reflected in architecture decisions
Save action: Write architecture.md and system-flows.md
BLOCKING: Present architecture summary to user. Do NOT proceed until user confirms.
#### Phase 2b: Data Model

Role: Professional software architect
Goal: Produce a detailed data model document covering entities, relationships, and migration strategy
- Extract core entities from architecture.md and solution.md
- Define entity attributes, types, and constraints
- Define relationships between entities (Mermaid ERD)
- Define migration strategy: versioning tool (EF Core migrations / Alembic / sql-migrate), reversibility requirement, naming convention
- Define seed data requirements per environment (dev, staging)
- Define backward compatibility approach for schema changes (additive-only by default)
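For illustration, an entity-relationship diagram in the expected Mermaid notation; the entities here are hypothetical, not drawn from any particular solution:

```mermaid
erDiagram
    USER ||--o{ ORDER : places
    ORDER ||--|{ ORDER_ITEM : contains
    USER {
        uuid id PK
        string email "unique, not null"
    }
    ORDER {
        uuid id PK
        uuid user_id FK
        datetime created_at
    }
    ORDER_ITEM {
        uuid id PK
        uuid order_id FK
        int quantity "must be positive"
    }
```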
Self-verification:
- Every entity mentioned in architecture.md is defined
- Relationships are explicit with cardinality
- Migration strategy specifies reversibility requirement
- Seed data requirements defined
- Backward compatibility approach documented
Save action: Write data_model.md
#### Phase 2c: Deployment Planning

Role: DevOps / Platform engineer
Goal: Produce a deployment plan covering containerization, CI/CD, environment strategy, observability, and deployment procedures
Use the /deploy skill's templates as structure for each artifact:
- Read architecture.md and restrictions.md for infrastructure constraints
- Research Docker best practices for the project's tech stack
- Define containerization plan: Dockerfile per component, docker-compose for dev and tests
- Define CI/CD pipeline: stages, quality gates, caching, parallelization
- Define environment strategy: dev, staging, production with secrets management
- Define observability: structured logging, metrics, tracing, alerting
- Define deployment procedures: strategy, health checks, rollback, checklist
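One possible shape for the stage graph, sketched as a GitHub Actions workflow. The CI system and job contents are assumptions; only the gate and parallelization structure is the point:

```yaml
# Illustrative sketch: lint gates test+security (parallel), which gate build/deploy.
name: ci
on: [push, pull_request]
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # run the stack's linters/formatters here
  test:
    needs: lint
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # unit + integration tests, with dependency caching
  security:
    needs: lint
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # dependency audit and image scanning
  build-and-deploy:
    needs: [test, security]   # quality gates: both must pass
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # docker build, push, and deploy per the environment strategy
```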
Self-verification:
- Every component has a Docker specification
- CI/CD pipeline covers lint, test, security, build, deploy
- Environment strategy covers dev, staging, production
- Observability covers logging, metrics, tracing, alerting
- Deployment procedures include rollback and health checks
Save action: Write all 5 files under `deployment/`:

- `containerization.md`
- `ci_cd_pipeline.md`
- `environment_strategy.md`
- `observability.md`
- `deployment_procedures.md`
### Step 3: Component Decomposition

Role: Professional software architect
Goal: Decompose the architecture into components with detailed specs
Constraints: No code; only names, interfaces, inputs/outputs. Follow SRP strictly.
- Identify components from the architecture; think about separation, reusability, and communication patterns
- Use integration test scenarios from Step 1 to validate component boundaries
- If additional components are needed (data preparation, shared helpers), create them
- For each component, write a spec using `templates/component-spec.md` as structure
- Generate diagrams:
  - draw.io component diagram showing relations (minimize line intersections, group semantically coherent components, place external users near their components)
  - Mermaid flowchart per main control flow
- Components can share and reuse common logic; when the same logic serves multiple components, place it in the `common-helpers/` folder
Self-verification:
- Each component has a single, clear responsibility
- No functionality is spread across multiple components
- All inter-component interfaces are defined (who calls whom, with what)
- Component dependency graph has no circular dependencies
- All components from architecture.md are accounted for
- Every integration test scenario can be traced through component interactions
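The no-circular-dependencies check is mechanical once interfaces are listed. A sketch, where the graph maps each component to the components it calls (the names in the usage below are illustrative):

```python
def has_cycle(deps: dict[str, list[str]]) -> bool:
    """Detect a circular dependency via depth-first search with node coloring."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / in progress / done
    color: dict[str, int] = {}

    def visit(node: str) -> bool:
        color[node] = GRAY
        for dep in deps.get(node, []):
            state = color.get(dep, WHITE)
            if state == GRAY:             # back edge: cycle found
                return True
            if state == WHITE and visit(dep):
                return True
        color[node] = BLACK
        return False

    return any(color.get(n, WHITE) == WHITE and visit(n) for n in deps)
```

For example, `has_cycle({"api": ["service"], "service": ["api"]})` flags the mutual dependency, while a straight `api -> service -> repo` chain passes.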
Save action: Write:

- each component: `components/[##]_[name]/description.md`
- common helpers: `common-helpers/[##]_helper_[name].md`
- diagrams: `diagrams/`
BLOCKING: Present component list with one-line summaries to user. Do NOT proceed until user confirms.
### Step 4: Architecture Review & Risk Assessment

Role: Professional software architect and analyst
Goal: Validate all artifacts for consistency, then identify and mitigate risks
Constraints: This is a review step — fix problems found, do not add new features

#### 4a. Evaluator Pass (re-read ALL artifacts)
Review checklist:
- All components follow Single Responsibility Principle
- All components follow dumb code / smart data principle
- Inter-component interfaces are consistent (caller's output matches callee's input)
- No circular dependencies in the dependency graph
- No missing interactions between components
- No over-engineering — is there a simpler decomposition?
- Security considerations addressed in component design
- Performance bottlenecks identified
- API contracts are consistent across components
Fix any issues found before proceeding to risk identification.
#### 4b. Risk Identification

- Identify technical and project risks
- Assess probability and impact using `templates/risk-register.md`
- Define mitigation strategies
- Apply mitigations to the architecture, flow, and component documents where applicable
Self-verification:
- Every High/Critical risk has a concrete mitigation strategy
- Mitigations are reflected in the relevant component or architecture docs
- No new risks introduced by the mitigations themselves
Save action: Write risk_mitigations.md
BLOCKING: Present risk summary to user. Ask whether assessment is sufficient.
Iterative: If user requests another round, repeat Step 4 and write risk_mitigations_##.md (## as sequence number). Continue until user confirms.
### Step 5: Test Specifications
Role: Professional Quality Assurance Engineer
Goal: Write test specs for each component achieving minimum 75% acceptance criteria coverage
Constraints: Test specs only — no test code. Each test must trace to an acceptance criterion.
- For each component, write tests using `templates/test-spec.md` as structure
- Cover all 4 types: integration, performance, security, acceptance
- Include test data management (setup, teardown, isolation)
- Verify traceability: every acceptance criterion from `acceptance_criteria.md` must be covered by at least one test
Self-verification:
- Every acceptance criterion has at least one test covering it
- Test inputs are realistic and well-defined
- Expected results are specific and measurable
- No component is left without tests
Save action: Write each components/[##]_[name]/tests.md
### Step 6: Jira Epics
Role: Professional product manager
Goal: Create Jira epics from components, ordered by dependency
Constraints: Epic descriptions must be comprehensive and self-contained — a developer reading only the Jira epic should understand the full context without needing to open separate files.
- Create the "Bootstrap & Initial Structure" epic first — this epic will parent the `01_initial_structure` task created by the decompose skill. It covers project scaffolding: folder structure, shared models, interfaces, stubs, CI/CD config, DB migrations setup, test structure.
- Generate a Jira epic for each component using Jira MCP, structured per `templates/epic-spec.md`
- Order epics by dependency (the Bootstrap epic is always first, then components based on their dependency graph)
- Include an effort estimate per epic (T-shirt size or story points range)
- Ensure each epic has clear acceptance criteria cross-referenced with component specs
- Generate Mermaid diagrams showing component-to-epic mapping and component relationships
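The ordering rule (Bootstrap first, then components in dependency order) can be sketched with Python's standard `graphlib`; the epic names in the usage below are illustrative:

```python
from graphlib import TopologicalSorter

BOOTSTRAP = "Bootstrap & Initial Structure"

def epic_order(component_deps: dict[str, set[str]]) -> list[str]:
    """Order epics so every dependency comes first; Bootstrap parents everything.

    component_deps maps each component epic to the set of epics it depends on.
    """
    graph: dict[str, set[str]] = {BOOTSTRAP: set()}
    for epic, preds in component_deps.items():
        graph[epic] = set(preds) | {BOOTSTRAP}   # every epic builds on Bootstrap
    # static_order yields nodes only after all their predecessors
    return list(TopologicalSorter(graph).static_order())
```

Because every component epic depends on Bootstrap, it always sorts first; `graphlib` also raises `CycleError` if the epic graph is circular, which doubles as a final dependency check.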
CRITICAL — Epic description richness requirements:
Each epic description in Jira MUST include ALL of the following sections with substantial content:
- System context: where this component fits in the overall architecture (include Mermaid diagram showing this component's position and connections)
- Problem / Context: what problem this component solves, why it exists, current pain points
- Scope: detailed in-scope and out-of-scope lists
- Architecture notes: relevant ADRs, technology choices, patterns used, key design decisions
- Interface specification: full method signatures, input/output types, error types (from component description.md)
- Data flow: how data enters and exits this component (include Mermaid sequence or flowchart diagram)
- Dependencies: epic dependencies (with Jira IDs) and external dependencies (libraries, hardware, services)
- Acceptance criteria: measurable criteria with specific thresholds (from component tests.md)
- Non-functional requirements: latency, memory, throughput targets with failure thresholds
- Risks & mitigations: relevant risks from risk_mitigations.md with concrete mitigation strategies
- Effort estimation: T-shirt size and story points range
- Child issues: planned task breakdown with complexity points
- Key constraints: from restrictions.md that affect this component
- Testing strategy: summary of test types and coverage from tests.md
Do NOT create minimal epics with just a summary and short description. The Jira epic is the primary reference document for the implementation team.
Self-verification:
- "Bootstrap & Initial Structure" epic exists and is first in order
- "Integration Tests" epic exists
- Every component maps to exactly one epic
- Dependency order is respected (no epic depends on a later one)
- Acceptance criteria are measurable
- Effort estimates are realistic
- Every epic description includes architecture diagram, interface spec, data flow, risks, and NFRs
- Epic descriptions are self-contained — readable without opening other files
Also create the "Integration Tests" epic — this epic will parent the integration test tasks created by the `/decompose` skill. It covers implementing the test scenarios defined in `integration_tests/`.
Save action: Epics created in Jira via MCP. Also saved locally in epics.md with Jira IDs.
## Quality Checklist (before FINAL_report.md)

Before writing the final report, verify ALL of the following:

### Integration Tests
- Every acceptance criterion is covered in traceability_matrix.md
- Every restriction is verified by at least one test
- Positive and negative scenarios are balanced
- Docker environment is self-contained
- Consumer app treats main system as black box
- CI/CD integration and reporting defined
### Architecture
- Covers all capabilities from solution.md
- Technology choices are justified
- Deployment model is defined
- Integration test findings are reflected in architecture decisions
### Data Model
- Every entity from architecture.md is defined
- Relationships have explicit cardinality
- Migration strategy with reversibility requirement
- Seed data requirements defined
- Backward compatibility approach documented
### Deployment
- Containerization plan covers all components
- CI/CD pipeline includes lint, test, security, build, deploy stages
- Environment strategy covers dev, staging, production
- Observability covers logging, metrics, tracing, alerting
- Deployment procedures include rollback and health checks
### Components
- Every component follows SRP
- No circular dependencies
- All inter-component interfaces are defined and consistent
- No orphan components (unused by any flow)
- Every integration test scenario can be traced through component interactions
### Risks
- All High/Critical risks have mitigations
- Mitigations are reflected in component/architecture docs
- User has confirmed risk assessment is sufficient
### Tests
- Every acceptance criterion is covered by at least one test
- All 4 test types are represented per component (where applicable)
- Test data management is defined
### Epics
- "Bootstrap & Initial Structure" epic exists
- "Integration Tests" epic exists
- Every component maps to an epic
- Dependency order is correct
- Acceptance criteria are measurable
Save action: Write FINAL_report.md using templates/final-report.md as structure
## Common Mistakes
- Proceeding without input data: all four data gate items (problem, acceptance_criteria, restrictions, input_data) must be present before any planning begins
- Coding during planning: this workflow produces documents, never code
- Multi-responsibility components: if a component does two things, split it
- Skipping BLOCKING gates: never proceed past a BLOCKING marker without user confirmation
- Diagrams without data: generate diagrams only after the underlying structure is documented
- Copy-pasting problem.md: the architecture doc should analyze and transform, not repeat the input
- Vague interfaces: "component A talks to component B" is not enough; define the method, input, output
- Ignoring restrictions.md: every constraint must be traceable in the architecture or risk register
- Ignoring integration test findings: insights from Step 1 must feed into architecture (Step 2) and component decomposition (Step 3)
## Escalation Rules
| Situation | Action |
|---|---|
| Missing acceptance_criteria.md, restrictions.md, or input_data/ | STOP — planning cannot proceed |
| Ambiguous requirements | ASK user |
| Input data coverage below 70% | Search internet for supplementary data, ASK user to validate |
| Technology choice with multiple valid options | ASK user |
| Component naming | PROCEED, confirm at next BLOCKING gate |
| File structure within templates | PROCEED |
| Contradictions between input files | ASK user |
| Risk mitigation requires architecture change | ASK user |
## Methodology Quick Reference

```
┌──────────────────────────────────────────────────────────────┐
│              Solution Planning (6-Step Method)               │
├──────────────────────────────────────────────────────────────┤
│ PREREQ 1: Data Gate (BLOCKING)                               │
│   → verify AC, restrictions, input_data exist — STOP if not  │
│ PREREQ 2: Finalize solution draft                            │
│   → rename highest solution_draft##.md to solution.md        │
│ PREREQ 3: Workspace setup                                    │
│   → create DOCUMENT_DIR/ if needed                           │
│                                                              │
│ 1.  Integration Tests   → integration_tests/ (5 files)       │
│     [BLOCKING: user confirms test coverage]                  │
│ 2a. Architecture        → architecture.md, system-flows.md   │
│     [BLOCKING: user confirms architecture]                   │
│ 2b. Data Model          → data_model.md                      │
│ 2c. Deployment          → deployment/ (5 files)              │
│ 3.  Component Decompose → components/[##]_[name]/description │
│     [BLOCKING: user confirms decomposition]                  │
│ 4.  Review & Risk       → risk_mitigations.md                │
│     [BLOCKING: user confirms risks, iterative]               │
│ 5.  Test Specifications → components/[##]_[name]/tests.md    │
│ 6.  Jira Epics          → Jira via MCP                       │
│   ─────────────────────────────────────────────              │
│ Quality Checklist       → FINAL_report.md                    │
├──────────────────────────────────────────────────────────────┤
│ Principles: SRP · Dumb code/smart data · Save immediately    │
│             Ask don't assume · Plan don't code               │
└──────────────────────────────────────────────────────────────┘
```