annotations/.cursor/skills/plan/SKILL.md
Oleksandr Bezdieniezhnykh · 2026-03-25 04:40:03 +02:00

name: plan
description: Decompose a solution into architecture, system flows, components, tests, and Jira epics. Systematic 5-step planning workflow with BLOCKING gates, self-verification, and structured artifact management. Supports project mode (_docs/ + _docs/02_plans/ structure) and standalone mode (@file.md). Trigger phrases: "plan", "decompose solution", "architecture planning", "break down the solution", "create planning documents", "component decomposition", "solution analysis"
disable-model-invocation: true

Solution Planning

Decompose a problem and solution into architecture, system flows, components, tests, and Jira epics through a systematic 5-step workflow.

Core Principles

  • Single Responsibility: each component does one thing well; do not spread related logic across components
  • Dumb code, smart data: keep logic simple, push complexity into data structures and configuration
  • Save immediately: write artifacts to disk after each step; never accumulate unsaved work
  • Ask, don't assume: when requirements are ambiguous, ask the user before proceeding
  • Plan, don't code: this workflow produces documents and specs, never implementation code

Context Resolution

Determine the operating mode based on invocation before any other logic runs.

Project mode (no explicit input file provided):

  • PROBLEM_FILE: _docs/00_problem/problem.md
  • SOLUTION_FILE: _docs/01_solution/solution.md
  • PLANS_DIR: _docs/02_plans/
  • All existing guardrails apply as-is.

Standalone mode (explicit input file provided, e.g. /plan @some_doc.md):

  • INPUT_FILE: the provided file (treated as combined problem + solution context)
  • Derive <topic> from the input filename (without extension)
  • PLANS_DIR: _standalone/<topic>/plans/
  • Guardrails relaxed: only INPUT_FILE must exist and be non-empty
  • acceptance_criteria.md and restrictions.md are optional — warn if absent

Announce the detected mode and resolved paths to the user before proceeding.
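
The resolution above can be sketched as a small helper. This is illustrative only; the function name and returned keys are assumptions, not part of the skill:

```python
from pathlib import Path

def resolve_context(input_file=None):
    """Resolve operating mode and paths per the rules above (sketch)."""
    if input_file is None:
        # Project mode: fixed _docs/ layout
        return {
            "mode": "project",
            "problem_file": "_docs/00_problem/problem.md",
            "solution_file": "_docs/01_solution/solution.md",
            "plans_dir": "_docs/02_plans/",
        }
    # Standalone mode: derive <topic> from the input filename (no extension)
    topic = Path(input_file).stem
    return {
        "mode": "standalone",
        "input_file": input_file,
        "plans_dir": f"_standalone/{topic}/plans/",
    }
```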

Input Specification

Required Files

Project mode:

| File | Purpose |
| --- | --- |
| PROBLEM_FILE (_docs/00_problem/problem.md) | Problem description and context |
| _docs/00_problem/input_data/ | Reference data examples (if available) |
| _docs/00_problem/restrictions.md | Constraints and limitations (if available) |
| _docs/00_problem/acceptance_criteria.md | Measurable acceptance criteria (if available) |
| SOLUTION_FILE (_docs/01_solution/solution.md) | Solution draft to decompose |

Standalone mode:

| File | Purpose |
| --- | --- |
| INPUT_FILE (the provided file) | Combined problem + solution context |

Prerequisite Checks (BLOCKING)

Project mode:

  1. PROBLEM_FILE exists and is non-empty — STOP if missing
  2. SOLUTION_FILE exists and is non-empty — STOP if missing
  3. Create PLANS_DIR if it does not exist
  4. If PLANS_DIR/<topic>/ already exists, ask user: resume from last checkpoint or start fresh?

Standalone mode:

  1. INPUT_FILE exists and is non-empty — STOP if missing
  2. Warn if no restrictions.md or acceptance_criteria.md provided alongside INPUT_FILE
  3. Create PLANS_DIR if it does not exist
  4. If PLANS_DIR/<topic>/ already exists, ask user: resume from last checkpoint or start fresh?
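
The existence checks can be sketched as a hypothetical helper (the label-to-path mapping is an assumption for illustration):

```python
from pathlib import Path

def check_blocking(files):
    """Return the first missing-or-empty required file's label, or None.

    `files` maps a label (e.g. "PROBLEM_FILE") to its path. On a non-None
    result the caller must STOP and report that file to the user.
    """
    for label, path in files.items():
        p = Path(path)
        if not p.is_file() or p.stat().st_size == 0:
            return label
    return None
```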

Artifact Management

Directory Structure

At the start of planning, create a topic-named working directory under PLANS_DIR:

PLANS_DIR/<topic>/
├── architecture.md
├── system-flows.md
├── risk_mitigations.md
├── risk_mitigations_02.md          (iterative, ## as sequence)
├── components/
│   ├── 01_[name]/
│   │   ├── description.md
│   │   └── tests.md
│   ├── 02_[name]/
│   │   ├── description.md
│   │   └── tests.md
│   └── ...
├── common-helpers/
│   ├── 01_helper_[name]/
│   ├── 02_helper_[name]/
│   └── ...
├── e2e_test_infrastructure.md
├── diagrams/
│   ├── components.drawio
│   └── flows/
│       ├── flow_[name].md          (Mermaid)
│       └── ...
└── FINAL_report.md

Save Timing

| Step | Save immediately after | Filename |
| --- | --- | --- |
| Step 1 | Architecture analysis complete | architecture.md |
| Step 1 | System flows documented | system-flows.md |
| Step 2 | Each component analyzed | components/[##]_[name]/description.md |
| Step 2 | Common helpers generated | common-helpers/[##]_helper_[name].md |
| Step 2 | Diagrams generated | diagrams/ |
| Step 3 | Risk assessment complete | risk_mitigations.md |
| Step 4 | Tests written per component | components/[##]_[name]/tests.md |
| Step 4b | E2E test infrastructure spec | e2e_test_infrastructure.md |
| Step 5 | Epics created in Jira | Jira via MCP |
| Final | All steps complete | FINAL_report.md |

Save Principles

  1. Save immediately: write to disk as soon as a step completes; do not wait until the end
  2. Incremental updates: same file can be updated multiple times; append or replace
  3. Preserve process: keep all intermediate files even after integration into final report
  4. Enable recovery: if interrupted, resume from the last saved artifact (see Resumability)

Resumability

If PLANS_DIR/<topic>/ already contains artifacts:

  1. List existing files and match them to the save timing table above
  2. Identify the last completed step based on which artifacts exist
  3. Resume from the next incomplete step
  4. Inform the user which steps are being skipped
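
Resume detection can be sketched as below, assuming only the top-level artifacts from the save timing table (the per-component files of steps 2 and 4 are omitted for brevity):

```python
from pathlib import Path

# Ordered (step, artifact) pairs mirroring the save timing table.
STEP_ARTIFACTS = [
    ("Step 1", "architecture.md"),
    ("Step 1", "system-flows.md"),
    ("Step 3", "risk_mitigations.md"),
    ("Step 4b", "e2e_test_infrastructure.md"),
    ("Final", "FINAL_report.md"),
]

def last_completed(topic_dir):
    """Return the label of the latest step whose artifact exists, or None."""
    done = None
    for step, name in STEP_ARTIFACTS:
        if Path(topic_dir, name).is_file():
            done = step
    return done
```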

Progress Tracking

At the start of execution, create a TodoWrite with all steps (1 through 5, including 4b). Update status as each step completes.

Workflow

Step 1: Solution Analysis

Role: Professional software architect
Goal: Produce architecture.md and system-flows.md from the solution draft
Constraints: No code, no component-level detail yet; focus on system-level view

  1. Read all input files thoroughly
  2. Research unknown or questionable topics online; ask the user about ambiguities
  3. Document architecture using templates/architecture.md as structure
  4. Document system flows using templates/system-flows.md as structure

Self-verification:

  • Architecture covers all capabilities mentioned in solution.md
  • System flows cover all main user/system interactions
  • No contradictions with problem.md or restrictions.md
  • Technology choices are justified

Save action: Write architecture.md and system-flows.md

BLOCKING: Present architecture summary to user. Do NOT proceed until user confirms.


Step 2: Component Decomposition

Role: Professional software architect
Goal: Decompose the architecture into components with detailed specs
Constraints: No code; only names, interfaces, inputs/outputs. Follow SRP strictly.

  1. Identify components from the architecture; think about separation, reusability, and communication patterns
  2. If additional components are needed (data preparation, shared helpers), create them
  3. For each component, write a spec using templates/component-spec.md as structure
  4. Generate diagrams:
    • draw.io component diagram showing relations (minimize line intersections, group semantically coherent components, place external users near their components)
    • Mermaid flowchart per main control flow
  5. Multiple components may share and reuse the same common logic; extract such shared logic into the common-helpers folder

Self-verification:

  • Each component has a single, clear responsibility
  • No functionality is spread across multiple components
  • All inter-component interfaces are defined (who calls whom, with what)
  • Component dependency graph has no circular dependencies
  • All components from architecture.md are accounted for
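
The circular-dependency check lends itself to automation. A sketch using three-color depth-first search; the `deps` shape is a hypothetical representation of the component graph, not a format this skill defines:

```python
def has_cycle(deps):
    """Detect a cycle in a component dependency graph.

    `deps` maps a component name to the list of components it calls.
    Returns True if any circular dependency exists.
    """
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}

    def visit(node):
        color[node] = GRAY
        for dep in deps.get(node, []):
            state = color.get(dep, WHITE)
            if state == GRAY:
                return True  # back edge: circular dependency
            if state == WHITE and visit(dep):
                return True
        color[node] = BLACK
        return False

    return any(color.get(n, WHITE) == WHITE and visit(n) for n in deps)
```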

Save action: Write:

  • each component spec: components/[##]_[name]/description.md
  • each common helper: common-helpers/[##]_helper_[name].md
  • diagrams: diagrams/

BLOCKING: Present component list with one-line summaries to user. Do NOT proceed until user confirms.


Step 3: Architecture Review & Risk Assessment

Role: Professional software architect and analyst
Goal: Validate all artifacts for consistency, then identify and mitigate risks
Constraints: This is a review step — fix problems found, do not add new features

3a. Evaluator Pass (re-read ALL artifacts)

Review checklist:

  • All components follow Single Responsibility Principle
  • All components follow dumb code / smart data principle
  • Inter-component interfaces are consistent (caller's output matches callee's input)
  • No circular dependencies in the dependency graph
  • No missing interactions between components
  • No over-engineering — is there a simpler decomposition?
  • Security considerations addressed in component design
  • Performance bottlenecks identified
  • API contracts are consistent across components

Fix any issues found before proceeding to risk identification.

3b. Risk Identification

  1. Identify technical and project risks
  2. Assess probability and impact using templates/risk-register.md
  3. Define mitigation strategies
  4. Apply mitigations to architecture, flows, and component documents where applicable
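
One common way to assess step 2 is a qualitative probability × impact score. The scales and the High/Critical threshold below are assumptions for illustration, not values defined by this skill or its risk-register template:

```python
# Map qualitative levels to numeric scores (assumed 3-point scale).
LEVELS = {"low": 1, "medium": 2, "high": 3}

def severity(probability, impact):
    """Combine probability and impact into a 1-9 score."""
    return LEVELS[probability] * LEVELS[impact]

def needs_mitigation(probability, impact, threshold=6):
    """Risks at or above the threshold require a concrete mitigation."""
    return severity(probability, impact) >= threshold
```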

Self-verification:

  • Every High/Critical risk has a concrete mitigation strategy
  • Mitigations are reflected in the relevant component or architecture docs
  • No new risks introduced by the mitigations themselves

Save action: Write risk_mitigations.md

BLOCKING: Present risk summary to user. Ask whether assessment is sufficient.

Iterative: If user requests another round, repeat Step 3 and write risk_mitigations_##.md (## as sequence number). Continue until user confirms.


Step 4: Test Specifications

Role: Professional Quality Assurance Engineer
Goal: Write test specs for each component achieving minimum 75% acceptance criteria coverage
Constraints: Test specs only — no test code. Each test must trace to an acceptance criterion.

  1. For each component, write tests using templates/test-spec.md as structure
  2. Cover all 4 types: integration, performance, security, acceptance
  3. Include test data management (setup, teardown, isolation)
  4. Verify traceability: every acceptance criterion from acceptance_criteria.md must be covered by at least one test

Self-verification:

  • Every acceptance criterion has at least one test covering it
  • Test inputs are realistic and well-defined
  • Expected results are specific and measurable
  • No component is left without tests

Save action: Write each components/[##]_[name]/tests.md
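
The traceability check of step 4 can be sketched as follows; the criterion IDs and the test-to-criteria mapping shape are illustrative assumptions:

```python
def uncovered_criteria(criteria, tests):
    """Return acceptance criteria not covered by any test spec.

    `criteria` is a set of criterion IDs; `tests` maps a test name to the
    criterion IDs it claims to cover.
    """
    covered = set()
    for ids in tests.values():
        covered.update(ids)
    return sorted(criteria - covered)
```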


Step 4b: E2E Black-Box Test Infrastructure

Role: Professional Quality Assurance Engineer
Goal: Specify a separate consumer application and Docker environment for black-box end-to-end testing of the main system
Constraints: Spec only — no test code. Consumer must treat the main system as a black box (no internal imports, no direct DB access).

  1. Define Docker environment: services (system under test, test DB, consumer app, dependencies), networks, volumes
  2. Specify consumer application: tech stack, entry point, communication interfaces with the main system
  3. Define E2E test scenarios from acceptance criteria — focus on critical end-to-end use cases that cross component boundaries
  4. Specify test data management: seed data, isolation strategy, external dependency mocks
  5. Define CI/CD integration: when to run, gate behavior, timeout
  6. Define reporting format (CSV: test ID, name, execution time, result, error message)

Use templates/e2e-test-infrastructure.md as structure.

Self-verification:

  • Critical acceptance criteria are covered by at least one E2E scenario
  • Consumer app has no direct access to system internals
  • Docker environment is self-contained (docker compose up sufficient)
  • External dependencies have mock/stub services defined

Save action: Write e2e_test_infrastructure.md


Step 5: Jira Epics

Role: Professional product manager
Goal: Create Jira epics from components, ordered by dependency
Constraints: Be concise — fewer words with the same meaning is better

  1. Generate Jira Epics from components using Jira MCP, structured per templates/epic-spec.md
  2. Order epics by dependency (which must be done first)
  3. Include effort estimation per epic (T-shirt size or story points range)
  4. Ensure each epic has clear acceptance criteria cross-referenced with component specs
  5. Generate updated draw.io diagram showing component-to-epic mapping

Self-verification:

  • Every component maps to exactly one epic
  • Dependency order is respected (no epic depends on a later one)
  • Acceptance criteria are measurable
  • Effort estimates are realistic

Save action: Epics created in Jira via MCP
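
The dependency-order check ("no epic depends on a later one") can be sketched as follows; the epic keys and the `deps` mapping are illustrative, and every dependency is assumed to appear in the planned order:

```python
def order_violations(epics, deps):
    """Return (epic, dependency) pairs where the dependency comes later.

    `epics` is the planned order (a list of epic keys); `deps` maps an
    epic to the epics it depends on. An empty result means the order is valid.
    """
    position = {e: i for i, e in enumerate(epics)}
    bad = []
    for epic, requires in deps.items():
        for dep in requires:
            if position[dep] > position[epic]:
                bad.append((epic, dep))
    return bad
```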


Quality Checklist (before FINAL_report.md)

Before writing the final report, verify ALL of the following:

Architecture

  • Covers all capabilities from solution.md
  • Technology choices are justified
  • Deployment model is defined

Components

  • Every component follows SRP
  • No circular dependencies
  • All inter-component interfaces are defined and consistent
  • No orphan components (unused by any flow)

Risks

  • All High/Critical risks have mitigations
  • Mitigations are reflected in component/architecture docs
  • User has confirmed risk assessment is sufficient

Tests

  • Every acceptance criterion is covered by at least one test
  • All 4 test types are represented per component (where applicable)
  • Test data management is defined

E2E Test Infrastructure

  • Critical use cases covered by E2E scenarios
  • Docker environment is self-contained
  • Consumer app treats main system as black box
  • CI/CD integration and reporting defined

Epics

  • Every component maps to an epic
  • Dependency order is correct
  • Acceptance criteria are measurable

Save action: Write FINAL_report.md using templates/final-report.md as structure

Common Mistakes

  • Coding during planning: this workflow produces documents, never code
  • Multi-responsibility components: if a component does two things, split it
  • Skipping BLOCKING gates: never proceed past a BLOCKING marker without user confirmation
  • Diagrams without data: generate diagrams only after the underlying structure is documented
  • Copy-pasting problem.md: the architecture doc should analyze and transform, not repeat the input
  • Vague interfaces: "component A talks to component B" is not enough; define the method, input, output
  • Ignoring restrictions.md: every constraint must be traceable in the architecture or risk register

Escalation Rules

| Situation | Action |
| --- | --- |
| Ambiguous requirements | ASK user |
| Missing acceptance criteria | ASK user |
| Technology choice with multiple valid options | ASK user |
| Component naming | PROCEED, confirm at next BLOCKING gate |
| File structure within templates | PROCEED |
| Contradictions between input files | ASK user |
| Risk mitigation requires architecture change | ASK user |

Methodology Quick Reference

┌────────────────────────────────────────────────────────────────┐
│                Solution Planning (5-Step Method)               │
├────────────────────────────────────────────────────────────────┤
│ CONTEXT: Resolve mode (project vs standalone) + set paths      │
│ 1. Solution Analysis     → architecture.md, system-flows.md    │
│    [BLOCKING: user confirms architecture]                      │
│ 2. Component Decompose   → components/[##]_[name]/description  │
│    [BLOCKING: user confirms decomposition]                     │
│ 3. Review & Risk Assess  → risk_mitigations.md                 │
│    [BLOCKING: user confirms risks, iterative]                  │
│ 4. Test Specifications   → components/[##]_[name]/tests.md     │
│ 4b.E2E Test Infra        → e2e_test_infrastructure.md          │
│ 5. Jira Epics            → Jira via MCP                        │
│    ─────────────────────────────────────────────────           │
│    Quality Checklist → FINAL_report.md                         │
├────────────────────────────────────────────────────────────────┤
│ Principles: SRP · Dumb code/smart data · Save immediately      │
│             Ask don't assume · Plan don't code                 │
└────────────────────────────────────────────────────────────────┘