Sync .cursor from detections

This commit is contained in:
Oleksandr Bezdieniezhnykh
2026-04-12 05:05:09 +03:00
parent 587b0e3c2d
commit d818daacd1
116 changed files with 10075 additions and 4577 deletions
---
name: decompose
description: |
Decompose planned components into atomic implementable tasks with bootstrap structure plan.
4-step workflow: bootstrap structure plan, component task decomposition, blackbox test task decomposition, and cross-task verification.
Supports full decomposition (_docs/ structure), single component mode, and tests-only mode.
Trigger phrases:
- "decompose", "decompose features", "feature decomposition"
- "task decomposition", "break down components"
- "prepare for implementation"
- "decompose tests", "test decomposition"
category: build
tags: [decomposition, tasks, dependencies, work-items, implementation-prep]
disable-model-invocation: true
---
# Task Decomposition
Decompose planned components into atomic, implementable task specs with a bootstrap structure plan through a systematic workflow. All tasks are named with their work item tracker ID prefix in a flat directory.
## Core Principles
- **Atomic tasks**: each task does one thing; if it exceeds 8 complexity points, split it
- **Behavioral specs, not implementation plans**: describe what the system should do, not how to build it
- **Flat structure**: all tasks are tracker-ID-prefixed files in TASKS_DIR — no component subdirectories
- **Save immediately**: write artifacts to disk after each task; never accumulate unsaved work
- **Tracker inline**: create work item ticket immediately after writing each task file
- **Ask, don't assume**: when requirements are ambiguous, ask the user before proceeding
- **Plan, don't code**: this workflow produces documents and work item tickets, never implementation code
## Context Resolution
Determine the operating mode based on invocation before any other logic runs.
**Default** (no explicit input file provided):
- DOCUMENT_DIR: `_docs/02_document/`
- TASKS_DIR: `_docs/02_tasks/`
- TASKS_TODO: `_docs/02_tasks/todo/`
- Reads from: `_docs/00_problem/`, `_docs/01_solution/`, DOCUMENT_DIR
- Runs Step 1 (bootstrap) + Step 2 (all components) + Step 3 (blackbox tests) + Step 4 (cross-verification)
**Single component mode** (provided file is within `_docs/02_document/` and inside a `components/` subdirectory):
- DOCUMENT_DIR: `_docs/02_document/`
- TASKS_DIR: `_docs/02_tasks/`
- TASKS_TODO: `_docs/02_tasks/todo/`
- Derive component number and component name from the file path
- Ask user for the parent Epic ID
- Runs Step 2 (that component only, appending to existing task numbering)
**Tests-only mode** (provided file/directory is within `tests/`, or `DOCUMENT_DIR/tests/` exists and input explicitly requests test decomposition):
- DOCUMENT_DIR: `_docs/02_document/`
- TASKS_DIR: `_docs/02_tasks/`
- TASKS_TODO: `_docs/02_tasks/todo/`
- TESTS_DIR: `DOCUMENT_DIR/tests/`
- Reads from: `_docs/00_problem/`, `_docs/01_solution/`, TESTS_DIR
- Runs Step 1t (test infrastructure bootstrap) + Step 3 (blackbox test decomposition) + Step 4 (cross-verification against test coverage)
- Skips Step 1 (project bootstrap) and Step 2 (component decomposition) — the codebase already exists
Announce the detected mode and resolved paths to the user before proceeding.
### Required Files
**Default:**
| File | Purpose |
|------|---------|
| `_docs/00_problem/problem.md` | Problem description and context |
| `_docs/00_problem/restrictions.md` | Constraints and limitations |
| `_docs/00_problem/acceptance_criteria.md` | Measurable acceptance criteria |
| `_docs/01_solution/solution.md` | Finalized solution |
| `DOCUMENT_DIR/architecture.md` | Architecture from plan skill |
| `DOCUMENT_DIR/system-flows.md` | System flows from plan skill |
| `DOCUMENT_DIR/components/[##]_[name]/description.md` | Component specs from plan skill |
| `DOCUMENT_DIR/tests/` | Blackbox test specs from plan skill |
**Single component mode:**
| The provided component `description.md` | Component spec to decompose |
| Corresponding `tests.md` in the same directory (if available) | Test specs for context |
**Tests-only mode:**
| File | Purpose |
|------|---------|
| `TESTS_DIR/environment.md` | Test environment specification (Docker services, networks, volumes) |
| `TESTS_DIR/test-data.md` | Test data management (seed data, mocks, isolation) |
| `TESTS_DIR/blackbox-tests.md` | Blackbox functional scenarios (positive + negative) |
| `TESTS_DIR/performance-tests.md` | Performance test scenarios |
| `TESTS_DIR/resilience-tests.md` | Resilience test scenarios |
| `TESTS_DIR/security-tests.md` | Security test scenarios |
| `TESTS_DIR/resource-limit-tests.md` | Resource limit test scenarios |
| `TESTS_DIR/traceability-matrix.md` | AC/restriction coverage mapping |
| `_docs/00_problem/problem.md` | Problem context |
| `_docs/00_problem/restrictions.md` | Constraints for test design |
| `_docs/00_problem/acceptance_criteria.md` | Acceptance criteria being verified |
### Prerequisite Checks (BLOCKING)
**Default:**
1. DOCUMENT_DIR contains `architecture.md` and `components/` — **STOP if missing**
2. Create TASKS_DIR and TASKS_TODO if they do not exist
3. If TASKS_DIR subfolders (`todo/`, `backlog/`, `done/`) already contain task files, ask user: **resume from last checkpoint or start fresh?**
**Single component mode:**
1. The provided component file exists and is non-empty — **STOP if missing**
2. Create TASKS_DIR and TASKS_TODO if they do not exist — tasks go flat, with no component subdirectory
**Tests-only mode:**
1. `TESTS_DIR/blackbox-tests.md` exists and is non-empty — **STOP if missing**
2. `TESTS_DIR/environment.md` exists — **STOP if missing**
3. Create TASKS_DIR and TASKS_TODO if they do not exist
4. If TASKS_DIR subfolders (`todo/`, `backlog/`, `done/`) already contain task files, ask user: **resume from last checkpoint or start fresh?**
## Artifact Management
### Directory Structure
```
TASKS_DIR/
├── _dependencies_table.md
├── todo/
│   ├── [TRACKER-ID]_initial_structure.md
│   ├── [TRACKER-ID]_[short_name].md
│   └── ...
├── backlog/
└── done/
```
**Naming convention**: Each task file is initially saved in `TASKS_TODO/` with a temporary numeric prefix (`[##]_[short_name].md`). After creating the work item ticket, rename the file to use the work item ticket ID as prefix (`[TRACKER-ID]_[short_name].md`). For example: `todo/01_initial_structure.md``todo/AZ-42_initial_structure.md`.
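The rename step in the naming convention above can be sketched as (a minimal sketch; the helper name is an assumption):

```python
from pathlib import Path

def rename_to_tracker_id(task_file: Path, tracker_id: str) -> Path:
    """Swap the temporary numeric prefix (e.g. '01') for the tracker ID prefix."""
    # '01_initial_structure.md' -> ('01', 'initial_structure.md')
    _numeric_prefix, short_name = task_file.name.split("_", 1)
    target = task_file.with_name(f"{tracker_id}_{short_name}")
    return task_file.rename(target)
```

For example, `rename_to_tracker_id(Path("todo/01_initial_structure.md"), "AZ-42")` yields `todo/AZ-42_initial_structure.md`, matching the convention above. Remember to also update the **Task** field inside the file.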
### Save Timing
| Step | Save immediately after | Filename |
|------|------------------------|----------|
| Step 1 | Bootstrap structure plan complete + work item ticket created + file renamed | `todo/[TRACKER-ID]_initial_structure.md` |
| Step 1t | Test infrastructure bootstrap complete + work item ticket created + file renamed | `todo/[TRACKER-ID]_test_infrastructure.md` |
| Step 2 | Each component task decomposed + work item ticket created + file renamed | `todo/[TRACKER-ID]_[short_name].md` |
| Step 3 | Each blackbox test task decomposed + work item ticket created + file renamed | `todo/[TRACKER-ID]_[short_name].md` |
| Step 4 | Cross-task verification complete | `_dependencies_table.md` |
### Resumability
If TASKS_DIR subfolders already contain task files:
1. List existing `*_*.md` files across `todo/`, `backlog/`, and `done/` (excluding `_dependencies_table.md`) and count them
2. Resume numbering from the next number (for the temporary numeric prefix used before the tracker rename)
3. Inform the user which tasks already exist and are being skipped
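The resume-numbering rule can be sketched as (illustrative; assumes the directory layout described above and a `next_task_number` helper that is not part of the skill):

```python
from pathlib import Path

def next_task_number(tasks_dir: Path) -> int:
    """Count existing task files across todo/, backlog/ and done/ and return
    the next temporary numeric prefix (01 is always the bootstrap task)."""
    existing = [
        f
        for sub in ("todo", "backlog", "done")
        for f in (tasks_dir / sub).glob("*_*.md")
        if f.name != "_dependencies_table.md"
    ]
    return len(existing) + 1
```

Missing subfolders simply contribute zero files, so a fresh run starts at 01.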
## Progress Tracking
At the start of execution, create a TodoWrite with all applicable steps. Update it as each step completes.
## Workflow
### Step 1t: Test Infrastructure Bootstrap (tests-only mode only)
**Role**: Professional Quality Assurance Engineer
**Goal**: Produce `01_test_infrastructure.md` — the first task describing the test project scaffold
**Constraints**: This is a plan document, not code. The `/implement` skill executes it.
1. Read `TESTS_DIR/environment.md` and `TESTS_DIR/test-data.md`
2. Read problem.md, restrictions.md, acceptance_criteria.md for domain context
3. Document the test infrastructure plan using `templates/test-infrastructure-task.md`
The test infrastructure bootstrap must include:
- Test project folder layout (`e2e/` directory structure)
- Mock/stub service definitions for each external dependency
- `docker-compose.test.yml` structure from environment.md
- Test runner configuration (framework, plugins, fixtures)
- Test data fixture setup from test-data.md seed data sets
- Test reporting configuration (format, output path)
- Data isolation strategy
**Self-verification**:
- [ ] Every external dependency from environment.md has a mock service defined
- [ ] Docker Compose structure covers all services from environment.md
- [ ] Test data fixtures cover all seed data sets from test-data.md
- [ ] Test runner configuration matches the consumer app tech stack from environment.md
- [ ] Data isolation strategy is defined
**Save action**: Write `todo/01_test_infrastructure.md` (temporary numeric name)
**Tracker action**: Create a work item ticket for this task under the "Blackbox Tests" epic. Write the work item ticket ID and Epic ID back into the task header.
**Rename action**: Rename the file from `todo/01_test_infrastructure.md` to `todo/[TRACKER-ID]_test_infrastructure.md`. Update the **Task** field inside the file to match the new filename.
**BLOCKING**: Present test infrastructure plan summary to user. Do NOT proceed until user confirms.
---
### Step 1: Bootstrap Structure Plan (default mode only)
**Role**: Professional software architect
**Goal**: Produce `01_initial_structure.md` — the first task describing the project skeleton
**Constraints**: This is a plan document, not code. The `/implement` skill executes it.
1. Read architecture.md, all component specs, system-flows.md, data_model.md, and `deployment/` from DOCUMENT_DIR
2. Read problem, solution, and restrictions from `_docs/00_problem/` and `_docs/01_solution/`
3. Research best implementation patterns for the identified tech stack
4. Document the structure plan using `templates/initial-structure-task.md`
The bootstrap structure plan must include:
- Project folder layout with all component directories
- Shared models, interfaces, and DTOs
- Dockerfile per component (multi-stage, non-root, health checks, pinned base images)
- `docker-compose.yml` for local development (all components + database + dependencies)
- `docker-compose.test.yml` for blackbox test environment (blackbox test runner)
- `.dockerignore`
- CI/CD pipeline file (`.github/workflows/ci.yml` or `azure-pipelines.yml`) with stages from `deployment/ci_cd_pipeline.md`
- Database migration setup and initial seed data scripts
- Observability configuration: structured logging setup, health check endpoints (`/health/live`, `/health/ready`), metrics endpoint (`/metrics`)
- Environment variable documentation (`.env.example`)
- Test structure with unit and blackbox test locations
**Self-verification**:
- [ ] All components have corresponding folders in the layout
- [ ] All inter-component interfaces have DTOs defined
- [ ] Dockerfile defined for each component
- [ ] `docker-compose.yml` covers all components and dependencies
- [ ] `docker-compose.test.yml` enables blackbox testing
- [ ] CI/CD pipeline file defined with lint, test, security, build, deploy stages
- [ ] Database migration setup included
- [ ] Health check endpoints specified for each service
- [ ] Structured logging configuration included
- [ ] `.env.example` with all required environment variables
- [ ] Environment strategy covers dev, staging, production
- [ ] Test structure includes unit and blackbox test locations
**Save action**: Write `todo/01_initial_structure.md` (temporary numeric name)
**Tracker action**: Create a work item ticket for this task under the "Bootstrap & Initial Structure" epic. Write the work item ticket ID and Epic ID back into the task header.
**Rename action**: Rename the file from `todo/01_initial_structure.md` to `todo/[TRACKER-ID]_initial_structure.md` (e.g., `todo/AZ-42_initial_structure.md`). Update the **Task** field inside the file to match the new filename.
**BLOCKING**: Present structure plan summary to user. Do NOT proceed until user confirms.
---
### Step 2: Task Decomposition (default and single component modes)
**Role**: Professional software architect
**Goal**: Decompose each component into atomic, implementable task specs — numbered sequentially starting from 02
**Constraints**: Behavioral specs only — describe what, not how. No implementation code.
**Numbering**: Tasks are numbered sequentially across all components in dependency order. Start from 02 (01 is initial_structure). In single component mode, start from the next available number in TASKS_DIR.
**Component ordering**: Process components in dependency order — foundational components first (shared models, database), then components that depend on them.
For each component (or the single provided component):
1. Read the component's `description.md` and `tests.md` (if available)
2. Decompose into atomic tasks; create only 1 task if the component is simple or atomic
3. Split into multiple tasks only when it is necessary and would be easier to implement
4. Do not create tasks for other components — only tasks for the current component
5. Each task should be atomic, containing either no APIs or a list of semantically connected APIs
6. Write each task spec using `templates/task.md`
7. Estimate complexity per task (1, 2, 3, 5, 8 points); no task should exceed 8 points — split if it does
8. Note task dependencies (referencing tracker IDs of already-created dependency tasks, e.g., `AZ-42_initial_structure`)
9. **Immediately after writing each task file**: create a work item ticket, link it to the component's epic, write the work item ticket ID and Epic ID back into the task header, then rename the file from `todo/[##]_[short_name].md` to `todo/[TRACKER-ID]_[short_name].md`.
**Self-verification** (per component):
- [ ] Every task is atomic (single concern)
- [ ] No task exceeds 8 complexity points
- [ ] Task dependencies reference correct tracker IDs
- [ ] Tasks cover all interfaces defined in the component spec
- [ ] No tasks duplicate work from other components
- [ ] Every task has a work item ticket linked to the correct epic
**Save action**: Write each `todo/[##]_[short_name].md` (temporary numeric name), create work item ticket inline, then rename to `todo/[TRACKER-ID]_[short_name].md`. Update the **Task** field inside the file to match the new filename. Update **Dependencies** references in the file to use tracker IDs of the dependency tasks.
---
### Step 3: Blackbox Test Task Decomposition (default and tests-only modes)
**Role**: Professional Quality Assurance Engineer
**Goal**: Decompose blackbox test specs into atomic, implementable task specs
**Constraints**: Behavioral specs only — describe what, not how. No test code.
**Numbering**:
- In default mode: continue sequential numbering from where Step 2 left off.
- In tests-only mode: start from 02 (01 is the test infrastructure bootstrap from Step 1t).
1. Read all test specs from `DOCUMENT_DIR/tests/` (`blackbox-tests.md`, `performance-tests.md`, `resilience-tests.md`, `security-tests.md`, `resource-limit-tests.md`)
2. Group related test scenarios into atomic tasks (e.g., one task per test category or per component under test)
3. Each task should reference the specific test scenarios it implements and the environment/test-data specs
4. Dependencies:
- In default mode: blackbox test tasks depend on the component implementation tasks they exercise
- In tests-only mode: blackbox test tasks depend on the test infrastructure bootstrap task (Step 1t)
5. Write each task spec using `templates/task.md`
6. Estimate complexity per task (1, 2, 3, 5, 8 points); no task should exceed 8 points — split if it does
7. Note task dependencies (referencing tracker IDs of already-created dependency tasks)
8. **Immediately after writing each task file**: create a work item ticket under the "Blackbox Tests" epic, write the work item ticket ID and Epic ID back into the task header, then rename the file from `todo/[##]_[short_name].md` to `todo/[TRACKER-ID]_[short_name].md`.
**Self-verification**:
- [ ] Every scenario from `tests/blackbox-tests.md` is covered by a task
- [ ] Every scenario from `tests/performance-tests.md`, `tests/resilience-tests.md`, `tests/security-tests.md`, and `tests/resource-limit-tests.md` is covered by a task
- [ ] No task exceeds 8 complexity points
- [ ] Dependencies correctly reference the dependency tasks (component tasks in default mode, test infrastructure in tests-only mode)
- [ ] Every task has a work item ticket linked to the "Blackbox Tests" epic
**Save action**: Write each `todo/[##]_[short_name].md` (temporary numeric name), create work item ticket inline, then rename to `todo/[TRACKER-ID]_[short_name].md`.
---
### Step 4: Cross-Task Verification (default and tests-only modes)
**Role**: Professional software architect and analyst
**Goal**: Verify task consistency and produce `_dependencies_table.md`
**Constraints**: Review step — fix gaps found, do not add new tasks
1. Verify task dependencies across all tasks are consistent
2. Check no gaps:
   - In default mode: every interface in architecture.md has tasks covering it
   - In tests-only mode: every test scenario in `traceability-matrix.md` is covered by a task
3. Check no overlaps: tasks don't duplicate work
4. Check no circular dependencies in the task graph
5. Produce `_dependencies_table.md` using `templates/dependencies-table.md`
**Self-verification**:
Default mode:
- [ ] Every architecture interface is covered by at least one task
- [ ] No circular dependencies in the task graph
- [ ] Cross-component dependencies are explicitly noted in affected task specs
- [ ] `_dependencies_table.md` contains every task with correct dependencies
Tests-only mode:
- [ ] Every test scenario from traceability-matrix.md "Covered" entries has a corresponding task
- [ ] No circular dependencies in the task graph
- [ ] Test task dependencies reference the test infrastructure bootstrap
- [ ] `_dependencies_table.md` contains every task with correct dependencies
**Save action**: Write `_dependencies_table.md`
**BLOCKING**: Present dependency summary to user. Do NOT proceed until user confirms.
---
## Common Mistakes
- **Coding during decomposition**: this workflow produces specs, never code
- **Over-splitting**: don't create many tasks if the component is simple — 1 task is fine
- **Tasks exceeding 8 points**: split them; no task should be too complex for a single implementer
- **Cross-component tasks**: each task belongs to exactly one component
- **Skipping BLOCKING gates**: never proceed past a BLOCKING marker without user confirmation
- **Creating git branches**: branch creation is an implementation concern, not a decomposition one
- **Creating component subdirectories**: all tasks go flat in `TASKS_TODO/`
- **Forgetting tracker**: every task must have a work item ticket created inline — do not defer to a separate step
- **Forgetting to rename**: after work item ticket creation, always rename the file from numeric prefix to tracker ID prefix
## Escalation Rules
| Situation | Action |
|-----------|--------|
| Ambiguous component boundaries | ASK user |
| Task complexity exceeds 8 points after splitting | ASK user |
| Missing component specs in DOCUMENT_DIR | ASK user |
| Cross-component dependency conflict | ASK user |
| Tracker epic not found for a component | ASK user for Epic ID |
| Task naming | PROCEED, confirm at next BLOCKING gate |
## Methodology Quick Reference
```
┌────────────────────────────────────────────────────────────────┐
│ Task Decomposition (Multi-Mode)                                │
├────────────────────────────────────────────────────────────────┤
│ CONTEXT: Resolve mode (default / single component / tests-only)│
│                                                                │
│ DEFAULT MODE:                                                  │
│ 1. Bootstrap Structure  → [TRACKER-ID]_initial_structure.md    │
│    [BLOCKING: user confirms structure]                         │
│ 2. Component Tasks      → [TRACKER-ID]_[short_name].md each    │
│ 3. Blackbox Tests       → [TRACKER-ID]_[short_name].md each    │
│ 4. Cross-Verification   → _dependencies_table.md               │
│    [BLOCKING: user confirms dependencies]                      │
│                                                                │
│ TESTS-ONLY MODE:                                               │
│ 1t. Test Infrastructure → [TRACKER-ID]_test_infrastructure.md  │
│     [BLOCKING: user confirms test scaffold]                    │
│ 3. Blackbox Tests       → [TRACKER-ID]_[short_name].md each    │
│ 4. Cross-Verification   → _dependencies_table.md               │
│     [BLOCKING: user confirms dependencies]                     │
│                                                                │
│ SINGLE COMPONENT MODE:                                         │
│ 2. Component Tasks      → [TRACKER-ID]_[short_name].md each    │
├────────────────────────────────────────────────────────────────┤
│ Principles: Atomic tasks · Behavioral specs · Flat structure   │
│             Tracker inline · Rename to tracker ID · Save now   │
│             Ask don't assume                                   │
└────────────────────────────────────────────────────────────────┘
```
# Dependencies Table Template
Use this template after cross-task verification. Save as `TASKS_DIR/_dependencies_table.md`.
---
```markdown
# Dependencies Table
**Date**: [YYYY-MM-DD]
**Total Tasks**: [N]
**Total Complexity Points**: [N]
| Task | Name | Complexity | Dependencies | Epic |
|------|------|-----------|-------------|------|
| [TRACKER-ID] | initial_structure | [points] | None | [EPIC-ID] |
| [TRACKER-ID] | [short_name] | [points] | [TRACKER-ID] | [EPIC-ID] |
| [TRACKER-ID] | [short_name] | [points] | [TRACKER-ID] | [EPIC-ID] |
| [TRACKER-ID] | [short_name] | [points] | [TRACKER-ID], [TRACKER-ID] | [EPIC-ID] |
| ... | ... | ... | ... | ... |
```
---
## Guidelines
- Every task from TASKS_DIR must appear in this table
- Dependencies column lists tracker IDs (e.g., "AZ-43, AZ-44") or "None"
- No circular dependencies allowed
- Tasks should be listed in recommended execution order
- The `/implement` skill reads this table to compute parallel batches
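The batch computation mentioned in the last guideline can be illustrated with a hypothetical sketch (the real `/implement` skill may differ): tasks whose dependencies are all already done form one batch, repeated until the table is exhausted.

```python
def parallel_batches(deps: dict[str, list[str]]) -> list[list[str]]:
    """Group tasks into batches; every task's dependencies sit in earlier batches.
    `deps` maps each task ID to the tracker IDs it depends on."""
    done: set[str] = set()
    remaining = dict(deps)
    batches: list[list[str]] = []
    while remaining:
        # All tasks whose dependencies are satisfied can run in parallel.
        batch = sorted(
            t for t, d in remaining.items()
            if all(dep in done for dep in d)
        )
        if not batch:
            raise ValueError("circular dependency in task table")
        for t in batch:
            remaining.pop(t)
        done.update(batch)
        batches.append(batch)
    return batches
```

For example, `{"AZ-42": [], "AZ-43": ["AZ-42"], "AZ-44": ["AZ-42"]}` yields `[["AZ-42"], ["AZ-43", "AZ-44"]]`: the bootstrap task first, then both dependents in parallel.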
# Initial Structure Plan Template
# Initial Structure Task Template
Use this template for the bootstrap structure plan. Save as `TASKS_DIR/01_initial_structure.md` initially, then rename to `TASKS_DIR/[TRACKER-ID]_initial_structure.md` after work item ticket creation.
---
```markdown
# Initial Project Structure
**Date**: [YYYY-MM-DD]
**Tech Stack**: [language, framework, database, etc.]
**Task**: [TRACKER-ID]_initial_structure
**Name**: Initial Structure
**Description**: Scaffold the project skeleton — folders, shared models, interfaces, stubs, CI/CD, DB migrations, test structure
**Complexity**: [3|5] points
**Dependencies**: None
**Component**: Bootstrap
**Tracker**: [TASK-ID]
**Epic**: [EPIC-ID]
## Project Folder Layout
| Component | Interface | Methods | Exposed To |
|-----------|-----------|---------|-----------|
| [name] | [InterfaceName] | [method list] | [consumers] |
## CI/CD Pipeline
| Build | Compile/bundle the application | Every push |
| Lint / Static Analysis | Code quality and style checks | Every push |
| Unit Tests | Run unit test suite | Every push |
| Blackbox Tests | Run blackbox test suite | Every push |
| Security Scan | SAST / dependency check | Every push |
| Deploy to Staging | Deploy to staging environment | Merge to staging branch |
@@ -97,16 +102,33 @@ tests/
| Order | Component | Reason |
|-------|-----------|--------|
| 1 | [##]_[name] | [why first — foundational, no dependencies] |
| 2 | [##]_[name] | [depends on #1] |
| 1 | [name] | [why first — foundational, no dependencies] |
| 2 | [name] | [depends on #1] |
| ... | ... | ... |
## Acceptance Criteria
**AC-1: Project scaffolded**
Given the structure plan above
When the implementer executes this task
Then all folders, stubs, and configuration files exist
**AC-2: Tests runnable**
Given the scaffolded project
When the test suite is executed
Then all stub tests pass (even if they only assert true)
**AC-3: CI/CD configured**
Given the scaffolded project
When CI pipeline runs
Then build, lint, and test stages complete successfully
```
---
## Guidance Notes
- This is a PLAN document, not code. The `3.05_implement_initial_structure` command executes it.
- This is a PLAN document, not code. The `/implement` skill executes it.
- Focus on structure and organization decisions, not implementation details.
- Reference component specs for interface and DTO details — don't repeat everything.
- The folder layout should follow conventions of the identified tech stack.
@@ -1,59 +0,0 @@
# Decomposition Summary Template
Use this template after all steps complete. Save as `TASKS_DIR/<topic>/SUMMARY.md`.
---
```markdown
# Decomposition Summary
**Date**: [YYYY-MM-DD]
**Topic**: [topic name]
**Total Components**: [N]
**Total Features**: [N]
**Total Complexity Points**: [N]
## Component Breakdown
| # | Component | Features | Total Points | Jira Epic |
|---|-----------|----------|-------------|-----------|
| 01 | [name] | [count] | [sum] | [EPIC-ID] |
| 02 | [name] | [count] | [sum] | [EPIC-ID] |
| ... | ... | ... | ... | ... |
## Feature List
| Component | Feature | Complexity | Jira Task | Dependencies |
|-----------|---------|-----------|-----------|-------------|
| [##]_[name] | [##].[##]_feature_[name] | [points] | [TASK-ID] | [deps or "None"] |
| ... | ... | ... | ... | ... |
## Implementation Order
Recommended sequence based on dependency analysis:
| Phase | Components / Features | Rationale |
|-------|----------------------|-----------|
| 1 | [list] | [foundational, no dependencies] |
| 2 | [list] | [depends on phase 1] |
| 3 | [list] | [depends on phase 1-2] |
| ... | ... | ... |
### Parallelization Opportunities
[Features/components that can be implemented concurrently within each phase]
## Cross-Component Dependencies
| From (Feature) | To (Feature) | Dependency Type |
|----------------|-------------|-----------------|
| [comp.feature] | [comp.feature] | [data / API / event] |
| ... | ... | ... |
## Artifacts Produced
- `initial_structure.md` — project skeleton plan
- `cross_dependencies.md` — dependency matrix
- `[##]_[name]/[##].[##]_feature_*.md` — feature specs per component
- Jira tasks created under respective epics
```
@@ -1,17 +1,21 @@
# Feature Specification Template
# Task Specification Template
Create a focused behavioral specification that describes **what** the system should do, not **how** it should be built.
Save as `TASKS_DIR/<topic>/[##]_[component_name]/[##].[##]_feature_[feature_name].md`.
Save as `TASKS_DIR/[##]_[short_name].md` initially, then rename to `TASKS_DIR/[TRACKER-ID]_[short_name].md` after work item ticket creation.
---
```markdown
# [Feature Name]
**Status**: Draft | **Date**: [YYYY-MM-DD] | **Feature**: [Brief Feature Description]
**Complexity**: [1|2|3|5] points
**Dependencies**: [List dependent features or "None"]
**Component**: [##]_[component_name]
**Task**: [TRACKER-ID]_[short_name]
**Name**: [short human name]
**Description**: [one-line description of what this task delivers]
**Complexity**: [1|2|3|5|8] points
**Dependencies**: [AZ-43_shared_models, AZ-44_db_migrations] or "None"
**Component**: [component name for context]
**Tracker**: [TASK-ID]
**Epic**: [EPIC-ID]
## Problem
@@ -21,11 +25,12 @@ Clear, concise statement of the problem users are facing.
- Measurable or observable goal 1
- Measurable or observable goal 2
- ...
## Scope
### Included
- What's in scope for this feature
- What's in scope for this task
### Excluded
- Explicitly what's NOT in scope
@@ -59,7 +64,7 @@ Then [expected result]
|--------|-------------|-----------------|
| AC-1 | [test subject] | [expected result] |
## Integration Tests
## Blackbox Tests
| AC Ref | Initial Data/Conditions | What to Test | Expected Behavior | NFR References |
|--------|------------------------|-------------|-------------------|----------------|
@@ -86,7 +91,8 @@ Then [expected result]
- 2 points: Non-trivial, low complexity, minimal coordination
- 3 points: Multi-step, moderate complexity, potential alignment needed
- 5 points: Difficult, interconnected logic, medium-high risk
- 8 points: Too complex — split into smaller features
- 8 points: High difficulty, high ambiguity or coordination, multiple components
- 13 points: Too complex — split into smaller tasks
## Output Guidelines
@@ -97,7 +103,7 @@ Then [expected result]
- Include realistic scope boundaries
- Write from the user's perspective
- Include complexity estimation
- Note dependencies on other features
- Reference dependencies by tracker ID (e.g., AZ-43_shared_models)
**DON'T:**
- Include implementation details (file paths, classes, methods)
@@ -0,0 +1,129 @@
# Test Infrastructure Task Template
Use this template for the test infrastructure bootstrap (Step 1t in tests-only mode). Save as `TASKS_DIR/01_test_infrastructure.md` initially, then rename to `TASKS_DIR/[TRACKER-ID]_test_infrastructure.md` after work item ticket creation.
---
```markdown
# Test Infrastructure
**Task**: [TRACKER-ID]_test_infrastructure
**Name**: Test Infrastructure
**Description**: Scaffold the blackbox test project — test runner, mock services, Docker test environment, test data fixtures, reporting
**Complexity**: [3|5] points
**Dependencies**: None
**Component**: Blackbox Tests
**Tracker**: [TASK-ID]
**Epic**: [EPIC-ID]
## Test Project Folder Layout
```
e2e/
├── conftest.py
├── requirements.txt
├── Dockerfile
├── mocks/
│ ├── [mock_service_1]/
│ │ ├── Dockerfile
│ │ └── [entrypoint file]
│ └── [mock_service_2]/
│       ├── Dockerfile
│       └── [entrypoint file]
├── fixtures/
│ └── [test data files]
├── tests/
│ ├── test_[category_1].py
│ ├── test_[category_2].py
│ └── ...
└── docker-compose.test.yml
```
### Layout Rationale
[Brief explanation of directory structure choices — framework conventions, separation of mocks from tests, fixture management]
## Mock Services
| Mock Service | Replaces | Endpoints | Behavior |
|-------------|----------|-----------|----------|
| [name] | [external service] | [endpoints it serves] | [response behavior, configurable via control API] |
### Mock Control API
Each mock service exposes a `POST /mock/config` endpoint for test-time behavior control (e.g., simulate downtime, inject errors). A `GET /mock/[resource]` endpoint returns recorded interactions for assertion.
## Docker Test Environment
### docker-compose.test.yml Structure
| Service | Image / Build | Purpose | Depends On |
|---------|--------------|---------|------------|
| [system-under-test] | [build context] | Main system being tested | [mock services] |
| [mock-1] | [build context] | Mock for [external service] | — |
| [e2e-consumer] | [build from e2e/] | Test runner | [system-under-test] |
### Networks and Volumes
[Isolated test network, volume mounts for test data, model files, results output]
## Test Runner Configuration
**Framework**: [e.g., pytest]
**Plugins**: [e.g., pytest-csv, sseclient-py, requests]
**Entry point**: [e.g., pytest --csv=/results/report.csv]
### Fixture Strategy
| Fixture | Scope | Purpose |
|---------|-------|---------|
| [name] | [session/module/function] | [what it provides] |
## Test Data Fixtures
| Data Set | Source | Format | Used By |
|----------|--------|--------|---------|
| [name] | [volume mount / generated / API seed] | [format] | [test categories] |
### Data Isolation
[Strategy: fresh containers per run, volume cleanup, mock state reset]
## Test Reporting
**Format**: [e.g., CSV]
**Columns**: [e.g., Test ID, Test Name, Execution Time (ms), Result, Error Message]
**Output path**: [e.g., /results/report.csv → mounted to host]
## Acceptance Criteria
**AC-1: Test environment starts**
Given the docker-compose.test.yml
When `docker compose -f docker-compose.test.yml up` is executed
Then all services start and the system-under-test is reachable
**AC-2: Mock services respond**
Given the test environment is running
When the e2e-consumer sends requests to mock services
Then mock services respond with configured behavior
**AC-3: Test runner executes**
Given the test environment is running
When the e2e-consumer starts
Then the test runner discovers and executes test files
**AC-4: Test report generated**
Given tests have been executed
When the test run completes
Then a report file exists at the configured output path with correct columns
```
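The mock control pattern described in the template can be sketched in-process. This is a minimal illustration with hypothetical names; in the real environment this state sits behind the `POST /mock/config` and `GET /mock/[resource]` HTTP endpoints:

```python
# Illustrative sketch of the configurable-mock pattern. The class and
# method names are invented for this example.
class MockService:
    def __init__(self):
        self.config = {"fail": False}
        self.recorded = []  # interactions recorded for later assertions

    def configure(self, **options):
        """Equivalent of POST /mock/config: adjust behavior at test time."""
        self.config.update(options)

    def handle(self, request):
        """Deterministic handler: same input and config -> same output."""
        self.recorded.append(request)
        if self.config["fail"]:
            return {"status": 503, "body": "simulated downtime"}
        return {"status": 200, "body": f"ok:{request}"}

mock = MockService()
assert mock.handle("ping")["status"] == 200
mock.configure(fail=True)  # simulate downtime mid-test
assert mock.handle("ping")["status"] == 503
assert mock.recorded == ["ping", "ping"]  # GET /mock/requests equivalent
```

Because the handler's output depends only on the request and the current config, the mock stays deterministic, which is what makes blackbox assertions repeatable.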
---
## Guidance Notes
- This is a PLAN document, not code. The `/implement` skill executes it.
- Focus on test infrastructure decisions, not individual test implementations.
- Reference environment.md and test-data.md from the test specs — don't repeat everything.
- Mock services must be deterministic: same input always produces same output.
- The Docker environment must be self-contained: a single `docker compose up` must be sufficient to run the suite.
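As a minimal sketch of the report format suggested above (the column names are the template's examples; a real run would produce this via the framework's reporting plugin, e.g. `pytest-csv`):

```python
# Sketch: write a test report with the template's example columns.
# Row contents here are invented for illustration.
import csv
import io

rows = [
    {"Test ID": "AC-1", "Test Name": "test_environment_starts",
     "Execution Time (ms)": 812, "Result": "PASS", "Error Message": ""},
    {"Test ID": "AC-2", "Test Name": "test_mock_services_respond",
     "Execution Time (ms)": 133, "Result": "FAIL",
     "Error Message": "mock-1 returned 500"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(rows[0]))
writer.writeheader()
writer.writerows(rows)
report = buf.getvalue()  # in the container this would go to /results/report.csv
```

Keeping the header row fixed lets downstream tooling parse results without caring which framework produced them.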