Refactor documentation structure by renaming 'plans' directory to 'document' across various skills and templates. Update references in README and skill files to reflect the new directory structure for improved clarity and organization.

This commit is contained in:
Oleksandr Bezdieniezhnykh
2026-03-21 14:25:05 +02:00
parent 7556f3b012
commit 2a17590248
22 changed files with 676 additions and 206 deletions
+42 -14
@@ -26,7 +26,7 @@ Auto-chaining execution engine that drives the full BUILD → SHIP workflow. Det
- **Delegate, don't duplicate**: read and execute each sub-skill's SKILL.md; never inline their logic here
- **Sound on pause**: follow `.cursor/rules/human-input-sound.mdc` — play a notification sound before every pause that requires human input
- **Minimize interruptions**: only ask the user when the decision genuinely cannot be resolved automatically
- **Jira MCP required**: steps that create Jira artifacts (Plan Step 6, Decompose) must have authenticated Jira MCP — never skip or substitute with local files
- **Jira MCP recommended**: steps that create Jira artifacts (Plan Step 6, Decompose) should have authenticated Jira MCP — if unavailable, offer user the choice to continue with local-only task tracking
## Jira MCP Authentication
@@ -45,22 +45,26 @@ Before entering **Step 2 (Plan)** or **Step 3 (Decompose)** for the first time,
1. Call `mcp_auth` on the Jira MCP server
2. If authentication succeeds → proceed normally
3. If the user **skips** authentication → **STOP**. Present using Choose format:
3. If the user **skips** or authentication fails → present using Choose format:
```
══════════════════════════════════════
BLOCKER: Jira MCP authentication required
Jira MCP authentication failed
══════════════════════════════════════
A) Authenticate now (retry mcp_auth)
B) Pause autopilot — resume after configuring Jira MCP
A) Retry authentication (retry mcp_auth)
B) Continue without Jira (tasks saved locally only)
══════════════════════════════════════
Note: Jira integration is mandatory. Plan and Decompose
steps create epics and tasks that drive implementation.
Local-only workarounds are not acceptable.
Recommendation: A — Jira IDs drive task referencing,
dependency tracking, and implementation batching.
Without Jira, task files use numeric prefixes instead.
══════════════════════════════════════
```
Do NOT offer a "skip Jira" or "save locally" option. The workflow depends on Jira IDs for task referencing, dependency tracking, and implementation batching.
If user picks **B** (continue without Jira):
- Set a flag in the state file: `jira_enabled: false`
- All skills that would create Jira tickets instead save metadata locally in the task/epic files with `Jira: pending` status
- Task files keep numeric prefixes (e.g., `01_initial_structure.md`) instead of Jira ID prefixes
- The workflow proceeds normally in all other respects
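The naming fallback can be sketched as a tiny helper (illustrative only; `task_filename` and the `PROJ-17` ID are hypothetical, not part of any skill):

```python
def task_filename(slug, seq, jira_enabled, jira_id=None):
    """Pick the task file name: Jira ID prefix when Jira is enabled,
    zero-padded numeric prefix otherwise."""
    if jira_enabled and jira_id:
        return f"{jira_id}_{slug}.md"   # e.g. "PROJ-17_initial_structure.md"
    return f"{seq:02d}_{slug}.md"       # e.g. "01_initial_structure.md"
```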
### Re-Authentication
@@ -182,7 +186,7 @@ notes: [any context for next session, e.g. "User asked to revisit risk assessmen
1. **Create** the state file on the very first autopilot invocation (after state detection determines Step 0)
2. **Update** the state file after every step completion, every session boundary, and every BLOCKING gate confirmation
3. **Read** the state file as the first action on every invocation — before folder scanning
4. **Cross-check**: after reading the state file, verify against actual `_docs/` folder contents. If they disagree (e.g., state file says Step 2 but `_docs/02_plans/architecture.md` already exists), trust the folder structure and update the state file to match
4. **Cross-check**: after reading the state file, verify against actual `_docs/` folder contents. If they disagree (e.g., state file says Step 2 but `_docs/02_document/architecture.md` already exists), trust the folder structure and update the state file to match
5. **Never delete** the state file. It accumulates history across the entire project lifecycle
## Execution Entry Point
@@ -213,6 +217,29 @@ Scan `_docs/` to determine the current workflow position. Check rules in order
### Detection Rules
**Pre-Step — Existing Codebase Detection**
Condition: `_docs/` does not exist AND the workspace contains source code files (e.g., `*.py`, `*.cs`, `*.rs`, `*.ts`, `src/`, `Cargo.toml`, `*.csproj`, `package.json`)
Action: An existing codebase without documentation was detected. Present using Choose format:
```
══════════════════════════════════════
DECISION REQUIRED: Existing codebase detected
══════════════════════════════════════
A) Start fresh — define the problem from scratch (normal workflow)
B) Document existing codebase first — run /document to reverse-engineer docs, then continue
══════════════════════════════════════
Recommendation: B — the /document skill analyzes your code
bottom-up and produces _docs/ artifacts automatically,
then you can continue with refactor or the normal workflow.
══════════════════════════════════════
```
- If user picks A → proceed to Step 0 (Problem Gathering) as normal
- If user picks B → read and execute `.cursor/skills/document/SKILL.md`. After document skill completes, re-detect state (the produced `_docs/` artifacts will place the project at Step 2 or later).
---
**Step 0 — Problem Gathering**
Condition: `_docs/00_problem/` does not exist, OR any of these are missing/empty:
- `problem.md`
@@ -232,7 +259,7 @@ Action: Read and execute `.cursor/skills/research/SKILL.md` (will auto-detect Mo
---
**Step 1b — Research Decision**
Condition: `_docs/01_solution/` contains `solution_draft*.md` files AND `_docs/01_solution/solution.md` does not exist AND `_docs/02_plans/architecture.md` does not exist
Condition: `_docs/01_solution/` contains `solution_draft*.md` files AND `_docs/01_solution/solution.md` does not exist AND `_docs/02_document/architecture.md` does not exist
Action: Present the current research state to the user:
- How many solution drafts exist
@@ -258,18 +285,18 @@ Then present using the **Choose format**:
---
**Step 2 — Plan**
Condition: `_docs/01_solution/` has `solution_draft*.md` files AND `_docs/02_plans/architecture.md` does not exist
Condition: `_docs/01_solution/` has `solution_draft*.md` files AND `_docs/02_document/architecture.md` does not exist
Action:
1. The plan skill's Prereq 2 will rename the latest draft to `solution.md` — this is handled by the plan skill itself
2. Read and execute `.cursor/skills/plan/SKILL.md`
If `_docs/02_plans/` exists but is incomplete (has some artifacts but no `FINAL_report.md`), the plan skill's built-in resumability handles it.
If `_docs/02_document/` exists but is incomplete (has some artifacts but no `FINAL_report.md`), the plan skill's built-in resumability handles it.
---
**Step 3 — Decompose**
Condition: `_docs/02_plans/` contains `architecture.md` AND `_docs/02_plans/components/` has at least one component AND `_docs/02_tasks/` does not exist or has no task files (excluding `_dependencies_table.md`)
Condition: `_docs/02_document/` contains `architecture.md` AND `_docs/02_document/components/` has at least one component AND `_docs/02_tasks/` does not exist or has no task files (excluding `_dependencies_table.md`)
Action: Read and execute `.cursor/skills/decompose/SKILL.md`
@@ -417,6 +444,7 @@ This skill activates when the user wants to:
**Differentiation**:
- User wants only research → use `/research` directly
- User wants only planning → use `/plan` directly
- User wants to document an existing codebase → use `/document` directly
- User wants the full guided workflow → use `/autopilot`
## Methodology Quick Reference
+12 -12
@@ -32,13 +32,13 @@ Decompose planned components into atomic, implementable task specs with a bootst
Determine the operating mode based on invocation before any other logic runs.
**Default** (no explicit input file provided):
- PLANS_DIR: `_docs/02_plans/`
- DOCUMENT_DIR: `_docs/02_document/`
- TASKS_DIR: `_docs/02_tasks/`
- Reads from: `_docs/00_problem/`, `_docs/01_solution/`, PLANS_DIR
- Reads from: `_docs/00_problem/`, `_docs/01_solution/`, DOCUMENT_DIR
- Runs Step 1 (bootstrap) + Step 2 (all components) + Step 3 (integration tests) + Step 4 (cross-verification)
**Single component mode** (provided file is within `_docs/02_plans/` and inside a `components/` subdirectory):
- PLANS_DIR: `_docs/02_plans/`
**Single component mode** (provided file is within `_docs/02_document/` and inside a `components/` subdirectory):
- DOCUMENT_DIR: `_docs/02_document/`
- TASKS_DIR: `_docs/02_tasks/`
- Derive component number and component name from the file path
- Ask user for the parent Epic ID
@@ -58,10 +58,10 @@ Announce the detected mode and resolved paths to the user before proceeding.
| `_docs/00_problem/restrictions.md` | Constraints and limitations |
| `_docs/00_problem/acceptance_criteria.md` | Measurable acceptance criteria |
| `_docs/01_solution/solution.md` | Finalized solution |
| `PLANS_DIR/architecture.md` | Architecture from plan skill |
| `PLANS_DIR/system-flows.md` | System flows from plan skill |
| `PLANS_DIR/components/[##]_[name]/description.md` | Component specs from plan skill |
| `PLANS_DIR/integration_tests/` | Integration test specs from plan skill |
| `DOCUMENT_DIR/architecture.md` | Architecture from plan skill |
| `DOCUMENT_DIR/system-flows.md` | System flows from plan skill |
| `DOCUMENT_DIR/components/[##]_[name]/description.md` | Component specs from plan skill |
| `DOCUMENT_DIR/integration_tests/` | Integration test specs from plan skill |
**Single component mode:**
@@ -73,7 +73,7 @@ Announce the detected mode and resolved paths to the user before proceeding.
### Prerequisite Checks (BLOCKING)
**Default:**
1. PLANS_DIR contains `architecture.md` and `components/` — **STOP if missing**
1. DOCUMENT_DIR contains `architecture.md` and `components/` — **STOP if missing**
2. Create TASKS_DIR if it does not exist
3. If TASKS_DIR already contains task files, ask user: **resume from last checkpoint or start fresh?**
@@ -124,7 +124,7 @@ At the start of execution, create a TodoWrite with all applicable steps. Update
**Goal**: Produce `01_initial_structure.md` — the first task describing the project skeleton
**Constraints**: This is a plan document, not code. The `/implement` skill executes it.
1. Read architecture.md, all component specs, system-flows.md, data_model.md, and `deployment/` from PLANS_DIR
1. Read architecture.md, all component specs, system-flows.md, data_model.md, and `deployment/` from DOCUMENT_DIR
2. Read problem, solution, and restrictions from `_docs/00_problem/` and `_docs/01_solution/`
3. Research best implementation patterns for the identified tech stack
4. Document the structure plan using `templates/initial-structure-task.md`
@@ -208,7 +208,7 @@ For each component (or the single provided component):
**Numbering**: Continue sequential numbering from where Step 2 left off.
1. Read all test specs from `PLANS_DIR/integration_tests/` (functional_tests.md, non_functional_tests.md)
1. Read all test specs from `DOCUMENT_DIR/integration_tests/` (functional_tests.md, non_functional_tests.md)
2. Group related test scenarios into atomic tasks (e.g., one task per test category or per component under test)
3. Each task should reference the specific test scenarios it implements and the environment/test_data specs
4. Dependencies: integration test tasks depend on the component implementation tasks they exercise
@@ -270,7 +270,7 @@ For each component (or the single provided component):
|-----------|--------|
| Ambiguous component boundaries | ASK user |
| Task complexity exceeds 5 points after splitting | ASK user |
| Missing component specs in PLANS_DIR | ASK user |
| Missing component specs in DOCUMENT_DIR | ASK user |
| Cross-component dependency conflict | ASK user |
| Jira epic not found for a component | ASK user for Epic ID |
| Task naming | PROCEED, confirm at next BLOCKING gate |
+6 -6
@@ -32,12 +32,12 @@ Plan and document the full deployment lifecycle: check deployment status and env
Fixed paths:
- PLANS_DIR: `_docs/02_plans/`
- DOCUMENT_DIR: `_docs/02_document/`
- DEPLOY_DIR: `_docs/04_deploy/`
- REPORTS_DIR: `_docs/04_deploy/reports/`
- SCRIPTS_DIR: `scripts/`
- ARCHITECTURE: `_docs/02_plans/architecture.md`
- COMPONENTS_DIR: `_docs/02_plans/components/`
- ARCHITECTURE: `_docs/02_document/architecture.md`
- COMPONENTS_DIR: `_docs/02_document/components/`
Announce the resolved paths to the user before proceeding.
@@ -50,13 +50,13 @@ Announce the resolved paths to the user before proceeding.
| `_docs/00_problem/problem.md` | Problem description and context |
| `_docs/00_problem/restrictions.md` | Constraints and limitations |
| `_docs/01_solution/solution.md` | Finalized solution |
| `PLANS_DIR/architecture.md` | Architecture from plan skill |
| `PLANS_DIR/components/` | Component specs |
| `DOCUMENT_DIR/architecture.md` | Architecture from plan skill |
| `DOCUMENT_DIR/components/` | Component specs |
### Prerequisite Checks (BLOCKING)
1. `architecture.md` exists — **STOP if missing**, run `/plan` first
2. At least one component spec exists in `PLANS_DIR/components/` — **STOP if missing**
2. At least one component spec exists in `DOCUMENT_DIR/components/` — **STOP if missing**
3. Create DEPLOY_DIR, REPORTS_DIR, and SCRIPTS_DIR if they do not exist
4. If DEPLOY_DIR already contains artifacts, ask user: **resume from last checkpoint or start fresh?**
@@ -0,0 +1,114 @@
# Deployment Scripts Documentation Template
Save as `_docs/04_deploy/deploy_scripts.md`.
---
```markdown
# [System Name] — Deployment Scripts
## Overview
| Script | Purpose | Location |
|--------|---------|----------|
| `deploy.sh` | Main deployment orchestrator | `scripts/deploy.sh` |
| `pull-images.sh` | Pull Docker images from registry | `scripts/pull-images.sh` |
| `start-services.sh` | Start all services | `scripts/start-services.sh` |
| `stop-services.sh` | Graceful shutdown | `scripts/stop-services.sh` |
| `health-check.sh` | Verify deployment health | `scripts/health-check.sh` |
## Prerequisites
- Docker and Docker Compose installed on target machine
- SSH access to target machine (configured via `DEPLOY_HOST`)
- Container registry credentials configured
- `.env` file with required environment variables (see `.env.example`)
## Environment Variables
All scripts source `.env` from the project root or accept variables from the environment.
| Variable | Required By | Purpose |
|----------|------------|---------|
| `DEPLOY_HOST` | All (remote mode) | SSH target for remote deployment |
| `REGISTRY_URL` | `pull-images.sh` | Container registry URL |
| `REGISTRY_USER` | `pull-images.sh` | Registry authentication |
| `REGISTRY_PASS` | `pull-images.sh` | Registry authentication |
| `IMAGE_TAG` | `pull-images.sh`, `start-services.sh` | Image version to deploy (default: latest git SHA) |
| [add project-specific variables] | | |
## Script Details
### deploy.sh
Main orchestrator that runs the full deployment flow.
**Usage**:
- `./scripts/deploy.sh` — Deploy latest version
- `./scripts/deploy.sh --rollback` — Rollback to previous version
- `./scripts/deploy.sh --help` — Show usage
**Flow**:
1. Validate required environment variables
2. Call `pull-images.sh`
3. Call `stop-services.sh`
4. Call `start-services.sh`
5. Call `health-check.sh`
6. Report success or failure
**Rollback**: When `--rollback` is passed, reads the previous image tags saved by `stop-services.sh` and redeploys those versions.
### pull-images.sh
**Usage**: `./scripts/pull-images.sh [--help]`
**Steps**:
1. Authenticate with container registry (`REGISTRY_URL`)
2. Pull all required images with specified `IMAGE_TAG`
3. Verify image integrity via digest check
4. Report pull results per image
### start-services.sh
**Usage**: `./scripts/start-services.sh [--help]`
**Steps**:
1. Run `docker compose up -d` with the correct env file
2. Configure networks and volumes
3. Wait for all containers to report healthy state
4. Report startup status per service
### stop-services.sh
**Usage**: `./scripts/stop-services.sh [--help]`
**Steps**:
1. Save current image tags to `previous_tags.env` (for rollback)
2. Stop services with graceful shutdown period (30s)
3. Clean up orphaned containers and networks
### health-check.sh
**Usage**: `./scripts/health-check.sh [--help]`
**Checks**:
| Service | Endpoint | Expected |
|---------|----------|----------|
| [Component 1] | `http://localhost:[port]/health/live` | HTTP 200 |
| [Component 2] | `http://localhost:[port]/health/ready` | HTTP 200 |
| [add all services] | | |
**Exit codes**:
- `0` — All services healthy
- `1` — One or more services unhealthy
## Common Script Properties
All scripts:
- Use `#!/bin/bash` with `set -euo pipefail`
- Support `--help` flag for usage information
- Source `.env` from project root if present
- Are idempotent where possible
- Support remote execution via SSH when `DEPLOY_HOST` is set
```
+445
@@ -0,0 +1,445 @@
---
name: document
description: |
Bottom-up codebase documentation skill. Analyzes existing code from modules up through components
to architecture, then retrospectively derives problem/restrictions/acceptance criteria.
Produces the same _docs/ artifacts as the problem, research, and plan skills, but from code
analysis instead of user interview.
Trigger phrases:
- "document", "document codebase", "document this project"
- "documentation", "generate documentation", "create documentation"
- "reverse-engineer docs", "code to docs"
- "analyze and document"
category: build
tags: [documentation, code-analysis, reverse-engineering, architecture, bottom-up]
disable-model-invocation: true
---
# Bottom-Up Codebase Documentation
Analyze an existing codebase from the bottom up — individual modules first, then components, then system-level architecture — and produce the same `_docs/` artifacts that the `problem` and `plan` skills generate, without requiring user interview.
## Core Principles
- **Bottom-up always**: module docs -> component specs -> architecture/flows -> solution -> problem extraction. Every higher level is synthesized from the level below.
- **Dependencies first**: process modules in topological order (leaves first). When documenting module X, all of X's dependencies already have docs.
- **Incremental context**: each module's doc uses already-written dependency docs as context — no ever-growing chain.
- **Verify against code**: cross-reference every entity in generated docs against actual codebase. Catch hallucinations.
- **Save immediately**: write each artifact as soon as its step completes. Enable resume from any checkpoint.
- **Ask, don't assume**: when code intent is ambiguous, ASK the user before proceeding.
## Context Resolution
Fixed paths:
- DOCUMENT_DIR: `_docs/02_document/`
- SOLUTION_DIR: `_docs/01_solution/`
- PROBLEM_DIR: `_docs/00_problem/`
Announce the resolved paths to the user before proceeding.
## Prerequisite Checks
1. If `_docs/` already exists and contains files, ASK user: **overwrite, merge, or write to `_docs_generated/` instead?**
2. Create DOCUMENT_DIR, SOLUTION_DIR, and PROBLEM_DIR if they don't exist
3. If DOCUMENT_DIR contains a `state.json`, offer to **resume from last checkpoint or start fresh**
## Progress Tracking
Create a TodoWrite with all steps (0 through 7). Update status as each step completes.
## Workflow
### Step 0: Codebase Discovery
**Role**: Code analyst
**Goal**: Build a complete map of the codebase before analyzing any code.
Scan and catalog:
1. Directory tree (ignore `node_modules`, `.git`, `__pycache__`, `bin/`, `obj/`, build artifacts)
2. Language detection from file extensions and config files
3. Package manifests: `package.json`, `requirements.txt`, `pyproject.toml`, `*.csproj`, `Cargo.toml`, `go.mod`
4. Config files: `Dockerfile`, `docker-compose.yml`, `.env.example`, CI/CD configs (`.github/workflows/`, `.gitlab-ci.yml`, `azure-pipelines.yml`)
5. Entry points: `main.*`, `app.*`, `index.*`, `Program.*`, startup scripts
6. Test structure: test directories, test frameworks, test runner configs
7. Existing documentation: README, `docs/`, wiki references, inline doc coverage
8. **Dependency graph**: build a module-level dependency graph by analyzing imports/references. Identify:
- Leaf modules (no internal dependencies)
- Entry points (no internal dependents)
- Cycles (mark for grouped analysis)
- Topological processing order
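The catalog pass in items 1 and 2 can be sketched as follows (the ignore set and extension map are illustrative, not exhaustive):

```python
import os

IGNORE = {"node_modules", ".git", "__pycache__", "bin", "obj", "dist", "build"}
LANG_BY_EXT = {".py": "python", ".ts": "typescript", ".cs": "csharp", ".rs": "rust"}

def catalog(root):
    """Walk the tree, pruning ignored directories, and collect files
    plus a coarse language list from file extensions."""
    files, langs = [], set()
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames[:] = [d for d in dirnames if d not in IGNORE]  # prune in place
        for name in filenames:
            files.append(os.path.join(dirpath, name))
            ext = os.path.splitext(name)[1]
            if ext in LANG_BY_EXT:
                langs.add(LANG_BY_EXT[ext])
    return {"files": files, "languages": sorted(langs)}
```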
**Save**: `DOCUMENT_DIR/00_discovery.md` containing:
- Directory tree (concise, relevant directories only)
- Tech stack summary table (language, framework, database, infra)
- Dependency graph (textual list + Mermaid diagram)
- Topological processing order
- Entry points and leaf modules
**Save**: `DOCUMENT_DIR/state.json` with initial state:
```json
{
"current_step": "module-analysis",
"completed_steps": ["discovery"],
"modules_total": 0,
"modules_documented": [],
"modules_remaining": [],
"components_written": [],
"last_updated": ""
}
```
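The graph analysis in item 8 can be sketched with Kahn's algorithm: leaf modules come out first, and any module whose dependency count never reaches zero sits in a cycle and is grouped for combined analysis. A minimal sketch, assuming every dependency in `deps` is itself a key:

```python
from collections import deque

def processing_order(deps):
    """deps maps each module to the set of internal modules it imports.
    Returns (topological order, modules stuck in cycles)."""
    indegree = {m: len(ds) for m, ds in deps.items()}   # unresolved deps
    dependents = {m: set() for m in deps}
    for m, ds in deps.items():
        for d in ds:
            dependents[d].add(m)
    queue = deque(m for m, n in indegree.items() if n == 0)  # leaves first
    order = []
    while queue:
        m = queue.popleft()
        order.append(m)
        for consumer in dependents[m]:
            indegree[consumer] -= 1
            if indegree[consumer] == 0:
                queue.append(consumer)
    cyclic = {m for m, n in indegree.items() if n > 0}   # never reached zero
    return order, cyclic
```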
---
### Step 1: Module-Level Documentation
**Role**: Code analyst
**Goal**: Document every identified module individually, processing in topological order (leaves first).
For each module in topological order:
1. **Read**: read the module's source code. Assess complexity and what context is needed.
2. **Gather context**: collect already-written docs of this module's dependencies (available because of bottom-up order). Note external library usage.
3. **Write module doc** with these sections:
- **Purpose**: one-sentence responsibility
- **Public interface**: exported functions/classes/methods with signatures, input/output types
- **Internal logic**: key algorithms, patterns, non-obvious behavior
- **Dependencies**: what it imports internally and why
- **Consumers**: what uses this module (from the dependency graph)
- **Data models**: entities/types defined in this module
- **Configuration**: env vars, config keys consumed
- **External integrations**: HTTP calls, DB queries, queue operations, file I/O
- **Security**: auth checks, encryption, input validation, secrets access
- **Tests**: what tests exist for this module, what they cover
4. **Verify**: cross-check that every entity referenced in the doc exists in the codebase. Flag uncertainties.
**Cycle handling**: modules in a dependency cycle are analyzed together as a group, producing a single combined doc.
**Large modules**: if a module exceeds comfortable analysis size, split into logical sub-sections and analyze each part, then combine.
**Save**: `DOCUMENT_DIR/modules/[module_name].md` for each module.
**State**: update `state.json` after each module completes (move from `modules_remaining` to `modules_documented`).
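The per-module loop can be sketched as follows (a minimal illustration; `write_doc` stands in for the actual analysis step and is hypothetical):

```python
from pathlib import Path

def document_modules(order, deps, write_doc, out_dir="_docs/02_document/modules"):
    """Walk modules in topological order; each doc is written with only its
    dependencies' already-written docs as context (incremental context)."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    safe = lambda name: name.replace("/", "__")  # flatten nested module names
    for module in order:
        # Dependency docs already exist thanks to the topological order
        context = {d: (out / f"{safe(d)}.md").read_text() for d in deps[module]}
        (out / f"{safe(module)}.md").write_text(write_doc(module, context))
```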
---
### Step 2: Component Assembly
**Role**: Software architect
**Goal**: Group related modules into logical components and produce component specs.
1. Analyze module docs from Step 1 to identify natural groupings:
- By directory structure (most common)
- By shared data models or common purpose
- By dependency clusters (tightly coupled modules)
2. For each identified component, synthesize its module docs into a single component specification using `templates/component-spec.md` as structure:
- High-level overview: purpose, pattern, upstream/downstream
- Internal interfaces: method signatures, DTOs (from actual module code)
- External API specification (if the component exposes HTTP/gRPC endpoints)
- Data access patterns: queries, caching, storage estimates
- Implementation details: algorithmic complexity, state management, key libraries
- Extensions and helpers: shared utilities needed
- Caveats and edge cases: limitations, race conditions, bottlenecks
- Dependency graph: implementation order relative to other components
- Logging strategy
3. Identify common helpers shared across multiple components -> document in `common-helpers/`
4. Generate component relationship diagram (Mermaid)
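The directory-based grouping heuristic can be sketched as follows (illustrative; the `core` bucket for root-level modules is an assumption):

```python
from collections import defaultdict

def group_by_directory(module_names):
    """Cluster modules by top-level directory; root-level modules
    fall into a 'core' group."""
    groups = defaultdict(list)
    for name in module_names:
        top = name.split("/")[0] if "/" in name else "core"
        groups[top].append(name)
    return dict(groups)
```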
**Self-verification**:
- [ ] Every module from Step 1 is covered by exactly one component
- [ ] No component has overlapping responsibility with another
- [ ] Inter-component interfaces are explicit (who calls whom, with what)
- [ ] Component dependency graph has no circular dependencies
**Save**:
- `DOCUMENT_DIR/components/[##]_[name]/description.md` per component
- `DOCUMENT_DIR/common-helpers/[##]_helper_[name].md` per shared helper
- `DOCUMENT_DIR/diagrams/components.md` (Mermaid component diagram)
**BLOCKING**: Present component list with one-line summaries to user. Do NOT proceed until user confirms the component breakdown is correct.
---
### Step 3: System-Level Synthesis
**Role**: Software architect
**Goal**: From component docs, synthesize system-level documents.
All documents here are derived from component docs (Step 2) + module docs (Step 1). No new code reading should be needed. If it is, that indicates a gap in Steps 1-2 — go back and fill it.
#### 3a. Architecture
Using `templates/architecture.md` as structure:
- System context and boundaries from entry points and external integrations
- Tech stack table from discovery (Step 0) + component specs
- Deployment model from Dockerfiles, CI configs, environment strategies
- Data model overview from per-component data access sections
- Integration points from inter-component interfaces
- NFRs from test thresholds, config limits, health checks
- Security architecture from per-module security observations
- Key ADRs inferred from technology choices and patterns
**Save**: `DOCUMENT_DIR/architecture.md`
#### 3b. System Flows
Using `templates/system-flows.md` as structure:
- Trace main flows through the component interaction graph
- Entry point -> component chain -> output for each major flow
- Mermaid sequence diagrams and flowcharts
- Error scenarios from exception handling patterns
- Data flow tables per flow
**Save**: `DOCUMENT_DIR/system-flows.md` and `DOCUMENT_DIR/diagrams/flows/flow_[name].md`
#### 3c. Data Model
- Consolidate all data models from module docs
- Entity-relationship diagram (Mermaid ERD)
- Migration strategy (if ORM/migration tooling detected)
- Seed data observations
- Backward compatibility approach (if versioning found)
**Save**: `DOCUMENT_DIR/data_model.md`
#### 3d. Deployment (if Dockerfile/CI configs exist)
- Containerization summary
- CI/CD pipeline structure
- Environment strategy (dev, staging, production)
- Observability (logging patterns, metrics, health checks found in code)
**Save**: `DOCUMENT_DIR/deployment/` (containerization.md, ci_cd_pipeline.md, environment_strategy.md, observability.md — only files for which sufficient code evidence exists)
---
### Step 4: Verification Pass
**Role**: Quality verifier
**Goal**: Compare every generated document against actual code. Fix hallucinations, fill gaps, correct inaccuracies.
For each document generated in Steps 1-3:
1. **Entity verification**: extract all code entities (class names, function names, module names, endpoints) mentioned in the doc. Cross-reference each against the actual codebase. Flag any that don't exist.
2. **Interface accuracy**: for every method signature, DTO, or API endpoint in component specs, verify it matches actual code.
3. **Flow correctness**: for each system flow diagram, trace the actual code path and verify the sequence matches.
4. **Completeness check**: are there modules or components discovered in Step 0 that aren't covered by any document? Flag gaps.
5. **Consistency check**: do component docs agree with architecture doc? Do flow diagrams match component interfaces?
Apply corrections inline to the documents that need them.
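Item 1 can be sketched as a cross-reference of backticked identifiers against a symbol index (a simplification: real verification would inspect the code itself rather than a prebuilt set):

```python
import re

def flag_unknown_entities(doc_text, code_symbols):
    """Return identifiers mentioned in the doc that don't exist in the codebase."""
    mentioned = set(re.findall(r"`([A-Za-z_][\w./]*)`", doc_text))
    return {m for m in mentioned if m not in code_symbols}
```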
**Save**: `DOCUMENT_DIR/04_verification_log.md` with:
- Total entities verified vs flagged
- Corrections applied (which document, what changed)
- Remaining gaps or uncertainties
- Completeness score (modules covered / total modules)
**BLOCKING**: Present verification summary to user. Do NOT proceed until user confirms corrections are acceptable or requests additional fixes.
---
### Step 5: Solution Extraction (Retrospective)
**Role**: Software architect
**Goal**: From all verified technical documentation, retrospectively create `solution.md` — the same artifact the research skill produces. This makes downstream skills (`plan`, `deploy`, `decompose`) compatible with the documented codebase.
Synthesize from architecture (Step 3) + component specs (Step 2) + system flows (Step 3) + verification findings (Step 4):
1. **Product Solution Description**: what the system is, brief component interaction diagram (Mermaid)
2. **Architecture**: the architecture that is implemented, with per-component solution tables:
| Solution | Tools | Advantages | Limitations | Requirements | Security | Cost | Fit |
|----------|-------|-----------|-------------|-------------|----------|------|-----|
| [actual implementation] | [libs/platforms used] | [observed strengths] | [observed limitations] | [requirements met] | [security approach] | [cost indicators] | [fitness assessment] |
3. **Testing Strategy**: summarize integration/functional tests and non-functional tests found in the codebase
4. **References**: links to key config files, Dockerfiles, CI configs that evidence the solution choices
**Save**: `SOLUTION_DIR/solution.md` (`_docs/01_solution/solution.md`)
---
### Step 6: Problem Extraction (Retrospective)
**Role**: Business analyst
**Goal**: From all verified technical docs, retrospectively derive the high-level problem definition — producing the same documents the `problem` skill creates through interview.
This is the inverse of the normal workflow: instead of problem -> solution -> code, we go code -> technical docs -> problem understanding.
#### 6a. `problem.md`
- Synthesize from architecture overview + component purposes + system flows
- What is this system? What problem does it solve? Who are the users? How does it work at a high level?
- Cross-reference with README if one exists
- Free-form text, concise, readable by someone unfamiliar with the project
#### 6b. `restrictions.md`
- Extract from: tech stack choices, Dockerfile specs (OS, base images), CI configs (platform constraints), dependency versions, environment configs
- Categorize with headers: Hardware, Software, Environment, Operational
- Each restriction should be specific and testable
#### 6c. `acceptance_criteria.md`
- Derive from: test assertions (expected values, thresholds), performance configs (timeouts, rate limits, batch sizes), health check endpoints, validation rules in code
- Categorize with headers by domain
- Every criterion must have a measurable value — if only implied, note the source
#### 6d. `input_data/`
- Document data schemas found (DB schemas, API request/response types, config file formats)
- Create `data_parameters.md` describing what data the system consumes, formats, volumes, update patterns
#### 6e. `security_approach.md` (only if security code found)
- Authentication mechanisms, authorization patterns, encryption, secrets handling, CORS, rate limiting, input sanitization — all from code observations
- If no security-relevant code found, skip this file
**Save**: all files to `PROBLEM_DIR/` (`_docs/00_problem/`)
**BLOCKING**: Present all problem documents to user. These are the most abstracted and therefore most prone to interpretation error. Do NOT proceed until user confirms or requests corrections.
---
### Step 7: Final Report
**Role**: Technical writer
**Goal**: Produce `FINAL_report.md` integrating all generated documentation.
Using `templates/final-report.md` as structure:
- Executive summary from architecture + problem docs
- Problem statement (transformed from problem.md, not copy-pasted)
- Architecture overview with tech stack one-liner
- Component summary table (number, name, purpose, dependencies)
- System flows summary table
- Risk observations from verification log (Step 4)
- Open questions (uncertainties flagged during analysis)
- Artifact index listing all generated documents with paths
**Save**: `DOCUMENT_DIR/FINAL_report.md`
**State**: update `state.json` with `current_step: "complete"`.
---
## Artifact Management
### Directory Structure
```
_docs/
├── 00_problem/ # Step 6 (retrospective)
│ ├── problem.md
│ ├── restrictions.md
│ ├── acceptance_criteria.md
│ ├── input_data/
│ │ └── data_parameters.md
│ └── security_approach.md
├── 01_solution/ # Step 5 (retrospective)
│ └── solution.md
└── 02_document/ # DOCUMENT_DIR
├── 00_discovery.md # Step 0
├── modules/ # Step 1
│ ├── [module_name].md
│ └── ...
├── components/ # Step 2
│ ├── 01_[name]/description.md
│ ├── 02_[name]/description.md
│ └── ...
├── common-helpers/ # Step 2
├── architecture.md # Step 3
├── system-flows.md # Step 3
├── data_model.md # Step 3
├── deployment/ # Step 3
├── diagrams/ # Steps 2-3
│ ├── components.md
│ └── flows/
├── 04_verification_log.md # Step 4
├── FINAL_report.md # Step 7
└── state.json # Resumability
```
### Resumability
Maintain `DOCUMENT_DIR/state.json`:
```json
{
"current_step": "module-analysis",
"completed_steps": ["discovery"],
"modules_total": 12,
"modules_documented": ["utils/helpers", "models/user"],
"modules_remaining": ["services/auth", "api/endpoints"],
"components_written": [],
"last_updated": "2026-03-21T14:00:00Z"
}
```
Update after each module/component completes. If interrupted, resume from next undocumented module.
When resuming:
1. Read `state.json`
2. Cross-check against actual files in DOCUMENT_DIR (trust files over state if they disagree)
3. Continue from the next incomplete item
4. Inform user which steps are being skipped
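The resume procedure above can be sketched as a small helper. This is a minimal illustration, not part of the skill itself: the function name `plan_resume` and the returned dict shape are assumptions, and module docs are assumed to live under `modules/` with paths mirroring module names (e.g. `utils/helpers.md`).

```python
import json
from pathlib import Path

def plan_resume(document_dir: str) -> dict:
    """Cross-check state.json against the module docs actually on disk.
    Files win when the two disagree (step 2 above)."""
    doc = Path(document_dir)
    state_file = doc / "state.json"
    state = json.loads(state_file.read_text()) if state_file.exists() else {}

    # Modules the state file claims are documented
    claimed = set(state.get("modules_documented", []))

    # Modules that actually have a doc under modules/ (e.g. "utils/helpers.md")
    mods = doc / "modules"
    on_disk = {p.relative_to(mods).with_suffix("").as_posix()
               for p in mods.rglob("*.md")} if mods.exists() else set()

    stale = claimed - on_disk  # state says done but the file is missing -> redo
    return {
        "confirmed": sorted(claimed & on_disk),
        "remaining": sorted(set(state.get("modules_remaining", [])) | stale),
    }
```

A module listed as documented but missing its file moves back into `remaining`, which is exactly the "trust files over state" rule.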
### Save Principles
1. **Save immediately**: write each module doc as soon as analysis completes
2. **Incremental context**: each subsequent module uses already-written docs as context
3. **Preserve intermediates**: keep all module docs even after synthesis into component docs
4. **Enable recovery**: state file tracks exact progress for resume
## Escalation Rules
| Situation | Action |
|-----------|--------|
| Minified/obfuscated code detected | WARN user, skip module, note in verification log |
| Module too large for context window | Split into sub-sections, analyze parts separately, combine |
| Cycle in dependency graph | Group cycled modules, analyze together as one doc |
| Generated code (protobuf, swagger-gen) | Note as generated, document the source spec instead |
| No tests found in codebase | Note gap in acceptance_criteria.md, derive AC from validation rules and config limits only |
| Contradictions between code and README | Flag in verification log, ASK user |
| Binary files or non-code assets | Skip, note in discovery |
| `_docs/` already exists | ASK user: overwrite, merge, or use `_docs_generated/` |
| Code intent is ambiguous | ASK user, do not guess |
## Common Mistakes
- **Top-down guessing**: never infer architecture before documenting modules. Build up, don't assume down.
- **Hallucinating entities**: always verify that referenced classes/functions/endpoints actually exist in code.
- **Skipping modules**: every source module must appear in exactly one module doc and one component.
- **Monolithic analysis**: don't try to analyze the entire codebase in one pass. Module by module, in order.
- **Inventing restrictions**: only document constraints actually evidenced in code, configs, or Dockerfiles.
- **Vague acceptance criteria**: "should be fast" is not a criterion. Extract actual numeric thresholds from code.
- **Writing code**: this skill produces documents, never implementation code.
## Methodology Quick Reference
```
┌──────────────────────────────────────────────────────────────────┐
│ Bottom-Up Codebase Documentation (8-Step) │
├──────────────────────────────────────────────────────────────────┤
│ PREREQ: Check _docs/ exists (overwrite/merge/new?) │
│ PREREQ: Check state.json for resume │
│ │
│ 0. Discovery → dependency graph, tech stack, topo order │
│ 1. Module Docs → per-module analysis (leaves first) │
│ 2. Component Assembly → group modules, write component specs │
│ [BLOCKING: user confirms components] │
│ 3. System Synthesis → architecture, flows, data model, deploy │
│ 4. Verification → compare all docs vs code, fix errors │
│ [BLOCKING: user reviews corrections] │
│ 5. Solution Extraction → retrospective solution.md │
│ 6. Problem Extraction → retrospective problem, restrictions, AC │
│ [BLOCKING: user confirms problem docs] │
│ 7. Final Report → FINAL_report.md │
├──────────────────────────────────────────────────────────────────┤
│ Principles: Bottom-up always · Dependencies first │
│ Incremental context · Verify against code │
│ Save immediately · Resume from checkpoint │
└──────────────────────────────────────────────────────────────────┘
```
+21 -4

@@ -93,15 +93,30 @@ Launch all subagents immediately — no user confirmation.
- Collect structured status reports from each implementer
- If any implementer reports "Blocked", log the blocker and continue with others
**Stuck detection** — while monitoring, watch for these signals per subagent:
- Same file modified 3+ times without test pass rate improving → flag as stuck, stop the subagent, report as Blocked
- Subagent has not produced new output for an extended period → flag as potentially hung
- If a subagent is flagged as stuck, do NOT let it continue looping — stop it and record the blocker in the batch report
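The stuck heuristics above can be expressed as a small tracker. This is a sketch under stated assumptions: the class name `StuckDetector` and the `record_edit` interface are illustrative, not part of the orchestrator's actual API.

```python
from collections import defaultdict

class StuckDetector:
    """Flag a subagent as stuck when it edits the same file 3+ times
    without its test pass rate improving."""

    def __init__(self, edit_threshold: int = 3):
        self.edit_threshold = edit_threshold
        self.edits = defaultdict(int)             # (agent, file) -> edit count
        self.best_pass_rate = defaultdict(float)  # agent -> best pass rate seen

    def record_edit(self, agent: str, path: str, pass_rate: float) -> bool:
        """Record one file edit; return True if the stuck heuristic trips."""
        if pass_rate > self.best_pass_rate[agent]:
            # Progress was made: remember the new best and reset the counter
            self.best_pass_rate[agent] = pass_rate
            self.edits[(agent, path)] = 0
            return False
        self.edits[(agent, path)] += 1
        return self.edits[(agent, path)] >= self.edit_threshold
```

When `record_edit` returns True, the orchestrator stops that subagent and records the blocker in the batch report rather than letting it loop.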
### 8. Code Review
- Run `/code-review` skill on the batch's changed files + corresponding task specs
- The code-review skill produces a verdict: PASS, PASS_WITH_WARNINGS, or FAIL
### 9. Gate
### 9. Auto-Fix Gate
- If verdict is **FAIL**: present findings to user (**BLOCKING**). User must confirm fixes or accept before proceeding.
- If verdict is **PASS** or **PASS_WITH_WARNINGS**: show findings as info, continue automatically.
Auto-fix loop with bounded retries (max 2 attempts) before escalating to user:
1. If verdict is **PASS** or **PASS_WITH_WARNINGS**: show findings as info, continue automatically to step 10
2. If verdict is **FAIL** (attempt 1 or 2):
- Parse the code review findings (Critical and High severity items)
- For each finding, attempt an automated fix using the finding's location, description, and suggestion
- Re-run `/code-review` on the modified files
- If now PASS or PASS_WITH_WARNINGS → continue to step 10
- If still FAIL → increment retry counter, repeat from (2) up to max 2 attempts
3. If still **FAIL** after 2 auto-fix attempts: present all findings to user (**BLOCKING**). User must confirm fixes or accept before proceeding.
Track `auto_fix_attempts` count in the batch report for retrospective analysis.
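The bounded loop in step 9 can be sketched as follows. Both callables are placeholders for the real skill invocations — `run_review` stands in for `/code-review` and `apply_fixes` for the automated fix pass — and the findings are assumed to carry a `severity` field.

```python
def auto_fix_gate(run_review, apply_fixes, max_attempts: int = 2) -> dict:
    """Run the review, auto-fix Critical/High findings up to max_attempts
    times, and report whether the user must be pulled in (BLOCKING)."""
    attempts = 0
    verdict, findings = run_review()
    while verdict == "FAIL" and attempts < max_attempts:
        # Only Critical and High severity findings are auto-fixed
        serious = [f for f in findings
                   if f.get("severity") in ("Critical", "High")]
        apply_fixes(serious)
        attempts += 1
        verdict, findings = run_review()
    # Anything other than FAIL continues to step 10; FAIL escalates
    return {"verdict": verdict,
            "auto_fix_attempts": attempts,
            "escalate_to_user": verdict == "FAIL"}
```

The returned `auto_fix_attempts` count feeds directly into the batch report field for retrospective analysis.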
### 10. Test
@@ -146,6 +161,8 @@ After each batch, produce a structured report:
| [JIRA-ID]_[name] | Done | [count] files | [pass/fail] | [count or None] |
## Code Review Verdict: [PASS/FAIL/PASS_WITH_WARNINGS]
## Auto-Fix Attempts: [0/1/2]
## Stuck Agents: [count or None]
## Next Batch: [task list] or "All tasks complete"
```
@@ -173,5 +190,5 @@ Each batch commit serves as a rollback checkpoint. If recovery is needed:
- Never launch tasks whose dependencies are not yet completed
- Never allow two parallel agents to write to the same file
- If a subagent fails, do NOT retry automatically — report and let user decide
- If a subagent fails or is flagged as stuck, stop it and report — do not let it loop indefinitely
- Always run tests after each batch completes
+8 -8
@@ -3,7 +3,7 @@ name: plan
description: |
Decompose a solution into architecture, data model, deployment plan, system flows, components, tests, and Jira epics.
Systematic 6-step planning workflow with BLOCKING gates, self-verification, and structured artifact management.
Uses _docs/ + _docs/02_plans/ structure.
Uses _docs/ + _docs/02_document/ structure.
Trigger phrases:
- "plan", "decompose solution", "architecture planning"
- "break down the solution", "create planning documents"
@@ -31,7 +31,7 @@ Fixed paths — no mode detection needed:
- PROBLEM_FILE: `_docs/00_problem/problem.md`
- SOLUTION_FILE: `_docs/01_solution/solution.md`
- PLANS_DIR: `_docs/02_plans/`
- DOCUMENT_DIR: `_docs/02_document/`
Announce the resolved paths to the user before proceeding.
@@ -72,17 +72,17 @@ Only runs after the Data Gate passes:
**Prereq 3: Workspace Setup**
1. Create PLANS_DIR if it does not exist
2. If PLANS_DIR already contains artifacts, ask user: **resume from last checkpoint or start fresh?**
1. Create DOCUMENT_DIR if it does not exist
2. If DOCUMENT_DIR already contains artifacts, ask user: **resume from last checkpoint or start fresh?**
## Artifact Management
### Directory Structure
All artifacts are written directly under PLANS_DIR:
All artifacts are written directly under DOCUMENT_DIR:
```
PLANS_DIR/
DOCUMENT_DIR/
├── integration_tests/
│ ├── environment.md
│ ├── test_data.md
@@ -150,7 +150,7 @@ PLANS_DIR/
### Resumability
If PLANS_DIR already contains artifacts:
If DOCUMENT_DIR already contains artifacts:
1. List existing files and match them to the save timing table above
2. Identify the last completed step based on which artifacts exist
@@ -534,7 +534,7 @@ Before writing the final report, verify ALL of the following:
│ PREREQ 2: Finalize solution draft │
│ → rename highest solution_draft##.md to solution.md │
│ PREREQ 3: Workspace setup │
│ → create PLANS_DIR/ if needed │
│ → create DOCUMENT_DIR/ if needed │
│ │
│ 1. Integration Tests → integration_tests/ (5 files) │
│ [BLOCKING: user confirms test coverage] │
@@ -1,6 +1,6 @@
# Architecture Document Template
Use this template for the architecture document. Save as `_docs/02_plans/architecture.md`.
Use this template for the architecture document. Save as `_docs/02_document/architecture.md`.
---
+3 -3
@@ -73,9 +73,9 @@ Link to architecture.md and relevant component spec.]
### Design & Architecture
- Architecture doc: `_docs/02_plans/architecture.md`
- Component spec: `_docs/02_plans/components/[##]_[name]/description.md`
- System flows: `_docs/02_plans/system-flows.md`
- Architecture doc: `_docs/02_document/architecture.md`
- Component spec: `_docs/02_document/components/[##]_[name]/description.md`
- System flows: `_docs/02_document/system-flows.md`
### Definition of Done
@@ -1,6 +1,6 @@
# Final Planning Report Template
Use this template after completing all 5 steps and the quality checklist. Save as `_docs/02_plans/FINAL_report.md`.
Use this template after completing all 5 steps and the quality checklist. Save as `_docs/02_document/FINAL_report.md`.
---
@@ -1,6 +1,6 @@
# E2E Test Environment Template
Save as `PLANS_DIR/integration_tests/environment.md`.
Save as `DOCUMENT_DIR/integration_tests/environment.md`.
---
@@ -1,6 +1,6 @@
# E2E Functional Tests Template
Save as `PLANS_DIR/integration_tests/functional_tests.md`.
Save as `DOCUMENT_DIR/integration_tests/functional_tests.md`.
---
@@ -1,6 +1,6 @@
# E2E Non-Functional Tests Template
Save as `PLANS_DIR/integration_tests/non_functional_tests.md`.
Save as `DOCUMENT_DIR/integration_tests/non_functional_tests.md`.
---
@@ -1,6 +1,6 @@
# E2E Test Data Template
Save as `PLANS_DIR/integration_tests/test_data.md`.
Save as `DOCUMENT_DIR/integration_tests/test_data.md`.
---
@@ -1,6 +1,6 @@
# E2E Traceability Matrix Template
Save as `PLANS_DIR/integration_tests/traceability_matrix.md`.
Save as `DOCUMENT_DIR/integration_tests/traceability_matrix.md`.
---
@@ -1,6 +1,6 @@
# Risk Register Template
Use this template for risk assessment. Save as `_docs/02_plans/risk_mitigations.md`.
Use this template for risk assessment. Save as `_docs/02_document/risk_mitigations.md`.
Subsequent iterations: `risk_mitigations_02.md`, `risk_mitigations_03.md`, etc.
---
@@ -1,7 +1,7 @@
# System Flows Template
Use this template for the system flows document. Save as `_docs/02_plans/system-flows.md`.
Individual flow diagrams go in `_docs/02_plans/diagrams/flows/flow_[name].md`.
Use this template for the system flows document. Save as `_docs/02_document/system-flows.md`.
Individual flow diagrams go in `_docs/02_document/diagrams/flows/flow_[name].md`.
---
+3 -3
@@ -34,8 +34,8 @@ Determine the operating mode based on invocation before any other logic runs.
**Project mode** (no explicit input file provided):
- PROBLEM_DIR: `_docs/00_problem/`
- SOLUTION_DIR: `_docs/01_solution/`
- COMPONENTS_DIR: `_docs/02_components/`
- TESTS_DIR: `_docs/02_tests/`
- COMPONENTS_DIR: `_docs/02_document/components/`
- DOCUMENT_DIR: `_docs/02_document/`
- REFACTOR_DIR: `_docs/04_refactoring/`
- All existing guardrails apply.
@@ -210,7 +210,7 @@ Write:
Also copy to project standard locations if in project mode:
- `SOLUTION_DIR/solution.md`
- `COMPONENTS_DIR/system_flows.md`
- `DOCUMENT_DIR/system_flows.md`
**Self-verification**:
- [ ] Every component in the codebase is documented
+3 -3
@@ -13,6 +13,7 @@ description: |
- "comparative analysis", "concept comparison", "technical comparison"
category: build
tags: [research, analysis, solution-design, comparison, decision-support]
disable-model-invocation: true
---
# Deep Research (8-Step Method)
@@ -310,10 +311,9 @@ When the user wants to:
- "comparative analysis", "concept comparison", "technical comparison"
**Differentiation from other Skills**:
- Needs a **visual knowledge graph** → use `research-to-diagram`
- Needs **written output** (articles/tutorials) → use `wsy-writer`
- Needs **material organization** → use `material-to-markdown`
- Needs **research + solution draft** → use this Skill
- Needs **security audit** → use `/security`
- Needs **existing codebase documented** → use `/document`
## Research Engine (8-Step Method)
-130
@@ -1,130 +0,0 @@
---
name: rollback
description: |
Revert implementation to a specific batch checkpoint using git revert, reset Jira ticket statuses,
verify rollback integrity with tests, and produce a rollback report.
Trigger phrases:
- "rollback", "revert", "revert batch"
- "undo implementation", "roll back to batch"
category: build
tags: [rollback, revert, recovery, implementation]
disable-model-invocation: true
---
# Implementation Rollback
Revert the codebase to a specific batch checkpoint, reset Jira statuses for reverted tasks, and verify integrity.
## Core Principles
- **Preserve history**: always use `git revert`, never force-push
- **Verify after revert**: run the full test suite after every rollback
- **Update tracking**: reset Jira ticket statuses for all reverted tasks
- **Atomic rollback**: if rollback fails midway, stop and report — do not leave the codebase in a partial state
- **Ask, don't assume**: if the target batch is ambiguous, present options and ask
## Context Resolution
- IMPL_DIR: `_docs/03_implementation/`
- Batch reports: `IMPL_DIR/batch_*_report.md`
## Prerequisite Checks (BLOCKING)
1. IMPL_DIR exists and contains at least one `batch_*_report.md` — **STOP if missing**
2. Git working tree is clean (no uncommitted changes) — **STOP if dirty**, ask user to commit or stash
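A minimal sketch of these two checks, assuming the caller has already captured the output of `git status --porcelain` (which prints nothing when the tree is clean). The function name `rollback_prereqs` and the returned message strings are illustrative only.

```python
from pathlib import Path

def rollback_prereqs(impl_dir: str, git_status_output: str) -> list[str]:
    """Blocking checks before any revert. Returns the list of blockers;
    an empty list means it is safe to proceed."""
    problems = []
    # Check 1: at least one batch report must exist in IMPL_DIR
    if not sorted(Path(impl_dir).glob("batch_*_report.md")):
        problems.append("no batch_*_report.md in IMPL_DIR — nothing to roll back")
    # Check 2: `git status --porcelain` output means uncommitted changes
    if git_status_output.strip():
        problems.append("working tree is dirty — ask user to commit or stash")
    return problems
```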
## Input
- User specifies a target batch number or commit hash
- If not specified, present the list of available batch checkpoints and ask
## Workflow
### Step 1: Identify Checkpoints
1. Read all `batch_*_report.md` files from IMPL_DIR
2. Extract: batch number, date, tasks included, commit hash, code review verdict
3. Present batch list to user
**BLOCKING**: User must confirm which batch to roll back to.
### Step 2: Revert Commits
1. Determine which commits need to be reverted (all commits after the target batch)
2. For each commit in reverse chronological order:
- Run `git revert <commit-hash> --no-edit`
- If merge conflicts occur: present conflicts and ask user for resolution
3. If any revert fails and cannot be resolved, abort the rollback sequence with `git revert --abort` and report
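The revert sequence above can be sketched with `git rev-list`, which lists the commits after the target newest-first — exactly the reverse-chronological order the step requires. This is an illustration, not the skill's implementation; the conflict path simply aborts and returns rather than prompting the user as the real workflow does.

```python
import subprocess

def revert_to_checkpoint(target_commit: str) -> bool:
    """Revert every commit after target_commit, newest first.
    Returns False (after `git revert --abort`) if any revert fails."""
    newer = subprocess.run(
        ["git", "rev-list", f"{target_commit}..HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.split()  # newest commit first
    for commit in newer:
        result = subprocess.run(["git", "revert", commit, "--no-edit"],
                                capture_output=True, text=True)
        if result.returncode != 0:
            # Conflict or failure: abort the in-progress revert and report
            subprocess.run(["git", "revert", "--abort"])
            return False
    return True
```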
### Step 3: Verify Integrity
1. Run the full test suite
2. If tests fail: report failures to user, ask how to proceed (fix or abort)
3. If tests pass: continue
### Step 4: Update Jira
1. Identify all tasks from reverted batches
2. Reset each task's Jira ticket status to "To Do" via Jira MCP
### Step 5: Finalize
1. Commit with message: `[ROLLBACK] Reverted to batch [N]: [task list]`
2. Write rollback report to `IMPL_DIR/rollback_report.md`
## Output
Write `_docs/03_implementation/rollback_report.md`:
```markdown
# Rollback Report
**Date**: [YYYY-MM-DD]
**Target**: Batch [N] (commit [hash])
**Reverted Batches**: [list]
## Reverted Tasks
| Task | Batch | Status Before | Status After |
|------|-------|--------------|-------------|
| [JIRA-ID] | [batch #] | In Testing | To Do |
## Test Results
- [pass/fail count]
## Jira Updates
- [list of ticket transitions]
## Notes
- [any conflicts, manual steps, or issues encountered]
```
## Escalation Rules
| Situation | Action |
|-----------|--------|
| No batch reports exist | **STOP** — nothing to roll back |
| Uncommitted changes in working tree | **STOP** — ask user to commit or stash |
| Merge conflicts during revert | **ASK user** for resolution |
| Tests fail after rollback | **ASK user** — fix or abort |
| Rollback fails midway | Abort with `git revert --abort`, report to user |
## Methodology Quick Reference
```
┌────────────────────────────────────────────────────────────────┐
│ Rollback (5-Step Method) │
├────────────────────────────────────────────────────────────────┤
│ PREREQ: batch reports exist, clean working tree │
│ │
│ 1. Identify Checkpoints → present batch list │
│ [BLOCKING: user confirms target batch] │
│ 2. Revert Commits → git revert per commit │
│ 3. Verify Integrity → run full test suite │
│ 4. Update Jira → reset statuses to "To Do" │
│ 5. Finalize → commit + rollback_report.md │
├────────────────────────────────────────────────────────────────┤
│ Principles: Preserve history · Verify after revert │
│ Atomic rollback · Ask don't assume │
└────────────────────────────────────────────────────────────────┘
```