Sync .cursor from suite (autodev orchestrator + monorepo skills)

This commit is contained in:
Oleksandr Bezdieniezhnykh
2026-04-18 22:04:27 +03:00
parent 3418e1fcde
commit 1334eba63c
60 changed files with 4232 additions and 1728 deletions
---
name: autodev
description: |
Auto-chaining orchestrator that drives the full BUILD→SHIP workflow from problem gathering through deployment.
Detects current project state from _docs/ folder, resumes from where it left off, and flows through
problem → research → plan → decompose → implement → deploy without manual skill invocation.
Maximizes work per conversation by auto-transitioning between skills.
Trigger phrases:
- "autodev", "auto", "start", "continue"
- "what's next", "where am I", "project status"
category: meta
tags: [orchestrator, workflow, auto-chain, state-machine, meta-skill]
disable-model-invocation: true
---
# Autodev Orchestrator
Auto-chaining execution engine that drives the full BUILD → SHIP workflow. Detects project state from `_docs/`, resumes from where work stopped, and flows through skills automatically. The user invokes `/autodev` once — the engine handles sequencing, transitions, and re-entry.
## File Index
| File | Purpose |
|------|---------|
| `flows/greenfield.md` | Detection rules, step table, and auto-chain rules for new projects |
| `flows/existing-code.md` | Detection rules, step table, and auto-chain rules for existing codebases |
| `flows/meta-repo.md` | Detection rules, step table, and auto-chain rules for meta-repositories (submodule aggregators, workspace monorepos) |
| `state.md` | State file format, rules, re-entry protocol, session boundaries |
| `protocols.md` | User interaction, tracker auth, choice format, error handling, status summary |
**On every invocation**: read `state.md`, `protocols.md`, and the active flow file before executing any logic. You don't need to read flow files for flows you're not in.
## Core Principles
- **Auto-chain**: when a skill completes, immediately start the next one — no pause between skills
- **Only pause at decision points**: BLOCKING gates inside sub-skills are the natural pause points; do not add artificial stops between steps
- **State from disk**: current step is persisted to `_docs/_autodev_state.md` and cross-checked against `_docs/` folder structure
- **Re-entry**: on every invocation, read the state file and cross-check against `_docs/` folders before continuing
- **Delegate, don't duplicate**: read and execute each sub-skill's SKILL.md; never inline their logic here
- **Sound on pause**: follow `.cursor/rules/human-attention-sound.mdc` — play a notification sound before every pause that requires human input (AskQuestion tool preferred for structured choices; fall back to plain text if unavailable)
- **Minimize interruptions**: only ask the user when the decision genuinely cannot be resolved automatically
- **Single project per workspace**: all `_docs/` paths are relative to workspace root; for multi-component systems, each component needs its own Cursor workspace. **Exception**: a meta-repo workspace (git-submodule aggregator or monorepo workspace) uses the `meta-repo` flow and maintains cross-cutting artifacts via `monorepo-*` skills rather than per-component BUILD→SHIP flows.
## Flow Resolution
Determine which flow to use (check in order — first match wins):
1. If `_docs/_autodev_state.md` exists → read the `flow` field and use that flow. (When a greenfield project completes its final cycle, the Done step rewrites `flow: existing-code` in-band so the next invocation enters the feature-cycle loop — see greenfield "Done".)
2. If the workspace is a **meta-repo** → **meta-repo flow**. Detected by: presence of `.gitmodules` with ≥2 submodules, OR `package.json` with a `workspaces` field, OR `pnpm-workspace.yaml`, OR `Cargo.toml` with a `[workspace]` section, OR `go.work`, OR an ad-hoc structure with multiple top-level component folders, each containing its own project manifest. Optional tiebreaker: the workspace has little or no source code of its own at the root (just registry + orchestration files).
3. If workspace has **no source code files** → **greenfield flow**
4. If workspace has source code files → **existing-code flow** (whether or not `_docs/` already exists)
After selecting the flow, apply its detection rules (first match wins) to determine the current step.
**Note**: the meta-repo flow uses a different artifact layout — its source of truth is `_docs/_repo-config.yaml`, not `_docs/NN_*/` folders. Other detection rules assume the BUILD→SHIP artifact layout; they don't apply to meta-repos.
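The meta-repo markers in rule 2 can be sketched as an ordered probe. This is a hypothetical helper (the name, signature, and string heuristics are illustrative, not part of any skill file); real detection inspects the actual workspace:

```python
def looks_like_meta_repo(files: dict[str, str]) -> bool:
    """files maps workspace-relative paths to their text content.
    Checks only the markers listed in Flow Resolution rule 2."""
    # .gitmodules with at least two submodule entries
    if files.get(".gitmodules", "").count("[submodule") >= 2:
        return True
    # package.json declaring npm/yarn workspaces (crude string check)
    if '"workspaces"' in files.get("package.json", ""):
        return True
    # pnpm workspace manifest present at all
    if "pnpm-workspace.yaml" in files:
        return True
    # Cargo.toml with a [workspace] section
    if "[workspace]" in files.get("Cargo.toml", ""):
        return True
    # Go multi-module workspace file
    if "go.work" in files:
        return True
    return False
```

The ad-hoc multi-component case and the "little root source code" tiebreaker are deliberately omitted: they need real directory scanning, not marker lookups.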
## Execution Loop
Every invocation has three phases: **Bootstrap** (runs once), **Resolve** (runs once), **Execute Loop** (runs per step). Exit conditions are explicit.
```
### Bootstrap (once per invocation)
B1. Process leftovers — delegate to `.cursor/rules/tracker.mdc` → Leftovers Mechanism
(authoritative spec: replay rules, escalation, blocker handling).
B2. Surface Recent Lessons — print top 3 entries from `_docs/LESSONS.md` if present; skip silently otherwise.
B3. Read state — `_docs/_autodev_state.md` (if it exists).
B4. Read File Index — `state.md`, `protocols.md`, and the active flow file.
### Resolve (once per invocation, after Bootstrap)
R1. Reconcile state — verify state file against `_docs/` contents; on disagreement, trust the folders
and update the state file (rules: `state.md` → "State File Rules" #4).
After this step, `state.step` / `state.status` are authoritative.
R2. Resolve flow — see §Flow Resolution above.
R3. Resolve current step — when a state file exists, `state.step` drives detection.
When no state file exists, walk the active flow's detection rules in order;
first folder-probe match wins.
R4. Present Status Summary — banner template in `protocols.md` + step-list fragment from the active flow file.
### Execute Loop (per step)
loop:
E1. Delegate to the current skill (see §Skill Delegation below).
E2. On FAILED
→ apply Failure Handling (`protocols.md`): increment retry_count, auto-retry up to 3.
→ if retry_count reaches 3 → set status: failed → EXIT (escalate on next invocation).
E3. On success
→ reset retry_count, update state file (rules: `state.md`).
E4. Re-detect next step from the active flow's detection rules.
E5. If the transition is marked as a session boundary in the flow's Auto-Chain Rules
→ update state, present boundary Choose block, suggest new conversation → EXIT.
E6. If all steps done
→ update state, report completion → EXIT.
E7. Else
→ continue loop (go to E1 with the next skill).
```
## Skill Delegation
For each step, the delegation pattern is:
1. Update state file: set `step` to the autodev step number, status to `in_progress`, set `sub_step` to the sub-skill's current internal phase using the structured `{phase, name, detail}` schema (see `state.md`), reset `retry_count: 0`
2. Announce: "Starting [Skill Name]..."
3. Read the skill file: `.cursor/skills/[name]/SKILL.md`
4. Execute the skill's workflow exactly as written, including all BLOCKING gates, self-verification checklists, save actions, and escalation rules. Update `sub_step.phase`, `sub_step.name`, and optional `sub_step.detail` in state each time the sub-skill advances to a new internal phase.
5. If the skill **fails**: follow Failure Handling in `protocols.md` — increment `retry_count`, auto-retry up to 3 times, then escalate.
6. When complete (success): reset `retry_count: 0`, update state file to the next step with `status: not_started` and `sub_step: {phase: 0, name: awaiting-invocation, detail: ""}`, return to auto-chain rules (from active flow file)
**sub_step read fallback**: when reading `sub_step`, parse the structured form. If parsing fails (legacy free-text value) OR the named phase is not recognized, log a warning and fall back to a folder scan of the sub-skill's artifact directory to infer progress. Do not silently treat a malformed sub_step as phase 0 — that would cause a sub-skill to restart from scratch after each resume.
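A minimal sketch of that fallback, assuming a simple one-line serialization of `{phase, name, detail}`. The format matched here is an assumption for illustration; the real schema lives in `state.md`. Returning `None` signals "fall back to a folder scan", never a silent reset to phase 0:

```python
import re

def parse_sub_step(raw: str, known_phases: list[str]):
    """Parse a sub_step value; return {phase, name, detail} or None.
    None means: infer progress from the sub-skill's artifact folder."""
    m = re.match(
        r'\{phase:\s*(\d+),\s*name:\s*([\w-]+),\s*detail:\s*"(.*)"\}',
        raw.strip(),
    )
    if not m:
        # Legacy free-text value: warn, do NOT treat as phase 0
        print(f"warning: unparseable sub_step {raw!r}; falling back to folder scan")
        return None
    phase, name, detail = int(m.group(1)), m.group(2), m.group(3)
    if name not in known_phases:
        print(f"warning: unknown phase name {name!r}; falling back to folder scan")
        return None
    return {"phase": phase, "name": name, "detail": detail}
```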
Do NOT modify, skip, or abbreviate any part of the sub-skill's workflow. Autodev is a sequencer, not an optimizer.
## State File
The state file (`_docs/_autodev_state.md`) is a minimal pointer — only the current step. See `state.md` for the authoritative template, field semantics, update rules, and worked examples. Do not restate the schema here — `state.md` is the single source of truth.
## Trigger Conditions
This skill activates when the user wants to:
- Start a new project from scratch
- Continue an in-progress project
- Check project status
- Let the AI guide them through the full workflow
**Keywords**: "autodev", "auto", "start", "continue", "what's next", "where am I", "project status"
**Invocation model**: this skill is explicitly user-invoked only (`disable-model-invocation: true` in the front matter). The keywords above aid skill discovery and tooling (other skills / agents can reason about when `/autodev` is appropriate), but the model never auto-fires this skill from a keyword match. The user always types `/autodev`.
**Differentiation**:
- User wants only research → use `/research` directly
- User wants only planning → use `/plan` directly
- User wants to document an existing codebase → use `/document` directly
- User wants the full guided workflow → use `/autodev`
## Flow Reference
See `flows/greenfield.md`, `flows/existing-code.md`, and `flows/meta-repo.md` for step tables, detection rules, auto-chain rules, and each flow's Status Summary step-list fragment. The banner that wraps those fragments lives in `protocols.md` → "Banner Template (authoritative)".
# Existing Code Workflow
Workflow for projects with an existing codebase. Structurally it has **two phases**:
- **Phase A — One-time baseline setup (Steps 1–8)**: runs exactly once per codebase. Documents the code, produces test specs, makes the code testable, writes and runs the initial test suite, optionally refactors with that safety net.
- **Phase B — Feature cycle (Steps 9–17, loops)**: runs once per new feature. After Step 17 (Retrospective), the flow loops back to Step 9 (New Task) with `state.cycle` incremented.
A first-time run executes Phase A then Phase B; every subsequent invocation re-enters Phase B.
## Step Reference Table
### Phase A — One-time baseline setup
| Step | Name | Sub-Skill | Internal SubSteps |
|------|------|-----------|-------------------|
| 1 | Document | document/SKILL.md | Steps 1–8 |
| 2 | Architecture Baseline Scan | code-review/SKILL.md (baseline mode) | Phase 1 + Phase 7 |
| 3 | Test Spec | test-spec/SKILL.md | Phases 1–4 |
| 4 | Code Testability Revision | refactor/SKILL.md (guided mode) | Phases 0–7 (conditional) |
| 5 | Decompose Tests | decompose/SKILL.md (tests-only) | Step 1t + Step 3 + Step 4 |
| 6 | Implement Tests | implement/SKILL.md | (batch-driven, no fixed sub-steps) |
| 7 | Run Tests | test-run/SKILL.md | Steps 1–4 |
| 8 | Refactor | refactor/SKILL.md | Phases 0–7 (optional) |
### Phase B — Feature cycle (loops back to Step 9 after Step 17)
| Step | Name | Sub-Skill | Internal SubSteps |
|------|------|-----------|-------------------|
| 9 | New Task | new-task/SKILL.md | Steps 1–8 (loop) |
| 10 | Implement | implement/SKILL.md | (batch-driven, no fixed sub-steps) |
| 11 | Run Tests | test-run/SKILL.md | Steps 1–4 |
| 12 | Test-Spec Sync | test-spec/SKILL.md (cycle-update mode) | Phase 2 + Phase 3 (scoped) |
| 13 | Update Docs | document/SKILL.md (task mode) | Task Steps 0–5 |
| 14 | Security Audit | security/SKILL.md | Phases 1–5 (optional) |
| 15 | Performance Test | test-run/SKILL.md (perf mode) | Steps 1–5 (optional) |
| 16 | Deploy | deploy/SKILL.md | Steps 1–7 |
| 17 | Retrospective | retrospective/SKILL.md (cycle-end mode) | Steps 1–4 |
After Step 17, the feature cycle completes and the flow loops back to Step 9 with `state.cycle + 1` — see "Re-Entry After Completion" below.
## Detection Rules
**Resolution**: when a state file exists, `state.step` + `state.status` drive detection and the conditions below are not consulted. When no state file exists (cold start), walk the rules in order — first folder-probe match wins. Steps without a folder probe are state-driven only; they can only be reached by auto-chain from a prior step. Cycle-scoped steps (Step 10 onward) always read `state.cycle` to disambiguate current vs. prior cycle artifacts.
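The resolution order above can be sketched as follows. The names `resolve_step` and `folder_probes` are illustrative, and the no-match case is left to the flow's own rules:

```python
def resolve_step(state, folder_probes):
    """state: parsed state file dict, or None on cold start.
    folder_probes: ordered (step_number, probe_fn) pairs, where
    probe_fn() -> bool evaluates that step's folder condition."""
    if state is not None:
        # State file wins; folder conditions are not consulted
        return state["step"]
    for step, probe in folder_probes:
        if probe():
            return step  # first folder-probe match wins
    # No probe matched: steps without probes are state-driven only,
    # so a cold start that reaches here is handled by the flow itself
    return None
```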
---
### Phase A — One-time baseline setup (Steps 1–8)
**Step 1 — Document**
Condition: `_docs/` does not exist AND the workspace contains source code files (e.g., `*.py`, `*.cs`, `*.rs`, `*.ts`, `src/`, `Cargo.toml`, `*.csproj`, `package.json`)
Action: An existing codebase without documentation was detected. Read and execute `.cursor/skills/document/SKILL.md`. After the document skill completes, re-detect state (the produced `_docs/` artifacts will place the project at Step 2 or later).
The document skill's Step 2.5 produces `_docs/02_document/module-layout.md`, which is required by every downstream step that assigns file ownership (`/implement` Step 4, `/code-review` Phase 7, `/refactor` discovery). If this file is missing after Step 1 completes (e.g., a pre-existing `_docs/` dir predates the 2.5 addition), re-invoke `/document` in resume mode — it will pick up at Step 2.5.
---
**Step 2 — Architecture Baseline Scan**
Condition: `_docs/02_document/FINAL_report.md` exists AND `_docs/02_document/architecture.md` exists AND `_docs/02_document/architecture_compliance_baseline.md` does not exist.
Action: Invoke `.cursor/skills/code-review/SKILL.md` in **baseline mode** (Phase 1 + Phase 7 only) against the full existing codebase. Phase 7 produces a structural map of the code vs. the just-documented `architecture.md`. Save the output to `_docs/02_document/architecture_compliance_baseline.md`.
Rationale: existing codebases often have pre-existing architecture violations (cycles, cross-component private imports, duplicate logic). Catching them here, before the Testability Revision (Step 4), gives the user a chance to fold structural fixes into the refactor scope.
After completion, if the baseline report contains **High or Critical** Architecture findings:
- Append them to the testability `list-of-changes.md` input in Step 4 (so testability refactor can address the most disruptive ones along with testability fixes), OR
- Surface them to the user via Choose format to defer to Step 8 (optional Refactor).
If the baseline report is clean (no High/Critical findings), auto-chain directly to Step 3.
---
**Step 3 — Test Spec**
Condition (folder fallback): `_docs/02_document/FINAL_report.md` exists AND workspace contains source code files AND `_docs/02_document/tests/traceability-matrix.md` does not exist.
State-driven: reached by auto-chain from Step 2.
Action: Read and execute `.cursor/skills/test-spec/SKILL.md`
This step applies when the codebase was documented via the `/document` skill. Test specifications must be produced before refactoring or further development.
---
**Step 4 — Code Testability Revision**
Condition (folder fallback): `_docs/02_document/tests/traceability-matrix.md` exists AND no test tasks exist yet in `_docs/02_tasks/todo/`.
State-driven: reached by auto-chain from Step 3.
**Purpose**: enable tests to run at all. Without this step, hardcoded URLs, file paths, credentials, or global singletons can prevent the test suite from exercising the code against a controlled environment. The test authors need a testable surface before they can write tests that mean anything.
**Scope — MINIMAL, SURGICAL fixes**: this is not a profound refactor. It is the smallest set of changes (sometimes temporary hacks) required to make code runnable under tests. "Smallest" beats "elegant" here — deeper structural improvements belong in Step 8 (Refactor), not this step.
**Allowed changes** in this phase:
- Replace hardcoded URLs / file paths / credentials / magic numbers with env vars or constructor arguments.
- Extract narrow interfaces for components that need stubbing in tests.
- Add optional constructor parameters for dependency injection; default to the existing hardcoded behavior so callers do not break.
- Wrap global singletons in thin accessors that tests can override (thread-local / context var / setter gate).
- Split a huge function ONLY when necessary to stub one of its collaborators — do not split for clarity alone.
**NOT allowed** in this phase (defer to Step 8 Refactor):
- Renaming public APIs (breaks consumers without a safety net).
- Moving code between files unless strictly required for isolation.
- Changing algorithms or business logic.
- Restructuring module boundaries or rewriting layers.
**Safety**: Phase 3 (Safety Net) of the refactor skill is skipped here **by design** — no tests exist yet to form the safety net. Compensating controls:
- Every change is bounded by the allowed/not-allowed lists above.
- `list-of-changes.md` must be reviewed by the user BEFORE execution (refactor skill enforces this gate).
- After execution, the refactor skill produces `RUN_DIR/testability_changes_summary.md` — a plain-language list of every applied change and why. Present this to the user before auto-chaining to Step 5.
Action: Analyze the codebase against the test specs to determine whether the code can be tested as-is.
1. Read `_docs/02_document/tests/traceability-matrix.md` and all test scenario files in `_docs/02_document/tests/`.
2. Read `_docs/02_document/architecture_compliance_baseline.md` (produced in Step 2). If it contains High/Critical Architecture findings that overlap with testability issues, consider including the lightest structural fixes inline; leave the rest for Step 8.
3. For each test scenario, check whether the code under test can be exercised in isolation. Look for:
- Hardcoded file paths or directory references
- Hardcoded configuration values (URLs, credentials, magic numbers)
- Global mutable state that cannot be overridden
- Tight coupling to external services without abstraction
- Missing dependency injection or non-configurable parameters
- Direct file system operations without path configurability
- Inline construction of heavy dependencies (models, clients)
4. If ALL scenarios are testable as-is:
- Mark Step 4 as `completed` with outcome "Code is testable — no changes needed"
- Auto-chain to Step 5 (Decompose Tests)
5. If testability issues are found:
- Create `_docs/04_refactoring/01-testability-refactoring/`
- Write `list-of-changes.md` in that directory using the refactor skill template (`.cursor/skills/refactor/templates/list-of-changes.md`), with:
- **Mode**: `guided`
- **Source**: `autodev-testability-analysis`
- One change entry per testability issue found (change ID, file paths, problem, proposed change, risk, dependencies). Each entry must fit the allowed-changes list above; reject entries that drift into full refactor territory and log them under "Deferred to Step 8 Refactor" instead.
- Invoke the refactor skill in **guided mode**: read and execute `.cursor/skills/refactor/SKILL.md` with the `list-of-changes.md` as input
- The refactor skill will create RUN_DIR (`01-testability-refactoring`), create tasks in `_docs/02_tasks/todo/`, delegate to implement skill, and verify results
- Phase 3 (Safety Net) is automatically skipped by the refactor skill for testability runs
- After execution, the refactor skill produces `RUN_DIR/testability_changes_summary.md`. Surface this summary to the user via the Choose format (accept / request follow-up) before auto-chaining.
- Mark Step 4 as `completed`
- Auto-chain to Step 5 (Decompose Tests)
---
**Step 5 — Decompose Tests**
Condition (folder fallback): `_docs/02_document/tests/traceability-matrix.md` exists AND workspace contains source code files AND (`_docs/02_tasks/todo/` does not exist or has no test task files).
State-driven: reached by auto-chain from Step 4 (completed or skipped).
Action: Read and execute `.cursor/skills/decompose/SKILL.md` in **tests-only mode** (pass `_docs/02_document/tests/` as input). The decompose skill will:
1. Run Step 1t (test infrastructure bootstrap)
2. Run Step 3 (blackbox test task decomposition)
3. Run Step 4 (cross-verification against test coverage)
If `_docs/02_tasks/` subfolders have some task files already (e.g., refactoring tasks from Step 4), the decompose skill's resumability handles it — it appends test tasks alongside existing tasks.
---
**Step 6 — Implement Tests**
Condition (folder fallback): `_docs/02_tasks/todo/` contains task files AND `_dependencies_table.md` exists AND `_docs/03_implementation/implementation_report_tests.md` does not exist.
State-driven: reached by auto-chain from Step 5.
Action: Read and execute `.cursor/skills/implement/SKILL.md`
The implement skill reads test tasks from `_docs/02_tasks/todo/` and implements them.
If `_docs/03_implementation/` has batch reports, the implement skill detects completed tasks and continues.
---
**Step 7 — Run Tests**
Condition (folder fallback): `_docs/03_implementation/implementation_report_tests.md` exists.
State-driven: reached by auto-chain from Step 6.
Action: Read and execute `.cursor/skills/test-run/SKILL.md`
Verifies the implemented test suite passes before proceeding to refactoring. The tests form the safety net for all subsequent code changes.
---
**Step 8 — Refactor (optional)**
State-driven: reached by auto-chain from Step 7. (Sanity check: no `_docs/04_refactoring/` run folder should contain a `FINAL_report.md` for a non-testability run when entering this step for the first time.)
Action: Present using Choose format:
```
══════════════════════════════════════
DECISION REQUIRED: Refactor codebase before adding new features?
══════════════════════════════════════
A) Run refactoring (recommended if code quality issues were noted during documentation)
B) Skip — proceed directly to New Task
══════════════════════════════════════
Recommendation: [A or B — base on whether documentation
flagged significant code smells, coupling issues, or
technical debt worth addressing before new development]
══════════════════════════════════════
```
- If user picks A → Read and execute `.cursor/skills/refactor/SKILL.md` in automatic mode. The refactor skill creates a new run folder in `_docs/04_refactoring/` (e.g., `02-coupling-refactoring`), runs the full method using the implemented tests as a safety net. After completion, auto-chain to Step 9 (New Task).
- If user picks B → Mark Step 8 as `skipped` in the state file, auto-chain to Step 9 (New Task).
---
### Phase B — Feature cycle (Steps 9–17, loops)
**Step 9 — New Task**
State-driven: reached by auto-chain from Step 8 (completed or skipped). This is also the re-entry point after a completed cycle — see "Re-Entry After Completion" below.
Action: Read and execute `.cursor/skills/new-task/SKILL.md`
The new-task skill interactively guides the user through defining new functionality. It loops until the user is done adding tasks. New task files are written to `_docs/02_tasks/todo/`.
---
**Step 10 — Implement**
State-driven: reached by auto-chain from Step 9 in the CURRENT cycle (matching `state.cycle`). Detection is purely state-driven — prior cycles will have left `implementation_report_{feature_slug}_cycle{N-1}.md` artifacts that must not block new cycles.
Action: Read and execute `.cursor/skills/implement/SKILL.md`
The implement skill reads the new tasks from `_docs/02_tasks/todo/` and implements them. Tasks already implemented in Step 6 or prior cycles are skipped (completed tasks have been moved to `done/`).
**Implementation report naming**: the final report for this cycle must be named `implementation_report_{feature_slug}_cycle{N}.md` where `{N}` is `state.cycle`. Batch reports are named `batch_{NN}_cycle{M}_report.md` so the cycle counter survives folder scans.
If `_docs/03_implementation/` has batch reports from the current cycle, the implement skill detects completed tasks and continues.
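The naming convention can be sketched as follows (helper names are hypothetical; the implement skill owns the real logic):

```python
def implementation_report_name(feature_slug: str, cycle: int) -> str:
    # Final per-cycle report, keyed by feature and state.cycle
    return f"implementation_report_{feature_slug}_cycle{cycle}.md"

def batch_report_name(batch: int, cycle: int) -> str:
    # Zero-padded batch number plus cycle, so folder scans can
    # tell current-cycle batches from prior-cycle leftovers
    return f"batch_{batch:02d}_cycle{cycle}_report.md"
```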
---
**Step 11 — Run Tests**
State-driven: reached by auto-chain from Step 10.
Action: Read and execute `.cursor/skills/test-run/SKILL.md`
---
**Step 12 — Test-Spec Sync**
State-driven: reached by auto-chain from Step 11. Requires `_docs/02_document/tests/traceability-matrix.md` to exist — if missing, mark Step 12 `skipped` (see Action below).
Action: Read and execute `.cursor/skills/test-spec/SKILL.md` in **cycle-update mode**. Pass the cycle's completed task specs (files in `_docs/02_tasks/done/` moved during this cycle) and the implementation report `_docs/03_implementation/implementation_report_{feature_slug}_cycle{N}.md` as inputs.
The skill appends new ACs, scenarios, and NFRs to the existing test-spec files without rewriting unaffected sections. If `traceability-matrix.md` is missing (e.g., cycle added after a greenfield-only project), mark Step 12 as `skipped` — the next `/test-spec` full run will regenerate it.
After completion, auto-chain to Step 13 (Update Docs).
---
**Step 13 — Update Docs**
State-driven: reached by auto-chain from Step 12 (completed or skipped). Requires `_docs/02_document/` to contain existing documentation — if missing, mark Step 13 `skipped` (see Action below).
Action: Read and execute `.cursor/skills/document/SKILL.md` in **Task mode**. Pass all task spec files from `_docs/02_tasks/done/` that were implemented in the current cycle (i.e., tasks moved to `done/` during Steps 9–10 of this cycle).
The document skill in Task mode:
1. Reads each task spec to identify changed source files
2. Updates affected module docs, component docs, and system-level docs
3. Does NOT redo full discovery, verification, or problem extraction
If `_docs/02_document/` does not contain existing docs (e.g., documentation step was skipped), mark Step 13 as `skipped`.
After completion, auto-chain to Step 14 (Security Audit).
---
**Step 14 — Security Audit (optional)**
State-driven: reached by auto-chain from Step 13 (completed or skipped).
Action: Apply the **Optional Skill Gate** (`protocols.md` → "Optional Skill Gate") with:
- question: `Run security audit before deploy?`
- option-a-label: `Run security audit (recommended for production deployments)`
- option-b-label: `Skip — proceed directly to deploy`
- recommendation: `A — catches vulnerabilities before production`
- target-skill: `.cursor/skills/security/SKILL.md`
- next-step: Step 15 (Performance Test)
---
**Step 15 — Performance Test (optional)**
State-driven: reached by auto-chain from Step 14 (completed or skipped).
Action: Apply the **Optional Skill Gate** (`protocols.md` → "Optional Skill Gate") with:
- question: `Run performance/load tests before deploy?`
- option-a-label: `Run performance tests (recommended for latency-sensitive or high-load systems)`
- option-b-label: `Skip — proceed directly to deploy`
- recommendation: `A or B — base on whether acceptance criteria include latency, throughput, or load requirements`
- target-skill: `.cursor/skills/test-run/SKILL.md` in **perf mode** (the skill handles runner detection, threshold comparison, and its own A/B/C gate on threshold failures)
- next-step: Step 16 (Deploy)
---
**Step 16 — Deploy**
State-driven: reached by auto-chain from Step 15 (completed or skipped).
Action: Read and execute `.cursor/skills/deploy/SKILL.md`.
After the deploy skill completes successfully, mark Step 16 as `completed` and auto-chain to Step 17 (Retrospective).
---
**Step 17 — Retrospective**
State-driven: reached by auto-chain from Step 16, for the current `state.cycle`.
Action: Read and execute `.cursor/skills/retrospective/SKILL.md` in **cycle-end mode**. Pass cycle context (`cycle: state.cycle`) so the retro report and LESSONS.md entries record which feature cycle they came from.
After retrospective completes, mark Step 17 as `completed` and enter "Re-Entry After Completion" evaluation.
---
**Re-Entry After Completion**
State-driven: `state.step == done` OR Step 17 (Retrospective) is completed for `state.cycle`.
Action: The project completed a full cycle. Print the status banner and automatically loop back to New Task — do NOT ask the user for confirmation:
```
══════════════════════════════════════
PROJECT CYCLE COMPLETE
══════════════════════════════════════
The previous cycle finished successfully.
Starting new feature cycle…
══════════════════════════════════════
```
Set `step: 9`, `status: not_started`, and **increment `cycle`** (`cycle: state.cycle + 1`) in the state file, then auto-chain to Step 9 (New Task). Reset `sub_step` to `phase: 0, name: awaiting-invocation, detail: ""` and `retry_count: 0`.
Note: the loop (Steps 9 → 17 → 9) ensures every feature cycle includes: New Task → Implement → Run Tests → Test-Spec Sync → Update Docs → Security → Performance → Deploy → Retrospective.
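A sketch of that transition, assuming the state is held as a plain dict (field names follow the text above; any field not named, such as `flow`, is preserved):

```python
def reenter_feature_cycle(state: dict) -> dict:
    """Build the state for a new feature cycle after Step 17 completes."""
    new_state = dict(state)  # copy; untouched fields carry over
    new_state.update(
        step=9,                     # back to New Task
        status="not_started",
        cycle=state["cycle"] + 1,   # increment the cycle counter
        sub_step={"phase": 0, "name": "awaiting-invocation", "detail": ""},
        retry_count=0,
    )
    return new_state
```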
## Auto-Chain Rules
### Phase A — One-time baseline setup
| Completed Step | Next Action |
|---------------|-------------|
| Document (1) | Auto-chain → Architecture Baseline Scan (2) |
| Architecture Baseline Scan (2) | Auto-chain → Test Spec (3). If baseline has High/Critical Architecture findings, surface them as inputs to Step 4 (testability) or defer to Step 8 (refactor). |
| Test Spec (3) | Auto-chain → Code Testability Revision (4) |
| Code Testability Revision (4) | Auto-chain → Decompose Tests (5) |
| Decompose Tests (5) | **Session boundary** — suggest new conversation before Implement Tests |
| Implement Tests (6) | Auto-chain → Run Tests (7) |
| Run Tests (7, all pass) | Auto-chain → Refactor choice (8) |
| Refactor (8, done or skipped) | Auto-chain → New Task (9) — enters Phase B |
### Phase B — Feature cycle (loops)
| Completed Step | Next Action |
|---------------|-------------|
| New Task (9) | **Session boundary** — suggest new conversation before Implement |
| Implement (10) | Auto-chain → Run Tests (11) |
| Run Tests (11, all pass) | Auto-chain → Test-Spec Sync (12) |
| Test-Spec Sync (12, done or skipped) | Auto-chain → Update Docs (13) |
| Update Docs (13) | Auto-chain → Security Audit choice (14) |
| Security Audit (14, done or skipped) | Auto-chain → Performance Test choice (15) |
| Performance Test (15, done or skipped) | Auto-chain → Deploy (16) |
| Deploy (16) | Auto-chain → Retrospective (17) |
| Retrospective (17) | **Cycle complete** — loop back to New Task (9) with incremented cycle counter |
## Status Summary — Step List
Flow name: `existing-code`. Render using the banner template in `protocols.md` → "Banner Template (authoritative)".
Flow-specific slot values:
- `<header-suffix>`: ` — Cycle <N>` when `state.cycle > 1`; otherwise empty.
- `<current-suffix>`: ` (cycle <N>)` when `state.cycle > 1`; otherwise empty.
- `<footer-extras>`: empty.
**Phase A — One-time baseline setup**
| # | Step Name | Extra state tokens (beyond the shared set) |
|---|-----------------------------|--------------------------------------------|
| 1 | Document | — |
| 2 | Architecture Baseline | — |
| 3 | Test Spec | — |
| 4 | Code Testability Revision | — |
| 5 | Decompose Tests | `DONE (N tasks)` |
| 6 | Implement Tests | `IN PROGRESS (batch M)` |
| 7 | Run Tests | `DONE (N passed, M failed)` |
| 8 | Refactor | `IN PROGRESS (phase N)` |
**Phase B — Feature cycle (loops)**
| # | Step Name | Extra state tokens (beyond the shared set) |
|---|-----------------------------|--------------------------------------------|
| 9 | New Task | `DONE (N tasks)` |
| 10 | Implement | `IN PROGRESS (batch M of ~N)` |
| 11 | Run Tests | `DONE (N passed, M failed)` |
| 12 | Test-Spec Sync | — |
| 13 | Update Docs | — |
| 14 | Security Audit | — |
| 15 | Performance Test | — |
| 16 | Deploy | — |
| 17 | Retrospective | — |
All rows accept the shared state tokens (`DONE`, `IN PROGRESS`, `NOT STARTED`, `FAILED (retry N/3)`); rows 2, 4, 8, 12, 13, 14, 15 additionally accept `SKIPPED`.
Row rendering format (renders with a phase separator between Step 8 and Step 9):
```
── Phase A: One-time baseline setup ──
Step 1 Document [<state token>]
Step 2 Architecture Baseline [<state token>]
Step 3 Test Spec [<state token>]
Step 4 Code Testability Rev. [<state token>]
Step 5 Decompose Tests [<state token>]
Step 6 Implement Tests [<state token>]
Step 7 Run Tests [<state token>]
Step 8 Refactor [<state token>]
── Phase B: Feature cycle (loops) ──
Step 9 New Task [<state token>]
Step 10 Implement [<state token>]
Step 11 Run Tests [<state token>]
Step 12 Test-Spec Sync [<state token>]
Step 13 Update Docs [<state token>]
Step 14 Security Audit [<state token>]
Step 15 Performance Test [<state token>]
Step 16 Deploy [<state token>]
Step 17 Retrospective [<state token>]
```
# Greenfield Workflow
Workflow for new projects built from scratch. Flows linearly: Problem → Research → Plan → UI Design (if applicable) → Decompose → Implement → Run Tests → Security Audit (optional) → Performance Test (optional) → Deploy → Retrospective.
## Step Reference Table
| Step | Name | Sub-Skill | Internal SubSteps |
|------|------|-----------|-------------------|
| 1 | Problem | problem/SKILL.md | Phase 1–4 |
| 2 | Research | research/SKILL.md | Mode A: Phase 1–4 · Mode B: Step 0–8 |
| 3 | Plan | plan/SKILL.md | Step 1–6 + Final |
| 4 | UI Design | ui-design/SKILL.md | Phase 0–8 (conditional — UI projects only) |
| 5 | Decompose | decompose/SKILL.md | Step 1–4 |
| 6 | Implement | implement/SKILL.md | (batch-driven, no fixed sub-steps) |
| 7 | Run Tests | test-run/SKILL.md | Steps 1–4 |
| 8 | Security Audit | security/SKILL.md | Phase 1–5 (optional) |
| 9 | Performance Test | test-run/SKILL.md (perf mode) | Steps 1–5 (optional) |
| 10 | Deploy | deploy/SKILL.md | Step 1–7 |
| 11 | Retrospective | retrospective/SKILL.md (cycle-end mode) | Steps 1–4 |
## Detection Rules
**Resolution**: when a state file exists, `state.step` + `state.status` drive detection and the conditions below are not consulted. When no state file exists (cold start), walk the rules in order — first folder-probe match wins. Steps without a folder probe are state-driven only; they can only be reached by auto-chain from a prior step.
---
**Step 1 — Problem Gathering**
Condition: `_docs/00_problem/` does not exist, OR any of these are missing/empty:
- `problem.md`
- `restrictions.md`
- `acceptance_criteria.md`
- `input_data/` (must contain at least one file)
Action: Read and execute `.cursor/skills/problem/SKILL.md`
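The folder probe for this condition can be sketched as a small predicate; the paths mirror the condition above, while the function name and return convention are illustrative:

```python
from pathlib import Path

REQUIRED_FILES = ["problem.md", "restrictions.md", "acceptance_criteria.md"]

def needs_problem_gathering(root: Path) -> bool:
    """True when Step 1 (Problem) should run: _docs/00_problem/ is
    missing, any required file is missing or empty, or input_data/
    is missing or contains no entries."""
    problem_dir = root / "_docs" / "00_problem"
    if not problem_dir.is_dir():
        return True
    for name in REQUIRED_FILES:
        f = problem_dir / name
        if not f.is_file() or f.stat().st_size == 0:
            return True
    input_data = problem_dir / "input_data"
    # input_data/ must exist and contain at least one entry
    return not input_data.is_dir() or not any(input_data.iterdir())
```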
---
**Step 2 — Research (Initial)**
Condition: `_docs/00_problem/` is complete AND `_docs/01_solution/` has no `solution_draft*.md` files
Action: Read and execute `.cursor/skills/research/SKILL.md` (will auto-detect Mode A)
---
**Research Decision** (inline gate between Step 2 and Step 3)
Condition: `_docs/01_solution/` contains `solution_draft*.md` files AND `_docs/01_solution/solution.md` does not exist AND `_docs/02_document/architecture.md` does not exist
Action: Present the current research state to the user:
- How many solution drafts exist
- Whether tech_stack.md and security_analysis.md exist
- One-line summary from the latest draft
Then present using the **Choose format**:
```
══════════════════════════════════════
DECISION REQUIRED: Research complete — next action?
══════════════════════════════════════
A) Run another research round (Mode B assessment)
B) Proceed to planning with current draft
══════════════════════════════════════
Recommendation: [A or B] — [reason based on draft quality]
══════════════════════════════════════
```
- If user picks A → Read and execute `.cursor/skills/research/SKILL.md` (will auto-detect Mode B)
- If user picks B → auto-chain to Step 3 (Plan)
---
**Step 3 — Plan**
Condition: `_docs/01_solution/` has `solution_draft*.md` files AND `_docs/02_document/architecture.md` does not exist
Action:
1. The plan skill's Prereq 2 will rename the latest draft to `solution.md` — this is handled by the plan skill itself
2. Read and execute `.cursor/skills/plan/SKILL.md`
If `_docs/02_document/` exists but is incomplete (has some artifacts but no `FINAL_report.md`), the plan skill's built-in resumability handles it.
---
**Step 4 — UI Design (conditional)**
Condition (folder fallback): `_docs/02_document/architecture.md` exists AND `_docs/02_tasks/todo/` does not exist or has no task files.
State-driven: reached by auto-chain from Step 3.
Action: Read and execute `.cursor/skills/ui-design/SKILL.md`. The skill runs its own **Applicability Check**, which handles UI project detection and the user's A/B choice. It returns one of:
- `outcome: completed` → mark Step 4 as `completed`, auto-chain to Step 5 (Decompose).
- `outcome: skipped, reason: not-a-ui-project` → mark Step 4 as `skipped`, auto-chain to Step 5.
- `outcome: skipped, reason: user-declined` → mark Step 4 as `skipped`, auto-chain to Step 5.
The autodev no longer inlines UI detection heuristics — they live in `ui-design/SKILL.md` under "Applicability Check".
---
**Step 5 — Decompose**
Condition: `_docs/02_document/` contains `architecture.md` AND `_docs/02_document/components/` has at least one component AND `_docs/02_tasks/todo/` does not exist or has no task files
Action: Read and execute `.cursor/skills/decompose/SKILL.md`
If `_docs/02_tasks/` subfolders have some task files already, the decompose skill's resumability handles it.
---
**Step 6 — Implement**
Condition: `_docs/02_tasks/todo/` contains task files AND `_dependencies_table.md` exists AND `_docs/03_implementation/` does not contain any `implementation_report_*.md` file
Action: Read and execute `.cursor/skills/implement/SKILL.md`
If `_docs/03_implementation/` has batch reports, the implement skill detects completed tasks and continues. The FINAL report filename is context-dependent — see implement skill documentation for naming convention.
---
**Step 7 — Run Tests**
Condition (folder fallback): `_docs/03_implementation/` contains an `implementation_report_*.md` file.
State-driven: reached by auto-chain from Step 6.
Action: Read and execute `.cursor/skills/test-run/SKILL.md`
---
**Step 8 — Security Audit (optional)**
State-driven: reached by auto-chain from Step 7.
Action: Apply the **Optional Skill Gate** (`protocols.md` → "Optional Skill Gate") with:
- question: `Run security audit before deploy?`
- option-a-label: `Run security audit (recommended for production deployments)`
- option-b-label: `Skip — proceed directly to deploy`
- recommendation: `A — catches vulnerabilities before production`
- target-skill: `.cursor/skills/security/SKILL.md`
- next-step: Step 9 (Performance Test)
---
**Step 9 — Performance Test (optional)**
State-driven: reached by auto-chain from Step 8.
Action: Apply the **Optional Skill Gate** (`protocols.md` → "Optional Skill Gate") with:
- question: `Run performance/load tests before deploy?`
- option-a-label: `Run performance tests (recommended for latency-sensitive or high-load systems)`
- option-b-label: `Skip — proceed directly to deploy`
- recommendation: `A or B — base on whether acceptance criteria include latency, throughput, or load requirements`
- target-skill: `.cursor/skills/test-run/SKILL.md` in **perf mode** (the skill handles runner detection, threshold comparison, and its own A/B/C gate on threshold failures)
- next-step: Step 10 (Deploy)
---
**Step 10 — Deploy**
State-driven: reached by auto-chain from Step 9 (after Step 9 is completed or skipped).
Action: Read and execute `.cursor/skills/deploy/SKILL.md`.
After the deploy skill completes successfully, mark Step 10 as `completed` and auto-chain to Step 11 (Retrospective).
---
**Step 11 — Retrospective**
State-driven: reached by auto-chain from Step 10.
Action: Read and execute `.cursor/skills/retrospective/SKILL.md` in **cycle-end mode**. This closes the cycle's feedback loop by folding metrics into `_docs/06_metrics/retro_<date>.md` and appending the top-3 lessons to `_docs/LESSONS.md`.
After retrospective completes, mark Step 11 as `completed` and enter "Done" evaluation.
---
**Done**
State-driven: reached by auto-chain from Step 11. (Sanity check: `_docs/04_deploy/` should contain all expected artifacts — containerization.md, ci_cd_pipeline.md, environment_strategy.md, observability.md, deployment_procedures.md, deploy_scripts.md.)
Action: Report project completion with summary. Then **rewrite the state file** so the next `/autodev` invocation enters the feature-cycle loop in the existing-code flow:
```
flow: existing-code
step: 9
name: New Task
status: not_started
sub_step:
phase: 0
name: awaiting-invocation
detail: ""
retry_count: 0
cycle: 1
```
On the next invocation, Flow Resolution rule 1 reads `flow: existing-code` and re-entry flows directly into existing-code Step 9 (New Task).
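That first read can be sketched as follows, assuming the state file keeps the flat `key: value` shape shown above with sub_step fields indented beneath `sub_step:`; the helper name is illustrative:

```python
def resolve_entry(text: str) -> tuple:
    """Read the top-level `flow:` and `step:` keys that Flow
    Resolution rule 1 needs; indented lines belong to sub_step
    and are skipped."""
    flow, step = None, None
    for line in text.splitlines():
        if not line or line[:1].isspace() or ":" not in line:
            continue
        key, _, value = line.partition(":")
        if key == "flow":
            flow = value.strip()
        elif key == "step":
            step = int(value.strip())
    return flow, step
```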
## Auto-Chain Rules
| Completed Step | Next Action |
|---------------|-------------|
| Problem (1) | Auto-chain → Research (2) |
| Research (2) | Auto-chain → Research Decision (ask user: another round or proceed?) |
| Research Decision → proceed | Auto-chain → Plan (3) |
| Plan (3) | Auto-chain → UI Design detection (4) |
| UI Design (4, done or skipped) | Auto-chain → Decompose (5) |
| Decompose (5) | **Session boundary** — suggest new conversation before Implement |
| Implement (6) | Auto-chain → Run Tests (7) |
| Run Tests (7, all pass) | Auto-chain → Security Audit choice (8) |
| Security Audit (8, done or skipped) | Auto-chain → Performance Test choice (9) |
| Performance Test (9, done or skipped) | Auto-chain → Deploy (10) |
| Deploy (10) | Auto-chain → Retrospective (11) |
| Retrospective (11) | Report completion; rewrite state to existing-code flow, step 9 |
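The table above can be read as a transition lookup keyed by (step, outcome); the step identifiers below are illustrative shorthand, not part of the state file format:

```python
# Greenfield auto-chain as data. Decision points (research round,
# optional gates) are resolved before lookup, so keys carry the
# outcome where it matters. Failures route to Failure Handling
# instead of this table.
NEXT = {
    ("problem", "done"): "research",
    ("research", "done"): "research_decision",   # ask user
    ("research_decision", "proceed"): "plan",
    ("plan", "done"): "ui_design",
    ("ui_design", "done"): "decompose",
    ("ui_design", "skipped"): "decompose",
    ("decompose", "done"): "session_boundary",   # suggest new conversation
    ("implement", "done"): "run_tests",
    ("run_tests", "pass"): "security_gate",
    ("security_gate", "done"): "perf_gate",
    ("security_gate", "skipped"): "perf_gate",
    ("perf_gate", "done"): "deploy",
    ("perf_gate", "skipped"): "deploy",
    ("deploy", "done"): "retrospective",
    ("retrospective", "done"): "handoff_to_existing_code",
}

def next_step(step: str, outcome: str) -> str:
    return NEXT[(step, outcome)]
```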
## Status Summary — Step List
Flow name: `greenfield`. Render using the banner template in `protocols.md` → "Banner Template (authoritative)". No header-suffix, current-suffix, or footer-extras — all empty for this flow.
| # | Step Name | Extra state tokens (beyond the shared set) |
|---|--------------------|--------------------------------------------|
| 1 | Problem | — |
| 2 | Research | `DONE (N drafts)` |
| 3 | Plan | — |
| 4 | UI Design | — |
| 5 | Decompose | `DONE (N tasks)` |
| 6 | Implement | `IN PROGRESS (batch M of ~N)` |
| 7 | Run Tests | `DONE (N passed, M failed)` |
| 8 | Security Audit | — |
| 9 | Performance Test | — |
| 10 | Deploy | — |
| 11 | Retrospective | — |
All rows also accept the shared state tokens (`DONE`, `IN PROGRESS`, `NOT STARTED`, `FAILED (retry N/3)`); rows 4, 8, 9 additionally accept `SKIPPED`.
Row rendering format (step-number column is right-padded to 2 characters for alignment):
```
Step 1 Problem [<state token>]
Step 2 Research [<state token>]
Step 3 Plan [<state token>]
Step 4 UI Design [<state token>]
Step 5 Decompose [<state token>]
Step 6 Implement [<state token>]
Step 7 Run Tests [<state token>]
Step 8 Security Audit [<state token>]
Step 9 Performance Test [<state token>]
Step 10 Deploy [<state token>]
Step 11 Retrospective [<state token>]
```
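The padding rule can be sketched as a formatter; only the 2-character step-number padding is specified above, so the 18-character name column is an assumption read off the sample rows:

```python
def render_rows(steps, tokens):
    """Render status rows: step number right-padded to 2 chars
    (as specified); names padded to an assumed 18-char column."""
    return "\n".join(
        f"Step {n:<2} {name:<18} [{tokens.get(n, 'NOT STARTED')}]"
        for n, name in steps
    )
```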
# Meta-Repo Workflow
Workflow for **meta-repositories** — repos that aggregate multiple components via git submodules, npm/cargo/pnpm/go workspaces, or ad-hoc conventions. The meta-repo itself has little or no source code of its own; it orchestrates cross-cutting documentation, CI/CD, and component registration.
This flow differs fundamentally from `greenfield` and `existing-code`:
- **No problem/research/plan phases** — meta-repos don't build features, they coordinate existing ones
- **No test spec / implement / run tests** — the meta-repo has no code to test
- **No `_docs/00_problem/` artifacts** — documentation target is `_docs/*.md` unified docs, not per-feature `_docs/NN_feature/` folders
- **Primary artifact is `_docs/_repo-config.yaml`** — generated by `monorepo-discover`, read by every other step
## Step Reference Table
| Step | Name | Sub-Skill | Internal SubSteps |
|------|------|-----------|-------------------|
| 1 | Discover | monorepo-discover/SKILL.md | Phase 1–10 |
| 2 | Config Review | (human checkpoint, no sub-skill) | — |
| 3 | Status | monorepo-status/SKILL.md | Sections 1–5 |
| 4 | Document Sync | monorepo-document/SKILL.md | Phase 1–7 (conditional on doc drift) |
| 5 | CICD Sync | monorepo-cicd/SKILL.md | Phase 1–7 (conditional on CI drift) |
| 6 | Loop | (auto-return to Step 3 on next invocation) | — |
**Onboarding is NOT in the auto-chain.** Onboarding a new component is always user-initiated (`monorepo-onboard` directly, or answering "yes" to the optional onboard branch at end of Step 5). The autodev does NOT silently onboard components it discovers.
## Detection Rules
**Resolution**: when a state file exists, `state.step` + `state.status` drive detection and the conditions below are not consulted. When no state file exists (cold start), walk the rules in order — first match wins. Meta-repo uses `_docs/_repo-config.yaml` (and its `confirmed_by_user` flag) as its primary folder-probe signal rather than per-step artifact folders.
---
**Step 1 — Discover**
Condition: `_docs/_repo-config.yaml` does NOT exist
Action: Read and execute `.cursor/skills/monorepo-discover/SKILL.md`. After completion, auto-chain to **Step 2 (Config Review)**.
---
**Step 2 — Config Review** (session boundary)
Condition: `_docs/_repo-config.yaml` exists AND top-level `confirmed_by_user: false`
Action: This is a **hard session boundary**. The skill cannot proceed until a human reviews the generated config and sets `confirmed_by_user: true`. Present using Choose format:
```
══════════════════════════════════════
DECISION REQUIRED: Config review pending
══════════════════════════════════════
_docs/_repo-config.yaml was generated by monorepo-discover
but has confirmed_by_user: false.
A) I've reviewed — proceed to Status
B) Pause — I'll review the config and come back later
══════════════════════════════════════
Recommendation: B — review the inferred mappings (tagged
`confirmed: false`), unresolved questions, and assumptions
before flipping confirmed_by_user: true.
══════════════════════════════════════
```
- If user picks A → verify `confirmed_by_user: true` is now set in the config. If still `false`, re-ask. If true, auto-chain to **Step 3 (Status)**.
- If user picks B → mark Step 2 as `in_progress`, update state file, end the session. Tell the user to invoke `/autodev` again after reviewing.
**Do NOT auto-flip `confirmed_by_user`.** Only the human does that.
---
**Step 3 — Status**
Condition (folder fallback): `_docs/_repo-config.yaml` exists AND `confirmed_by_user: true`.
State-driven: reached by auto-chain from Step 2 (user picked A), or entered on any re-invocation after a completed cycle.
Action: Read and execute `.cursor/skills/monorepo-status/SKILL.md`.
The status report identifies:
- Components with doc drift (commits newer than their mapped docs)
- Components with CI coverage gaps
- Registry/config mismatches
- Unresolved questions
Based on the report, auto-chain branches:
- If **doc drift** found → auto-chain to **Step 4 (Document Sync)**
- Else if **CI drift** (only) found → auto-chain to **Step 5 (CICD Sync)**
- Else if **registry mismatch** found (new components not in config) → present Choose format:
```
══════════════════════════════════════
DECISION REQUIRED: Registry drift detected
══════════════════════════════════════
Components in registry but not in config: <list>
Components in config but not in registry: <list>
A) Run monorepo-discover to refresh config
B) Run monorepo-onboard for each new component (interactive)
C) Ignore for now — continue
══════════════════════════════════════
Recommendation: A — safest; re-detect everything, human reviews
══════════════════════════════════════
```
- Else → **workflow done for this cycle**. Report "No drift. Meta-repo is in sync." Loop waits for next invocation.
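The branch order can be sketched as a router over the status report's findings; the flag names and return labels are illustrative:

```python
def route_after_status(doc_drift: bool, ci_drift: bool,
                       registry_mismatch: bool) -> str:
    """Branch order from the status report: doc drift wins,
    then CI-only drift, then registry mismatch (user choice),
    else the cycle is done."""
    if doc_drift:
        return "document_sync"   # Step 4
    if ci_drift:
        return "cicd_sync"       # Step 5
    if registry_mismatch:
        return "ask_user"        # A/B/C Choose block
    return "cycle_complete"      # "No drift. Meta-repo is in sync."
```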
---
**Step 4 — Document Sync**
State-driven: reached by auto-chain from Step 3 when the status report flagged doc drift.
Action: Read and execute `.cursor/skills/monorepo-document/SKILL.md` with scope = components flagged by status.
The skill:
1. Runs its own drift check (M7)
2. Asks user to confirm scope (components it will touch)
3. Applies doc edits
4. Skips any component with unconfirmed mapping (M5), reports
After completion:
- If the status report ALSO flagged CI drift → auto-chain to **Step 5 (CICD Sync)**
- Else → end cycle, report done
---
**Step 5 — CICD Sync**
State-driven: reached by auto-chain from Step 3 (when status report flagged CI drift and no doc drift) or from Step 4 (when both doc and CI drift were flagged).
Action: Read and execute `.cursor/skills/monorepo-cicd/SKILL.md` with scope = components flagged by status.
After completion, end cycle. Report files updated across both doc and CI sync.
---
**Step 6 — Loop (re-entry on next invocation)**
State-driven: all triggered steps completed; the meta-repo cycle has finished.
Action: Update state file to `step: 3, status: not_started` so that next `/autodev` invocation starts from Status. The meta-repo flow is cyclical — there's no terminal "done" state, because drift can appear at any time as submodules evolve.
On re-invocation:
- If config was updated externally and `confirmed_by_user` flipped back to `false` → go back to Step 2
- Otherwise → Step 3 (Status)
## Explicit Onboarding Branch (user-initiated)
Onboarding is not auto-chained. Two ways to invoke:
**1. During Step 3 registry-mismatch handling** — if user picks option B in the registry-mismatch Choose format, launch `monorepo-onboard` interactively for each new component.
**2. Direct user request** — if the user says "onboard <name>" during any step, pause the current step, save state, run `monorepo-onboard`, then resume.
After onboarding completes, the config is updated. Auto-chain back to **Step 3 (Status)** to catch any remaining drift the new component introduced.
## Auto-Chain Rules
| Completed Step | Next Action |
|---------------|-------------|
| Discover (1) | Auto-chain → Config Review (2) |
| Config Review (2, user picked A, confirmed_by_user: true) | Auto-chain → Status (3) |
| Config Review (2, user picked B) | **Session boundary** — end session, await re-invocation |
| Status (3, doc drift) | Auto-chain → Document Sync (4) |
| Status (3, CI drift only) | Auto-chain → CICD Sync (5) |
| Status (3, no drift) | **Cycle complete** — end session, await re-invocation |
| Status (3, registry mismatch) | Ask user (A: discover, B: onboard, C: continue) |
| Document Sync (4) + CI drift pending | Auto-chain → CICD Sync (5) |
| Document Sync (4) + no CI drift | **Cycle complete** |
| CICD Sync (5) | **Cycle complete** |
## Status Summary — Step List
Flow name: `meta-repo`. Render using the banner template in `protocols.md` → "Banner Template (authoritative)".
Flow-specific slot values:
- `<header-suffix>`: empty.
- `<current-suffix>`: empty.
- `<footer-extras>`: add a single line:
```
Config: _docs/_repo-config.yaml [confirmed_by_user: <true|false>, last_updated: <date>]
```
| # | Step Name | Extra state tokens (beyond the shared set) |
|---|------------------|--------------------------------------------|
| 1 | Discover | — |
| 2 | Config Review | `IN PROGRESS (awaiting human)` |
| 3 | Status | `DONE (no drift)`, `DONE (N drifts)` |
| 4 | Document Sync | `DONE (N docs)`, `SKIPPED (no doc drift)` |
| 5 | CICD Sync | `DONE (N files)`, `SKIPPED (no CI drift)` |
All rows accept the shared state tokens (`DONE`, `IN PROGRESS`, `NOT STARTED`, `FAILED (retry N/3)`); rows 4 and 5 additionally accept `SKIPPED`.
Row rendering format:
```
Step 1 Discover [<state token>]
Step 2 Config Review [<state token>]
Step 3 Status [<state token>]
Step 4 Document Sync [<state token>]
Step 5 CICD Sync [<state token>]
```
## Notes for the meta-repo flow
- **No session boundary except Step 2**: unlike existing-code flow (which has boundaries around decompose), meta-repo flow only pauses at config review. Syncing is fast enough to complete in one session.
- **Cyclical, not terminal**: no "done forever" state. Each invocation completes a drift cycle; next invocation starts fresh.
- **No tracker integration**: this flow does NOT create Jira/ADO tickets. Maintenance is not a feature — if a feature-level ticket spans the meta-repo's concerns, it lives in the per-component workspace.
- **Onboarding is opt-in**: never auto-onboarded. User must explicitly request.
- **Failure handling**: uses the same retry/escalation protocol as other flows (see `protocols.md`).
# Autodev Protocols
## User Interaction Protocol
Every time the autodev or a sub-skill needs a user decision, use the **Choose A / B / C / D** format. This applies to:
- State transitions where multiple valid next actions exist
- Sub-skill BLOCKING gates that require user judgment
- Any fork where the autodev cannot confidently pick the right path
- Trade-off decisions (tech choices, scope, risk acceptance)
### When to Ask (MUST ask)
- The next action is ambiguous (e.g., "another research round or proceed?")
- The decision has irreversible consequences (e.g., architecture choices, skipping a step)
- The user's intent or preference cannot be inferred from existing artifacts
- A sub-skill's BLOCKING gate explicitly requires user confirmation
- Multiple valid approaches exist with meaningfully different trade-offs
### When NOT to Ask (auto-transition)
- Only one logical next step exists (e.g., Problem complete → Research is the only option)
- The transition is deterministic from the state (e.g., Plan complete → Decompose)
- The decision is low-risk and reversible
- Existing artifacts or prior decisions already imply the answer
### Choice Format
Always present decisions in this format:
```
══════════════════════════════════════
DECISION REQUIRED: [brief context]
══════════════════════════════════════
A) [Option A — short description]
B) [Option B — short description]
C) [Option C — short description, if applicable]
D) [Option D — short description, if applicable]
══════════════════════════════════════
Recommendation: [A/B/C/D] — [one-line reason]
══════════════════════════════════════
```
Rules:
1. Always provide 2–4 concrete options (never open-ended questions)
2. Always include a recommendation with a brief justification
3. Keep option descriptions to one line each
4. If only 2 options make sense, use A/B only — do not pad with filler options
5. Play the notification sound (per `.cursor/rules/human-attention-sound.mdc`) before presenting the choice
6. After the user picks, proceed immediately — no follow-up confirmation unless the choice was destructive
## Optional Skill Gate (reusable template)
Several flow steps ask the user whether to run an optional skill (security audit, performance test, etc.) before auto-chaining. Instead of re-stating the Choose block and skip semantics at each such step, flow files invoke this shared template.
### Template shape
```
══════════════════════════════════════
DECISION REQUIRED: <question>
══════════════════════════════════════
A) <option-a-label>
B) <option-b-label>
══════════════════════════════════════
Recommendation: <A|B> — <reason>
══════════════════════════════════════
```
### Semantics (same for every invocation)
- **On A** → read and execute the target skill's `SKILL.md`; after it completes, auto-chain to `<next-step>`.
- **On B** → mark the current step `skipped` in the state file; auto-chain to `<next-step>`.
- **On skill failure** → standard Failure Handling (§Failure Handling) — retry ladder, then escalate via Choose block.
- **Sound before the prompt** — follow `.cursor/rules/human-attention-sound.mdc`.
### How flow files invoke it
Each flow-file step that needs this gate supplies only the variable parts:
```
Action: Apply the **Optional Skill Gate** (protocols.md → "Optional Skill Gate") with:
- question: <Choose-block header>
- option-a-label: <one-line A description>
- option-b-label: <one-line B description>
- recommendation: <A|B> — <short reason, may be dynamic>
- target-skill: <.cursor/skills/<name>/SKILL.md, plus any mode hint>
- next-step: Step <N> (<name>)
```
The resolved Choose block (shape above) is then rendered verbatim by substituting these variables. Do NOT reword the shared scaffolding — reword only the variable parts. If a step needs different semantics (e.g., "re-run same skill" rather than "skip to next step"), it MUST NOT use this template; it writes the Choose block inline with its own semantics.
### When NOT to use this template
- The user choice has **more than two options** (A/B/C/D).
- The choice is **not "run-or-skip-this-skill"** (e.g., "another round of the same skill", "pick tech stack", "proceed vs. rollback").
- The skipped path needs special bookkeeping beyond `status: skipped` (e.g., must also move artifacts, notify tracker, trigger a different skill).
For those cases, write the Choose block inline using the base format in §User Interaction Protocol.
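Verbatim substitution can be sketched as a renderer that fills only the variable slots; the 38-character rule width is an assumption taken from the sample blocks in this file:

```python
BAR = "=" * 38  # rule width assumed from the sample blocks

def render_gate(question: str, a_label: str, b_label: str,
                recommendation: str) -> str:
    """Fill the Optional Skill Gate scaffolding; only the four
    variable slots change, the scaffolding itself stays fixed."""
    return "\n".join([
        BAR,
        f"DECISION REQUIRED: {question}",
        BAR,
        f"A) {a_label}",
        f"B) {b_label}",
        BAR,
        f"Recommendation: {recommendation}",
        BAR,
    ])
```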
## Work Item Tracker Authentication
All tracker detection, authentication, availability gating, `tracker: local` fallback semantics, and leftovers handling are defined in `.cursor/rules/tracker.mdc`. Follow that rule — do not restate its logic here.
Autodev-specific additions on top of the rule:
### Steps That Require Work Item Tracker
Before entering a step from this table for the first time in a session, verify tracker availability per `.cursor/rules/tracker.mdc`. If the user has already chosen `tracker: local`, skip the gate and proceed.
| Flow | Step | Sub-Step | Tracker Action |
|------|------|----------|----------------|
| greenfield | Plan | Step 6 — Epics | Create epics for each component |
| greenfield | Decompose | Step 1 + Step 2 + Step 3 — All tasks | Create ticket per task, link to epic |
| existing-code | Decompose Tests | Step 1t + Step 3 — All test tasks | Create ticket per task, link to epic |
| existing-code | New Task | Step 7 — Ticket | Create ticket per task, link to epic |
### State File Marker
Record the resolved choice in the state file once per session: `tracker: jira` or `tracker: local`. Subsequent steps read this marker instead of re-running the gate.
## Error Handling
All error situations that require user input MUST use the **Choose A / B / C / D** format.
| Situation | Action |
|-----------|--------|
| State detection is ambiguous (artifacts suggest two different steps) | Present findings and use Choose format with the candidate steps as options |
| Sub-skill fails or hits an unrecoverable blocker | Use Choose format: A) retry, B) skip with warning, C) abort and fix manually |
| User wants to skip a step | Use Choose format: A) skip (with dependency warning), B) execute the step |
| User wants to go back to a previous step | Use Choose format: A) re-run (with overwrite warning), B) stay on current step |
| User asks "where am I?" without wanting to continue | Show Status Summary only, do not start execution |
## Failure Handling
One retry ladder covers all failure modes: explicit failure returned by a sub-skill, stuck loops detected while monitoring, and persistent failures across conversations. The single counter is `retry_count` in the state file; the single escalation is the Choose block below.
### Failure signals
Treat the sub-skill as **failed** when ANY of the following is observed:
- The sub-skill explicitly returns a failed result (including blocked subagents, auto-fix loop exhaustion, prerequisite violations).
- **Stuck signals**: the same artifact is rewritten 3+ times without meaningful change; the sub-skill re-asks a question that was already answered; no new artifact has been saved despite active execution.
### Retry ladder
```
Failure observed
├─ retry_count < 3 ?
│ YES → increment retry_count in state file
│ → re-read the sub-skill's SKILL.md and _docs/_autodev_state.md
│ → resume from the last recorded sub_step (restart from sub_step 1 only if corruption is suspected)
│ → loop
│ NO (retry_count = 3) →
│ → set status: failed and retry_count: 3 in Current Step
│ → play notification sound (.cursor/rules/human-attention-sound.mdc)
│ → escalate (Choose block below)
│ → do NOT auto-retry until the user intervenes
```
Rules:
1. **Auto-retry is immediate** — do not ask before retrying.
2. **Preserve `sub_step`** across retries unless the failure indicates artifact corruption.
3. **Reset `retry_count: 0` on success.**
4. The counter is **per step, per cycle**. It is not cleared by crossing a session boundary — persistence across conversations is intentional; it IS the circuit breaker.
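The ladder and its rules can be sketched as two state-file operations; the dict is illustrative shorthand for the Current Step fields:

```python
MAX_RETRIES = 3

def on_failure(state: dict) -> str:
    """Apply the retry ladder: auto-retry while retry_count < 3,
    escalate (and stop auto-retrying) once it reaches 3."""
    if state["retry_count"] < MAX_RETRIES:
        state["retry_count"] += 1
        # sub_step is preserved; re-read SKILL.md + state file, resume
        return "retry"
    state["status"] = "failed"
    return "escalate"  # play sound, present the Choose block

def on_success(state: dict) -> None:
    state["retry_count"] = 0  # rule 3: reset on success
```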
### Escalation
```
══════════════════════════════════════
SKILL FAILED: [Skill Name] — 3 consecutive failures
══════════════════════════════════════
Step: [N] — [Name]
SubStep: [M] — [sub-step name]
Last failure reason: [reason]
══════════════════════════════════════
A) Retry with fresh context (new conversation)
B) Skip this step with warning
C) Abort — investigate and fix manually
══════════════════════════════════════
Recommendation: A — fresh context often resolves
persistent failures
══════════════════════════════════════
```
### Re-entry after escalation
On the next invocation, if the state file shows `status: failed` AND `retry_count: 3`, do NOT auto-retry. Present the escalation block above first:
- User picks A → reset `retry_count: 0`, set `status: in_progress`, re-execute.
- User picks B → mark step `skipped`, proceed to the next step.
- User picks C → stop; return control to the user.
### Incident retrospective
Immediately after the user has made their A/B/C choice, invoke `.cursor/skills/retrospective/SKILL.md` in **incident mode**:
```
mode: incident
failing_skill: <skill name>
failure_summary: <last failure reason string>
```
This produces `_docs/06_metrics/incident_<YYYY-MM-DD>_<skill>.md` and appends 1–3 lessons to `_docs/LESSONS.md` under `process` or `tooling`. The retro runs even if the user picked Abort — the goal is to capture the pattern while it is fresh. If the retrospective skill itself fails, log the failure to `_docs/_process_leftovers/` but do NOT block the user's recovery choice from completing.
## Context Management Protocol
### Principle
Disk is memory. Never rely on in-context accumulation — read from `_docs/` artifacts, not from conversation history.
### Minimal Re-Read Set Per Skill
When re-entering a skill (new conversation or context refresh):
- Always read: `_docs/_autodev_state.md`
- Always read: the active skill's `SKILL.md`
- Conditionally read: only the `_docs/` artifacts the current sub-step requires (listed in each skill's Context Resolution section)
- Never bulk-read: do not load all `_docs/` files at once
### Mid-Skill Interruption
If context is filling up during a long skill (e.g., document, implement):
1. Save current sub-step progress to the skill's artifact directory
2. Update `_docs/_autodev_state.md` with exact sub-step position
3. Suggest a new conversation: "Context is getting long — recommend continuing in a fresh conversation for better results"
4. On re-entry, the skill's resumability protocol picks up from the saved sub-step
### Large Artifact Handling
When a skill needs to read large files (e.g., full solution.md, architecture.md):
- Read only the sections relevant to the current sub-step
- Use search tools (Grep, SemanticSearch) to find specific sections rather than reading entire files
- Summarize key decisions from prior steps in the state file so they don't need to be re-read
### Context Budget Heuristic
Agents cannot programmatically query context window usage. Use these heuristics to avoid degradation:
| Zone | Indicators | Action |
|------|-----------|--------|
| **Safe** | State file + SKILL.md + 2–3 focused artifacts loaded | Continue normally |
| **Caution** | 5+ artifacts loaded, or 3+ large files (architecture, solution, discovery), or conversation has 20+ tool calls | Complete current sub-step, then suggest session break |
| **Danger** | Repeated truncation in tool output, tool calls failing unexpectedly, responses becoming shallow or repetitive | Save immediately, update state file, force session boundary |
**Skill-specific guidelines**:
| Skill | Recommended session breaks |
|-------|---------------------------|
| **document** | After every ~5 modules in Step 1; between Step 4 (Verification) and Step 5 (Solution Extraction) |
| **implement** | Each batch is a natural checkpoint; if more than 2 batches completed in one session, suggest break |
| **plan** | Between Step 5 (Test Specifications) and Step 6 (Epics) for projects with many components |
| **research** | Between Mode A rounds; between Mode A and Mode B |
**How to detect caution/danger zone without API**:
1. Count tool calls made so far — if approaching 20+, context is likely filling up
2. If reading a file returns truncated content, context is under pressure
3. If the agent starts producing shorter or less detailed responses than earlier in the conversation, context quality is degrading
4. When in doubt, save and suggest a new conversation — re-entry is cheap thanks to the state file
## Rollback Protocol
### Implementation Steps (git-based)
Handled by `/implement` skill — each batch commit is a rollback checkpoint via `git revert`.
### Planning/Documentation Steps (artifact-based)
For steps that produce `_docs/` artifacts (problem, research, plan, decompose, document):
1. **Before overwriting**: if re-running a step that already has artifacts, the sub-skill's prerequisite check asks the user (resume/overwrite/skip)
2. **Rollback to previous step**: use Choose format:
```
══════════════════════════════════════
ROLLBACK: Re-run [step name]?
══════════════════════════════════════
A) Re-run the step (overwrites current artifacts)
B) Stay on current step
══════════════════════════════════════
Warning: This will overwrite files in _docs/[folder]/
══════════════════════════════════════
```
3. **Git safety net**: artifacts are committed with each autodev step completion. To roll back: `git log --oneline _docs/` to find the commit, then `git checkout <commit> -- _docs/<folder>/`
4. **State file rollback**: when rolling back artifacts, also update `_docs/_autodev_state.md` to reflect the rolled-back step (set it to `in_progress`, clear completed date)
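The git safety net in step 3 can be wrapped as a command builder. This is a sketch under assumptions: the function name is hypothetical, and `<commit>` stays a literal placeholder that the user replaces after inspecting the log output.

```python
def artifact_rollback_commands(folder: str) -> list[str]:
    """Build the git safety-net commands for rolling back a _docs/ folder.

    Mirrors step 3 above; `<commit>` is left as a placeholder for the
    hash the user picks from the log.
    """
    return [
        "git log --oneline _docs/",                   # locate the commit to restore
        f"git checkout <commit> -- _docs/{folder}/",  # restore that folder's artifacts
    ]
```

After running the checkout, remember step 4: the state file must be rewritten to match the rolled-back artifacts.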
## Debug Protocol
When the implement skill's auto-fix loop fails (code review FAIL after 2 auto-fix attempts) or an implementer subagent reports a blocker, the user is asked to intervene. This protocol guides the debugging process. (Retry budget and escalation are covered by Failure Handling above; this section is about *how* to diagnose once the user has been looped in.)
### Structured Debugging Workflow
When escalated to the user after implementation failure:
1. **Classify the failure** — determine the category:
- **Missing dependency**: a package, service, or module the task needs but isn't available
- **Logic error**: code runs but produces wrong results (assertion failures, incorrect output)
- **Integration mismatch**: interfaces between components don't align (type errors, missing methods, wrong signatures)
- **Environment issue**: Docker, database, network, or configuration problem
- **Spec ambiguity**: the task spec is unclear or contradictory
2. **Reproduce** — isolate the failing behavior:
- Run the specific failing test(s) in isolation
- Check whether the failure is deterministic or intermittent
- Capture the exact error message, stack trace, and relevant file:line
3. **Narrow scope** — focus on the minimal reproduction:
- For logic errors: trace the data flow from input to the point of failure
- For integration mismatches: compare the caller's expectations against the callee's actual interface
- For environment issues: verify Docker services are running, DB is accessible, env vars are set
4. **Fix and verify** — apply the fix and confirm:
- Make the minimal change that fixes the root cause
- Re-run the failing test(s) to confirm the fix
- Run the full test suite to check for regressions
- If the fix changes a shared interface, check all consumers
5. **Report** — update the batch report with:
- Root cause category
- Fix applied (file:line, description)
- Tests that now pass
### Common Recovery Patterns
| Failure Pattern | Typical Root Cause | Recovery Action |
|----------------|-------------------|----------------|
| ImportError / ModuleNotFoundError | Missing dependency or wrong path | Install dependency or fix import path |
| TypeError on method call | Interface mismatch between tasks | Align caller with callee's actual signature |
| AssertionError in test | Logic bug or wrong expected value | Fix logic or update test expectations |
| ConnectionRefused | Service not running | Start Docker services, check docker-compose |
| Timeout | Blocking I/O or infinite loop | Add timeout, fix blocking call |
| FileNotFoundError | Hardcoded path or missing fixture | Make path configurable, add fixture |
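The recovery table maps naturally onto a first-match pattern classifier. A minimal sketch, assuming the failure categories from step 1 of the workflow; the pattern-to-category mapping is illustrative, not an exhaustive taxonomy.

```python
import re

# Error signature -> root-cause category (step 1 of the workflow above).
# Ordered, first match wins; the mapping follows the recovery table.
FAILURE_PATTERNS = [
    (r"ImportError|ModuleNotFoundError", "missing-dependency"),
    (r"TypeError",                       "integration-mismatch"),
    (r"AssertionError",                  "logic-error"),
    (r"ConnectionRefused|Timeout",       "environment-issue"),
    (r"FileNotFoundError",               "environment-issue"),
]

def classify_failure(error_text: str) -> str:
    """Return the first matching category, or a marker for manual triage."""
    for pattern, category in FAILURE_PATTERNS:
        if re.search(pattern, error_text):
            return category
    return "unclassified"
```

An `unclassified` result is a prompt for human judgment (for example, spec ambiguity), not a reason to retry blindly.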
### Escalation
If debugging does not resolve the issue after 2 focused attempts:
```
══════════════════════════════════════
DEBUG ESCALATION: [failure description]
══════════════════════════════════════
Root cause category: [category]
Attempted fixes: [list]
Current state: [what works, what doesn't]
══════════════════════════════════════
A) Continue debugging with more context
B) Revert this batch and skip the task (move to backlog)
C) Simplify the task scope and retry
══════════════════════════════════════
```
## Status Summary
On every invocation, before executing any skill, present a status summary built from the state file (with folder scan fallback). For re-entry (state file exists), cross-check the current step against `_docs/` folder structure and present any `status: failed` state to the user before continuing.
### Banner Template (authoritative)
The banner shell is defined here once. Each flow file contributes only its step-list fragment and any flow-specific header/footer extras. Do not inline a full banner in flow files.
```
═══════════════════════════════════════════════════
AUTODEV STATUS (<flow-name>)<header-suffix>
═══════════════════════════════════════════════════
<step-list from the active flow file>
═══════════════════════════════════════════════════
Current: Step <N> — <Name><current-suffix>
SubStep: <M> — <sub-skill internal step name>
Retry: <N/3> ← omit row if retry_count is 0
Action: <what will happen next>
<footer-extras from the active flow file>
═══════════════════════════════════════════════════
```
### Slot rules
- `<flow-name>` — `greenfield`, `existing-code`, or `meta-repo`.
- `<header-suffix>` — optional, flow-specific. The existing-code flow appends ` — Cycle <N>` when `state.cycle > 1`; other flows leave it empty.
- `<step-list>` — a fixed-width table supplied by the active flow file (see that file's "Status Summary — Step List" section). Row format is standardized:
```
Step <N> <Step Name> [<state token>]
```
where `<state token>` comes from the state-token set defined per row in the flow's step-list table.
- `<current-suffix>` — optional, flow-specific. The existing-code flow appends ` (cycle <N>)` when `state.cycle > 1`; other flows leave it empty.
- `Retry:` row — omit entirely when `retry_count` is 0. Include it with `<N>/3` otherwise.
- `<footer-extras>` — optional, flow-specific. The meta-repo flow adds a `Config:` line with `_docs/_repo-config.yaml` state; other flows leave it empty.
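Putting the slot rules together, the banner can be rendered by a single template function. A minimal sketch: the function name and argument shapes are assumptions; what it encodes is the authoritative shell above, including the rule that the `Retry:` row is omitted when `retry_count` is 0.

```python
def render_banner(flow_name, step_list, current, sub_step, action,
                  header_suffix="", current_suffix="", retry_count=0,
                  footer_extras=""):
    """Fill the authoritative banner shell with a flow's slot values.

    `current` and `sub_step` are (number, name) pairs; `step_list` is the
    fixed-width fragment supplied by the active flow file.
    """
    bar = "═" * 51
    lines = [bar,
             f"AUTODEV STATUS ({flow_name}){header_suffix}",
             bar,
             step_list,
             bar,
             f"Current: Step {current[0]} — {current[1]}{current_suffix}",
             f"SubStep: {sub_step[0]} — {sub_step[1]}"]
    if retry_count:  # omit the Retry row entirely when retry_count is 0
        lines.append(f"Retry:   {retry_count}/3")
    lines.append(f"Action:  {action}")
    if footer_extras:
        lines.append(footer_extras)
    lines.append(bar)
    return "\n".join(lines)
```

A flow file then only needs to supply its step-list fragment and suffixes; the shell itself is never duplicated.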
### State token set (shared)
The common tokens all flows may emit are: `DONE`, `IN PROGRESS`, `NOT STARTED`, `SKIPPED`, `FAILED (retry N/3)`. Specific step rows may extend this with parenthetical detail (e.g., `DONE (N drafts)`, `DONE (N tasks)`, `IN PROGRESS (batch M of ~N)`, `DONE (N passed, M failed)`). The flow's step-list table declares which extensions each step supports.
# Autodev State Management
## State File: `_docs/_autodev_state.md`
The autodev persists its position to `_docs/_autodev_state.md`. This is a lightweight pointer — only the current step. All history lives in `_docs/` artifacts and git log. Folder scanning is the fallback when the state file doesn't exist.
### Template
**Saved at:** `_docs/_autodev_state.md` (workspace-relative, one file per project). Created on the first `/autodev` invocation; updated in place on every state transition; never deleted.
```markdown
# Autodev State
## Current Step
flow: [greenfield | existing-code | meta-repo]
step: [1-11 for greenfield, 1-17 for existing-code, 1-6 for meta-repo, or "done"]
name: [step name from the active flow's Step Reference Table]
status: [not_started / in_progress / completed / skipped / failed]
sub_step:
phase: [integer — sub-skill internal phase/step number, or 0 if not started]
name: [kebab-case short identifier from the sub-skill, or "awaiting-invocation"]
detail: [optional free-text note, may be empty]
retry_count: [0-3 — consecutive auto-retry attempts, reset to 0 on success]
cycle: [1-N — feature cycle counter for existing-code flow; increments on each "Re-Entry After Completion" loop; always 1 for greenfield and meta-repo]
```
The `sub_step` field is structured. Every sub-skill must save both `phase` (integer) and `name` (kebab-case token matching the skill's documented phase names). `detail` is optional human-readable context. On re-entry the orchestrator parses `phase` and `name` to resume; if parsing fails, fall back to folder scan and log the parse failure.
### Sub-Skill Phase Persistence — Rules (not a registry)
Each sub-skill is authoritative for its own phase list. Phase names and numbers live inside the sub-skill's own SKILL.md (and any `steps/` / `phases/` files). The orchestrator does not maintain a central phase table — it reads whatever `phase` / `name` the sub-skill last wrote.
Every sub-skill MUST follow these rules when persisting `sub_step`:
1. **`phase`** — a strictly increasing integer per invocation, starting at 0 (`awaiting-invocation`) and incrementing by 1 at each internal save point. No fractional values are ever persisted. If the skill's own docs use half-step numbering (e.g., "Phase 4.5", decompose's "Step 1.5"), the persisted integer is simply the next integer, and all subsequent phases shift up by one in that skill's own file.
2. **`name`** — a kebab-case short identifier unique within that sub-skill. Use the phase's heading or step title in kebab-case (e.g., `component-decomposition`, `auto-fix-gate`, `cross-task-consistency`). Different modes of the same skill may reuse a `phase` integer with distinct `name` values (e.g., `decompose` phase 1 is `bootstrap-structure` in default mode, `test-infrastructure-bootstrap` in tests-only mode).
3. **`detail`** — optional free-text note (batch index, mode flag, retry hint); may be empty.
4. **Reserved name** — `name: awaiting-invocation` with `phase: 0` is the universal "skill was chained but has not started" marker. Every sub-skill implicitly supports it; no sub-skill should reuse the token for anything else.
On re-entry, the orchestrator parses the structured field and resumes at `(phase, name)`. If parsing fails, it falls back to folder scan and logs the parse error — it does NOT guess a phase.
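The parse-or-fall-back behavior can be sketched as a small extractor. This is a sketch under assumptions: the function name is hypothetical, and the regexes assume (per the rules above) that the sub-step `name` is kebab-case while the step-level `name` is Title Case, so only the sub-step line matches.

```python
import re

def parse_sub_step(state_text: str):
    """Extract (phase, name) from the sub_step block of _autodev_state.md.

    Returns None on any parse failure, which signals the orchestrator to
    fall back to a folder scan and log the error; it never guesses a phase.
    """
    phase = re.search(r"^\s*phase:\s*(\d+)\s*$", state_text, re.M)
    # Kebab-case only, so the Title Case step-level `name:` line is skipped.
    name = re.search(r"^\s*name:\s*([a-z0-9-]+)\s*$", state_text, re.M)
    if not phase or not name:
        return None
    return int(phase.group(1)), name.group(1)
```

Returning `None` rather than a guessed default is the point: a wrong resume position is worse than a folder scan.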
The `cycle` counter is used by existing-code flow Step 10 (Implement) detection and by implementation report naming (`implementation_report_{feature_slug}_cycle{N}.md`). It starts at 1 when a project enters existing-code flow (either by routing from greenfield's Done branch, or by first invocation on an existing codebase). It increments on each completed Retrospective → New Task loop.
### Examples
```
flow: greenfield
step: 3
name: Plan
status: in_progress
sub_step:
phase: 4
name: architecture-review-risk-assessment
detail: ""
retry_count: 0
cycle: 1
```
```
flow: existing-code
step: 3
name: Test Spec
status: failed
sub_step:
phase: 1
name: test-case-generation
detail: "variant 1b"
retry_count: 3
cycle: 1
```
```
flow: meta-repo
step: 2
name: Config Review
status: in_progress
sub_step:
phase: 0
name: awaiting-human-review
detail: "awaiting review of _docs/_repo-config.yaml"
retry_count: 0
cycle: 1
```
```
flow: existing-code
step: 10
name: Implement
status: in_progress
sub_step:
phase: 7
name: batch-loop
detail: "batch 2 of ~4"
retry_count: 0
cycle: 3
```
### State File Rules
1. **Create** on the first autodev invocation (after state detection determines Step 1)
2. **Update** after every change — this includes: batch completion, sub-step progress, step completion, session boundary, failed retry, or any meaningful state transition. The state file must always reflect the current reality.
3. **Read** as the first action on every invocation — before folder scanning
4. **Cross-check**: verify against actual `_docs/` folder contents. If they disagree, trust the folder structure and update the state file
5. **Never delete** the state file
6. **Retry tracking**: increment `retry_count` on each failed auto-retry; reset to `0` on success. If `retry_count` reaches 3, set `status: failed`
7. **Failed state on re-entry**: if `status: failed` with `retry_count: 3`, do NOT auto-retry — present the issue to the user first
8. **Skill-internal state**: when the active skill maintains its own state file (e.g., document skill's `_docs/02_document/state.json`), the autodev's `sub_step` field should reflect the skill's internal progress. On re-entry, cross-check the skill's state file against the autodev's `sub_step` for consistency.
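Rules 6 and 7 (retry tracking and the failed-state gate) can be sketched as a pure update function. The name and dict shape are assumptions; the logic mirrors the rules above: reset on success, increment on failure, and flip to `failed` at 3 so re-entry presents the issue instead of auto-retrying.

```python
def record_retry(state: dict, succeeded: bool) -> dict:
    """Apply retry-tracking rules: reset on success, escalate at 3 failures."""
    new = dict(state)
    if succeeded:
        new["retry_count"] = 0
    else:
        new["retry_count"] = state.get("retry_count", 0) + 1
        if new["retry_count"] >= 3:
            new["status"] = "failed"  # rule 7: user intervenes, no auto-retry
    return new
```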
## State Detection
Read `_docs/_autodev_state.md` first. If it exists and is consistent with the folder structure, use the `Current Step` from the state file. If the state file doesn't exist or is inconsistent, fall back to folder scanning.
### Folder Scan Rules (fallback)
Scan the workspace and `_docs/` to determine the current workflow position. The detection rules are defined in each flow file (`flows/greenfield.md`, `flows/existing-code.md`, `flows/meta-repo.md`). Resolution order:
1. Apply the Flow Resolution rules in `SKILL.md` to pick the flow first (meta-repo detection takes priority over greenfield/existing-code).
2. Within the selected flow, check its detection rules in order — first match wins.
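The "first match wins" rule within a flow can be sketched as an ordered marker scan. This is a sketch under stated assumptions: the `_docs/` folder names below are hypothetical, and the real detection rules live in the flow files; only the resolution mechanics are illustrated.

```python
# Illustrative greenfield markers, ordered most-advanced-first so the first
# match is the furthest step that has artifacts. Folder names are assumptions.
GREENFIELD_MARKERS = [
    ("_docs/05_implement", 5),
    ("_docs/04_decompose", 4),
    ("_docs/03_plan", 3),
    ("_docs/02_research", 2),
    ("_docs/01_problem", 1),
]

def detect_step(existing_folders: set[str], rules=GREENFIELD_MARKERS) -> int:
    """Return the step implied by the first matching rule; empty workspace -> Step 1."""
    for marker, step in rules:
        if marker in existing_folders:
            return step
    return 1
```

Passing the folder set in explicitly keeps the sketch pure; a real implementation would scan the workspace first.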
## Re-Entry Protocol
When the user invokes `/autodev` and work already exists:
1. Read `_docs/_autodev_state.md`
2. Cross-check against `_docs/` folder structure
3. Present Status Summary (render using the banner template in `protocols.md` → "Banner Template", filled in with the active flow's "Status Summary — Step List" fragment)
4. If the detected step has a sub-skill with built-in resumability, the sub-skill handles mid-step recovery
5. Continue execution from detected state
## Session Boundaries
A **session boundary** is a transition that explicitly breaks auto-chain. Which transitions are boundaries is declared **in each flow file's Auto-Chain Rules table** — rows marked `**Session boundary**`. The details live with the steps they apply to; this section defines only the shared mechanism.
**Invariant**: a flow row without the `Session boundary` marker auto-chains unconditionally. Missing marker = missing boundary.
### Orchestrator mechanism at a boundary
1. Update the state file: mark the current step `completed`; set the next step with `status: not_started`; reset `sub_step: {phase: 0, name: awaiting-invocation, detail: ""}`; keep `retry_count: 0`.
2. Present a brief summary of what just finished (tasks produced, batches expected, etc., as relevant to the boundary).
3. Present the shared Choose block (template below) — or a flow-specific override if the flow file supplies one.
4. End the session — do not start the next skill in the same conversation.
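Step 1 of the mechanism can be sketched as a state transition. A minimal sketch: the function name is hypothetical, and because the state file holds only one Current Step block, "mark current completed, then set the next step" collapses into pointing the file at the next step with a reset `sub_step`.

```python
def apply_session_boundary(state: dict, next_step: int, next_name: str) -> dict:
    """Produce the state-file content for the step after a session boundary."""
    new = dict(state)
    new.update(
        step=next_step,
        name=next_name,
        status="not_started",
        sub_step={"phase": 0, "name": "awaiting-invocation", "detail": ""},
        retry_count=0,
    )
    return new  # flow and cycle carry over unchanged
```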
### Shared Choose template
```
══════════════════════════════════════
DECISION REQUIRED: <what just completed> — start <next phase>?
══════════════════════════════════════
A) Start a new conversation for <next phase> (recommended for context freshness)
B) Continue in this conversation (NOT recommended — context may degrade)
Warning: if context fills mid-<next phase>, state will be saved and you will
still be asked to resume in a new conversation — option B only delays that.
══════════════════════════════════════
Recommendation: A — <next phase> is long; fresh context helps
══════════════════════════════════════
```
Individual boundaries MAY override this template with a flow-specific Choose block when the pause has different semantics (e.g., `meta-repo.md` Step 2 Config Review pauses for human review of a config flag, not for context freshness). The flow file is authoritative for any such override.