Merge branch 'dev' into stage
ci/woodpecker/push/build-arm Pipeline was successful

Oleksandr Bezdieniezhnykh
2026-04-22 01:37:06 +03:00
73 changed files with 4494 additions and 1788 deletions
+12 -12
@@ -1,9 +1,9 @@
## How to Use
Type `/autodev` to start or continue the full workflow. The orchestrator detects where your project is and picks up from there.
```
/autodev — start a new project or continue where you left off
```
If you want to run a specific skill directly (without the orchestrator), use the individual commands:
@@ -19,13 +19,13 @@ If you want to run a specific skill directly (without the orchestrator), use the
## How It Works
The autodev is a state machine that persists its state to `_docs/_autodev_state.md`. On every invocation it reads the state file, cross-checks against the `_docs/` folder structure, shows a status summary with context from prior sessions, and continues execution.
```
/autodev invoked
Read _docs/_autodev_state.md → cross-check _docs/ folders
Show status summary (progress, key decisions, last session context)
@@ -37,7 +37,7 @@ Execute current skill (read its SKILL.md, follow its workflow)
Update state file → auto-chain to next skill → loop
```
The state file tracks completed steps, key decisions, blockers, and session context. This makes re-entry across conversations seamless — the autodev knows not just where you are, but what decisions were made and why.
Skills auto-chain without pausing between them. The only pauses are:
- **BLOCKING gates** inside each skill (user must confirm before proceeding)
@@ -49,13 +49,13 @@ A typical project runs in 2-4 conversations:
- Session 3: Implement (may span multiple sessions)
- Session 4: Deploy
Re-entry is seamless: type `/autodev` in a new conversation and the orchestrator reads the state file to pick up exactly where you left off.
## Skill Descriptions
### autodev (meta-orchestrator)
Auto-chaining engine that sequences the full BUILD → SHIP workflow. Persists state to `_docs/_autodev_state.md`, tracks key decisions and session context, and flows through problem → research → plan → decompose → implement → deploy without manual skill invocation. Maximizes work per conversation with seamless cross-session re-entry.
### problem
@@ -136,13 +136,13 @@ Bottom-up codebase documentation. Analyzes existing code from modules through co
7. /retrospective — metrics, trends, improvement actions → _docs/06_metrics/
```
Or just use `/autodev` to run steps 0-5 automatically.
## Available Skills
| Skill | Triggers | Output |
|-------|----------|--------|
| **autodev** | "autodev", "auto", "start", "continue", "what's next" | Orchestrates full workflow |
| **problem** | "problem", "define problem", "new project" | `_docs/00_problem/` |
| **research** | "research", "investigate" | `_docs/01_solution/` |
| **plan** | "plan", "decompose solution" | `_docs/02_document/` |
@@ -170,7 +170,7 @@ Or just use `/autopilot` to run steps 0-5 automatically.
```
_project.md — project-specific config (tracker type, project key, etc.)
_docs/
├── _autodev_state.md — autodev orchestrator state (progress, decisions, session context)
├── 00_problem/ — problem definition, restrictions, AC, input data
├── 00_research/ — intermediate research artifacts
├── 01_solution/ — solution drafts, tech stack, security analysis
+10
@@ -0,0 +1,10 @@
---
description: Rules for installation and provisioning scripts
globs: scripts/**/*.sh
alwaysApply: false
---
# Automation Scripts
- Automate repeatable setup steps in scripts. For dependencies with official package managers (apt, brew, pip, npm), automate installation. For binaries from external URLs, document the download but require user review before execution.
- Use sensible defaults for paths and configuration (e.g. `/opt/` for system-wide tools). Allow overrides via environment variables for users who need non-standard locations.
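As an illustration of the two rules above, a minimal Python sketch of a provisioning helper (the `MYTOOL_PREFIX` variable, the tool name, and the requirements file are hypothetical, not part of this repo):

```python
# Hypothetical provisioning helper: sensible default location, env-var override,
# and automated install only for dependencies from an official package manager.
import os
import subprocess
import sys

# Default to /opt/ for system-wide tools; override via MYTOOL_PREFIX if needed.
INSTALL_PREFIX = os.environ.get("MYTOOL_PREFIX", "/opt/mytool")


def main() -> None:
    os.makedirs(INSTALL_PREFIX, exist_ok=True)
    # pip is an official package manager, so this installation is automated.
    subprocess.run(
        [sys.executable, "-m", "pip", "install", "-r", "requirements.txt"],
        check=True,
    )
    # Binaries fetched from external URLs are documented, not auto-executed.
    print(f"Review and download the vendor binary manually, then place it under {INSTALL_PREFIX}/bin")


if __name__ == "__main__":
    main()
```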
+20 -11
@@ -1,17 +1,17 @@
---
description: "Enforces readable, environment-aware coding standards with scope discipline, meaningful comments, and test verification"
alwaysApply: true
---
# Coding preferences
- Prefer the simplest solution that satisfies all requirements, including maintainability. When in doubt between two approaches, choose the one with fewer moving parts — but never sacrifice correctness, error handling, or readability for brevity.
- Follow the Single Responsibility Principle — a class or method should have one reason to change:
- If a method is hard to name precisely from the caller's perspective, its responsibility is misplaced. Vague names like "candidate", "data", or "item" are a signal — fix the design, not just the name.
- Logic specific to a platform, variant, or environment belongs in the class that owns that variant, not in the general coordinator. Passing a dependency through is preferable to leaking variant-specific concepts into shared code.
- Only use static methods for pure, self-contained computations (constants, simple math, stateless lookups). If a static method involves resource access, side effects, OS interaction, or logic that varies across subclasses or environments — use an instance method or factory class instead. Before implementing a non-trivial static method, ask the user.
- Avoid boilerplate and unnecessary indirection, but never sacrifice readability for brevity.
- Never suppress errors silently — no `2>/dev/null`, empty `catch` blocks, bare `except: pass`, or discarded error returns. These hide the information you need most when something breaks. If an error is truly safe to ignore, log it or comment why.
- Do not add comments that merely narrate what the code does. Comments are appropriate for: non-obvious business rules, workarounds with references to issues/bugs, safety invariants, and public API contracts. Make comments as short and concise as possible. Exception: every test must use the Arrange / Act / Assert pattern with language-appropriate comment syntax (`# Arrange` for Python, `// Arrange` for C#/Rust/JS/TS). Omit any section that is not needed (e.g. if there is no setup, skip Arrange; if act and assert are the same line, keep only Assert)
- Do not add verbose debug/trace logs by default. Log exceptions, security events (auth failures, permission denials), and business-critical state transitions. Add debug-level logging only when asked.
- Do not put code annotations unless it was asked specifically
- Write code that takes into account the different environments: development, production
- Only make changes that are requested, or that you are confident are well understood and directly related to the requested change
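To make the test-comment exception concrete, a minimal pytest sketch of the Arrange / Act / Assert pattern (the `Cart` class is hypothetical):

```python
# Hypothetical class and tests illustrating the Arrange / Act / Assert comment pattern.
class Cart:
    def __init__(self):
        self.items = []

    def add(self, name: str, price: float) -> None:
        self.items.append((name, price))

    def total(self) -> float:
        return sum(price for _, price in self.items)


def test_total_sums_item_prices():
    # Arrange
    cart = Cart()
    cart.add("book", 12.50)
    cart.add("pen", 2.50)

    # Act
    total = cart.total()

    # Assert
    assert total == 15.0


def test_empty_cart_total_is_zero():
    # Assert (no setup needed, act and assert fit on one line)
    assert Cart().total() == 0
```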
@@ -22,16 +22,25 @@ alwaysApply: true
- When a test fails due to a missing dependency, install it — do not fake or stub the module system. For normal packages, add them to the project's dependency file (requirements-test.txt, package.json devDependencies, test csproj, etc.) and install. Only consider stubbing if the dependency is heavy (e.g. hardware-specific SDK, large native toolchain) — and even then, ask the user first before choosing to stub.
- Do not solve environment or infrastructure problems (dependency resolution, import paths, service discovery, connection config) by hardcoding workarounds in source code. Fix them at the environment/configuration level.
- Before writing new infrastructure or workaround code, check how the existing codebase already handles the same concern. Follow established project patterns.
- If a file, class, or function has no remaining usages — delete it. Dead code rots: its dependencies drift, it misleads readers, and it breaks when the code it depends on evolves. However, before deletion verify that the symbol is not used via any of the following. If any applies, do NOT delete — leave it or ASK the user:
- Public API surface exported from the package and potentially consumed outside the workspace (see `workspace-boundary.mdc`)
- Reflection, dependency injection, or service registration (scan DI container registrations, `appsettings.json` / equivalent config, attribute-based discovery, plugin manifests)
- Dynamic dispatch from config/data (YAML/JSON references, string-based class lookups, route tables, command dispatchers)
- Test fixtures used only by currently-skipped tests — temporary skips may become active again
- Cross-repo references — if this workspace is part of a multi-repo system, grep sibling repos for shared contracts before deleting
- **Scope discipline**: focus edits on the task scope. The "scope" is:
- Files the task explicitly names
- Files that define interfaces the task changes
- Files that directly call, implement, or test the changed code
- **Adjacent hygiene is permitted** without asking: fixing imports you caused to break, updating obvious stale references within a file you already modify, deleting code that became dead because of your change.
- **Unrelated issues elsewhere**: do not silently fix them as part of this task. Either note them to the user at end of turn and ASK before expanding scope, or record in `_docs/_process_leftovers/` for later handling.
- Always think about what other methods and areas of code might be affected by the code changes, and surface the list to the user before modifying.
- When you think you are done with changes, run the full test suite. Every failure in tests that cover code you modified or that depend on code you modified is a **blocking gate**. For pre-existing failures in unrelated areas, report them to the user but do not block on them. Never silently ignore or skip a failure without reporting it. On any blocking failure, stop and ask the user to choose one of:
- **Investigate and fix** the failing test or source code
- **Remove the test** if it is obsolete or no longer relevant
- Do not rename any databases, tables, or table columns without confirmation. Avoid such renaming if possible.
- Make sure we don't commit binaries: create and keep .gitignore up to date, and delete binaries after you are done with the task
- Never force-push to main or dev branches
- For new projects, place source code under `src/` (this works for all stacks including .NET). For existing projects, follow the established directory structure. Keep project-level config, tests, and tooling at the repo root.
+14
@@ -23,3 +23,17 @@ globs: [".cursor/**"]
## Security
- All `.cursor/` files must be scanned for hidden Unicode before committing (see cursor-security.mdc)
## Quality Thresholds (canonical reference)
All rules and skills must reference the single source of truth below. Do NOT restate different numeric thresholds in individual rule or skill files.
| Concern | Threshold | Enforcement |
|---------|-----------|-------------|
| Test coverage on business logic | 75% | Aim (warn below); 100% on critical paths |
| Test scenario coverage (vs AC + restrictions) | 75% | Blocking in test-spec Phase 1 and Phase 3 |
| CI coverage gate | 75% | Fail build below |
| Lint errors (Critical/High) | 0 | Blocking pre-commit |
| Code-review auto-fix | Low + Medium (Style/Maint/Perf) + High (Style/Scope) | Critical and Security always escalate |
When a skill or rule needs to cite a threshold, link to this table instead of hardcoding a different number.
+1 -1
@@ -5,7 +5,7 @@ globs: ["**/*.cs", "**/*.csproj", "**/*.sln"]
# .NET / C#
- PascalCase for classes, methods, properties, namespaces; camelCase for locals and parameters; prefix interfaces with `I`
- Use `async`/`await` for I/O-bound operations; the `Async` suffix on method names is optional — follow the project's existing convention
- Use dependency injection via constructor injection; register services in `Program.cs`
- Use linq2db for small projects, EF Core with migrations for big ones; avoid raw SQL unless performance-critical; prevent N+1 with `.Include()` or projection
- Use `Result<T, E>` pattern or custom error types over throwing exceptions for expected failures
+3 -2
@@ -5,6 +5,7 @@ alwaysApply: true
# Git Workflow
- Work on the `dev` branch
- Commit message subject line format: `[TRACKER-ID-1] [TRACKER-ID-2] Summary of changes`
- Subject line must not exceed 72 characters (standard Git convention for the first line). The 72-char limit applies to the subject ONLY, not the full commit message.
- A commit message body is optional. Add one when the subject alone cannot convey the why of the change. Wrap the body at 72 chars per line.
- Do NOT push or merge unless the user explicitly asks you to. Always ask first if there is a need.
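For illustration, a commit message that satisfies the subject and body rules above (tracker IDs and wording are placeholders):

```
[AZ-123] [AZ-124] Add retry logic to tracker sync

Jira writes occasionally fail with transient 502s during replay.
Retry writes up to three times before recording a leftover entry.
```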
+33 -11
@@ -4,21 +4,43 @@ alwaysApply: true
---
# Sound Notification on Human Input
## Sound commands per OS
Detect the OS from user system info or `uname -s`:
- **macOS**: `afplay /System/Library/Sounds/Glass.aiff &`
- **Linux**: `paplay /usr/share/sounds/freedesktop/stereo/bell.oga 2>/dev/null || aplay /usr/share/sounds/freedesktop/stereo/bell.oga 2>/dev/null || echo -e '\a' &`
- **Windows (PowerShell)**: `[System.Media.SystemSounds]::Exclamation.Play()`
## When to play (play exactly once per trigger)
Play the sound when your turn will end in one of these states:
1. You are about to call the AskQuestion tool — sound BEFORE the AskQuestion call
2. Your text ends with a direct question to the user that cannot be answered without their input (e.g., "Which option do you prefer?", "What is the database name?", "Confirm before I push?")
3. You are reporting that you are BLOCKED and cannot continue without user input (missing credentials, conflicting requirements, external approval required)
4. You have just completed a destructive or irreversible action the user asked to review (commit, push, deploy, data migration, file deletion)
## When NOT to play
- You are mid-execution and returning a progress update (the conversation is not stalling)
- You are answering a purely informational or factual question and no follow-up is required
- You have already played the sound once this turn for the same pause point
- Your response only contains text describing what you did or found, with no question, no block, no irreversible action
## "Trivial" definition
A response is trivial (no sound) when ALL of the following are true:
- No explicit question to the user
- No "I am blocked" report
- No destructive/irreversible action that needs review
If any one of those is present, the response is non-trivial — play the sound.
## Ordering
The sound command is a normal Shell tool call. Place it:
- **Immediately before an AskQuestion tool call** in the same message, or
- **As the last Shell call of the turn** if ending with a text-based question, block report, or post-destructive-action review
Do not play the sound as part of routine command execution — only at the pause points listed under "When to play".
+22 -10
@@ -5,7 +5,7 @@ alwaysApply: true
# Agent Meta Rules
## Execution Safety
- Run the full test suite automatically when you believe code changes are complete (as required by coderule.mdc). For other long-running/resource-heavy/security-risky operations (builds, Docker commands, deployments, performance tests), ask the user first — unless explicitly stated in a skill or the user already asked to do so.
## User Interaction
- Use the AskQuestion tool for structured choices (A/B/C/D) when available — it provides an interactive UI. Fall back to plain-text questions if the tool is unavailable.
@@ -33,18 +33,30 @@ When the user reacts negatively to generated code ("WTF", "what the hell", "why
- "Before writing new infrastructure or workaround code, check how the existing codebase already handles the same concern. Follow established project patterns." - "Before writing new infrastructure or workaround code, check how the existing codebase already handles the same concern. Follow established project patterns."
## Debugging Over Contemplation ## Debugging Over Contemplation
Agents cannot measure wall-clock time between turns. Use observable counts from your own transcript instead.
**Trigger: stop speculating and instrument.** When you've formed **3 or more distinct hypotheses** about a bug without confirming any against runtime evidence (logs, stderr, debugger state, actual test failure messages) — stop and add debugging output. Re-reading the same code hoping to "spot it this time" counts as a new hypothesis that still has zero evidence.
Steps:
1. Identify the last known-good boundary (e.g., "request enters handler") and the known-bad result (e.g., "callback never fires").
2. Add targeted `print(..., flush=True)`, `console.error`, or logger statements at each intermediate step to narrow the gap.
3. Run the instrumented code. Read the output. Let evidence drive the next hypothesis — not inference chains.
An instrumented run producing real output beats any amount of "could it be X? but then Y..." reasoning.
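A self-contained Python sketch of those instrumentation steps, assuming a hypothetical handler whose callback never fires:

```python
# Hypothetical bug: the queue silently drops the callback. Flushed prints narrow
# the gap between the last known-good boundary and the known-bad result.
class Queue:
    def submit(self, job, on_done):
        pass  # bug: on_done is never invoked


queue = Queue()


def handle_request(request, callback):
    print(f"[dbg] request entered handler: {request!r}", flush=True)  # last known-good boundary
    job = {"payload": request}
    print(f"[dbg] job built: {job!r}", flush=True)
    queue.submit(job, on_done=callback)
    print("[dbg] job submitted", flush=True)  # this prints, yet the callback never fires,
                                              # so the evidence points at the queue, not the handler


handle_request("GET /health", callback=lambda result: print("[dbg] callback fired", flush=True))
```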
## Long Investigation Retrospective
Trigger a post-mortem when ANY of the following is true (all are observable in your own transcript):
- **10+ tool calls** were used to diagnose a single issue
- **Same file modified 3+ times** without tests going green
- **3+ distinct approaches** attempted before arriving at the fix
- Any phrase like "let me try X instead" appeared **more than twice**
- A fix was eventually found by reading docs/source the agent had dismissed earlier
Post-mortem steps:
1. **Identify the bottleneck**: wrong assumption? missing runtime visibility? incorrect mental model of a framework/language boundary? ignored evidence?
2. **Extract the general lesson**: what category of mistake was this? (e.g., "Python cannot call Cython `cdef` methods", "engine errors silently swallowed", "wrong layer to fix the problem")
3. **Propose a preventive rule**: short, actionable. Present to user for approval.
4. **Write it down**: add approved rule to the appropriate `.mdc` so it applies to future sessions.
+1 -1
@@ -4,7 +4,7 @@ alwaysApply: true
---
# Quality Gates
- After any code edit that changes logic, adds/removes imports, or modifies function signatures, run `ReadLints` on modified files and fix introduced errors
- Before committing, run the project's formatter if one exists (black, rustfmt, prettier, dotnet format)
- Respect existing `.editorconfig`, `.prettierrc`, `pyproject.toml [tool.black]`, or `rustfmt.toml`
- Do not commit code with Critical or High severity lint errors
+1 -1
@@ -4,6 +4,6 @@ alwaysApply: true
---
# Tech Stack
- Prefer a Postgres database, but ask the user
- For new backend projects: use .NET for structured enterprise/API services, Python for data/ML/scripting tasks, Rust for performance-critical components. For existing projects, use the language already established in that project.
- For the frontend, use React with Tailwind CSS (or even plain CSS, if it is a simple project)
- Document the API with OpenAPI
+1 -1
@@ -8,7 +8,7 @@ globs: ["**/*test*", "**/*spec*", "**/*Test*", "**/tests/**", "**/test/**"]
- One assertion per test when practical; name tests descriptively: `MethodName_Scenario_ExpectedResult`
- Test boundary conditions, error paths, and happy paths
- Use mocks only for external dependencies; prefer real implementations for internal code
- Aim for 75%+ coverage on business logic; 100% on critical paths (code paths where a bug would cause data loss, security breaches, financial errors, or system outages — identify from acceptance criteria marked as must-have or from security_approach.md). The 75% threshold is canonical — see `cursor-meta.mdc` Quality Thresholds.
- Integration tests use real database (Postgres testcontainers or dedicated test DB)
- Never use Thread.Sleep or fixed delays in tests; use polling or async waits
- Keep test data factories/builders for reusable test setup
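One way to satisfy the no-fixed-delay rule above, sketched in Python (the `wait_until` helper is illustrative, not an existing project utility):

```python
import threading
import time


def wait_until(condition, timeout=5.0, interval=0.05):
    """Poll `condition` until it returns True or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False


def test_background_flag_is_set():
    # Arrange
    done = threading.Event()
    threading.Thread(target=lambda: (time.sleep(0.1), done.set())).start()

    # Act / Assert: poll for completion instead of a fixed sleep
    assert wait_until(done.is_set), "worker did not finish within the timeout"
```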
+39
@@ -12,3 +12,42 @@ alwaysApply: true
- Project name: AZAION
- All task IDs follow the format `AZ-<number>`
- Issue types: Epic, Story, Task, Bug, Subtask
## Tracker Availability Gate
- If Jira MCP returns **Unauthorized**, **errored**, **connection refused**, or any non-success response: **STOP** tracker operations and notify the user via the Choose A/B/C/D format documented in `.cursor/skills/autodev/protocols.md`.
- The user may choose to:
- **Retry authentication** — preferred; the tracker remains the source of truth.
- **Continue in `tracker: local` mode** — only when the user explicitly accepts this option. In that mode all tasks keep numeric prefixes and a `Tracker: pending` marker is written into each task header. The state file records `tracker: local`. The mode is NOT silent — the user has been asked and has acknowledged the trade-off.
- Do NOT auto-fall-back to `tracker: local` without a user decision. Do not pretend a write succeeded. If the user is unreachable (e.g., non-interactive run), stop and wait.
- When the tracker becomes available again, any `Tracker: pending` tasks should be synced — this is done at the start of the next `/autodev` invocation via the Leftovers Mechanism below.
## Leftovers Mechanism (non-user-input blockers only)
When a **non-user** blocker prevents a tracker write (MCP down, network error, transient failure, ticket linkage recoverable later), record the deferred write in `_docs/_process_leftovers/<YYYY-MM-DD>_<topic>.md` and continue non-tracker work. Each entry must include:
- Timestamp (ISO 8601)
- What was blocked (ticket creation, status transition, comment, link)
- Full payload that would have been written (summary, description, story points, epic, target status) — so the write can be replayed later
- Reason for the blockage (MCP unavailable, auth expired, unknown epic ID pending user clarification, etc.)
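An illustrative leftover entry with those fields (file name, ticket, and payload values are hypothetical):

```
# _docs/_process_leftovers/2026-04-22_jira-status-transition.md

- Timestamp: 2026-04-22T10:14:00+03:00
- Blocked operation: status transition of AZ-142 to "In Review" plus a completion comment
- Payload: issue AZ-142; transition "In Review"; comment "Implementation complete, batch 03 report attached"
- Reason: Jira MCP returned 401 Unauthorized; user has not re-authenticated yet
```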
### Hard gates that CANNOT be deferred to leftovers
Anything requiring user input MUST still block:
- Clarifications about requirements, scope, or priority
- Approval for destructive actions or irreversible changes
- Choice between alternatives (A/B/C decisions)
- Confirmation of assumptions that change task outcome
If a blocker of this kind appears, STOP and ASK — do not write to leftovers.
### Replay obligation
At the start of every `/autodev` invocation, and before any new tracker write in any skill, check `_docs/_process_leftovers/` for pending entries. For each entry:
1. Attempt to replay the deferred write against the tracker
2. If replay succeeds → delete the leftover entry
3. If replay still fails → update the entry's timestamp and reason, continue
4. If the blocker now requires user input (e.g., MCP still down after N retries) → surface to the user
Autodev must not progress past its own step 0 until all leftovers that CAN be replayed have been replayed.
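A rough Python sketch of that replay loop (the `replay`, `needs_user_input`, and `notify_user` callables are assumptions standing in for real tracker and UI calls):

```python
from datetime import datetime, timezone
from pathlib import Path

LEFTOVERS_DIR = Path("_docs/_process_leftovers")


def process_leftovers(replay, needs_user_input, notify_user) -> None:
    """Attempt every pending deferred tracker write before any new tracker work."""
    if not LEFTOVERS_DIR.exists():
        return
    for entry in sorted(LEFTOVERS_DIR.glob("*.md")):
        if needs_user_input(entry):
            notify_user(entry)  # step 4: blocker now needs the user, surface it
            continue
        if replay(entry):
            entry.unlink()  # step 2: replay succeeded, delete the leftover entry
        else:
            # step 3: still blocked, refresh the timestamp and keep the entry
            stamp = datetime.now(timezone.utc).isoformat()
            entry.write_text(entry.read_text() + f"\n- Retried: {stamp}, still blocked\n")
```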
+7
@@ -0,0 +1,7 @@
# Workspace Boundary
- Only modify files within the current repository (workspace root).
- Never write, edit, or delete files in sibling repositories or parent directories outside the workspace.
- When a task requires changes in another repository (e.g., admin API, flights, UI), **document** the required changes in the task's implementation notes or a dedicated cross-repo doc — do not implement them.
- The mock API at `e2e/mocks/mock_api/` may be updated to reflect the expected contract of external services, but this is a test mock — not the real implementation.
- If a task is entirely scoped to another repository, mark it as out-of-scope for this workspace and note the target repository.
+135
@@ -0,0 +1,135 @@
---
name: autodev
description: |
Auto-chaining orchestrator that drives the full BUILD-SHIP workflow from problem gathering through deployment.
Detects current project state from _docs/ folder, resumes from where it left off, and flows through
problem → research → plan → decompose → implement → deploy without manual skill invocation.
Maximizes work per conversation by auto-transitioning between skills.
Trigger phrases:
- "autodev", "auto", "start", "continue"
- "what's next", "where am I", "project status"
category: meta
tags: [orchestrator, workflow, auto-chain, state-machine, meta-skill]
disable-model-invocation: true
---
# Autodev Orchestrator
Auto-chaining execution engine that drives the full BUILD → SHIP workflow. Detects project state from `_docs/`, resumes from where work stopped, and flows through skills automatically. The user invokes `/autodev` once — the engine handles sequencing, transitions, and re-entry.
## File Index
| File | Purpose |
|------|---------|
| `flows/greenfield.md` | Detection rules, step table, and auto-chain rules for new projects |
| `flows/existing-code.md` | Detection rules, step table, and auto-chain rules for existing codebases |
| `flows/meta-repo.md` | Detection rules, step table, and auto-chain rules for meta-repositories (submodule aggregators, workspace monorepos) |
| `state.md` | State file format, rules, re-entry protocol, session boundaries |
| `protocols.md` | User interaction, tracker auth, choice format, error handling, status summary |
**On every invocation**: read `state.md`, `protocols.md`, and the active flow file before executing any logic. You don't need to read flow files for flows you're not in.
## Core Principles
- **Auto-chain**: when a skill completes, immediately start the next one — no pause between skills
- **Only pause at decision points**: BLOCKING gates inside sub-skills are the natural pause points; do not add artificial stops between steps
- **State from disk**: current step is persisted to `_docs/_autodev_state.md` and cross-checked against `_docs/` folder structure
- **Re-entry**: on every invocation, read the state file and cross-check against `_docs/` folders before continuing
- **Delegate, don't duplicate**: read and execute each sub-skill's SKILL.md; never inline their logic here
- **Sound on pause**: follow `.cursor/rules/human-attention-sound.mdc` — play a notification sound before every pause that requires human input (AskQuestion tool preferred for structured choices; fall back to plain text if unavailable)
- **Minimize interruptions**: only ask the user when the decision genuinely cannot be resolved automatically
- **Single project per workspace**: all `_docs/` paths are relative to workspace root; for multi-component systems, each component needs its own Cursor workspace. **Exception**: a meta-repo workspace (git-submodule aggregator or monorepo workspace) uses the `meta-repo` flow and maintains cross-cutting artifacts via `monorepo-*` skills rather than per-component BUILD-SHIP flows.
## Flow Resolution
Determine which flow to use (check in order — first match wins):
1. If `_docs/_autodev_state.md` exists → read the `flow` field and use that flow. (When a greenfield project completes its final cycle, the Done step rewrites `flow: existing-code` in-band so the next invocation enters the feature-cycle loop — see greenfield "Done".)
2. If the workspace is a **meta-repo** → **meta-repo flow**. Detected by: presence of `.gitmodules` with ≥2 submodules, OR `package.json` with `workspaces` field, OR `pnpm-workspace.yaml`, OR `Cargo.toml` with `[workspace]` section, OR `go.work`, OR an ad-hoc structure with multiple top-level component folders each containing their own project manifests. Optional tiebreaker: the workspace has little or no source code of its own at the root (just registry + orchestration files).
3. If workspace has **no source code files** → **greenfield flow**
4. If workspace has source code files **and** `_docs/` does not exist → **existing-code flow**
5. If workspace has source code files **and** `_docs/` exists → **existing-code flow**
After selecting the flow, apply its detection rules (first match wins) to determine the current step.
**Note**: the meta-repo flow uses a different artifact layout — its source of truth is `_docs/_repo-config.yaml`, not `_docs/NN_*/` folders. Other detection rules assume the BUILD-SHIP artifact layout; they don't apply to meta-repos.
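A condensed Python sketch of this resolution order (folder probes only; the state-file parsing and the ad-hoc multi-component check are simplified assumptions):

```python
from pathlib import Path

SOURCE_GLOBS = ("**/*.py", "**/*.cs", "**/*.rs", "**/*.ts")


def read_flow_field(state_file: Path) -> str:
    # Assumes the state file stores a line such as "flow: greenfield".
    for line in state_file.read_text().splitlines():
        if line.strip().startswith("flow:"):
            return line.split(":", 1)[1].strip()
    return "greenfield"


def looks_like_meta_repo(root: Path) -> bool:
    gitmodules = root / ".gitmodules"
    if gitmodules.is_file() and gitmodules.read_text().count("[submodule") >= 2:
        return True
    if (root / "pnpm-workspace.yaml").is_file() or (root / "go.work").is_file():
        return True
    pkg = root / "package.json"
    if pkg.is_file() and '"workspaces"' in pkg.read_text():
        return True
    cargo = root / "Cargo.toml"
    return cargo.is_file() and "[workspace]" in cargo.read_text()


def resolve_flow(root: Path) -> str:
    state = root / "_docs" / "_autodev_state.md"
    if state.is_file():
        return read_flow_field(state)                           # rule 1
    if looks_like_meta_repo(root):
        return "meta-repo"                                      # rule 2
    has_source = any(root.glob(pattern) for pattern in SOURCE_GLOBS)
    return "existing-code" if has_source else "greenfield"      # rules 3-5
```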
## Execution Loop
Every invocation has three phases: **Bootstrap** (runs once), **Resolve** (runs once), **Execute Loop** (runs per step). Exit conditions are explicit.
```
### Bootstrap (once per invocation)
B1. Process leftovers — delegate to `.cursor/rules/tracker.mdc` → Leftovers Mechanism
(authoritative spec: replay rules, escalation, blocker handling).
B2. Surface Recent Lessons — print top 3 entries from `_docs/LESSONS.md` if present; skip silently otherwise.
B3. Read state — `_docs/_autodev_state.md` (if it exists).
B4. Read File Index — `state.md`, `protocols.md`, and the active flow file.
### Resolve (once per invocation, after Bootstrap)
R1. Reconcile state — verify state file against `_docs/` contents; on disagreement, trust the folders
and update the state file (rules: `state.md` → "State File Rules" #4).
After this step, `state.step` / `state.status` are authoritative.
R2. Resolve flow — see §Flow Resolution above.
R3. Resolve current step — when a state file exists, `state.step` drives detection.
When no state file exists, walk the active flow's detection rules in order;
first folder-probe match wins.
R4. Present Status Summary — banner template in `protocols.md` + step-list fragment from the active flow file.
### Execute Loop (per step)
loop:
E1. Delegate to the current skill (see §Skill Delegation below).
E2. On FAILED
→ apply Failure Handling (`protocols.md`): increment retry_count, auto-retry up to 3.
→ if retry_count reaches 3 → set status: failed → EXIT (escalate on next invocation).
E3. On success
→ reset retry_count, update state file (rules: `state.md`).
E4. Re-detect next step from the active flow's detection rules.
E5. If the transition is marked as a session boundary in the flow's Auto-Chain Rules
→ update state, present boundary Choose block, suggest new conversation → EXIT.
E6. If all steps done
→ update state, report completion → EXIT.
E7. Else
→ continue loop (go to E1 with the next skill).
```
## Skill Delegation
For each step, the delegation pattern is:
1. Update state file: set `step` to the autodev step number, status to `in_progress`, set `sub_step` to the sub-skill's current internal phase using the structured `{phase, name, detail}` schema (see `state.md`), reset `retry_count: 0`
2. Announce: "Starting [Skill Name]..."
3. Read the skill file: `.cursor/skills/[name]/SKILL.md`
4. Execute the skill's workflow exactly as written, including all BLOCKING gates, self-verification checklists, save actions, and escalation rules. Update `sub_step.phase`, `sub_step.name`, and optional `sub_step.detail` in state each time the sub-skill advances to a new internal phase.
5. If the skill **fails**: follow Failure Handling in `protocols.md` — increment `retry_count`, auto-retry up to 3 times, then escalate.
6. When complete (success): reset `retry_count: 0`, update state file to the next step with `status: not_started` and `sub_step: {phase: 0, name: awaiting-invocation, detail: ""}`, return to auto-chain rules (from active flow file)
**sub_step read fallback**: when reading `sub_step`, parse the structured form. If parsing fails (legacy free-text value) OR the named phase is not recognized, log a warning and fall back to a folder scan of the sub-skill's artifact directory to infer progress. Do not silently treat a malformed sub_step as phase 0 — that would cause a sub-skill to restart from scratch after each resume.
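A small Python sketch of that fallback, assuming `sub_step` is serialized as `{phase: N, name: some-phase, detail: "..."}` and that the set of recognized phase names comes from the active sub-skill (both are assumptions):

```python
import re


def parse_sub_step(raw: str, known_phases: set, scan_artifact_dir):
    """Return the structured sub_step, or fall back to a folder scan on malformed input."""
    match = re.match(r'\{phase:\s*(\d+),\s*name:\s*([\w-]+),\s*detail:\s*"?(.*?)"?\}', raw.strip())
    if match and match.group(2) in known_phases:
        return {"phase": int(match.group(1)), "name": match.group(2), "detail": match.group(3)}
    # Legacy free-text value or unknown phase name: warn, then infer progress from artifacts
    print(f"warning: unrecognized sub_step {raw!r}, falling back to artifact folder scan")
    return scan_artifact_dir()
```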
Do NOT modify, skip, or abbreviate any part of the sub-skill's workflow. The autodev is a sequencer, not an optimizer.
## State File
The state file (`_docs/_autodev_state.md`) is a minimal pointer — only the current step. See `state.md` for the authoritative template, field semantics, update rules, and worked examples. Do not restate the schema here — `state.md` is the single source of truth.
## Trigger Conditions
This skill activates when the user wants to:
- Start a new project from scratch
- Continue an in-progress project
- Check project status
- Let the AI guide them through the full workflow
**Keywords**: "autodev", "auto", "start", "continue", "what's next", "where am I", "project status"
**Invocation model**: this skill is explicitly user-invoked only (`disable-model-invocation: true` in the front matter). The keywords above aid skill discovery and tooling (other skills / agents can reason about when `/autodev` is appropriate), but the model never auto-fires this skill from a keyword match. The user always types `/autodev`.
**Differentiation**:
- User wants only research → use `/research` directly
- User wants only planning → use `/plan` directly
- User wants to document an existing codebase → use `/document` directly
- User wants the full guided workflow → use `/autodev`
## Flow Reference
See `flows/greenfield.md`, `flows/existing-code.md`, and `flows/meta-repo.md` for step tables, detection rules, auto-chain rules, and each flow's Status Summary step-list fragment. The banner that wraps those fragments lives in `protocols.md` → "Banner Template (authoritative)".
@@ -0,0 +1,406 @@
# Existing Code Workflow
Workflow for projects with an existing codebase. Structurally it has **two phases**:
- **Phase A — One-time baseline setup (Steps 1-8)**: runs exactly once per codebase. Documents the code, produces test specs, makes the code testable, writes and runs the initial test suite, optionally refactors with that safety net.
- **Phase B — Feature cycle (Steps 9-17, loops)**: runs once per new feature. After Step 17 (Retrospective), the flow loops back to Step 9 (New Task) with `state.cycle` incremented.
A first-time run executes Phase A then Phase B; every subsequent invocation re-enters Phase B.
## Step Reference Table
### Phase A — One-time baseline setup
| Step | Name | Sub-Skill | Internal SubSteps |
|------|------|-----------|-------------------|
| 1 | Document | document/SKILL.md | Steps 1-8 |
| 2 | Architecture Baseline Scan | code-review/SKILL.md (baseline mode) | Phase 1 + Phase 7 |
| 3 | Test Spec | test-spec/SKILL.md | Phases 1-4 |
| 4 | Code Testability Revision | refactor/SKILL.md (guided mode) | Phases 0-7 (conditional) |
| 5 | Decompose Tests | decompose/SKILL.md (tests-only) | Step 1t + Step 3 + Step 4 |
| 6 | Implement Tests | implement/SKILL.md | (batch-driven, no fixed sub-steps) |
| 7 | Run Tests | test-run/SKILL.md | Steps 1-4 |
| 8 | Refactor | refactor/SKILL.md | Phases 0-7 (optional) |
### Phase B — Feature cycle (loops back to Step 9 after Step 17)
| Step | Name | Sub-Skill | Internal SubSteps |
|------|------|-----------|-------------------|
| 9 | New Task | new-task/SKILL.md | Steps 1-8 (loop) |
| 10 | Implement | implement/SKILL.md | (batch-driven, no fixed sub-steps) |
| 11 | Run Tests | test-run/SKILL.md | Steps 1-4 |
| 12 | Test-Spec Sync | test-spec/SKILL.md (cycle-update mode) | Phase 2 + Phase 3 (scoped) |
| 13 | Update Docs | document/SKILL.md (task mode) | Task Steps 0-5 |
| 14 | Security Audit | security/SKILL.md | Phases 1-5 (optional) |
| 15 | Performance Test | test-run/SKILL.md (perf mode) | Steps 1-5 (optional) |
| 16 | Deploy | deploy/SKILL.md | Steps 1-7 |
| 17 | Retrospective | retrospective/SKILL.md (cycle-end mode) | Steps 1-4 |
After Step 17, the feature cycle completes and the flow loops back to Step 9 with `state.cycle + 1` — see "Re-Entry After Completion" below.
## Detection Rules
**Resolution**: when a state file exists, `state.step` + `state.status` drive detection and the conditions below are not consulted. When no state file exists (cold start), walk the rules in order — first folder-probe match wins. Steps without a folder probe are state-driven only; they can only be reached by auto-chain from a prior step. Cycle-scoped steps (Step 10 onward) always read `state.cycle` to disambiguate current vs. prior cycle artifacts.
---
### Phase A — One-time baseline setup (Steps 1-8)
**Step 1 — Document**
Condition: `_docs/` does not exist AND the workspace contains source code files (e.g., `*.py`, `*.cs`, `*.rs`, `*.ts`, `src/`, `Cargo.toml`, `*.csproj`, `package.json`)
Action: An existing codebase without documentation was detected. Read and execute `.cursor/skills/document/SKILL.md`. After the document skill completes, re-detect state (the produced `_docs/` artifacts will place the project at Step 2 or later).
The document skill's Step 2.5 produces `_docs/02_document/module-layout.md`, which is required by every downstream step that assigns file ownership (`/implement` Step 4, `/code-review` Phase 7, `/refactor` discovery). If this file is missing after Step 1 completes (e.g., a pre-existing `_docs/` dir predates the 2.5 addition), re-invoke `/document` in resume mode — it will pick up at Step 2.5.
---
**Step 2 — Architecture Baseline Scan**
Condition: `_docs/02_document/FINAL_report.md` exists AND `_docs/02_document/architecture.md` exists AND `_docs/02_document/architecture_compliance_baseline.md` does not exist.
Action: Invoke `.cursor/skills/code-review/SKILL.md` in **baseline mode** (Phase 1 + Phase 7 only) against the full existing codebase. Phase 7 produces a structural map of the code vs. the just-documented `architecture.md`. Save the output to `_docs/02_document/architecture_compliance_baseline.md`.
Rationale: existing codebases often have pre-existing architecture violations (cycles, cross-component private imports, duplicate logic). Catching them here, before the Testability Revision (Step 4), gives the user a chance to fold structural fixes into the refactor scope.
After completion, if the baseline report contains **High or Critical** Architecture findings:
- Append them to the testability `list-of-changes.md` input in Step 4 (so testability refactor can address the most disruptive ones along with testability fixes), OR
- Surface them to the user via Choose format to defer to Step 8 (optional Refactor).
If the baseline report is clean (no High/Critical findings), auto-chain directly to Step 3.
---
**Step 3 — Test Spec**
Condition (folder fallback): `_docs/02_document/FINAL_report.md` exists AND workspace contains source code files AND `_docs/02_document/tests/traceability-matrix.md` does not exist.
State-driven: reached by auto-chain from Step 2.
Action: Read and execute `.cursor/skills/test-spec/SKILL.md`
This step applies when the codebase was documented via the `/document` skill. Test specifications must be produced before refactoring or further development.
---
**Step 4 — Code Testability Revision**
Condition (folder fallback): `_docs/02_document/tests/traceability-matrix.md` exists AND no test tasks exist yet in `_docs/02_tasks/todo/`.
State-driven: reached by auto-chain from Step 3.
**Purpose**: enable tests to run at all. Without this step, hardcoded URLs, file paths, credentials, or global singletons can prevent the test suite from exercising the code against a controlled environment. The test authors need a testable surface before they can write tests that mean anything.
**Scope — MINIMAL, SURGICAL fixes**: this is not a profound refactor. It is the smallest set of changes (sometimes temporary hacks) required to make code runnable under tests. "Smallest" beats "elegant" here — deeper structural improvements belong in Step 8 (Refactor), not this step.
**Allowed changes** in this phase:
- Replace hardcoded URLs / file paths / credentials / magic numbers with env vars or constructor arguments.
- Extract narrow interfaces for components that need stubbing in tests.
- Add optional constructor parameters for dependency injection; default to the existing hardcoded behavior so callers do not break.
- Wrap global singletons in thin accessors that tests can override (thread-local / context var / setter gate).
- Split a huge function ONLY when necessary to stub one of its collaborators — do not split for clarity alone.
**NOT allowed** in this phase (defer to Step 8 Refactor):
- Renaming public APIs (breaks consumers without a safety net).
- Moving code between files unless strictly required for isolation.
- Changing algorithms or business logic.
- Restructuring module boundaries or rewriting layers.
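As an example of the third allowed change above (an optional constructor parameter that defaults to the existing behavior), a minimal before/after sketch in Python; the class and URL are hypothetical:

```python
import urllib.request


# Before: hardcoded dependency, impossible to exercise against a test double.
class ReportUploaderBefore:
    def upload(self, payload: bytes) -> None:
        urllib.request.urlopen("https://reports.example.com/ingest", data=payload)


# After: optional constructor parameter whose default preserves the old behavior,
# so existing callers keep working while tests can inject a stub transport.
class ReportUploader:
    def __init__(self, transport=None):
        self._transport = transport or (
            lambda payload: urllib.request.urlopen("https://reports.example.com/ingest", data=payload)
        )

    def upload(self, payload: bytes) -> None:
        self._transport(payload)


# In a test: ReportUploader(transport=recorded.append) needs no network access at all.
```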
**Safety**: Phase 3 (Safety Net) of the refactor skill is skipped here **by design** — no tests exist yet to form the safety net. Compensating controls:
- Every change is bounded by the allowed/not-allowed lists above.
- `list-of-changes.md` must be reviewed by the user BEFORE execution (refactor skill enforces this gate).
- After execution, the refactor skill produces `RUN_DIR/testability_changes_summary.md` — a plain-language list of every applied change and why. Present this to the user before auto-chaining to Step 5.
Action: Analyze the codebase against the test specs to determine whether the code can be tested as-is.
1. Read `_docs/02_document/tests/traceability-matrix.md` and all test scenario files in `_docs/02_document/tests/`.
2. Read `_docs/02_document/architecture_compliance_baseline.md` (produced in Step 2). If it contains High/Critical Architecture findings that overlap with testability issues, consider including the lightest structural fixes inline; leave the rest for Step 8.
3. For each test scenario, check whether the code under test can be exercised in isolation. Look for:
- Hardcoded file paths or directory references
- Hardcoded configuration values (URLs, credentials, magic numbers)
- Global mutable state that cannot be overridden
- Tight coupling to external services without abstraction
- Missing dependency injection or non-configurable parameters
- Direct file system operations without path configurability
- Inline construction of heavy dependencies (models, clients)
4. If ALL scenarios are testable as-is:
- Mark Step 4 as `completed` with outcome "Code is testable — no changes needed"
- Auto-chain to Step 5 (Decompose Tests)
5. If testability issues are found:
- Create `_docs/04_refactoring/01-testability-refactoring/`
- Write `list-of-changes.md` in that directory using the refactor skill template (`.cursor/skills/refactor/templates/list-of-changes.md`), with:
- **Mode**: `guided`
- **Source**: `autodev-testability-analysis`
- One change entry per testability issue found (change ID, file paths, problem, proposed change, risk, dependencies). Each entry must fit the allowed-changes list above; reject entries that drift into full refactor territory and log them under "Deferred to Step 8 Refactor" instead.
- Invoke the refactor skill in **guided mode**: read and execute `.cursor/skills/refactor/SKILL.md` with the `list-of-changes.md` as input
- The refactor skill will create RUN_DIR (`01-testability-refactoring`), create tasks in `_docs/02_tasks/todo/`, delegate to implement skill, and verify results
- Phase 3 (Safety Net) is automatically skipped by the refactor skill for testability runs
- After execution, the refactor skill produces `RUN_DIR/testability_changes_summary.md`. Surface this summary to the user via the Choose format (accept / request follow-up) before auto-chaining.
- Mark Step 4 as `completed`
- Auto-chain to Step 5 (Decompose Tests)
---
**Step 5 — Decompose Tests**
Condition (folder fallback): `_docs/02_document/tests/traceability-matrix.md` exists AND workspace contains source code files AND (`_docs/02_tasks/todo/` does not exist or has no test task files).
State-driven: reached by auto-chain from Step 4 (completed or skipped).
Action: Read and execute `.cursor/skills/decompose/SKILL.md` in **tests-only mode** (pass `_docs/02_document/tests/` as input). The decompose skill will:
1. Run Step 1t (test infrastructure bootstrap)
2. Run Step 3 (blackbox test task decomposition)
3. Run Step 4 (cross-verification against test coverage)
If `_docs/02_tasks/` subfolders have some task files already (e.g., refactoring tasks from Step 4), the decompose skill's resumability handles it — it appends test tasks alongside existing tasks.
---
**Step 6 — Implement Tests**
Condition (folder fallback): `_docs/02_tasks/todo/` contains task files AND `_dependencies_table.md` exists AND `_docs/03_implementation/implementation_report_tests.md` does not exist.
State-driven: reached by auto-chain from Step 5.
Action: Read and execute `.cursor/skills/implement/SKILL.md`
The implement skill reads test tasks from `_docs/02_tasks/todo/` and implements them.
If `_docs/03_implementation/` has batch reports, the implement skill detects completed tasks and continues.
---
**Step 7 — Run Tests**
Condition (folder fallback): `_docs/03_implementation/implementation_report_tests.md` exists.
State-driven: reached by auto-chain from Step 6.
Action: Read and execute `.cursor/skills/test-run/SKILL.md`
Verifies the implemented test suite passes before proceeding to refactoring. The tests form the safety net for all subsequent code changes.
---
**Step 8 — Refactor (optional)**
State-driven: reached by auto-chain from Step 7. (Sanity check: no `_docs/04_refactoring/` run folder should contain a `FINAL_report.md` for a non-testability run when entering this step for the first time.)
Action: Present using Choose format:
```
══════════════════════════════════════
DECISION REQUIRED: Refactor codebase before adding new features?
══════════════════════════════════════
A) Run refactoring (recommended if code quality issues were noted during documentation)
B) Skip — proceed directly to New Task
══════════════════════════════════════
Recommendation: [A or B — base on whether documentation
flagged significant code smells, coupling issues, or
technical debt worth addressing before new development]
══════════════════════════════════════
```
- If user picks A → Read and execute `.cursor/skills/refactor/SKILL.md` in automatic mode. The refactor skill creates a new run folder in `_docs/04_refactoring/` (e.g., `02-coupling-refactoring`), runs the full method using the implemented tests as a safety net. After completion, auto-chain to Step 9 (New Task).
- If user picks B → Mark Step 8 as `skipped` in the state file, auto-chain to Step 9 (New Task).
---
### Phase B — Feature cycle (Steps 9-17, loops)
**Step 9 — New Task**
State-driven: reached by auto-chain from Step 8 (completed or skipped). This is also the re-entry point after a completed cycle — see "Re-Entry After Completion" below.
Action: Read and execute `.cursor/skills/new-task/SKILL.md`
The new-task skill interactively guides the user through defining new functionality. It loops until the user is done adding tasks. New task files are written to `_docs/02_tasks/todo/`.
---
**Step 10 — Implement**
State-driven: reached by auto-chain from Step 9 in the CURRENT cycle (matching `state.cycle`). Detection is purely state-driven — prior cycles will have left `implementation_report_{feature_slug}_cycle{N-1}.md` artifacts that must not block new cycles.
Action: Read and execute `.cursor/skills/implement/SKILL.md`
The implement skill reads the new tasks from `_docs/02_tasks/todo/` and implements them. Tasks already implemented in Step 6 or prior cycles are skipped (completed tasks have been moved to `done/`).
**Implementation report naming**: the final report for this cycle must be named `implementation_report_{feature_slug}_cycle{N}.md` where `{N}` is `state.cycle`. Batch reports are named `batch_{NN}_cycle{M}_report.md` so the cycle counter survives folder scans.
If `_docs/03_implementation/` has batch reports from the current cycle, the implement skill detects completed tasks and continues.
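For example, with `state.cycle = 2` and a hypothetical feature slug `user-auth`, the cycle's artifacts would be named:
```
_docs/03_implementation/batch_01_cycle2_report.md
_docs/03_implementation/batch_02_cycle2_report.md
_docs/03_implementation/implementation_report_user-auth_cycle2.md
```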
---
**Step 11 — Run Tests**
State-driven: reached by auto-chain from Step 10.
Action: Read and execute `.cursor/skills/test-run/SKILL.md`
---
**Step 12 — Test-Spec Sync**
State-driven: reached by auto-chain from Step 11. Requires `_docs/02_document/tests/traceability-matrix.md` to exist — if missing, mark Step 12 `skipped` (see Action below).
Action: Read and execute `.cursor/skills/test-spec/SKILL.md` in **cycle-update mode**. Pass the cycle's completed task specs (files in `_docs/02_tasks/done/` moved during this cycle) and the implementation report `_docs/03_implementation/implementation_report_{feature_slug}_cycle{N}.md` as inputs.
The skill appends new ACs, scenarios, and NFRs to the existing test-spec files without rewriting unaffected sections. If `traceability-matrix.md` is missing (e.g., cycle added after a greenfield-only project), mark Step 12 as `skipped` — the next `/test-spec` full run will regenerate it.
After completion, auto-chain to Step 13 (Update Docs).
---
**Step 13 — Update Docs**
State-driven: reached by auto-chain from Step 12 (completed or skipped). Requires `_docs/02_document/` to contain existing documentation — if missing, mark Step 13 `skipped` (see Action below).
Action: Read and execute `.cursor/skills/document/SKILL.md` in **Task mode**. Pass all task spec files from `_docs/02_tasks/done/` that were implemented in the current cycle (i.e., tasks moved to `done/` during Steps 9-10 of this cycle).
The document skill in Task mode:
1. Reads each task spec to identify changed source files
2. Updates affected module docs, component docs, and system-level docs
3. Does NOT redo full discovery, verification, or problem extraction
If `_docs/02_document/` does not contain existing docs (e.g., documentation step was skipped), mark Step 13 as `skipped`.
After completion, auto-chain to Step 14 (Security Audit).
---
**Step 14 — Security Audit (optional)**
State-driven: reached by auto-chain from Step 13 (completed or skipped).
Action: Apply the **Optional Skill Gate** (`protocols.md` → "Optional Skill Gate") with:
- question: `Run security audit before deploy?`
- option-a-label: `Run security audit (recommended for production deployments)`
- option-b-label: `Skip — proceed directly to deploy`
- recommendation: `A — catches vulnerabilities before production`
- target-skill: `.cursor/skills/security/SKILL.md`
- next-step: Step 15 (Performance Test)
---
**Step 15 — Performance Test (optional)**
State-driven: reached by auto-chain from Step 14 (completed or skipped).
Action: Apply the **Optional Skill Gate** (`protocols.md` → "Optional Skill Gate") with:
- question: `Run performance/load tests before deploy?`
- option-a-label: `Run performance tests (recommended for latency-sensitive or high-load systems)`
- option-b-label: `Skip — proceed directly to deploy`
- recommendation: `A or B — base on whether acceptance criteria include latency, throughput, or load requirements`
- target-skill: `.cursor/skills/test-run/SKILL.md` in **perf mode** (the skill handles runner detection, threshold comparison, and its own A/B/C gate on threshold failures)
- next-step: Step 16 (Deploy)
---
**Step 16 — Deploy**
State-driven: reached by auto-chain from Step 15 (completed or skipped).
Action: Read and execute `.cursor/skills/deploy/SKILL.md`.
After the deploy skill completes successfully, mark Step 16 as `completed` and auto-chain to Step 17 (Retrospective).
---
**Step 17 — Retrospective**
State-driven: reached by auto-chain from Step 16, for the current `state.cycle`.
Action: Read and execute `.cursor/skills/retrospective/SKILL.md` in **cycle-end mode**. Pass cycle context (`cycle: state.cycle`) so the retro report and LESSONS.md entries record which feature cycle they came from.
After retrospective completes, mark Step 17 as `completed` and enter "Re-Entry After Completion" evaluation.
---
**Re-Entry After Completion**
State-driven: `state.step == done` OR Step 17 (Retrospective) is completed for `state.cycle`.
Action: The project completed a full cycle. Print the status banner and automatically loop back to New Task — do NOT ask the user for confirmation:
```
══════════════════════════════════════
PROJECT CYCLE COMPLETE
══════════════════════════════════════
The previous cycle finished successfully.
Starting new feature cycle…
══════════════════════════════════════
```
Set `step: 9`, `status: not_started`, and **increment `cycle`** (`cycle: state.cycle + 1`) in the state file, then auto-chain to Step 9 (New Task). Reset `sub_step` to `phase: 0, name: awaiting-invocation, detail: ""` and `retry_count: 0`.
Note: the loop (Steps 9 → 17 → 9) ensures every feature cycle includes: New Task → Implement → Run Tests → Test-Spec Sync → Update Docs → Security → Performance → Deploy → Retrospective.
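For example, if cycle 1 just finished, the rewritten state file reads:
```
flow: existing-code
step: 9
name: New Task
status: not_started
sub_step:
  phase: 0
  name: awaiting-invocation
  detail: ""
retry_count: 0
cycle: 2
```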
## Auto-Chain Rules
### Phase A — One-time baseline setup
| Completed Step | Next Action |
|---------------|-------------|
| Document (1) | Auto-chain → Architecture Baseline Scan (2) |
| Architecture Baseline Scan (2) | Auto-chain → Test Spec (3). If baseline has High/Critical Architecture findings, surface them as inputs to Step 4 (testability) or defer to Step 8 (refactor). |
| Test Spec (3) | Auto-chain → Code Testability Revision (4) |
| Code Testability Revision (4) | Auto-chain → Decompose Tests (5) |
| Decompose Tests (5) | **Session boundary** — suggest new conversation before Implement Tests |
| Implement Tests (6) | Auto-chain → Run Tests (7) |
| Run Tests (7, all pass) | Auto-chain → Refactor choice (8) |
| Refactor (8, done or skipped) | Auto-chain → New Task (9) — enters Phase B |
### Phase B — Feature cycle (loops)
| Completed Step | Next Action |
|---------------|-------------|
| New Task (9) | **Session boundary** — suggest new conversation before Implement |
| Implement (10) | Auto-chain → Run Tests (11) |
| Run Tests (11, all pass) | Auto-chain → Test-Spec Sync (12) |
| Test-Spec Sync (12, done or skipped) | Auto-chain → Update Docs (13) |
| Update Docs (13) | Auto-chain → Security Audit choice (14) |
| Security Audit (14, done or skipped) | Auto-chain → Performance Test choice (15) |
| Performance Test (15, done or skipped) | Auto-chain → Deploy (16) |
| Deploy (16) | Auto-chain → Retrospective (17) |
| Retrospective (17) | **Cycle complete** — loop back to New Task (9) with incremented cycle counter |
## Status Summary — Step List
Flow name: `existing-code`. Render using the banner template in `protocols.md` → "Banner Template (authoritative)".
Flow-specific slot values:
- `<header-suffix>`: ` — Cycle <N>` when `state.cycle > 1`; otherwise empty.
- `<current-suffix>`: ` (cycle <N>)` when `state.cycle > 1`; otherwise empty.
- `<footer-extras>`: empty.
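For example, with `state.cycle = 2` the affected banner lines render as (the step shown is illustrative; omitted lines are unchanged):
```
AUTODEV STATUS (existing-code) — Cycle 2
...
Current: Step 10 — Implement (cycle 2)
```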
**Phase A — One-time baseline setup**
| # | Step Name | Extra state tokens (beyond the shared set) |
|---|-----------------------------|--------------------------------------------|
| 1 | Document | — |
| 2 | Architecture Baseline | — |
| 3 | Test Spec | — |
| 4 | Code Testability Revision | — |
| 5 | Decompose Tests | `DONE (N tasks)` |
| 6 | Implement Tests | `IN PROGRESS (batch M)` |
| 7 | Run Tests | `DONE (N passed, M failed)` |
| 8 | Refactor | `IN PROGRESS (phase N)` |
**Phase B — Feature cycle (loops)**
| # | Step Name | Extra state tokens (beyond the shared set) |
|---|-----------------------------|--------------------------------------------|
| 9 | New Task | `DONE (N tasks)` |
| 10 | Implement | `IN PROGRESS (batch M of ~N)` |
| 11 | Run Tests | `DONE (N passed, M failed)` |
| 12 | Test-Spec Sync | — |
| 13 | Update Docs | — |
| 14 | Security Audit | — |
| 15 | Performance Test | — |
| 16 | Deploy | — |
| 17 | Retrospective | — |
All rows accept the shared state tokens (`DONE`, `IN PROGRESS`, `NOT STARTED`, `FAILED (retry N/3)`); rows 2, 4, 8, 12, 13, 14, 15 additionally accept `SKIPPED`.
Row rendering format (renders with a phase separator between Step 8 and Step 9):
```
── Phase A: One-time baseline setup ──
Step 1 Document [<state token>]
Step 2 Architecture Baseline [<state token>]
Step 3 Test Spec [<state token>]
Step 4 Code Testability Rev. [<state token>]
Step 5 Decompose Tests [<state token>]
Step 6 Implement Tests [<state token>]
Step 7 Run Tests [<state token>]
Step 8 Refactor [<state token>]
── Phase B: Feature cycle (loops) ──
Step 9 New Task [<state token>]
Step 10 Implement [<state token>]
Step 11 Run Tests [<state token>]
Step 12 Test-Spec Sync [<state token>]
Step 13 Update Docs [<state token>]
Step 14 Security Audit [<state token>]
Step 15 Performance Test [<state token>]
Step 16 Deploy [<state token>]
Step 17 Retrospective [<state token>]
```
+237
View File
@@ -0,0 +1,237 @@
# Greenfield Workflow
Workflow for new projects built from scratch. Flows linearly: Problem → Research → Plan → UI Design (if applicable) → Decompose → Implement → Run Tests → Security Audit (optional) → Performance Test (optional) → Deploy → Retrospective.
## Step Reference Table
| Step | Name | Sub-Skill | Internal SubSteps |
|------|------|-----------|-------------------|
| 1 | Problem | problem/SKILL.md | Phase 1-4 |
| 2 | Research | research/SKILL.md | Mode A: Phase 1-4 · Mode B: Step 0-8 |
| 3 | Plan | plan/SKILL.md | Step 1-6 + Final |
| 4 | UI Design | ui-design/SKILL.md | Phase 0-8 (conditional — UI projects only) |
| 5 | Decompose | decompose/SKILL.md | Step 1-4 |
| 6 | Implement | implement/SKILL.md | (batch-driven, no fixed sub-steps) |
| 7 | Run Tests | test-run/SKILL.md | Steps 1-4 |
| 8 | Security Audit | security/SKILL.md | Phase 1-5 (optional) |
| 9 | Performance Test | test-run/SKILL.md (perf mode) | Steps 1-5 (optional) |
| 10 | Deploy | deploy/SKILL.md | Step 1-7 |
| 11 | Retrospective | retrospective/SKILL.md (cycle-end mode) | Steps 1-4 |
## Detection Rules
**Resolution**: when a state file exists, `state.step` + `state.status` drive detection and the conditions below are not consulted. When no state file exists (cold start), walk the rules in order — first folder-probe match wins. Steps without a folder probe are state-driven only; they can only be reached by auto-chain from a prior step.
---
**Step 1 — Problem Gathering**
Condition: `_docs/00_problem/` does not exist, OR any of these are missing/empty:
- `problem.md`
- `restrictions.md`
- `acceptance_criteria.md`
- `input_data/` (must contain at least one file)
Action: Read and execute `.cursor/skills/problem/SKILL.md`
---
**Step 2 — Research (Initial)**
Condition: `_docs/00_problem/` is complete AND `_docs/01_solution/` has no `solution_draft*.md` files
Action: Read and execute `.cursor/skills/research/SKILL.md` (will auto-detect Mode A)
---
**Research Decision** (inline gate between Step 2 and Step 3)
Condition: `_docs/01_solution/` contains `solution_draft*.md` files AND `_docs/01_solution/solution.md` does not exist AND `_docs/02_document/architecture.md` does not exist
Action: Present the current research state to the user:
- How many solution drafts exist
- Whether tech_stack.md and security_analysis.md exist
- One-line summary from the latest draft
Then present using the **Choose format**:
```
══════════════════════════════════════
DECISION REQUIRED: Research complete — next action?
══════════════════════════════════════
A) Run another research round (Mode B assessment)
B) Proceed to planning with current draft
══════════════════════════════════════
Recommendation: [A or B] — [reason based on draft quality]
══════════════════════════════════════
```
- If user picks A → Read and execute `.cursor/skills/research/SKILL.md` (will auto-detect Mode B)
- If user picks B → auto-chain to Step 3 (Plan)
---
**Step 3 — Plan**
Condition: `_docs/01_solution/` has `solution_draft*.md` files AND `_docs/02_document/architecture.md` does not exist
Action:
1. The plan skill's Prereq 2 will rename the latest draft to `solution.md` — this is handled by the plan skill itself
2. Read and execute `.cursor/skills/plan/SKILL.md`
If `_docs/02_document/` exists but is incomplete (has some artifacts but no `FINAL_report.md`), the plan skill's built-in resumability handles it.
---
**Step 4 — UI Design (conditional)**
Condition (folder fallback): `_docs/02_document/architecture.md` exists AND `_docs/02_tasks/todo/` does not exist or has no task files.
State-driven: reached by auto-chain from Step 3.
Action: Read and execute `.cursor/skills/ui-design/SKILL.md`. The skill runs its own **Applicability Check**, which handles UI project detection and the user's A/B choice. It returns one of:
- `outcome: completed` → mark Step 4 as `completed`, auto-chain to Step 5 (Decompose).
- `outcome: skipped, reason: not-a-ui-project` → mark Step 4 as `skipped`, auto-chain to Step 5.
- `outcome: skipped, reason: user-declined` → mark Step 4 as `skipped`, auto-chain to Step 5.
The autodev no longer inlines UI detection heuristics — they live in `ui-design/SKILL.md` under "Applicability Check".
---
**Step 5 — Decompose**
Condition: `_docs/02_document/` contains `architecture.md` AND `_docs/02_document/components/` has at least one component AND `_docs/02_tasks/todo/` does not exist or has no task files
Action: Read and execute `.cursor/skills/decompose/SKILL.md`
If `_docs/02_tasks/` subfolders have some task files already, the decompose skill's resumability handles it.
---
**Step 6 — Implement**
Condition: `_docs/02_tasks/todo/` contains task files AND `_dependencies_table.md` exists AND `_docs/03_implementation/` does not contain any `implementation_report_*.md` file
Action: Read and execute `.cursor/skills/implement/SKILL.md`
If `_docs/03_implementation/` has batch reports, the implement skill detects completed tasks and continues. The FINAL report filename is context-dependent — see implement skill documentation for naming convention.
---
**Step 7 — Run Tests**
Condition (folder fallback): `_docs/03_implementation/` contains an `implementation_report_*.md` file.
State-driven: reached by auto-chain from Step 6.
Action: Read and execute `.cursor/skills/test-run/SKILL.md`
---
**Step 8 — Security Audit (optional)**
State-driven: reached by auto-chain from Step 7.
Action: Apply the **Optional Skill Gate** (`protocols.md` → "Optional Skill Gate") with:
- question: `Run security audit before deploy?`
- option-a-label: `Run security audit (recommended for production deployments)`
- option-b-label: `Skip — proceed directly to deploy`
- recommendation: `A — catches vulnerabilities before production`
- target-skill: `.cursor/skills/security/SKILL.md`
- next-step: Step 9 (Performance Test)
---
**Step 9 — Performance Test (optional)**
State-driven: reached by auto-chain from Step 8.
Action: Apply the **Optional Skill Gate** (`protocols.md` → "Optional Skill Gate") with:
- question: `Run performance/load tests before deploy?`
- option-a-label: `Run performance tests (recommended for latency-sensitive or high-load systems)`
- option-b-label: `Skip — proceed directly to deploy`
- recommendation: `A or B — base on whether acceptance criteria include latency, throughput, or load requirements`
- target-skill: `.cursor/skills/test-run/SKILL.md` in **perf mode** (the skill handles runner detection, threshold comparison, and its own A/B/C gate on threshold failures)
- next-step: Step 10 (Deploy)
---
**Step 10 — Deploy**
State-driven: reached by auto-chain from Step 9 (after Step 9 is completed or skipped).
Action: Read and execute `.cursor/skills/deploy/SKILL.md`.
After the deploy skill completes successfully, mark Step 10 as `completed` and auto-chain to Step 11 (Retrospective).
---
**Step 11 — Retrospective**
State-driven: reached by auto-chain from Step 10.
Action: Read and execute `.cursor/skills/retrospective/SKILL.md` in **cycle-end mode**. This closes the cycle's feedback loop by folding metrics into `_docs/06_metrics/retro_<date>.md` and appending the top-3 lessons to `_docs/LESSONS.md`.
After retrospective completes, mark Step 11 as `completed` and enter "Done" evaluation.
---
**Done**
State-driven: reached by auto-chain from Step 11. (Sanity check: `_docs/04_deploy/` should contain all expected artifacts — containerization.md, ci_cd_pipeline.md, environment_strategy.md, observability.md, deployment_procedures.md, deploy_scripts.md.)
Action: Report project completion with summary. Then **rewrite the state file** so the next `/autodev` invocation enters the feature-cycle loop in the existing-code flow:
```
flow: existing-code
step: 9
name: New Task
status: not_started
sub_step:
phase: 0
name: awaiting-invocation
detail: ""
retry_count: 0
cycle: 1
```
On the next invocation, Flow Resolution rule 1 reads `flow: existing-code` and re-entry flows directly into existing-code Step 9 (New Task).
## Auto-Chain Rules
| Completed Step | Next Action |
|---------------|-------------|
| Problem (1) | Auto-chain → Research (2) |
| Research (2) | Auto-chain → Research Decision (ask user: another round or proceed?) |
| Research Decision → proceed | Auto-chain → Plan (3) |
| Plan (3) | Auto-chain → UI Design detection (4) |
| UI Design (4, done or skipped) | Auto-chain → Decompose (5) |
| Decompose (5) | **Session boundary** — suggest new conversation before Implement |
| Implement (6) | Auto-chain → Run Tests (7) |
| Run Tests (7, all pass) | Auto-chain → Security Audit choice (8) |
| Security Audit (8, done or skipped) | Auto-chain → Performance Test choice (9) |
| Performance Test (9, done or skipped) | Auto-chain → Deploy (10) |
| Deploy (10) | Auto-chain → Retrospective (11) |
| Retrospective (11) | Report completion; rewrite state to existing-code flow, step 9 |
## Status Summary — Step List
Flow name: `greenfield`. Render using the banner template in `protocols.md` → "Banner Template (authoritative)". No header-suffix, current-suffix, or footer-extras — all empty for this flow.
| # | Step Name | Extra state tokens (beyond the shared set) |
|---|--------------------|--------------------------------------------|
| 1 | Problem | — |
| 2 | Research | `DONE (N drafts)` |
| 3 | Plan | — |
| 4 | UI Design | — |
| 5 | Decompose | `DONE (N tasks)` |
| 6 | Implement | `IN PROGRESS (batch M of ~N)` |
| 7 | Run Tests | `DONE (N passed, M failed)` |
| 8 | Security Audit | — |
| 9 | Performance Test | — |
| 10 | Deploy | — |
| 11 | Retrospective | — |
All rows also accept the shared state tokens (`DONE`, `IN PROGRESS`, `NOT STARTED`, `FAILED (retry N/3)`); rows 4, 8, 9 additionally accept `SKIPPED`.
Row rendering format (step-number column is right-padded to 2 characters for alignment):
```
Step 1 Problem [<state token>]
Step 2 Research [<state token>]
Step 3 Plan [<state token>]
Step 4 UI Design [<state token>]
Step 5 Decompose [<state token>]
Step 6 Implement [<state token>]
Step 7 Run Tests [<state token>]
Step 8 Security Audit [<state token>]
Step 9 Performance Test [<state token>]
Step 10 Deploy [<state token>]
Step 11 Retrospective [<state token>]
```
+207
View File
@@ -0,0 +1,207 @@
# Meta-Repo Workflow
Workflow for **meta-repositories** — repos that aggregate multiple components via git submodules, npm/cargo/pnpm/go workspaces, or ad-hoc conventions. The meta-repo itself has little or no source code of its own; it orchestrates cross-cutting documentation, CI/CD, and component registration.
This flow differs fundamentally from `greenfield` and `existing-code`:
- **No problem/research/plan phases** — meta-repos don't build features, they coordinate existing ones
- **No test spec / implement / run tests** — the meta-repo has no code to test
- **No `_docs/00_problem/` artifacts** — documentation target is `_docs/*.md` unified docs, not per-feature `_docs/NN_feature/` folders
- **Primary artifact is `_docs/_repo-config.yaml`** — generated by `monorepo-discover`, read by every other step
## Step Reference Table
| Step | Name | Sub-Skill | Internal SubSteps |
|------|------|-----------|-------------------|
| 1 | Discover | monorepo-discover/SKILL.md | Phase 1-10 |
| 2 | Config Review | (human checkpoint, no sub-skill) | — |
| 3 | Status | monorepo-status/SKILL.md | Sections 1-5 |
| 4 | Document Sync | monorepo-document/SKILL.md | Phase 1-7 (conditional on doc drift) |
| 5 | CICD Sync | monorepo-cicd/SKILL.md | Phase 1-7 (conditional on CI drift) |
| 6 | Loop | (auto-return to Step 3 on next invocation) | — |
**Onboarding is NOT in the auto-chain.** Onboarding a new component is always user-initiated (`monorepo-onboard` directly, or answering "yes" to the optional onboard branch at end of Step 5). The autodev does NOT silently onboard components it discovers.
## Detection Rules
**Resolution**: when a state file exists, `state.step` + `state.status` drive detection and the conditions below are not consulted. When no state file exists (cold start), walk the rules in order — first match wins. Meta-repo uses `_docs/_repo-config.yaml` (and its `confirmed_by_user` flag) as its primary folder-probe signal rather than per-step artifact folders.
---
**Step 1 — Discover**
Condition: `_docs/_repo-config.yaml` does NOT exist
Action: Read and execute `.cursor/skills/monorepo-discover/SKILL.md`. After completion, auto-chain to **Step 2 (Config Review)**.
---
**Step 2 — Config Review** (session boundary)
Condition: `_docs/_repo-config.yaml` exists AND top-level `confirmed_by_user: false`
Action: This is a **hard session boundary**. The skill cannot proceed until a human reviews the generated config and sets `confirmed_by_user: true`. Present using Choose format:
```
══════════════════════════════════════
DECISION REQUIRED: Config review pending
══════════════════════════════════════
_docs/_repo-config.yaml was generated by monorepo-discover
but has confirmed_by_user: false.
A) I've reviewed — proceed to Status
B) Pause — I'll review the config and come back later
══════════════════════════════════════
Recommendation: B — review the inferred mappings (tagged
`confirmed: false`), unresolved questions, and assumptions
before flipping confirmed_by_user: true.
══════════════════════════════════════
```
- If user picks A → verify `confirmed_by_user: true` is now set in the config. If still `false`, re-ask. If true, auto-chain to **Step 3 (Status)**.
- If user picks B → mark Step 2 as `in_progress`, update state file, end the session. Tell the user to invoke `/autodev` again after reviewing.
**Do NOT auto-flip `confirmed_by_user`.** Only the human does that.
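A minimal sketch of what the reviewer sees (this flow only relies on the top-level `confirmed_by_user` flag, the per-mapping `confirmed` tags, and `last_updated`; every other field name below is an assumption, and `monorepo-discover` defines the real schema):
```
confirmed_by_user: false          # the human flips this to true after review
last_updated: 2026-04-20          # illustrative date
components:                       # assumed field name
  - name: auth-service            # hypothetical component
    path: services/auth           # hypothetical path
    confirmed: false              # inferred mapping awaiting review
unresolved_questions:             # assumed field name
  - "Which CI pipeline owns services/auth?"
```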
---
**Step 3 — Status**
Condition (folder fallback): `_docs/_repo-config.yaml` exists AND `confirmed_by_user: true`.
State-driven: reached by auto-chain from Step 2 (user picked A), or entered on any re-invocation after a completed cycle.
Action: Read and execute `.cursor/skills/monorepo-status/SKILL.md`.
The status report identifies:
- Components with doc drift (commits newer than their mapped docs)
- Components with CI coverage gaps
- Registry/config mismatches
- Unresolved questions
Based on the report, auto-chain branches:
- If **doc drift** found → auto-chain to **Step 4 (Document Sync)**
- Else if **CI drift** (only) found → auto-chain to **Step 5 (CICD Sync)**
- Else if **registry mismatch** found (new components not in config) → present Choose format:
```
══════════════════════════════════════
DECISION REQUIRED: Registry drift detected
══════════════════════════════════════
Components in registry but not in config: <list>
Components in config but not in registry: <list>
A) Run monorepo-discover to refresh config
B) Run monorepo-onboard for each new component (interactive)
C) Ignore for now — continue
══════════════════════════════════════
Recommendation: A — safest; re-detect everything, human reviews
══════════════════════════════════════
```
- Else → **workflow done for this cycle**. Report "No drift. Meta-repo is in sync." Loop waits for next invocation.
---
**Step 4 — Document Sync**
State-driven: reached by auto-chain from Step 3 when the status report flagged doc drift.
Action: Read and execute `.cursor/skills/monorepo-document/SKILL.md` with scope = components flagged by status.
The skill:
1. Runs its own drift check (M7)
2. Asks user to confirm scope (components it will touch)
3. Applies doc edits
4. Skips any component with unconfirmed mapping (M5), reports
After completion:
- If the status report ALSO flagged CI drift → auto-chain to **Step 5 (CICD Sync)**
- Else → end cycle, report done
---
**Step 5 — CICD Sync**
State-driven: reached by auto-chain from Step 3 (when status report flagged CI drift and no doc drift) or from Step 4 (when both doc and CI drift were flagged).
Action: Read and execute `.cursor/skills/monorepo-cicd/SKILL.md` with scope = components flagged by status.
After completion, end cycle. Report files updated across both doc and CI sync.
---
**Step 6 — Loop (re-entry on next invocation)**
State-driven: all triggered steps completed; the meta-repo cycle has finished.
Action: Update state file to `step: 3, status: not_started` so that next `/autodev` invocation starts from Status. The meta-repo flow is cyclical — there's no terminal "done" state, because drift can appear at any time as submodules evolve.
On re-invocation:
- If config was updated externally and `confirmed_by_user` flipped back to `false` → go back to Step 2
- Otherwise → Step 3 (Status)
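After the cycle-end update, the state file would read (the `sub_step` reset shown follows the shared boundary mechanism; values are illustrative):
```
flow: meta-repo
step: 3
name: Status
status: not_started
sub_step:
  phase: 0
  name: awaiting-invocation
  detail: ""
retry_count: 0
cycle: 1
```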
## Explicit Onboarding Branch (user-initiated)
Onboarding is not auto-chained. Two ways to invoke:
**1. During Step 3 registry-mismatch handling** — if user picks option B in the registry-mismatch Choose format, launch `monorepo-onboard` interactively for each new component.
**2. Direct user request** — if the user says "onboard <name>" during any step, pause the current step, save state, run `monorepo-onboard`, then resume.
After onboarding completes, the config is updated. Auto-chain back to **Step 3 (Status)** to catch any remaining drift the new component introduced.
## Auto-Chain Rules
| Completed Step | Next Action |
|---------------|-------------|
| Discover (1) | Auto-chain → Config Review (2) |
| Config Review (2, user picked A, confirmed_by_user: true) | Auto-chain → Status (3) |
| Config Review (2, user picked B) | **Session boundary** — end session, await re-invocation |
| Status (3, doc drift) | Auto-chain → Document Sync (4) |
| Status (3, CI drift only) | Auto-chain → CICD Sync (5) |
| Status (3, no drift) | **Cycle complete** — end session, await re-invocation |
| Status (3, registry mismatch) | Ask user (A: discover, B: onboard, C: continue) |
| Document Sync (4) + CI drift pending | Auto-chain → CICD Sync (5) |
| Document Sync (4) + no CI drift | **Cycle complete** |
| CICD Sync (5) | **Cycle complete** |
## Status Summary — Step List
Flow name: `meta-repo`. Render using the banner template in `protocols.md` → "Banner Template (authoritative)".
Flow-specific slot values:
- `<header-suffix>`: empty.
- `<current-suffix>`: empty.
- `<footer-extras>`: add a single line:
```
Config: _docs/_repo-config.yaml [confirmed_by_user: <true|false>, last_updated: <date>]
```
| # | Step Name | Extra state tokens (beyond the shared set) |
|---|------------------|--------------------------------------------|
| 1 | Discover | — |
| 2 | Config Review | `IN PROGRESS (awaiting human)` |
| 3 | Status | `DONE (no drift)`, `DONE (N drifts)` |
| 4 | Document Sync | `DONE (N docs)`, `SKIPPED (no doc drift)` |
| 5 | CICD Sync | `DONE (N files)`, `SKIPPED (no CI drift)` |
All rows accept the shared state tokens (`DONE`, `IN PROGRESS`, `NOT STARTED`, `FAILED (retry N/3)`); rows 4 and 5 additionally accept `SKIPPED`.
Row rendering format:
```
Step 1 Discover [<state token>]
Step 2 Config Review [<state token>]
Step 3 Status [<state token>]
Step 4 Document Sync [<state token>]
Step 5 CICD Sync [<state token>]
```
## Notes for the meta-repo flow
- **No session boundary except Step 2**: unlike existing-code flow (which has boundaries around decompose), meta-repo flow only pauses at config review. Syncing is fast enough to complete in one session.
- **Cyclical, not terminal**: no "done forever" state. Each invocation completes a drift cycle; next invocation starts fresh.
- **No tracker integration**: this flow does NOT create Jira/ADO tickets. Maintenance is not a feature — if a feature-level ticket spans the meta-repo's concerns, it lives in the per-component workspace.
- **Onboarding is opt-in**: never auto-onboarded. User must explicitly request.
- **Failure handling**: uses the same retry/escalation protocol as other flows (see `protocols.md`).
@@ -1,12 +1,12 @@
# Autodev Protocols
## User Interaction Protocol
Every time the autodev or a sub-skill needs a user decision, use the **Choose A / B / C / D** format. This applies to:
- State transitions where multiple valid next actions exist
- Sub-skill BLOCKING gates that require user judgment
- Any fork where the autodev cannot confidently pick the right path
- Trade-off decisions (tech choices, scope, risk acceptance)
### When to Ask (MUST ask)
@@ -49,55 +49,74 @@ Rules:
5. Play the notification sound (per `.cursor/rules/human-attention-sound.mdc`) before presenting the choice
6. After the user picks, proceed immediately — no follow-up confirmation unless the choice was destructive
## Optional Skill Gate (reusable template)
Several flow steps ask the user whether to run an optional skill (security audit, performance test, etc.) before auto-chaining. Instead of re-stating the Choose block and skip semantics at each such step, flow files invoke this shared template.
### Template shape
```
══════════════════════════════════════
DECISION REQUIRED: <question>
══════════════════════════════════════
A) <option-a-label>
B) <option-b-label>
══════════════════════════════════════
Recommendation: <A|B> — <reason>
══════════════════════════════════════
```
### Semantics (same for every invocation)
- **On A** → read and execute the target skill's `SKILL.md`; after it completes, auto-chain to `<next-step>`.
- **On B** → mark the current step `skipped` in the state file; auto-chain to `<next-step>`.
- **On skill failure** → standard Failure Handling (§Failure Handling) — retry ladder, then escalate via Choose block.
- **Sound before the prompt** — follow `.cursor/rules/human-attention-sound.mdc`.
### How flow files invoke it
Each flow-file step that needs this gate supplies only the variable parts:
```
Action: Apply the **Optional Skill Gate** (protocols.md → "Optional Skill Gate") with:
- question: <Choose-block header>
- option-a-label: <one-line A description>
- option-b-label: <one-line B description>
- recommendation: <A|B> — <short reason, may be dynamic>
- target-skill: <.cursor/skills/<name>/SKILL.md, plus any mode hint>
- next-step: Step <N> (<name>)
```
The resolved Choose block (shape above) is then rendered verbatim by substituting these variables. Do NOT reword the shared scaffolding — reword only the variable parts. If a step needs different semantics (e.g., "re-run same skill" rather than "skip to next step"), it MUST NOT use this template; it writes the Choose block inline with its own semantics.
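For example, the security-audit gate used by the flow files resolves to:
```
══════════════════════════════════════
DECISION REQUIRED: Run security audit before deploy?
══════════════════════════════════════
A) Run security audit (recommended for production deployments)
B) Skip — proceed directly to deploy
══════════════════════════════════════
Recommendation: A — catches vulnerabilities before production
══════════════════════════════════════
```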
### When NOT to use this template
- The user choice has **more than two options** (A/B/C/D).
- The choice is **not "run-or-skip-this-skill"** (e.g., "another round of the same skill", "pick tech stack", "proceed vs. rollback").
- The skipped path needs special bookkeeping beyond `status: skipped` (e.g., must also move artifacts, notify tracker, trigger a different skill).
For those cases, write the Choose block inline using the base format in §User Interaction Protocol.
## Work Item Tracker Authentication
Several workflow steps create work items (epics, tasks, links). The system treats the task tracker MCP as an interchangeable backend. All tracker detection, authentication, availability gating, `tracker: local` fallback semantics, and leftovers handling are defined in `.cursor/rules/tracker.mdc`. Follow that rule — do not restate its logic here.
Autodev-specific additions on top of the rule:
1. If there is no task tracker MCP, or it is not authorized, ask the user how to proceed
2. Record the choice in the state file: `tracker: jira` or `tracker: ado`
3. If neither is available, set `tracker: local` and proceed without external tracking
### Steps That Require Work Item Tracker
Before entering a step from this table for the first time in a session, verify tracker availability per `.cursor/rules/tracker.mdc`. If the user has already chosen `tracker: local`, skip the gate and proceed.
| Flow | Step | Sub-Step | Tracker Action |
|------|------|----------|----------------|
| greenfield | Plan | Step 6 — Epics | Create epics for each component |
| greenfield | Decompose | Step 1 + Step 2 + Step 3 — All tasks | Create ticket per task, link to epic |
| existing-code | Decompose Tests | Step 1t + Step 3 — All test tasks | Create ticket per task, link to epic |
| existing-code | New Task | Step 7 — Ticket | Create ticket per task, link to epic |
### State File Marker
Record the resolved choice in the state file once per session: `tracker: jira` or `tracker: local`. Subsequent steps read this marker instead of re-running the gate.
## Error Handling
@@ -111,41 +130,42 @@ All error situations that require user input MUST use the **Choose A / B / C / D
| User wants to go back to a previous step | Use Choose format: A) re-run (with overwrite warning), B) stay on current step |
| User asks "where am I?" without wanting to continue | Show Status Summary only, do not start execution |
## Failure Handling
One retry ladder covers all failure modes: explicit failure returned by a sub-skill, stuck loops detected while monitoring, and persistent failures across conversations. The single counter is `retry_count` in the state file; the single escalation is the Choose block below.
### Failure signals
Treat the sub-skill as **failed** when ANY of the following is observed:
- The sub-skill explicitly returns a failed result (including blocked subagents, auto-fix loop exhaustion, prerequisite violations).
- **Stuck signals**: the same artifact is rewritten 3+ times without meaningful change; the sub-skill re-asks a question that was already answered; no new artifact has been saved despite active execution.
### Retry ladder
```
Failure observed
 ├─ retry_count < 3 ?
 │    YES → increment retry_count in state file
 │        → re-read the sub-skill's SKILL.md and _docs/_autodev_state.md
 │        → resume from the last recorded sub_step (restart from sub_step 1 only if corruption is suspected)
 │        → loop
 │    NO (retry_count = 3) →
 │        → set status: failed and retry_count: 3 in Current Step
 │        → play notification sound (.cursor/rules/human-attention-sound.mdc)
 │        → escalate (Choose block below)
 │        → do NOT auto-retry until the user intervenes
```
Rules:
1. **Auto-retry is immediate** — do not ask before retrying.
2. **Preserve `sub_step`** across retries unless the failure indicates artifact corruption.
3. **Reset `retry_count: 0` on success.**
4. The counter is **per step, per cycle**. It is not cleared by crossing a session boundary — persistence across conversations is intentional; it IS the circuit breaker.
### Escalation
```
══════════════════════════════════════
@@ -164,49 +184,25 @@ After 3 failed auto-retries of the same skill, the failure is likely not user-re
══════════════════════════════════════
```
### Re-entry after escalation
On the next invocation, if the state file shows `status: failed` AND `retry_count: 3`, do NOT auto-retry. Present the escalation block above first:
- User picks A → reset `retry_count: 0`, set `status: in_progress`, re-execute.
- User picks B → mark step `skipped`, proceed to the next step.
- User picks C → stop; return control to the user.
### Incident retrospective
Immediately after the user has made their A/B/C choice, invoke `.cursor/skills/retrospective/SKILL.md` in **incident mode**:
```
mode: incident
failing_skill: <skill name>
failure_summary: <last failure reason string>
```
This produces `_docs/06_metrics/incident_<YYYY-MM-DD>_<skill>.md` and appends 1-3 lessons to `_docs/LESSONS.md` under `process` or `tooling`. The retro runs even if the user picked Abort — the goal is to capture the pattern while it is fresh. If the retrospective skill itself fails, log the failure to `_docs/_process_leftovers/` but do NOT block the user's recovery choice from completing.
## Context Management Protocol
@@ -218,7 +214,7 @@ Disk is memory. Never rely on in-context accumulation — read from `_docs/` art
When re-entering a skill (new conversation or context refresh):
- Always read: `_docs/_autodev_state.md`
- Always read: the active skill's `SKILL.md`
- Conditionally read: only the `_docs/` artifacts the current sub-step requires (listed in each skill's Context Resolution section)
- Never bulk-read: do not load all `_docs/` files at once
@@ -228,7 +224,7 @@ When re-entering a skill (new conversation or context refresh):
If context is filling up during a long skill (e.g., document, implement):
1. Save current sub-step progress to the skill's artifact directory
2. Update `_docs/_autodev_state.md` with exact sub-step position
3. Suggest a new conversation: "Context is getting long — recommend continuing in a fresh conversation for better results"
4. On re-entry, the skill's resumability protocol picks up from the saved sub-step
@@ -290,12 +286,12 @@ For steps that produce `_docs/` artifacts (problem, research, plan, decompose, d
══════════════════════════════════════
```
3. **Git safety net**: artifacts are committed with each autodev step completion. To roll back: `git log --oneline _docs/` to find the commit, then `git checkout <commit> -- _docs/<folder>/`
4. **State file rollback**: when rolling back artifacts, also update `_docs/_autodev_state.md` to reflect the rolled-back step (set it to `in_progress`, clear completed date)
## Debug Protocol
When the implement skill's auto-fix loop fails (code review FAIL after 2 auto-fix attempts) or an implementer subagent reports a blocker, the user is asked to intervene. This protocol guides the debugging process. (Retry budget and escalation are covered by Failure Handling above; this section is about *how* to diagnose once the user has been looped in.)
### Structured Debugging Workflow
@@ -360,6 +356,39 @@ If debugging does not resolve the issue after 2 focused attempts:
## Status Summary
On every invocation, before executing any skill, present a status summary built from the state file (with folder scan fallback). For re-entry (state file exists), cross-check the current step against `_docs/` folder structure and present any `status: failed` state to the user before continuing.
### Banner Template (authoritative)
The banner shell is defined here once. Each flow file contributes only its step-list fragment and any flow-specific header/footer extras. Do not inline a full banner in flow files.
```
═══════════════════════════════════════════════════
AUTODEV STATUS (<flow-name>)<header-suffix>
═══════════════════════════════════════════════════
<step-list from the active flow file>
═══════════════════════════════════════════════════
Current: Step <N> — <Name><current-suffix>
SubStep: <M> — <sub-skill internal step name>
Retry: <N/3> ← omit row if retry_count is 0
Action: <what will happen next>
<footer-extras from the active flow file>
═══════════════════════════════════════════════════
```
### Slot rules
- `<flow-name>``greenfield`, `existing-code`, or `meta-repo`.
- `<header-suffix>` — optional, flow-specific. The existing-code flow appends ` — Cycle <N>` when `state.cycle > 1`; other flows leave it empty.
- `<step-list>` — a fixed-width table supplied by the active flow file (see that file's "Status Summary — Step List" section). Row format is standardized:
```
Step <N> <Step Name> [<state token>]
```
where `<state token>` comes from the state-token set defined per row in the flow's step-list table.
- `<current-suffix>` — optional, flow-specific. The existing-code flow appends ` (cycle <N>)` when `state.cycle > 1`; other flows leave it empty.
- `Retry:` row — omit entirely when `retry_count` is 0. Include it with `<N>/3` otherwise.
- `<footer-extras>` — optional, flow-specific. The meta-repo flow adds a `Config:` line with `_docs/_repo-config.yaml` state; other flows leave it empty.
### State token set (shared)
The common tokens all flows may emit are: `DONE`, `IN PROGRESS`, `NOT STARTED`, `SKIPPED`, `FAILED (retry N/3)`. Specific step rows may extend this with parenthetical detail (e.g., `DONE (N drafts)`, `DONE (N tasks)`, `IN PROGRESS (batch M of ~N)`, `DONE (N passed, M failed)`). The flow's step-list table declares which extensions each step supports.
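A filled-in example for the greenfield flow mid-implementation (all values are illustrative):
```
═══════════════════════════════════════════════════
AUTODEV STATUS (greenfield)
═══════════════════════════════════════════════════
Step 1  Problem              [DONE]
Step 2  Research             [DONE (2 drafts)]
Step 3  Plan                 [DONE]
Step 4  UI Design            [SKIPPED]
Step 5  Decompose            [DONE (14 tasks)]
Step 6  Implement            [IN PROGRESS (batch 2 of ~4)]
Step 7  Run Tests            [NOT STARTED]
Step 8  Security Audit       [NOT STARTED]
Step 9  Performance Test     [NOT STARTED]
Step 10 Deploy               [NOT STARTED]
Step 11 Retrospective        [NOT STARTED]
═══════════════════════════════════════════════════
Current: Step 6 — Implement
SubStep: 7 — batch-loop
Action: continue the implement skill from batch 2
═══════════════════════════════════════════════════
```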
+158
View File
@@ -0,0 +1,158 @@
# Autodev State Management
## State File: `_docs/_autodev_state.md`
The autodev persists its position to `_docs/_autodev_state.md`. This is a lightweight pointer — only the current step. All history lives in `_docs/` artifacts and git log. Folder scanning is the fallback when the state file doesn't exist.
### Template
**Saved at:** `_docs/_autodev_state.md` (workspace-relative, one file per project). Created on the first `/autodev` invocation; updated in place on every state transition; never deleted.
```markdown
# Autodev State
## Current Step
flow: [greenfield | existing-code | meta-repo]
step: [1-11 for greenfield, 1-17 for existing-code, 1-6 for meta-repo, or "done"]
name: [step name from the active flow's Step Reference Table]
status: [not_started / in_progress / completed / skipped / failed]
sub_step:
phase: [integer — sub-skill internal phase/step number, or 0 if not started]
name: [kebab-case short identifier from the sub-skill, or "awaiting-invocation"]
detail: [optional free-text note, may be empty]
retry_count: [0-3 — consecutive auto-retry attempts, reset to 0 on success]
cycle: [1-N — feature cycle counter for existing-code flow; increments on each "Re-Entry After Completion" loop; always 1 for greenfield and meta-repo]
```
The `sub_step` field is structured. Every sub-skill must save both `phase` (integer) and `name` (kebab-case token matching the skill's documented phase names). `detail` is optional human-readable context. On re-entry the orchestrator parses `phase` and `name` to resume; if parsing fails, fall back to folder scan and log the parse failure.
### Sub-Skill Phase Persistence — Rules (not a registry)
Each sub-skill is authoritative for its own phase list. Phase names and numbers live inside the sub-skill's own SKILL.md (and any `steps/` / `phases/` files). The orchestrator does not maintain a central phase table — it reads whatever `phase` / `name` the sub-skill last wrote.
Every sub-skill MUST follow these rules when persisting `sub_step`:
1. **`phase`** — a strictly monotonic integer per invocation, starting at 0 (`awaiting-invocation`) and incrementing by 1 at each internal save point. No fractional values are ever persisted. If the skill's own docs use half-step numbering (e.g., "Phase 4.5", decompose's "Step 1.5"), the persisted integer is simply the next integer, and all subsequent phases shift up by one in that skill's own file.
2. **`name`** — a kebab-case short identifier unique within that sub-skill. Use the phase's heading or step title in kebab-case (e.g., `component-decomposition`, `auto-fix-gate`, `cross-task-consistency`). Different modes of the same skill may reuse a `phase` integer with distinct `name` values (e.g., `decompose` phase 1 is `bootstrap-structure` in default mode, `test-infrastructure-bootstrap` in tests-only mode).
3. **`detail`** — optional free-text note (batch index, mode flag, retry hint); may be empty.
4. **Reserved name**`name: awaiting-invocation` with `phase: 0` is the universal "skill was chained but has not started" marker. Every sub-skill implicitly supports it; no sub-skill should reuse the token for anything else.
On re-entry, the orchestrator parses the structured field and resumes at `(phase, name)`. If parsing fails, it falls back to folder scan and logs the parse error — it does NOT guess a phase.
The `cycle` counter is used by existing-code flow Step 10 (Implement) detection and by implementation report naming (`implementation_report_{feature_slug}_cycle{N}.md`). It starts at 1 when a project enters existing-code flow (either by routing from greenfield's Done branch, or by first invocation on an existing codebase). It increments on each completed Retrospective → New Task loop.
### Examples
```
flow: greenfield
step: 3
name: Plan
status: in_progress
sub_step:
phase: 4
name: architecture-review-risk-assessment
detail: ""
retry_count: 0
cycle: 1
```
```
flow: existing-code
step: 3
name: Test Spec
status: failed
sub_step:
phase: 1
name: test-case-generation
detail: "variant 1b"
retry_count: 3
cycle: 1
```
```
flow: meta-repo
step: 2
name: Config Review
status: in_progress
sub_step:
phase: 0
name: awaiting-human-review
detail: "awaiting review of _docs/_repo-config.yaml"
retry_count: 0
cycle: 1
```
```
flow: existing-code
step: 10
name: Implement
status: in_progress
sub_step:
phase: 7
name: batch-loop
detail: "batch 2 of ~4"
retry_count: 0
cycle: 3
```
### State File Rules
1. **Create** on the first autodev invocation (after state detection determines Step 1)
2. **Update** after every change — this includes: batch completion, sub-step progress, step completion, session boundary, failed retry, or any meaningful state transition. The state file must always reflect the current reality.
3. **Read** as the first action on every invocation — before folder scanning
4. **Cross-check**: verify against actual `_docs/` folder contents. If they disagree, trust the folder structure and update the state file
5. **Never delete** the state file
6. **Retry tracking**: increment `retry_count` on each failed auto-retry; reset to `0` on success. If `retry_count` reaches 3, set `status: failed`
7. **Failed state on re-entry**: if `status: failed` with `retry_count: 3`, do NOT auto-retry — present the issue to the user first
8. **Skill-internal state**: when the active skill maintains its own state file (e.g., document skill's `_docs/02_document/state.json`), the autodev's `sub_step` field should reflect the skill's internal progress. On re-entry, cross-check the skill's state file against the autodev's `sub_step` for consistency.
## State Detection
Read `_docs/_autodev_state.md` first. If it exists and is consistent with the folder structure, use the `Current Step` from the state file. If the state file doesn't exist or is inconsistent, fall back to folder scanning.
### Folder Scan Rules (fallback)
Scan the workspace and `_docs/` to determine the current workflow position. The detection rules are defined in each flow file (`flows/greenfield.md`, `flows/existing-code.md`, `flows/meta-repo.md`). Resolution order:
1. Apply the Flow Resolution rules in `SKILL.md` to pick the flow first (meta-repo detection takes priority over greenfield/existing-code).
2. Within the selected flow, check its detection rules in order — first match wins.
## Re-Entry Protocol
When the user invokes `/autodev` and work already exists:
1. Read `_docs/_autodev_state.md`
2. Cross-check against `_docs/` folder structure
3. Present Status Summary (render using the banner template in `protocols.md` → "Banner Template", filled in with the active flow's "Status Summary — Step List" fragment)
4. If the detected step has a sub-skill with built-in resumability, the sub-skill handles mid-step recovery
5. Continue execution from detected state
## Session Boundaries
A **session boundary** is a transition that explicitly breaks auto-chain. Which transitions are boundaries is declared **in each flow file's Auto-Chain Rules table** — rows marked `**Session boundary**`. The details live with the steps they apply to; this section defines only the shared mechanism.
**Invariant**: a flow row without the `Session boundary` marker auto-chains unconditionally. Missing marker = missing boundary.
### Orchestrator mechanism at a boundary
1. Update the state file: mark the current step `completed`; set the next step with `status: not_started`; reset `sub_step: {phase: 0, name: awaiting-invocation, detail: ""}`; keep `retry_count: 0`.
2. Present a brief summary of what just finished (tasks produced, batches expected, etc., as relevant to the boundary).
3. Present the shared Choose block (template below) — or a flow-specific override if the flow file supplies one.
4. End the session — do not start the next skill in the same conversation.
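Item 1 above, written out as the literal field values the state file should end up with (a sketch — serialization into the markdown state file is up to the orchestrator):

```python
# Fields for the *next* step immediately after a session boundary
BOUNDARY_RESET = {
    "status": "not_started",   # next step has not begun
    "sub_step": {"phase": 0, "name": "awaiting-invocation", "detail": ""},
    "retry_count": 0,
}
```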
### Shared Choose template
```
══════════════════════════════════════
DECISION REQUIRED: <what just completed> — start <next phase>?
══════════════════════════════════════
A) Start a new conversation for <next phase> (recommended for context freshness)
B) Continue in this conversation (NOT recommended — context may degrade)
Warning: if context fills mid-<next phase>, state will be saved and you will
still be asked to resume in a new conversation — option B only delays that.
══════════════════════════════════════
Recommendation: A — <next phase> is long; fresh context helps
══════════════════════════════════════
```
Individual boundaries MAY override this template with a flow-specific Choose block when the pause has different semantics (e.g., `meta-repo.md` Step 2 Config Review pauses for human review of a config flag, not for context freshness). The flow file is authoritative for any such override.
-123
@@ -1,123 +0,0 @@
---
name: autopilot
description: |
Auto-chaining orchestrator that drives the full BUILD-SHIP workflow from problem gathering through deployment.
Detects current project state from _docs/ folder, resumes from where it left off, and flows through
problem → research → plan → decompose → implement → deploy without manual skill invocation.
Maximizes work per conversation by auto-transitioning between skills.
Trigger phrases:
- "autopilot", "auto", "start", "continue"
- "what's next", "where am I", "project status"
category: meta
tags: [orchestrator, workflow, auto-chain, state-machine, meta-skill]
disable-model-invocation: true
---
# Autopilot Orchestrator
Auto-chaining execution engine that drives the full BUILD → SHIP workflow. Detects project state from `_docs/`, resumes from where work stopped, and flows through skills automatically. The user invokes `/autopilot` once — the engine handles sequencing, transitions, and re-entry.
## File Index
| File | Purpose |
|------|---------|
| `flows/greenfield.md` | Detection rules, step table, and auto-chain rules for new projects |
| `flows/existing-code.md` | Detection rules, step table, and auto-chain rules for existing codebases |
| `state.md` | State file format, rules, re-entry protocol, session boundaries |
| `protocols.md` | User interaction, tracker auth, choice format, error handling, status summary |
**On every invocation**: read all four files above before executing any logic.
## Core Principles
- **Auto-chain**: when a skill completes, immediately start the next one — no pause between skills
- **Only pause at decision points**: BLOCKING gates inside sub-skills are the natural pause points; do not add artificial stops between steps
- **State from disk**: current step is persisted to `_docs/_autopilot_state.md` and cross-checked against `_docs/` folder structure
- **Re-entry**: on every invocation, read the state file and cross-check against `_docs/` folders before continuing
- **Delegate, don't duplicate**: read and execute each sub-skill's SKILL.md; never inline their logic here
- **Sound on pause**: follow `.cursor/rules/human-attention-sound.mdc` — play a notification sound before every pause that requires human input (AskQuestion tool preferred for structured choices; fall back to plain text if unavailable)
- **Minimize interruptions**: only ask the user when the decision genuinely cannot be resolved automatically
- **Single project per workspace**: all `_docs/` paths are relative to workspace root; for monorepos, each service needs its own Cursor workspace
## Flow Resolution
Determine which flow to use:
1. If workspace has **no source code files** → **greenfield flow**
2. If workspace has source code files **and** `_docs/` does not exist → **existing-code flow**
3. If workspace has source code files **and** `_docs/` exists **and** `_docs/_autopilot_state.md` does not exist → **existing-code flow**
4. If workspace has source code files **and** `_docs/_autopilot_state.md` exists → read the `flow` field from the state file and use that flow
After selecting the flow, apply its detection rules (first match wins) to determine the current step.
## Execution Loop
Every invocation follows this sequence:
```
1. Read _docs/_autopilot_state.md (if exists)
2. Read all File Index files above
3. Cross-check state file against _docs/ folder structure (rules in state.md)
4. Resolve flow (see Flow Resolution above)
5. Resolve current step (detection rules from the active flow file)
6. Present Status Summary (template in active flow file)
7. Execute:
a. Delegate to current skill (see Skill Delegation below)
b. If skill returns FAILED → apply Skill Failure Retry Protocol (see protocols.md):
- Auto-retry the same skill (failure may be caused by missing user input or environment issue)
- If 3 consecutive auto-retries fail → set status: failed, warn user, stop auto-retry
c. When skill completes successfully → reset retry counter, update state file (rules in state.md)
d. Re-detect next step from the active flow's detection rules
e. If next skill is ready → auto-chain (go to 7a with next skill)
f. If session boundary reached → update state, suggest new conversation (rules in state.md)
g. If all steps done → update state → report completion
```
## Skill Delegation
For each step, the delegation pattern is:
1. Update state file: set `step` to the autopilot step number, status to `in_progress`, set `sub_step` to the sub-skill's current internal step/phase, reset `retry_count: 0`
2. Announce: "Starting [Skill Name]..."
3. Read the skill file: `.cursor/skills/[name]/SKILL.md`
4. Execute the skill's workflow exactly as written, including all BLOCKING gates, self-verification checklists, save actions, and escalation rules. Update `sub_step` in state each time the sub-skill advances.
5. If the skill **fails**: follow the Skill Failure Retry Protocol in `protocols.md` — increment `retry_count`, auto-retry up to 3 times, then escalate.
6. When complete (success): reset `retry_count: 0`, update state file to the next step with `status: not_started`, return to auto-chain rules (from active flow file)
Do NOT modify, skip, or abbreviate any part of the sub-skill's workflow. The autopilot is a sequencer, not an optimizer.
## State File Template
The state file (`_docs/_autopilot_state.md`) is a minimal pointer — only the current step. Full format rules are in `state.md`.
```markdown
# Autopilot State
## Current Step
flow: [greenfield | existing-code]
step: [number or "done"]
name: [step name]
status: [not_started / in_progress / completed / skipped / failed]
sub_step: [0 or N — sub-skill phase name]
retry_count: [0-3]
```
## Trigger Conditions
This skill activates when the user wants to:
- Start a new project from scratch
- Continue an in-progress project
- Check project status
- Let the AI guide them through the full workflow
**Keywords**: "autopilot", "auto", "start", "continue", "what's next", "where am I", "project status"
**Differentiation**:
- User wants only research → use `/research` directly
- User wants only planning → use `/plan` directly
- User wants to document an existing codebase → use `/document` directly
- User wants the full guided workflow → use `/autopilot`
## Flow Reference
See `flows/greenfield.md` and `flows/existing-code.md` for step tables, detection rules, auto-chain rules, and status summary templates.
@@ -1,297 +0,0 @@
# Existing Code Workflow
Workflow for projects with an existing codebase. Starts with documentation, produces test specs, checks code testability (refactoring if needed), decomposes and implements tests, verifies them, refactors with that safety net, then adds new functionality and deploys.
## Step Reference Table
| Step | Name | Sub-Skill | Internal SubSteps |
|------|------|-----------|-------------------|
| 1 | Document | document/SKILL.md | Steps 1–8 |
| 2 | Test Spec | test-spec/SKILL.md | Phase 1a–1b |
| 3 | Code Testability Revision | refactor/SKILL.md (guided mode) | Phases 0–7 (conditional) |
| 4 | Decompose Tests | decompose/SKILL.md (tests-only) | Step 1t + Step 3 + Step 4 |
| 5 | Implement Tests | implement/SKILL.md | (batch-driven, no fixed sub-steps) |
| 6 | Run Tests | test-run/SKILL.md | Steps 1–4 |
| 7 | Refactor | refactor/SKILL.md | Phases 0–7 (optional) |
| 8 | New Task | new-task/SKILL.md | Steps 1–8 (loop) |
| 9 | Implement | implement/SKILL.md | (batch-driven, no fixed sub-steps) |
| 10 | Run Tests | test-run/SKILL.md | Steps 1–4 |
| 11 | Update Docs | document/SKILL.md (task mode) | Task Steps 0–5 |
| 12 | Security Audit | security/SKILL.md | Phase 1–5 (optional) |
| 13 | Performance Test | (autopilot-managed) | Load/stress tests (optional) |
| 14 | Deploy | deploy/SKILL.md | Step 1–7 |
After Step 14, the existing-code workflow is complete.
## Detection Rules
Check rules in order — first match wins.
---
**Step 1 — Document**
Condition: `_docs/` does not exist AND the workspace contains source code files (e.g., `*.py`, `*.cs`, `*.rs`, `*.ts`, `src/`, `Cargo.toml`, `*.csproj`, `package.json`)
Action: An existing codebase without documentation was detected. Read and execute `.cursor/skills/document/SKILL.md`. After the document skill completes, re-detect state (the produced `_docs/` artifacts will place the project at Step 2 or later).
---
**Step 2 — Test Spec**
Condition: `_docs/02_document/FINAL_report.md` exists AND workspace contains source code files (e.g., `*.py`, `*.cs`, `*.rs`, `*.ts`) AND `_docs/02_document/tests/traceability-matrix.md` does not exist AND the autopilot state shows `step >= 2` (Document already ran)
Action: Read and execute `.cursor/skills/test-spec/SKILL.md`
This step applies when the codebase was documented via the `/document` skill. Test specifications must be produced before refactoring or further development.
---
**Step 3 — Code Testability Revision**
Condition: `_docs/02_document/tests/traceability-matrix.md` exists AND the autopilot state shows Test Spec (Step 2) is completed AND the autopilot state does NOT show Code Testability Revision (Step 3) as completed or skipped
Action: Analyze the codebase against the test specs to determine whether the code can be tested as-is.
1. Read `_docs/02_document/tests/traceability-matrix.md` and all test scenario files in `_docs/02_document/tests/`
2. For each test scenario, check whether the code under test can be exercised in isolation. Look for:
- Hardcoded file paths or directory references
- Hardcoded configuration values (URLs, credentials, magic numbers)
- Global mutable state that cannot be overridden
- Tight coupling to external services without abstraction
- Missing dependency injection or non-configurable parameters
- Direct file system operations without path configurability
- Inline construction of heavy dependencies (models, clients)
3. If ALL scenarios are testable as-is:
- Mark Step 3 as `completed` with outcome "Code is testable — no changes needed"
- Auto-chain to Step 4 (Decompose Tests)
4. If testability issues are found:
- Create `_docs/04_refactoring/01-testability-refactoring/`
- Write `list-of-changes.md` in that directory using the refactor skill template (`.cursor/skills/refactor/templates/list-of-changes.md`), with:
- **Mode**: `guided`
- **Source**: `autopilot-testability-analysis`
- One change entry per testability issue found (change ID, file paths, problem, proposed change, risk, dependencies)
- Invoke the refactor skill in **guided mode**: read and execute `.cursor/skills/refactor/SKILL.md` with the `list-of-changes.md` as input
- The refactor skill will create RUN_DIR (`01-testability-refactoring`), create tasks in `_docs/02_tasks/todo/`, delegate to implement skill, and verify results
- Phase 3 (Safety Net) is automatically skipped by the refactor skill for testability runs
- After refactoring completes, mark Step 3 as `completed`
- Auto-chain to Step 4 (Decompose Tests)
---
**Step 4 — Decompose Tests**
Condition: `_docs/02_document/tests/traceability-matrix.md` exists AND workspace contains source code files AND the autopilot state shows Step 3 (Code Testability Revision) is completed or skipped AND (`_docs/02_tasks/todo/` does not exist or has no test task files)
Action: Read and execute `.cursor/skills/decompose/SKILL.md` in **tests-only mode** (pass `_docs/02_document/tests/` as input). The decompose skill will:
1. Run Step 1t (test infrastructure bootstrap)
2. Run Step 3 (blackbox test task decomposition)
3. Run Step 4 (cross-verification against test coverage)
If `_docs/02_tasks/` subfolders have some task files already (e.g., refactoring tasks from Step 3), the decompose skill's resumability handles it — it appends test tasks alongside existing tasks.
---
**Step 5 — Implement Tests**
Condition: `_docs/02_tasks/todo/` contains task files AND `_dependencies_table.md` exists AND the autopilot state shows Step 4 (Decompose Tests) is completed AND `_docs/03_implementation/implementation_report_tests.md` does not exist
Action: Read and execute `.cursor/skills/implement/SKILL.md`
The implement skill reads test tasks from `_docs/02_tasks/todo/` and implements them.
If `_docs/03_implementation/` has batch reports, the implement skill detects completed tasks and continues.
---
**Step 6 — Run Tests**
Condition: `_docs/03_implementation/implementation_report_tests.md` exists AND the autopilot state shows Step 5 (Implement Tests) is completed AND the autopilot state does NOT show Step 6 (Run Tests) as completed
Action: Read and execute `.cursor/skills/test-run/SKILL.md`
Verifies the implemented test suite passes before proceeding to refactoring. The tests form the safety net for all subsequent code changes.
---
**Step 7 — Refactor (optional)**
Condition: the autopilot state shows Step 6 (Run Tests) is completed AND the autopilot state does NOT show Step 7 (Refactor) as completed or skipped AND no `_docs/04_refactoring/` run folder contains a `FINAL_report.md` for a non-testability run
Action: Present using Choose format:
```
══════════════════════════════════════
DECISION REQUIRED: Refactor codebase before adding new features?
══════════════════════════════════════
A) Run refactoring (recommended if code quality issues were noted during documentation)
B) Skip — proceed directly to New Task
══════════════════════════════════════
Recommendation: [A or B — base on whether documentation
flagged significant code smells, coupling issues, or
technical debt worth addressing before new development]
══════════════════════════════════════
```
- If user picks A → Read and execute `.cursor/skills/refactor/SKILL.md` in automatic mode. The refactor skill creates a new run folder in `_docs/04_refactoring/` (e.g., `02-coupling-refactoring`), runs the full method using the implemented tests as a safety net. After completion, auto-chain to Step 8 (New Task).
- If user picks B → Mark Step 7 as `skipped` in the state file, auto-chain to Step 8 (New Task).
---
**Step 8 — New Task**
Condition: the autopilot state shows Step 7 (Refactor) is completed or skipped AND the autopilot state does NOT show Step 8 (New Task) as completed
Action: Read and execute `.cursor/skills/new-task/SKILL.md`
The new-task skill interactively guides the user through defining new functionality. It loops until the user is done adding tasks. New task files are written to `_docs/02_tasks/todo/`.
---
**Step 9 — Implement**
Condition: the autopilot state shows Step 8 (New Task) is completed AND `_docs/03_implementation/` does not contain an `implementation_report_*.md` file other than `implementation_report_tests.md` (the tests report from Step 5 is excluded from this check)
Action: Read and execute `.cursor/skills/implement/SKILL.md`
The implement skill reads the new tasks from `_docs/02_tasks/todo/` and implements them. Tasks already implemented in Step 5 are skipped (completed tasks have been moved to `done/`).
If `_docs/03_implementation/` has batch reports from this phase, the implement skill detects completed tasks and continues.
---
**Step 10 — Run Tests**
Condition: the autopilot state shows Step 9 (Implement) is completed AND the autopilot state does NOT show Step 10 (Run Tests) as completed
Action: Read and execute `.cursor/skills/test-run/SKILL.md`
---
**Step 11 — Update Docs**
Condition: the autopilot state shows Step 10 (Run Tests) is completed AND the autopilot state does NOT show Step 11 (Update Docs) as completed AND `_docs/02_document/` contains existing documentation (module or component docs)
Action: Read and execute `.cursor/skills/document/SKILL.md` in **Task mode**. Pass all task spec files from `_docs/02_tasks/done/` that were implemented in the current cycle (i.e., tasks moved to `done/` during Steps 8–9 of this cycle).
The document skill in Task mode:
1. Reads each task spec to identify changed source files
2. Updates affected module docs, component docs, and system-level docs
3. Does NOT redo full discovery, verification, or problem extraction
If `_docs/02_document/` does not contain existing docs (e.g., documentation step was skipped), mark Step 11 as `skipped`.
After completion, auto-chain to Step 12 (Security Audit).
---
**Step 12 — Security Audit (optional)**
Condition: the autopilot state shows Step 11 (Update Docs) is completed or skipped AND the autopilot state does NOT show Step 12 (Security Audit) as completed or skipped AND (`_docs/04_deploy/` does not exist or is incomplete)
Action: Present using Choose format:
```
══════════════════════════════════════
DECISION REQUIRED: Run security audit before deploy?
══════════════════════════════════════
A) Run security audit (recommended for production deployments)
B) Skip — proceed directly to deploy
══════════════════════════════════════
Recommendation: A — catches vulnerabilities before production
══════════════════════════════════════
```
- If user picks A → Read and execute `.cursor/skills/security/SKILL.md`. After completion, auto-chain to Step 13 (Performance Test).
- If user picks B → Mark Step 12 as `skipped` in the state file, auto-chain to Step 13 (Performance Test).
---
**Step 13 — Performance Test (optional)**
Condition: the autopilot state shows Step 12 (Security Audit) is completed or skipped AND the autopilot state does NOT show Step 13 (Performance Test) as completed or skipped AND (`_docs/04_deploy/` does not exist or is incomplete)
Action: Present using Choose format:
```
══════════════════════════════════════
DECISION REQUIRED: Run performance/load tests before deploy?
══════════════════════════════════════
A) Run performance tests (recommended for latency-sensitive or high-load systems)
B) Skip — proceed directly to deploy
══════════════════════════════════════
Recommendation: [A or B — base on whether acceptance criteria
include latency, throughput, or load requirements]
══════════════════════════════════════
```
- If user picks A → Run performance tests:
1. If `scripts/run-performance-tests.sh` exists (generated by the test-spec skill Phase 4), execute it
2. Otherwise, check if `_docs/02_document/tests/performance-tests.md` exists for test scenarios, detect appropriate load testing tool (k6, locust, artillery, wrk, or built-in benchmarks), and execute performance test scenarios against the running system
3. Present results vs acceptance criteria thresholds
4. If thresholds fail → present Choose format: A) Fix and re-run, B) Proceed anyway, C) Abort
5. After completion, auto-chain to Step 14 (Deploy)
- If user picks B → Mark Step 13 as `skipped` in the state file, auto-chain to Step 14 (Deploy).
---
**Step 14 — Deploy**
Condition: the autopilot state shows Step 10 (Run Tests) is completed AND (Step 11 is completed or skipped) AND (Step 12 is completed or skipped) AND (Step 13 is completed or skipped) AND (`_docs/04_deploy/` does not exist or is incomplete)
Action: Read and execute `.cursor/skills/deploy/SKILL.md`
After deployment completes, the existing-code workflow is done.
---
**Re-Entry After Completion**
Condition: the autopilot state shows `step: done` OR all steps through 14 (Deploy) are completed
Action: The project completed a full cycle. Print the status banner and automatically loop back to New Task — do NOT ask the user for confirmation:
```
══════════════════════════════════════
PROJECT CYCLE COMPLETE
══════════════════════════════════════
The previous cycle finished successfully.
Starting new feature cycle…
══════════════════════════════════════
```
Set `step: 8`, `status: not_started` in the state file, then auto-chain to Step 8 (New Task).
Note: the loop (Steps 8 → 14 → 8) ensures every feature cycle includes: New Task → Implement → Run Tests → Update Docs → Security → Performance → Deploy.
## Auto-Chain Rules
| Completed Step | Next Action |
|---------------|-------------|
| Document (1) | Auto-chain → Test Spec (2) |
| Test Spec (2) | Auto-chain → Code Testability Revision (3) |
| Code Testability Revision (3) | Auto-chain → Decompose Tests (4) |
| Decompose Tests (4) | **Session boundary** — suggest new conversation before Implement Tests |
| Implement Tests (5) | Auto-chain → Run Tests (6) |
| Run Tests (6, all pass) | Auto-chain → Refactor choice (7) |
| Refactor (7, done or skipped) | Auto-chain → New Task (8) |
| New Task (8) | **Session boundary** — suggest new conversation before Implement |
| Implement (9) | Auto-chain → Run Tests (10) |
| Run Tests (10, all pass) | Auto-chain → Update Docs (11) |
| Update Docs (11) | Auto-chain → Security Audit choice (12) |
| Security Audit (12, done or skipped) | Auto-chain → Performance Test choice (13) |
| Performance Test (13, done or skipped) | Auto-chain → Deploy (14) |
| Deploy (14) | **Workflow complete** — existing-code flow done |
## Status Summary Template
```
═══════════════════════════════════════════════════
AUTOPILOT STATUS (existing-code)
═══════════════════════════════════════════════════
Step 1 Document [DONE / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 2 Test Spec [DONE / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 3 Code Testability Rev. [DONE / SKIPPED / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 4 Decompose Tests [DONE (N tasks) / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 5 Implement Tests [DONE / IN PROGRESS (batch M) / NOT STARTED / FAILED (retry N/3)]
Step 6 Run Tests [DONE (N passed, M failed) / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 7 Refactor [DONE / SKIPPED / IN PROGRESS (phase N) / NOT STARTED / FAILED (retry N/3)]
Step 8 New Task [DONE (N tasks) / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 9 Implement [DONE / IN PROGRESS (batch M of ~N) / NOT STARTED / FAILED (retry N/3)]
Step 10 Run Tests [DONE (N passed, M failed) / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 11 Update Docs [DONE / SKIPPED / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 12 Security Audit [DONE / SKIPPED / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 13 Performance Test [DONE / SKIPPED / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 14 Deploy [DONE / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
═══════════════════════════════════════════════════
Current: Step N — Name
SubStep: M — [sub-skill internal step name]
Retry: [N/3 if retrying, omit if 0]
Action: [what will happen next]
═══════════════════════════════════════════════════
```
@@ -1,235 +0,0 @@
# Greenfield Workflow
Workflow for new projects built from scratch. Flows linearly: Problem → Research → Plan → UI Design (if applicable) → Decompose → Implement → Run Tests → Security Audit (optional) → Performance Test (optional) → Deploy.
## Step Reference Table
| Step | Name | Sub-Skill | Internal SubSteps |
|------|------|-----------|-------------------|
| 1 | Problem | problem/SKILL.md | Phase 1–4 |
| 2 | Research | research/SKILL.md | Mode A: Phase 1–4 · Mode B: Step 0–8 |
| 3 | Plan | plan/SKILL.md | Step 1–6 + Final |
| 4 | UI Design | ui-design/SKILL.md | Phase 0–8 (conditional — UI projects only) |
| 5 | Decompose | decompose/SKILL.md | Step 1–4 |
| 6 | Implement | implement/SKILL.md | (batch-driven, no fixed sub-steps) |
| 7 | Run Tests | test-run/SKILL.md | Steps 1–4 |
| 8 | Security Audit | security/SKILL.md | Phase 1–5 (optional) |
| 9 | Performance Test | (autopilot-managed) | Load/stress tests (optional) |
| 10 | Deploy | deploy/SKILL.md | Step 1–7 |
## Detection Rules
Check rules in order — first match wins.
---
**Step 1 — Problem Gathering**
Condition: `_docs/00_problem/` does not exist, OR any of these are missing/empty:
- `problem.md`
- `restrictions.md`
- `acceptance_criteria.md`
- `input_data/` (must contain at least one file)
Action: Read and execute `.cursor/skills/problem/SKILL.md`
---
**Step 2 — Research (Initial)**
Condition: `_docs/00_problem/` is complete AND `_docs/01_solution/` has no `solution_draft*.md` files
Action: Read and execute `.cursor/skills/research/SKILL.md` (will auto-detect Mode A)
---
**Research Decision** (inline gate between Step 2 and Step 3)
Condition: `_docs/01_solution/` contains `solution_draft*.md` files AND `_docs/01_solution/solution.md` does not exist AND `_docs/02_document/architecture.md` does not exist
Action: Present the current research state to the user:
- How many solution drafts exist
- Whether tech_stack.md and security_analysis.md exist
- One-line summary from the latest draft
Then present using the **Choose format**:
```
══════════════════════════════════════
DECISION REQUIRED: Research complete — next action?
══════════════════════════════════════
A) Run another research round (Mode B assessment)
B) Proceed to planning with current draft
══════════════════════════════════════
Recommendation: [A or B] — [reason based on draft quality]
══════════════════════════════════════
```
- If user picks A → Read and execute `.cursor/skills/research/SKILL.md` (will auto-detect Mode B)
- If user picks B → auto-chain to Step 3 (Plan)
---
**Step 3 — Plan**
Condition: `_docs/01_solution/` has `solution_draft*.md` files AND `_docs/02_document/architecture.md` does not exist
Action:
1. The plan skill's Prereq 2 will rename the latest draft to `solution.md` — this is handled by the plan skill itself
2. Read and execute `.cursor/skills/plan/SKILL.md`
If `_docs/02_document/` exists but is incomplete (has some artifacts but no `FINAL_report.md`), the plan skill's built-in resumability handles it.
---
**Step 4 — UI Design (conditional)**
Condition: `_docs/02_document/architecture.md` exists AND the autopilot state does NOT show Step 4 (UI Design) as completed or skipped AND the project is a UI project
**UI Project Detection** — the project is a UI project if ANY of the following are true:
- `package.json` exists in the workspace root or any subdirectory
- `*.html`, `*.jsx`, `*.tsx` files exist in the workspace
- `_docs/02_document/components/` contains a component whose `description.md` mentions UI, frontend, page, screen, dashboard, form, or view
- `_docs/02_document/architecture.md` mentions frontend, UI layer, SPA, or client-side rendering
- `_docs/01_solution/solution.md` mentions frontend, web interface, or user-facing UI
If the project is NOT a UI project → mark Step 4 as `skipped` in the state file and auto-chain to Step 5.
If the project IS a UI project → present using Choose format:
```
══════════════════════════════════════
DECISION REQUIRED: UI project detected — generate mockups?
══════════════════════════════════════
A) Generate UI mockups before decomposition (recommended)
B) Skip — proceed directly to decompose
══════════════════════════════════════
Recommendation: A — mockups before decomposition
produce better task specs for frontend components
══════════════════════════════════════
```
- If user picks A → Read and execute `.cursor/skills/ui-design/SKILL.md`. After completion, auto-chain to Step 5 (Decompose).
- If user picks B → Mark Step 4 as `skipped` in the state file, auto-chain to Step 5 (Decompose).
---
**Step 5 — Decompose**
Condition: `_docs/02_document/` contains `architecture.md` AND `_docs/02_document/components/` has at least one component AND `_docs/02_tasks/todo/` does not exist or has no task files
Action: Read and execute `.cursor/skills/decompose/SKILL.md`
If `_docs/02_tasks/` subfolders have some task files already, the decompose skill's resumability handles it.
---
**Step 6 — Implement**
Condition: `_docs/02_tasks/todo/` contains task files AND `_dependencies_table.md` exists AND `_docs/03_implementation/` does not contain any `implementation_report_*.md` file
Action: Read and execute `.cursor/skills/implement/SKILL.md`
If `_docs/03_implementation/` has batch reports, the implement skill detects completed tasks and continues. The FINAL report filename is context-dependent — see implement skill documentation for naming convention.
---
**Step 7 — Run Tests**
Condition: `_docs/03_implementation/` contains an `implementation_report_*.md` file AND the autopilot state does NOT show Step 7 (Run Tests) as completed AND (`_docs/04_deploy/` does not exist or is incomplete)
Action: Read and execute `.cursor/skills/test-run/SKILL.md`
---
**Step 8 — Security Audit (optional)**
Condition: the autopilot state shows Step 7 (Run Tests) is completed AND the autopilot state does NOT show Step 8 (Security Audit) as completed or skipped AND (`_docs/04_deploy/` does not exist or is incomplete)
Action: Present using Choose format:
```
══════════════════════════════════════
DECISION REQUIRED: Run security audit before deploy?
══════════════════════════════════════
A) Run security audit (recommended for production deployments)
B) Skip — proceed directly to deploy
══════════════════════════════════════
Recommendation: A — catches vulnerabilities before production
══════════════════════════════════════
```
- If user picks A → Read and execute `.cursor/skills/security/SKILL.md`. After completion, auto-chain to Step 9 (Performance Test).
- If user picks B → Mark Step 8 as `skipped` in the state file, auto-chain to Step 9 (Performance Test).
---
**Step 9 — Performance Test (optional)**
Condition: the autopilot state shows Step 8 (Security Audit) is completed or skipped AND the autopilot state does NOT show Step 9 (Performance Test) as completed or skipped AND (`_docs/04_deploy/` does not exist or is incomplete)
Action: Present using Choose format:
```
══════════════════════════════════════
DECISION REQUIRED: Run performance/load tests before deploy?
══════════════════════════════════════
A) Run performance tests (recommended for latency-sensitive or high-load systems)
B) Skip — proceed directly to deploy
══════════════════════════════════════
Recommendation: [A or B — base on whether acceptance criteria
include latency, throughput, or load requirements]
══════════════════════════════════════
```
- If user picks A → Run performance tests:
1. If `scripts/run-performance-tests.sh` exists (generated by the test-spec skill Phase 4), execute it
2. Otherwise, check if `_docs/02_document/tests/performance-tests.md` exists for test scenarios, detect appropriate load testing tool (k6, locust, artillery, wrk, or built-in benchmarks), and execute performance test scenarios against the running system
3. Present results vs acceptance criteria thresholds
4. If thresholds fail → present Choose format: A) Fix and re-run, B) Proceed anyway, C) Abort
5. After completion, auto-chain to Step 10 (Deploy)
- If user picks B → Mark Step 9 as `skipped` in the state file, auto-chain to Step 10 (Deploy).
---
**Step 10 — Deploy**
Condition: the autopilot state shows Step 7 (Run Tests) is completed AND (Step 8 is completed or skipped) AND (Step 9 is completed or skipped) AND (`_docs/04_deploy/` does not exist or is incomplete)
Action: Read and execute `.cursor/skills/deploy/SKILL.md`
---
**Done**
Condition: `_docs/04_deploy/` contains all expected artifacts (containerization.md, ci_cd_pipeline.md, environment_strategy.md, observability.md, deployment_procedures.md, deploy_scripts.md)
Action: Report project completion with summary. If the user runs autopilot again after greenfield completion, Flow Resolution rule 3 routes to the existing-code flow (re-entry after completion) so they can add new features.
## Auto-Chain Rules
| Completed Step | Next Action |
|---------------|-------------|
| Problem (1) | Auto-chain → Research (2) |
| Research (2) | Auto-chain → Research Decision (ask user: another round or proceed?) |
| Research Decision → proceed | Auto-chain → Plan (3) |
| Plan (3) | Auto-chain → UI Design detection (4) |
| UI Design (4, done or skipped) | Auto-chain → Decompose (5) |
| Decompose (5) | **Session boundary** — suggest new conversation before Implement |
| Implement (6) | Auto-chain → Run Tests (7) |
| Run Tests (7, all pass) | Auto-chain → Security Audit choice (8) |
| Security Audit (8, done or skipped) | Auto-chain → Performance Test choice (9) |
| Performance Test (9, done or skipped) | Auto-chain → Deploy (10) |
| Deploy (10) | Report completion |
## Status Summary Template
```
═══════════════════════════════════════════════════
AUTOPILOT STATUS (greenfield)
═══════════════════════════════════════════════════
Step 1 Problem [DONE / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 2 Research [DONE (N drafts) / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 3 Plan [DONE / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 4 UI Design [DONE / SKIPPED / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 5 Decompose [DONE (N tasks) / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 6 Implement [DONE / IN PROGRESS (batch M of ~N) / NOT STARTED / FAILED (retry N/3)]
Step 7 Run Tests [DONE (N passed, M failed) / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 8 Security Audit [DONE / SKIPPED / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 9 Performance Test [DONE / SKIPPED / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 10 Deploy [DONE / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
═══════════════════════════════════════════════════
Current: Step N — Name
SubStep: M — [sub-skill internal step name]
Retry: [N/3 if retrying, omit if 0]
Action: [what will happen next]
═══════════════════════════════════════════════════
```
-92
@@ -1,92 +0,0 @@
# Autopilot State Management
## State File: `_docs/_autopilot_state.md`
The autopilot persists its position to `_docs/_autopilot_state.md`. This is a lightweight pointer — only the current step. All history lives in `_docs/` artifacts and git log. Folder scanning is the fallback when the state file doesn't exist.
### Template
```markdown
# Autopilot State
## Current Step
flow: [greenfield | existing-code]
step: [1-10 for greenfield, 1-13 for existing-code, or "done"]
name: [step name from the active flow's Step Reference Table]
status: [not_started / in_progress / completed / skipped / failed]
sub_step: [0, or sub-skill internal step number + name if interrupted mid-step]
retry_count: [0-3 — consecutive auto-retry attempts, reset to 0 on success]
```
### Examples
```
flow: greenfield
step: 3
name: Plan
status: in_progress
sub_step: 4 — Architecture Review & Risk Assessment
retry_count: 0
```
```
flow: existing-code
step: 2
name: Test Spec
status: failed
sub_step: 1b — Test Case Generation
retry_count: 3
```
### State File Rules
1. **Create** on the first autopilot invocation (after state detection determines Step 1)
2. **Update** after every change — this includes: batch completion, sub-step progress, step completion, session boundary, failed retry, or any meaningful state transition. The state file must always reflect the current reality.
3. **Read** as the first action on every invocation — before folder scanning
4. **Cross-check**: verify against actual `_docs/` folder contents. If they disagree, trust the folder structure and update the state file
5. **Never delete** the state file
6. **Retry tracking**: increment `retry_count` on each failed auto-retry; reset to `0` on success. If `retry_count` reaches 3, set `status: failed`
7. **Failed state on re-entry**: if `status: failed` with `retry_count: 3`, do NOT auto-retry — present the issue to the user first
8. **Skill-internal state**: when the active skill maintains its own state file (e.g., document skill's `_docs/02_document/state.json`), the autopilot's `sub_step` field should reflect the skill's internal progress. On re-entry, cross-check the skill's state file against the autopilot's `sub_step` for consistency.
## State Detection
Read `_docs/_autopilot_state.md` first. If it exists and is consistent with the folder structure, use the `Current Step` from the state file. If the state file doesn't exist or is inconsistent, fall back to folder scanning.
### Folder Scan Rules (fallback)
Scan `_docs/` to determine the current workflow position. The detection rules are defined in each flow file (`flows/greenfield.md` and `flows/existing-code.md`). Check the existing-code flow first (Step 1 detection), then greenfield flow rules. First match wins.
## Re-Entry Protocol
When the user invokes `/autopilot` and work already exists:
1. Read `_docs/_autopilot_state.md`
2. Cross-check against `_docs/` folder structure
3. Present Status Summary (use the active flow's Status Summary Template)
4. If the detected step has a sub-skill with built-in resumability, the sub-skill handles mid-step recovery
5. Continue execution from detected state
## Session Boundaries
After any decompose/planning step completes, **do not auto-chain to implement**. Instead:
1. Update state file: mark the step as completed, set current step to the next implement step with status `not_started`
- Existing-code flow: After Step 4 (Decompose Tests) → set current step to 5 (Implement Tests)
- Existing-code flow: After Step 8 (New Task) → set current step to 9 (Implement)
- Greenfield flow: After Step 5 (Decompose) → set current step to 6 (Implement)
2. Present a summary: number of tasks, estimated batches, total complexity points
3. Use Choose format:
```
══════════════════════════════════════
DECISION REQUIRED: Decompose complete — start implementation?
══════════════════════════════════════
A) Start a new conversation for implementation (recommended for context freshness)
B) Continue implementation in this conversation
══════════════════════════════════════
Recommendation: A — implementation is the longest phase, fresh context helps
══════════════════════════════════════
```
These are the only hard session boundaries. All other transitions auto-chain.
+62 -1
@@ -50,6 +50,18 @@ For each task, verify implementation satisfies every acceptance criterion:
- Flag any AC that is not demonstrably satisfied as a **Spec-Gap** finding (severity: High)
- Flag any scope creep (implementation beyond what the spec asked for) as a **Scope** finding (severity: Low)
**Contract verification** (for shared-models / shared-API tasks — any task with a `## Contract` section):
- Verify the referenced contract file exists at the stated path under `_docs/02_document/contracts/`.
- Verify the implementation's public signatures (types, method shapes, endpoint paths, error variants) match the contract's **Shape** section.
- Verify invariants from the contract's **Invariants** section are enforced in code (either structurally via types or via runtime checks with tests).
- If the implementation and the contract disagree, emit a **Spec-Gap** finding (High severity) and note which side is drifting.
**Consumer-side contract verification** (for tasks whose Dependencies list a contract file):
- Verify the consumer's imports and call sites match the contract's Shape.
- If they diverge, emit a **Spec-Gap** finding (High severity) with a hint that the consumer, the contract, or the producer is drifting.
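A rough sketch of how the Shape comparison in the two contract checks above could be mechanized for HTTP-style contracts (Python; the `### Shape` heading, the backticked `METHOD /path` convention, and the decorator-based route regex are all assumptions — adjust to the real contract layout and framework):

```python
import re
from pathlib import Path

def endpoints_in_contract(contract_path: Path) -> set[tuple[str, str]]:
    """Collect (METHOD, path) pairs listed in the contract's Shape section."""
    text = contract_path.read_text()
    shape = re.split(r"^#+\s*Shape\s*$", text, flags=re.M)[-1]
    shape = re.split(r"^#+\s", shape, flags=re.M)[0]   # stop at the next heading
    return set(re.findall(r"`(GET|POST|PUT|PATCH|DELETE)\s+(/[^\s`]*)`", shape))

def endpoints_in_code(src_files: list[Path]) -> set[tuple[str, str]]:
    """Very rough route detection for decorator-style frameworks."""
    found = set()
    for f in src_files:
        for method, path in re.findall(r"@\w+\.(get|post|put|patch|delete)\(\"(/[^\"]*)\"",
                                       f.read_text()):
            found.add((method.upper(), path))
    return found

def contract_findings(contract_path: Path, src_files: list[Path]) -> list[str]:
    declared = endpoints_in_contract(contract_path)
    implemented = endpoints_in_code(src_files)
    return [f"Spec-Gap (High): {m} {p} present on only one side of the contract"
            for m, p in sorted(declared ^ implemented)]
```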
## Phase 3: Code Quality Review
Check implemented code against quality standards:
@@ -92,6 +104,53 @@ When multiple tasks were implemented in the same batch:
- Shared code is not duplicated across task implementations
- Dependencies declared in task specs are properly wired
## Phase 7: Architecture Compliance
Verify the implemented code respects the architecture documented in `_docs/02_document/architecture.md` and the component boundaries declared in `_docs/02_document/module-layout.md`.
**Inputs**:
- `_docs/02_document/architecture.md` — layering, allowed dependencies, patterns
- `_docs/02_document/module-layout.md` — per-component directories, Public API surface, `Imports from` lists, Allowed Dependencies table
- The cumulative list of changed files (for per-batch invocation) or the full codebase (for baseline invocation)
**Checks**:
1. **Layer direction**: for each import in a changed file, resolve the importer's layer (from the Allowed Dependencies table) and the importee's layer. Flag any import where the importee's layer is strictly higher than the importer's. Severity: High. Category: Architecture.
2. **Public API respect**: for each cross-component import, verify the imported symbol lives in the target component's Public API file list (from `module-layout.md`). Importing an internal file of another component is an Architecture finding. Severity: High.
3. **No new cyclic module dependencies**: build a module-level import graph of the changed files plus their direct dependencies. Flag any new cycle introduced by this batch. Severity: Critical (cycles are structurally hard to undo once wired). Category: Architecture.
4. **Duplicate symbols across components**: scan changed files for class, function, or constant names that also appear in another component's code AND do not share an interface. If a shared abstraction was expected (via cross-cutting epic or shared/*), flag it. Severity: High. Category: Architecture.
5. **Cross-cutting concerns not locally re-implemented**: if a file under a component directory contains logic that should live in `shared/<concern>/` (e.g., custom logging setup, config loader, error envelope), flag it. Severity: Medium. Category: Architecture.
**Detection approach (per language)**:
- Python: parse `import` / `from ... import` statements; optionally AST with `ast` module for reliable symbol resolution.
- TypeScript / JavaScript: parse `import ... from '...'` and `require('...')`; resolve via `tsconfig.json` paths.
- C#: parse `using` directives and fully-qualified type references; respect `.csproj` ProjectReference layering.
- Rust: parse `use <crate>::` and `mod` declarations; respect `Cargo.toml` workspace members.
- Go: parse `import` blocks; respect module path ownership.
If a static analyzer tool is available on the project (ArchUnit, NsDepCop, tach, eslint-plugin-boundaries, etc.), prefer invoking it and parsing its output over hand-rolled analysis.
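For a Python codebase, checks 1 and 3 above could start out roughly like this (illustrative only; the `LAYERS` map is assumed to be derived from `module-layout.md`'s Allowed Dependencies table, and mapping an import to a component is deliberately naive):

```python
import ast
from pathlib import Path

LAYERS = {"shared": 0, "storage": 1, "domain": 2, "api": 3}   # assumed example layering

def project_imports(file_path: Path) -> set[str]:
    """Top-level module names imported by a file."""
    tree = ast.parse(file_path.read_text())
    names = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            names.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            names.add(node.module.split(".")[0])
    return names

def layer_findings(file_path: Path, owner: str) -> list[str]:
    """Check 1: flag imports that point at a strictly higher layer than the importer."""
    return [f"Architecture (High): {file_path} ({owner}) imports higher layer '{target}'"
            for target in project_imports(file_path)
            if target in LAYERS and LAYERS[target] > LAYERS[owner]]

def cycles_through(graph: dict[str, set[str]], changed: set[str]) -> list[list[str]]:
    """Check 3 (simplified): import cycles that pass through a changed module."""
    cycles = []
    def walk(node, path, seen):
        for dep in graph.get(node, ()):
            if dep in path:
                cycles.append(path[path.index(dep):] + [dep])
            elif dep not in seen:
                seen.add(dep)
                walk(dep, path + [dep], seen)
    for start in changed:
        walk(start, [start], {start})
    return cycles
```

A real pass should still defer to ArchUnit, tach, eslint-plugin-boundaries, or similar where available, as noted above.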
**Invocation modes**:
- **Full mode** (default when invoked by the implement skill per batch): all 7 phases run.
- **Baseline mode**: Phase 1 + Phase 7 only. Used for one-time architecture scan of an existing codebase (see existing-code flow Step 2 — Architecture Baseline Scan). Produces `_docs/02_document/architecture_compliance_baseline.md` instead of a batch review report.
- **Cumulative mode**: all 7 phases on the union of changed files since the last cumulative review. Used mid-implementation (see implement skill Step 14.5).
**Baseline delta** (cumulative mode + full mode, when `_docs/02_document/architecture_compliance_baseline.md` exists):
After the seven phases produce the current Architecture findings list, partition those findings against the baseline:
- **Carried over**: a finding whose `(file, category, rule)` triple matches an entry in the baseline. Not new; still present.
- **Resolved**: a baseline entry whose `(file, category, rule)` triple is NOT in the current findings AND whose target file is in scope of this review. The team fixed it.
- **Newly introduced**: a current finding that was not in the baseline. The review cycle created this.
Emit a `## Baseline Delta` section in the report with three tables (Carried over, Resolved, Newly introduced) and per-category counts. The verdict logic does not change — Critical / High still drive FAIL. The delta is additional signal for the user and feeds the retrospective's structural metrics.
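As a sketch, the partition is plain set arithmetic over the `(file, category, rule)` triples (how findings are serialized into the baseline file is up to the skill):

```python
def baseline_delta(baseline: set[tuple], current: set[tuple], in_scope: set[str]):
    """Split current Architecture findings against the baseline.

    Each finding is a (file, category, rule) triple; in_scope is the set of
    files covered by this review.
    """
    carried_over = current & baseline
    newly_introduced = current - baseline
    resolved = {f for f in baseline - current if f[0] in in_scope}
    return carried_over, resolved, newly_introduced
```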
## Output Format
Produce a structured report with findings deduplicated and sorted by severity:
@@ -136,7 +195,9 @@ Produce a structured report with findings deduplicated and sorted by severity:
## Category Values
Bug, Spec-Gap, Security, Performance, Maintainability, Style, Scope Bug, Spec-Gap, Security, Performance, Maintainability, Style, Scope, Architecture
`Architecture` findings come from Phase 7. They indicate layering violations, Public API bypasses, new cyclic dependencies, duplicate symbols, or cross-cutting concerns re-implemented locally.
## Verdict Logic
+39 -173
@@ -33,31 +33,41 @@ Decompose planned components into atomic, implementable task specs with a bootst
Determine the operating mode based on invocation before any other logic runs.
**Default** (no explicit input file provided):
- DOCUMENT_DIR: `_docs/02_document/`
- TASKS_DIR: `_docs/02_tasks/`
- TASKS_TODO: `_docs/02_tasks/todo/`
- Reads from: `_docs/00_problem/`, `_docs/01_solution/`, DOCUMENT_DIR
- Runs Step 1 (bootstrap) + Step 2 (all components) + Step 3 (blackbox tests) + Step 4 (cross-verification)
**Single component mode** (provided file is within `_docs/02_document/` and inside a `components/` subdirectory):
- DOCUMENT_DIR: `_docs/02_document/`
- TASKS_DIR: `_docs/02_tasks/`
- TASKS_TODO: `_docs/02_tasks/todo/`
- Derive component number and component name from the file path
- Ask user for the parent Epic ID
- Runs Step 2 (that component only, appending to existing task numbering)
**Tests-only mode** (provided file/directory is within `tests/`, or `DOCUMENT_DIR/tests/` exists and input explicitly requests test decomposition):
- DOCUMENT_DIR: `_docs/02_document/`
- TASKS_DIR: `_docs/02_tasks/`
- TASKS_TODO: `_docs/02_tasks/todo/`
- TESTS_DIR: `DOCUMENT_DIR/tests/`
- Reads from: `_docs/00_problem/`, `_docs/01_solution/`, TESTS_DIR
- Runs Step 1t (test infrastructure bootstrap) + Step 3 (blackbox test decomposition) + Step 4 (cross-verification against test coverage)
- Skips Step 1 (project bootstrap) and Step 2 (component decomposition) — the codebase already exists
Announce the detected mode and resolved paths to the user before proceeding.
### Step Applicability by Mode
| Step | File | Default | Single | Tests-only |
|------|------|:-------:|:------:|:----------:|
| 1 Bootstrap Structure | `steps/01_bootstrap-structure.md` | ✓ | — | — |
| 1t Test Infrastructure | `steps/01t_test-infrastructure.md` | — | — | ✓ |
| 1.5 Module Layout | `steps/01-5_module-layout.md` | ✓ | — | — |
| 2 Task Decomposition | `steps/02_task-decomposition.md` | ✓ | ✓ | — |
| 3 Blackbox Test Tasks | `steps/03_blackbox-test-decomposition.md` | ✓ | — | ✓ |
| 4 Cross-Verification | `steps/04_cross-verification.md` | ✓ | — | ✓ |
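A sketch of the mode resolution and the resulting step list (simplified — the real rules above also cover the case where `DOCUMENT_DIR/tests/` exists and the invocation explicitly asks for test decomposition):

```python
from pathlib import Path

STEPS_BY_MODE = {                       # mirrors the Step Applicability table
    "default":    ["1", "1.5", "2", "3", "4"],
    "single":     ["2"],
    "tests-only": ["1t", "3", "4"],
}

def detect_mode(input_path: Path | None) -> str:
    if input_path is None:
        return "default"
    parts = input_path.parts
    if "tests" in parts:
        return "tests-only"
    if "components" in parts and "02_document" in parts:
        return "single"
    return "default"
```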
## Input Specification
### Required Files
@@ -101,14 +111,17 @@ Announce the detected mode and resolved paths to the user before proceeding.
### Prerequisite Checks (BLOCKING)
**Default:**
1. DOCUMENT_DIR contains `architecture.md` and `components/` — **STOP if missing**
2. Create TASKS_DIR and TASKS_TODO if they do not exist
3. If TASKS_DIR subfolders (`todo/`, `backlog/`, `done/`) already contain task files, ask user: **resume from last checkpoint or start fresh?**
**Single component mode:**
1. The provided component file exists and is non-empty — **STOP if missing**
**Tests-only mode:**
1. `TESTS_DIR/blackbox-tests.md` exists and is non-empty — **STOP if missing**
2. `TESTS_DIR/environment.md` exists — **STOP if missing**
3. Create TASKS_DIR and TASKS_TODO if they do not exist
@@ -136,6 +149,7 @@ TASKS_DIR/
| Step | Save immediately after | Filename |
|------|------------------------|----------|
| Step 1 | Bootstrap structure plan complete + work item ticket created + file renamed | `todo/[TRACKER-ID]_initial_structure.md` |
| Step 1.5 | Module layout written | `_docs/02_document/module-layout.md` |
| Step 1t | Test infrastructure bootstrap complete + work item ticket created + file renamed | `todo/[TRACKER-ID]_test_infrastructure.md` |
| Step 2 | Each component task decomposed + work item ticket created + file renamed | `todo/[TRACKER-ID]_[short_name].md` |
| Step 3 | Each blackbox test task decomposed + work item ticket created + file renamed | `todo/[TRACKER-ID]_[short_name].md` |
@@ -151,193 +165,43 @@ If TASKS_DIR subfolders already contain task files:
## Progress Tracking
At the start of execution, create a TodoWrite with all applicable steps. Update status as each step/component completes. At the start of execution, create a TodoWrite with all applicable steps for the detected mode (see Step Applicability table). Update status as each step/component completes.
## Workflow
### Step 1t: Test Infrastructure Bootstrap (tests-only mode only) ### Step 1: Bootstrap Structure Plan (default mode only)
**Role**: Professional Quality Assurance Engineer Read and follow `steps/01_bootstrap-structure.md`.
**Goal**: Produce `01_test_infrastructure.md` — the first task describing the test project scaffold
**Constraints**: This is a plan document, not code. The `/implement` skill executes it.
1. Read `TESTS_DIR/environment.md` and `TESTS_DIR/test-data.md`
2. Read problem.md, restrictions.md, acceptance_criteria.md for domain context
3. Document the test infrastructure plan using `templates/test-infrastructure-task.md`
The test infrastructure bootstrap must include:
- Test project folder layout (`e2e/` directory structure)
- Mock/stub service definitions for each external dependency
- `docker-compose.test.yml` structure from environment.md
- Test runner configuration (framework, plugins, fixtures)
- Test data fixture setup from test-data.md seed data sets
- Test reporting configuration (format, output path)
- Data isolation strategy
**Self-verification**:
- [ ] Every external dependency from environment.md has a mock service defined
- [ ] Docker Compose structure covers all services from environment.md
- [ ] Test data fixtures cover all seed data sets from test-data.md
- [ ] Test runner configuration matches the consumer app tech stack from environment.md
- [ ] Data isolation strategy is defined
**Save action**: Write `todo/01_test_infrastructure.md` (temporary numeric name)
**Tracker action**: Create a work item ticket for this task under the "Blackbox Tests" epic. Write the work item ticket ID and Epic ID back into the task header.
**Rename action**: Rename the file from `todo/01_test_infrastructure.md` to `todo/[TRACKER-ID]_test_infrastructure.md`. Update the **Task** field inside the file to match the new filename.
**BLOCKING**: Present test infrastructure plan summary to user. Do NOT proceed until user confirms.
---
### Step 1: Bootstrap Structure Plan (default mode only) ### Step 1t: Test Infrastructure Bootstrap (tests-only mode only)
**Role**: Professional software architect Read and follow `steps/01t_test-infrastructure.md`.
**Goal**: Produce `01_initial_structure.md` — the first task describing the project skeleton
**Constraints**: This is a plan document, not code. The `/implement` skill executes it.
1. Read architecture.md, all component specs, system-flows.md, data_model.md, and `deployment/` from DOCUMENT_DIR ---
2. Read problem, solution, and restrictions from `_docs/00_problem/` and `_docs/01_solution/`
3. Research best implementation patterns for the identified tech stack
4. Document the structure plan using `templates/initial-structure-task.md`
The bootstrap structure plan must include: ### Step 1.5: Module Layout (default mode only)
- Project folder layout with all component directories
- Shared models, interfaces, and DTOs
- Dockerfile per component (multi-stage, non-root, health checks, pinned base images)
- `docker-compose.yml` for local development (all components + database + dependencies)
- `docker-compose.test.yml` for blackbox test environment (blackbox test runner)
- `.dockerignore`
- CI/CD pipeline file (`.github/workflows/ci.yml` or `azure-pipelines.yml`) with stages from `deployment/ci_cd_pipeline.md`
- Database migration setup and initial seed data scripts
- Observability configuration: structured logging setup, health check endpoints (`/health/live`, `/health/ready`), metrics endpoint (`/metrics`)
- Environment variable documentation (`.env.example`)
- Test structure with unit and blackbox test locations
**Self-verification**: Read and follow `steps/01-5_module-layout.md`.
- [ ] All components have corresponding folders in the layout
- [ ] All inter-component interfaces have DTOs defined
- [ ] Dockerfile defined for each component
- [ ] `docker-compose.yml` covers all components and dependencies
- [ ] `docker-compose.test.yml` enables blackbox testing
- [ ] CI/CD pipeline file defined with lint, test, security, build, deploy stages
- [ ] Database migration setup included
- [ ] Health check endpoints specified for each service
- [ ] Structured logging configuration included
- [ ] `.env.example` with all required environment variables
- [ ] Environment strategy covers dev, staging, production
- [ ] Test structure includes unit and blackbox test locations
**Save action**: Write `todo/01_initial_structure.md` (temporary numeric name)
**Tracker action**: Create a work item ticket for this task under the "Bootstrap & Initial Structure" epic. Write the work item ticket ID and Epic ID back into the task header.
**Rename action**: Rename the file from `todo/01_initial_structure.md` to `todo/[TRACKER-ID]_initial_structure.md` (e.g., `todo/AZ-42_initial_structure.md`). Update the **Task** field inside the file to match the new filename.
**BLOCKING**: Present structure plan summary to user. Do NOT proceed until user confirms.
--- ---
### Step 2: Task Decomposition (default and single component modes) ### Step 2: Task Decomposition (default and single component modes)
**Role**: Professional software architect Read and follow `steps/02_task-decomposition.md`.
**Goal**: Decompose each component into atomic, implementable task specs — numbered sequentially starting from 02
**Constraints**: Behavioral specs only — describe what, not how. No implementation code.
**Numbering**: Tasks are numbered sequentially across all components in dependency order. Start from 02 (01 is initial_structure). In single component mode, start from the next available number in TASKS_DIR.
**Component ordering**: Process components in dependency order — foundational components first (shared models, database), then components that depend on them.
For each component (or the single provided component):
1. Read the component's `description.md` and `tests.md` (if available)
2. Decompose into atomic tasks; create only 1 task if the component is simple or atomic
3. Split into multiple tasks only when it is necessary and would be easier to implement
4. Do not create tasks for other components — only tasks for the current component
5. Each task should be atomic, containing 0 APIs or a list of semantically connected APIs
6. Write each task spec using `templates/task.md`
7. Estimate complexity per task (1, 2, 3, 5, 8 points); no task should exceed 8 points — split if it does
8. Note task dependencies (referencing tracker IDs of already-created dependency tasks, e.g., `AZ-42_initial_structure`)
9. **Immediately after writing each task file**: create a work item ticket, link it to the component's epic, write the work item ticket ID and Epic ID back into the task header, then rename the file from `todo/[##]_[short_name].md` to `todo/[TRACKER-ID]_[short_name].md`.
**Self-verification** (per component):
- [ ] Every task is atomic (single concern)
- [ ] No task exceeds 8 complexity points
- [ ] Task dependencies reference correct tracker IDs
- [ ] Tasks cover all interfaces defined in the component spec
- [ ] No tasks duplicate work from other components
- [ ] Every task has a work item ticket linked to the correct epic
**Save action**: Write each `todo/[##]_[short_name].md` (temporary numeric name), create work item ticket inline, then rename to `todo/[TRACKER-ID]_[short_name].md`. Update the **Task** field inside the file to match the new filename. Update **Dependencies** references in the file to use tracker IDs of the dependency tasks.
--- ---
### Step 3: Blackbox Test Task Decomposition (default and tests-only modes) ### Step 3: Blackbox Test Task Decomposition (default and tests-only modes)
**Role**: Professional Quality Assurance Engineer Read and follow `steps/03_blackbox-test-decomposition.md`.
**Goal**: Decompose blackbox test specs into atomic, implementable task specs
**Constraints**: Behavioral specs only — describe what, not how. No test code.
**Numbering**:
- In default mode: continue sequential numbering from where Step 2 left off.
- In tests-only mode: start from 02 (01 is the test infrastructure bootstrap from Step 1t).
1. Read all test specs from `DOCUMENT_DIR/tests/` (`blackbox-tests.md`, `performance-tests.md`, `resilience-tests.md`, `security-tests.md`, `resource-limit-tests.md`)
2. Group related test scenarios into atomic tasks (e.g., one task per test category or per component under test)
3. Each task should reference the specific test scenarios it implements and the environment/test-data specs
4. Dependencies:
- In default mode: blackbox test tasks depend on the component implementation tasks they exercise
- In tests-only mode: blackbox test tasks depend on the test infrastructure bootstrap task (Step 1t)
5. Write each task spec using `templates/task.md`
6. Estimate complexity per task (1, 2, 3, 5, 8 points); no task should exceed 8 points — split if it does
7. Note task dependencies (referencing tracker IDs of already-created dependency tasks)
8. **Immediately after writing each task file**: create a work item ticket under the "Blackbox Tests" epic, write the work item ticket ID and Epic ID back into the task header, then rename the file from `todo/[##]_[short_name].md` to `todo/[TRACKER-ID]_[short_name].md`.
**Self-verification**:
- [ ] Every scenario from `tests/blackbox-tests.md` is covered by a task
- [ ] Every scenario from `tests/performance-tests.md`, `tests/resilience-tests.md`, `tests/security-tests.md`, and `tests/resource-limit-tests.md` is covered by a task
- [ ] No task exceeds 8 complexity points
- [ ] Dependencies correctly reference the dependency tasks (component tasks in default mode, test infrastructure in tests-only mode)
- [ ] Every task has a work item ticket linked to the "Blackbox Tests" epic
**Save action**: Write each `todo/[##]_[short_name].md` (temporary numeric name), create work item ticket inline, then rename to `todo/[TRACKER-ID]_[short_name].md`.
--- ---
### Step 4: Cross-Task Verification (default and tests-only modes) ### Step 4: Cross-Task Verification (default and tests-only modes)
**Role**: Professional software architect and analyst Read and follow `steps/04_cross-verification.md`.
**Goal**: Verify task consistency and produce `_dependencies_table.md`
**Constraints**: Review step — fix gaps found, do not add new tasks
1. Verify task dependencies across all tasks are consistent
2. Check no gaps:
- In default mode: every interface in architecture.md has tasks covering it
- In tests-only mode: every test scenario in `traceability-matrix.md` is covered by a task
3. Check no overlaps: tasks don't duplicate work
4. Check no circular dependencies in the task graph
5. Produce `_dependencies_table.md` using `templates/dependencies-table.md`
**Self-verification**:
Default mode:
- [ ] Every architecture interface is covered by at least one task
- [ ] No circular dependencies in the task graph
- [ ] Cross-component dependencies are explicitly noted in affected task specs
- [ ] `_dependencies_table.md` contains every task with correct dependencies
Tests-only mode:
- [ ] Every test scenario from traceability-matrix.md "Covered" entries has a corresponding task
- [ ] No circular dependencies in the task graph
- [ ] Test task dependencies reference the test infrastructure bootstrap
- [ ] `_dependencies_table.md` contains every task with correct dependencies
**Save action**: Write `_dependencies_table.md`
**BLOCKING**: Present dependency summary to user. Do NOT proceed until user confirms.
---
## Common Mistakes ## Common Mistakes
@@ -371,22 +235,24 @@ Tests-only mode:
│ CONTEXT: Resolve mode (default / single component / tests-only) │ │ CONTEXT: Resolve mode (default / single component / tests-only) │
│ │ │ │
│ DEFAULT MODE: │ │ DEFAULT MODE: │
│ 1. Bootstrap Structure → [TRACKER-ID]_initial_structure.md │ │ 1. Bootstrap Structure → steps/01_bootstrap-structure.md │
│ [BLOCKING: user confirms structure] │ │ [BLOCKING: user confirms structure] │
│ 2. Component Tasks → [TRACKER-ID]_[short_name].md each │ │ 1.5 Module Layout → steps/01-5_module-layout.md │
│ 3. Blackbox Tests → [TRACKER-ID]_[short_name].md each │ │ [BLOCKING: user confirms layout] │
│ 4. Cross-Verification → _dependencies_table.md │ │ 2. Component Tasks → steps/02_task-decomposition.md │
│ 3. Blackbox Tests → steps/03_blackbox-test-decomposition.md │
│ 4. Cross-Verification → steps/04_cross-verification.md │
│ [BLOCKING: user confirms dependencies] │ │ [BLOCKING: user confirms dependencies] │
│ │ │ │
│ TESTS-ONLY MODE: │ │ TESTS-ONLY MODE: │
│ 1t. Test Infrastructure → [TRACKER-ID]_test_infrastructure.md │ │ 1t. Test Infrastructure → steps/01t_test-infrastructure.md │
│ [BLOCKING: user confirms test scaffold] │ │ [BLOCKING: user confirms test scaffold] │
│ 3. Blackbox Tests → [TRACKER-ID]_[short_name].md each │ │ 3. Blackbox Tests → steps/03_blackbox-test-decomposition.md │
│ 4. Cross-Verification → _dependencies_table.md │ │ 4. Cross-Verification → steps/04_cross-verification.md │
│ [BLOCKING: user confirms dependencies] │ │ [BLOCKING: user confirms dependencies] │
│ │ │ │
│ SINGLE COMPONENT MODE: │ │ SINGLE COMPONENT MODE: │
│ 2. Component Tasks → [TRACKER-ID]_[short_name].md each │ │ 2. Component Tasks → steps/02_task-decomposition.md │
├────────────────────────────────────────────────────────────────┤ ├────────────────────────────────────────────────────────────────┤
│ Principles: Atomic tasks · Behavioral specs · Flat structure │ │ Principles: Atomic tasks · Behavioral specs · Flat structure │
│ Tracker inline · Rename to tracker ID · Save now · Ask don't assume│ │ Tracker inline · Rename to tracker ID · Save now · Ask don't assume│
@@ -0,0 +1,36 @@
# Step 1.5: Module Layout (default mode only)
**Role**: Professional software architect
**Goal**: Produce `_docs/02_document/module-layout.md` — the authoritative file-ownership map used by the implement skill. Separates **behavioral** task specs (no file paths) from **structural** file mapping (no behavior).
**Constraints**: Follow the target language's standard project-layout conventions. Do not invent non-standard directory structures.
## Steps
1. Detect the target language from `DOCUMENT_DIR/architecture.md` and the bootstrap structure plan produced in Step 1.
2. Apply the language's conventional layout (see table in `templates/module-layout.md`):
- Python → `src/<pkg>/<component>/`
- C# → `src/<Component>/`
- Rust → `crates/<component>/`
- TypeScript / React → `src/<component>/` with `index.ts` barrel
- Go → `internal/<component>/` or `pkg/<component>/`
3. Each component owns ONE top-level directory. Shared code goes under `<root>/shared/` (or language equivalent).
4. Public API surface = files in the layout's `public:` list for each component; everything else is internal and MUST NOT be imported from other components.
5. Cross-cutting concerns (logging, error handling, config, telemetry, auth middleware, feature flags, i18n) each get ONE entry under Shared / Cross-Cutting; per-component tasks consume them (see Step 2 cross-cutting rule).
6. Write `_docs/02_document/module-layout.md` using `templates/module-layout.md` format.
## Self-verification
- [ ] Every component in `DOCUMENT_DIR/components/` has a Per-Component Mapping entry
- [ ] Every shared / cross-cutting concern has a Shared section entry
- [ ] Layering table covers every component (shared at the bottom)
- [ ] No component's `Imports from` list points at a higher layer
- [ ] Paths follow the detected language's convention
- [ ] No two components own overlapping paths
## Save action
Write `_docs/02_document/module-layout.md`.
## Blocking
**BLOCKING**: Present layout summary to user. Do NOT proceed to Step 2 until user confirms. The implement skill depends on this file; inconsistencies here cause file-ownership conflicts at batch time.
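Two of the checks above (layering coverage and no upward imports) are easy to get wrong by eye once there are more than a few components. A minimal sketch of an automated check, assuming the layering table has been transcribed into plain dicts; the `LAYERS`/`IMPORTS` sample data is hypothetical and not part of the template:

```python
# Minimal sketch: verify no component imports from a higher layer.
# LAYERS and IMPORTS are hypothetical stand-ins for data transcribed
# from module-layout.md; lower numbers are more foundational layers.
LAYERS = {"shared": 1, "domain": 2, "application": 3, "api": 4}
IMPORTS = {"api": ["application", "domain", "shared"],
           "application": ["domain", "shared"],
           "domain": ["shared"],
           "shared": []}

def layering_violations(layers: dict[str, int], imports: dict[str, list[str]]) -> list[str]:
    """Return human-readable descriptions of upward imports."""
    violations = []
    for component, deps in imports.items():
        for dep in deps:
            if layers[dep] > layers[component]:
                violations.append(
                    f"{component} (layer {layers[component]}) imports {dep} (layer {layers[dep]})"
                )
    return violations

if __name__ == "__main__":
    for v in layering_violations(LAYERS, IMPORTS):
        print("VIOLATION:", v)
```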
@@ -0,0 +1,57 @@
# Step 1: Bootstrap Structure Plan (default mode only)
**Role**: Professional software architect
**Goal**: Produce `01_initial_structure.md` — the first task describing the project skeleton.
**Constraints**: This is a plan document, not code. The `/implement` skill executes it.
## Steps
1. Read `architecture.md`, all component specs, `system-flows.md`, `data_model.md`, and `deployment/` from DOCUMENT_DIR
2. Read problem, solution, and restrictions from `_docs/00_problem/` and `_docs/01_solution/`
3. Research best implementation patterns for the identified tech stack
4. Document the structure plan using `templates/initial-structure-task.md`
The bootstrap structure plan must include:
- Project folder layout with all component directories
- Shared models, interfaces, and DTOs
- Dockerfile per component (multi-stage, non-root, health checks, pinned base images)
- `docker-compose.yml` for local development (all components + database + dependencies)
- `docker-compose.test.yml` for blackbox test environment (blackbox test runner)
- `.dockerignore`
- CI/CD pipeline file (`.github/workflows/ci.yml` or `azure-pipelines.yml`) with stages from `deployment/ci_cd_pipeline.md`
- Database migration setup and initial seed data scripts
- Observability configuration: structured logging setup, health check endpoints (`/health/live`, `/health/ready`), metrics endpoint (`/metrics`)
- Environment variable documentation (`.env.example`)
- Test structure with unit and blackbox test locations
## Self-verification
- [ ] All components have corresponding folders in the layout
- [ ] All inter-component interfaces have DTOs defined
- [ ] Dockerfile defined for each component
- [ ] `docker-compose.yml` covers all components and dependencies
- [ ] `docker-compose.test.yml` enables blackbox testing
- [ ] CI/CD pipeline file defined with lint, test, security, build, deploy stages
- [ ] Database migration setup included
- [ ] Health check endpoints specified for each service
- [ ] Structured logging configuration included
- [ ] `.env.example` with all required environment variables
- [ ] Environment strategy covers dev, staging, production
- [ ] Test structure includes unit and blackbox test locations
## Save action
Write `todo/01_initial_structure.md` (temporary numeric name).
## Tracker action
Create a work item ticket for this task under the "Bootstrap & Initial Structure" epic. Write the work item ticket ID and Epic ID back into the task header.
## Rename action
Rename the file from `todo/01_initial_structure.md` to `todo/[TRACKER-ID]_initial_structure.md` (e.g., `todo/AZ-42_initial_structure.md`). Update the **Task** field inside the file to match the new filename.
## Blocking
**BLOCKING**: Present structure plan summary to user. Do NOT proceed until user confirms.
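The observability bullet above asks every component for `/health/live` and `/health/ready` endpoints. As a rough illustration of what the structure plan implies per service, here is a minimal sketch assuming Flask; the real framework comes from the tech stack identified in architecture.md.

```python
# Minimal sketch of the liveness/readiness endpoints the bootstrap plan
# asks every service to expose. Flask is assumed for illustration only;
# the actual framework is whatever architecture.md specifies.
from flask import Flask, jsonify

app = Flask(__name__)
dependencies_ready = True  # a real service would probe the DB, queue, etc.

@app.route("/health/live")
def live():
    # Liveness: the process is up and able to serve requests at all.
    return jsonify(status="ok"), 200

@app.route("/health/ready")
def ready():
    # Readiness: downstream dependencies are reachable; otherwise the
    # load balancer should stop routing traffic to this instance.
    if dependencies_ready:
        return jsonify(status="ready"), 200
    return jsonify(status="not-ready"), 503

if __name__ == "__main__":
    app.run(port=8080)
```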
@@ -0,0 +1,45 @@
# Step 1t: Test Infrastructure Bootstrap (tests-only mode only)
**Role**: Professional Quality Assurance Engineer
**Goal**: Produce `01_test_infrastructure.md` — the first task describing the test project scaffold.
**Constraints**: This is a plan document, not code. The `/implement` skill executes it.
## Steps
1. Read `TESTS_DIR/environment.md` and `TESTS_DIR/test-data.md`
2. Read `problem.md`, `restrictions.md`, `acceptance_criteria.md` for domain context
3. Document the test infrastructure plan using `templates/test-infrastructure-task.md`
The test infrastructure bootstrap must include:
- Test project folder layout (`e2e/` directory structure)
- Mock/stub service definitions for each external dependency
- `docker-compose.test.yml` structure from `environment.md`
- Test runner configuration (framework, plugins, fixtures)
- Test data fixture setup from `test-data.md` seed data sets
- Test reporting configuration (format, output path)
- Data isolation strategy
## Self-verification
- [ ] Every external dependency from `environment.md` has a mock service defined
- [ ] Docker Compose structure covers all services from `environment.md`
- [ ] Test data fixtures cover all seed data sets from `test-data.md`
- [ ] Test runner configuration matches the consumer app tech stack from `environment.md`
- [ ] Data isolation strategy is defined
## Save action
Write `todo/01_test_infrastructure.md` (temporary numeric name).
## Tracker action
Create a work item ticket for this task under the "Blackbox Tests" epic. Write the work item ticket ID and Epic ID back into the task header.
## Rename action
Rename the file from `todo/01_test_infrastructure.md` to `todo/[TRACKER-ID]_test_infrastructure.md`. Update the **Task** field inside the file to match the new filename.
## Blocking
**BLOCKING**: Present test infrastructure plan summary to user. Do NOT proceed until user confirms.
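The data isolation strategy is left to the plan document on purpose. One common realization, sketched here under the assumption of a pytest runner and a Postgres test database (the `psycopg` driver and variable names are illustrative), is an isolated schema per test run:

```python
# Sketch of one possible data-isolation strategy: an isolated schema per
# test run. pytest and psycopg are assumptions for illustration; the real
# runner and driver come from environment.md.
import os
import uuid
import pytest
import psycopg  # hypothetical driver choice

@pytest.fixture(scope="session")
def isolated_schema():
    schema = f"test_{uuid.uuid4().hex[:8]}"
    dsn = os.environ["TEST_DATABASE_URL"]
    with psycopg.connect(dsn, autocommit=True) as conn:
        conn.execute(f'CREATE SCHEMA "{schema}"')
    yield schema
    with psycopg.connect(dsn, autocommit=True) as conn:
        conn.execute(f'DROP SCHEMA "{schema}" CASCADE')
```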
@@ -0,0 +1,59 @@
# Step 2: Task Decomposition (default and single component modes)
**Role**: Professional software architect
**Goal**: Decompose each component into atomic, implementable task specs — numbered sequentially starting from 02.
**Constraints**: Behavioral specs only — describe what, not how. No implementation code.
## Numbering
Tasks are numbered sequentially across all components in dependency order. Start from 02 (01 is `initial_structure`). In single component mode, start from the next available number in TASKS_DIR.
## Component ordering
Process components in dependency order — foundational components first (shared models, database), then components that depend on them.
## Consult LESSONS.md once at the start of Step 2
If `_docs/LESSONS.md` exists, read it and note `estimation`, `architecture`, or `dependencies` lessons that may bias task sizing in this pass (e.g., "auth-related changes historically take 2x estimate" → bump any auth task up one complexity tier). Apply the bias when filling the Complexity field in step 7 below. Record which lessons informed estimation in a comment in `_dependencies_table.md` (Step 4).
## Steps
For each component (or the single provided component):
1. Read the component's `description.md` and `tests.md` (if available)
2. Decompose into atomic tasks; create only 1 task if the component is simple or atomic
3. Split into multiple tasks only when splitting is necessary or makes the work easier to implement
4. Do not create tasks for other components — only tasks for the current component
5. Each task should be atomic, containing 1 API or a list of semantically connected APIs
6. Write each task spec using `templates/task.md`
7. Estimate complexity per task (1, 2, 3, 5, 8 points); no task should exceed 8 points — split if it does
8. Note task dependencies (referencing tracker IDs of already-created dependency tasks, e.g., `AZ-42_initial_structure`)
9. **Cross-cutting rule**: if a concern spans ≥2 components (logging, config loading, auth/authZ, error envelope, telemetry, feature flags, i18n), create ONE shared task under the cross-cutting epic. Per-component tasks declare it as a dependency and consume it; they MUST NOT re-implement it locally. Duplicate local implementations are an `Architecture` finding (High) in code-review Phase 7 and a `Maintainability` finding in Phase 6.
10. **Shared-models / shared-API rule**: classify the task as shared if ANY of the following is true:
- The component is listed under `shared/*` in `module-layout.md`.
- The task's Scope.Included mentions "public interface", "DTO", "schema", "event", "contract", "API endpoint", or "shared model".
- The task is parented to a cross-cutting epic.
- The task is depended on by ≥2 other tasks across different components.
For every shared task:
- Produce a contract file at `_docs/02_document/contracts/<component>/<name>.md` using `templates/api-contract.md`. Fill Shape, Invariants, Non-Goals, Versioning Rules, and at least 3 Test Cases.
- Add a mandatory `## Contract` section to the task spec pointing at the contract file.
- For every consuming task, add the contract path to its `## Dependencies` section as a document dependency (separate from task dependencies).
Consumers read the contract file, not the producer's task spec. This prevents interface drift caused by the producer's implementation details leaking into consumers.
11. **Immediately after writing each task file**: create a work item ticket, link it to the component's epic, write the work item ticket ID and Epic ID back into the task header, then rename the file from `todo/[##]_[short_name].md` to `todo/[TRACKER-ID]_[short_name].md`.
## Self-verification (per component)
- [ ] Every task is atomic (single concern)
- [ ] No task exceeds 8 complexity points
- [ ] Task dependencies reference correct tracker IDs
- [ ] Tasks cover all interfaces defined in the component spec
- [ ] No tasks duplicate work from other components
- [ ] Every task has a work item ticket linked to the correct epic
- [ ] Every shared-models / shared-API task has a contract file at `_docs/02_document/contracts/<component>/<name>.md` and a `## Contract` section linking to it
- [ ] Every cross-cutting concern appears exactly once as a shared task, not N per-component copies
## Save action
Write each `todo/[##]_[short_name].md` (temporary numeric name), create work item ticket inline, then rename to `todo/[TRACKER-ID]_[short_name].md`. Update the **Task** field inside the file to match the new filename. Update **Dependencies** references in the file to use tracker IDs of the dependency tasks.
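The shared-task classification in step 10 is mechanical enough to sanity-check in code. A sketch of the four triggers as a single predicate, with a hypothetical task-dict shape:

```python
# Sketch of the step-10 shared-task test. The task dict shape is
# hypothetical; the four conditions mirror the rule above.
SHARED_KEYWORDS = ("public interface", "dto", "schema", "event",
                   "contract", "api endpoint", "shared model")

def is_shared_task(task: dict, shared_components: set[str]) -> bool:
    scope_text = task.get("scope_included", "").lower()
    dependents = task.get("depended_on_by", [])  # consumers: [{"id": ..., "component": ...}]
    distinct_components = {d["component"] for d in dependents}
    return (
        task["component"] in shared_components            # listed under shared/* in module-layout.md
        or any(k in scope_text for k in SHARED_KEYWORDS)   # scope mentions a contract-like artifact
        or task.get("epic_type") == "cross-cutting"        # parented to a cross-cutting epic
        or len(distinct_components) >= 2                   # depended on across >= 2 components
    )
```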
@@ -0,0 +1,35 @@
# Step 3: Blackbox Test Task Decomposition (default and tests-only modes)
**Role**: Professional Quality Assurance Engineer
**Goal**: Decompose blackbox test specs into atomic, implementable task specs.
**Constraints**: Behavioral specs only — describe what, not how. No test code.
## Numbering
- In default mode: continue sequential numbering from where Step 2 left off.
- In tests-only mode: start from 02 (01 is the test infrastructure bootstrap from Step 1t).
## Steps
1. Read all test specs from `DOCUMENT_DIR/tests/` (`blackbox-tests.md`, `performance-tests.md`, `resilience-tests.md`, `security-tests.md`, `resource-limit-tests.md`)
2. Group related test scenarios into atomic tasks (e.g., one task per test category or per component under test)
3. Each task should reference the specific test scenarios it implements and the environment/test-data specs
4. Dependencies:
- In default mode: blackbox test tasks depend on the component implementation tasks they exercise
- In tests-only mode: blackbox test tasks depend on the test infrastructure bootstrap task (Step 1t)
5. Write each task spec using `templates/task.md`
6. Estimate complexity per task (1, 2, 3, 5, 8 points); no task should exceed 8 points — split if it does
7. Note task dependencies (referencing tracker IDs of already-created dependency tasks)
8. **Immediately after writing each task file**: create a work item ticket under the "Blackbox Tests" epic, write the work item ticket ID and Epic ID back into the task header, then rename the file from `todo/[##]_[short_name].md` to `todo/[TRACKER-ID]_[short_name].md`.
## Self-verification
- [ ] Every scenario from `tests/blackbox-tests.md` is covered by a task
- [ ] Every scenario from `tests/performance-tests.md`, `tests/resilience-tests.md`, `tests/security-tests.md`, and `tests/resource-limit-tests.md` is covered by a task
- [ ] No task exceeds 8 complexity points
- [ ] Dependencies correctly reference the dependency tasks (component tasks in default mode, test infrastructure in tests-only mode)
- [ ] Every task has a work item ticket linked to the "Blackbox Tests" epic
## Save action
Write each `todo/[##]_[short_name].md` (temporary numeric name), create work item ticket inline, then rename to `todo/[TRACKER-ID]_[short_name].md`.
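The first two self-verification items amount to a coverage diff between scenario IDs in the test specs and the task specs. A small sketch, assuming a `PREFIX-###`-style scenario ID convention; the regex is an assumption, not something the test specs mandate:

```python
# Sketch: confirm every scenario ID mentioned in tests/*.md appears in at
# least one task spec. The "BBT-001"-style ID convention is hypothetical.
import re
from pathlib import Path

SCENARIO_ID = re.compile(r"\b[A-Z]{2,4}-\d{3}\b")  # assumed ID format

def uncovered_scenarios(tests_dir: Path, tasks_dir: Path) -> set[str]:
    scenarios: set[str] = set()
    for spec in tests_dir.glob("*.md"):
        scenarios |= set(SCENARIO_ID.findall(spec.read_text()))
    covered: set[str] = set()
    for task in tasks_dir.glob("*.md"):
        covered |= set(SCENARIO_ID.findall(task.read_text()))
    return scenarios - covered
```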
@@ -0,0 +1,39 @@
# Step 4: Cross-Task Verification (default and tests-only modes)
**Role**: Professional software architect and analyst
**Goal**: Verify task consistency and produce `_dependencies_table.md`.
**Constraints**: Review step — fix gaps found, do not add new tasks.
## Steps
1. Verify task dependencies across all tasks are consistent
2. Check no gaps:
- In default mode: every interface in `architecture.md` has tasks covering it
- In tests-only mode: every test scenario in `traceability-matrix.md` is covered by a task
3. Check no overlaps: tasks don't duplicate work
4. Check no circular dependencies in the task graph
5. Produce `_dependencies_table.md` using `templates/dependencies-table.md`
## Self-verification
### Default mode
- [ ] Every architecture interface is covered by at least one task
- [ ] No circular dependencies in the task graph
- [ ] Cross-component dependencies are explicitly noted in affected task specs
- [ ] `_dependencies_table.md` contains every task with correct dependencies
### Tests-only mode
- [ ] Every test scenario from `traceability-matrix.md` "Covered" entries has a corresponding task
- [ ] No circular dependencies in the task graph
- [ ] Test task dependencies reference the test infrastructure bootstrap
- [ ] `_dependencies_table.md` contains every task with correct dependencies
## Save action
Write `_dependencies_table.md`.
## Blocking
**BLOCKING**: Present dependency summary to user. Do NOT proceed until user confirms.
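Of the checks above, the circular-dependency check is the one most worth automating. A depth-first cycle detector over the task graph is enough; the `deps` mapping below is a hypothetical in-memory transcription of `_dependencies_table.md`:

```python
# Sketch: detect circular dependencies in the task graph. `deps` maps a
# task's tracker ID to the tracker IDs it depends on (sample data is
# hypothetical).
def find_cycle(deps: dict[str, list[str]]) -> list[str] | None:
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {t: WHITE for t in deps}
    stack: list[str] = []

    def visit(task: str) -> list[str] | None:
        color[task] = GRAY
        stack.append(task)
        for dep in deps.get(task, []):
            if color.get(dep, WHITE) == GRAY:      # back edge: cycle found
                return stack[stack.index(dep):] + [dep]
            if color.get(dep, WHITE) == WHITE:
                cycle = visit(dep)
                if cycle:
                    return cycle
        color[task] = BLACK
        stack.pop()
        return None

    for task in deps:
        if color[task] == WHITE:
            cycle = visit(task)
            if cycle:
                return cycle
    return None

# Hypothetical IDs: AZ-43 -> AZ-44 -> AZ-43 is reported as a cycle.
print(find_cycle({"AZ-42": [], "AZ-43": ["AZ-44"], "AZ-44": ["AZ-43"]}))
```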
@@ -0,0 +1,133 @@
# API Contract Template
A contract is the **frozen, reviewed interface** between two or more components. When task A produces a shared model, DTO, schema, event payload, or public API, and task B consumes it, they must not reverse-engineer each other's implementation — they must read the contract.
Save the filled contract at `_docs/02_document/contracts/<component>/<name>.md`. Reference it from the producing task's `## Contract` section and from every consuming task's `## Dependencies` section.
---
```markdown
# Contract: [contract-name]
**Component**: [component-name]
**Producer task**: [TRACKER-ID] — [task filename]
**Consumer tasks**: [list of TRACKER-IDs or "TBD at decompose time"]
**Version**: 1.0.0
**Status**: [draft | frozen | deprecated]
**Last Updated**: [YYYY-MM-DD]
## Purpose
Short statement of what this contract represents and why it is shared (1-3 sentences).
## Shape
Choose ONE of the following shape forms per the contract type:
### For data models (DTO / schema / event)
```[language]
// language-native type definitions — e.g., Python dataclass, C# record, TypeScript interface, Rust struct, JSON Schema
```
For each field:
| Field | Type | Required | Description | Constraints |
|-------|------|----------|-------------|-------------|
| `id` | `string` (UUID) | yes | Unique identifier | RFC 4122 v4 |
| `created_at` | `datetime` (ISO 8601 UTC) | yes | Creation timestamp | |
| `...` | ... | ... | ... | ... |
### For function / method APIs
| Name | Signature | Throws / Errors | Blocking? |
|------|-----------|-----------------|-----------|
| `do_x` | `(input: InputDto) -> Result<OutputDto, XError>` | `XError::NotFound`, `XError::Invalid` | sync |
| ... | ... | ... | ... |
### For HTTP / RPC endpoints
| Method | Path | Request body | Response | Status codes |
|--------|------|--------------|----------|--------------|
| `POST` | `/api/v1/resource` | `CreateResource` | `Resource` | 201, 400, 409 |
| ... | ... | ... | ... | ... |
## Invariants
Properties that MUST hold for every valid instance or every allowed interaction. These survive refactors.
- Invariant 1: [statement]
- Invariant 2: [statement]
## Non-Goals
Things this contract intentionally does NOT cover. Helps prevent scope creep.
- Not covered: [statement]
## Versioning Rules
- **Breaking changes** (field renamed/removed, type changed, required→optional flipped) require a new major version and a deprecation path for consumers.
- **Non-breaking additions** (new optional field, new error variant consumers already tolerate) require a minor version bump.
## Test Cases
Representative cases that both producer and consumer tests must cover. Keep short — this is the contract test surface, not an exhaustive suite.
| Case | Input | Expected | Notes |
|------|-------|----------|-------|
| valid-minimal | minimal valid instance | accepted | |
| invalid-missing-required | missing `id` | rejected with specific error | |
| edge-case-x | ... | ... | |
## Change Log
| Version | Date | Change | Author |
|---------|------|--------|--------|
| 1.0.0 | YYYY-MM-DD | Initial contract | [agent/user] |
```
---
## Decompose-skill rules for emitting contracts
A task is a **shared-models / shared-API task** when ANY of the following is true:
- The component spec lists it as a shared component (under `shared/*` in `module-layout.md`).
- The task's **Scope.Included** mentions any of: "public interface", "DTO", "schema", "event", "contract", "API endpoint", "shared model".
- The task is parented to a cross-cutting epic (`epic_type: cross-cutting`).
- The task is depended on by ≥2 other tasks across different components.
For every shared-models / shared-API task:
1. Create a contract file at `_docs/02_document/contracts/<component>/<name>.md` using this template.
2. Fill in Shape, Invariants, Non-Goals, Versioning Rules, and at least 3 Test Cases.
3. Add a mandatory `## Contract` section to the task spec that links to the contract file:
```markdown
## Contract
This task produces/implements the contract at `_docs/02_document/contracts/<component>/<name>.md`.
Consumers MUST read that file — not this task spec — to discover the interface.
```
4. For every consuming task, add the contract path to its `## Dependencies` section as a document dependency (not a task dependency):
```markdown
### Document Dependencies
- `_docs/02_document/contracts/<component>/<name>.md` — API contract produced by [TRACKER-ID].
```
5. If the contract changes after it was frozen, the producer task must bump the `Version` and note the change in `Change Log`. Consumers referenced in the contract header must be notified (surface to user via Choose format).
## Code-review-skill rules for verifying contracts
Phase 2 (Spec Compliance) adds a check:
- For every task with a `## Contract` section:
- Verify the referenced contract file exists at the stated path.
- Verify the implementation's public signatures (types, method shapes, endpoint paths) match the contract's Shape section.
- If they diverge, emit a `Spec-Gap` finding with High severity.
- For every consuming task's Document Dependencies that reference a contract:
- Verify the consumer's imports / calls match the contract's Shape.
- If they diverge, emit a `Spec-Gap` finding with High severity and a hint that either the contract or the consumer is drifting.
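To make the Shape section concrete: for the sample `id`/`created_at` rows in the data-model table above, a filled-in shape might look like the following in a Python-targeted project. The `Resource` name and the language choice are illustrative only; the template does not prescribe them.

```python
# Illustrative realization of the example Shape table above, assuming a
# Python target. Field names mirror the sample rows; everything else is
# hypothetical.
import uuid
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class Resource:
    """Contract: resource (version 1.0.0). Shape only, no behavior."""
    id: str = field(default_factory=lambda: str(uuid.uuid4()))  # RFC 4122 v4
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)       # ISO 8601 UTC
    )

    def __post_init__(self) -> None:
        # Invariant: id must parse as a UUID. The "invalid-missing-required"
        # style contract test cases belong in both producer and consumer suites.
        uuid.UUID(self.id)
```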
@@ -0,0 +1,107 @@
# Module Layout Template
The module layout is the **authoritative file-ownership map** used by the `/implement` skill to assign OWNED / READ-ONLY / FORBIDDEN files to implementer subagents. It is derived from `_docs/02_document/architecture.md` and the component specs at `_docs/02_document/components/`, and it follows the target language's standard project-layout conventions.
Save as `_docs/02_document/module-layout.md`. This file is produced by the decompose skill (Step 1.5 module layout) and consumed by the implement skill (Step 4 file ownership). Task specs remain purely behavioral — they do NOT carry file paths. The layout is the single place where component → filesystem mapping lives.
---
```markdown
# Module Layout
**Language**: [python | csharp | rust | typescript | go | mixed]
**Layout Convention**: [src-layout | crates-workspace | packages-workspace | custom]
**Root**: [src/ | crates/ | packages/ | ./]
**Last Updated**: [YYYY-MM-DD]
## Layout Rules
1. Each component owns ONE top-level directory under the root.
2. Shared code lives under `<root>/shared/` (or language equivalent: `src/shared/`, `crates/shared/`, `packages/shared/`).
3. Cross-cutting concerns (logging, config, error handling, telemetry) live under `<root>/shared/<concern>/`.
4. Public API surface per component = files listed in `public:` below. Everything else is internal — other components MUST NOT import it directly.
5. Tests live outside the component tree in a separate `tests/` or `<component>/tests/` directory per the language's test convention.
## Per-Component Mapping
### Component: [component-name]
- **Epic**: [TRACKER-ID]
- **Directory**: `src/<path>/`
- **Public API**: files in this list are importable by other components
- `src/<path>/public_api.py` (or `mod.rs`, `index.ts`, `PublicApi.cs`, etc.)
- `src/<path>/types.py`
- **Internal (do NOT import from other components)**:
- `src/<path>/internal/*`
- `src/<path>/_helpers.py`
- **Owns (exclusive write during implementation)**: `src/<path>/**`
- **Imports from**: [list of other components whose Public API this component may use]
- **Consumed by**: [list of components that depend on this component's Public API]
### Component: [next-component]
...
## Shared / Cross-Cutting
### shared/models
- **Directory**: `src/shared/models/`
- **Purpose**: DTOs, value types, schemas shared across components
- **Owned by**: whoever implements task `[TRACKER-ID]_shared_models`
- **Consumed by**: all components
### shared/logging
- **Directory**: `src/shared/logging/`
- **Purpose**: structured logging setup
- **Owned by**: cross-cutting task `[TRACKER-ID]_logging`
- **Consumed by**: all components
### shared/[other concern]
...
## Allowed Dependencies (layering)
Read top-to-bottom; an upper layer may import from a lower layer but NEVER the reverse.
| Layer | Components | May import from |
|-------|------------|-----------------|
| 4. API / Entry | [list] | 1, 2, 3 |
| 3. Application | [list] | 1, 2 |
| 2. Domain | [list] | 1 |
| 1. Shared / Foundation | shared/* | (none) |
Violations of this table are **Architecture** findings in code-review Phase 7 and are High severity.
## Layout Conventions (reference)
| Language | Root | Per-component path | Public API file | Test path |
|----------|------|-------------------|-----------------|-----------|
| Python | `src/<pkg>/` | `src/<pkg>/<component>/` | `src/<pkg>/<component>/__init__.py` (re-exports) | `tests/<component>/` |
| C# (.NET) | `src/` | `src/<Component>/` | `src/<Component>/<Component>.cs` (namespace root) | `tests/<Component>.Tests/` |
| Rust | `crates/` | `crates/<component>/` | `crates/<component>/src/lib.rs` | `crates/<component>/tests/` |
| TypeScript / React | `packages/` or `src/` | `src/<component>/` | `src/<component>/index.ts` (barrel) | `src/<component>/__tests__/` or `tests/<component>/` |
| Go | `./` | `internal/<component>/` or `pkg/<component>/` | `internal/<component>/doc.go` + exported symbols | `internal/<component>/*_test.go` |
```
---
## Self-verification for the decompose skill
When writing `_docs/02_document/module-layout.md`, verify:
- [ ] Every component in `_docs/02_document/components/` has a Per-Component Mapping entry.
- [ ] Every shared / cross-cutting epic has an entry in the Shared section.
- [ ] Layering table rows cover every component.
- [ ] No component's `Imports from` list contains a component at a higher layer.
- [ ] Paths follow the detected language's convention.
- [ ] No two components own overlapping paths.
## How the implement skill consumes this
The implement skill's Step 4 (File Ownership) reads this file and, for each task in the batch:
1. Resolve the task's Component field to a Per-Component Mapping entry.
2. Set OWNED = the component's `Owns` glob.
3. Set READ-ONLY = the Public API files of every component listed in `Imports from`, plus `shared/*` Public API files.
4. Set FORBIDDEN = every other component's Owns glob.
If two tasks in the same batch map to the same component, the implement skill schedules them sequentially (one implementer at a time for that component) to avoid file conflicts on shared internal files.
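A sketch of that resolution logic, assuming the Per-Component Mapping entries have been transcribed into an in-memory `layout` dict (the two-component example data is hypothetical):

```python
# Sketch of how Step 4 (File Ownership) could resolve OWNED / READ-ONLY /
# FORBIDDEN globs from module-layout.md. The `layout` structure is a
# hypothetical transcription of the Per-Component Mapping entries.
def resolve_ownership(task_component: str, layout: dict[str, dict]) -> dict[str, list[str]]:
    entry = layout[task_component]
    owned = [entry["owns"]]
    readonly: list[str] = []
    for dep in dict.fromkeys(entry.get("imports_from", []) + ["shared"]):
        readonly.extend(layout[dep]["public_api"])
    forbidden = [other["owns"] for name, other in layout.items()
                 if name not in (task_component, "shared", *entry.get("imports_from", []))]
    return {"OWNED": owned, "READ-ONLY": readonly, "FORBIDDEN": forbidden}

layout = {  # hypothetical example
    "shared":  {"owns": "src/shared/**",  "public_api": ["src/shared/models/__init__.py"], "imports_from": []},
    "billing": {"owns": "src/billing/**", "public_api": ["src/billing/__init__.py"], "imports_from": ["shared"]},
    "api":     {"owns": "src/api/**",     "public_api": ["src/api/__init__.py"], "imports_from": ["billing", "shared"]},
}
print(resolve_ownership("billing", layout))
```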
@@ -81,6 +81,17 @@ Then [expected result]
**Risk 1: [Title]** **Risk 1: [Title]**
- *Risk*: [Description] - *Risk*: [Description]
- *Mitigation*: [Approach] - *Mitigation*: [Approach]
## Contract
<!--
OMIT this section for behavioral-only tasks.
INCLUDE this section ONLY for shared-models / shared-API / contract tasks.
See decompose/SKILL.md Step 2 shared-models rule and decompose/templates/api-contract.md.
-->
This task produces/implements the contract at `_docs/02_document/contracts/<component>/<name>.md`.
Consumers MUST read that file — not this task spec — to discover the interface.
``` ```
--- ---
+21 -303
View File
@@ -115,332 +115,43 @@ At the start of execution, create a TodoWrite with all steps (1 through 7). Upda
### Step 1: Deployment Status & Environment Setup ### Step 1: Deployment Status & Environment Setup
**Role**: DevOps / Platform engineer Read and follow `steps/01_status-env.md`.
**Goal**: Assess current deployment readiness, identify all required environment variables, and create `.env` files
**Constraints**: Must complete before any other step
1. Read architecture.md, all component specs, and restrictions.md
2. Assess deployment readiness:
- List all components and their current state (planned / implemented / tested)
- Identify external dependencies (databases, APIs, message queues, cloud services)
- Identify infrastructure prerequisites (container registry, cloud accounts, DNS, SSL certificates)
- Check if any deployment blockers exist
3. Identify all required environment variables by scanning:
- Component specs for configuration needs
- Database connection requirements
- External API endpoints and credentials
- Feature flags and runtime configuration
- Container registry credentials
- Cloud provider credentials
- Monitoring/logging service endpoints
4. Generate `.env.example` in project root with all variables and placeholder values (committed to VCS)
5. Generate `.env` in project root with development defaults filled in where safe (git-ignored)
6. Ensure `.gitignore` includes `.env` (but NOT `.env.example`)
7. Produce a deployment status report summarizing readiness, blockers, and required setup
**Self-verification**:
- [ ] All components assessed for deployment readiness
- [ ] External dependencies catalogued
- [ ] Infrastructure prerequisites identified
- [ ] All required environment variables discovered
- [ ] `.env.example` created with placeholder values
- [ ] `.env` created with safe development defaults
- [ ] `.gitignore` updated to exclude `.env`
- [ ] Status report written to `reports/deploy_status_report.md`
**Save action**: Write `reports/deploy_status_report.md` using `templates/deploy_status_report.md`, create `.env` and `.env.example` in project root
**BLOCKING**: Present status report and environment variables to user. Do NOT proceed until confirmed.
--- ---
### Step 2: Containerization ### Step 2: Containerization
**Role**: DevOps / Platform engineer Read and follow `steps/02_containerization.md`.
**Goal**: Define Docker configuration for every component, local development, and blackbox test environments
**Constraints**: Plan only — no Dockerfile creation. Describe what each Dockerfile should contain.
1. Read architecture.md and all component specs
2. Read restrictions.md for infrastructure constraints
3. Research best Docker practices for the project's tech stack (multi-stage builds, base image selection, layer optimization)
4. For each component, define:
- Base image (pinned version, prefer alpine/distroless for production)
- Build stages (dependency install, build, production)
- Non-root user configuration
- Health check endpoint and command
- Exposed ports
- `.dockerignore` contents
5. Define `docker-compose.yml` for local development:
- All application components
- Database (Postgres) with named volume
- Any message queues, caches, or external service mocks
- Shared network
- Environment variable files (`.env`)
6. Define `docker-compose.test.yml` for blackbox tests:
- Application components under test
- Test runner container (black-box, no internal imports)
- Isolated database with seed data
- All tests runnable via `docker compose -f docker-compose.test.yml up --abort-on-container-exit`
7. Define image tagging strategy: `<registry>/<project>/<component>:<git-sha>` for CI, `latest` for local dev only
**Self-verification**:
- [ ] Every component has a Dockerfile specification
- [ ] Multi-stage builds specified for all production images
- [ ] Non-root user for all containers
- [ ] Health checks defined for every service
- [ ] docker-compose.yml covers all components + dependencies
- [ ] docker-compose.test.yml enables black-box testing
- [ ] `.dockerignore` defined
**Save action**: Write `containerization.md` using `templates/containerization.md`
**BLOCKING**: Present containerization plan to user. Do NOT proceed until confirmed.
--- ---
### Step 3: CI/CD Pipeline ### Step 3: CI/CD Pipeline
**Role**: DevOps engineer Read and follow `steps/03_ci-cd-pipeline.md`.
**Goal**: Define the CI/CD pipeline with quality gates, security scanning, and multi-environment deployment
**Constraints**: Pipeline definition only — produce YAML specification, not implementation
1. Read architecture.md for tech stack and deployment targets
2. Read restrictions.md for CI/CD constraints (cloud provider, registry, etc.)
3. Research CI/CD best practices for the project's platform (GitHub Actions / Azure Pipelines)
4. Define pipeline stages:
| Stage | Trigger | Steps | Quality Gate |
|-------|---------|-------|-------------|
| **Lint** | Every push | Run linters per language (black, rustfmt, prettier, dotnet format) | Zero errors |
| **Test** | Every push | Unit tests, blackbox tests, coverage report | 75%+ coverage (see `.cursor/rules/cursor-meta.mdc` Quality Thresholds) |
| **Security** | Every push | Dependency audit, SAST scan (Semgrep/SonarQube), image scan (Trivy) | Zero critical/high CVEs |
| **Build** | PR merge to dev | Build Docker images, tag with git SHA | Build succeeds |
| **Push** | After build | Push to container registry | Push succeeds |
| **Deploy Staging** | After push | Deploy to staging environment | Health checks pass |
| **Smoke Tests** | After staging deploy | Run critical path tests against staging | All pass |
| **Deploy Production** | Manual approval | Deploy to production | Health checks pass |
5. Define caching strategy: dependency caches, Docker layer caches, build artifact caches
6. Define parallelization: which stages can run concurrently
7. Define notifications: build failures, deployment status, security alerts
**Self-verification**:
- [ ] All pipeline stages defined with triggers and gates
- [ ] Coverage threshold enforced (75%+)
- [ ] Security scanning included (dependencies + images + SAST)
- [ ] Caching configured for dependencies and Docker layers
- [ ] Multi-environment deployment (staging → production)
- [ ] Rollback procedure referenced
- [ ] Notifications configured
**Save action**: Write `ci_cd_pipeline.md` using `templates/ci_cd_pipeline.md`
--- ---
### Step 4: Environment Strategy ### Step 4: Environment Strategy
**Role**: Platform engineer Read and follow `steps/04_environment-strategy.md`.
**Goal**: Define environment configuration, secrets management, and environment parity
**Constraints**: Strategy document — no secrets or credentials in output
1. Define environments:
| Environment | Purpose | Infrastructure | Data |
|-------------|---------|---------------|------|
| **Development** | Local developer workflow | docker-compose, local volumes | Seed data, mocks for external APIs |
| **Staging** | Pre-production validation | Mirrors production topology | Anonymized production-like data |
| **Production** | Live system | Full infrastructure | Real data |
2. Define environment variable management:
- Reference `.env.example` created in Step 1
- Per-environment variable sources (`.env` for dev, secret manager for staging/prod)
- Validation: fail fast on missing required variables at startup
3. Define secrets management:
- Never commit secrets to version control
- Development: `.env` files (git-ignored)
- Staging/Production: secret manager (AWS Secrets Manager / Azure Key Vault / Vault)
- Rotation policy
4. Define database management per environment:
- Development: Docker Postgres with named volume, seed data
- Staging: managed Postgres, migrations applied via CI/CD
- Production: managed Postgres, migrations require approval
**Self-verification**:
- [ ] All three environments defined with clear purpose
- [ ] Environment variable documentation complete (references `.env.example` from Step 1)
- [ ] No secrets in any output document
- [ ] Secret manager specified for staging/production
- [ ] Database strategy per environment
**Save action**: Write `environment_strategy.md` using `templates/environment_strategy.md`
--- ---
### Step 5: Observability ### Step 5: Observability
**Role**: Site Reliability Engineer (SRE) Read and follow `steps/05_observability.md`.
**Goal**: Define logging, metrics, tracing, and alerting strategy
**Constraints**: Strategy document — describe what to implement, not how to wire it
1. Read architecture.md and component specs for service boundaries
2. Research observability best practices for the tech stack
**Logging**:
- Structured JSON to stdout/stderr (no file logging in containers)
- Fields: `timestamp` (ISO 8601), `level`, `service`, `correlation_id`, `message`, `context`
- Levels: ERROR (exceptions), WARN (degraded), INFO (business events), DEBUG (diagnostics, dev only)
- No PII in logs
- Retention: dev = console, staging = 7 days, production = 30 days
**Metrics**:
- Expose Prometheus-compatible `/metrics` endpoint per service
- System metrics: CPU, memory, disk, network
- Application metrics: `request_count`, `request_duration` (histogram), `error_count`, `active_connections`
- Business metrics: derived from acceptance criteria
- Collection interval: 15s
**Distributed Tracing**:
- OpenTelemetry SDK integration
- Trace context propagation via HTTP headers and message queue metadata
- Span naming: `<service>.<operation>`
- Sampling: 100% in dev/staging, 10% in production (adjust based on volume)
**Alerting**:
| Severity | Response Time | Condition Examples |
|----------|---------------|-------------------|
| Critical | 5 min | Service down, data loss, health check failed |
| High | 30 min | Error rate > 5%, P95 latency > 2x baseline |
| Medium | 4 hours | Disk > 80%, elevated latency |
| Low | Next business day | Non-critical warnings |
**Dashboards**:
- Operations: service health, request rate, error rate, response time percentiles, resource utilization
- Business: key business metrics from acceptance criteria
**Self-verification**:
- [ ] Structured logging format defined with required fields
- [ ] Metrics endpoint specified per service
- [ ] OpenTelemetry tracing configured
- [ ] Alert severities with response times defined
- [ ] Dashboards cover operations and business metrics
- [ ] PII exclusion from logs addressed
**Save action**: Write `observability.md` using `templates/observability.md`
--- ---
### Step 6: Deployment Procedures ### Step 6: Deployment Procedures
**Role**: DevOps / Platform engineer Read and follow `steps/06_procedures.md`.
**Goal**: Define deployment strategy, rollback procedures, health checks, and deployment checklist
**Constraints**: Procedures document — no implementation
1. Define deployment strategy:
- Preferred pattern: blue-green / rolling / canary (choose based on architecture)
- Zero-downtime requirement for production
- Graceful shutdown: 30-second grace period for in-flight requests
- Database migration ordering: migrate before deploy, backward-compatible only
2. Define health checks:
| Check | Type | Endpoint | Interval | Threshold |
|-------|------|----------|----------|-----------|
| Liveness | HTTP GET | `/health/live` | 10s | 3 failures → restart |
| Readiness | HTTP GET | `/health/ready` | 5s | 3 failures → remove from LB |
| Startup | HTTP GET | `/health/ready` | 5s | 30 attempts max |
3. Define rollback procedures:
- Trigger criteria: health check failures, error rate spike, critical alert
- Rollback steps: redeploy previous image tag, verify health, rollback database if needed
- Communication: notify stakeholders during rollback
- Post-mortem: required after every production rollback
4. Define deployment checklist:
- [ ] All tests pass in CI
- [ ] Security scan clean (zero critical/high CVEs)
- [ ] Database migrations reviewed and tested
- [ ] Environment variables configured
- [ ] Health check endpoints responding
- [ ] Monitoring alerts configured
- [ ] Rollback plan documented and tested
- [ ] Stakeholders notified
**Self-verification**:
- [ ] Deployment strategy chosen and justified
- [ ] Zero-downtime approach specified
- [ ] Health checks defined (liveness, readiness, startup)
- [ ] Rollback trigger criteria and steps documented
- [ ] Deployment checklist complete
**Save action**: Write `deployment_procedures.md` using `templates/deployment_procedures.md`
**BLOCKING**: Present deployment procedures to user. Do NOT proceed until confirmed.
--- ---
### Step 7: Deployment Scripts ### Step 7: Deployment Scripts
**Role**: DevOps / Platform engineer Read and follow `steps/07_scripts.md`.
**Goal**: Create executable deployment scripts for pulling Docker images and running services on the remote target machine
**Constraints**: Produce real, executable shell scripts. This is the ONLY step that creates implementation artifacts.
1. Read containerization.md and deployment_procedures.md from previous steps
2. Read `.env.example` for required variables
3. Create the following scripts in `SCRIPTS_DIR/`:
**`deploy.sh`** — Main deployment orchestrator:
- Validates that required environment variables are set (sources `.env` if present)
- Calls `pull-images.sh`, then `stop-services.sh`, then `start-services.sh`, then `health-check.sh`
- Exits with non-zero code on any failure
- Supports `--rollback` flag to redeploy previous image tags
**`pull-images.sh`** — Pull Docker images to target machine:
- Reads image list and tags from environment or config
- Authenticates with container registry
- Pulls all required images
- Verifies image integrity (digest check)
**`start-services.sh`** — Start services on target machine:
- Runs `docker compose up -d` or individual `docker run` commands
- Applies environment variables from `.env`
- Configures networks and volumes
- Waits for containers to reach healthy state
**`stop-services.sh`** — Graceful shutdown:
- Stops services with graceful shutdown period
- Saves current image tags for rollback reference
- Cleans up orphaned containers/networks
**`health-check.sh`** — Verify deployment health:
- Checks all health endpoints
- Reports status per service
- Returns non-zero if any service is unhealthy
4. All scripts must:
- Be POSIX-compatible (#!/bin/bash with set -euo pipefail)
- Source `.env` from project root or accept env vars from the environment
- Include usage/help output (`--help` flag)
- Be idempotent where possible
- Handle SSH connection to remote target (configurable via `DEPLOY_HOST` env var)
5. Document all scripts in `deploy_scripts.md`
**Self-verification**:
- [ ] All five scripts created and executable
- [ ] Scripts source environment variables correctly
- [ ] `deploy.sh` orchestrates the full flow
- [ ] `pull-images.sh` handles registry auth and image pull
- [ ] `start-services.sh` starts containers with correct config
- [ ] `stop-services.sh` handles graceful shutdown
- [ ] `health-check.sh` validates all endpoints
- [ ] Rollback supported via `deploy.sh --rollback`
- [ ] Scripts work for remote deployment via SSH (DEPLOY_HOST)
- [ ] `deploy_scripts.md` documents all scripts
**Save action**: Write scripts to `SCRIPTS_DIR/`, write `deploy_scripts.md` using `templates/deploy_scripts.md`
---
## Escalation Rules ## Escalation Rules
@@ -473,17 +184,24 @@ At the start of execution, create a TodoWrite with all steps (1 through 7). Upda
├────────────────────────────────────────────────────────────────┤ ├────────────────────────────────────────────────────────────────┤
│ PREREQ: architecture.md + component specs exist │ │ PREREQ: architecture.md + component specs exist │
│ │ │ │
│ 1. Status & Env → reports/deploy_status_report.md │ │ 1. Status & Env → steps/01_status-env.md │
│ → reports/deploy_status_report.md │
│ + .env + .env.example │ │ + .env + .env.example │
│ [BLOCKING: user confirms status & env vars] │ │ [BLOCKING: user confirms status & env vars] │
│ 2. Containerization → containerization.md │ │ 2. Containerization → steps/02_containerization.md │
│ → containerization.md │
│ [BLOCKING: user confirms Docker plan] │ │ [BLOCKING: user confirms Docker plan] │
│ 3. CI/CD Pipeline → ci_cd_pipeline.md │ │ 3. CI/CD Pipeline → steps/03_ci-cd-pipeline.md │
│ 4. Environment → environment_strategy.md │ │ → ci_cd_pipeline.md │
│ 5. Observability → observability.md │ │ 4. Environment → steps/04_environment-strategy.md │
│ 6. Procedures → deployment_procedures.md │ │ → environment_strategy.md │
│ 5. Observability → steps/05_observability.md │
│ → observability.md │
│ 6. Procedures → steps/06_procedures.md │
│ → deployment_procedures.md │
│ [BLOCKING: user confirms deployment plan] │ │ [BLOCKING: user confirms deployment plan] │
│ 7. Scripts → deploy_scripts.md + scripts/ │ │ 7. Scripts → steps/07_scripts.md │
│ → deploy_scripts.md + scripts/ │
├────────────────────────────────────────────────────────────────┤ ├────────────────────────────────────────────────────────────────┤
│ Principles: Docker-first · IaC · Observability built-in │ │ Principles: Docker-first · IaC · Observability built-in │
│ Environment parity · Save immediately │ │ Environment parity · Save immediately │
@@ -0,0 +1,45 @@
# Step 1: Deployment Status & Environment Setup
**Role**: DevOps / Platform engineer
**Goal**: Assess current deployment readiness, identify all required environment variables, and create `.env` files.
**Constraints**: Must complete before any other step.
## Steps
1. Read `architecture.md`, all component specs, and `restrictions.md`
2. Assess deployment readiness:
- List all components and their current state (planned / implemented / tested)
- Identify external dependencies (databases, APIs, message queues, cloud services)
- Identify infrastructure prerequisites (container registry, cloud accounts, DNS, SSL certificates)
- Check if any deployment blockers exist
3. Identify all required environment variables by scanning:
- Component specs for configuration needs
- Database connection requirements
- External API endpoints and credentials
- Feature flags and runtime configuration
- Container registry credentials
- Cloud provider credentials
- Monitoring/logging service endpoints
4. Generate `.env.example` in project root with all variables and placeholder values (committed to VCS)
5. Generate `.env` in project root with development defaults filled in where safe (git-ignored)
6. Ensure `.gitignore` includes `.env` (but NOT `.env.example`)
7. Produce a deployment status report summarizing readiness, blockers, and required setup
## Self-verification
- [ ] All components assessed for deployment readiness
- [ ] External dependencies catalogued
- [ ] Infrastructure prerequisites identified
- [ ] All required environment variables discovered
- [ ] `.env.example` created with placeholder values
- [ ] `.env` created with safe development defaults
- [ ] `.gitignore` updated to exclude `.env`
- [ ] Status report written to `reports/deploy_status_report.md`
## Save action
Write `reports/deploy_status_report.md` using `templates/deploy_status_report.md`. Create `.env` and `.env.example` in project root.
## Blocking
**BLOCKING**: Present status report and environment variables to user. Do NOT proceed until confirmed.
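Because `.env.example` becomes the single source of truth for required variables, a fail-fast check against it is a natural companion artifact. A minimal sketch, assuming only the conventional `KEY=value` line format:

```python
# Sketch: fail fast if any variable listed in .env.example is missing from
# the current environment. Assumes the conventional KEY=value format; the
# file path is the project-root default from this step.
import os
import sys
from pathlib import Path

def missing_env_vars(example_path: str = ".env.example") -> list[str]:
    required = []
    for line in Path(example_path).read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            required.append(line.split("=", 1)[0].strip())
    return [name for name in required if not os.environ.get(name)]

if __name__ == "__main__":
    missing = missing_env_vars()
    if missing:
        sys.exit(f"Missing required environment variables: {', '.join(missing)}")
```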
@@ -0,0 +1,48 @@
# Step 2: Containerization
**Role**: DevOps / Platform engineer
**Goal**: Define Docker configuration for every component, local development, and blackbox test environments.
**Constraints**: Plan only — no Dockerfile creation. Describe what each Dockerfile should contain.
## Steps
1. Read `architecture.md` and all component specs
2. Read `restrictions.md` for infrastructure constraints
3. Research best Docker practices for the project's tech stack (multi-stage builds, base image selection, layer optimization)
4. For each component, define:
- Base image (pinned version, prefer alpine/distroless for production)
- Build stages (dependency install, build, production)
- Non-root user configuration
- Health check endpoint and command
- Exposed ports
- `.dockerignore` contents
5. Define `docker-compose.yml` for local development:
- All application components
- Database (Postgres) with named volume
- Any message queues, caches, or external service mocks
- Shared network
- Environment variable files (`.env`)
6. Define `docker-compose.test.yml` for blackbox tests:
- Application components under test
- Test runner container (black-box, no internal imports)
- Isolated database with seed data
- All tests runnable via `docker compose -f docker-compose.test.yml up --abort-on-container-exit`
7. Define image tagging strategy: `<registry>/<project>/<component>:<git-sha>` for CI, `latest` for local dev only
## Self-verification
- [ ] Every component has a Dockerfile specification
- [ ] Multi-stage builds specified for all production images
- [ ] Non-root user for all containers
- [ ] Health checks defined for every service
- [ ] `docker-compose.yml` covers all components + dependencies
- [ ] `docker-compose.test.yml` enables black-box testing
- [ ] `.dockerignore` defined
## Save action
Write `containerization.md` using `templates/containerization.md`.
## Blocking
**BLOCKING**: Present containerization plan to user. Do NOT proceed until confirmed.
@@ -0,0 +1,41 @@
# Step 3: CI/CD Pipeline
**Role**: DevOps engineer
**Goal**: Define the CI/CD pipeline with quality gates, security scanning, and multi-environment deployment.
**Constraints**: Pipeline definition only — produce YAML specification, not implementation.
## Steps
1. Read `architecture.md` for tech stack and deployment targets
2. Read `restrictions.md` for CI/CD constraints (cloud provider, registry, etc.)
3. Research CI/CD best practices for the project's platform (GitHub Actions / Azure Pipelines)
4. Define pipeline stages:
| Stage | Trigger | Steps | Quality Gate |
|-------|---------|-------|-------------|
| **Lint** | Every push | Run linters per language (black, rustfmt, prettier, dotnet format) | Zero errors |
| **Test** | Every push | Unit tests, blackbox tests, coverage report | 75%+ coverage (see `.cursor/rules/cursor-meta.mdc` Quality Thresholds) |
| **Security** | Every push | Dependency audit, SAST scan (Semgrep/SonarQube), image scan (Trivy) | Zero critical/high CVEs |
| **Build** | PR merge to dev | Build Docker images, tag with git SHA | Build succeeds |
| **Push** | After build | Push to container registry | Push succeeds |
| **Deploy Staging** | After push | Deploy to staging environment | Health checks pass |
| **Smoke Tests** | After staging deploy | Run critical path tests against staging | All pass |
| **Deploy Production** | Manual approval | Deploy to production | Health checks pass |
5. Define caching strategy: dependency caches, Docker layer caches, build artifact caches
6. Define parallelization: which stages can run concurrently
7. Define notifications: build failures, deployment status, security alerts
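The 75% gate from the Test stage above, sketched for Python's coverage.py 7+ (swap in the stack's own coverage tooling):
```bash
total=$(coverage report --format=total)   # prints the total percentage as a bare number
if [ "${total%.*}" -lt 75 ]; then
  echo "Coverage ${total}% is below the 75% threshold" >&2
  exit 1
fi
```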
## Self-verification
- [ ] All pipeline stages defined with triggers and gates
- [ ] Coverage threshold enforced (75%+)
- [ ] Security scanning included (dependencies + images + SAST)
- [ ] Caching configured for dependencies and Docker layers
- [ ] Multi-environment deployment (staging → production)
- [ ] Rollback procedure referenced
- [ ] Notifications configured
## Save action
Write `ci_cd_pipeline.md` using `templates/ci_cd_pipeline.md`.
@@ -0,0 +1,41 @@
# Step 4: Environment Strategy
**Role**: Platform engineer
**Goal**: Define environment configuration, secrets management, and environment parity.
**Constraints**: Strategy document — no secrets or credentials in output.
## Steps
1. Define environments:
| Environment | Purpose | Infrastructure | Data |
|-------------|---------|---------------|------|
| **Development** | Local developer workflow | docker-compose, local volumes | Seed data, mocks for external APIs |
| **Staging** | Pre-production validation | Mirrors production topology | Anonymized production-like data |
| **Production** | Live system | Full infrastructure | Real data |
2. Define environment variable management:
- Reference `.env.example` created in Step 1
- Per-environment variable sources (`.env` for dev, secret manager for staging/prod)
   - Validation: fail fast on missing required variables at startup (see the sketch after this list)
3. Define secrets management:
- Never commit secrets to version control
- Development: `.env` files (git-ignored)
- Staging/Production: secret manager (AWS Secrets Manager / Azure Key Vault / Vault)
- Rotation policy
4. Define database management per environment:
- Development: Docker Postgres with named volume, seed data
- Staging: managed Postgres, migrations applied via CI/CD
- Production: managed Postgres, migrations require approval
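A minimal fail-fast sketch for the variable validation in step 2; the variable names are hypothetical, take the real list from `.env.example`:
```bash
required_vars="DATABASE_URL API_SECRET_KEY REGISTRY_URL"
for var in $required_vars; do
  if [ -z "$(printenv "$var")" ]; then
    echo "Missing required environment variable: $var" >&2
    exit 1
  fi
done
```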
## Self-verification
- [ ] All three environments defined with clear purpose
- [ ] Environment variable documentation complete (references `.env.example` from Step 1)
- [ ] No secrets in any output document
- [ ] Secret manager specified for staging/production
- [ ] Database strategy per environment
## Save action
Write `environment_strategy.md` using `templates/environment_strategy.md`.
@@ -0,0 +1,60 @@
# Step 5: Observability
**Role**: Site Reliability Engineer (SRE)
**Goal**: Define logging, metrics, tracing, and alerting strategy.
**Constraints**: Strategy document — describe what to implement, not how to wire it.
## Steps
1. Read `architecture.md` and component specs for service boundaries
2. Research observability best practices for the tech stack
## Logging
- Structured JSON to stdout/stderr (no file logging in containers)
- Fields: `timestamp` (ISO 8601), `level`, `service`, `correlation_id`, `message`, `context`
- Levels: ERROR (exceptions), WARN (degraded), INFO (business events), DEBUG (diagnostics, dev only)
- No PII in logs
- Retention: dev = console, staging = 7 days, production = 30 days
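What one log line should look like with the required fields (values are illustrative; `jq` is used here only to assert the shape):
```bash
echo '{"timestamp":"2026-04-22T10:15:00Z","level":"INFO","service":"api","correlation_id":"3f9c2d","message":"order created","context":{"order_id":123}}' \
  | jq -e 'has("timestamp") and has("level") and has("service") and has("correlation_id") and has("message") and has("context")'
```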
## Metrics
- Expose Prometheus-compatible `/metrics` endpoint per service
- System metrics: CPU, memory, disk, network
- Application metrics: `request_count`, `request_duration` (histogram), `error_count`, `active_connections`
- Business metrics: derived from acceptance criteria
- Collection interval: 15s
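A quick spot check that a service exposes the endpoint and the application metrics listed above (host and port are placeholders):
```bash
curl -fsS http://localhost:8080/metrics \
  | grep -E '^(request_count|request_duration|error_count|active_connections)' \
  | head
```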
## Distributed Tracing
- OpenTelemetry SDK integration
- Trace context propagation via HTTP headers and message queue metadata
- Span naming: `<service>.<operation>`
- Sampling: 100% in dev/staging, 10% in production (adjust based on volume)
## Alerting
| Severity | Response Time | Condition Examples |
|----------|---------------|-------------------|
| Critical | 5 min | Service down, data loss, health check failed |
| High | 30 min | Error rate > 5%, P95 latency > 2x baseline |
| Medium | 4 hours | Disk > 80%, elevated latency |
| Low | Next business day | Non-critical warnings |
## Dashboards
- Operations: service health, request rate, error rate, response time percentiles, resource utilization
- Business: key business metrics from acceptance criteria
## Self-verification
- [ ] Structured logging format defined with required fields
- [ ] Metrics endpoint specified per service
- [ ] OpenTelemetry tracing configured
- [ ] Alert severities with response times defined
- [ ] Dashboards cover operations and business metrics
- [ ] PII exclusion from logs addressed
## Save action
Write `observability.md` using `templates/observability.md`.
@@ -0,0 +1,53 @@
# Step 6: Deployment Procedures
**Role**: DevOps / Platform engineer
**Goal**: Define deployment strategy, rollback procedures, health checks, and deployment checklist.
**Constraints**: Procedures document — no implementation.
## Steps
1. Define deployment strategy:
- Preferred pattern: blue-green / rolling / canary (choose based on architecture)
- Zero-downtime requirement for production
- Graceful shutdown: 30-second grace period for in-flight requests
- Database migration ordering: migrate before deploy, backward-compatible only
2. Define health checks (a probe sketch follows this list):
| Check | Type | Endpoint | Interval | Threshold |
|-------|------|----------|----------|-----------|
| Liveness | HTTP GET | `/health/live` | 10s | 3 failures → restart |
| Readiness | HTTP GET | `/health/ready` | 5s | 3 failures → remove from LB |
| Startup | HTTP GET | `/health/ready` | 5s | 30 attempts max |
3. Define rollback procedures:
- Trigger criteria: health check failures, error rate spike, critical alert
- Rollback steps: redeploy previous image tag, verify health, rollback database if needed
- Communication: notify stakeholders during rollback
- Post-mortem: required after every production rollback
4. Define deployment checklist:
- [ ] All tests pass in CI
- [ ] Security scan clean (zero critical/high CVEs)
- [ ] Database migrations reviewed and tested
- [ ] Environment variables configured
- [ ] Health check endpoints responding
- [ ] Monitoring alerts configured
- [ ] Rollback plan documented and tested
- [ ] Stakeholders notified
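A minimal probe sketch matching the health-check table in step 2 (host and port are placeholders; real orchestrators implement this natively):
```bash
probe() {  # probe <url> <max_failures> <interval_seconds>
  url="$1"; max="$2"; interval="$3"; fails=0
  while [ "$fails" -lt "$max" ]; do
    curl -fsS -o /dev/null "$url" && return 0
    fails=$((fails + 1)); sleep "$interval"
  done
  return 1
}
probe "http://app.internal:8080/health/live"  3 10 || echo "liveness failed: restart container"
probe "http://app.internal:8080/health/ready" 3 5  || echo "readiness failed: remove from load balancer"
```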
## Self-verification
- [ ] Deployment strategy chosen and justified
- [ ] Zero-downtime approach specified
- [ ] Health checks defined (liveness, readiness, startup)
- [ ] Rollback trigger criteria and steps documented
- [ ] Deployment checklist complete
## Save action
Write `deployment_procedures.md` using `templates/deployment_procedures.md`.
## Blocking
**BLOCKING**: Present deployment procedures to user. Do NOT proceed until confirmed.
@@ -0,0 +1,70 @@
# Step 7: Deployment Scripts
**Role**: DevOps / Platform engineer
**Goal**: Create executable deployment scripts for pulling Docker images and running services on the remote target machine.
**Constraints**: Produce real, executable shell scripts. This is the ONLY step that creates implementation artifacts.
## Steps
1. Read `containerization.md` and `deployment_procedures.md` from previous steps
2. Read `.env.example` for required variables
3. Create the following scripts in `SCRIPTS_DIR/`:
### `deploy.sh` — Main deployment orchestrator
- Validates that required environment variables are set (sources `.env` if present)
- Calls `pull-images.sh`, then `stop-services.sh`, then `start-services.sh`, then `health-check.sh`
- Exits with non-zero code on any failure
- Supports `--rollback` flag to redeploy previous image tags (a skeleton sketch follows this list)
### `pull-images.sh` — Pull Docker images to target machine
- Reads image list and tags from environment or config
- Authenticates with container registry
- Pulls all required images
- Verifies image integrity (digest check)
### `start-services.sh` — Start services on target machine
- Runs `docker compose up -d` or individual `docker run` commands
- Applies environment variables from `.env`
- Configures networks and volumes
- Waits for containers to reach healthy state
### `stop-services.sh` — Graceful shutdown
- Stops services with graceful shutdown period
- Saves current image tags for rollback reference
- Cleans up orphaned containers/networks
### `health-check.sh` — Verify deployment health
- Checks all health endpoints
- Reports status per service
- Returns non-zero if any service is unhealthy
4. All scripts must:
- Use bash strict mode (`#!/bin/bash` with `set -euo pipefail`); `pipefail` is a bashism, so target bash rather than plain POSIX `sh`
- Source `.env` from project root or accept env vars from the environment
- Include usage/help output (`--help` flag)
- Be idempotent where possible
- Handle SSH connection to remote target (configurable via `DEPLOY_HOST` env var)
5. Document all scripts in `deploy_scripts.md`
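A skeleton sketch of `deploy.sh` following the script layout in step 3 and the requirements in step 4; everything project-specific is a placeholder:
```bash
#!/bin/bash
set -euo pipefail

SCRIPTS_DIR="$(cd "$(dirname "$0")" && pwd)"
[ -f .env ] && { set -a; . ./.env; set +a; }   # source project-root .env if present

usage() { echo "Usage: $0 [--rollback] [--help]"; }

rollback=0
case "${1:-}" in
  --help) usage; exit 0 ;;
  --rollback) rollback=1 ;;
esac

: "${DEPLOY_HOST:?DEPLOY_HOST must be set (remote target, e.g. deploy@prod-host)}"

if [ "$rollback" -eq 1 ]; then
  echo "Rolling back to the image tags recorded by stop-services.sh"
  # pull/start steps below would consume the recorded tags instead of the current ones
fi

"$SCRIPTS_DIR/pull-images.sh"      # registry auth + pulls; SSH to $DEPLOY_HOST handled inside
"$SCRIPTS_DIR/stop-services.sh"
"$SCRIPTS_DIR/start-services.sh"
"$SCRIPTS_DIR/health-check.sh"
echo "Deployment complete"
```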
## Self-verification
- [ ] All five scripts created and executable
- [ ] Scripts source environment variables correctly
- [ ] `deploy.sh` orchestrates the full flow
- [ ] `pull-images.sh` handles registry auth and image pull
- [ ] `start-services.sh` starts containers with correct config
- [ ] `stop-services.sh` handles graceful shutdown
- [ ] `health-check.sh` validates all endpoints
- [ ] Rollback supported via `deploy.sh --rollback`
- [ ] Scripts work for remote deployment via SSH (`DEPLOY_HOST`)
- [ ] `deploy_scripts.md` documents all scripts
## Save action
Write scripts to `SCRIPTS_DIR/`. Write `deploy_scripts.md` using `templates/deploy_scripts.md`.
@@ -142,6 +142,37 @@ Re-entry is seamless: `state.json` tracks exactly which modules are done.
---
### Step 2.5: Module Layout Derivation
**Role**: Software architect
**Goal**: Produce `_docs/02_document/module-layout.md` — the authoritative file-ownership map read by `/implement` Step 4, `/code-review` Phase 7, and `/refactor` discovery. Required for any downstream skill that assigns file ownership or checks architectural layering.
This step derives the layout from the **existing** codebase rather than from a plan. Decompose Step 1.5 is the greenfield counterpart; both use the same template and output shape, so downstream consumers don't branch on origin.
1. For each component identified in Step 2, resolve its owning directory from module docs (Step 1) and from directory groupings used in Step 2.
2. For each component, compute:
- **Public API**: exported symbols. Language-specific: Python — `__init__.py` re-exports + non-underscore root-level symbols; TypeScript — `index.ts` / barrel exports; C# — `public` types in the namespace root; Rust — `pub` items in `lib.rs` / `mod.rs`; Go — exported (capitalized) identifiers in the package root.
- **Internal**: everything else under the component's directory.
- **Owns**: the component's directory glob.
- **Imports from**: other components whose Public API this one references (parse imports; reuse tooling from Step 0's dependency graph).
- **Consumed by**: reverse of Imports from across all components.
3. Identify `shared/*` directories already present in the code (or infer candidates: modules imported by ≥2 components and owning no domain logic). Create a Shared / Cross-Cutting entry per concern.
4. Infer the Allowed Dependencies layering table by topologically sorting the import graph built in step 2. Components that import only from `shared/*` go to Layer 1; each successive layer imports only from lower layers.
5. Write `_docs/02_document/module-layout.md` using `.cursor/skills/decompose/templates/module-layout.md`. At the top of the file add `**Status**: derived-from-code` and a `## Verification Needed` block listing any inference that was not clean (detected cycles, ambiguous ownership, components not cleanly assignable to a layer).
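A rough grounding check for the Python rule in step 2 (paths are placeholders; other languages follow the per-language rules above):
```bash
# Symbols re-exported or defined at the package root count as Public API
rg -n "^from \.|^__all__" src/ingestor/__init__.py
rg -n "^(def|class) [A-Za-z]" src/ingestor/__init__.py
```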
**Self-verification**:
- [ ] Every component from Step 2 has a Per-Component Mapping entry
- [ ] Every Public API list is grounded in an actual exported symbol (no guesses)
- [ ] No component's `Imports from` points at a component in a higher layer
- [ ] Shared directories detected in code are listed under Shared / Cross-Cutting
- [ ] Cycles from Step 0 that span components are surfaced in `## Verification Needed`
**Save**: `_docs/02_document/module-layout.md`
**BLOCKING**: Present the layering table and the `## Verification Needed` block to the user. Do NOT proceed until the user confirms (or patches) the derived layout. Downstream skills assume this file is accurate.
---
### Step 3: System-Level Synthesis
**Role**: Software architect
@@ -358,6 +389,8 @@ Using `.cursor/skills/plan/templates/final-report.md` as structure:
│ (batched ~5 modules; session break between batches) │
│ 2. Component Assembly → group modules, write component specs │
│ [BLOCKING: user confirms components] │
│ 2.5 Module Layout → derive module-layout.md from code │
│ [BLOCKING: user confirms layout] │
│ 3. System Synthesis → architecture, flows, data model, deploy │
│ 4. Verification → compare all docs vs code, fix errors │
│ [BLOCKING: user reviews corrections] │
@@ -26,6 +26,27 @@ One or more task spec files from `_docs/02_tasks/todo/` or `_docs/02_tasks/done/
- System-level docs (only if the task changed API endpoints, data models, or external integrations)
- Problem-level docs (only if the task changed input parameters, acceptance criteria, or restrictions)
### Task Step 0.5: Import-Graph Ripple
A module that changed may be imported by other modules whose docs are now stale even though those other modules themselves were not directly edited. Compute the reverse-dependency set and fold it into the update list.
1. For each source file in the set of changed files from Step 0, build its module-level identifier (Python module path, C# namespace, Rust module path, TS import-specifier, Go package path — depending on the project language).
2. Search the codebase for files that import from any of those identifiers. Preferred tooling per language:
- **Python**: `rg -e "^(from|import) <module>"` then parse with `ast` to confirm actual symbol use.
- **TypeScript / JavaScript**: `rg "from ['\"].*<path>"` then resolve via `tsconfig.json` paths / `jsconfig.json` if present.
- **C#**: `rg "^using <namespace>"` plus `.csproj` `ProjectReference` graph.
- **Rust**: `rg "use <crate>::"` plus `Cargo.toml` workspace members.
- **Go**: `rg "\"<module-path>\""` plus `go.mod` requires.
If a static analyzer is available for the project (e.g., `pydeps`, `madge`, `depcruise`, `NDepend`, `cargo modules`, `go list -deps`), prefer its output — it is more reliable than regex.
3. For each importing file found, look up the component it belongs to via `_docs/02_document/module-layout.md` (if present) or by directory match against `DOCUMENT_DIR/components/`.
4. Add every such component and module to the update list, even if it was not in the current cycle's task spec.
5. Produce `_docs/02_document/ripple_log_cycle<N>.md` (where `<N>` is `state.cycle` from `_docs/_autodev_state.md`, default `1`) listing each downstream doc that was added to the refresh set and the reason (which changed file triggered it). Example line:
```
- docs/components/02_ingestor.md — refreshed because src/ingestor/queue.py imports src/shared/serializer.py (changed by AZ-173)
```
6. When parsing imports fails (missing tooling, unsupported language), log the parse failure in the ripple log and fall back to a directory-proximity heuristic: any component whose source directory contains files matching the changed-file basenames. Note: heuristic mode is explicitly marked in the log so the user can request a manual pass.
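A sketch of step 2 for a single changed Python module; the module path and component lookup are illustrative, and a real dependency tool is preferable when available:
```bash
changed="src.shared.serializer"
pattern="^(from|import) ${changed//./\\.}"
rg -l --type py -e "$pattern" | while read -r importer; do
  echo "ripple: $importer imports $changed"
  # map $importer to its component via module-layout.md before adding it to the refresh set
done
```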
### Task Step 1: Module Doc Updates
For each affected module:
@@ -78,6 +99,7 @@ Present a summary of all docs updated:
Component docs updated: [count]
System-level docs updated: [list or "none"]
Problem-level docs updated: [list or "none"]
Ripple-refreshed docs (imports changed indirectly): [count, see ripple_log_cycle<N>.md]
══════════════════════════════════════
```
@@ -28,7 +28,7 @@ The `implementer` agent is the specialist that writes all the code — it receiv
- **Integrated review**: `/code-review` skill runs automatically after each batch
- **Auto-start**: batches launch immediately — no user confirmation before a batch
- **Gate on failure**: user confirmation is required only when code review returns FAIL
- **Commit per batch**: after each batch is confirmed, commit. Ask the user whether to push to remote unless the user previously opted into auto-push for this session.
## Context Resolution
@@ -51,6 +51,13 @@ TASKS_DIR/
1. `TASKS_DIR/todo/` exists and contains at least one task file — **STOP if missing**
2. `_dependencies_table.md` exists — **STOP if missing**
3. At least one task is not yet completed — **STOP if all done**
4. **Working tree is clean** — run `git status --porcelain`; the output must be empty.
- If dirty, STOP and present the list of changed files to the user via the Choose format:
- A) Commit or stash stray changes manually, then re-invoke `/implement`
- B) Agent commits stray changes as a single `chore: WIP pre-implement` commit and proceeds
- C) Abort
- Rationale: implementer subagents edit files in parallel and commit per batch. Unrelated uncommitted changes get silently folded into batch commits otherwise.
- This check is repeated at the start of each batch iteration (see step 6 / step 14 Loop).
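The check itself is a one-liner; a sketch of the gate (the A/B/C resolution stays with the user):
```bash
if [ -n "$(git status --porcelain)" ]; then
  echo "Working tree is not clean; resolve before /implement launches subagents:" >&2
  git status --porcelain >&2
  exit 1
fi
```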
## Algorithm
@@ -77,11 +84,21 @@ TASKS_DIR/
### 4. Assign File Ownership
The authoritative file-ownership map is `_docs/02_document/module-layout.md` (produced by the decompose skill's Step 1.5). Task specs are purely behavioral — they do NOT carry file paths. Derive ownership from the layout, not from the task spec's prose.
For each task in the batch:
- Read the task spec's **Component** field.
- Look up the component in `_docs/02_document/module-layout.md` → Per-Component Mapping.
- Set **OWNED** = the component's `Owns` glob (exclusive write for the duration of the batch).
- Set **READ-ONLY** = Public API files of every component in the component's `Imports from` list, plus all `shared/*` Public API files.
- Set **FORBIDDEN** = every other component's `Owns` glob, and every other component's internal (non-Public API) files.
- If the task is a shared / cross-cutting task (lives under `shared/*`), OWNED = that shared directory; READ-ONLY = nothing; FORBIDDEN = every component directory.
- If two tasks in the same batch map to the same component or overlapping `Owns` globs, schedule them sequentially instead of in parallel.
If `_docs/02_document/module-layout.md` is missing or the component is not found:
- STOP the batch.
- Instruct the user to run `/decompose` Step 1.5 or to manually add the component entry to `module-layout.md`.
- Do NOT guess file paths from the task spec — that is exactly the drift this file exists to prevent.
### 5. Update Tracker Status → In Progress
@@ -89,6 +106,8 @@ For each task in the batch, transition its ticket status to **In Progress** via
### 6. Launch Implementer Subagents
**Per-batch dirty-tree re-check**: before launching subagents, run `git status --porcelain`. On the first batch this is guaranteed clean by the prerequisite check. On subsequent batches, the previous batch ended with a commit so the tree should still be clean. If the tree is dirty at this point, STOP and surface the dirty files to the user using the same A/B/C choice as the prerequisite check. The most likely causes are a failed commit in the previous batch, a user who edited files mid-loop, or a pre-commit hook that re-wrote files and was not captured.
For each task in the batch, launch an `implementer` subagent with:
- Path to the task spec file
- List of files OWNED (exclusive write access)
@@ -134,25 +153,39 @@ Only proceed to Step 9 when every AC has a corresponding test.
### 10. Auto-Fix Gate
Bounded auto-fix loop — only applies to **mechanical** findings. Critical and Security findings are never auto-fixed.
**Auto-fix eligibility matrix:**
| Severity | Category | Auto-fix? |
|----------|----------|-----------|
| Low | any | yes |
| Medium | Style, Maintainability, Performance | yes |
| Medium | Bug, Spec-Gap, Security, Architecture | escalate |
| High | Style, Scope | yes |
| High | Bug, Spec-Gap, Performance, Maintainability, Architecture | escalate |
| Critical | any | escalate |
| any | Security | escalate |
| any | Architecture (cyclic deps) | escalate |
Flow:
1. If verdict is **PASS** or **PASS_WITH_WARNINGS**: show findings as info, continue to step 11
2. If verdict is **FAIL**:
- Partition findings into auto-fix-eligible and escalate (using the matrix above)
- For eligible findings, attempt fixes using location/description/suggestion, then re-run `/code-review` on modified files (max 2 rounds)
- If all remaining findings are auto-fix-eligible and re-review now passes → continue to step 11
- If any non-eligible finding exists at any point → stop auto-fixing, present the full list to the user (**BLOCKING**)
3. User must explicitly approve each non-auto-fix finding (accept, request manual fix, mark as out-of-scope) before proceeding.
Track `auto_fix_attempts` and `escalated_findings` in the batch report for retrospective analysis.
### 11. Commit (and optionally Push)
- After user confirms the batch (explicitly for FAIL, implicitly for PASS/PASS_WITH_WARNINGS):
- `git add` all changed files from the batch
- `git commit` with a message that includes ALL task IDs (tracker IDs or numeric prefixes) of tasks implemented in the batch, followed by a summary of what was implemented. Format: `[TASK-ID-1] [TASK-ID-2] ... Summary of changes`
- Ask the user whether to push to remote, unless the user previously opted into auto-push for this session
### 12. Update Tracker Status → In Testing
@@ -166,6 +199,23 @@ Move each completed task file from `TASKS_DIR/todo/` to `TASKS_DIR/done/`.
- Go back to step 2 until all tasks in `todo/` are done
### 14.5. Cumulative Code Review (every K batches)
- **Trigger**: every K completed batches (default `K = 3`; configurable per run via a `cumulative_review_interval` knob in the invocation context)
- **Purpose**: per-batch review (Step 9) catches batch-local issues; cumulative review catches issues that only appear when tasks are combined — architecture drift, cross-task inconsistency, duplicate symbols introduced across different batches, contracts that drifted across producer/consumer batches
- **Scope**: the union of files changed since the **last** cumulative review (or since the start of the run if this is the first)
- **Action**: invoke `.cursor/skills/code-review/SKILL.md` in **cumulative mode**. All 7 phases run, with emphasis on Phase 6 (Cross-Task Consistency), Phase 7 (Architecture Compliance), and duplicate-symbol detection across the accumulated code
- **Output**: write the report to `_docs/03_implementation/cumulative_review_batches_[NN-MM]_cycle[N]_report.md` where `[NN-MM]` is the batch range covered and `[N]` is the current `state.cycle`. When `_docs/02_document/architecture_compliance_baseline.md` exists, the report includes the `## Baseline Delta` section (carried over / resolved / newly introduced) per `code-review/SKILL.md` "Baseline delta".
- **Gate**:
- `PASS` or `PASS_WITH_WARNINGS` → continue to next batch (step 14 loop)
- `FAIL` → STOP. Present the report to the user via the Choose format:
- A) Auto-fix findings using the Auto-Fix Gate matrix in step 10, then re-run cumulative review
- B) Open a targeted refactor run (invoke refactor skill in guided mode with the findings as `list-of-changes.md`)
- C) Manually fix, then re-invoke `/implement`
- Do NOT loop to the next batch on `FAIL` — the whole point is to stop drift before it compounds
- **Interaction with Auto-Fix Gate**: Architecture findings (new category from code-review Phase 7) always escalate per the implement auto-fix matrix; they cannot silently auto-fix
- **Resumability**: if interrupted, the next invocation checks for the latest `cumulative_review_batches_*.md` and computes the changed-file set from batch reports produced after that review
### 15. Final Test Run
- After all batches are complete, run the full test suite once
@@ -175,13 +225,15 @@ Move each completed task file from `TASKS_DIR/todo/` to `TASKS_DIR/done/`.
## Batch Report Persistence
After each batch completes, save the batch report to `_docs/03_implementation/batch_[NN]_cycle[N]_report.md` for feature implementation (or `batch_[NN]_report.md` for test/refactor runs). Create the directory if it doesn't exist. When all tasks are complete, produce a FINAL implementation report with a summary of all batches. The filename depends on context:
- **Test implementation** (tasks from test decomposition): `_docs/03_implementation/implementation_report_tests.md`
- **Feature implementation**: `_docs/03_implementation/implementation_report_{feature_slug}_cycle{N}.md` where `{feature_slug}` is derived from the batch task names (e.g., `implementation_report_core_api_cycle2.md`) and `{N}` is the current `state.cycle` from `_docs/_autodev_state.md`. If `state.cycle` is absent (pre-migration), default to `cycle1`.
- **Refactoring**: `_docs/03_implementation/implementation_report_refactor_{run_name}.md`
Determine the context from the task files being implemented: if all tasks have test-related names or belong to a test epic, use the tests filename; otherwise derive the feature slug from the component names and append the cycle suffix.
Batch report filenames must also include the cycle counter when running feature implementation: `_docs/03_implementation/batch_{NN}_cycle{N}_report.md` (test and refactor runs may use the plain `batch_{NN}_report.md` form since they are not cycle-scoped).
## Batch Report
@@ -224,7 +276,7 @@ After each batch, produce a structured report:
Each batch commit serves as a rollback checkpoint. If recovery is needed:
- **Tests fail after final test run**: `git revert <batch-commit-hash>` using hashes from the batch reports in `_docs/03_implementation/`
- **Resuming after interruption**: Read `_docs/03_implementation/batch_*_report.md` files (filtered by current `state.cycle` for feature implementation) to determine which batches completed, then continue from the next batch
- **Multiple consecutive batches fail**: Stop and escalate to user with links to batch reports and commit hashes
## Safety Rules
@@ -0,0 +1,164 @@
---
name: monorepo-cicd
description: Syncs CI/CD and infrastructure configuration at the monorepo root (compose files, install scripts, env templates, CI service tables) after one or more components changed. Reads `_docs/_repo-config.yaml` (produced by monorepo-discover) to know which CI artifacts are in play and how they're structured. Touches ONLY CI/infra files — never documentation, component directories, or per-component CI configs. Use when a component added/changed a Dockerfile path, port, env var, image tag format, or runtime dependency.
---
# Monorepo CI/CD
Propagates component changes into the repo-level CI/CD and infrastructure artifacts. Strictly scoped — never edits docs, component internals, or per-component CI configs.
## Scope — explicit
| In scope | Out of scope |
| -------- | ------------ |
| `docker-compose.*.yml` at repo root | Unified docs in `_docs/*.md` → use `monorepo-document` |
| `.env.example` / `.env.template` | Root `README.md` documentation → `monorepo-document` |
| Install scripts (`ci-*.sh`, `setup.sh`, etc.) | Per-component CI configs (`<component>/.woodpecker/*`, `<component>/.github/*`) |
| CI service-registry docs (`ci_steps.md` or similar — the human-readable index of pipelines; in scope only if the config says so under `ci.service_registry_doc`) | Component source code, Dockerfiles, or internal docs |
| Kustomization / Helm manifests at repo root | `_docs/_repo-config.yaml` itself (only `monorepo-discover` and `monorepo-onboard` write it) |
If a component change needs doc updates too, tell the user to also run `monorepo-document`.
**Special case**: `ci.service_registry_doc` (e.g., `ci_steps.md`) is a **CI artifact that happens to be markdown**. It's in this skill's scope, not `monorepo-document`'s, because it describes the pipeline/service topology — not user-facing feature docs.
## Preconditions (hard gates)
1. `_docs/_repo-config.yaml` exists.
2. Top-level `confirmed_by_user: true`.
3. `ci.*` section is populated in config (not empty).
4. Components-in-scope have confirmed CI mappings, OR user explicitly approves inferred ones.
If any gate fails, redirect to `monorepo-discover` or ask for confirmation.
## Mitigations (M1–M7)
- **M1** Separation: this skill only touches CI/infra files; no docs, no component internals.
- **M3** Factual vs. interpretive: image tag format, port numbers, env var names — FACTUAL, read from code. Doc cross-references — out of scope entirely (belongs to `monorepo-document`).
- **M4** Batch questions at checkpoints.
- **M5** Skip over guess: component with no CI mapping → skip and report.
- **M6** Assumptions footer + append to `_repo-config.yaml` `assumptions_log`.
- **M7** Drift detection: verify every file in `ci.orchestration_files`, `ci.install_scripts`, `ci.env_template` exists; stop if not.
## Workflow
### Phase 1: Drift check (M7)
Verify every CI file listed in config exists on disk. Missing file → stop, ask user:
- Run `monorepo-discover` to refresh, OR
- Skip the missing file (recorded in report)
Do NOT recreate missing infra files automatically.
### Phase 2: Determine scope
Ask the user (unless specified):
> Which components changed? (a) list them, (b) auto-detect, (c) skip detection (I'll apply specific changes).
For **auto-detect**, for each component:
```bash
git -C <path> log --oneline -20 # submodule
# or
git log --oneline -20 -- <path> # monorepo subfolder
```
Flag commits that touch CI-relevant concerns:
- Dockerfile additions, renames, or path changes
- CI pipeline files (`<component>/.woodpecker/*`, `<component>/.github/workflows/*`, etc.)
- New exposed ports
- New environment variables consumed by the component
- Changes to image name / tag format
- New dependency on another service (e.g., new DB, new broker)
Present the flagged list; confirm.
### Phase 3: Classify changes per component
| Change type | Target CI files |
| ----------- | --------------- |
| Dockerfile path moved/renamed | `ci.service_registry_doc` service table; per-component CI is OUT OF SCOPE (tell user to update it) |
| New port exposed | `ci.service_registry_doc` ports section (if infra port); component's service block in orchestration file |
| Registry URL changed | `ci.install_scripts` (all of them); `ci.env_template`; `ci.service_registry_doc` |
| Branch naming convention changed | All `ci.install_scripts`; all `ci.orchestration_files` referencing the branch; `ci.service_registry_doc` |
| New runtime env var | `ci.env_template`; component's service block in orchestration file |
| New infrastructure component (DB, cache, broker) | Relevant `ci.orchestration_files`; `ci.service_registry_doc` architecture section |
| New image tag format | All `ci.orchestration_files`; `ci.install_scripts`; `ci.service_registry_doc` |
| Watchtower/polling config change | Specific `ci.orchestration_files`; `ci.service_registry_doc` |
If a change type isn't covered here or in the config, add to an unresolved list and skip (M5).
### Phase 4: Apply edits
For each (change → target file) pair:
1. Read the target file.
2. Locate the service block / table row / section.
3. Edit carefully:
- **Orchestration files (compose/kustomize/helm)**: YAML; preserve indentation, anchors, and references exactly. Match existing service-block structure. Never reformat unchanged lines.
- **Install scripts (`*.sh`)**: shell; any edit must remain **idempotent**. Re-running the script on an already-configured host must not break it. If an edit cannot be made idempotent, flag for the user and skip.
- **`.env.example`**: append new vars at the appropriate section; never remove user's local customizations (file is in git, so comments may be significant).
- **`ci.service_registry_doc`** (markdown): preserve column widths, ordering (alphabetical or compose-order — whichever existed), ASCII diagrams.
### Phase 5: Skip-and-report (M5)
Skip a component if:
- No `ci_config` in its config entry AND no entry in config's CI mappings
- `confirmed: false` on its mapping and user didn't approve
- Component's Dockerfile path declared in config doesn't exist on disk — surface contradiction
- Change type unrecognized — skip, report for manual handling
### Phase 6: Idempotency / lint check
- Shell: if `shellcheck` available, run on any edited `*.sh`.
- YAML: if `yamllint` or `prettier` available, run on edited `*.yml` / `*.yaml`.
- For edited install scripts, **mentally re-run** the logic: would a second invocation crash, duplicate, or corrupt? Flag anything that might.
Skip linters silently if none configured — don't install tools.
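A sketch of the conditional lint pass; the file list is a placeholder and the tools run only if already installed:
```bash
for f in docker-compose.run.yml ci-server-install.sh; do
  case "$f" in
    *.sh)         command -v shellcheck >/dev/null 2>&1 && shellcheck "$f" ;;
    *.yml|*.yaml) command -v yamllint  >/dev/null 2>&1 && yamllint  "$f" ;;
  esac
done
```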
### Phase 7: Report + assumptions footer (M6)
```
monorepo-cicd run complete.
CI files updated (N):
- docker-compose.run.yml — added `loader` service block
- .env.example — added LOADER_BUCKET_NAME placeholder
- ci_steps.md — added `loader` row in service table
Skipped (K):
- satellite-provider: no ci_config in repo-config.yaml
- detections: Dockerfile path in config (admin/src/Dockerfile) does not exist on disk
Manual actions needed (M):
- Update `<submodule>/.woodpecker/*.yml` inside the submodule's own workspace
(per-component CI is not maintained by this skill)
Assumptions used this run:
- image tag format: ${REGISTRY}/${NAME}:${BRANCH}-${ARCH_TAG} (confirmed in config)
- target branch for triggers: [stage, main] (confirmed in config)
Next step: review the diff, then commit with
`<commit_prefix> Sync CI after <components>` (or your own message).
```
Append run entry to `_docs/_repo-config.yaml` `assumptions_log:`.
## What this skill will NEVER do
- Modify files inside component directories
- Edit unified docs under `docs.root`
- Edit per-component CI configs (`.woodpecker/*`, `.github/*`, etc.)
- Auto-generate CI pipeline YAML for components (only provide template guidance)
- Set `confirmed_by_user` or `confirmed:` flags
- Auto-commit
- Install tools (shellcheck, yamllint, etc.) — use if present, skip if absent
## Edge cases
- **Compose file has service blocks for components NOT in config**: note in report; ask user whether to rediscover (`monorepo-discover`) or leave them alone.
- **`.env.example` has entries for removed components**: don't auto-remove; flag to user.
- **Install script edit cannot be made idempotent**: don't save; ask user to handle manually.
- **Branch trigger vs. runtime branch mismatch**: if config says triggers are `[stage, main]` but a compose file references a branch tag `develop`, stop and ask.
@@ -0,0 +1,182 @@
---
name: monorepo-discover
description: Scans a monorepo or meta-repo (git-submodule aggregators, npm/cargo workspaces, etc.) and generates a human-reviewable `_docs/_repo-config.yaml` that other `monorepo-*` skills (document, cicd, onboard, status) read. Produces inferred mappings tagged with evidence; never writes to the config's `confirmed_by_user` flag — the human does that. Use on first setup in a new monorepo, or to refresh the config after structural changes.
---
# Monorepo Discover
Writes or refreshes `_docs/_repo-config.yaml` — the shared config file that every other `monorepo-*` skill depends on. Does NOT modify any other files.
## Core principle
**Discovery is a suggestion, not a commitment.** The skill infers repo structure, but every inferred entry is tagged with `confirmed: false` + evidence. Action skills (`monorepo-document`, `monorepo-cicd`, `monorepo-onboard`) refuse to run until the human reviews the config and sets `confirmed_by_user: true`.
## Mitigations against LLM inference errors (applies throughout)
| Rule | What it means |
| ---- | ------------- |
| **M1** Separation | This skill never triggers other skills. It stops after writing config. |
| **M2** Evidence thresholds | No mapping gets recorded without at least one signal (name match, textual reference, directory convention, explicit statement). Zero-signal candidates go under `unresolved:` with a question. |
| **M3** Factual vs. interpretive | Resolve factual questions alone (file exists? line says what?). Ask for interpretive ones (does A feed into B?) unless M2 evidence is present. Ask for conventional ones always (commit prefix? target branch?). |
| **M4** Batch questions | Accumulate all `unresolved:` questions. Present at end of discovery, not drip-wise. |
| **M5** Skip over guess | Never record a zero-evidence mapping under `components:` or `docs:` — always put it in `unresolved:` with a question. |
| **M6** Assumptions footer | Every run ends with an explicit list of assumptions used. Also append to `assumptions_log:` in the config. |
| **M7** Structural drift | If the config already exists, produce a diff of what would change and ask for approval before overwriting. Never silently regenerate. |
## Guardrail
**This skill writes ONLY `_docs/_repo-config.yaml`.** It never edits unified docs, CI files, or component directories. If the workflow ever pushes you to modify anything else, stop.
## Workflow
### Phase 1: Detect repo type
Check which of these exists (first match wins):
1. `.gitmodules` → **git-submodules meta-repo**
2. `package.json` with `workspaces` field → **npm/yarn/pnpm workspace**
3. `pnpm-workspace.yaml` → **pnpm workspace**
4. `Cargo.toml` with `[workspace]` section → **cargo workspace**
5. `go.work` → **go workspace**
6. Multiple top-level subfolders each with their own `package.json` / `Cargo.toml` / `pyproject.toml` / `*.csproj` → **ad-hoc monorepo**
If none match → **ask the user** what kind of monorepo this is. Don't guess.
Record in `repo.type` and `repo.component_registry`.
### Phase 2: Enumerate components
Based on repo type, parse the registry and list components. For each collect:
- `name`, `path`
- `stack` — infer from files present (`.csproj` → .NET, `pyproject.toml` → Python, `Cargo.toml` → Rust, `package.json` → Node/TS, `go.mod` → Go). Multiple signals → pick dominant one. No signals → `stack: unknown` and add to `unresolved:`.
- `evidence` — list of signals used (e.g., `[gitmodules_entry, csproj_present]`)
Do NOT yet populate `primary_doc`, `secondary_docs`, `ci_config`, or `deployment_tier` — those come in Phases 4 and 5.
### Phase 3: Locate docs root
Probe in order: `_docs/`, `docs/`, `documentation/`, or a root-level README with links to sub-docs.
- Multiple candidates → ask user which is canonical
- None → `docs.root: null` + flag under `unresolved:`
Once located, classify each `*.md`:
- **Primary doc** — filename or H1 names a component/feature
- **Cross-cutting doc** — describes repo-wide concerns (architecture, schema, auth, index)
- **Index** — `README.md`, `index.md`, or `_index.md`
Detect filename convention (e.g., `NN_<name>.md`) and next unused prefix.
### Phase 4: Map components to docs (inference, M2-gated)
For each component, attempt to find its **primary doc** using the evidence rules. A mapping qualifies for `components:` (with `confirmed: false`) if at least ONE of these holds:
- **Name match** — component name appears in the doc filename OR H1
- **Textual reference** — doc body explicitly names the component path or git URL
- **Directory convention** — doc lives inside the component's folder
- **Explicit statement** — README, index, or comment asserts the mapping
No signal → entry goes under `unresolved:` with an A/B/C question, NOT under `components:` as a guess.
Cross-cutting docs go in `docs.cross_cutting:` with an `owns:` list describing what triggers updates to them. If you can't classify a doc, add an `unresolved:` entry asking the user.
### Phase 5: Detect CI tooling
Probe at repo root AND per-component for CI configs:
- `.github/workflows/*.yml` → GitHub Actions
- `.gitlab-ci.yml` → GitLab CI
- `.woodpecker/` or `.woodpecker.yml` → Woodpecker
- `.drone.yml` → Drone
- `Jenkinsfile` → Jenkins
- `bitbucket-pipelines.yml` → Bitbucket
- `azure-pipelines.yml` → Azure Pipelines
- `.circleci/config.yml` → CircleCI
Probe for orchestration/infra at root:
- `docker-compose*.yml`
- `kustomization.yaml`, `helm/`
- `Makefile` with build/deploy targets
- `*-install.sh`, `*-setup.sh`
- `.env.example`, `.env.template`
Record under `ci:`. For image tag formats, grep compose files for `image:` lines and record the pattern (e.g., `${REGISTRY}/${NAME}:${BRANCH}-${ARCH}`).
Anything ambiguous → `unresolved:` entry.
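A probe sketch for the image-tag format; compose filenames are whatever the repo actually contains:
```bash
rg -N "^\s*image:" docker-compose*.yml | sort -u
```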
### Phase 6: Detect conventions
- **Commit prefix**: `git log --format=%s -50` → look for `[PREFIX]` consistency
- **Target/work branch**: check CI config trigger branches; fall back to `git remote show origin`
- **Ticket ID pattern**: grep commits and docs for regex like `[A-Z]+-\d+`
- **Image tag format**: see Phase 5
- **Deployment tiers**: scan root README and architecture docs for named tiers/environments
Record inferred conventions with `confirmed: false`.
### Phase 7: Read existing config (if any) and produce diff
If `_docs/_repo-config.yaml` already exists:
1. Parse it.
2. Compare against what Phases 1–6 discovered.
3. Produce a **diff report**:
- Entries added (new components, new docs)
- Entries changed (e.g., `primary_doc` changed due to doc renaming)
- Entries removed (component removed from registry)
4. **Ask the user** whether to apply the diff.
5. If applied, **preserve `confirmed: true` flags** for entries that still match — don't reset human-approved mappings.
6. If user declines, stop — leave config untouched.
### Phase 8: Batch question checkpoint (M4)
Present ALL accumulated `unresolved:` questions in one round. For each offer options when possible (A/B/C), open-ended only when no options exist.
After answers, update the draft config with the resolutions.
### Phase 9: Write config file
Write `_docs/_repo-config.yaml` using the schema in [templates/repo-config.example.yaml](templates/repo-config.example.yaml).
- Top-level `confirmed_by_user: false` ALWAYS — only the human flips this
- Every entry has `confirmed: <bool>` and (when `false`) `evidence: [...]`
- Append to `assumptions_log:` a new entry for this run
### Phase 10: Review handoff + assumptions footer (M6)
Output:
```
Generated/refreshed _docs/_repo-config.yaml:
- N components discovered (X confirmed, Y inferred, Z unresolved)
- M docs located (K primary, L cross-cutting)
- CI tooling: <detected>
- P unresolved questions resolved this run; Q still open — see config
- Assumptions made during discovery:
- Treated <path> as unified-docs root (only candidate found)
- Inferred `<component>` primary doc = `<doc>` (name match)
- Commit prefix `<prefix>` seen in N of last 20 commits
Next step: please review _docs/_repo-config.yaml, correct any wrong inferences,
and set `confirmed_by_user: true` at the top. After that, monorepo-document,
monorepo-cicd, monorepo-status, and monorepo-onboard will run.
```
Then stop.
## What this skill will NEVER do
- Modify any file other than `_docs/_repo-config.yaml`
- Set `confirmed_by_user: true`
- Record a mapping with zero evidence
- Chain to another skill automatically
- Commit the generated config
## Failure / ambiguity handling
- Internal contradictions in a component (README references files not in code) → surface to user, stop, do NOT silently reconcile
- Docs root cannot be located → record `docs.root: null` and list unresolved question; do not create a new `_docs/` folder
- Parsing fails on `_docs/_repo-config.yaml` (existing file is corrupt) → surface to user, stop; never overwrite silently
@@ -0,0 +1,172 @@
# _docs/_repo-config.yaml — schema and example
#
# Generated by monorepo-discover. Reviewed by a human. Consumed by:
# - monorepo-document (reads docs.* and components.*.primary_doc/secondary_docs)
# - monorepo-cicd (reads ci.* and components.*.ci_config)
# - monorepo-onboard (reads all sections; writes new component entries)
# - monorepo-status (reads all sections; writes nothing)
#
# Every entry has a `confirmed:` flag:
# true = human reviewed and approved
# false = inferred by monorepo-discover; needs review
# And an `evidence:` list documenting why discovery made the inference.
# ---------------------------------------------------------------------------
# Metadata
# ---------------------------------------------------------------------------
version: 1
last_updated: 2026-04-17
confirmed_by_user: false # HUMAN ONLY: flip to true after reviewing
# ---------------------------------------------------------------------------
# Repo identity
# ---------------------------------------------------------------------------
repo:
name: example-monorepo
type: git-submodules # git-submodules | npm-workspaces | cargo-workspace | pnpm-workspace | go-workspace | adhoc
component_registry: .gitmodules
root_readme: README.md
work_branch: dev
# ---------------------------------------------------------------------------
# Components
# ---------------------------------------------------------------------------
components:
- name: annotations
path: annotations/
stack: .NET 10
confirmed: true
evidence: [gitmodules_entry, csproj_present]
primary_doc: _docs/01_annotations.md
secondary_docs:
- _docs/00_database_schema.md
- _docs/00_roles_permissions.md
ci_config: annotations/.woodpecker/
deployment_tier: api-layer
ports:
- "5001/http"
depends_on: []
env_vars:
- ANNOTATIONS_DB_URL
- name: loader
path: loader/
stack: Python 3.12
confirmed: false # inferred, needs review
evidence: [gitmodules_entry, pyproject_present]
primary_doc: _docs/07_admin.md
primary_doc_section: "Model delivery"
secondary_docs:
- _docs/00_top_level_architecture.md
ci_config: loader/.woodpecker/
deployment_tier: edge
ports: []
depends_on: [admin]
env_vars: []
# ---------------------------------------------------------------------------
# Documentation
# ---------------------------------------------------------------------------
docs:
root: _docs/
index: _docs/README.md
file_convention: "NN_<name>.md"
next_unused_prefix: "13"
cross_cutting:
- path: _docs/00_top_level_architecture.md
owns:
- deployment topology
- component communication
- infrastructure inventory
confirmed: true
- path: _docs/00_database_schema.md
owns:
- database schema changes
- ER diagram
confirmed: true
- path: _docs/00_roles_permissions.md
owns:
- permission codes
- role-to-feature mapping
confirmed: true
# ---------------------------------------------------------------------------
# CI/CD
# ---------------------------------------------------------------------------
ci:
tooling: Woodpecker # GitHub Actions | GitLab CI | Woodpecker | Drone | Jenkins | ...
service_registry_doc: ci_steps.md
orchestration_files:
- docker-compose.ci.yml
- docker-compose.run.yml
- docker-compose.ci-agent-amd64.yml
install_scripts:
- ci-server-install.sh
- ci-client-install.sh
- ci-agent-amd64-install.sh
env_template: .env.example
image_tag_format: "${REGISTRY}/${NAME}:${BRANCH}-${ARCH_TAG}"
branch_triggers: [stage, main]
expected_files_per_component:
- path_glob: "<component>/.woodpecker/build-*.yml"
required: at-least-one
pipeline_template: |
when:
branch: [stage, main]
labels:
platform: arm64
steps:
- name: build-push
image: docker
commands:
- docker build -f Dockerfile -t localhost:5000/<service>:${CI_COMMIT_BRANCH}-arm .
- docker push localhost:5000/<service>:${CI_COMMIT_BRANCH}-arm
volumes:
- /var/run/docker.sock:/var/run/docker.sock
confirmed: false
# ---------------------------------------------------------------------------
# Conventions
# ---------------------------------------------------------------------------
conventions:
commit_prefix: "[suite]"
meta_commit_fallback: "[meta]"
ticket_id_pattern: "AZ-\\d+"
component_naming: lowercase-hyphen
deployment_tiers:
- edge
- remote
- operator-station
- api-layer
confirmed: false
# ---------------------------------------------------------------------------
# Unresolved questions (populated by monorepo-discover)
# ---------------------------------------------------------------------------
# Every question discovery couldn't resolve goes here. Action skills refuse
# to touch entries that map to `unresolved:` items until the human resolves them.
unresolved:
- id: satellite-provider-doc-slot
question: "Component `satellite-provider` has no matching doc. Create new file or extend an existing doc?"
options:
- "new _docs/13_satellite_provider.md"
- "extend _docs/11_gps_denied.md with a Satellite section"
- "no doc needed (internal utility)"
# ---------------------------------------------------------------------------
# Assumptions log (append-only, audit trail)
# ---------------------------------------------------------------------------
# monorepo-discover appends a new entry each run.
# monorepo-document, monorepo-cicd, monorepo-onboard also append their
# per-run assumptions here so the user can audit what was taken on faith.
assumptions_log:
- date: 2026-04-17
skill: monorepo-discover
run_notes: "Initial discovery"
assumptions:
- "Treated _docs/ as unified-docs root (only candidate found)"
- "Inferred component→doc mappings via name matching for 9/11 components"
- "Commit prefix [suite] observed in 14 of last 20 commits"
@@ -0,0 +1,175 @@
---
name: monorepo-document
description: Syncs unified documentation (`_docs/*.md` and equivalent) in a monorepo after one or more components changed. Reads `_docs/_repo-config.yaml` (produced by monorepo-discover) to know which doc files each component feeds into and which cross-cutting docs own which concerns. Touches ONLY documentation files — never CI, compose, env templates, or component directories. Use when a submodule/package added/changed an API, schema, permission, event, or dependency and the unified docs need to catch up.
---
# Monorepo Document
Propagates component changes into the unified documentation set. Strictly scoped to `*.md` files under `docs.root` (and `repo.root_readme` if referenced as cross-cutting).
## Scope — explicit
| In scope | Out of scope |
| -------- | ------------ |
| `_docs/*.md` (primary and cross-cutting) | `.env.example`, `docker-compose.*.yml` → use `monorepo-cicd` |
| Root `README.md` **only** if `_repo-config.yaml` lists it as a doc target (e.g., services table) | Install scripts (`ci-*.sh`) → use `monorepo-cicd` |
| Docs index (`_docs/README.md` or similar) cross-reference tables | Component-internal docs (`<component>/README.md`, `<component>/docs/*`) |
| Cross-cutting docs listed in `docs.cross_cutting` | `_docs/_repo-config.yaml` itself (only `monorepo-discover` and `monorepo-onboard` write it) |
If a component change requires CI/env updates too, tell the user to also run `monorepo-cicd`. This skill does NOT cross domains.
## Preconditions (hard gates)
1. `_docs/_repo-config.yaml` exists.
2. Top-level `confirmed_by_user: true` in the config.
3. `docs.root` is set (non-null) in the config.
4. Components-in-scope have `confirmed: true` mappings, OR the user explicitly approves an inferred mapping for this run.
If any gate fails:
- Config missing → redirect: "Run `monorepo-discover` first."
- `confirmed_by_user: false` → "Please review the config and set `confirmed_by_user: true`."
- `docs.root: null` → "Config has no docs root. Run `monorepo-discover` to re-detect, or edit the config."
- Component inferred but not confirmed → ask user: "Mapping `<component>` → `<doc>` is inferred. Use it this run? (y/n/edit config first)"
## Mitigations (same M1–M7 spirit)
- **M1** Separation: this skill only syncs docs; never touches CI or config.
- **M3** Factual vs. interpretive: don't guess mappings. Use config. If config has an `unresolved:` entry for a component in scope, SKIP it (M5) and report.
- **M4** Batch questions at checkpoints: end of scope determination, end of drift check.
- **M5** Skip over guess: missing/ambiguous mapping → skip and report, never pick a default.
- **M6** Assumptions footer every run; append to config's `assumptions_log:`.
- **M7** Drift detection before action: re-scan `docs.root` to verify config-listed docs still exist; if not, stop and ask.
## Workflow
### Phase 1: Drift check (M7)
Before editing anything:
1. For each component in scope, verify its `primary_doc` and each `secondary_docs` file exists on disk.
2. For each entry in `docs.cross_cutting`, verify the file exists.
3. If any expected file is missing → **stop**, ask user whether to:
- Run `monorepo-discover` to refresh the config, OR
- Skip the missing file for this run (recorded as skipped in report)
Do NOT silently create missing docs. That's onboarding territory.
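A minimal sketch of this existence check, assuming the relevant doc paths have already been extracted from `_repo-config.yaml` (the paths and comments below are illustrative, not from the config):

```bash
# Illustrative doc paths pulled from _repo-config.yaml for the components in scope
docs_to_check=(
  "_docs/01_flights.md"            # flights primary_doc
  "_docs/00_roles_permissions.md"  # cross-cutting: permissions
)

missing=()
for doc in "${docs_to_check[@]}"; do
  [ -f "$doc" ] || missing+=("$doc")
done

if [ "${#missing[@]}" -gt 0 ]; then
  printf 'Missing doc: %s\n' "${missing[@]}"
  echo "Stop: ask whether to re-run monorepo-discover or skip these files for this run."
fi
```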
### Phase 2: Determine scope
If the user hasn't specified which components changed, ask:
> Which components changed? (a) list them, (b) auto-detect from recent commits, (c) skip to review changes you've already made.
For **auto-detect**, for each component in config:
```bash
git -C <path> log --oneline -20 # submodule
# or
git log --oneline -20 -- <path> # monorepo subfolder
```
Flag components whose recent commits touch doc-relevant concerns:
- API/route files (controllers, handlers, OpenAPI specs, route definitions)
- Schema/migration files
- Auth/permission files (attributes, middleware, policies)
- Streaming/SSE/websocket event definitions
- Public exports (`index.ts`, `mod.rs`, `__init__.py`)
- Component's own README if it documents API
- Environment variable additions (only impact docs if a Configuration section exists)
Present the flagged list; ask for confirmation before proceeding.
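One way to approximate the auto-detect pass is to grep each component's recently touched files for doc-relevant path patterns. This is a sketch under assumed names (the component list, submodule layout, and pattern are illustrative):

```bash
# Doc-relevant path patterns (illustrative, not exhaustive)
pattern='(controllers?|routes?|openapi|migrations?|middleware|policies|events?|index\.ts|__init__\.py)'

for component in flights annotations detections-semantic; do
  # Files touched in the component's last 20 commits (submodule case)
  touched=$(git -C "$component" log --name-only --format= -20 | sort -u)
  if printf '%s\n' "$touched" | grep -Eiq "$pattern"; then
    echo "FLAG: $component touched doc-relevant files recently"
  fi
done
```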
### Phase 3: Classify changes per component
For each in-scope component, read recent diffs and classify changes:
| Change type | Target doc concern |
| ----------- | ------------------ |
| New/changed REST endpoint | Primary doc API section; cross-cutting arch doc if pattern changes |
| Schema/migration | Cross-cutting schema doc; primary doc if entity documented there |
| New permission/role | Cross-cutting roles/permissions doc; index permission-matrix table |
| New streaming/SSE event | Primary doc events section; cross-cutting arch doc |
| New inter-component dependency | Cross-cutting arch doc; primary doc dependencies section |
| New env variable (affects docs) | Primary doc Configuration section only — `.env.example` is out of scope |
Match concerns to docs via `docs.cross_cutting[].owns`. If a concern has no owner, add to an in-memory unresolved list and skip it (M5) — tell the user at the end.
### Phase 4: Apply edits
For each mapping (component change → target doc):
1. Read the target doc.
2. Locate the relevant section (heading match, anchor, or `primary_doc_section` from config).
3. Edit only that section. Preserve:
- Heading structure and anchors (inbound links depend on them)
- Table column widths / alignment style
- ASCII diagrams (characters, indentation, widths)
- Cross-reference wording style
4. Update cross-references when needed: if a renamed doc is linked elsewhere, fix links too.
### Phase 5: Skip-and-report (M5)
Skip a component, don't guess, if:
- No mapping in config (the component itself isn't listed)
- Mapping tagged `confirmed: false` and user didn't approve it in Phase 2
- Component internally inconsistent (README asserts endpoints not in code) — surface contradiction
Each skip gets a line in the report with the reason.
### Phase 6: Lint / format
Run markdown linter or formatter if the project has one (check for `.markdownlintrc`, `prettier`, or similar at repo root). Skip if none.
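A sketch of that opt-in lint step, assuming either markdownlint-cli or Prettier may be configured (the config filenames and `npx` invocations are assumptions; adjust to the project's actual tooling):

```bash
if [ -f .markdownlintrc ] || [ -f .markdownlint.json ]; then
  npx markdownlint-cli "_docs/**/*.md"
elif [ -f .prettierrc ] || [ -f .prettierrc.json ]; then
  npx prettier --check "_docs/**/*.md"
else
  echo "No markdown linter configured, skipping lint step."
fi
```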
### Phase 7: Report + assumptions footer (M6)
Output:
```
monorepo-document run complete.
Docs updated (N):
- _docs/01_flights.md — added endpoint POST /flights/gps-denied-start
- _docs/00_roles_permissions.md — added permission `FLIGHTS.GPS_DENIED.OPERATE`
- _docs/README.md — permission-matrix row updated
Skipped (K):
- satellite-provider: no confirmed mapping (config has unresolved entry)
- detections-semantic: internal README references endpoints not in code — needs reconciliation
Assumptions used this run:
- component `flights` → `_docs/01_flights.md` (user-confirmed in config)
- roles doc = `_docs/00_roles_permissions.md` (user-confirmed cross-cutting)
- target branch: `dev` (from conventions.work_branch)
Next step: review the diff in your editor, then commit with
`<commit_prefix> Sync docs after <components>` (or your own message).
```
Append to `_docs/_repo-config.yaml` under `assumptions_log:`:
```yaml
- date: 2026-04-17
skill: monorepo-document
run_notes: "Synced <components>"
assumptions:
- "<list>"
```
## What this skill will NEVER do
- Modify files inside component directories
- Edit CI files, compose files, install scripts, or env templates
- Create new doc files (that's `monorepo-onboard`)
- Change `confirmed_by_user` or any `confirmed: <bool>` flag
- Auto-commit or push
- Guess a mapping not in the config
## Edge cases
- **Component has no primary doc** (UI component that spans all feature docs): if config has `primary_doc: null` or similar marker, iterate through `docs.cross_cutting` where the component is referenced. Don't invent a doc.
- **Multiple components touch the same cross-cutting doc in one run**: apply sequentially; after each edit re-read to get updated line numbers.
- **Cosmetic-only changes** (whitespace renames, internal refactors without API changes): inform user, ask whether to sync or skip.
- **Large gap** (doc untouched for months, component has dozens of commits): ask user which commits matter — don't reconstruct full history.
+248
View File
@@ -0,0 +1,248 @@
---
name: monorepo-onboard
description: Adds a new component (submodule / package / workspace member) to a monorepo as a single atomic operation. Updates the component registry (`.gitmodules` / `package.json` workspaces / `Cargo.toml` / etc.), places or extends unified docs, updates CI/compose/env artifacts, and appends an entry to `_docs/_repo-config.yaml`. Intentionally monolithic — onboarding is one user intent that spans multiple artifact domains. Use when the user says "onboard X", "add service Y to the monorepo", "register new repo".
---
# Monorepo Onboard
Onboards a new component atomically. Spans registry + docs + CI + env + config in one coordinated run — because onboarding is a single user intent, and splitting it across multiple skills would fragment the user experience, cause duplicate input collection, and create inconsistent intermediate states in the config file.
## Why this skill is monolithic
Onboarding ONE component requires updating ~8 artifacts. If the user had to invoke `monorepo-document`, `monorepo-cicd`, and a registry skill separately, they would answer overlapping questions 2–3 times, and the config file would pass through invalid states between runs. Keeping it monolithic preserves atomicity and consistency.
Sync operations (after onboarding is done) ARE split by artifact — see `monorepo-document` and `monorepo-cicd`.
## Preconditions (hard gates)
1. `_docs/_repo-config.yaml` exists.
2. Top-level `confirmed_by_user: true`.
3. The component is NOT already in `components:` — if it is, redirect to `monorepo-document` or `monorepo-cicd` (it's an update, not an onboarding).
## Mitigations (M1–M7)
- **M1** Separation: this skill does not invoke `monorepo-discover` automatically. If `_repo-config.yaml` needs regeneration first, tell the user.
- **M3** Factual vs. interpretive vs. conventional: all user inputs below are CONVENTIONAL (project choices) — always ASK, never infer.
- **M4** Batch inputs in one question round.
- **M5** Skip over guess: if the user's answer doesn't match enumerable options in config (e.g., unknown deployment tier), stop and ask whether to extend config or adjust answer.
- **M6** Assumptions footer + config `assumptions_log` append.
- **M7** Drift detection: before writing anything, verify every artifact path that will be touched exists (or will be created) — stop on unexpected conditions.
## Required inputs (batch-ask, M4)
Collect ALL of these upfront. If any missing, stop and ask. Offer choices from config when the input has a constrained domain (e.g., `conventions.deployment_tiers`).
| Input | Example | Enumerable? |
| ----- | ------- | ----------- |
| `name` | `satellite-provider` | No — open-ended, follow `conventions.component_naming` |
| `location` | git URL / path | No |
| `stack` | `.NET 10`, `Python 3.12` | No — open-ended |
| `purpose` (one line) | "Fetches satellite imagery" | No |
| `doc_placement` | "extend `_docs/07_admin.md`" OR "new `_docs/NN_satellite.md`" | Yes — offer options based on `docs.*` |
| `ci_required` | Which pipelines (or "none") | Yes — infer from `ci.tooling` |
| `deployment_tier` | `edge` | Yes — `conventions.deployment_tiers` |
| `ports` | "5010/http" or "none" | No |
| `depends_on` | Other components called | Yes — list from `components:` names |
| `env_vars` | Name + placeholder value | No (never real secrets) |
If the user provides an answer outside the enumerable set (e.g., deployment tier not in config), **stop** and ask whether to extend the config or pick from the existing set (M5).
## Workflow
### Phase 1: Drift check (M7)
Before writing:
1. Verify `repo.component_registry` exists on disk.
2. Verify `docs.root` exists.
3. If `doc_placement` = extend existing doc, verify that doc exists.
4. Verify every file in `ci.orchestration_files` and `ci.env_template` exists.
5. Verify `ci.service_registry_doc` exists (if set).
Any missing → stop, ask whether to run `monorepo-discover` first or proceed skipping that artifact.
### Phase 2: Register in component registry
Based on `repo.type`:
| Registry | Action |
| -------- | ------ |
| `git-submodules` | Append `[submodule "<name>"]` stanza to `.gitmodules`. Preserve existing indentation style exactly. |
| `npm-workspaces` | Add path to `workspaces` array in `package.json`. Preserve JSON formatting. |
| `pnpm-workspace` | Add to `packages:` in `pnpm-workspace.yaml`. |
| `cargo-workspace` | Add to `members:` in `Cargo.toml`. |
| `go-workspace` | Add to `use (...)` block in `go.work`. |
| `adhoc` | Update the registry file that config points to. |
**Do NOT run** `git submodule add`, `npm install`, or equivalent commands. Produce the text diff; the user runs the actual registration command after review.
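For the `git-submodules` case, the produced text change might look like the following append to `.gitmodules` (the name, path, and URL are illustrative; the user still runs `git submodule add` themselves afterwards):

```bash
cat >> .gitmodules <<'EOF'
[submodule "satellite-provider"]
	path = satellite-provider
	url = https://git.example.com/suite/satellite-provider.git
EOF
```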
### Phase 3: Root README update
If the root README contains a component/services table (check `repo.root_readme`):
1. Insert a new row following existing ordering (alphabetical or deployment-order — match what's there).
2. Match column widths and punctuation exactly.
If there's an ASCII architecture diagram and `deployment_tier` implies new runtime presence, **ask** the user where to place the new box — don't invent a position.
### Phase 4: Unified docs placement
**If extending an existing doc**:
1. Read the target file.
2. Add a new H2 section at the appropriate position. If ambiguous (the file has multiple possible sections), ask.
3. Update file's internal TOC if present.
4. Update `docs.index` ONLY if that index has a cross-reference table that includes sub-sections (check the file).
**If creating a new doc file**:
1. Determine the filename via `docs.file_convention` and `docs.next_unused_prefix` (e.g., `13_satellite_provider.md`).
2. Create using this template:
```markdown
# <Component Name>
## Overview
<expanded purpose from user input>
## API
<endpoints or "None">
## Data model
<if applicable, else "None">
## Configuration
<env vars from user input>
```
3. Update `docs.index` (`_docs/README.md` or equivalent):
- Add row to docs table, matching existing format
- If the component introduces a permission AND the index has a permission → feature matrix, update that too
4. After creating, update `docs.next_unused_prefix` in `_docs/_repo-config.yaml`.
### Phase 5: Cross-cutting docs
For each `docs.cross_cutting` entry whose `owns:` matches a fact provided by the user, update that doc:
- `depends_on` non-empty → architecture/communication doc
- New schema/tables → schema doc (ask user for schema details if not provided)
- New permission/role → permissions doc
If a cross-cutting concern is implied by inputs but has no owner in config → add to `unresolved:` in config and ask.
### Phase 6: CI/CD integration
Update:
- **`ci.service_registry_doc`**: add new row to the service table in that file (if set). Match existing format.
- **Orchestration files** (`ci.orchestration_files`): add service block if component is a runtime service. Use `ci.image_tag_format` for the image string. Include `depends_on`, `ports`, `environment`, `volumes` based on user inputs and existing service-block structure.
- **`ci.env_template`**: append new env vars with placeholder values. NEVER real secrets.
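For the `.env.example` append, a minimal sketch with placeholder values only (the variable names are illustrative; real ones come from the user's `env_vars` input):

```bash
cat >> .env.example <<'EOF'

# satellite-provider (placeholders only, never real secrets)
SATELLITE_PROVIDER_API_KEY=changeme
SATELLITE_PROVIDER_BASE_URL=https://example.invalid
EOF
```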
### Phase 7: Per-component CI — guidance ONLY
For `<component>/.woodpecker/*.yml`, `<component>/.github/workflows/*`, etc.:
**Do NOT create these files.** They live inside the component's own repo/workspace.
Instead, output the `ci.pipeline_template` (from config) customized for this component, so the user can copy it into the component's workspace themselves.
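Rendering the stored template can be a plain placeholder substitution, assuming `ci.pipeline_template` has been copied verbatim into a temporary file (the filename and service name are illustrative):

```bash
# pipeline-template.yml holds ci.pipeline_template with its <service> placeholders intact
sed 's/<service>/satellite-provider/g' pipeline-template.yml
# Print the result for the user to copy into <component>/.woodpecker/; this skill never writes it there.
```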
### Phase 8: Update `_docs/_repo-config.yaml`
Append new entry to `components:`:
```yaml
- name: <name>
path: <path>/
stack: <stack>
confirmed: true # user explicitly onboarded = confirmed
evidence: [user_onboarded]
primary_doc: <new doc path>
secondary_docs: [...]
ci_config: <component>/.<ci_tool>/ # expected location
deployment_tier: <tier>
ports: [...]
depends_on: [...]
env_vars: [...]
```
If `docs.next_unused_prefix` was consumed, increment it.
Append to `assumptions_log:`:
```yaml
- date: <date>
skill: monorepo-onboard
run_notes: "Onboarded <name>"
assumptions:
- "<list>"
```
Do NOT change `confirmed_by_user` — only human sets that.
### Phase 9: Verification report (M6 footer)
```
monorepo-onboard run complete — onboarded `<name>`.
Files modified (N):
- .gitmodules — added submodule entry
- README.md — added row in Services table
- _docs/NN_<name>.md — created
- _docs/README.md — added index row + permission-matrix row
- _docs/00_top_level_architecture.md — added to Communication section
- docker-compose.run.yml — added service block
- .env.example — added <NAME>_API_KEY placeholder
- ci_steps.md — added service-table row
- _docs/_repo-config.yaml — recorded component + updated next_unused_prefix
Files NOT modified but the user must handle:
- <component>/.woodpecker/build-*.yml — create inside the component's own workspace
(template below)
- CI system UI — activate the new repo
Next manual actions:
1. Actually add the component: `git submodule add <url> <path>` (or equivalent)
2. Create per-component CI config using the template
3. Activate the repo in your CI system
4. Review the full diff, then commit with `<commit_prefix> Onboard <name>`
Pipeline template for <name>:
<rendered ci.pipeline_template with <service> replaced>
Assumptions used this run:
- Doc filename convention: <from config>
- Image tag format: <from config>
- Alphabetical ordering in Services table (observed)
```
## What this skill will NEVER do
- Run `git submodule add`, `npm install`, or any network/install-touching command
- Create per-component CI configs inside component directories
- Invent env vars, ports, permissions, or ticket IDs — all from user
- Auto-commit
- Reorder existing table rows beyond inserting the new one
- Set `confirmed_by_user: true` in config
- Touch a file outside the explicit scope
## Rollback (pre-commit)
Before the user commits, revert is straightforward:
```bash
git checkout -- <every file listed in the report>
```
For the new doc file, remove it explicitly:
```bash
rm _docs/NN_<name>.md
```
The component itself (if already registered via `git submodule add` or workspace install) requires manual cleanup — outside this skill's scope.
## Edge cases
- **Component already in config** (not registry) or vice versa → state mismatch. Redirect to `monorepo-discover` to reconcile.
- **User input contradicts config convention** (e.g., new deployment tier not in `conventions.deployment_tiers`): stop, ask — extend config, or choose from existing.
- **`docs.next_unused_prefix` collides with an existing file** (race condition): bump and retry once; if still colliding, stop.
- **No `docs.root` in config**: cannot place a doc. Ask user to run `monorepo-discover` or manually set it in the config first.
+156
View File
@@ -0,0 +1,156 @@
---
name: monorepo-status
description: Read-only drift/coverage report for a monorepo. Reads `_docs/_repo-config.yaml` and compares live repo state (component commits, doc files, CI artifacts) against it. Surfaces which components have unsynced docs, missing CI coverage, unresolved questions, or structural drift. Writes nothing. Use before releases, during audits, or whenever the user asks "what's out of sync?".
---
# Monorepo Status
Read-only. Reports drift between the live repo and `_docs/_repo-config.yaml`. Writes **nothing** — not even `assumptions_log`. Its only deliverable is a text report.
## Preconditions (soft gates)
1. `_docs/_repo-config.yaml` exists — if not, redirect: "Run `monorepo-discover` first."
2. `confirmed_by_user: true` is NOT required — this skill can run on an unconfirmed config, but will flag it prominently.
## Mitigations (M1–M7)
- **M1/M7** This skill IS M7 — it is the drift-detection mechanism other skills invoke conceptually. It surfaces drift, never "fixes" it.
- **M3** All checks are FACTUAL (file exists? commit date? referenced in config?). No interpretive work.
- **M6** Assumptions footer included; but this skill does NOT append to `assumptions_log` in config (writes nothing).
## What the report covers
### Section 1: Config health
- Is `confirmed_by_user: true`? (If false, flag prominently — other skills won't run)
- How many entries have `confirmed: false` (inferred)?
- Count of `unresolved:` entries + their IDs
- Age of config (`last_updated`) — flag if > 60 days old
### Section 2: Component drift
For each component in `components:`:
1. Last commit date of component:
```bash
git -C <path> log -1 --format=%cI # submodule
# or
git log -1 --format=%cI -- <path> # subfolder
```
2. Last commit date of `primary_doc` (and each `secondary_docs` entry):
```bash
git log -1 --format=%cI -- <doc_path>
```
3. Flag as drift if ANY doc's last commit is older than the component's last commit by more than a threshold (default: 0 days — any ordering difference is drift, but annotate magnitude).
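Because `%cI` emits ISO-8601 timestamps, the ordering check can be a plain string comparison, assuming both timestamps carry comparable UTC offsets (the component and doc paths are illustrative):

```bash
comp_date=$(git -C flights log -1 --format=%cI)
doc_date=$(git log -1 --format=%cI -- _docs/01_flights.md)

# ISO-8601 strings sort lexicographically, so "<" approximates "older than"
if [[ "$doc_date" < "$comp_date" ]]; then
  echo "drift: _docs/01_flights.md lags behind the flights component's last commit"
fi
```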
### Section 3: CI coverage
For each component:
- Does it have files matching `ci.expected_files_per_component[*].path_glob`?
- Is it present in each `ci.orchestration_files` that's expected to include it (heuristic: check if the compose file mentions the component name or image)?
- Is it listed in `ci.service_registry_doc` if that file has a service table?
Mark each as `complete` / `partial` / `missing` and explain.
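A sketch of the per-component probes, assuming a glob like the one in `ci.expected_files_per_component` and a compose file as the orchestration target (the component name and filenames are illustrative):

```bash
component=satellite-provider

# CI config present? compgen -G succeeds only if the glob matches at least one file
if compgen -G "$component/.woodpecker/build-*.yml" > /dev/null; then
  ci_status=complete
else
  ci_status=missing
fi

# Orchestration heuristic: does the compose file mention the component name at all?
grep -q "$component" docker-compose.run.yml && orch=yes || orch=no

echo "$component: ci=$ci_status orchestration=$orch"
```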
### Section 4: Registry vs. config consistency
- Every component in the registry (`.gitmodules`, workspaces, etc.) appears in `components:` — flag mismatches
- Every component in `components:` appears in the registry — flag mismatches
- Every `docs.root` file cross-referenced in config exists on disk — flag missing
- Every `ci.orchestration_files` and `ci.install_scripts` exists — flag missing
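For a submodule registry, the two-way comparison can lean on `git config`, assuming the config's component paths have been pre-extracted to one path per line (the extraction file is illustrative):

```bash
# Paths registered as submodules
registry_paths=$(git config --file .gitmodules --get-regexp '\.path$' | awk '{print $2}' | sort)
# Paths listed under components: in _repo-config.yaml, pre-extracted one per line
config_paths=$(sort config_component_paths.txt)

echo "In registry, not in config:"
comm -23 <(printf '%s\n' "$registry_paths") <(printf '%s\n' "$config_paths")
echo "In config, not in registry:"
comm -13 <(printf '%s\n' "$registry_paths") <(printf '%s\n' "$config_paths")
```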
### Section 5: Unresolved questions
List every `unresolved:` entry in config with its ID and question — so the user knows what's blocking full confirmation.
## Workflow
1. Read `_docs/_repo-config.yaml`. If missing or unparseable, STOP with a redirect to `monorepo-discover`.
2. Run all checks above (purely read-only).
3. Render the single summary table and supporting sections.
4. Include the assumptions footer.
5. STOP. Do not edit any file.
## Report template
```
═══════════════════════════════════════════════════
MONOREPO STATUS
═══════════════════════════════════════════════════
Config: _docs/_repo-config.yaml
confirmed_by_user: <true|false> [FLAG if false]
last_updated: <date> [FLAG if > 60 days]
inferred entries: <count> of <total>
unresolved: <count> open
═══════════════════════════════════════════════════
Component drift
═══════════════════════════════════════════════════
Component | Last commit | Primary doc age | Secondary docs | Status
-------------------- | ----------- | --------------- | -------------- | ------
annotations | 2d ago | 2d ago | OK | in-sync
flights | 1d ago | 12d ago | 1 stale (schema)| drift
satellite-provider | 3d ago | N/A | N/A | no mapping
═══════════════════════════════════════════════════
CI coverage
═══════════════════════════════════════════════════
Component | CI configs | Orchestration | Service registry
-------------------- | ---------- | ------------- | ----------------
annotations | complete | yes | yes
flights | complete | yes | yes
satellite-provider | missing | no | no
═══════════════════════════════════════════════════
Registry vs. config
═══════════════════════════════════════════════════
In registry, not in config: [list or "(none)"]
In config, not in registry: [list or "(none)"]
Config-referenced docs missing: [list or "(none)"]
Config-referenced CI files missing: [list or "(none)"]
═══════════════════════════════════════════════════
Unresolved questions
═══════════════════════════════════════════════════
- <id>: <question>
- <id>: <question>
═══════════════════════════════════════════════════
Recommendations
═══════════════════════════════════════════════════
- Run monorepo-document for: flights (docs drift)
- Run monorepo-cicd for: satellite-provider (no CI coverage)
- Run monorepo-onboard for: satellite-provider (no mapping)
- Run monorepo-discover to refresh config (if drift is widespread or config is stale)
═══════════════════════════════════════════════════
Assumptions used this run
═══════════════════════════════════════════════════
- Drift threshold: any ordering difference counts as drift
- CI coverage heuristic: component name or image appears in compose file
- Component last-commit measured via `git log` against the component path
Report only. No files modified.
```
## What this skill will NEVER do
- Modify any file (including the config `assumptions_log`)
- Run `monorepo-discover`, `monorepo-document`, `monorepo-cicd`, or `monorepo-onboard` automatically — only recommend them
- Block on unresolved entries (it just lists them)
- Install tools
## Edge cases
- **Git not available / shallow clone**: commit dates may be inaccurate — note in the assumptions footer.
- **Config has `confirmed: false` but no unresolved entries**: this is a sign discovery ran but the human never reviewed. Flag in Section 1.
- **Component in registry but no entry in config** (or vice versa): flag in Section 4 — don't guess what the mapping should be; just report the mismatch.
- **Very large monorepos (100+ components)**: don't truncate tables; tell the user if the report will be long, offer to scope to a subset.
+34 -1
View File
@@ -77,6 +77,8 @@ Record the description verbatim for use in subsequent steps.
Read the user's description and the existing codebase documentation from DOCUMENT_DIR (architecture.md, components/, system-flows.md). Read the user's description and the existing codebase documentation from DOCUMENT_DIR (architecture.md, components/, system-flows.md).
**Consult LESSONS.md**: if `_docs/LESSONS.md` exists, read it and look for entries in categories `estimation`, `architecture`, `dependencies` that might apply to the task under consideration. If a relevant lesson exists (e.g., "estimation: auth-related changes historically take 2x estimate"), bias the classification and recommendation accordingly. Note in the output which lessons (if any) were applied.
Assess the change along these dimensions: Assess the change along these dimensions:
- **Scope**: how many components/files are affected? - **Scope**: how many components/files are affected?
- **Novelty**: does it involve libraries, protocols, or patterns not already in the codebase? - **Novelty**: does it involve libraries, protocols, or patterns not already in the codebase?
@@ -178,6 +180,34 @@ When gaps are found, the task spec (Step 6) MUST include the missing tests in th
--- ---
### Step 4.5: Contract & Layout Check
**Role**: Architect
**Goal**: Prevent silent public-API drift and keep `module-layout.md` consistent before implementation locks file ownership.
Apply the four shared-task triggers from `.cursor/skills/decompose/SKILL.md` Step 2 rule #10 (shared/*, Scope mentions interface/DTO/schema/event/contract/API/shared-model, parent epic is cross-cutting, ≥2 consumers) and classify the task:
- **Producer** — any trigger fires, OR the task changes a public signature / invariant / serialization / error variant of an existing symbol:
1. Check for an existing contract at `_docs/02_document/contracts/<component>/<name>.md`.
2. If present → decide version bump (patch / minor / major per the contract's Versioning Rules) and add the Change Log entry to the task's deliverables.
3. If absent → add creation of the contract file (using `.cursor/skills/decompose/templates/api-contract.md`) to the task's Scope.Included; add a `## Contract` section to the task spec.
4. List every currently-known consumer (from Codebase Analysis Step 4) and add them to the contract's Consumer tasks field.
- **Consumer** — the task imports or calls a public API belonging to another component:
1. Resolve the component's contract file; add it to the task's `### Document Dependencies` section.
2. If the cross-component interface has no contract file, present a Choose: **A)** create a retroactive contract now as a prerequisite task, or **B)** proceed without one, logging an explicit coupling risk in the task's Risks & Mitigation.
- **Layout delta** — the task introduces a new component OR changes an existing component's Public API surface:
1. Draft the Per-Component Mapping entry (or the Public API diff) against `_docs/02_document/module-layout.md` using `.cursor/skills/decompose/templates/module-layout.md` format.
2. Add the layout edit to the task's deliverables; the implementer writes it alongside the code change.
3. If `module-layout.md` does not exist, STOP and instruct the user to run `/document` first (existing-code flow) or `/decompose` default mode (greenfield). Do not guess.
Record the classification and any contract/layout deliverables in the working notes; they feed Step 5 (Validate Assumptions) and Step 6 (Create Task).
**BLOCKING**: none — this step surfaces findings; the user confirms them in Step 5.
---
### Step 5: Validate Assumptions ### Step 5: Validate Assumptions
**Role**: Quality gate **Role**: Quality gate
@@ -229,6 +259,9 @@ Present using the Choose format for each decision that has meaningful alternativ
- [ ] Complexity points match the assessment - [ ] Complexity points match the assessment
- [ ] Dependencies reference existing task tracker IDs where applicable - [ ] Dependencies reference existing task tracker IDs where applicable
- [ ] No implementation details leaked into the spec - [ ] No implementation details leaked into the spec
- [ ] If Step 4.5 classified the task as producer, the `## Contract` section exists and points at a contract file
- [ ] If Step 4.5 classified the task as consumer, `### Document Dependencies` lists the relevant contract file
- [ ] If Step 4.5 flagged a layout delta, the task's Scope.Included names the `module-layout.md` edit
--- ---
@@ -237,7 +270,7 @@ Present using the Choose format for each decision that has meaningful alternativ
**Role**: Project coordinator **Role**: Project coordinator
**Goal**: Create a work item ticket and link it to the task file. **Goal**: Create a work item ticket and link it to the task file.
1. Create a ticket via the configured work item tracker (see `autopilot/protocols.md` for tracker detection): 1. Create a ticket via the configured work item tracker (see `autodev/protocols.md` for tracker detection):
- Summary: the task's **Name** field - Summary: the task's **Name** field
- Description: the task's **Problem** and **Acceptance Criteria** sections - Description: the task's **Problem** and **Acceptance Criteria** sections
- Story points: the task's **Complexity** value - Story points: the task's **Complexity** value
+1 -1
View File
@@ -119,7 +119,7 @@ Read and follow `steps/07_quality-checklist.md`.
|-----------|--------| |-----------|--------|
| Missing acceptance_criteria.md, restrictions.md, or input_data/ | **STOP** — planning cannot proceed | | Missing acceptance_criteria.md, restrictions.md, or input_data/ | **STOP** — planning cannot proceed |
| Ambiguous requirements | ASK user | | Ambiguous requirements | ASK user |
| Input data coverage below 70% | Search internet for supplementary data, ASK user to validate | | Input data coverage below 75% | Search internet for supplementary data, ASK user to validate |
| Technology choice with multiple valid options | ASK user | | Technology choice with multiple valid options | ASK user |
| Component naming | PROCEED, confirm at next BLOCKING gate | | Component naming | PROCEED, confirm at next BLOCKING gate |
| File structure within templates | PROCEED | | File structure within templates | PROCEED |
@@ -6,12 +6,22 @@
**Constraints**: Epic descriptions must be **comprehensive and self-contained** — a developer reading only the epic should understand the full context without needing to open separate files. **Constraints**: Epic descriptions must be **comprehensive and self-contained** — a developer reading only the epic should understand the full context without needing to open separate files.
0. **Consult LESSONS.md** — if `_docs/LESSONS.md` exists, read it and factor any `estimation` / `architecture` / `dependencies` entries into epic sizing, scope, and dependency ordering. This closes the retrospective feedback loop; lessons from prior cycles directly inform current epic shape. Note in the Step 6 output which lessons were applied (or that none were relevant).
1. **Create "Bootstrap & Initial Structure" epic first** — this epic will parent the `01_initial_structure` task created by the decompose skill. It covers project scaffolding: folder structure, shared models, interfaces, stubs, CI/CD config, DB migrations setup, test structure. 1. **Create "Bootstrap & Initial Structure" epic first** — this epic will parent the `01_initial_structure` task created by the decompose skill. It covers project scaffolding: folder structure, shared models, interfaces, stubs, CI/CD config, DB migrations setup, test structure.
2. Generate epics for each component using the configured work item tracker (see `autopilot/protocols.md` for tracker detection), structured per `templates/epic-spec.md` 2. **Identify cross-cutting concerns from architecture.md and restrictions.md**. Default candidates to consider (include only if architecture/restrictions reference them):
3. Order epics by dependency (Bootstrap epic is always first, then components based on their dependency graph) - Logging / observability (structured logging, correlation IDs, metrics)
4. Include effort estimation per epic (T-shirt size or story points range) - Error handling / envelope / result types
5. Ensure each epic has clear acceptance criteria cross-referenced with component specs - Configuration loading (env vars, config files, secrets)
6. Generate Mermaid diagrams showing component-to-epic mapping and component relationships - Authentication / authorization middleware
- Feature flags / toggles
- Telemetry / tracing
- i18n / localization
For each identified concern, create ONE epic named `Cross-Cutting: <name>` with `epic_type: cross-cutting`. Each cross-cutting epic will parent exactly ONE shared implementation task (placed under `src/shared/<concern>/` by decompose skill). All component-level tasks that consume the concern declare the shared task as a dependency — they do NOT re-implement the concern locally. This rule is enforced by code-review Phase 6 (Cross-Task Consistency) and Phase 7 (Architecture Compliance).
3. Generate epics for each component using the configured work item tracker (see `autodev/protocols.md` for tracker detection), structured per `templates/epic-spec.md`
4. Order epics by dependency: Bootstrap epic first, then Cross-Cutting epics (they underlie everything), then component epics in dependency order
5. Include effort estimation per epic (T-shirt size or story points range). Use LESSONS.md estimation entries as a calibration hint — if a lesson says "component X was underestimated by 2x last time" and the current plan has a comparable component, widen that epic's estimate.
6. Ensure each epic has clear acceptance criteria cross-referenced with component specs
7. Generate Mermaid diagrams showing component-to-epic mapping and component relationships; include cross-cutting epics as horizontal dependencies of every consuming component epic
**CRITICAL — Epic description richness requirements**: **CRITICAL — Epic description richness requirements**:
@@ -35,14 +45,17 @@ Do NOT create minimal epics with just a summary and short description. The epic
**Self-verification**: **Self-verification**:
- [ ] "Bootstrap & Initial Structure" epic exists and is first in order - [ ] "Bootstrap & Initial Structure" epic exists and is first in order
- [ ] Every identified cross-cutting concern has exactly one `Cross-Cutting: <name>` epic
- [ ] No two epics own the same cross-cutting concern
- [ ] "Blackbox Tests" epic exists - [ ] "Blackbox Tests" epic exists
- [ ] Every component maps to exactly one epic - [ ] Every component maps to exactly one component epic
- [ ] Dependency order is respected (no epic depends on a later one) - [ ] Dependency order is respected (no epic depends on a later one)
- [ ] Cross-Cutting epics precede every consuming component epic
- [ ] Acceptance criteria are measurable - [ ] Acceptance criteria are measurable
- [ ] Effort estimates are realistic - [ ] Effort estimates are realistic and reflect LESSONS.md calibration hints (if any applied)
- [ ] Every epic description includes architecture diagram, interface spec, data flow, risks, and NFRs - [ ] Every epic description includes architecture diagram, interface spec, data flow, risks, and NFRs
- [ ] Epic descriptions are self-contained — readable without opening other files - [ ] Epic descriptions are self-contained — readable without opening other files
7. **Create "Blackbox Tests" epic** — this epic will parent the blackbox test tasks created by the `/decompose` skill. It covers implementing the test scenarios defined in `tests/`. 8. **Create "Blackbox Tests" epic** — this epic will parent the blackbox test tasks created by the `/decompose` skill. It covers implementing the test scenarios defined in `tests/`.
**Save action**: Epics created via the configured tracker MCP. Also saved locally in `epics.md` with ticket IDs. If `tracker: local`, save locally only. **Save action**: Epics created via the configured tracker MCP. Also saved locally in `epics.md` with ticket IDs. If `tracker: local`, save locally only.
+11 -2
View File
@@ -1,6 +1,6 @@
# Epic Template # Epic Template
Use this template for each epic. Create epics via the configured work item tracker (see `autopilot/protocols.md` for tracker detection). Use this template for each epic. Create epics via the configured work item tracker (see `autodev/protocols.md` for tracker detection).
--- ---
@@ -9,6 +9,9 @@ Use this template for each epic. Create epics via the configured work item track
**Example**: Data Ingestion — Near-real-time pipeline **Example**: Data Ingestion — Near-real-time pipeline
**epic_type**: [component | bootstrap | cross-cutting | tests]
**concern** (cross-cutting only): [logging | error-handling | config | authn | authz | feature-flags | telemetry | i18n | other-named-concern]
### Epic Summary ### Epic Summary
[1-2 sentences: what we are building + why it matters] [1-2 sentences: what we are building + why it matters]
@@ -123,5 +126,11 @@ Link to architecture.md and relevant component spec.]
- Be concise. Fewer words with the same meaning = better epic. - Be concise. Fewer words with the same meaning = better epic.
- Capabilities in scope are "what", not "how" — avoid describing implementation details. - Capabilities in scope are "what", not "how" — avoid describing implementation details.
- Dependency order matters: epics that must be done first should be listed earlier in the backlog. - Dependency order matters: epics that must be done first should be listed earlier in the backlog.
- Every epic maps to exactly one component. If a component is too large for one epic, split the component first. - Every `component` epic maps to exactly one component. If a component is too large for one epic, split the component first.
- A `cross-cutting` epic maps to exactly one shared concern and parents exactly one shared implementation task. Component epics that consume the concern declare the cross-cutting epic as a dependency.
- Valid `epic_type` values:
- `bootstrap` — the initial-structure epic (always exactly one per project)
- `component` — a normal per-component epic
- `cross-cutting` — a shared concern that spans ≥2 components
- `tests` — the blackbox-tests epic (always exactly one)
- Complexity points for child issues follow the project standard: 1, 2, 3, 5, 8. Do not create issues above 5 points — split them. - Complexity points for child issues follow the project standard: 1, 2, 3, 5, 8. Do not create issues above 5 points — split them.
+16
View File
@@ -69,6 +69,7 @@ Both modes produce `RUN_DIR/list-of-changes.md` (template: `templates/list-of-ch
| | | *Quick Assessment stops here* | | | | | *Quick Assessment stops here* | |
| 3 | `phases/03-safety-net.md` | Check existing tests or implement pre-refactoring tests (skip for testability runs) | GATE: all tests pass | | 3 | `phases/03-safety-net.md` | Check existing tests or implement pre-refactoring tests (skip for testability runs) | GATE: all tests pass |
| 4 | `phases/04-execution.md` | Delegate task execution to implement skill | GATE: implement completes | | 4 | `phases/04-execution.md` | Delegate task execution to implement skill | GATE: implement completes |
| 4.5 | (inline, testability runs only) | Produce `testability_changes_summary.md` listing every applied change in plain language, surface to user | GATE: user acknowledges summary |
| 5 | `phases/05-test-sync.md` | Remove obsolete, update broken, add new tests | GATE: all tests pass | | 5 | `phases/05-test-sync.md` | Remove obsolete, update broken, add new tests | GATE: all tests pass |
| 6 | `phases/06-verification.md` | Run full suite, compare metrics vs baseline | GATE: all pass, no regressions | | 6 | `phases/06-verification.md` | Run full suite, compare metrics vs baseline | GATE: all pass, no regressions |
| 7 | `phases/07-documentation.md` | Update `_docs/` to reflect refactored state | Skip if `_docs/02_document/` absent | | 7 | `phases/07-documentation.md` | Update `_docs/` to reflect refactored state | Skip if `_docs/02_document/` absent |
@@ -78,6 +79,20 @@ Both modes produce `RUN_DIR/list-of-changes.md` (template: `templates/list-of-ch
- "refactor [specific target]" → skip phase 1 if docs exist - "refactor [specific target]" → skip phase 1 if docs exist
- Default → all phases - Default → all phases
**Testability-run specifics** (guided mode invoked by autodev existing-code flow Step 4):
- Run name is `01-testability-refactoring`.
- Phase 3 (Safety Net) is skipped by design — no tests exist yet. Compensating control: the `list-of-changes.md` gate in Phase 1 must be reviewed and approved by the user before Phase 4 runs.
- Scope is MINIMAL and surgical; reject change entries that drift into full refactor territory (see existing-code flow Step 4 for allowed/disallowed lists). Flagged entries go to `RUN_DIR/deferred_to_refactor.md` for Step 8 (optional full refactor) consideration.
- After Phase 4 (Execution) completes, write `RUN_DIR/testability_changes_summary.md` as Phase 4.5. Format: one bullet per applied change.
```markdown
# Testability Changes Summary ({{run_name}})
Applied {{N}} change(s):
- **{{change_id}}** — changed {{symbol}} in `{{file}}`: {{plain-language reason}}. Risk: {{low|medium|high}}.
```
Group bullets by category (config extraction, DI insertion, singleton wrapping, interface extraction, function split). Present the summary to the user via the Choose format before proceeding to Phase 5.
At the start of execution, create a TodoWrite with all applicable phases. At the start of execution, create a TodoWrite with all applicable phases.
## Artifact Structure ## Artifact Structure
@@ -94,6 +109,7 @@ analysis/research_findings.md Phase 2
analysis/refactoring_roadmap.md Phase 2 analysis/refactoring_roadmap.md Phase 2
test_specs/[##]_[test_name].md Phase 3 test_specs/[##]_[test_name].md Phase 3
execution_log.md Phase 4 execution_log.md Phase 4
testability_changes_summary.md Phase 4.5 (testability runs only)
test_sync/{obsolete_tests,updated_tests,new_tests}.md Phase 5 test_sync/{obsolete_tests,updated_tests,new_tests}.md Phase 5
verification_report.md Phase 6 verification_report.md Phase 6
doc_update_log.md Phase 7 doc_update_log.md Phase 7
@@ -19,7 +19,7 @@ Determine the input mode set during Context Resolution (see SKILL.md):
### 1g. Read and Validate Input File ### 1g. Read and Validate Input File
1. Read the provided input file (e.g., `list-of-changes.md` from the autopilot testability revision step or user-provided file) 1. Read the provided input file (e.g., `list-of-changes.md` from the autodev testability revision step or user-provided file)
2. Extract file paths, problem descriptions, and proposed changes from each entry 2. Extract file paths, problem descriptions, and proposed changes from each entry
3. For each entry, verify against actual codebase: 3. For each entry, verify against actual codebase:
- Referenced files exist - Referenced files exist
@@ -95,7 +95,7 @@ Also copy to project standard locations:
**Critical step — do not skip.** Before producing the change list, cross-reference documented business flows against actual implementation. This catches issues that static code inspection alone misses. **Critical step — do not skip.** Before producing the change list, cross-reference documented business flows against actual implementation. This catches issues that static code inspection alone misses.
1. **Read documented flows**: Load `DOCUMENT_DIR/system-flows.md`, `DOCUMENT_DIR/architecture.md`, and `SOLUTION_DIR/solution.md` (if they exist). Extract every documented business flow, data path, and architectural decision. 1. **Read documented flows**: Load `DOCUMENT_DIR/system-flows.md`, `DOCUMENT_DIR/architecture.md`, `DOCUMENT_DIR/module-layout.md`, every file under `DOCUMENT_DIR/contracts/`, and `SOLUTION_DIR/solution.md` (whichever exist). Extract every documented business flow, data path, architectural decision, module ownership boundary, and contract shape.
2. **Trace each flow through code**: For every documented flow (e.g., "video batch processing", "image tiling", "engine initialization"), walk the actual code path line by line. At each decision point ask: 2. **Trace each flow through code**: For every documented flow (e.g., "video batch processing", "image tiling", "engine initialization"), walk the actual code path line by line. At each decision point ask:
- Does the code match the documented/intended behavior? - Does the code match the documented/intended behavior?
@@ -133,6 +133,8 @@ From the component analysis, solution synthesis, and **logical flow analysis**,
9. Performance bottlenecks 9. Performance bottlenecks
10. **Logical flow contradictions** (from step 1c) 10. **Logical flow contradictions** (from step 1c)
11. **Silent data loss or wasted computation** (from step 1c) 11. **Silent data loss or wasted computation** (from step 1c)
12. **Module ownership violations** — code that lives under one component's directory but implements another component's concern, or imports another component's internal (non-Public API) file. Cross-check against `DOCUMENT_DIR/module-layout.md` if present.
13. **Contract drift** — shared-models / shared-API implementations whose public shape has drifted from the contract file in `DOCUMENT_DIR/contracts/`. Include both producer drift and consumer drift.
Write `RUN_DIR/list-of-changes.md` using `templates/list-of-changes.md` format: Write `RUN_DIR/list-of-changes.md` using `templates/list-of-changes.md` format:
- Set **Mode**: `automatic` - Set **Mode**: `automatic`
@@ -32,3 +32,17 @@
6. Applicable scenarios 6. Applicable scenarios
7. Team capability requirements 7. Team capability requirements
8. Migration difficulty 8. Migration difficulty
## Decomposition Completeness Probes (Completeness Audit Reference)
Used during Step 1's Decomposition Completeness Audit. After generating sub-questions, ask each probe against the current decomposition. If a probe reveals an uncovered area, add a sub-question for it.
| Probe | What it catches |
|-------|-----------------|
| **What does this cost — in money, time, resources, or trade-offs?** | Budget, pricing, licensing, tax, opportunity cost, maintenance burden |
| **What are the hard constraints — physical, legal, regulatory, environmental?** | Regulations, certifications, spectrum/frequency rules, export controls, physics limits, IP restrictions |
| **What are the dependencies and assumptions that could break?** | Supply chain, vendor lock-in, API stability, single points of failure, standards evolution |
| **What does the operating environment actually look like?** | Terrain, weather, connectivity, infrastructure, power, latency, user skill level |
| **What failure modes exist and what happens when they trigger?** | Degraded operation, fallback, safety margins, blast radius, recovery time |
| **What do practitioners who solved similar problems say matters most?** | Field-tested priorities that don't appear in specs or papers |
| **What changes over time — and what looks stable now but isn't?** | Technology roadmaps, regulatory shifts, deprecation risk, scaling effects |
@@ -10,6 +10,12 @@
- [ ] Every citation can be directly verified by the user (source verifiability) - [ ] Every citation can be directly verified by the user (source verifiability)
- [ ] Structure hierarchy is clear; executives can quickly locate information - [ ] Structure hierarchy is clear; executives can quickly locate information
## Decomposition Completeness
- [ ] Domain discovery search executed: searched "key factors when [problem domain]" before starting research
- [ ] Completeness probes applied: every probe from `references/comparison-frameworks.md` checked against sub-questions
- [ ] No uncovered areas remain: all gaps filled with sub-questions or justified as not applicable
## Internet Search Depth ## Internet Search Depth
- [ ] Every sub-question was searched with at least 3-5 different query variants - [ ] Every sub-question was searched with at least 3-5 different query variants
@@ -97,6 +97,16 @@ When decomposing questions, you must explicitly define the **boundaries of the r
**Common mistake**: User asks about "university classroom issues" but sources include policies targeting "K-12 students" — mismatched target populations will invalidate the entire research. **Common mistake**: User asks about "university classroom issues" but sources include policies targeting "K-12 students" — mismatched target populations will invalidate the entire research.
#### Decomposition Completeness Audit (MANDATORY)
After generating sub-questions, verify the decomposition covers all major dimensions of the problem — not just the ones that came to mind first.
1. **Domain discovery search**: Search the web for "key factors when [problem domain]" / "what to consider when [problem domain]" (e.g., "key factors GPS-denied navigation", "what to consider when choosing an edge deployment strategy"). Extract dimensions that practitioners and domain experts consider important but are absent from the current sub-questions.
2. **Run completeness probes**: Walk through each probe in `references/comparison-frameworks.md` → "Decomposition Completeness Probes" against the current sub-question list. For each probe, note whether it is covered, not applicable (state why), or missing.
3. **Fill gaps**: Add sub-questions (with search query variants) for any uncovered area. Do this before proceeding to Step 2.
Record the audit result in `00_question_decomposition.md` as a "Completeness Audit" section.
**Save action**: **Save action**:
1. Read all files from INPUT_DIR to ground the research in the project context 1. Read all files from INPUT_DIR to ground the research in the project context
2. Create working directory `RESEARCH_DIR/` 2. Create working directory `RESEARCH_DIR/`
@@ -109,6 +119,7 @@ When decomposing questions, you must explicitly define the **boundaries of the r
- List of decomposed sub-questions - List of decomposed sub-questions
- **Chosen perspectives** (at least 3 from the Perspective Rotation table) with rationale - **Chosen perspectives** (at least 3 from the Perspective Rotation table) with rationale
- **Search query variants** for each sub-question (at least 3-5 per sub-question) - **Search query variants** for each sub-question (at least 3-5 per sub-question)
- **Completeness audit** (taxonomy cross-reference + domain discovery results)
4. Write TodoWrite to track progress 4. Write TodoWrite to track progress
--- ---
+75 -2
View File
@@ -53,9 +53,15 @@ METRICS_DIR/
└── ... └── ...
``` ```
## Invocation Modes
- **cycle-end mode** (default): invoked automatically at end of cycle by the autodev orchestrator — as greenfield Step 11 Retrospective (after Step 10 Deploy) and existing-code Step 17 Retrospective (after Step 16 Deploy). Runs Steps 14. Output: `retro_<YYYY-MM-DD>.md` + LESSONS.md update.
- **incident mode**: invoked automatically after the failure retry protocol reaches `retry_count: 3` and the user has made a recovery choice. Runs Steps 1 (scoped to the failing skill's artifacts only), 2 (focused on the failure), 3 (shorter report), 4 (append 13 lessons in the `process` or `tooling` category). Output: `_docs/06_metrics/incident_<YYYY-MM-DD>_<skill>.md` + LESSONS.md update. Pass the invocation context with `mode: incident`, `failing_skill: <skill-name>`, and `failure_summary: <string>`.
- **on-demand mode**: user-triggered (trigger phrases above). Runs Steps 14 over the entire artifact set.
## Progress Tracking ## Progress Tracking
At the start of execution, create a TodoWrite with all steps (1 through 3). Update status as each step completes. At the start of execution, create a TodoWrite with all steps (1 through 4). Update status as each step completes.
## Workflow ## Workflow
@@ -74,6 +80,9 @@ At the start of execution, create a TodoWrite with all steps (1 through 3). Upda
| Task spec files in TASKS_DIR | Complexity points per task, dependency count | | Task spec files in TASKS_DIR | Complexity points per task, dependency count |
| `implementation_report_*.md` | Total tasks, total batches, overall duration | | `implementation_report_*.md` | Total tasks, total batches, overall duration |
| Git log (if available) | Commits per batch, files changed per batch | | Git log (if available) | Commits per batch, files changed per batch |
| `cumulative_review_batches_*.md` `## Baseline Delta` | Architecture findings: carried over / resolved / newly introduced counts |
| `_docs/02_document/module-layout.md` + source import graph | Component count, cross-component edges, cycles, avg imports/module |
| `_docs/02_document/contracts/**/*.md` | Contract count, contracts per public-API symbol |
#### Metrics to Compute #### Metrics to Compute
@@ -90,15 +99,35 @@ At the start of execution, create a TodoWrite with all steps (1 through 3). Upda
- Code review findings by category: Bug, Spec-Gap, Security, Performance, Maintainability, Style, Scope - Code review findings by category: Bug, Spec-Gap, Security, Performance, Maintainability, Style, Scope
- FAIL count (batches that required user intervention) - FAIL count (batches that required user intervention)
**Structural Metrics** (skip only if `module-layout.md` is absent):
- Component count and change vs previous cycle
- Cross-component import edges and change vs previous cycle
- Cycles in the component import graph (should stay 0; any new cycle is a regression)
- Average imports per module
- New Architecture violations this cycle (from `## Baseline Delta` → Newly introduced)
- Resolved Architecture violations this cycle (from `## Baseline Delta` → Resolved)
- Net Architecture delta = new − resolved (negative is good)
- Percentage of public-API symbols covered by a contract file (contract count / public-API symbol count)
- `shared/*` entries used by ≥2 components (healthy) vs by ≤1 component (dead cross-cutting)
Persist the structural snapshot to `METRICS_DIR/structure_[YYYY-MM-DD].md` so future retros can compute deltas without re-deriving from source.
**Efficiency Metrics**: **Efficiency Metrics**:
- Blocked task count and reasons - Blocked task count and reasons
- Tasks completed on first attempt vs requiring fixes - Tasks completed on first attempt vs requiring fixes
- Batch with most findings (identify problem areas) - Batch with most findings (identify problem areas)
**Auto-lesson triggers** (feed Step 4 LESSONS.md generation):
- Net Architecture delta > 0 this cycle → `architecture` lesson
- Any structural metric regressed by >20% vs previous snapshot → `architecture` or `dependencies` lesson depending on the metric
- Contract coverage % decreased → `architecture` lesson
**Self-verification**: **Self-verification**:
- [ ] All batch reports parsed - [ ] All batch reports parsed
- [ ] All metric categories computed - [ ] All metric categories computed
- [ ] No batch reports missed - [ ] No batch reports missed
- [ ] Structural snapshot written (or explicitly skipped with reason "module-layout.md absent")
- [ ] If a previous `structure_*.md` exists, deltas are computed against the most recent one
--- ---
@@ -141,12 +170,55 @@ Write `METRICS_DIR/retro_[YYYY-MM-DD].md` using `templates/retrospective-report.
- [ ] Top 3 improvement actions clearly stated - [ ] Top 3 improvement actions clearly stated
- [ ] Suggested rule/skill updates are specific - [ ] Suggested rule/skill updates are specific
**Save action**: Write `retro_[YYYY-MM-DD].md` **Save action**: Write `retro_[YYYY-MM-DD].md` (in cycle-end / on-demand mode) or `incident_[YYYY-MM-DD]_[skill].md` (in incident mode).
Present the report summary to the user. Present the report summary to the user.
--- ---
### Step 4: Update Lessons Log
**Role**: Process improvement analyst
**Goal**: Keep a short, frequently-consulted log of actionable lessons that downstream skills read before they plan or estimate.
1. Extract the **top 3 concrete lessons** from the current retrospective (or 1–3 lessons in incident mode, scoped to the failing skill). Each lesson must:
- Be specific enough to change future behavior (not a platitude).
- Be single-sentence.
- Be tied to one of the categories: `estimation`, `architecture`, `testing`, `dependencies`, `tooling`, `process`.
2. Append one bullet per lesson to `_docs/LESSONS.md` using this format:
```
- [YYYY-MM-DD] [category] one-line lesson statement.
Source: _docs/06_metrics/retro_YYYY-MM-DD.md
```
3. After appending, trim `_docs/LESSONS.md` to keep only the last **15 entries** (ring buffer). Oldest entries drop off the top. Preserve the file's header section if present. A trim sketch follows this list.
4. If `_docs/LESSONS.md` does not exist, create it with this skeleton before appending:
```markdown
# Lessons Log
A ring buffer of the last 15 actionable lessons extracted from retrospectives and incidents.
Downstream skills consume this file:
- `.cursor/skills/new-task/SKILL.md` (Step 2 Complexity Assessment)
- `.cursor/skills/plan/steps/06_work-item-epics.md` (epic sizing)
- `.cursor/skills/decompose/SKILL.md` (Step 2 task complexity)
- `.cursor/skills/autodev/SKILL.md` (Execution Loop step 0 — surface top 3 lessons)
Categories: estimation · architecture · testing · dependencies · tooling · process
```
**Self-verification**:
- [ ] 1-3 lessons extracted (3 in cycle-end / on-demand mode, 1-3 in incident mode)
- [ ] Each lesson is single-sentence, specific, and tagged with a valid category
- [ ] Each lesson includes a Source link back to its retro or incident file
- [ ] `_docs/LESSONS.md` trimmed to at most 15 entries
- [ ] Skeleton header preserved if file was just created
**Save action**: Write (or update) `_docs/LESSONS.md`.
---
## Escalation Rules
| Situation | Action |
@@ -167,6 +239,7 @@ Present the report summary to the user.
│ 1. Collect Metrics → parse batch reports, compute metrics │
│ 2. Analyze Trends → patterns, comparison, improvement areas │
│ 3. Produce Report → _docs/06_metrics/retro_[date].md │
│ 4. Update Lessons → append top-3 to _docs/LESSONS.md (≤15) │
├────────────────────────────────────────────────────────────────┤
│ Principles: Data-driven · Actionable · Cumulative │
│ Non-judgmental · Save immediately │
+1 -1
@@ -4,7 +4,7 @@ description: |
OWASP-based security audit skill. Analyzes codebase for vulnerabilities across dependency scanning,
static analysis, OWASP Top 10 review, and secrets detection. Produces a structured security report
with severity-ranked findings and remediation guidance.
Can be invoked standalone or as part of the autodev flow (optional step before deploy).
Trigger phrases:
- "security audit", "security scan", "OWASP review"
- "vulnerability scan", "security check"
+151 -26
@@ -13,9 +13,24 @@ disable-model-invocation: true
# Test Run
Run the project's test suite and report results. This skill is invoked by the autodev at verification checkpoints — after implementing tests, after implementing features, before deploy — or any point where a test suite must pass before proceeding.
## Modes
test-run has two modes. The caller passes the mode explicitly; if missing, default to `functional`.
| Mode | Scope | Typical caller | Input artifacts |
|------|-------|---------------|-----------------|
| `functional` (default) | Unit / integration / blackbox tests — correctness | autodev Steps that verify after Implement Tests or Implement | `scripts/run-tests.sh`, `_docs/02_document/tests/environment.md`, `_docs/02_document/tests/blackbox-tests.md` |
| `perf` | Performance / load / stress / soak tests — latency, throughput, error-rate thresholds | autodev greenfield Step 9, existing-code Step 15 (pre-deploy) | `scripts/run-performance-tests.sh`, `_docs/02_document/tests/performance-tests.md`, AC thresholds in `_docs/00_problem/acceptance_criteria.md` |
Direct user invocation (`/test-run`) defaults to `functional`. If the user says "perf tests", "load test", "performance", or passes a performance scenarios file, run `perf` mode.
After selecting a mode, read its corresponding workflow below; do not mix them.
---
## Functional Mode
### 1. Detect Test Runner
@@ -79,7 +94,7 @@ Categorize skips as: **explicit skip (dead code)**, **runtime skip (unreachable)
### 5. Handle Outcome
**All tests pass, zero skipped** → return success to the autodev for auto-chain.
**Any test fails or errors** → this is a **blocking gate**. Never silently ignore failures. **Always investigate the root cause before deciding on an action.** Read the failing test code, read the error output, check service logs if applicable, and determine whether the bug is in the test or in the production code.
@@ -100,34 +115,48 @@ After investigating, present:
```
- If user picks A → apply fixes, then re-run (loop back to step 2)
- If user picks B → return failure to the autodev
**Any test skipped** → this is also a **blocking gate**. Skipped tests mean something is wrong — either with the test, the environment, or the test design. **Never blindly remove a skipped test.** Always investigate the root cause first.
#### Investigation Protocol for Skipped Tests
For each skipped test:
1. **Read the test code** — understand what the test is supposed to verify and why it skips.
2. **Determine the root cause** — why did the skip condition fire?
   - Is the test environment misconfigured? (e.g., wrong ports, missing env vars, service not started correctly)
   - Is the test ordering wrong? (e.g., a fixture in an earlier test mutates shared state)
   - Is a dependency missing? (e.g., package not installed, fixture file absent)
   - Is the skip condition outdated? (e.g., code was refactored but the skip guard still checks the old behavior)
   - Is the test fundamentally untestable in the current setup? (e.g., requires Docker restart, different OS, special hardware)
3. **Try to fix the root cause first** — the goal is to make the test run, not to delete it:
   - Fix the environment or configuration
   - Reorder tests or isolate shared state
   - Install the missing dependency
   - Update the skip condition to match current behavior
4. **Only remove as last resort** — if the test truly cannot run in any realistic test environment (e.g., requires hardware not available, duplicates another test with identical assertions), then removal is justified. Document the reasoning.
#### Categorization
- **explicit skip (dead code)**: Has `@pytest.mark.skip` — investigate whether the reason in the decorator is still valid. Often these are temporary skips that became permanent by accident.
- **runtime skip (unreachable)**: `pytest.skip()` fires inside the test body — investigate why the condition always triggers. Often fixable by adjusting test order, environment, or the condition itself.
- **environment mismatch**: Test assumes a different environment — investigate whether the test environment setup can be fixed. The skip is a workaround for something we can and should fix.
- **missing fixture/data**: Data or service not available — investigate whether it can be provided.
**Any skipped test** → classify as legitimate or illegitimate before deciding whether to block.
#### Legitimate skips (accept and proceed)
The code path genuinely cannot execute on this runner. Acceptable reasons:
- Hardware not physically present (GPU, Apple Neural Engine, sensor, serial device)
- Operating system mismatch (Darwin-only test on Linux CI, Windows-only test on macOS)
- Feature-flag-gated test whose feature is intentionally disabled in this environment
- External service the project deliberately does not control (e.g., a third-party API with no sandbox, and the project has a documented contract test instead)
For legitimate skips: verify the skip condition is accurate (the test would run if the hardware/OS were present), verify it has a clear reason string, and proceed.
#### Illegitimate skips (BLOCKING — must resolve)
NOT acceptable reasons:
- Required service not running (database, message broker, downstream API we control) → fix: bring the service up, add a docker-compose dependency, or add a mock
- Missing test fixture, seed data, or sample file → fix: provide the data, generate it, or ASK the user for it
- Missing environment variable or credential → fix: add to `.env.example`, document, ASK user for the value
- Flaky-test quarantine with no tracking ticket → fix: create the ticket (or replay via leftovers if tracker is down)
- Inherited skip from a prior refactor that was never cleaned up → fix: clean it up now
- Test ordering mutates shared state → fix: isolate the state
**Rule of thumb**: if the reason for skipping is "we didn't set something up," that's not a valid skip — set it up. If the reason is "this hardware/OS isn't here," that's valid.
#### Resolution steps for illegitimate skips
1. Classify the skip (read the skip reason and test body)
2. If the fix is **mechanical** — start a container, install a dep, add a mock, reorder fixtures — attempt it automatically and re-run
3. If the fix requires **user input** — credentials, sample data, a business decision — BLOCK and ASK
4. Never silently mark the skip as "accepted" — every illegitimate skip must either be fixed or escalated
5. Removal is a last resort and requires explicit user approval with documented reasoning
#### Categorization cheatsheet
- **explicit skip (e.g. `@pytest.mark.skip`)**: check whether the reason in the decorator is still valid
- **conditional skip (e.g. `@pytest.mark.skipif`)**: check whether the condition is accurate and whether we can change the environment to make it false
- **runtime skip (e.g. `pytest.skip()` in body)**: check why the condition fires — often an ordering or environment bug
- **missing fixture/data**: treated as illegitimate unless user confirms the data is unavailable
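For a pytest-based project, one quick way to gather the raw material for this classification is to have the runner print every skip with its reason (the test path is an assumption):
```bash
# List each skipped test and its reason so it can be classified as legitimate or illegitimate.
pytest tests/ -q --tb=no -rs | grep "^SKIPPED" | sort | uniq -c | sort -rn
```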
After investigating, present findings:
@@ -145,6 +174,102 @@ After investigating, present findings:
Only option B allows proceeding with skips, and it requires explicit user approval with documented justification for each skip.
---
## Perf Mode
Performance tests differ from functional tests in what they measure (latency / throughput / error-rate distributions, not pass/fail of a single assertion) and when they run (once before deploy, not per batch). The mode reuses the same orchestration shape (detect → run → report → gate on outcome) but with perf-specific tool detection and threshold comparison.
### 1. Detect Perf Runner
Check in order — first match wins:
1. `scripts/run-performance-tests.sh` exists (generated by `test-spec` Phase 4) → use it; the script already encodes the correct load profile and tool invocation.
2. `_docs/02_document/tests/performance-tests.md` exists → read the scenarios, then auto-detect a load-testing tool:
- `k6` binary available → prefer k6 (scriptable, good default reporting)
- `locust` in project deps or installed → locust
- `artillery` in `package.json` or installed globally → artillery
- `wrk` binary available → wrk (simple HTTP only; use only if scenarios are HTTP GET/POST)
- Language-native benchmark harness (`cargo bench`, `go test -bench`, `pytest-benchmark`) → use when scenarios are CPU-bound or in-process
3. No runner and no scenarios spec → STOP and ask the user to either run `/test-spec` first (to produce `performance-tests.md` + the runner script) or supply a runner script manually. Do not improvise perf tests from scratch.
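A minimal sketch of that detection order, assuming the paths named above (the language-native benchmark branch is omitted for brevity):
```bash
# First match wins: project script → spec + auto-detected load tool → stop and ask.
if [ -x scripts/run-performance-tests.sh ]; then
  echo "runner: scripts/run-performance-tests.sh"
elif [ -f _docs/02_document/tests/performance-tests.md ]; then
  for tool in k6 locust artillery wrk; do
    command -v "$tool" >/dev/null 2>&1 && { echo "runner: $tool"; break; }
  done
else
  echo "no perf runner or scenarios spec — run /test-spec first" >&2
  exit 1
fi
```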
### 2. Run
Execute the detected runner against the target system. Capture per-scenario metrics:
- Latency percentiles: p50, p95, p99 (and p999 if load volume permits)
- Throughput: requests/sec or operations/sec
- Error rate: failed / total
- Duration: actual run time (for soak/ramp scenarios)
- Resource usage if the scenarios call for it (CPU%, RSS, GPU utilization)
Tear down any environment the runner spun up after metrics are captured.
### 3. Compare Against Thresholds
Load thresholds in this order:
1. Per-scenario expected results from `_docs/02_document/tests/performance-tests.md`
2. Project-wide thresholds from `_docs/00_problem/acceptance_criteria.md` (latency / throughput lines)
3. Fallback: no threshold → record the measurement but classify the scenario as **Unverified** (not pass/fail)
Classify each scenario:
- **Pass** — all thresholds met
- **Warn** — within 10% of any threshold (e.g., p95 = 460ms against a 500ms threshold)
- **Fail** — any threshold violated
- **Unverified** — no threshold to compare against
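As a sketch of the threshold comparison for a single latency scenario (function and variable names are illustrative):
```bash
# Classify observed p95 against its threshold; the Warn band is within 10% of the limit.
classify_p95() {
  local observed_ms="$1" threshold_ms="${2:-}"
  if [ -z "$threshold_ms" ]; then echo "Unverified"; return; fi
  awk -v o="$observed_ms" -v t="$threshold_ms" 'BEGIN {
    if (o > t)           print "Fail"
    else if (o >= 0.9*t) print "Warn"   # e.g. p95 = 460ms against a 500ms threshold
    else                 print "Pass"
  }'
}
classify_p95 460 500   # → Warn
```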
### 4. Report
```
══════════════════════════════════════
PERF RESULTS
══════════════════════════════════════
Scenarios: [pass N · warn M · fail K · unverified U]
──────────────────────────────────────
1. [scenario_name] — [Pass/Warn/Fail/Unverified]
p50 = [x]ms · p95 = [y]ms · p99 = [z]ms
throughput = [r] rps · errors = [e]%
threshold: [criterion and verdict detail]
2. ...
══════════════════════════════════════
```
Persist the full report to `_docs/06_metrics/perf_<YYYY-MM-DD>_<run_label>.md` for trend tracking across cycles.
### 5. Handle Outcome
**All scenarios Pass (or Pass + Unverified only)** → return success to the caller.
**Any Warn or Fail** → this is a **blocking gate**. Investigate before deciding — read the runner output, check if the warn-band was historically stable, rule out transient infrastructure noise (always worth one re-run before declaring a regression).
After investigating, present:
```
══════════════════════════════════════
PERF GATE: [summary]
══════════════════════════════════════
Failing / warning scenarios:
1. [scenario] — [metric] = [observed] vs threshold [threshold]
likely cause: [1-line diagnosis]
══════════════════════════════════════
A) Fix and re-run (investigate and address the regression)
B) Proceed anyway (accept the regression — requires written justification
recorded in the perf report)
C) Abort — investigate offline
══════════════════════════════════════
Recommendation: A — perf regressions caught pre-deploy
are orders of magnitude cheaper to fix than post-deploy
══════════════════════════════════════
```
- User picks A → apply fixes, re-run (back to step 2).
- User picks B → append the justification to the perf report; return success to the caller.
- User picks C → return failure to the caller.
**Any Unverified scenarios with no Warn/Fail** → not blocking, but surface them in the report so the user knows coverage gaps exist. Suggest running `/test-spec` to add expected results next cycle.
## Trigger Conditions
This skill is invoked by the autodev at test verification checkpoints. It is not typically invoked directly by the user. When invoked directly, select the mode from the user's phrasing ("run tests" → functional; "load test" / "perf test" → perf).
+49 -331
@@ -27,19 +27,27 @@ Analyze input data completeness and produce detailed black-box test specificatio
- **Save immediately**: write artifacts to disk after each phase; never accumulate unsaved work
- **Ask, don't assume**: when requirements are ambiguous, ask the user before proceeding
- **Spec, don't code**: this workflow produces test specifications, never test implementation code
- **Every test must have a pass/fail criterion**. Two acceptable shapes:
  - **Input/output shape**: concrete input data paired with a quantifiable expected result (exact value, tolerance, threshold, pattern, reference file). Typical for functional blackbox tests, performance tests with load data, data-processing pipelines.
  - **Behavioral shape**: a trigger condition + observable system behavior + quantifiable pass/fail criterion, with no input data required. Typical for startup/shutdown tests, retry/backoff policies, state transitions, logging/metrics emission, resilience scenarios. Example criteria: "startup logs `service ready` within 5s", "retry emits 3 attempts with exponential backoff (base 100ms ± 20ms)", "on SIGTERM, service drains in-flight requests within 30s grace period", "health endpoint returns 503 while migrations run".
  - For behavioral tests the observable (log line, metric value, state transition, emitted event, elapsed time) must still be quantifiable — the test must programmatically decide pass/fail.
  - A test that cannot produce a pass/fail verdict through either shape is not verifiable and must be removed.
## Context Resolution
Fixed paths:
- PROBLEM_DIR: `_docs/00_problem/`
- SOLUTION_DIR: `_docs/01_solution/`
- DOCUMENT_DIR: `_docs/02_document/`
- TESTS_OUTPUT_DIR: `_docs/02_document/tests/`
Announce the resolved paths and the detected invocation mode (below) to the user before proceeding.
### Invocation Modes
- **full** (default): runs all 4 phases against the whole `PROBLEM_DIR` + `DOCUMENT_DIR`. Used in greenfield Plan Step 1 and existing-code Step 3.
- **cycle-update**: runs only a scoped refresh of the existing test-spec artifacts against the current feature cycle's completed tasks. Used by the existing-code flow's per-cycle sync step. See `modes/cycle-update.md` for the narrowed workflow.
## Input Specification
@@ -59,7 +67,7 @@ Every input data item MUST have a corresponding expected result that defines wha
Expected results live inside `_docs/00_problem/input_data/` in one or both of:
1. **Mapping file** (`input_data/expected_results/results_report.md`): a table pairing each input with its quantifiable expected output, using the format defined in `templates/expected-results.md`
2. **Reference files folder** (`input_data/expected_results/`): machine-readable files (JSON, CSV, etc.) containing full expected outputs for complex cases, referenced from the mapping file
@@ -74,7 +82,8 @@ input_data/
└── data_parameters.md
```
**Quantifiability requirements** (see `templates/expected-results.md` for full format and examples):
- Numeric values: exact value or value ± tolerance (e.g., `confidence ≥ 0.85`, `position ± 10px`)
- Structured data: exact JSON/CSV values, or a reference file in `expected_results/`
- Counts: exact counts (e.g., "3 detections", "0 errors")
@@ -95,7 +104,7 @@ input_data/
1. `acceptance_criteria.md` exists and is non-empty — **STOP if missing**
2. `restrictions.md` exists and is non-empty — **STOP if missing**
3. `input_data/` exists and contains at least one file — **STOP if missing**
4. `input_data/expected_results/results_report.md` exists and is non-empty — **STOP if missing**. Prompt the user: *"Expected results mapping is required. Please create `_docs/00_problem/input_data/expected_results/results_report.md` pairing each input with its quantifiable expected output. Use `templates/expected-results.md` as the format reference."*
5. `problem.md` exists and is non-empty — **STOP if missing**
6. `solution.md` exists and is non-empty — **STOP if missing**
7. Create TESTS_OUTPUT_DIR if it does not exist
@@ -133,6 +142,7 @@ TESTS_OUTPUT_DIR/
| Phase 3 | Updated test data spec (if data added) | `test-data.md` |
| Phase 3 | Updated test files (if tests removed) | respective test file |
| Phase 3 | Updated traceability matrix (if tests removed) | `traceability-matrix.md` |
| Hardware Assessment | Test Execution section | `environment.md` (updated) |
| Phase 4 | Test runner script | `scripts/run-tests.sh` |
| Phase 4 | Performance test runner script | `scripts/run-performance-tests.sh` |
@@ -147,331 +157,44 @@ If TESTS_OUTPUT_DIR already contains files:
## Progress Tracking
At the start of execution, create a TodoWrite with all four phases (plus the hardware assessment between Phase 3 and Phase 4). Update status as each phase completes.
## Workflow
### Phase 1: Input Data & Expected Results Completeness Analysis
Read and follow `phases/01-input-data-analysis.md`.
**Role**: Professional Quality Assurance Engineer
**Goal**: Assess whether the available input data is sufficient to build comprehensive test scenarios
**Constraints**: Analysis only — no test specs yet
1. Read `_docs/01_solution/solution.md`
2. Read `acceptance_criteria.md`, `restrictions.md`
3. Read testing strategy from solution.md (if present)
4. If `DOCUMENT_DIR/architecture.md` and `DOCUMENT_DIR/system-flows.md` exist, read them for additional context on system interfaces and flows
5. Read `input_data/expected_results/results_report.md` and any referenced files in `input_data/expected_results/`
6. Analyze `input_data/` contents against:
- Coverage of acceptance criteria scenarios
- Coverage of restriction edge cases
- Coverage of testing strategy requirements
7. Analyze `input_data/expected_results/results_report.md` completeness:
- Every input data item has a corresponding expected result row in the mapping
- Expected results are quantifiable (contain numeric thresholds, exact values, patterns, or file references — not vague descriptions like "works correctly" or "returns result")
- Expected results specify a comparison method (exact match, tolerance range, pattern match, threshold) per the template
- Reference files in `input_data/expected_results/` that are cited in the mapping actually exist and are valid
8. Present input-to-expected-result pairing assessment:
| Input Data | Expected Result Provided? | Quantifiable? | Issue (if any) |
|------------|--------------------------|---------------|----------------|
| [file/data] | Yes/No | Yes/No | [missing, vague, no tolerance, etc.] |
9. Threshold: at least 70% coverage of scenarios AND every covered scenario has a quantifiable expected result (see `.cursor/rules/cursor-meta.mdc` Quality Thresholds table)
10. If coverage is low, search the internet for supplementary data, assess quality with user, and if user agrees, add to `input_data/` and update `input_data/expected_results/results_report.md`
11. If expected results are missing or not quantifiable, ask user to provide them before proceeding
**BLOCKING**: Do NOT proceed until user confirms both input data coverage AND expected results completeness are sufficient.
---
### Phase 2: Test Scenario Specification
Read and follow `phases/02-test-scenarios.md`.
**Role**: Professional Quality Assurance Engineer
**Goal**: Produce detailed black-box test specifications covering blackbox, performance, resilience, security, and resource limit scenarios
**Constraints**: Spec only — no test code. Tests describe what the system should do given specific inputs, not how the system is built.
Based on all acquired data, acceptance_criteria, and restrictions, form detailed test scenarios:
1. Define test environment using `.cursor/skills/plan/templates/test-environment.md` as structure
2. Define test data management using `.cursor/skills/plan/templates/test-data.md` as structure
3. Write blackbox test scenarios (positive + negative) using `.cursor/skills/plan/templates/blackbox-tests.md` as structure
4. Write performance test scenarios using `.cursor/skills/plan/templates/performance-tests.md` as structure
5. Write resilience test scenarios using `.cursor/skills/plan/templates/resilience-tests.md` as structure
6. Write security test scenarios using `.cursor/skills/plan/templates/security-tests.md` as structure
7. Write resource limit test scenarios using `.cursor/skills/plan/templates/resource-limit-tests.md` as structure
8. Build traceability matrix using `.cursor/skills/plan/templates/traceability-matrix.md` as structure
**Self-verification**:
- [ ] Every acceptance criterion is covered by at least one test scenario
- [ ] Every restriction is verified by at least one test scenario
- [ ] Every test scenario has a quantifiable expected result from `input_data/expected_results/results_report.md`
- [ ] Expected results use comparison methods from `.cursor/skills/test-spec/templates/expected-results.md`
- [ ] Positive and negative scenarios are balanced
- [ ] Consumer app has no direct access to system internals
- [ ] Test environment matches project constraints (see Hardware-Dependency & Execution Environment Assessment below)
- [ ] External dependencies have mock/stub services defined
- [ ] Traceability matrix has no uncovered AC or restrictions
**Save action**: Write all files under TESTS_OUTPUT_DIR:
- `environment.md`
- `test-data.md`
- `blackbox-tests.md`
- `performance-tests.md`
- `resilience-tests.md`
- `security-tests.md`
- `resource-limit-tests.md`
- `traceability-matrix.md`
**BLOCKING**: Present test coverage summary (from traceability-matrix.md) to user. Do NOT proceed until confirmed.
Capture any new questions, findings, or insights that arise during test specification — these feed forward into downstream skills (plan, refactor, etc.).
---
### Phase 3: Test Data Validation Gate (HARD GATE)
Read and follow `phases/03-data-validation-gate.md`.
**Role**: Professional Quality Assurance Engineer
**Goal**: Ensure every test scenario produced in Phase 2 has concrete, sufficient test data. Remove tests that lack data. Verify final coverage stays above 70%.
**Constraints**: This phase is MANDATORY and cannot be skipped.
#### Step 1 — Build the test-data and expected-result requirements checklist
Scan `blackbox-tests.md`, `performance-tests.md`, `resilience-tests.md`, `security-tests.md`, and `resource-limit-tests.md`. For every test scenario, extract:
| # | Test Scenario ID | Test Name | Required Input Data | Required Expected Result | Result Quantifiable? | Comparison Method | Input Provided? | Expected Result Provided? |
|---|-----------------|-----------|---------------------|-------------------------|---------------------|-------------------|----------------|--------------------------|
| 1 | [ID] | [name] | [data description] | [what system should output] | [Yes/No] | [exact/tolerance/pattern/threshold] | [Yes/No] | [Yes/No] |
Present this table to the user.
#### Step 2 — Ask user to provide missing test data AND expected results
For each row where **Input Provided?** is **No** OR **Expected Result Provided?** is **No**, ask the user:
> **Option A — Provide the missing items**: Supply what is missing:
> - **Missing input data**: Place test data files in `_docs/00_problem/input_data/` or indicate the location.
> - **Missing expected result**: Provide the quantifiable expected result for this input. Update `_docs/00_problem/input_data/expected_results/results_report.md` with a row mapping the input to its expected output. If the expected result is complex, provide a reference CSV file in `_docs/00_problem/input_data/expected_results/`. Use `.cursor/skills/test-spec/templates/expected-results.md` for format guidance.
>
> Expected results MUST be quantifiable — the test must be able to programmatically compare actual vs expected. Examples:
> - "3 detections with bounding boxes [(x1,y1,x2,y2), ...] ± 10px"
> - "HTTP 200 with JSON body matching `expected_response_01.json`"
> - "Processing time < 500ms"
> - "0 false positives in the output set"
>
> **Option B — Skip this test**: If you cannot provide the data or expected result, this test scenario will be **removed** from the specification.
**BLOCKING**: Wait for the user's response for every missing item.
#### Step 3 — Validate provided data and expected results
For each item where the user chose **Option A**:
**Input data validation**:
1. Verify the data file(s) exist at the indicated location
2. Verify **quality**: data matches the format, schema, and constraints described in the test scenario (e.g., correct image resolution, valid JSON structure, expected value ranges)
3. Verify **quantity**: enough data samples to cover the scenario (e.g., at least N images for a batch test, multiple edge-case variants)
**Expected result validation**:
4. Verify the expected result exists in `input_data/expected_results/results_report.md` or as a referenced file in `input_data/expected_results/`
5. Verify **quantifiability**: the expected result can be evaluated programmatically — it must contain at least one of:
- Exact values (counts, strings, status codes)
- Numeric values with tolerance (e.g., `± 10px`, `≥ 0.85`)
- Pattern matches (regex, substring, JSON schema)
- Thresholds (e.g., `< 500ms`, `≤ 5% error rate`)
- Reference file for structural comparison (JSON diff, CSV diff)
6. Verify **completeness**: the expected result covers all outputs the test checks (not just one field when the test validates multiple)
7. Verify **consistency**: the expected result is consistent with the acceptance criteria it traces to
If any validation fails, report the specific issue and loop back to Step 2 for that item.
#### Step 4 — Remove tests without data or expected results
For each item where the user chose **Option B**:
1. Warn the user: `⚠️ Test scenario [ID] "[Name]" will be REMOVED from the specification due to missing test data or expected result.`
2. Remove the test scenario from the respective test file
3. Remove corresponding rows from `traceability-matrix.md`
4. Update `test-data.md` to reflect the removal
**Save action**: Write updated files under TESTS_OUTPUT_DIR:
- `test-data.md`
- Affected test files (if tests removed)
- `traceability-matrix.md` (if tests removed)
#### Step 5 — Final coverage check
After all removals, recalculate coverage:
1. Count remaining test scenarios that trace to acceptance criteria
2. Count total acceptance criteria + restrictions
3. Calculate coverage percentage: `covered_items / total_items * 100`
| Metric | Value |
|--------|-------|
| Total AC + Restrictions | ? |
| Covered by remaining tests | ? |
| **Coverage %** | **?%** |
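For instance, with illustrative numbers:
```bash
# 22 acceptance criteria + restrictions, 17 still covered after removals.
echo "scale=1; 100 * 17 / 22" | bc   # → 77.2 — above the 70% minimum
```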
**Decision**:
- **Coverage ≥ 70%** → Phase 3 **PASSED**. Present final summary to user.
- **Coverage < 70%** → Phase 3 **FAILED**. Report:
> ❌ Test coverage dropped to **X%** (minimum 70% required). The removed test scenarios left gaps in the following acceptance criteria / restrictions:
>
> | Uncovered Item | Type (AC/Restriction) | Missing Test Data Needed |
> |---|---|---|
>
> **Action required**: Provide the missing test data for the items above, or add alternative test scenarios that cover these items with data you can supply.
**BLOCKING**: Loop back to Step 2 with the uncovered items. Do NOT finalize until coverage ≥ 70%.
#### Phase 3 Completion
When coverage ≥ 70% and all remaining tests have validated data AND quantifiable expected results:
1. Present the final coverage report
2. List all removed tests (if any) with reasons
3. Confirm every remaining test has: input data + quantifiable expected result + comparison method
4. Confirm all artifacts are saved and consistent
---
### Hardware-Dependency & Execution Environment Assessment (BLOCKING — runs between Phase 3 and Phase 4)
Read and follow `phases/hardware-assessment.md`.
Docker is the **preferred** test execution environment (reproducibility, isolation, CI parity). However, hardware-dependent projects may require local execution to exercise the real code paths. This assessment determines the right execution strategy by scanning both documentation and source code.
#### Step 1 — Documentation scan
Check the following files for mentions of hardware-specific requirements:
| File | Look for |
|------|----------|
| `_docs/00_problem/restrictions.md` | Platform requirements, hardware constraints, OS-specific features |
| `_docs/01_solution/solution.md` | Engine selection logic, platform-dependent paths, hardware acceleration |
| `_docs/02_document/architecture.md` | Component diagrams showing hardware layers, engine adapters |
| `_docs/02_document/components/*/description.md` | Per-component hardware mentions |
| `TESTS_OUTPUT_DIR/environment.md` | Existing environment decisions |
#### Step 2 — Code scan
Search the project source for indicators of hardware dependence. The project is **hardware-dependent** if ANY of the following are found:
| Category | Code indicators (imports, APIs, config) |
|----------|-----------------------------------------|
| GPU / CUDA | `import pycuda`, `import tensorrt`, `import pynvml`, `torch.cuda`, `nvidia-smi`, `CUDA_VISIBLE_DEVICES`, `runtime: nvidia` |
| Apple Neural Engine / CoreML | `import coremltools`, `CoreML`, `MLModel`, `ComputeUnit`, `MPS`, `sys.platform == "darwin"`, `platform.machine() == "arm64"` |
| OpenCL / Vulkan | `import pyopencl`, `clCreateContext`, vulkan headers |
| TPU / FPGA | `import tensorflow.distribute.TPUStrategy`, FPGA bitstream loaders |
| Sensors / Cameras | `import cv2.VideoCapture(0)` (device index), serial port access, GPIO, V4L2 |
| OS-specific services | Kernel modules (`modprobe`), host-level drivers, platform-gated code (`sys.platform` branches selecting different backends) |
Also check dependency files (`requirements.txt`, `setup.py`, `pyproject.toml`, `Cargo.toml`, `*.csproj`) for hardware-specific packages.
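A rough grep-based sketch of this code scan (the indicator list mirrors the table above and is not exhaustive):
```bash
# Any hit suggests the project is hardware-dependent; review each match in context.
grep -rnE --include="*.py" --include="*.toml" --include="*.txt" --include="*.csproj" \
  "pycuda|tensorrt|pynvml|torch\.cuda|CUDA_VISIBLE_DEVICES|coremltools|pyopencl|VideoCapture\(0\)" . \
  || echo "no hardware indicators found — Docker remains the default environment"
```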
#### Step 3 — Classify the project
Based on Steps 1-2, classify the project:
- **Not hardware-dependent**: no indicators found → use Docker (preferred default), skip to "Record the decision" below
- **Hardware-dependent**: one or more indicators found → proceed to Step 4
#### Step 4 — Present execution environment choice
Present the findings and ask the user using Choose format:
```
══════════════════════════════════════
DECISION REQUIRED: Test execution environment
══════════════════════════════════════
Hardware dependencies detected:
- [list each indicator found, with file:line]
══════════════════════════════════════
Running in Docker means these hardware code paths
are NOT exercised — Docker uses a Linux VM where
[specific hardware, e.g. CoreML / CUDA] is unavailable.
The system would fall back to [fallback engine/path].
══════════════════════════════════════
A) Local execution only (tests the real hardware path)
B) Docker execution only (tests the fallback path)
C) Both local and Docker (tests both paths, requires
two test runs — recommended for CI with heterogeneous
runners)
══════════════════════════════════════
Recommendation: [A, B, or C] — [reason]
══════════════════════════════════════
```
#### Step 5 — Record the decision
Write or update a **"Test Execution"** section in `TESTS_OUTPUT_DIR/environment.md` with:
1. **Decision**: local / docker / both
2. **Hardware dependencies found**: list with file references
3. **Execution instructions** per chosen mode:
- **Local mode**: prerequisites (OS, SDK, hardware), how to start services, how to run the test runner, environment variables
- **Docker mode**: docker-compose profile/command, required images, how results are collected
- **Both mode**: instructions for each, plus guidance on which CI runner type runs which mode
---
### Phase 4: Test Runner Script Generation
Read and follow `phases/04-runner-scripts.md`.
**Skip condition**: If this skill was invoked from the `/plan` skill (planning context, no code exists yet), skip Phase 4 entirely. Script creation should instead be planned as a task during decompose — the decomposer creates a task for creating these scripts. Phase 4 only runs when invoked from the existing-code flow (where source code already exists) or standalone.
**Role**: DevOps engineer
**Goal**: Generate executable shell scripts that run the specified tests, so the autopilot and CI can invoke them consistently.
**Constraints**: Scripts must be idempotent, portable across dev/CI, and exit with non-zero on failure. Respect the Docker Suitability Assessment decision above.
#### Step 1 — Detect test infrastructure
1. Identify the project's test runner from manifests and config files:
- Python: `pytest` (pyproject.toml, setup.cfg, pytest.ini)
- .NET: `dotnet test` (*.csproj, *.sln)
- Rust: `cargo test` (Cargo.toml)
- Node: `npm test` or `vitest` / `jest` (package.json)
2. Check Docker Suitability Assessment result:
- If **local execution** was chosen → do NOT generate docker-compose test files; scripts run directly on host
- If **Docker execution** was chosen → identify/generate docker-compose files for integration/blackbox tests
3. Identify performance/load testing tools from dependencies (k6, locust, artillery, wrk, or built-in benchmarks)
4. Read `TESTS_OUTPUT_DIR/environment.md` for infrastructure requirements
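A rough sketch of the manifest-based detection (first match wins; the heuristic simply mirrors the file list above):
```bash
if [ -f pyproject.toml ] || [ -f pytest.ini ] || [ -f setup.cfg ]; then
  echo "pytest"
elif compgen -G "*.sln" >/dev/null || compgen -G "*.csproj" >/dev/null; then
  echo "dotnet test"
elif [ -f Cargo.toml ]; then
  echo "cargo test"
elif [ -f package.json ]; then
  echo "npm test (or vitest / jest, per package.json)"
fi
```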
#### Step 2 — Generate test runner
**Docker is the default.** Only generate a local `scripts/run-tests.sh` if the Hardware-Dependency Assessment determined **local** or **both** execution (i.e., the project requires real hardware like GPU/CoreML/TPU/sensors). For all other projects, use `docker-compose.test.yml` — it provides reproducibility, isolation, and CI parity without a custom shell script.
**If local script is needed** — create `scripts/run-tests.sh` at the project root using `.cursor/skills/test-spec/templates/run-tests-script.md` as structural guidance. The script must:
1. Set `set -euo pipefail` and trap cleanup on EXIT
2. **Install all project and test dependencies** (e.g. `pip install -q -r requirements.txt -r e2e/requirements.txt`, `dotnet restore`, `npm ci`). This prevents collection-time import errors on fresh environments.
3. Optionally accept a `--unit-only` flag to skip blackbox tests
4. Run unit/blackbox tests using the detected test runner (activate virtualenv if present, run test runner directly on host)
5. Print a summary of passed/failed/skipped tests
6. Exit 0 on all pass, exit 1 on any failure
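A minimal skeleton satisfying those requirements, assuming a pytest project (paths and dependency files are illustrative):
```bash
#!/usr/bin/env bash
# Skeleton local runner: install dependencies, run unit + blackbox suites, fail non-zero.
set -euo pipefail

pip install -q -r requirements.txt              # project + test deps (illustrative path)

if [ "${1:-}" = "--unit-only" ]; then
  pytest tests/unit -q
else
  pytest tests/unit tests/blackbox -q
fi
echo "run-tests.sh: all selected suites passed"  # set -e already aborted on any failure
```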
**If Docker** — generate or update `docker-compose.test.yml` that builds the test image, installs all dependencies inside the container, runs the test suite, and exits with the test runner's exit code.
#### Step 3 — Generate `scripts/run-performance-tests.sh`
Create `scripts/run-performance-tests.sh` at the project root. The script must:
1. Set `set -euo pipefail` and trap cleanup on EXIT
2. Read thresholds from `_docs/02_document/tests/performance-tests.md` (or accept as CLI args)
3. Start the system under test (local or docker-compose, matching the Docker Suitability Assessment decision)
4. Run load/performance scenarios using the detected tool
5. Compare results against threshold values from the test spec
6. Print a pass/fail summary per scenario
7. Exit 0 if all thresholds met, exit 1 otherwise
#### Step 4 — Verify scripts
1. Verify both scripts are syntactically valid (`bash -n scripts/run-tests.sh`)
2. Mark both scripts as executable (`chmod +x`)
3. Present a summary of what each script does to the user
**Save action**: Write `scripts/run-tests.sh` and `scripts/run-performance-tests.sh` to the project root.
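A sketch of the verification step, assuming both scripts live under `scripts/`:
```bash
for s in scripts/run-tests.sh scripts/run-performance-tests.sh; do
  bash -n "$s"     # parse-only syntax check
  chmod +x "$s"
done
```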
---
### cycle-update mode
If invoked in `cycle-update` mode (see "Invocation Modes" above), read and follow `modes/cycle-update.md` instead of the full 4-phase workflow.
## Escalation Rules
| Situation | Action |
@@ -479,27 +202,28 @@ Create `scripts/run-performance-tests.sh` at the project root. The script must:
| Missing acceptance_criteria.md, restrictions.md, or input_data/ | **STOP** — specification cannot proceed |
| Missing input_data/expected_results/results_report.md | **STOP** — ask user to provide expected results mapping using the template |
| Ambiguous requirements | ASK user |
| Input data coverage below 75% (Phase 1) | Search internet for supplementary data, ASK user to validate |
| Expected results missing or not quantifiable (Phase 1) | ASK user to provide quantifiable expected results before proceeding |
| Test scenario conflicts with restrictions | ASK user to clarify intent |
| System interfaces unclear (no architecture.md) | ASK user or derive from solution.md |
| Test data or expected result not provided for a test scenario (Phase 3) | WARN user and REMOVE the test |
| Final coverage below 75% after removals (Phase 3) | BLOCK — require user to supply data or accept reduced spec |
## Common Mistakes
- **Referencing internals**: tests must be black-box — no internal module names, no direct DB queries against the system under test
- **Vague expected outcomes**: "works correctly" is not a test outcome; use specific measurable values
- **Missing pass/fail criterion**: input/output tests without an expected result, OR behavioral tests without a measurable observable — both are unverifiable and must be removed
- **Non-quantifiable criteria**: "should return good results", "works correctly", "behaves properly" — not verifiable. Use exact values, tolerances, thresholds, pattern matches, or timing bounds that code can evaluate.
- **Forcing the wrong shape**: do not invent fake input data for a behavioral test (e.g., "input: SIGTERM signal") just to fit the input/output shape. Classify the test correctly and use the matching checklist.
- **Missing negative scenarios**: every positive scenario category should have corresponding negative/edge-case tests
- **Untraceable tests**: every test should trace to at least one AC or restriction
- **Writing test code**: this skill produces specifications, never implementation code
- **Tests without data**: every test scenario MUST have concrete test data AND a quantifiable expected result; a test spec without either is not executable and must be removed
## Trigger Conditions
When the user wants to:
- Specify blackbox tests before implementation or refactoring
- Analyze input data completeness for test coverage
- Produce test scenarios from acceptance criteria
@@ -516,36 +240,30 @@ When the user wants to:
│ → verify AC, restrictions, input_data (incl. expected_results.md) │
│ │
│ Phase 1: Input Data & Expected Results Completeness Analysis │
│ → phases/01-input-data-analysis.md │
│ [BLOCKING: user confirms input data + expected results coverage] │
│ │
│ Phase 2: Test Scenario Specification │
│ → phases/02-test-scenarios.md │
│ → environment.md · test-data.md · blackbox-tests.md │
│ → performance-tests.md · resilience-tests.md · security-tests.md │
│ → resource-limit-tests.md · traceability-matrix.md │
│ [BLOCKING: user confirms test coverage] │
│ │
│ Phase 3: Test Data & Expected Results Validation Gate (HARD GATE) │
│ → phases/03-data-validation-gate.md │
│ [BLOCKING: coverage ≥ 75% required to pass] │
│ │
│ Hardware-Dependency Assessment (BLOCKING, pre-Phase-4) │
│ → phases/hardware-assessment.md │
│ │
│ Phase 4: Test Runner Script Generation │
│ → phases/04-runner-scripts.md │
│ → scripts/run-tests.sh (unit + blackbox) │
│ → scripts/run-performance-tests.sh (load/perf scenarios) │
│ │
│ cycle-update mode (scoped refresh) │
│ → modes/cycle-update.md │
├──────────────────────────────────────────────────────────────────────┤
│ Principles: Black-box only · Traceability · Save immediately │
│ Ask don't assume · Spec don't code │
@@ -0,0 +1,26 @@
# Mode: cycle-update
A scoped refresh of existing test-spec artifacts against the current feature cycle's completed tasks. Used by `existing-code` flow's per-cycle sync step.
## Inputs
- The list of task spec files in `_docs/02_tasks/done/` implemented in the current cycle
- `_docs/03_implementation/implementation_report_{feature_slug}_cycle{N}.md`
## Phases that run
- Skip Phase 1 (input data analysis)
- Skip Phase 4 (script generation)
- Run a **narrowed** Phase 2 and Phase 3 per the rules below
## Narrowed rules
1. For each new AC in the cycle's task specs, check `traceability-matrix.md`. If not covered, append one row.
2. For each new component surface exposed in the cycle (new endpoint, event, DTO field — detectable from task Scope and from diffs against `module-layout.md`), append scenarios to the relevant `blackbox-tests.md` / `performance-tests.md` / `security-tests.md` / `resilience-tests.md` / `resource-limit-tests.md` category. Reuse the existing test template shapes.
3. For each NFR declared in a cycle task spec, propagate it to the matching spec file. If the NFR conflicts with an existing spec entry, present via the Choose format.
4. Do NOT rewrite unaffected sections. Preserve existing traceability IDs.
5. Save only the files that changed, update `traceability-matrix.md` last.
## Save action
Save only the changed test artifact files under `TESTS_OUTPUT_DIR`. Update `traceability-matrix.md` last, after all per-category files are written.
@@ -0,0 +1,39 @@
# Phase 1: Input Data & Expected Results Completeness Analysis
**Role**: Professional Quality Assurance Engineer
**Goal**: Assess whether the available input data is sufficient to build comprehensive test scenarios, and whether every input is paired with a quantifiable expected result.
**Constraints**: Analysis only — no test specs yet.
## Steps
1. Read `_docs/01_solution/solution.md`
2. Read `acceptance_criteria.md`, `restrictions.md`
3. Read testing strategy from `solution.md` (if present)
4. If `DOCUMENT_DIR/architecture.md` and `DOCUMENT_DIR/system-flows.md` exist, read them for additional context on system interfaces and flows
5. Read `input_data/expected_results/results_report.md` and any referenced files in `input_data/expected_results/`
6. Analyze `input_data/` contents against:
- Coverage of acceptance criteria scenarios
- Coverage of restriction edge cases
- Coverage of testing strategy requirements
7. Analyze `input_data/expected_results/results_report.md` completeness:
- Every input data item has a corresponding expected result row in the mapping
- Expected results are quantifiable (contain numeric thresholds, exact values, patterns, or file references — not vague descriptions like "works correctly" or "returns result")
- Expected results specify a comparison method (exact match, tolerance range, pattern match, threshold) per the template
- Reference files in `input_data/expected_results/` that are cited in the mapping actually exist and are valid
8. Present input-to-expected-result pairing assessment:
| Input Data | Expected Result Provided? | Quantifiable? | Issue (if any) |
|------------|--------------------------|---------------|----------------|
| [file/data] | Yes/No | Yes/No | [missing, vague, no tolerance, etc.] |
9. Threshold: at least 75% coverage of scenarios AND every covered scenario has a quantifiable expected result (see `.cursor/rules/cursor-meta.mdc` Quality Thresholds table)
10. If coverage is low, search the internet for supplementary data, assess quality with user, and if user agrees, add to `input_data/` and update `input_data/expected_results/results_report.md`
11. If expected results are missing or not quantifiable, ask user to provide them before proceeding
## Blocking
**BLOCKING**: Do NOT proceed to Phase 2 until the user confirms both input data coverage AND expected results completeness are sufficient.
## No save action
Phase 1 does not write an artifact. Findings feed Phase 2.
@@ -0,0 +1,49 @@
# Phase 2: Test Scenario Specification
**Role**: Professional Quality Assurance Engineer
**Goal**: Produce detailed black-box test specifications covering blackbox, performance, resilience, security, and resource limit scenarios.
**Constraints**: Spec only — no test code. Tests describe what the system should do given specific inputs, not how the system is built.
## Steps
Based on all acquired data, acceptance_criteria, and restrictions, form detailed test scenarios:
1. Define test environment using `.cursor/skills/plan/templates/test-environment.md` as structure
2. Define test data management using `.cursor/skills/plan/templates/test-data.md` as structure
3. Write blackbox test scenarios (positive + negative) using `.cursor/skills/plan/templates/blackbox-tests.md` as structure
4. Write performance test scenarios using `.cursor/skills/plan/templates/performance-tests.md` as structure
5. Write resilience test scenarios using `.cursor/skills/plan/templates/resilience-tests.md` as structure
6. Write security test scenarios using `.cursor/skills/plan/templates/security-tests.md` as structure
7. Write resource limit test scenarios using `.cursor/skills/plan/templates/resource-limit-tests.md` as structure
8. Build traceability matrix using `.cursor/skills/plan/templates/traceability-matrix.md` as structure
## Self-verification
- [ ] Every acceptance criterion is covered by at least one test scenario
- [ ] Every restriction is verified by at least one test scenario
- [ ] Every test scenario has a quantifiable expected result from `input_data/expected_results/results_report.md`
- [ ] Expected results use comparison methods from `.cursor/skills/test-spec/templates/expected-results.md`
- [ ] Positive and negative scenarios are balanced
- [ ] Consumer app has no direct access to system internals
- [ ] Test environment matches project constraints (see `phases/hardware-assessment.md`, which runs before Phase 4)
- [ ] External dependencies have mock/stub services defined
- [ ] Traceability matrix has no uncovered AC or restrictions
## Save action
Write all files under TESTS_OUTPUT_DIR:
- `environment.md`
- `test-data.md`
- `blackbox-tests.md`
- `performance-tests.md`
- `resilience-tests.md`
- `security-tests.md`
- `resource-limit-tests.md`
- `traceability-matrix.md`
## Blocking
**BLOCKING**: Present test coverage summary (from `traceability-matrix.md`) to user. Do NOT proceed to Phase 3 until confirmed.
Capture any new questions, findings, or insights that arise during test specification — these feed forward into downstream skills (plan, refactor, etc.).
@@ -0,0 +1,118 @@
# Phase 3: Test Data & Expected Results Validation Gate (HARD GATE)
**Role**: Professional Quality Assurance Engineer
**Goal**: Ensure every test scenario produced in Phase 2 has concrete, sufficient test data. Remove tests that lack data. Verify final coverage stays above 75%.
**Constraints**: This phase is MANDATORY and cannot be skipped.
## Step 1 — Build the requirements checklist
Scan `blackbox-tests.md`, `performance-tests.md`, `resilience-tests.md`, `security-tests.md`, and `resource-limit-tests.md`. For every test scenario, classify its shape (input/output or behavioral) and extract:
**Input/output tests:**
| # | Test Scenario ID | Test Name | Required Input Data | Required Expected Result | Result Quantifiable? | Comparison Method | Input Provided? | Expected Result Provided? |
|---|-----------------|-----------|---------------------|-------------------------|---------------------|-------------------|----------------|--------------------------|
| 1 | [ID] | [name] | [data description] | [what system should output] | [Yes/No] | [exact/tolerance/pattern/threshold] | [Yes/No] | [Yes/No] |
**Behavioral tests:**
| # | Test Scenario ID | Test Name | Trigger Condition | Observable Behavior | Pass/Fail Criterion | Quantifiable? |
|---|-----------------|-----------|-------------------|--------------------|--------------------|---------------|
| 1 | [ID] | [name] | [e.g., service receives SIGTERM] | [e.g., drain logs emitted, port closed] | [e.g., drain completes ≤30s] | [Yes/No] |
Present both tables to the user.
## Step 2 — Ask user to provide missing test data AND expected results
For each row where **Input Provided?** is **No** OR **Expected Result Provided?** is **No**, ask the user:
> **Option A — Provide the missing items**: Supply what is missing:
> - **Missing input data**: Place test data files in `_docs/00_problem/input_data/` or indicate the location.
> - **Missing expected result**: Provide the quantifiable expected result for this input. Update `_docs/00_problem/input_data/expected_results/results_report.md` with a row mapping the input to its expected output. If the expected result is complex, provide a reference CSV file in `_docs/00_problem/input_data/expected_results/`. Use `.cursor/skills/test-spec/templates/expected-results.md` for format guidance.
>
> Expected results MUST be quantifiable — the test must be able to programmatically compare actual vs expected. Examples:
> - "3 detections with bounding boxes [(x1,y1,x2,y2), ...] ± 10px"
> - "HTTP 200 with JSON body matching `expected_response_01.json`"
> - "Processing time < 500ms"
> - "0 false positives in the output set"
>
> **Option B — Skip this test**: If you cannot provide the data or expected result, this test scenario will be **removed** from the specification.
**BLOCKING**: Wait for the user's response for every missing item.
## Step 3 — Validate provided data and expected results
For each item where the user chose **Option A**:
**Input data validation**:
1. Verify the data file(s) exist at the indicated location
2. Verify **quality**: data matches the format, schema, and constraints described in the test scenario (e.g., correct image resolution, valid JSON structure, expected value ranges)
3. Verify **quantity**: enough data samples to cover the scenario (e.g., at least N images for a batch test, multiple edge-case variants)
**Expected result validation**:
4. Verify the expected result exists in `input_data/expected_results/results_report.md` or as a referenced file in `input_data/expected_results/`
5. Verify **quantifiability**: the expected result can be evaluated programmatically — it must contain at least one of:
- Exact values (counts, strings, status codes)
- Numeric values with tolerance (e.g., `± 10px`, `≥ 0.85`)
- Pattern matches (regex, substring, JSON schema)
- Thresholds (e.g., `< 500ms`, `≤ 5% error rate`)
- Reference file for structural comparison (JSON diff, CSV diff)
6. Verify **completeness**: the expected result covers all outputs the test checks (not just one field when the test validates multiple)
7. Verify **consistency**: the expected result is consistent with the acceptance criteria it traces to
If any validation fails, report the specific issue and loop back to Step 2 for that item.
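In practice, "evaluated programmatically" means each expected result reduces to a scripted comparison. A minimal sketch of the four comparison shapes, assuming `jq` is available on the test host; the file names, scores, and limits below are placeholders, not values from any real spec:

```bash
#!/usr/bin/env bash
# Illustrative quantifiability checks. Every value and file name is a placeholder.
set -euo pipefail

# Structural comparison against a reference file (JSON diff).
diff <(jq -S . actual_response.json) <(jq -S . expected_response_01.json)

# Numeric value with tolerance, e.g. a score that must be >= 0.85.
awk -v score=0.87 'BEGIN { exit !(score >= 0.85) }'

# Threshold on a measured duration, e.g. processing time < 500 ms.
elapsed_ms=420
[ "$elapsed_ms" -lt 500 ]

# Pattern match on textual output, e.g. an HTTP status line.
grep -Eq '^HTTP/[0-9.]+ 200' response_headers.txt
```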
## Step 4 — Remove tests without data or expected results
For each item where the user chose **Option B**:
1. Warn the user: `⚠️ Test scenario [ID] "[Name]" will be REMOVED from the specification due to missing test data or expected result.`
2. Remove the test scenario from the respective test file
3. Remove corresponding rows from `traceability-matrix.md`
4. Update `test-data.md` to reflect the removal
**Save action**: Write updated files under TESTS_OUTPUT_DIR:
- `test-data.md`
- Affected test files (if tests removed)
- `traceability-matrix.md` (if tests removed)
## Step 5 — Final coverage check
After all removals, recalculate coverage:
1. Count remaining test scenarios that trace to acceptance criteria
2. Count total acceptance criteria + restrictions
3. Calculate coverage percentage: `covered_items / total_items * 100`
| Metric | Value |
|--------|-------|
| Total AC + Restrictions | ? |
| Covered by remaining tests | ? |
| **Coverage %** | **?%** |
**Decision**:
- **Coverage ≥ 75%** → Phase 3 **PASSED**. Present final summary to user.
- **Coverage < 75%** → Phase 3 **FAILED**. Report:
> ❌ Test coverage dropped to **X%** (minimum 75% required). The removed test scenarios left gaps in the following acceptance criteria / restrictions:
>
> | Uncovered Item | Type (AC/Restriction) | Missing Test Data Needed |
> |---|---|---|
>
> **Action required**: Provide the missing test data for the items above, or add alternative test scenarios that cover these items with data you can supply.
**BLOCKING**: Loop back to Step 2 with the uncovered items. Do NOT finalize until coverage ≥ 75%.
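For reference, the recount can be scripted. The sketch below assumes a hypothetical matrix layout in which each data row starts with an `AC-` or `R-` identifier and an uncovered row carries `-` in its covering-tests column; adapt the patterns to the actual `traceability-matrix.md` format:

```bash
#!/usr/bin/env bash
# Coverage recount sketch. The row and column patterns are assumptions about
# the matrix layout, not a guaranteed format.
set -euo pipefail

matrix="${1:-traceability-matrix.md}"
total=$(grep -cE '^\| *(AC|R)-' "$matrix")
uncovered=$(grep -cE '^\| *(AC|R)-[^|]*\| *- *\|' "$matrix" || true)
covered=$((total - uncovered))

awk -v c="$covered" -v t="$total" \
  'BEGIN { printf "Coverage: %d/%d = %.1f%%\n", c, t, (t ? c / t * 100 : 0) }'
```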
## Phase 3 Completion
When coverage ≥ 75% and all remaining tests have validated data AND quantifiable expected results:
1. Present the final coverage report
2. List all removed tests (if any) with reasons
3. Confirm every remaining test has: input data + quantifiable expected result + comparison method
4. Confirm all artifacts are saved and consistent
After Phase 3 completion, run `phases/hardware-assessment.md` before Phase 4.
@@ -0,0 +1,60 @@
# Phase 4: Test Runner Script Generation
**Skip condition**: If this skill was invoked from the `/plan` skill (planning context, no code exists yet), skip Phase 4 entirely; script creation is instead planned during decompose, which emits a dedicated task for these scripts. Phase 4 runs only when invoked from the existing-code flow (where source code already exists) or standalone.
**Role**: DevOps engineer
**Goal**: Generate executable shell scripts that run the specified tests, so autodev and CI can invoke them consistently.
**Constraints**: Scripts must be idempotent, portable across dev/CI, and exit with non-zero on failure. Respect the Hardware-Dependency Assessment decision recorded in `environment.md`.
**Prerequisite**: `phases/hardware-assessment.md` must have completed and written the "Test Execution" section to `TESTS_OUTPUT_DIR/environment.md`.
## Step 1 — Detect test infrastructure
1. Identify the project's test runner from manifests and config files:
- Python: `pytest` (`pyproject.toml`, `setup.cfg`, `pytest.ini`)
- .NET: `dotnet test` (`*.csproj`, `*.sln`)
- Rust: `cargo test` (`Cargo.toml`)
- Node: `npm test` or `vitest` / `jest` (`package.json`)
2. Check the Hardware-Dependency Assessment result recorded in `environment.md`:
- If **local execution** was chosen → do NOT generate docker-compose test files; scripts run directly on host
- If **Docker execution** was chosen → identify/generate docker-compose files for integration/blackbox tests
- If **both** was chosen → generate both
3. Identify performance/load testing tools from dependencies (`k6`, `locust`, `artillery`, `wrk`, or built-in benchmarks)
4. Read `TESTS_OUTPUT_DIR/environment.md` for infrastructure requirements
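As an illustration, the runner detection in item 1 can be approximated with a few file probes. This is a heuristic sketch; real projects may need additional cases:

```bash
#!/usr/bin/env bash
# Heuristic test-runner detection based on manifest files.
set -euo pipefail

if [ -f pytest.ini ] || grep -qs pytest pyproject.toml setup.cfg; then
  echo "runner: pytest"
elif [ -n "$(find . \( -name '*.sln' -o -name '*.csproj' \) -print -quit)" ]; then
  echo "runner: dotnet test"
elif [ -f Cargo.toml ]; then
  echo "runner: cargo test"
elif [ -f package.json ]; then
  echo "runner: npm test (or vitest/jest, per package.json scripts)"
else
  echo "runner: unknown, ask the user" >&2
  exit 1
fi
```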
## Step 2 — Generate test runner
**Docker is the default.** Only generate a local `scripts/run-tests.sh` if the Hardware-Dependency Assessment determined **local** or **both** execution (i.e., the project requires real hardware like GPU/CoreML/TPU/sensors). For all other projects, use `docker-compose.test.yml` — it provides reproducibility, isolation, and CI parity without a custom shell script.
**If local script is needed** — create `scripts/run-tests.sh` at the project root using `.cursor/skills/test-spec/templates/run-tests-script.md` as structural guidance. The script must:
1. Set `set -euo pipefail` and trap cleanup on EXIT
2. **Install all project and test dependencies** (e.g. `pip install -q -r requirements.txt -r e2e/requirements.txt`, `dotnet restore`, `npm ci`). This prevents collection-time import errors on fresh environments.
3. Optionally accept a `--unit-only` flag to skip blackbox tests
4. Run unit/blackbox tests using the detected test runner (activate virtualenv if present, run test runner directly on host)
5. Print a summary of passed/failed/skipped tests
6. Exit 0 on all pass, exit 1 on any failure
**If Docker** — generate or update `docker-compose.test.yml` that builds the test image, installs all dependencies inside the container, runs the test suite, and exits with the test runner's exit code.
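A minimal sketch of the local script, assuming a Python/pytest project; the dependency files and test directories (`e2e/requirements.txt`, `tests/unit`) are placeholders for whatever the detected project actually uses:

```bash
#!/usr/bin/env bash
# scripts/run-tests.sh sketch for the local-execution case (Python/pytest assumed).
set -euo pipefail
trap 'echo "run-tests.sh finished (exit $?)"' EXIT

UNIT_ONLY=0
if [ "${1:-}" = "--unit-only" ]; then UNIT_ONLY=1; fi

# Install project and test dependencies so test collection never fails on a
# fresh environment.
if [ -d .venv ]; then source .venv/bin/activate; fi
pip install -q -r requirements.txt -r e2e/requirements.txt

# pytest prints the passed/failed/skipped summary and returns non-zero on any
# failure, which set -e propagates as the script's exit code.
if [ "$UNIT_ONLY" -eq 1 ]; then
  pytest tests/unit -q
else
  pytest tests e2e -q
fi
```

In the Docker default, no wrapper script is needed; CI can invoke the suite with something like `docker compose -f docker-compose.test.yml up --build --exit-code-from tests`, assuming the suite runs in a service named `tests`.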
## Step 3 — Generate `scripts/run-performance-tests.sh`
Create `scripts/run-performance-tests.sh` at the project root. The script must:
1. Set `set -euo pipefail` and trap cleanup on EXIT
2. Read thresholds from `_docs/02_document/tests/performance-tests.md` (or accept as CLI args)
3. Start the system under test (local or docker-compose, matching the Hardware-Dependency Assessment decision)
4. Run load/performance scenarios using the detected tool
5. Compare results against threshold values from the test spec
6. Print a pass/fail summary per scenario
7. Exit 0 if all thresholds met, exit 1 otherwise
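A minimal sketch of the performance runner, assuming a docker-compose based system under test and a single p95 latency threshold; the endpoint, sample count, and default threshold are placeholders to be replaced with values from `performance-tests.md`:

```bash
#!/usr/bin/env bash
# scripts/run-performance-tests.sh sketch. Threshold and endpoint are placeholders.
set -euo pipefail

THRESHOLD_MS="${1:-500}"
ENDPOINT="${2:-http://localhost:8080/health}"

cleanup() { docker compose -f docker-compose.test.yml down -v || true; }
trap cleanup EXIT

docker compose -f docker-compose.test.yml up -d --build
sleep 5   # crude readiness wait; a real script should poll a health endpoint

# Sample the endpoint and compute p95 latency in milliseconds.
p95=$(
  for _ in $(seq 1 50); do
    curl -s -o /dev/null -w '%{time_total}\n' "$ENDPOINT"
  done | sort -n | awk '{ a[NR] = $1 } END { print a[int(NR * 0.95)] * 1000 }'
)
[ -n "$p95" ] || { echo "no latency samples collected" >&2; exit 1; }

if awk -v p="$p95" -v t="$THRESHOLD_MS" 'BEGIN { exit !(p < t) }'; then
  echo "PASS: p95 ${p95} ms < ${THRESHOLD_MS} ms"
else
  echo "FAIL: p95 ${p95} ms >= ${THRESHOLD_MS} ms"
  exit 1
fi
```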
## Step 4 — Verify scripts
1. Verify each generated script is syntactically valid (`bash -n <script>`)
2. Mark each generated script as executable (`chmod +x`)
3. Present a summary of what each script does to the user
## Save action
Write the generated runner artifacts to the project root: `scripts/run-performance-tests.sh`, plus `scripts/run-tests.sh` or `docker-compose.test.yml` according to the execution decision recorded in `environment.md`.
@@ -0,0 +1,78 @@
# Hardware-Dependency & Execution Environment Assessment (BLOCKING)
Runs between Phase 3 and Phase 4.
Docker is the **preferred** test execution environment (reproducibility, isolation, CI parity). However, hardware-dependent projects may require local execution to exercise the real code paths. This assessment determines the right execution strategy by scanning both documentation and source code.
## Step 1 — Documentation scan
Check the following files for mentions of hardware-specific requirements:
| File | Look for |
|------|----------|
| `_docs/00_problem/restrictions.md` | Platform requirements, hardware constraints, OS-specific features |
| `_docs/01_solution/solution.md` | Engine selection logic, platform-dependent paths, hardware acceleration |
| `_docs/02_document/architecture.md` | Component diagrams showing hardware layers, engine adapters |
| `_docs/02_document/components/*/description.md` | Per-component hardware mentions |
| `TESTS_OUTPUT_DIR/environment.md` | Existing environment decisions |
## Step 2 — Code scan
Search the project source for indicators of hardware dependence. The project is **hardware-dependent** if ANY of the following are found:
| Category | Code indicators (imports, APIs, config) |
|----------|-----------------------------------------|
| GPU / CUDA | `import pycuda`, `import tensorrt`, `import pynvml`, `torch.cuda`, `nvidia-smi`, `CUDA_VISIBLE_DEVICES`, `runtime: nvidia` |
| Apple Neural Engine / CoreML | `import coremltools`, `CoreML`, `MLModel`, `ComputeUnit`, `MPS`, `sys.platform == "darwin"`, `platform.machine() == "arm64"` |
| OpenCL / Vulkan | `import pyopencl`, `clCreateContext`, vulkan headers |
| TPU / FPGA | `tf.distribute.TPUStrategy`, FPGA bitstream loaders |
| Sensors / Cameras | `cv2.VideoCapture(0)` (device index), serial port access, GPIO, V4L2 |
| OS-specific services | Kernel modules (`modprobe`), host-level drivers, platform-gated code (`sys.platform` branches selecting different backends) |
Also check dependency files (`requirements.txt`, `setup.py`, `pyproject.toml`, `Cargo.toml`, `*.csproj`) for hardware-specific packages.
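A rough scan along these lines can surface candidates before the manual review; the pattern list is a starting point drawn from the table above, not an exhaustive set:

```bash
#!/usr/bin/env bash
# Hardware-indicator scan. Matches print as file:line, ready for the decision box.
set -euo pipefail

patterns='pycuda|tensorrt|pynvml|torch\.cuda|CUDA_VISIBLE_DEVICES|coremltools|MLModel|pyopencl|TPUStrategy|VideoCapture\(0\)|modprobe'

if grep -rInE "$patterns" \
     --include='*.py' --include='*.txt' --include='*.toml' --include='*.csproj' .; then
  echo "verdict: hardware-dependent (review the matches above)"
else
  echo "verdict: no indicators found, Docker default applies"
fi
```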
## Step 3 — Classify the project
Based on Steps 1 and 2, classify the project:
- **Not hardware-dependent**: no indicators found → use Docker (preferred default), skip to Step 5 "Record the decision"
- **Hardware-dependent**: one or more indicators found → proceed to Step 4
## Step 4 — Present execution environment choice
Present the findings and ask the user using Choose format:
```
══════════════════════════════════════
DECISION REQUIRED: Test execution environment
══════════════════════════════════════
Hardware dependencies detected:
- [list each indicator found, with file:line]
══════════════════════════════════════
Running in Docker means these hardware code paths
are NOT exercised — tests run in a Linux container (a VM
on macOS/Windows hosts) where [specific hardware, e.g.
CoreML / CUDA] is unavailable.
The system would fall back to [fallback engine/path].
══════════════════════════════════════
A) Local execution only (tests the real hardware path)
B) Docker execution only (tests the fallback path)
C) Both local and Docker (tests both paths, requires
two test runs — recommended for CI with heterogeneous
runners)
══════════════════════════════════════
Recommendation: [A, B, or C] — [reason]
══════════════════════════════════════
```
## Step 5 — Record the decision
Write or update a **"Test Execution"** section in `TESTS_OUTPUT_DIR/environment.md` with:
1. **Decision**: local / docker / both
2. **Hardware dependencies found**: list with file references
3. **Execution instructions** per chosen mode:
- **Local mode**: prerequisites (OS, SDK, hardware), how to start services, how to run the test runner, environment variables
- **Docker mode**: docker-compose profile/command, required images, how results are collected
- **Both mode**: instructions for each, plus guidance on which CI runner type runs which mode
The decision is consumed by Phase 4 to choose between local `scripts/run-tests.sh` and `docker-compose.test.yml`.
+31
View File
@@ -33,6 +33,37 @@ End-to-end UI design workflow producing production-quality HTML+CSS mockups enti
- **Ask, don't assume**: when design direction is ambiguous, STOP and ask the user
- **One screen at a time**: generate individual screens, not entire applications at once
## Applicability Check
When invoked directly by a user (`/ui-design ...`), proceed — the user explicitly asked.
When invoked by an orchestrator (e.g. the autodev greenfield flow Step 4), first decide whether the project actually has UI work to do. The project IS a UI project if ANY of the following are true:
- `package.json` exists in the workspace root or any subdirectory
- `*.html`, `*.jsx`, or `*.tsx` files exist in the workspace
- `_docs/02_document/components/` contains a component whose `description.md` mentions UI, frontend, page, screen, dashboard, form, or view
- `_docs/02_document/architecture.md` mentions frontend, UI layer, SPA, or client-side rendering
- `_docs/01_solution/solution.md` mentions frontend, web interface, or user-facing UI
If none of the above match → return `outcome: skipped, reason: not-a-ui-project` to the caller and exit without running any phase.
If at least one matches → present using Choose format:
```
══════════════════════════════════════
DECISION REQUIRED: UI project detected — generate mockups?
══════════════════════════════════════
A) Generate UI mockups (recommended before decomposition)
B) Skip — proceed without mockups
══════════════════════════════════════
Recommendation: A — mockups before decomposition
produce better task specs for frontend components
══════════════════════════════════════
```
- If **A** → continue to Context Resolution below and run the workflow.
- If **B** → return `outcome: skipped, reason: user-declined` and exit.
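For orchestrator integrations, the applicability checks above can be approximated with a small probe. This is a heuristic sketch assuming the standard `_docs/` layout; the keyword list is illustrative rather than exhaustive:

```bash
#!/usr/bin/env bash
# Applicability probe mirroring the checks above.
set -euo pipefail

is_ui_project() {
  [ -n "$(find . -not -path '*/node_modules/*' -name package.json -print -quit)" ] && return 0
  [ -n "$(find . \( -name '*.html' -o -name '*.jsx' -o -name '*.tsx' \) -print -quit)" ] && return 0
  grep -riqE 'frontend|ui layer|spa|dashboard|screen|user-facing' \
    _docs/01_solution/ _docs/02_document/ 2>/dev/null && return 0
  return 1
}

if is_ui_project; then
  echo "UI work detected: present the mockup decision (Option A / B) to the user"
else
  echo "outcome: skipped, reason: not-a-ui-project"
fi
```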
## Context Resolution
Determine the operating mode based on invocation before any other logic runs.
+18 -3
View File
@@ -8,9 +8,24 @@ labels:
steps:
- name: build-push
image: docker
environment:
REGISTRY_HOST:
from_secret: registry_host
REGISTRY_USER:
from_secret: registry_user
REGISTRY_TOKEN:
from_secret: registry_token
commands:
- echo "$REGISTRY_TOKEN" | docker login "$REGISTRY_HOST" -u "$REGISTRY_USER" --password-stdin
- export TAG=${CI_COMMIT_BRANCH}-arm
- export BUILD_DATE=$(date -u +%Y-%m-%dT%H:%M:%SZ)
- |
docker build -f src/Dockerfile \
--build-arg CI_COMMIT_SHA=$CI_COMMIT_SHA \
--label org.opencontainers.image.revision=$CI_COMMIT_SHA \
--label org.opencontainers.image.created=$BUILD_DATE \
--label org.opencontainers.image.source=$CI_REPO_URL \
-t $REGISTRY_HOST/azaion/annotations:$TAG src/
- docker push $REGISTRY_HOST/azaion/annotations:$TAG
volumes:
- /var/run/docker.sock:/var/run/docker.sock
+2
View File
@@ -6,6 +6,8 @@ RUN arch=$([ "$TARGETARCH" = "amd64" ] && echo "x64" || echo "$TARGETARCH") && \
dotnet publish -c Release -o /app --os linux --arch $arch
FROM mcr.microsoft.com/dotnet/aspnet:10.0
ARG CI_COMMIT_SHA=unknown
ENV AZAION_REVISION=$CI_COMMIT_SHA
WORKDIR /app
COPY --from=build /app .
EXPOSE 8080