Sync .cursor from detections

commit 09117f90b5 (parent fddf1b8706)
Author: Oleksandr Bezdieniezhnykh
Date: 2026-04-12 05:05:11 +03:00
108 changed files with 11844 additions and 15 deletions
# Existing Code Workflow
Workflow for projects with an existing codebase. Starts with documentation, produces test specs, checks code testability (refactoring if needed), decomposes and implements tests, verifies them, refactors with that safety net, then adds new functionality and deploys.
## Step Reference Table
| Step | Name | Sub-Skill | Internal SubSteps |
|------|------|-----------|-------------------|
| 1 | Document | document/SKILL.md | Steps 1–8 |
| 2 | Test Spec | test-spec/SKILL.md | Phases 1a–1b |
| 3 | Code Testability Revision | refactor/SKILL.md (guided mode) | Phases 0–7 (conditional) |
| 4 | Decompose Tests | decompose/SKILL.md (tests-only) | Step 1t + Step 3 + Step 4 |
| 5 | Implement Tests | implement/SKILL.md | (batch-driven, no fixed sub-steps) |
| 6 | Run Tests | test-run/SKILL.md | Steps 1–4 |
| 7 | Refactor | refactor/SKILL.md | Phases 0–7 (optional) |
| 8 | New Task | new-task/SKILL.md | Steps 1–8 (loop) |
| 9 | Implement | implement/SKILL.md | (batch-driven, no fixed sub-steps) |
| 10 | Run Tests | test-run/SKILL.md | Steps 1–4 |
| 11 | Update Docs | document/SKILL.md (task mode) | Task Steps 0–5 |
| 12 | Security Audit | security/SKILL.md | Phases 1–5 (optional) |
| 13 | Performance Test | (autopilot-managed) | Load/stress tests (optional) |
| 14 | Deploy | deploy/SKILL.md | Steps 1–7 |
After Step 14, the existing-code workflow is complete.
## Detection Rules
Check rules in order — first match wins.
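A minimal sketch of this first-match-wins evaluation, assuming rules are modeled as (condition, action) pairs; the `Rule` and `detect_step` names are illustrative and do not appear in any skill file:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Rule:
    name: str                      # e.g. "Step 1 — Document"
    condition: Callable[[], bool]  # detection predicate over workspace state
    action: str                    # skill file to read and execute

def detect_step(rules: list[Rule]) -> Optional[Rule]:
    """Check rules in order; the first rule whose condition holds wins."""
    for rule in rules:
        if rule.condition():
            return rule
    return None  # no rule matched: workflow complete or state inconsistent

# Two toy rules; only the first matching rule is returned.
rules = [
    Rule("Step 1 — Document", lambda: False, ".cursor/skills/document/SKILL.md"),
    Rule("Step 2 — Test Spec", lambda: True, ".cursor/skills/test-spec/SKILL.md"),
]
```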
---
**Step 1 — Document**
Condition: `_docs/` does not exist AND the workspace contains source code files (e.g., `*.py`, `*.cs`, `*.rs`, `*.ts`, `src/`, `Cargo.toml`, `*.csproj`, `package.json`)
Action: An existing codebase without documentation was detected. Read and execute `.cursor/skills/document/SKILL.md`. After the document skill completes, re-detect state (the produced `_docs/` artifacts will place the project at Step 2 or later).
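The source-file check in the condition above can be sketched as a pure predicate over workspace paths; the `has_source_code` and `needs_documentation` helpers are illustrative, not part of the skill:

```python
from fnmatch import fnmatch

# Patterns taken from the Step 1 condition above.
SOURCE_PATTERNS = ["*.py", "*.cs", "*.rs", "*.ts", "Cargo.toml", "*.csproj", "package.json"]

def has_source_code(paths: list[str]) -> bool:
    """True if any workspace path matches a source pattern or lives under src/."""
    for p in paths:
        if p == "src" or p.startswith("src/"):
            return True
        name = p.rsplit("/", 1)[-1]
        if any(fnmatch(name, pat) for pat in SOURCE_PATTERNS):
            return True
    return False

def needs_documentation(paths: list[str]) -> bool:
    """Step 1 fires when _docs/ is absent but source code is present."""
    has_docs = any(p == "_docs" or p.startswith("_docs/") for p in paths)
    return not has_docs and has_source_code(paths)
```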
---
**Step 2 — Test Spec**
Condition: `_docs/02_document/FINAL_report.md` exists AND workspace contains source code files (e.g., `*.py`, `*.cs`, `*.rs`, `*.ts`) AND `_docs/02_document/tests/traceability-matrix.md` does not exist AND the autopilot state shows `step >= 2` (Document already ran)
Action: Read and execute `.cursor/skills/test-spec/SKILL.md`
This step applies when the codebase was documented via the `/document` skill. Test specifications must be produced before refactoring or further development.
---
**Step 3 — Code Testability Revision**
Condition: `_docs/02_document/tests/traceability-matrix.md` exists AND the autopilot state shows Test Spec (Step 2) is completed AND the autopilot state does NOT show Code Testability Revision (Step 3) as completed or skipped
Action: Analyze the codebase against the test specs to determine whether the code can be tested as-is.
1. Read `_docs/02_document/tests/traceability-matrix.md` and all test scenario files in `_docs/02_document/tests/`
2. For each test scenario, check whether the code under test can be exercised in isolation. Look for:
- Hardcoded file paths or directory references
- Hardcoded configuration values (URLs, credentials, magic numbers)
- Global mutable state that cannot be overridden
- Tight coupling to external services without abstraction
- Missing dependency injection or non-configurable parameters
- Direct file system operations without path configurability
- Inline construction of heavy dependencies (models, clients)
3. If ALL scenarios are testable as-is:
- Mark Step 3 as `completed` with outcome "Code is testable — no changes needed"
- Auto-chain to Step 4 (Decompose Tests)
4. If testability issues are found:
- Create `_docs/04_refactoring/01-testability-refactoring/`
- Write `list-of-changes.md` in that directory using the refactor skill template (`.cursor/skills/refactor/templates/list-of-changes.md`), with:
- **Mode**: `guided`
- **Source**: `autopilot-testability-analysis`
- One change entry per testability issue found (change ID, file paths, problem, proposed change, risk, dependencies)
- Invoke the refactor skill in **guided mode**: read and execute `.cursor/skills/refactor/SKILL.md` with the `list-of-changes.md` as input
- The refactor skill will create RUN_DIR (`01-testability-refactoring`), create tasks in `_docs/02_tasks/todo/`, delegate to implement skill, and verify results
- Phase 3 (Safety Net) is automatically skipped by the refactor skill for testability runs
- After refactoring completes, mark Step 3 as `completed`
- Auto-chain to Step 4 (Decompose Tests)
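The kind of change a testability entry typically proposes can be illustrated with a small before/after sketch; the service class and its dependencies are hypothetical:

```python
# Before: hardcoded file path. The method cannot be exercised in
# isolation because the test would need the real production file.
class ReportService:
    def load(self):
        with open("/var/data/report.json") as f:
            return f.read()

# After: the path and the file opener are injected with sensible
# defaults, so a test can pass a temp path or an in-memory stub.
class TestableReportService:
    def __init__(self, data_path="/var/data/report.json", opener=open):
        self.data_path = data_path  # configurable instead of hardcoded
        self.opener = opener        # injection point for a stub in tests

    def load(self):
        with self.opener(self.data_path) as f:
            return f.read()

# A test can now substitute an in-memory file object:
import io
svc = TestableReportService(data_path="unused.json",
                            opener=lambda path: io.StringIO("stubbed contents"))
```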
---
**Step 4 — Decompose Tests**
Condition: `_docs/02_document/tests/traceability-matrix.md` exists AND workspace contains source code files AND the autopilot state shows Step 3 (Code Testability Revision) is completed or skipped AND (`_docs/02_tasks/todo/` does not exist or has no test task files)
Action: Read and execute `.cursor/skills/decompose/SKILL.md` in **tests-only mode** (pass `_docs/02_document/tests/` as input). The decompose skill will:
1. Run Step 1t (test infrastructure bootstrap)
2. Run Step 3 (blackbox test task decomposition)
3. Run Step 4 (cross-verification against test coverage)
If the `_docs/02_tasks/` subfolders already contain task files (e.g., refactoring tasks from Step 3), the decompose skill's resumability handles this: it appends test tasks alongside the existing ones.
---
**Step 5 — Implement Tests**
Condition: `_docs/02_tasks/todo/` contains task files AND `_dependencies_table.md` exists AND the autopilot state shows Step 4 (Decompose Tests) is completed AND `_docs/03_implementation/implementation_report_tests.md` does not exist
Action: Read and execute `.cursor/skills/implement/SKILL.md`
The implement skill reads test tasks from `_docs/02_tasks/todo/` and implements them.
If `_docs/03_implementation/` has batch reports, the implement skill detects completed tasks and continues.
---
**Step 6 — Run Tests**
Condition: `_docs/03_implementation/implementation_report_tests.md` exists AND the autopilot state shows Step 5 (Implement Tests) is completed AND the autopilot state does NOT show Step 6 (Run Tests) as completed
Action: Read and execute `.cursor/skills/test-run/SKILL.md`
Verifies the implemented test suite passes before proceeding to refactoring. The tests form the safety net for all subsequent code changes.
---
**Step 7 — Refactor (optional)**
Condition: the autopilot state shows Step 6 (Run Tests) is completed AND the autopilot state does NOT show Step 7 (Refactor) as completed or skipped AND no `_docs/04_refactoring/` run folder contains a `FINAL_report.md` for a non-testability run
Action: Present using Choose format:
```
══════════════════════════════════════
DECISION REQUIRED: Refactor codebase before adding new features?
══════════════════════════════════════
A) Run refactoring (recommended if code quality issues were noted during documentation)
B) Skip — proceed directly to New Task
══════════════════════════════════════
Recommendation: [A or B — base on whether documentation
flagged significant code smells, coupling issues, or
technical debt worth addressing before new development]
══════════════════════════════════════
```
- If user picks A → Read and execute `.cursor/skills/refactor/SKILL.md` in automatic mode. The refactor skill creates a new run folder in `_docs/04_refactoring/` (e.g., `02-coupling-refactoring`) and runs the full method using the implemented tests as a safety net. After completion, auto-chain to Step 8 (New Task).
- If user picks B → Mark Step 7 as `skipped` in the state file, auto-chain to Step 8 (New Task).
---
**Step 8 — New Task**
Condition: the autopilot state shows Step 7 (Refactor) is completed or skipped AND the autopilot state does NOT show Step 8 (New Task) as completed
Action: Read and execute `.cursor/skills/new-task/SKILL.md`
The new-task skill interactively guides the user through defining new functionality. It loops until the user is done adding tasks. New task files are written to `_docs/02_tasks/todo/`.
---
**Step 9 — Implement**
Condition: the autopilot state shows Step 8 (New Task) is completed AND `_docs/03_implementation/` does not contain an `implementation_report_*.md` file other than `implementation_report_tests.md` (the tests report from Step 5 is excluded from this check)
Action: Read and execute `.cursor/skills/implement/SKILL.md`
The implement skill reads the new tasks from `_docs/02_tasks/todo/` and implements them. Tasks already implemented in Step 5 are skipped (completed tasks have been moved to `done/`).
If `_docs/03_implementation/` has batch reports from this phase, the implement skill detects completed tasks and continues.
---
**Step 10 — Run Tests**
Condition: the autopilot state shows Step 9 (Implement) is completed AND the autopilot state does NOT show Step 10 (Run Tests) as completed
Action: Read and execute `.cursor/skills/test-run/SKILL.md`
---
**Step 11 — Update Docs**
Condition: the autopilot state shows Step 10 (Run Tests) is completed AND the autopilot state does NOT show Step 11 (Update Docs) as completed AND `_docs/02_document/` contains existing documentation (module or component docs)
Action: Read and execute `.cursor/skills/document/SKILL.md` in **Task mode**. Pass all task spec files from `_docs/02_tasks/done/` that were implemented in the current cycle (i.e., tasks moved to `done/` during Steps 8–9 of this cycle).
The document skill in Task mode:
1. Reads each task spec to identify changed source files
2. Updates affected module docs, component docs, and system-level docs
3. Does NOT redo full discovery, verification, or problem extraction
If `_docs/02_document/` does not contain existing docs (e.g., documentation step was skipped), mark Step 11 as `skipped`.
After completion, auto-chain to Step 12 (Security Audit).
---
**Step 12 — Security Audit (optional)**
Condition: the autopilot state shows Step 11 (Update Docs) is completed or skipped AND the autopilot state does NOT show Step 12 (Security Audit) as completed or skipped AND (`_docs/04_deploy/` does not exist or is incomplete)
Action: Present using Choose format:
```
══════════════════════════════════════
DECISION REQUIRED: Run security audit before deploy?
══════════════════════════════════════
A) Run security audit (recommended for production deployments)
B) Skip — proceed directly to deploy
══════════════════════════════════════
Recommendation: A — catches vulnerabilities before production
══════════════════════════════════════
```
- If user picks A → Read and execute `.cursor/skills/security/SKILL.md`. After completion, auto-chain to Step 13 (Performance Test).
- If user picks B → Mark Step 12 as `skipped` in the state file, auto-chain to Step 13 (Performance Test).
---
**Step 13 — Performance Test (optional)**
Condition: the autopilot state shows Step 12 (Security Audit) is completed or skipped AND the autopilot state does NOT show Step 13 (Performance Test) as completed or skipped AND (`_docs/04_deploy/` does not exist or is incomplete)
Action: Present using Choose format:
```
══════════════════════════════════════
DECISION REQUIRED: Run performance/load tests before deploy?
══════════════════════════════════════
A) Run performance tests (recommended for latency-sensitive or high-load systems)
B) Skip — proceed directly to deploy
══════════════════════════════════════
Recommendation: [A or B — base on whether acceptance criteria
include latency, throughput, or load requirements]
══════════════════════════════════════
```
- If user picks A → Run performance tests:
1. If `scripts/run-performance-tests.sh` exists (generated by the test-spec skill Phase 4), execute it
  2. Otherwise, check whether `_docs/02_document/tests/performance-tests.md` provides test scenarios, detect an appropriate load-testing tool (k6, locust, artillery, wrk, or built-in benchmarks), and execute the performance test scenarios against the running system
3. Present results vs acceptance criteria thresholds
4. If thresholds fail → present Choose format: A) Fix and re-run, B) Proceed anyway, C) Abort
5. After completion, auto-chain to Step 14 (Deploy)
- If user picks B → Mark Step 13 as `skipped` in the state file, auto-chain to Step 14 (Deploy).
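The tool detection in sub-step 2 can be sketched as a PATH lookup in preference order; the ordering shown is an assumption, not something the workflow mandates:

```python
import shutil

# Candidate load-testing tools, in assumed preference order.
LOAD_TOOLS = ["k6", "locust", "artillery", "wrk"]

def detect_load_tool(which=shutil.which) -> str:
    """Return the first installed load-testing tool found on PATH,
    falling back to built-in benchmarks when none is available."""
    for tool in LOAD_TOOLS:
        if which(tool):
            return tool
    return "builtin-benchmarks"
```

The `which` parameter exists purely so the lookup can be stubbed; by default it consults the real PATH via `shutil.which`.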
---
**Step 14 — Deploy**
Condition: the autopilot state shows Step 10 (Run Tests) is completed AND (Step 11 is completed or skipped) AND (Step 12 is completed or skipped) AND (Step 13 is completed or skipped) AND (`_docs/04_deploy/` does not exist or is incomplete)
Action: Read and execute `.cursor/skills/deploy/SKILL.md`
After deployment completes, the existing-code workflow is done.
---
**Re-Entry After Completion**
Condition: the autopilot state shows `step: done` OR all steps through 14 (Deploy) are completed
Action: The project completed a full cycle. Print the status banner and automatically loop back to New Task — do NOT ask the user for confirmation:
```
══════════════════════════════════════
PROJECT CYCLE COMPLETE
══════════════════════════════════════
The previous cycle finished successfully.
Starting new feature cycle…
══════════════════════════════════════
```
Set `step: 8`, `status: not_started` in the state file, then auto-chain to Step 8 (New Task).
Note: the loop (Steps 8 → 14 → 8) ensures every feature cycle includes: New Task → Implement → Run Tests → Update Docs → Security → Performance → Deploy.
## Auto-Chain Rules
| Completed Step | Next Action |
|---------------|-------------|
| Document (1) | Auto-chain → Test Spec (2) |
| Test Spec (2) | Auto-chain → Code Testability Revision (3) |
| Code Testability Revision (3) | Auto-chain → Decompose Tests (4) |
| Decompose Tests (4) | **Session boundary** — suggest new conversation before Implement Tests |
| Implement Tests (5) | Auto-chain → Run Tests (6) |
| Run Tests (6, all pass) | Auto-chain → Refactor choice (7) |
| Refactor (7, done or skipped) | Auto-chain → New Task (8) |
| New Task (8) | **Session boundary** — suggest new conversation before Implement |
| Implement (9) | Auto-chain → Run Tests (10) |
| Run Tests (10, all pass) | Auto-chain → Update Docs (11) |
| Update Docs (11) | Auto-chain → Security Audit choice (12) |
| Security Audit (12, done or skipped) | Auto-chain → Performance Test choice (13) |
| Performance Test (13, done or skipped) | Auto-chain → Deploy (14) |
| Deploy (14) | **Workflow complete** — existing-code flow done |
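The auto-chain table maps naturally onto a transition dictionary; a minimal sketch, with session boundaries modeled as a boolean flag (names illustrative):

```python
# (next_step, session_boundary) after each completed step of the
# existing-code flow; None marks the end of the workflow.
AUTO_CHAIN = {
    1: (2, False),    # Document → Test Spec
    2: (3, False),    # Test Spec → Code Testability Revision
    3: (4, False),    # Testability → Decompose Tests
    4: (5, True),     # Decompose Tests → Implement Tests (new session)
    5: (6, False),    # Implement Tests → Run Tests
    6: (7, False),    # Run Tests → Refactor choice
    7: (8, False),    # Refactor → New Task
    8: (9, True),     # New Task → Implement (new session)
    9: (10, False),   # Implement → Run Tests
    10: (11, False),  # Run Tests → Update Docs
    11: (12, False),  # Update Docs → Security Audit choice
    12: (13, False),  # Security → Performance Test choice
    13: (14, False),  # Performance → Deploy
    14: (None, False),  # Deploy → workflow complete
}

def next_step(completed: int):
    """Return (next_step, needs_new_session) after a completed step."""
    return AUTO_CHAIN[completed]
```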
## Status Summary Template
```
═══════════════════════════════════════════════════
AUTOPILOT STATUS (existing-code)
═══════════════════════════════════════════════════
Step 1 Document [DONE / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 2 Test Spec [DONE / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 3 Code Testability Rev. [DONE / SKIPPED / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 4 Decompose Tests [DONE (N tasks) / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 5 Implement Tests [DONE / IN PROGRESS (batch M) / NOT STARTED / FAILED (retry N/3)]
Step 6 Run Tests [DONE (N passed, M failed) / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 7 Refactor [DONE / SKIPPED / IN PROGRESS (phase N) / NOT STARTED / FAILED (retry N/3)]
Step 8 New Task [DONE (N tasks) / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 9 Implement [DONE / IN PROGRESS (batch M of ~N) / NOT STARTED / FAILED (retry N/3)]
Step 10 Run Tests [DONE (N passed, M failed) / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 11 Update Docs [DONE / SKIPPED / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 12 Security Audit [DONE / SKIPPED / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 13 Performance Test [DONE / SKIPPED / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 14 Deploy [DONE / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
═══════════════════════════════════════════════════
Current: Step N — Name
SubStep: M — [sub-skill internal step name]
Retry: [N/3 if retrying, omit if 0]
Action: [what will happen next]
═══════════════════════════════════════════════════
```
# Greenfield Workflow
Workflow for new projects built from scratch. Flows linearly: Problem → Research → Plan → UI Design (if applicable) → Decompose → Implement → Run Tests → Security Audit (optional) → Performance Test (optional) → Deploy.
## Step Reference Table
| Step | Name | Sub-Skill | Internal SubSteps |
|------|------|-----------|-------------------|
| 1 | Problem | problem/SKILL.md | Phases 1–4 |
| 2 | Research | research/SKILL.md | Mode A: Phases 1–4 · Mode B: Steps 0–8 |
| 3 | Plan | plan/SKILL.md | Steps 1–6 + Final |
| 4 | UI Design | ui-design/SKILL.md | Phases 0–8 (conditional — UI projects only) |
| 5 | Decompose | decompose/SKILL.md | Steps 1–4 |
| 6 | Implement | implement/SKILL.md | (batch-driven, no fixed sub-steps) |
| 7 | Run Tests | test-run/SKILL.md | Steps 1–4 |
| 8 | Security Audit | security/SKILL.md | Phases 1–5 (optional) |
| 9 | Performance Test | (autopilot-managed) | Load/stress tests (optional) |
| 10 | Deploy | deploy/SKILL.md | Steps 1–7 |
## Detection Rules
Check rules in order — first match wins.
---
**Step 1 — Problem Gathering**
Condition: `_docs/00_problem/` does not exist, OR any of these are missing/empty:
- `problem.md`
- `restrictions.md`
- `acceptance_criteria.md`
- `input_data/` (must contain at least one file)
Action: Read and execute `.cursor/skills/problem/SKILL.md`
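The completeness check behind this condition can be sketched as a pure predicate over a path-to-size mapping; the helper name and representation are illustrative:

```python
# Required non-empty artifacts under _docs/00_problem/ (from the condition above).
REQUIRED = ["problem.md", "restrictions.md", "acceptance_criteria.md"]

def problem_step_needed(files: dict[str, int]) -> bool:
    """Step 1 fires when any required artifact is missing or empty, or when
    input_data/ holds no file. `files` maps paths relative to
    _docs/00_problem/ to their sizes in bytes."""
    if any(files.get(name, 0) == 0 for name in REQUIRED):
        return True  # missing or empty required artifact
    return not any(p.startswith("input_data/") for p in files)
```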
---
**Step 2 — Research (Initial)**
Condition: `_docs/00_problem/` is complete AND `_docs/01_solution/` has no `solution_draft*.md` files
Action: Read and execute `.cursor/skills/research/SKILL.md` (will auto-detect Mode A)
---
**Research Decision** (inline gate between Step 2 and Step 3)
Condition: `_docs/01_solution/` contains `solution_draft*.md` files AND `_docs/01_solution/solution.md` does not exist AND `_docs/02_document/architecture.md` does not exist
Action: Present the current research state to the user:
- How many solution drafts exist
- Whether tech_stack.md and security_analysis.md exist
- One-line summary from the latest draft
Then present using the **Choose format**:
```
══════════════════════════════════════
DECISION REQUIRED: Research complete — next action?
══════════════════════════════════════
A) Run another research round (Mode B assessment)
B) Proceed to planning with current draft
══════════════════════════════════════
Recommendation: [A or B] — [reason based on draft quality]
══════════════════════════════════════
```
- If user picks A → Read and execute `.cursor/skills/research/SKILL.md` (will auto-detect Mode B)
- If user picks B → auto-chain to Step 3 (Plan)
---
**Step 3 — Plan**
Condition: `_docs/01_solution/` has `solution_draft*.md` files AND `_docs/02_document/architecture.md` does not exist
Action:
1. The plan skill's Prereq 2 will rename the latest draft to `solution.md` — this is handled by the plan skill itself
2. Read and execute `.cursor/skills/plan/SKILL.md`
If `_docs/02_document/` exists but is incomplete (has some artifacts but no `FINAL_report.md`), the plan skill's built-in resumability handles it.
---
**Step 4 — UI Design (conditional)**
Condition: `_docs/02_document/architecture.md` exists AND the autopilot state does NOT show Step 4 (UI Design) as completed or skipped AND the project is a UI project
**UI Project Detection** — the project is a UI project if ANY of the following are true:
- `package.json` exists in the workspace root or any subdirectory
- `*.html`, `*.jsx`, `*.tsx` files exist in the workspace
- `_docs/02_document/components/` contains a component whose `description.md` mentions UI, frontend, page, screen, dashboard, form, or view
- `_docs/02_document/architecture.md` mentions frontend, UI layer, SPA, or client-side rendering
- `_docs/01_solution/solution.md` mentions frontend, web interface, or user-facing UI
If the project is NOT a UI project → mark Step 4 as `skipped` in the state file and auto-chain to Step 5.
If the project IS a UI project → present using Choose format:
```
══════════════════════════════════════
DECISION REQUIRED: UI project detected — generate mockups?
══════════════════════════════════════
A) Generate UI mockups before decomposition (recommended)
B) Skip — proceed directly to decompose
══════════════════════════════════════
Recommendation: A — mockups before decomposition
produce better task specs for frontend components
══════════════════════════════════════
```
- If user picks A → Read and execute `.cursor/skills/ui-design/SKILL.md`. After completion, auto-chain to Step 5 (Decompose).
- If user picks B → Mark Step 4 as `skipped` in the state file, auto-chain to Step 5 (Decompose).
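The detection heuristics above can be sketched as a pure predicate over workspace paths and concatenated doc text; the keyword list is copied from the rules, and the helper name is illustrative:

```python
import re
from fnmatch import fnmatch

UI_FILE_PATTERNS = ["package.json", "*.html", "*.jsx", "*.tsx"]
UI_KEYWORDS = ["ui", "frontend", "page", "screen", "dashboard", "form", "view",
               "spa", "client-side rendering", "web interface", "user-facing ui"]

def is_ui_project(paths: list[str], doc_text: str = "") -> bool:
    """True if any workspace file matches a UI pattern, or the architecture,
    component, or solution docs mention a UI keyword (whole-word match)."""
    for p in paths:
        name = p.rsplit("/", 1)[-1]
        if any(fnmatch(name, pat) for pat in UI_FILE_PATTERNS):
            return True
    text = doc_text.lower()
    return any(re.search(rf"\b{re.escape(kw)}\b", text) for kw in UI_KEYWORDS)
```

Whole-word matching avoids false positives such as "ui" matching inside "build".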
---
**Step 5 — Decompose**
Condition: `_docs/02_document/` contains `architecture.md` AND `_docs/02_document/components/` has at least one component AND `_docs/02_tasks/todo/` does not exist or has no task files
Action: Read and execute `.cursor/skills/decompose/SKILL.md`
If the `_docs/02_tasks/` subfolders already contain some task files, the decompose skill's resumability handles this.
---
**Step 6 — Implement**
Condition: `_docs/02_tasks/todo/` contains task files AND `_dependencies_table.md` exists AND `_docs/03_implementation/` does not contain any `implementation_report_*.md` file
Action: Read and execute `.cursor/skills/implement/SKILL.md`
If `_docs/03_implementation/` has batch reports, the implement skill detects completed tasks and continues. The FINAL report filename is context-dependent — see implement skill documentation for naming convention.
---
**Step 7 — Run Tests**
Condition: `_docs/03_implementation/` contains an `implementation_report_*.md` file AND the autopilot state does NOT show Step 7 (Run Tests) as completed AND (`_docs/04_deploy/` does not exist or is incomplete)
Action: Read and execute `.cursor/skills/test-run/SKILL.md`
---
**Step 8 — Security Audit (optional)**
Condition: the autopilot state shows Step 7 (Run Tests) is completed AND the autopilot state does NOT show Step 8 (Security Audit) as completed or skipped AND (`_docs/04_deploy/` does not exist or is incomplete)
Action: Present using Choose format:
```
══════════════════════════════════════
DECISION REQUIRED: Run security audit before deploy?
══════════════════════════════════════
A) Run security audit (recommended for production deployments)
B) Skip — proceed directly to deploy
══════════════════════════════════════
Recommendation: A — catches vulnerabilities before production
══════════════════════════════════════
```
- If user picks A → Read and execute `.cursor/skills/security/SKILL.md`. After completion, auto-chain to Step 9 (Performance Test).
- If user picks B → Mark Step 8 as `skipped` in the state file, auto-chain to Step 9 (Performance Test).
---
**Step 9 — Performance Test (optional)**
Condition: the autopilot state shows Step 8 (Security Audit) is completed or skipped AND the autopilot state does NOT show Step 9 (Performance Test) as completed or skipped AND (`_docs/04_deploy/` does not exist or is incomplete)
Action: Present using Choose format:
```
══════════════════════════════════════
DECISION REQUIRED: Run performance/load tests before deploy?
══════════════════════════════════════
A) Run performance tests (recommended for latency-sensitive or high-load systems)
B) Skip — proceed directly to deploy
══════════════════════════════════════
Recommendation: [A or B — base on whether acceptance criteria
include latency, throughput, or load requirements]
══════════════════════════════════════
```
- If user picks A → Run performance tests:
1. If `scripts/run-performance-tests.sh` exists (generated by the test-spec skill Phase 4), execute it
  2. Otherwise, check whether `_docs/02_document/tests/performance-tests.md` provides test scenarios, detect an appropriate load-testing tool (k6, locust, artillery, wrk, or built-in benchmarks), and execute the performance test scenarios against the running system
3. Present results vs acceptance criteria thresholds
4. If thresholds fail → present Choose format: A) Fix and re-run, B) Proceed anyway, C) Abort
5. After completion, auto-chain to Step 10 (Deploy)
- If user picks B → Mark Step 9 as `skipped` in the state file, auto-chain to Step 10 (Deploy).
---
**Step 10 — Deploy**
Condition: the autopilot state shows Step 7 (Run Tests) is completed AND (Step 8 is completed or skipped) AND (Step 9 is completed or skipped) AND (`_docs/04_deploy/` does not exist or is incomplete)
Action: Read and execute `.cursor/skills/deploy/SKILL.md`
---
**Done**
Condition: `_docs/04_deploy/` contains all expected artifacts (containerization.md, ci_cd_pipeline.md, environment_strategy.md, observability.md, deployment_procedures.md, deploy_scripts.md)
Action: Report project completion with summary. If the user runs autopilot again after greenfield completion, Flow Resolution rule 3 routes to the existing-code flow (re-entry after completion) so they can add new features.
## Auto-Chain Rules
| Completed Step | Next Action |
|---------------|-------------|
| Problem (1) | Auto-chain → Research (2) |
| Research (2) | Auto-chain → Research Decision (ask user: another round or proceed?) |
| Research Decision → proceed | Auto-chain → Plan (3) |
| Plan (3) | Auto-chain → UI Design detection (4) |
| UI Design (4, done or skipped) | Auto-chain → Decompose (5) |
| Decompose (5) | **Session boundary** — suggest new conversation before Implement |
| Implement (6) | Auto-chain → Run Tests (7) |
| Run Tests (7, all pass) | Auto-chain → Security Audit choice (8) |
| Security Audit (8, done or skipped) | Auto-chain → Performance Test choice (9) |
| Performance Test (9, done or skipped) | Auto-chain → Deploy (10) |
| Deploy (10) | Report completion |
## Status Summary Template
```
═══════════════════════════════════════════════════
AUTOPILOT STATUS (greenfield)
═══════════════════════════════════════════════════
Step 1 Problem [DONE / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 2 Research [DONE (N drafts) / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 3 Plan [DONE / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 4 UI Design [DONE / SKIPPED / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 5 Decompose [DONE (N tasks) / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 6 Implement [DONE / IN PROGRESS (batch M of ~N) / NOT STARTED / FAILED (retry N/3)]
Step 7 Run Tests [DONE (N passed, M failed) / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 8 Security Audit [DONE / SKIPPED / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 9 Performance Test [DONE / SKIPPED / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 10 Deploy [DONE / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
═══════════════════════════════════════════════════
Current: Step N — Name
SubStep: M — [sub-skill internal step name]
Retry: [N/3 if retrying, omit if 0]
Action: [what will happen next]
═══════════════════════════════════════════════════
```