# Existing Code Workflow

Workflow for projects with an existing codebase. It starts with documentation, produces test specs, checks that the code is testable (refactoring if needed), decomposes and implements tests, verifies them, refactors with that safety net in place, then adds new functionality and deploys.
## Step Reference Table
| Step | Name | Sub-Skill | Internal SubSteps |
|---|---|---|---|
| 1 | Document | document/SKILL.md | Steps 1–8 |
| 2 | Test Spec | test-spec/SKILL.md | Phase 1a–1b |
| 3 | Code Testability Revision | refactor/SKILL.md (guided mode) | Phases 0–7 (conditional) |
| 4 | Decompose Tests | decompose/SKILL.md (tests-only) | Step 1t + Step 3 + Step 4 |
| 5 | Implement Tests | implement/SKILL.md | (batch-driven, no fixed sub-steps) |
| 6 | Run Tests | test-run/SKILL.md | Steps 1–4 |
| 7 | Refactor | refactor/SKILL.md | Phases 0–7 (optional) |
| 8 | New Task | new-task/SKILL.md | Steps 1–8 (loop) |
| 9 | Implement | implement/SKILL.md | (batch-driven, no fixed sub-steps) |
| 10 | Run Tests | test-run/SKILL.md | Steps 1–4 |
| 11 | Update Docs | document/SKILL.md (task mode) | Task Steps 0–5 |
| 12 | Security Audit | security/SKILL.md | Phases 1–5 (optional) |
| 13 | Performance Test | (autopilot-managed) | Load/stress tests (optional) |
| 14 | Deploy | deploy/SKILL.md | Steps 1–7 |
After Step 14, the existing-code workflow is complete.
## Detection Rules
Check rules in order — first match wins.
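The "first match wins" scan can be sketched as an ordered walk over (step, predicate) pairs. The function name and the predicate shape are illustrative, not part of this spec:

```python
from typing import Callable, Optional

# One detection rule: the step number it selects plus a zero-argument
# condition that inspects the workspace/state.
Rule = tuple[int, Callable[[], bool]]

def detect_step(rules: list[Rule]) -> Optional[int]:
    """Return the first step whose condition holds, or None if none match."""
    for step, condition in rules:
        if condition():
            return step  # first match wins; later rules are never evaluated
    return None
```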
### Step 1 — Document
Condition: _docs/ does not exist AND the workspace contains source code files (e.g., *.py, *.cs, *.rs, *.ts, src/, Cargo.toml, *.csproj, package.json)
Action: An existing codebase without documentation was detected. Read and execute .cursor/skills/document/SKILL.md. After the document skill completes, re-detect state (the produced _docs/ artifacts will place the project at Step 2 or later).
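A minimal sketch of the Step 1 condition, assuming the glob markers listed above; the function name and the exact probing order are illustrative:

```python
from pathlib import Path

# File patterns that indicate source code is present (mirrors the examples
# in the condition above).
SOURCE_MARKERS = ["*.py", "*.cs", "*.rs", "*.ts", "Cargo.toml", "*.csproj", "package.json"]

def needs_documentation(root: str = ".") -> bool:
    """True when _docs/ is absent but the workspace contains source files."""
    workspace = Path(root)
    if (workspace / "_docs").exists():
        return False  # documentation already started; a later rule applies
    if (workspace / "src").is_dir():
        return True
    return any(any(workspace.rglob(pattern)) for pattern in SOURCE_MARKERS)
```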
### Step 2 — Test Spec
Condition: _docs/02_document/FINAL_report.md exists AND workspace contains source code files (e.g., *.py, *.cs, *.rs, *.ts) AND _docs/02_document/tests/traceability-matrix.md does not exist AND the autopilot state shows step >= 2 (Document already ran)
Action: Read and execute .cursor/skills/test-spec/SKILL.md
This step applies when the codebase was documented via the /document skill. Test specifications must be produced before refactoring or further development.
### Step 3 — Code Testability Revision
Condition: _docs/02_document/tests/traceability-matrix.md exists AND the autopilot state shows Test Spec (Step 2) is completed AND the autopilot state does NOT show Code Testability Revision (Step 3) as completed or skipped
Action: Analyze the codebase against the test specs to determine whether the code can be tested as-is.
- Read `_docs/02_document/tests/traceability-matrix.md` and all test scenario files in `_docs/02_document/tests/`
- For each test scenario, check whether the code under test can be exercised in isolation. Look for:
  - Hardcoded file paths or directory references
  - Hardcoded configuration values (URLs, credentials, magic numbers)
  - Global mutable state that cannot be overridden
  - Tight coupling to external services without abstraction
  - Missing dependency injection or non-configurable parameters
  - Direct file system operations without path configurability
  - Inline construction of heavy dependencies (models, clients)
- If ALL scenarios are testable as-is:
  - Mark Step 3 as `completed` with outcome "Code is testable — no changes needed"
  - Auto-chain to Step 4 (Decompose Tests)
- If testability issues are found:
  - Create `_docs/04_refactoring/01-testability-refactoring/`
  - Write `list-of-changes.md` in that directory using the refactor skill template (`.cursor/skills/refactor/templates/list-of-changes.md`), with:
    - Mode: `guided`
    - Source: `autopilot-testability-analysis`
    - One change entry per testability issue found (change ID, file paths, problem, proposed change, risk, dependencies)
  - Invoke the refactor skill in guided mode: read and execute `.cursor/skills/refactor/SKILL.md` with the `list-of-changes.md` as input
  - The refactor skill will create RUN_DIR (`01-testability-refactoring`), create tasks in `_docs/02_tasks/todo/`, delegate to the implement skill, and verify results
  - Phase 3 (Safety Net) is automatically skipped by the refactor skill for testability runs
  - After refactoring completes, mark Step 3 as `completed`
  - Auto-chain to Step 4 (Decompose Tests)
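Two of the checklist issues (hardcoded paths, inline-constructed dependencies) typically look like this hypothetical before/after; the module and names are illustrative, not taken from any real codebase:

```python
import json
from pathlib import Path

# Before (untestable): the path is hardcoded, so a test cannot exercise
# load_report() in isolation without touching /var/app.
#
#   def load_report():
#       return json.loads(open("/var/app/report.json").read())

# After (testable): the path is injected, with the old value as the default,
# so production behavior is unchanged while a test can pass a temp file.
def load_report(path: Path = Path("/var/app/report.json")) -> dict:
    """Parse the report at an injected path instead of a hardcoded one."""
    return json.loads(path.read_text())
```

A refactor like this is what a Step 3 change entry would propose: same behavior, one new configurable parameter.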
### Step 4 — Decompose Tests
Condition: _docs/02_document/tests/traceability-matrix.md exists AND workspace contains source code files AND the autopilot state shows Step 3 (Code Testability Revision) is completed or skipped AND (_docs/02_tasks/todo/ does not exist or has no test task files)
Action: Read and execute .cursor/skills/decompose/SKILL.md in tests-only mode (pass _docs/02_document/tests/ as input). The decompose skill will:
- Run Step 1t (test infrastructure bootstrap)
- Run Step 3 (blackbox test task decomposition)
- Run Step 4 (cross-verification against test coverage)
If `_docs/02_tasks/` already contains some task files (e.g., refactoring tasks from Step 3), the decompose skill's resumability handles this — it appends test tasks alongside the existing ones.
### Step 5 — Implement Tests
Condition: _docs/02_tasks/todo/ contains task files AND _dependencies_table.md exists AND the autopilot state shows Step 4 (Decompose Tests) is completed AND _docs/03_implementation/implementation_report_tests.md does not exist
Action: Read and execute .cursor/skills/implement/SKILL.md
The implement skill reads test tasks from _docs/02_tasks/todo/ and implements them.
If _docs/03_implementation/ has batch reports, the implement skill detects completed tasks and continues.
### Step 6 — Run Tests
Condition: _docs/03_implementation/implementation_report_tests.md exists AND the autopilot state shows Step 5 (Implement Tests) is completed AND the autopilot state does NOT show Step 6 (Run Tests) as completed
Action: Read and execute .cursor/skills/test-run/SKILL.md
Verifies the implemented test suite passes before proceeding to refactoring. The tests form the safety net for all subsequent code changes.
### Step 7 — Refactor (optional)
Condition: the autopilot state shows Step 6 (Run Tests) is completed AND the autopilot state does NOT show Step 7 (Refactor) as completed or skipped AND no _docs/04_refactoring/ run folder contains a FINAL_report.md for a non-testability run
Action: Present using Choose format:
══════════════════════════════════════
DECISION REQUIRED: Refactor codebase before adding new features?
══════════════════════════════════════
A) Run refactoring (recommended if code quality issues were noted during documentation)
B) Skip — proceed directly to New Task
══════════════════════════════════════
Recommendation: [A or B — base on whether documentation
flagged significant code smells, coupling issues, or
technical debt worth addressing before new development]
══════════════════════════════════════
- If user picks A → Read and execute `.cursor/skills/refactor/SKILL.md` in automatic mode. The refactor skill creates a new run folder in `_docs/04_refactoring/` (e.g., `02-coupling-refactoring`) and runs the full method using the implemented tests as a safety net. After completion, auto-chain to Step 8 (New Task).
- If user picks B → Mark Step 7 as `skipped` in the state file, auto-chain to Step 8 (New Task).
### Step 8 — New Task

Condition: the autopilot state shows Step 7 (Refactor) is completed or skipped AND the autopilot state does NOT show Step 8 (New Task) as completed
Action: Read and execute .cursor/skills/new-task/SKILL.md
The new-task skill interactively guides the user through defining new functionality. It loops until the user is done adding tasks. New task files are written to _docs/02_tasks/todo/.
### Step 9 — Implement
Condition: the autopilot state shows Step 8 (New Task) is completed AND _docs/03_implementation/ does not contain an implementation_report_*.md file other than implementation_report_tests.md (the tests report from Step 5 is excluded from this check)
Action: Read and execute .cursor/skills/implement/SKILL.md
The implement skill reads the new tasks from _docs/02_tasks/todo/ and implements them. Tasks already implemented in Step 5 are skipped (completed tasks have been moved to done/).
If _docs/03_implementation/ has batch reports from this phase, the implement skill detects completed tasks and continues.
### Step 10 — Run Tests

Condition: the autopilot state shows Step 9 (Implement) is completed AND the autopilot state does NOT show Step 10 (Run Tests) as completed
Action: Read and execute .cursor/skills/test-run/SKILL.md
### Step 11 — Update Docs
Condition: the autopilot state shows Step 10 (Run Tests) is completed AND the autopilot state does NOT show Step 11 (Update Docs) as completed AND _docs/02_document/ contains existing documentation (module or component docs)
Action: Read and execute .cursor/skills/document/SKILL.md in Task mode. Pass all task spec files from _docs/02_tasks/done/ that were implemented in the current cycle (i.e., tasks moved to done/ during Steps 8–9 of this cycle).
The document skill in Task mode:
- Reads each task spec to identify changed source files
- Updates affected module docs, component docs, and system-level docs
- Does NOT redo full discovery, verification, or problem extraction
If _docs/02_document/ does not contain existing docs (e.g., documentation step was skipped), mark Step 11 as skipped.
After completion, auto-chain to Step 12 (Security Audit).
### Step 12 — Security Audit (optional)
Condition: the autopilot state shows Step 11 (Update Docs) is completed or skipped AND the autopilot state does NOT show Step 12 (Security Audit) as completed or skipped AND (_docs/04_deploy/ does not exist or is incomplete)
Action: Present using Choose format:
══════════════════════════════════════
DECISION REQUIRED: Run security audit before deploy?
══════════════════════════════════════
A) Run security audit (recommended for production deployments)
B) Skip — proceed directly to deploy
══════════════════════════════════════
Recommendation: A — catches vulnerabilities before production
══════════════════════════════════════
- If user picks A → Read and execute `.cursor/skills/security/SKILL.md`. After completion, auto-chain to Step 13 (Performance Test).
- If user picks B → Mark Step 12 as `skipped` in the state file, auto-chain to Step 13 (Performance Test).
### Step 13 — Performance Test (optional)
Condition: the autopilot state shows Step 12 (Security Audit) is completed or skipped AND the autopilot state does NOT show Step 13 (Performance Test) as completed or skipped AND (_docs/04_deploy/ does not exist or is incomplete)
Action: Present using Choose format:
══════════════════════════════════════
DECISION REQUIRED: Run performance/load tests before deploy?
══════════════════════════════════════
A) Run performance tests (recommended for latency-sensitive or high-load systems)
B) Skip — proceed directly to deploy
══════════════════════════════════════
Recommendation: [A or B — base on whether acceptance criteria
include latency, throughput, or load requirements]
══════════════════════════════════════
- If user picks A → Run performance tests:
  - If `scripts/run-performance-tests.sh` exists (generated by the test-spec skill Phase 4), execute it
  - Otherwise, check if `_docs/02_document/tests/performance-tests.md` exists for test scenarios, detect an appropriate load-testing tool (k6, locust, artillery, wrk, or built-in benchmarks), and execute the performance test scenarios against the running system
  - Present results vs acceptance criteria thresholds
  - If thresholds fail → present Choose format: A) Fix and re-run, B) Proceed anyway, C) Abort
  - After completion, auto-chain to Step 14 (Deploy)
- If user picks B → Mark Step 13 as `skipped` in the state file, auto-chain to Step 14 (Deploy).
### Step 14 — Deploy
Condition: the autopilot state shows Step 10 (Run Tests) is completed AND (Step 11 is completed or skipped) AND (Step 12 is completed or skipped) AND (Step 13 is completed or skipped) AND (_docs/04_deploy/ does not exist or is incomplete)
Action: Read and execute .cursor/skills/deploy/SKILL.md
After deployment completes, the existing-code workflow is done.
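The compound Step 14 condition can be sketched as a predicate over a step-to-status map. This is a partial sketch: the `_docs/04_deploy/` freshness check is omitted, and the status strings are assumptions mirroring the state values used throughout this workflow:

```python
# Deploy fires only when Run Tests (10) is completed and every optional
# step (11-13) has been resolved one way or the other.
def deploy_ready(state: dict[int, str]) -> bool:
    tests_pass = state.get(10) == "completed"
    optional_resolved = all(
        state.get(step) in ("completed", "skipped") for step in (11, 12, 13)
    )
    return tests_pass and optional_resolved
```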
## Re-Entry After Completion
Condition: the autopilot state shows step: done OR all steps through 14 (Deploy) are completed
Action: The project completed a full cycle. Print the status banner and automatically loop back to New Task — do NOT ask the user for confirmation:
══════════════════════════════════════
PROJECT CYCLE COMPLETE
══════════════════════════════════════
The previous cycle finished successfully.
Starting new feature cycle…
══════════════════════════════════════
Set step: 8, status: not_started in the state file, then auto-chain to Step 8 (New Task).
Note: the loop (Steps 8 → 14 → 8) ensures every feature cycle includes: New Task → Implement → Run Tests → Update Docs → Security → Performance → Deploy.
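The re-entry reset can be sketched as follows. The state file's location and its simple "key: value" line format are assumptions; only the target values (`step: 8`, `status: not_started`) come from the rule above:

```python
from pathlib import Path

def reset_for_new_cycle(state_file: Path) -> None:
    """Rewind the autopilot to Step 8 (New Task) for the next feature cycle."""
    state = {}
    for line in state_file.read_text().splitlines():
        key, sep, value = line.partition(":")
        if sep:  # skip lines that are not key: value pairs
            state[key.strip()] = value.strip()
    state["step"] = "8"
    state["status"] = "not_started"
    state_file.write_text("\n".join(f"{k}: {v}" for k, v in state.items()) + "\n")
```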
## Auto-Chain Rules
| Completed Step | Next Action |
|---|---|
| Document (1) | Auto-chain → Test Spec (2) |
| Test Spec (2) | Auto-chain → Code Testability Revision (3) |
| Code Testability Revision (3) | Auto-chain → Decompose Tests (4) |
| Decompose Tests (4) | Session boundary — suggest new conversation before Implement Tests |
| Implement Tests (5) | Auto-chain → Run Tests (6) |
| Run Tests (6, all pass) | Auto-chain → Refactor choice (7) |
| Refactor (7, done or skipped) | Auto-chain → New Task (8) |
| New Task (8) | Session boundary — suggest new conversation before Implement |
| Implement (9) | Auto-chain → Run Tests (10) |
| Run Tests (10, all pass) | Auto-chain → Update Docs (11) |
| Update Docs (11) | Auto-chain → Security Audit choice (12) |
| Security Audit (12, done or skipped) | Auto-chain → Performance Test choice (13) |
| Performance Test (13, done or skipped) | Auto-chain → Deploy (14) |
| Deploy (14) | Workflow complete — existing-code flow done |
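The table above reduces to a lookup: "auto" chains immediately, "session-boundary" suggests a new conversation first, and the Deploy entry encodes the Re-Entry loop back to New Task. The tag names are illustrative:

```python
# completed step -> (next step, chain behavior)
AUTO_CHAIN: dict[int, tuple[int, str]] = {
    1: (2, "auto"),
    2: (3, "auto"),
    3: (4, "auto"),
    4: (5, "session-boundary"),
    5: (6, "auto"),
    6: (7, "auto"),              # on all tests passing
    7: (8, "auto"),              # done or skipped
    8: (9, "session-boundary"),
    9: (10, "auto"),
    10: (11, "auto"),            # on all tests passing
    11: (12, "auto"),
    12: (13, "auto"),            # done or skipped
    13: (14, "auto"),            # done or skipped
    14: (8, "cycle-restart"),    # re-entry loops back to New Task
}

def next_step(completed: int) -> int:
    return AUTO_CHAIN[completed][0]
```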
## Status Summary Template
═══════════════════════════════════════════════════
AUTOPILOT STATUS (existing-code)
═══════════════════════════════════════════════════
Step 1 Document [DONE / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 2 Test Spec [DONE / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 3 Code Testability Rev. [DONE / SKIPPED / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 4 Decompose Tests [DONE (N tasks) / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 5 Implement Tests [DONE / IN PROGRESS (batch M) / NOT STARTED / FAILED (retry N/3)]
Step 6 Run Tests [DONE (N passed, M failed) / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 7 Refactor [DONE / SKIPPED / IN PROGRESS (phase N) / NOT STARTED / FAILED (retry N/3)]
Step 8 New Task [DONE (N tasks) / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 9 Implement [DONE / IN PROGRESS (batch M of ~N) / NOT STARTED / FAILED (retry N/3)]
Step 10 Run Tests [DONE (N passed, M failed) / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 11 Update Docs [DONE / SKIPPED / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 12 Security Audit [DONE / SKIPPED / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 13 Performance Test [DONE / SKIPPED / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 14 Deploy [DONE / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
═══════════════════════════════════════════════════
Current: Step N — Name
SubStep: M — [sub-skill internal step name]
Retry: [N/3 if retrying, omit if 0]
Action: [what will happen next]
═══════════════════════════════════════════════════