
Existing Code Workflow

Workflow for projects with an existing codebase. Starts with documentation, produces test specs, checks code testability (refactoring if needed), decomposes and implements tests, verifies them, refactors with that safety net, then adds new functionality and deploys.

Step Reference Table

| Step | Name | Sub-Skill | Internal Sub-Steps |
|------|------|-----------|--------------------|
| 1 | Document | document/SKILL.md | Steps 1–8 |
| 2 | Test Spec | test-spec/SKILL.md | Phases 1a–1b |
| 3 | Code Testability Revision | refactor/SKILL.md (guided mode) | Phases 0–7 (conditional) |
| 4 | Decompose Tests | decompose/SKILL.md (tests-only) | Step 1t + Step 3 + Step 4 |
| 5 | Implement Tests | implement/SKILL.md | batch-driven, no fixed sub-steps |
| 6 | Run Tests | test-run/SKILL.md | Steps 1–4 |
| 7 | Refactor | refactor/SKILL.md | Phases 0–7 (optional) |
| 8 | New Task | new-task/SKILL.md | Steps 1–8 (loop) |
| 9 | Implement | implement/SKILL.md | batch-driven, no fixed sub-steps |
| 10 | Run Tests | test-run/SKILL.md | Steps 1–4 |
| 11 | Security Audit | security/SKILL.md | Phases 1–5 (optional) |
| 12 | Performance Test | (autopilot-managed) | Load/stress tests (optional) |
| 13 | Deploy | deploy/SKILL.md | Steps 1–7 |

After Step 13, the existing-code workflow is complete.

Detection Rules

Check rules in order — first match wins.
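As an illustration only, the first-match-wins loop can be sketched in Python (the rule list is truncated to the first two conditions, and `detect_step` is a hypothetical name, not part of any skill's API):

```python
from pathlib import Path

# Illustrative source-file patterns taken from the Step 1 condition.
SOURCE_GLOBS = ["*.py", "*.cs", "*.rs", "*.ts", "Cargo.toml", "*.csproj", "package.json"]

def has_source_code(root: Path) -> bool:
    """True if the workspace contains any recognized source files."""
    return any(next(root.rglob(g), None) is not None for g in SOURCE_GLOBS)

def detect_step(root: Path) -> str:
    """Evaluate detection rules in order; the first match wins."""
    rules = [
        # (condition, action) pairs, checked top to bottom.
        (lambda: not (root / "_docs").exists() and has_source_code(root),
         "Step 1 — Document"),
        (lambda: (root / "_docs/02_document/FINAL_report.md").exists()
                 and not (root / "_docs/02_document/tests/traceability-matrix.md").exists(),
         "Step 2 — Test Spec"),
        # ...remaining rules follow the same shape...
    ]
    for condition, action in rules:
        if condition():
            return action
    return "no rule matched"
```

In the real flow the later rules also consult the autopilot state file; the sketch checks only file presence.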


Step 1 — Document

Condition: _docs/ does not exist AND the workspace contains source code files (e.g., *.py, *.cs, *.rs, *.ts, src/, Cargo.toml, *.csproj, package.json)

Action: An existing codebase without documentation was detected. Read and execute .cursor/skills/document/SKILL.md. After the document skill completes, re-detect state (the produced _docs/ artifacts will place the project at Step 2 or later).


Step 2 — Test Spec

Condition: _docs/02_document/FINAL_report.md exists AND workspace contains source code files (e.g., *.py, *.cs, *.rs, *.ts) AND _docs/02_document/tests/traceability-matrix.md does not exist AND the autopilot state shows step >= 2 (Document already ran)

Action: Read and execute .cursor/skills/test-spec/SKILL.md

This step applies when the codebase was documented via the /document skill. Test specifications must be produced before refactoring or further development.


Step 3 — Code Testability Revision

Condition: _docs/02_document/tests/traceability-matrix.md exists AND the autopilot state shows Test Spec (Step 2) is completed AND the autopilot state does NOT show Code Testability Revision (Step 3) as completed or skipped

Action: Analyze the codebase against the test specs to determine whether the code can be tested as-is.

  1. Read _docs/02_document/tests/traceability-matrix.md and all test scenario files in _docs/02_document/tests/
  2. For each test scenario, check whether the code under test can be exercised in isolation. Look for:
    • Hardcoded file paths or directory references
    • Hardcoded configuration values (URLs, credentials, magic numbers)
    • Global mutable state that cannot be overridden
    • Tight coupling to external services without abstraction
    • Missing dependency injection or non-configurable parameters
    • Direct file system operations without path configurability
    • Inline construction of heavy dependencies (models, clients)
  3. If ALL scenarios are testable as-is:
    • Mark Step 3 as completed with outcome "Code is testable — no changes needed"
    • Auto-chain to Step 4 (Decompose Tests)
  4. If testability issues are found:
    • Create _docs/04_refactoring/01-testability-refactoring/
    • Write list-of-changes.md in that directory using the refactor skill template (.cursor/skills/refactor/templates/list-of-changes.md), with:
      • Mode: guided
      • Source: autopilot-testability-analysis
      • One change entry per testability issue found (change ID, file paths, problem, proposed change, risk, dependencies)
    • Invoke the refactor skill in guided mode: read and execute .cursor/skills/refactor/SKILL.md with the list-of-changes.md as input
    • The refactor skill will create RUN_DIR (01-testability-refactoring), create tasks in _docs/02_tasks/todo/, delegate to implement skill, and verify results
    • Phase 3 (Safety Net) is automatically skipped by the refactor skill for testability runs
    • After refactoring completes, mark Step 3 as completed
    • Auto-chain to Step 4 (Decompose Tests)
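As a hedged illustration of what a testability change entry might produce (the `ReportBuilder` class and its paths are invented for this example, not taken from any real codebase), a hardcoded output path becomes an injected dependency with the old value as the default:

```python
from pathlib import Path

# Before: untestable — the output location is hardcoded, so a test
# cannot redirect writes to a temporary directory.
class ReportBuilderBefore:
    def write(self, text: str) -> Path:
        out = Path("/var/app/reports/report.txt")  # hardcoded path
        out.write_text(text)
        return out

# After: testable — the output directory is injected, with the old
# value as the default so production behavior is unchanged.
class ReportBuilder:
    def __init__(self, out_dir: Path = Path("/var/app/reports")):
        self.out_dir = out_dir

    def write(self, text: str) -> Path:
        out = self.out_dir / "report.txt"
        out.write_text(text)
        return out
```

A test can now pass a temporary directory, which is exactly the property the per-scenario checks above are probing for.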

Step 4 — Decompose Tests

Condition: _docs/02_document/tests/traceability-matrix.md exists AND workspace contains source code files AND the autopilot state shows Step 3 (Code Testability Revision) is completed or skipped AND (_docs/02_tasks/todo/ does not exist or has no test task files)

Action: Read and execute .cursor/skills/decompose/SKILL.md in tests-only mode (pass _docs/02_document/tests/ as input). The decompose skill will:

  1. Run Step 1t (test infrastructure bootstrap)
  2. Run Step 3 (blackbox test task decomposition)
  3. Run Step 4 (cross-verification against test coverage)

If _docs/02_tasks/ subfolders already contain task files (e.g., refactoring tasks from Step 3), the decompose skill's resumability handles this — it appends test tasks alongside the existing ones.


Step 5 — Implement Tests

Condition: _docs/02_tasks/todo/ contains task files AND _dependencies_table.md exists AND the autopilot state shows Step 4 (Decompose Tests) is completed AND _docs/03_implementation/implementation_report_tests.md does not exist

Action: Read and execute .cursor/skills/implement/SKILL.md

The implement skill reads test tasks from _docs/02_tasks/todo/ and implements them.

If _docs/03_implementation/ has batch reports, the implement skill detects completed tasks and continues.


Step 6 — Run Tests

Condition: _docs/03_implementation/implementation_report_tests.md exists AND the autopilot state shows Step 5 (Implement Tests) is completed AND the autopilot state does NOT show Step 6 (Run Tests) as completed

Action: Read and execute .cursor/skills/test-run/SKILL.md

Verifies the implemented test suite passes before proceeding to refactoring. The tests form the safety net for all subsequent code changes.


Step 7 — Refactor (optional)

Condition: the autopilot state shows Step 6 (Run Tests) is completed AND the autopilot state does NOT show Step 7 (Refactor) as completed or skipped AND no _docs/04_refactoring/ run folder contains a FINAL_report.md for a non-testability run

Action: Present using Choose format:

══════════════════════════════════════
 DECISION REQUIRED: Refactor codebase before adding new features?
══════════════════════════════════════
 A) Run refactoring (recommended if code quality issues were noted during documentation)
 B) Skip — proceed directly to New Task
══════════════════════════════════════
 Recommendation: [A or B — base on whether documentation
 flagged significant code smells, coupling issues, or
 technical debt worth addressing before new development]
══════════════════════════════════════
  • If user picks A → Read and execute .cursor/skills/refactor/SKILL.md in automatic mode. The refactor skill creates a new run folder in _docs/04_refactoring/ (e.g., 02-coupling-refactoring), runs the full method using the implemented tests as a safety net. After completion, auto-chain to Step 8 (New Task).
  • If user picks B → Mark Step 7 as skipped in the state file, auto-chain to Step 8 (New Task).

Step 8 — New Task

Condition: the autopilot state shows Step 7 (Refactor) is completed or skipped AND the autopilot state does NOT show Step 8 (New Task) as completed

Action: Read and execute .cursor/skills/new-task/SKILL.md

The new-task skill interactively guides the user through defining new functionality. It loops until the user is done adding tasks. New task files are written to _docs/02_tasks/todo/.


Step 9 — Implement

Condition: the autopilot state shows Step 8 (New Task) is completed AND _docs/03_implementation/ does not contain an implementation_report_*.md file other than implementation_report_tests.md (the tests report from Step 5 is excluded from this check)

Action: Read and execute .cursor/skills/implement/SKILL.md

The implement skill reads the new tasks from _docs/02_tasks/todo/ and implements them. Tasks already implemented in Step 5 are skipped (completed tasks have been moved to done/).

If _docs/03_implementation/ has batch reports from this phase, the implement skill detects completed tasks and continues.
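A minimal sketch of the skip-by-move mechanism described above (`implement_pending` is a hypothetical helper, not the implement skill's real API; the actual skill does the implementation work before moving each task):

```python
from pathlib import Path

def implement_pending(tasks_root: Path) -> list[str]:
    """Process remaining tasks: anything still in todo/ is pending;
    tasks completed earlier were moved to done/ and never reappear."""
    todo, done = tasks_root / "todo", tasks_root / "done"
    done.mkdir(parents=True, exist_ok=True)
    implemented = []
    for task in sorted(todo.glob("*.md")):
        # ...the implement skill's actual work would happen here...
        task.rename(done / task.name)  # the move marks the task complete
        implemented.append(task.name)
    return implemented
```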


Step 10 — Run Tests

Condition: the autopilot state shows Step 9 (Implement) is completed AND the autopilot state does NOT show Step 10 (Run Tests) as completed

Action: Read and execute .cursor/skills/test-run/SKILL.md


Step 11 — Security Audit (optional)

Condition: the autopilot state shows Step 10 (Run Tests) is completed AND the autopilot state does NOT show Step 11 (Security Audit) as completed or skipped AND (_docs/04_deploy/ does not exist or is incomplete)

Action: Present using Choose format:

══════════════════════════════════════
 DECISION REQUIRED: Run security audit before deploy?
══════════════════════════════════════
 A) Run security audit (recommended for production deployments)
 B) Skip — proceed directly to deploy
══════════════════════════════════════
 Recommendation: A — catches vulnerabilities before production
══════════════════════════════════════
  • If user picks A → Read and execute .cursor/skills/security/SKILL.md. After completion, auto-chain to Step 12 (Performance Test).
  • If user picks B → Mark Step 11 as skipped in the state file, auto-chain to Step 12 (Performance Test).

Step 12 — Performance Test (optional)

Condition: the autopilot state shows Step 11 (Security Audit) is completed or skipped AND the autopilot state does NOT show Step 12 (Performance Test) as completed or skipped AND (_docs/04_deploy/ does not exist or is incomplete)

Action: Present using Choose format:

══════════════════════════════════════
 DECISION REQUIRED: Run performance/load tests before deploy?
══════════════════════════════════════
 A) Run performance tests (recommended for latency-sensitive or high-load systems)
 B) Skip — proceed directly to deploy
══════════════════════════════════════
 Recommendation: [A or B — base on whether acceptance criteria
 include latency, throughput, or load requirements]
══════════════════════════════════════
  • If user picks A → Run performance tests:
    1. If scripts/run-performance-tests.sh exists (generated by the test-spec skill Phase 4), execute it
    2. Otherwise, check if _docs/02_document/tests/performance-tests.md exists for test scenarios, detect appropriate load testing tool (k6, locust, artillery, wrk, or built-in benchmarks), and execute performance test scenarios against the running system
    3. Present results vs acceptance criteria thresholds
    4. If thresholds fail → present Choose format: A) Fix and re-run, B) Proceed anyway, C) Abort
    5. After completion, auto-chain to Step 13 (Deploy)
  • If user picks B → Mark Step 12 as skipped in the state file, auto-chain to Step 13 (Deploy).

Step 13 — Deploy

Condition: the autopilot state shows Step 10 (Run Tests) is completed AND (Step 11 is completed or skipped) AND (Step 12 is completed or skipped) AND (_docs/04_deploy/ does not exist or is incomplete)

Action: Read and execute .cursor/skills/deploy/SKILL.md

After deployment completes, the existing-code workflow is done.


Re-Entry After Completion

Condition: the autopilot state shows step: done OR all steps through 13 (Deploy) are completed

Action: The project completed a full cycle. Present status and loop back to New Task:

══════════════════════════════════════
 PROJECT CYCLE COMPLETE
══════════════════════════════════════
 The previous cycle finished successfully.
 You can now add new functionality.
══════════════════════════════════════
 A) Add new features (start New Task)
 B) Done — no more changes needed
══════════════════════════════════════
  • If user picks A → set step: 8, status: not_started in the state file, then auto-chain to Step 8 (New Task).
  • If user picks B → report final project status and exit.
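Assuming the state file is a flat `key: value` text file (its actual format is owned by state.md, not this flow), the option-A reset might look like:

```python
from pathlib import Path

def reset_for_new_cycle(state_file: Path) -> None:
    """Rewrite only the step and status keys, preserving all other lines."""
    replacements = {"step": "8", "status": "not_started"}
    lines = []
    for line in state_file.read_text().splitlines():
        key = line.split(":", 1)[0].strip()
        if key in replacements:
            lines.append(f"{key}: {replacements[key]}")
        else:
            lines.append(line)  # untouched keys pass through unchanged
    state_file.write_text("\n".join(lines) + "\n")
```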

Auto-Chain Rules

| Completed Step | Next Action |
|----------------|-------------|
| Document (1) | Auto-chain → Test Spec (2) |
| Test Spec (2) | Auto-chain → Code Testability Revision (3) |
| Code Testability Revision (3) | Auto-chain → Decompose Tests (4) |
| Decompose Tests (4) | Session boundary — suggest new conversation before Implement Tests |
| Implement Tests (5) | Auto-chain → Run Tests (6) |
| Run Tests (6, all pass) | Auto-chain → Refactor choice (7) |
| Refactor (7, done or skipped) | Auto-chain → New Task (8) |
| New Task (8) | Session boundary — suggest new conversation before Implement |
| Implement (9) | Auto-chain → Run Tests (10) |
| Run Tests (10, all pass) | Auto-chain → Security Audit choice (11) |
| Security Audit (11, done or skipped) | Auto-chain → Performance Test choice (12) |
| Performance Test (12, done or skipped) | Auto-chain → Deploy (13) |
| Deploy (13) | Workflow complete — existing-code flow done |

Status Summary Template

═══════════════════════════════════════════════════
 AUTOPILOT STATUS (existing-code)
═══════════════════════════════════════════════════
 Step 1   Document                 [DONE / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
 Step 2   Test Spec                [DONE / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
 Step 3   Code Testability Rev.    [DONE / SKIPPED / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
 Step 4   Decompose Tests          [DONE (N tasks) / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
 Step 5   Implement Tests          [DONE / IN PROGRESS (batch M) / NOT STARTED / FAILED (retry N/3)]
 Step 6   Run Tests                [DONE (N passed, M failed) / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
 Step 7   Refactor                 [DONE / SKIPPED / IN PROGRESS (phase N) / NOT STARTED / FAILED (retry N/3)]
 Step 8   New Task                 [DONE (N tasks) / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
 Step 9   Implement                [DONE / IN PROGRESS (batch M of ~N) / NOT STARTED / FAILED (retry N/3)]
 Step 10  Run Tests                [DONE (N passed, M failed) / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
 Step 11  Security Audit           [DONE / SKIPPED / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
 Step 12  Performance Test         [DONE / SKIPPED / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
 Step 13  Deploy                   [DONE / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
═══════════════════════════════════════════════════
 Current: Step N — Name
 SubStep: M — [sub-skill internal step name]
 Retry:   [N/3 if retrying, omit if 0]
 Action:  [what will happen next]
═══════════════════════════════════════════════════