mirror of
https://github.com/azaion/satellite-provider.git
synced 2026-04-22 22:46:38 +00:00
407 lines
24 KiB
Markdown
# Existing Code Workflow

Workflow for projects with an existing codebase. Structurally it has **two phases**:

- **Phase A — One-time baseline setup (Steps 1–8)**: runs exactly once per codebase. Documents the code, produces test specs, makes the code testable, writes and runs the initial test suite, optionally refactors with that safety net.
- **Phase B — Feature cycle (Steps 9–17, loops)**: runs once per new feature. After Step 17 (Retrospective), the flow loops back to Step 9 (New Task) with `state.cycle` incremented.

A first-time run executes Phase A then Phase B; every subsequent invocation re-enters Phase B.

## Step Reference Table

### Phase A — One-time baseline setup

| Step | Name | Sub-Skill | Internal SubSteps |
|------|------|-----------|-------------------|
| 1 | Document | document/SKILL.md | Steps 1–8 |
| 2 | Architecture Baseline Scan | code-review/SKILL.md (baseline mode) | Phase 1 + Phase 7 |
| 3 | Test Spec | test-spec/SKILL.md | Phases 1–4 |
| 4 | Code Testability Revision | refactor/SKILL.md (guided mode) | Phases 0–7 (conditional) |
| 5 | Decompose Tests | decompose/SKILL.md (tests-only) | Step 1t + Step 3 + Step 4 |
| 6 | Implement Tests | implement/SKILL.md | (batch-driven, no fixed sub-steps) |
| 7 | Run Tests | test-run/SKILL.md | Steps 1–4 |
| 8 | Refactor | refactor/SKILL.md | Phases 0–7 (optional) |

### Phase B — Feature cycle (loops back to Step 9 after Step 17)

| Step | Name | Sub-Skill | Internal SubSteps |
|------|------|-----------|-------------------|
| 9 | New Task | new-task/SKILL.md | Steps 1–8 (loop) |
| 10 | Implement | implement/SKILL.md | (batch-driven, no fixed sub-steps) |
| 11 | Run Tests | test-run/SKILL.md | Steps 1–4 |
| 12 | Test-Spec Sync | test-spec/SKILL.md (cycle-update mode) | Phase 2 + Phase 3 (scoped) |
| 13 | Update Docs | document/SKILL.md (task mode) | Task Steps 0–5 |
| 14 | Security Audit | security/SKILL.md | Phases 1–5 (optional) |
| 15 | Performance Test | test-run/SKILL.md (perf mode) | Steps 1–5 (optional) |
| 16 | Deploy | deploy/SKILL.md | Steps 1–7 |
| 17 | Retrospective | retrospective/SKILL.md (cycle-end mode) | Steps 1–4 |

After Step 17, the feature cycle completes and the flow loops back to Step 9 with `state.cycle + 1` — see "Re-Entry After Completion" below.

## Detection Rules
**Resolution**: when a state file exists, `state.step` + `state.status` drive detection and the conditions below are not consulted. When no state file exists (cold start), walk the rules in order — first folder-probe match wins. Steps without a folder probe are state-driven only; they can only be reached by auto-chain from a prior step. Cycle-scoped steps (Step 10 onward) always read `state.cycle` to disambiguate current vs. prior cycle artifacts.
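
The resolution order above can be sketched as follows; the state shape, rule list, and function name (`detect_step`) are illustrative assumptions, not the real schema:

```python
def detect_step(state, rules):
    """Resolve the current step.

    state: parsed state file as a dict, or None on a cold start.
    rules: ordered list of (step_number, folder_probe) pairs; folder_probe
           is a zero-argument predicate, or None for state-driven-only steps.
    """
    if state is not None:
        # A state file exists: state.step / state.status drive detection,
        # and the folder probes are not consulted.
        return state["step"]
    for step, probe in rules:
        # Cold start: walk the rules in order; first folder-probe match wins.
        # State-driven-only steps (probe is None) are never matched here.
        if probe is not None and probe():
            return step
    return 1  # nothing matched: fall back to the start of Phase A


# Illustrative cold start where only Step 3's probe matches:
rules = [(1, lambda: False), (2, None), (3, lambda: True)]
assert detect_step(None, rules) == 3
# With a state file present, the probes are ignored entirely:
assert detect_step({"step": 11, "status": "in_progress"}, rules) == 11
```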

---
|
||
|
||
### Phase A — One-time baseline setup (Steps 1–8)
|
||
|
||
**Step 1 — Document**

Condition: `_docs/` does not exist AND the workspace contains source code files (e.g., `*.py`, `*.cs`, `*.rs`, `*.ts`, `src/`, `Cargo.toml`, `*.csproj`, `package.json`).

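
As a rough sketch, the source-file probe in this condition could look like the following; the helper names are hypothetical, while the marker list comes from the condition above:

```python
from pathlib import Path

# Markers from the Step 1 condition; `src/` is checked as a directory.
SOURCE_MARKERS = ("*.py", "*.cs", "*.rs", "*.ts", "Cargo.toml", "*.csproj", "package.json")

def workspace_has_source_code(workspace):
    ws = Path(workspace)
    if (ws / "src").is_dir():
        return True
    # rglob is lazy; next(..., None) stops at the first matching file.
    return any(next(ws.rglob(pattern), None) is not None for pattern in SOURCE_MARKERS)

def needs_step_1(workspace):
    # Step 1 fires only when source code exists but `_docs/` does not.
    ws = Path(workspace)
    return not (ws / "_docs").exists() and workspace_has_source_code(ws)
```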

Action: An existing codebase without documentation was detected. Read and execute `.cursor/skills/document/SKILL.md`. After the document skill completes, re-detect state (the produced `_docs/` artifacts will place the project at Step 2 or later).

The document skill's Step 2.5 produces `_docs/02_document/module-layout.md`, which is required by every downstream step that assigns file ownership (`/implement` Step 4, `/code-review` Phase 7, `/refactor` discovery). If this file is missing after Step 1 completes (e.g., a pre-existing `_docs/` dir predates the 2.5 addition), re-invoke `/document` in resume mode — it will pick up at Step 2.5.

---

**Step 2 — Architecture Baseline Scan**

Condition: `_docs/02_document/FINAL_report.md` exists AND `_docs/02_document/architecture.md` exists AND `_docs/02_document/architecture_compliance_baseline.md` does not exist.

Action: Invoke `.cursor/skills/code-review/SKILL.md` in **baseline mode** (Phase 1 + Phase 7 only) against the full existing codebase. Phase 7 produces a structural map of the code vs. the just-documented `architecture.md`. Save the output to `_docs/02_document/architecture_compliance_baseline.md`.

Rationale: existing codebases often have pre-existing architecture violations (cycles, cross-component private imports, duplicate logic). Catching them here, before the Testability Revision (Step 4), gives the user a chance to fold structural fixes into the refactor scope.

After completion, if the baseline report contains **High or Critical** Architecture findings:

- Append them to the testability `list-of-changes.md` input in Step 4 (so the testability refactor can address the most disruptive ones along with testability fixes), OR
- Surface them to the user via Choose format to defer to Step 8 (optional Refactor).

If the baseline report is clean (no High/Critical findings), auto-chain directly to Step 3.

---

**Step 3 — Test Spec**

Condition (folder fallback): `_docs/02_document/FINAL_report.md` exists AND the workspace contains source code files AND `_docs/02_document/tests/traceability-matrix.md` does not exist.

State-driven: reached by auto-chain from Step 2.

Action: Read and execute `.cursor/skills/test-spec/SKILL.md`

This step applies when the codebase was documented via the `/document` skill. Test specifications must be produced before refactoring or further development.

---

**Step 4 — Code Testability Revision**

Condition (folder fallback): `_docs/02_document/tests/traceability-matrix.md` exists AND no test tasks exist yet in `_docs/02_tasks/todo/`.

State-driven: reached by auto-chain from Step 3.

**Purpose**: enable tests to run at all. Without this step, hardcoded URLs, file paths, credentials, or global singletons can prevent the test suite from exercising the code against a controlled environment. The test authors need a testable surface before they can write tests that mean anything.

**Scope — MINIMAL, SURGICAL fixes**: this is not a deep refactor. It is the smallest set of changes (sometimes temporary hacks) required to make the code runnable under tests. "Smallest" beats "elegant" here — deeper structural improvements belong in Step 8 (Refactor), not this step.

**Allowed changes** in this phase:

- Replace hardcoded URLs / file paths / credentials / magic numbers with env vars or constructor arguments.
- Extract narrow interfaces for components that need stubbing in tests.
- Add optional constructor parameters for dependency injection; default to the existing hardcoded behavior so callers do not break.
- Wrap global singletons in thin accessors that tests can override (thread-local / context var / setter gate).
- Split a huge function ONLY when necessary to stub one of its collaborators — do not split for clarity alone.

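
The optional-constructor-parameter bullet can be illustrated with a minimal sketch; `ReportStore` and its default path are hypothetical names. The default argument preserves the existing hardcoded behavior, so production callers keep working while tests inject a controlled value:

```python
class ReportStore:
    """Hypothetical example of the optional-DI-parameter pattern."""

    def __init__(self, base_dir=None):
        # Default keeps the old hardcoded behavior so existing callers do not break.
        self.base_dir = base_dir if base_dir is not None else "/var/app/reports"

    def path_for(self, name):
        return f"{self.base_dir}/{name}.md"


# Production call sites stay unchanged:
assert ReportStore().path_for("weekly") == "/var/app/reports/weekly.md"
# Tests inject a temp dir instead of patching globals:
assert ReportStore(base_dir="/tmp/reports").path_for("weekly") == "/tmp/reports/weekly.md"
```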

**NOT allowed** in this phase (defer to Step 8 Refactor):

- Renaming public APIs (breaks consumers without a safety net).
- Moving code between files unless strictly required for isolation.
- Changing algorithms or business logic.
- Restructuring module boundaries or rewriting layers.

**Safety**: Phase 3 (Safety Net) of the refactor skill is skipped here **by design** — no tests exist yet to form the safety net. Compensating controls:

- Every change is bounded by the allowed/not-allowed lists above.
- `list-of-changes.md` must be reviewed by the user BEFORE execution (the refactor skill enforces this gate).
- After execution, the refactor skill produces `RUN_DIR/testability_changes_summary.md` — a plain-language list of every applied change and why. Present this to the user before auto-chaining to Step 5.

Action: Analyze the codebase against the test specs to determine whether the code can be tested as-is.

1. Read `_docs/02_document/tests/traceability-matrix.md` and all test scenario files in `_docs/02_document/tests/`.
2. Read `_docs/02_document/architecture_compliance_baseline.md` (produced in Step 2). If it contains High/Critical Architecture findings that overlap with testability issues, consider including the lightest structural fixes inline; leave the rest for Step 8.
3. For each test scenario, check whether the code under test can be exercised in isolation. Look for:
   - Hardcoded file paths or directory references
   - Hardcoded configuration values (URLs, credentials, magic numbers)
   - Global mutable state that cannot be overridden
   - Tight coupling to external services without abstraction
   - Missing dependency injection or non-configurable parameters
   - Direct file system operations without path configurability
   - Inline construction of heavy dependencies (models, clients)
4. If ALL scenarios are testable as-is:
   - Mark Step 4 as `completed` with outcome "Code is testable — no changes needed"
   - Auto-chain to Step 5 (Decompose Tests)
5. If testability issues are found:
   - Create `_docs/04_refactoring/01-testability-refactoring/`
   - Write `list-of-changes.md` in that directory using the refactor skill template (`.cursor/skills/refactor/templates/list-of-changes.md`), with:
     - **Mode**: `guided`
     - **Source**: `autodev-testability-analysis`
     - One change entry per testability issue found (change ID, file paths, problem, proposed change, risk, dependencies). Each entry must fit the allowed-changes list above; reject entries that drift into full-refactor territory and log them under "Deferred to Step 8 Refactor" instead.
   - Invoke the refactor skill in **guided mode**: read and execute `.cursor/skills/refactor/SKILL.md` with the `list-of-changes.md` as input
   - The refactor skill will create RUN_DIR (`01-testability-refactoring`), create tasks in `_docs/02_tasks/todo/`, delegate to the implement skill, and verify results
   - Phase 3 (Safety Net) is automatically skipped by the refactor skill for testability runs
   - After execution, the refactor skill produces `RUN_DIR/testability_changes_summary.md`. Surface this summary to the user via the Choose format (accept / request follow-up) before auto-chaining.
   - Mark Step 4 as `completed`
   - Auto-chain to Step 5 (Decompose Tests)

---

**Step 5 — Decompose Tests**

Condition (folder fallback): `_docs/02_document/tests/traceability-matrix.md` exists AND the workspace contains source code files AND (`_docs/02_tasks/todo/` does not exist or has no test task files).

State-driven: reached by auto-chain from Step 4 (completed or skipped).

Action: Read and execute `.cursor/skills/decompose/SKILL.md` in **tests-only mode** (pass `_docs/02_document/tests/` as input). The decompose skill will:

1. Run Step 1t (test infrastructure bootstrap)
2. Run Step 3 (blackbox test task decomposition)
3. Run Step 4 (cross-verification against test coverage)

If `_docs/02_tasks/` subfolders already contain some task files (e.g., refactoring tasks from Step 4), the decompose skill's resumability handles it — it appends test tasks alongside the existing tasks.

---

**Step 6 — Implement Tests**

Condition (folder fallback): `_docs/02_tasks/todo/` contains task files AND `_dependencies_table.md` exists AND `_docs/03_implementation/implementation_report_tests.md` does not exist.

State-driven: reached by auto-chain from Step 5.

Action: Read and execute `.cursor/skills/implement/SKILL.md`

The implement skill reads test tasks from `_docs/02_tasks/todo/` and implements them.

If `_docs/03_implementation/` has batch reports, the implement skill detects completed tasks and continues.

---

**Step 7 — Run Tests**

Condition (folder fallback): `_docs/03_implementation/implementation_report_tests.md` exists.

State-driven: reached by auto-chain from Step 6.

Action: Read and execute `.cursor/skills/test-run/SKILL.md`

Verifies that the implemented test suite passes before proceeding to refactoring. The tests form the safety net for all subsequent code changes.

---

**Step 8 — Refactor (optional)**

State-driven: reached by auto-chain from Step 7. (Sanity check: no `_docs/04_refactoring/` run folder should contain a `FINAL_report.md` for a non-testability run when entering this step for the first time.)

Action: Present using Choose format:

```
══════════════════════════════════════
DECISION REQUIRED: Refactor codebase before adding new features?
══════════════════════════════════════
A) Run refactoring (recommended if code quality issues were noted during documentation)
B) Skip — proceed directly to New Task
══════════════════════════════════════
Recommendation: [A or B — base on whether documentation
flagged significant code smells, coupling issues, or
technical debt worth addressing before new development]
══════════════════════════════════════
```

- If the user picks A → Read and execute `.cursor/skills/refactor/SKILL.md` in automatic mode. The refactor skill creates a new run folder in `_docs/04_refactoring/` (e.g., `02-coupling-refactoring`) and runs the full method using the implemented tests as a safety net. After completion, auto-chain to Step 9 (New Task).
- If the user picks B → Mark Step 8 as `skipped` in the state file, then auto-chain to Step 9 (New Task).

---

### Phase B — Feature cycle (Steps 9–17, loops)

**Step 9 — New Task**

State-driven: reached by auto-chain from Step 8 (completed or skipped). This is also the re-entry point after a completed cycle — see "Re-Entry After Completion" below.

Action: Read and execute `.cursor/skills/new-task/SKILL.md`


The new-task skill interactively guides the user through defining new functionality. It loops until the user is done adding tasks. New task files are written to `_docs/02_tasks/todo/`.

---

**Step 10 — Implement**

State-driven: reached by auto-chain from Step 9 in the CURRENT cycle (matching `state.cycle`). Detection is purely state-driven — prior cycles will have left `implementation_report_{feature_slug}_cycle{N-1}.md` artifacts that must not block new cycles.

Action: Read and execute `.cursor/skills/implement/SKILL.md`

The implement skill reads the new tasks from `_docs/02_tasks/todo/` and implements them. Tasks already implemented in Step 6 or in prior cycles are skipped (completed tasks have been moved to `done/`).

**Implementation report naming**: the final report for this cycle must be named `implementation_report_{feature_slug}_cycle{N}.md`, where `{N}` is `state.cycle`. Batch reports are named `batch_{NN}_cycle{M}_report.md` so the cycle counter survives folder scans.

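
A sketch of the two naming rules, assuming `{NN}` means a zero-padded two-digit batch number (the helper and slug names are illustrative):

```python
def implementation_report_name(feature_slug, cycle):
    # implementation_report_{feature_slug}_cycle{N}.md, where N = state.cycle
    return f"implementation_report_{feature_slug}_cycle{cycle}.md"

def batch_report_name(batch, cycle):
    # batch_{NN}_cycle{M}_report.md — the cycle number is embedded in the
    # filename so the counter survives folder scans.
    return f"batch_{batch:02d}_cycle{cycle}_report.md"


assert implementation_report_name("auth-rework", 3) == "implementation_report_auth-rework_cycle3.md"
assert batch_report_name(2, 3) == "batch_02_cycle3_report.md"
```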
If `_docs/03_implementation/` has batch reports from the current cycle, the implement skill detects completed tasks and continues.

---

**Step 11 — Run Tests**

State-driven: reached by auto-chain from Step 10.

Action: Read and execute `.cursor/skills/test-run/SKILL.md`

---

**Step 12 — Test-Spec Sync**

State-driven: reached by auto-chain from Step 11. Requires `_docs/02_document/tests/traceability-matrix.md` to exist — if missing, mark Step 12 `skipped` (see Action below).

Action: Read and execute `.cursor/skills/test-spec/SKILL.md` in **cycle-update mode**. Pass the cycle's completed task specs (files in `_docs/02_tasks/done/` moved during this cycle) and the implementation report `_docs/03_implementation/implementation_report_{feature_slug}_cycle{N}.md` as inputs.

The skill appends new ACs, scenarios, and NFRs to the existing test-spec files without rewriting unaffected sections. If `traceability-matrix.md` is missing (e.g., the cycle was added after a greenfield-only project), mark Step 12 as `skipped` — the next full `/test-spec` run will regenerate it.

After completion, auto-chain to Step 13 (Update Docs).

---

**Step 13 — Update Docs**

State-driven: reached by auto-chain from Step 12 (completed or skipped). Requires `_docs/02_document/` to contain existing documentation — if missing, mark Step 13 `skipped` (see Action below).

Action: Read and execute `.cursor/skills/document/SKILL.md` in **Task mode**. Pass all task spec files from `_docs/02_tasks/done/` that were implemented in the current cycle (i.e., tasks moved to `done/` during Steps 9–10 of this cycle).

The document skill in Task mode:

1. Reads each task spec to identify changed source files
2. Updates affected module docs, component docs, and system-level docs
3. Does NOT redo full discovery, verification, or problem extraction

If `_docs/02_document/` does not contain existing docs (e.g., the documentation step was skipped), mark Step 13 as `skipped`.

After completion, auto-chain to Step 14 (Security Audit).

---

**Step 14 — Security Audit (optional)**

State-driven: reached by auto-chain from Step 13 (completed or skipped).

Action: Apply the **Optional Skill Gate** (`protocols.md` → "Optional Skill Gate") with:

- question: `Run security audit before deploy?`
- option-a-label: `Run security audit (recommended for production deployments)`
- option-b-label: `Skip — proceed directly to deploy`
- recommendation: `A — catches vulnerabilities before production`
- target-skill: `.cursor/skills/security/SKILL.md`
- next-step: Step 15 (Performance Test)

---

**Step 15 — Performance Test (optional)**

State-driven: reached by auto-chain from Step 14 (completed or skipped).

Action: Apply the **Optional Skill Gate** (`protocols.md` → "Optional Skill Gate") with:

- question: `Run performance/load tests before deploy?`
- option-a-label: `Run performance tests (recommended for latency-sensitive or high-load systems)`
- option-b-label: `Skip — proceed directly to deploy`
- recommendation: `A or B — base on whether acceptance criteria include latency, throughput, or load requirements`
- target-skill: `.cursor/skills/test-run/SKILL.md` in **perf mode** (the skill handles runner detection, threshold comparison, and its own A/B/C gate on threshold failures)
- next-step: Step 16 (Deploy)

---

**Step 16 — Deploy**

State-driven: reached by auto-chain from Step 15 (completed or skipped).

Action: Read and execute `.cursor/skills/deploy/SKILL.md`.

After the deploy skill completes successfully, mark Step 16 as `completed` and auto-chain to Step 17 (Retrospective).

---

**Step 17 — Retrospective**

State-driven: reached by auto-chain from Step 16, for the current `state.cycle`.

Action: Read and execute `.cursor/skills/retrospective/SKILL.md` in **cycle-end mode**. Pass cycle context (`cycle: state.cycle`) so the retro report and LESSONS.md entries record which feature cycle they came from.

After the retrospective completes, mark Step 17 as `completed` and enter the "Re-Entry After Completion" evaluation.

---

**Re-Entry After Completion**

State-driven: `state.step == done` OR Step 17 (Retrospective) is completed for `state.cycle`.

Action: The project completed a full cycle. Print the status banner and automatically loop back to New Task — do NOT ask the user for confirmation:

```
══════════════════════════════════════
PROJECT CYCLE COMPLETE
══════════════════════════════════════
The previous cycle finished successfully.
Starting new feature cycle…
══════════════════════════════════════
```

Set `step: 9`, `status: not_started`, and **increment `cycle`** (`cycle: state.cycle + 1`) in the state file, then auto-chain to Step 9 (New Task). Reset `sub_step` to `phase: 0, name: awaiting-invocation, detail: ""` and `retry_count: 0`.
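
A minimal sketch of that state update, assuming the state file parses to a plain dict with the field names used above:

```python
def reset_for_new_cycle(state):
    """Loop the flow back to Step 9 with an incremented cycle counter."""
    state["step"] = 9
    state["status"] = "not_started"
    state["cycle"] = state["cycle"] + 1
    state["sub_step"] = {"phase": 0, "name": "awaiting-invocation", "detail": ""}
    state["retry_count"] = 0
    return state


state = {"step": "done", "status": "completed", "cycle": 2,
         "sub_step": {"phase": 4, "name": "retrospective", "detail": ""},
         "retry_count": 1}
reset_for_new_cycle(state)
assert state["cycle"] == 3 and state["step"] == 9 and state["retry_count"] == 0
```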

Note: the loop (Steps 9 → 17 → 9) ensures every feature cycle includes: New Task → Implement → Run Tests → Test-Spec Sync → Update Docs → Security → Performance → Deploy → Retrospective.

## Auto-Chain Rules

### Phase A — One-time baseline setup

| Completed Step | Next Action |
|---------------|-------------|
| Document (1) | Auto-chain → Architecture Baseline Scan (2) |
| Architecture Baseline Scan (2) | Auto-chain → Test Spec (3). If baseline has High/Critical Architecture findings, surface them as inputs to Step 4 (testability) or defer to Step 8 (refactor). |
| Test Spec (3) | Auto-chain → Code Testability Revision (4) |
| Code Testability Revision (4) | Auto-chain → Decompose Tests (5) |
| Decompose Tests (5) | **Session boundary** — suggest new conversation before Implement Tests |
| Implement Tests (6) | Auto-chain → Run Tests (7) |
| Run Tests (7, all pass) | Auto-chain → Refactor choice (8) |
| Refactor (8, done or skipped) | Auto-chain → New Task (9) — enters Phase B |

### Phase B — Feature cycle (loops)

| Completed Step | Next Action |
|---------------|-------------|
| New Task (9) | **Session boundary** — suggest new conversation before Implement |
| Implement (10) | Auto-chain → Run Tests (11) |
| Run Tests (11, all pass) | Auto-chain → Test-Spec Sync (12) |
| Test-Spec Sync (12, done or skipped) | Auto-chain → Update Docs (13) |
| Update Docs (13) | Auto-chain → Security Audit choice (14) |
| Security Audit (14, done or skipped) | Auto-chain → Performance Test choice (15) |
| Performance Test (15, done or skipped) | Auto-chain → Deploy (16) |
| Deploy (16) | Auto-chain → Retrospective (17) |
| Retrospective (17) | **Cycle complete** — loop back to New Task (9) with incremented cycle counter |

## Status Summary — Step List

Flow name: `existing-code`. Render using the banner template in `protocols.md` → "Banner Template (authoritative)".

Flow-specific slot values:

- `<header-suffix>`: ` — Cycle <N>` when `state.cycle > 1`; otherwise empty.
- `<current-suffix>`: ` (cycle <N>)` when `state.cycle > 1`; otherwise empty.
- `<footer-extras>`: empty.

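
Both cycle-dependent slots reduce to the same `state.cycle > 1` guard; a minimal sketch (the helper names are hypothetical, the slot strings come from the list above):

```python
def header_suffix(cycle):
    # ` — Cycle <N>` only from the second cycle onward.
    return f" — Cycle {cycle}" if cycle > 1 else ""

def current_suffix(cycle):
    # ` (cycle <N>)` under the same guard.
    return f" (cycle {cycle})" if cycle > 1 else ""


assert header_suffix(1) == ""
assert header_suffix(2) == " — Cycle 2"
assert current_suffix(3) == " (cycle 3)"
```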
**Phase A — One-time baseline setup**

| # | Step Name | Extra state tokens (beyond the shared set) |
|---|-----------------------------|--------------------------------------------|
| 1 | Document | — |
| 2 | Architecture Baseline | — |
| 3 | Test Spec | — |
| 4 | Code Testability Revision | — |
| 5 | Decompose Tests | `DONE (N tasks)` |
| 6 | Implement Tests | `IN PROGRESS (batch M)` |
| 7 | Run Tests | `DONE (N passed, M failed)` |
| 8 | Refactor | `IN PROGRESS (phase N)` |

**Phase B — Feature cycle (loops)**

| # | Step Name | Extra state tokens (beyond the shared set) |
|---|-----------------------------|--------------------------------------------|
| 9 | New Task | `DONE (N tasks)` |
| 10 | Implement | `IN PROGRESS (batch M of ~N)` |
| 11 | Run Tests | `DONE (N passed, M failed)` |
| 12 | Test-Spec Sync | — |
| 13 | Update Docs | — |
| 14 | Security Audit | — |
| 15 | Performance Test | — |
| 16 | Deploy | — |
| 17 | Retrospective | — |

All rows accept the shared state tokens (`DONE`, `IN PROGRESS`, `NOT STARTED`, `FAILED (retry N/3)`); rows 2, 4, 8, 12, 13, 14, 15 additionally accept `SKIPPED`.


Row rendering format (renders with a phase separator between Step 8 and Step 9):

```
── Phase A: One-time baseline setup ──
Step 1   Document                [<state token>]
Step 2   Architecture Baseline   [<state token>]
Step 3   Test Spec               [<state token>]
Step 4   Code Testability Rev.   [<state token>]
Step 5   Decompose Tests         [<state token>]
Step 6   Implement Tests         [<state token>]
Step 7   Run Tests               [<state token>]
Step 8   Refactor                [<state token>]
── Phase B: Feature cycle (loops) ──
Step 9   New Task                [<state token>]
Step 10  Implement               [<state token>]
Step 11  Run Tests               [<state token>]
Step 12  Test-Spec Sync          [<state token>]
Step 13  Update Docs             [<state token>]
Step 14  Security Audit          [<state token>]
Step 15  Performance Test        [<state token>]
Step 16  Deploy                  [<state token>]
Step 17  Retrospective           [<state token>]
```