---
name: implement
description: >
  Orchestrate task implementation with dependency-aware batching, parallel
  subagents, and integrated code review. Reads flat task files and
  _dependencies_table.md from TASKS_DIR, computes execution batches via
  topological sort, launches up to 4 implementer subagents in parallel, runs
  the code-review skill after each batch, and loops until done. Use after
  /decompose has produced task files. Trigger phrases: "implement", "start
  implementation", "implement tasks", "run implementers", "execute tasks".
category: build
disable-model-invocation: true
---
# Implementation Orchestrator
Orchestrate the implementation of all tasks produced by the /decompose skill. This skill is a pure orchestrator — it does NOT write implementation code itself. It reads task specs, computes execution order, delegates to implementer subagents, validates results via the /code-review skill, and escalates issues.
The implementer agent is the specialist that writes all the code — it receives a task spec, analyzes the codebase, implements the feature, writes tests, and verifies acceptance criteria.
## Core Principles
- Orchestrate, don't implement: this skill delegates all coding to `implementer` subagents
- Dependency-aware batching: tasks run only when all their dependencies are satisfied
- Max 4 parallel agents: never launch more than 4 implementer subagents simultaneously
- File isolation: no two parallel agents may write to the same file
- Integrated review: the `/code-review` skill runs automatically after each batch
- Auto-start: batches launch immediately — no user confirmation before a batch
- Gate on failure: user confirmation is required only when code review returns FAIL
- Commit per batch: after each batch is confirmed, commit. Ask the user whether to push to remote unless the user previously opted into auto-push for this session.
## Context Resolution
- TASKS_DIR: `_docs/02_tasks/`
- Task files: all `*.md` files in `TASKS_DIR/todo/` (excluding files starting with `_`)
- Dependency table: `TASKS_DIR/_dependencies_table.md`
## Task Lifecycle Folders

    TASKS_DIR/
    ├── _dependencies_table.md
    ├── todo/     ← tasks ready for implementation (this skill reads from here)
    ├── backlog/  ← parked tasks (not scheduled yet, ignored by this skill)
    └── done/     ← completed tasks (moved here after implementation)
## Prerequisite Checks (BLOCKING)
- `TASKS_DIR/todo/` exists and contains at least one task file — STOP if missing
- `_dependencies_table.md` exists — STOP if missing
- At least one task is not yet completed — STOP if all done
- Working tree is clean — run `git status --porcelain`; the output must be empty (a sketch of this check follows below).
  - If dirty, STOP and present the list of changed files to the user via the Choose format:
    - A) Commit or stash stray changes manually, then re-invoke `/implement`
    - B) Agent commits stray changes as a single `chore: WIP pre-implement` commit and proceeds
    - C) Abort
  - Rationale: implementer subagents edit files in parallel and commit per batch; unrelated uncommitted changes would otherwise be silently folded into batch commits.
  - This check is repeated at the start of each batch iteration (see step 6 and the step 14 loop).
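
A minimal sketch of the clean-tree check, assuming the orchestrator can shell out to git (the helper names are illustrative):

```python
import subprocess

def working_tree_is_clean() -> bool:
    # `git status --porcelain` prints one line per changed file;
    # empty output means the working tree is clean.
    result = subprocess.run(
        ["git", "status", "--porcelain"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip() == ""

def dirty_files() -> list[str]:
    # Each porcelain line is "XY <path>"; strip the two status
    # columns plus the space to surface just the paths to the user.
    result = subprocess.run(
        ["git", "status", "--porcelain"],
        capture_output=True, text=True, check=True,
    )
    return [line[3:] for line in result.stdout.splitlines()]
```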
## Algorithm

### 1. Parse
- Read all task `*.md` files from `TASKS_DIR/todo/` (excluding files starting with `_`)
- Read `_dependencies_table.md` — parse into a dependency graph (DAG)
- Validate: no circular dependencies, all referenced dependencies exist (see the sketch after this list)
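
A sketch of the validation step, assuming the dependency table has already been parsed into a dict mapping each task ID to the IDs it depends on (the table format itself is defined by /decompose):

```python
def validate_dag(deps: dict[str, list[str]]) -> None:
    # Every referenced dependency must itself be a known task.
    for task, requires in deps.items():
        for dep in requires:
            if dep not in deps:
                raise ValueError(f"{task} depends on unknown task {dep}")

    # Cycle detection via DFS with three colors:
    # 0 = unvisited, 1 = on the current path, 2 = fully explored.
    color: dict[str, int] = {t: 0 for t in deps}

    def visit(task: str) -> None:
        color[task] = 1
        for dep in deps[task]:
            if color[dep] == 1:
                raise ValueError(f"circular dependency through {task} -> {dep}")
            if color[dep] == 0:
                visit(dep)
        color[task] = 2

    for task in deps:
        if color[task] == 0:
            visit(task)
```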
### 2. Detect Progress
- Scan the codebase to determine which tasks are already completed
- Match implemented code against task acceptance criteria
- Mark completed tasks as done in the DAG
- Report progress to the user: "X of Y tasks completed"
### 3. Compute Next Batch
- Topologically sort the remaining tasks
- Select tasks whose dependencies are ALL satisfied (completed)
- If a ready task depends on any task currently being worked on in this batch, it must wait for the next batch
- Cap the batch at 4 parallel agents (see the sketch after this list)
- If the batch would exceed 20 total complexity points, suggest splitting and let the user decide
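
A minimal sketch of batch selection under these rules; `deps` and `done` follow the same shapes as the parse sketch above. The 20-point complexity check is a user-facing suggestion, so it is omitted here:

```python
MAX_PARALLEL = 4

def next_batch(deps: dict[str, list[str]], done: set[str]) -> list[str]:
    # A task is ready when it is not done and every dependency is done.
    # A task whose dependency is merely *ready* (i.e. selected for this
    # batch) is excluded automatically, because that dependency is not
    # in `done` yet -- it must wait for a later batch.
    ready = [
        task for task, requires in deps.items()
        if task not in done and all(d in done for d in requires)
    ]
    return ready[:MAX_PARALLEL]
```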
### 4. Assign File Ownership
The authoritative file-ownership map is `_docs/02_document/module-layout.md` (produced by the decompose skill's Step 1.5). Task specs are purely behavioral — they do NOT carry file paths. Derive ownership from the layout, not from the task spec's prose.

For each task in the batch:
- Read the task spec's Component field.
- Look up the component in `_docs/02_document/module-layout.md` → Per-Component Mapping.
- Set OWNED = the component's `Owns` glob (exclusive write for the duration of the batch).
- Set READ-ONLY = Public API files of every component in the component's `Imports from` list, plus all `shared/*` Public API files.
- Set FORBIDDEN = every other component's `Owns` glob, and every other component's internal (non-Public API) files.
- If the task is a shared / cross-cutting task (lives under `shared/*`), OWNED = that shared directory; READ-ONLY = nothing; FORBIDDEN = every component directory.
- If two tasks in the same batch map to the same component or overlapping `Owns` globs, schedule them sequentially instead of in parallel.

If `_docs/02_document/module-layout.md` is missing or the component is not found:
- STOP the batch.
- Instruct the user to run `/decompose` Step 1.5 or to manually add the component entry to `module-layout.md`.
- Do NOT guess file paths from the task spec — that is exactly the drift this file exists to prevent.
### 5. Update Tracker Status → In Progress
For each task in the batch, transition its ticket status to In Progress via the configured work item tracker (see `protocols.md` for tracker detection) before launching the implementer. If `tracker: local`, skip this step.
### 6. Launch Implementer Subagents
Per-batch dirty-tree re-check: before launching subagents, run `git status --porcelain`. On the first batch this is guaranteed clean by the prerequisite check. On subsequent batches, the previous batch ended with a commit, so the tree should still be clean. If the tree is dirty at this point, STOP and surface the dirty files to the user using the same A/B/C choice as the prerequisite check. The most likely causes are a failed commit in the previous batch, a user who edited files mid-loop, or a pre-commit hook that rewrote files and was not captured.

For each task in the batch, launch an implementer subagent with:
- Path to the task spec file
- List of files OWNED (exclusive write access)
- List of files READ-ONLY
- List of files FORBIDDEN
- Explicit instruction: the implementer must write or update tests that validate each acceptance criterion in the task spec. If a test cannot run in the current environment (e.g., TensorRT requires a GPU), the test must still be written and skip with a clear reason.

Launch all subagents immediately — no user confirmation.
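
The per-subagent launch input could be represented as follows (a sketch; the key names and example paths are illustrative, not a defined contract):

```python
launch_payload = {
    "task_spec": "_docs/02_tasks/todo/003_auth_login.md",  # hypothetical path
    "owned": ["src/auth/**"],
    "read_only": ["src/shared/api.py"],
    "forbidden": ["src/billing/**"],
    "instruction": (
        "Write or update tests validating each acceptance criterion. "
        "If a test cannot run in this environment, it must still exist "
        "and skip with a clear reason."
    ),
}
```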
### 7. Monitor
- Wait for all subagents to complete
- Collect structured status reports from each implementer
- If any implementer reports "Blocked", log the blocker and continue with the others

Stuck detection — while monitoring, watch for these signals per subagent (a sketch of the first heuristic follows this list):
- Same file modified 3+ times without the test pass rate improving → flag as stuck, stop the subagent, report as Blocked
- Subagent has not produced new output for an extended period → flag as potentially hung
- If a subagent is flagged as stuck, do NOT let it continue looping — stop it and record the blocker in the batch report
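
A minimal sketch of the repeated-edit signal, assuming the monitor observes each file edit together with the current test pass rate (the class and method names are illustrative):

```python
from collections import Counter

class StuckDetector:
    def __init__(self, max_edits_without_progress: int = 3):
        self.max_edits = max_edits_without_progress
        self.edit_counts: Counter[str] = Counter()
        self.best_pass_rate = 0.0

    def record(self, file_path: str, pass_rate: float) -> bool:
        """Return True if the subagent should be flagged as stuck."""
        if pass_rate > self.best_pass_rate:
            # Progress was made: reset the per-file edit counters.
            self.best_pass_rate = pass_rate
            self.edit_counts.clear()
            return False
        self.edit_counts[file_path] += 1
        # Same file edited 3+ times with no pass-rate improvement.
        return self.edit_counts[file_path] >= self.max_edits
```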
### 8. AC Test Coverage Verification
Before code review, verify that every acceptance criterion (AC) in each task spec has at least one test that validates it. For each task in the batch:
- Read the task spec's Acceptance Criteria section
- Search the test files (new and existing) for tests that cover each AC
- Classify each AC as:
  - Covered: a test directly validates this AC (running or skipped-with-reason)
  - Not covered: no test exists for this AC

If any AC is Not covered:
- This is a BLOCKING failure — the implementer must write the missing test before proceeding
- Re-launch the implementer with the specific ACs that need tests
- If the test cannot run in the current environment (GPU required, platform-specific, external service), the test must still exist and skip with `pytest.mark.skipif` or `pytest.skip()` explaining the prerequisite
- A skipped test counts as Covered — the test exists and will run when the environment allows

Only proceed to Step 9 when every AC has a corresponding test.
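
As a concrete example of the skip convention: an environment-gated test still exists and counts as Covered. The test name, AC number, and GPU probe below are illustrative:

```python
import shutil

import pytest

# Hypothetical environment gate: treat a missing `nvidia-smi` binary
# as "no GPU available".
requires_gpu = pytest.mark.skipif(
    shutil.which("nvidia-smi") is None,
    reason="TensorRT inference requires a GPU (AC-3)",
)

@requires_gpu
def test_engine_builds_within_latency_budget():
    # Validates AC-3. Skipped (not deleted) when no GPU is available,
    # so the AC stays Covered and the test runs in GPU environments.
    ...
```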
### 9. Code Review
- Run the `/code-review` skill on the batch's changed files plus the corresponding task specs
- The code-review skill produces a verdict: PASS, PASS_WITH_WARNINGS, or FAIL
### 10. Auto-Fix Gate
This is a bounded auto-fix loop that applies only to mechanical findings. Critical and Security findings are never auto-fixed.

Auto-fix eligibility matrix:
| Severity | Category | Auto-fix? |
|---|---|---|
| Low | any | yes |
| Medium | Style, Maintainability, Performance | yes |
| Medium | Bug, Spec-Gap, Security, Architecture | escalate |
| High | Style, Scope | yes |
| High | Bug, Spec-Gap, Performance, Maintainability, Architecture | escalate |
| Critical | any | escalate |
| any | Security | escalate |
| any | Architecture (cyclic deps) | escalate |
Flow:
- If the verdict is PASS or PASS_WITH_WARNINGS: show findings as info, continue to step 11
- If the verdict is FAIL:
  - Partition findings into auto-fix-eligible and escalate (using the matrix above)
  - For eligible findings, attempt fixes using location/description/suggestion, then re-run `/code-review` on the modified files (max 2 rounds)
  - If all remaining findings are auto-fix-eligible and the re-review now passes → continue to step 11
  - If any non-eligible finding exists at any point → stop auto-fixing, present the full list to the user (BLOCKING)
  - The user must explicitly approve each non-auto-fix finding (accept, request manual fix, mark as out-of-scope) before proceeding

Track `auto_fix_attempts` and `escalated_findings` in the batch report for retrospective analysis.
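
The eligibility matrix above, transcribed as a predicate (a sketch; a cyclic-dependency flag is passed in because the matrix escalates those Architecture findings at any severity):

```python
def auto_fix_eligible(severity: str, category: str,
                      cyclic_dep: bool = False) -> bool:
    # Blanket escalations from the matrix: Security findings, Critical
    # findings, and cyclic-dependency Architecture findings.
    if category == "Security" or severity == "Critical" or cyclic_dep:
        return False
    if severity == "Low":
        return True
    if severity == "Medium":
        return category in {"Style", "Maintainability", "Performance"}
    if severity == "High":
        return category in {"Style", "Scope"}
    return False  # unknown severity: escalate rather than auto-fix
```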
### 11. Commit (and Optionally Push)
After the user confirms the batch (explicitly for FAIL, implicitly for PASS/PASS_WITH_WARNINGS):
- `git add` all changed files from the batch
- `git commit` with a message that includes ALL task IDs (tracker IDs or numeric prefixes) of tasks implemented in the batch, followed by a summary of what was implemented. Format: `[TASK-ID-1] [TASK-ID-2] ... Summary of changes`
- Ask the user whether to push to remote, unless the user previously opted into auto-push for this session
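
A minimal sketch of assembling the commit message in the required format (the example task IDs are hypothetical):

```python
def batch_commit_message(task_ids: list[str], summary: str) -> str:
    # "[TASK-ID-1] [TASK-ID-2] ... Summary of changes"
    prefix = " ".join(f"[{tid}]" for tid in task_ids)
    return f"{prefix} {summary}"

# batch_commit_message(["PROJ-12", "PROJ-14"], "Add login flow and session store")
# -> "[PROJ-12] [PROJ-14] Add login flow and session store"
```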
### 12. Update Tracker Status → In Testing
After the batch is committed and pushed, transition the ticket status of each task in the batch to In Testing via the configured work item tracker. If `tracker: local`, skip this step.
### 13. Archive Completed Tasks
Move each completed task file from `TASKS_DIR/todo/` to `TASKS_DIR/done/`.
### 14. Loop
- Go back to step 2 until all tasks in `todo/` are done
### 14.5. Cumulative Code Review (every K batches)
- Trigger: every K completed batches (default `K = 3`; configurable per run via a `cumulative_review_interval` knob in the invocation context)
- Purpose: per-batch review (Step 9) catches batch-local issues; cumulative review catches issues that only appear when tasks are combined — architecture drift, cross-task inconsistency, duplicate symbols introduced across different batches, and contracts that drifted between producer and consumer batches
- Scope: the union of files changed since the last cumulative review (or since the start of the run, if this is the first)
- Action: invoke `.cursor/skills/code-review/SKILL.md` in cumulative mode. All 7 phases run, with emphasis on Phase 6 (Cross-Task Consistency), Phase 7 (Architecture Compliance), and duplicate-symbol detection across the accumulated code
- Output: write the report to `_docs/03_implementation/cumulative_review_batches_[NN-MM]_cycle[N]_report.md`, where `[NN-MM]` is the batch range covered and `[N]` is the current `state.cycle`. When `_docs/02_document/architecture_compliance_baseline.md` exists, the report includes the `## Baseline Delta` section (carried over / resolved / newly introduced) per the "Baseline delta" rules in `code-review/SKILL.md`
- Gate: `PASS` or `PASS_WITH_WARNINGS` → continue to the next batch (step 14 loop). `FAIL` → STOP and present the report to the user via the Choose format:
  - A) Auto-fix findings using the Auto-Fix Gate matrix in step 10, then re-run cumulative review
  - B) Open a targeted refactor run (invoke the refactor skill in guided mode with the findings as `list-of-changes.md`)
  - C) Manually fix, then re-invoke `/implement`
- Do NOT loop to the next batch on `FAIL` — the whole point is to stop drift before it compounds
- Interaction with the Auto-Fix Gate: Architecture findings (the new category from code-review Phase 7) always escalate per the implement auto-fix matrix; they are never silently auto-fixed
- Resumability: if interrupted, the next invocation checks for the latest `cumulative_review_batches_*.md` and computes the changed-file set from batch reports produced after that review (see the sketch after this list)
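
A sketch of locating the latest cumulative review report; the filename parsing assumes the naming scheme given above:

```python
import glob
import re

def latest_cumulative_review() -> str | None:
    reports = glob.glob(
        "_docs/03_implementation/cumulative_review_batches_*_report.md"
    )
    # Filenames embed the batch range, e.g. "...batches_04-06_cycle1_report.md";
    # order by the end of the range to find the most recent review.
    def end_batch(path: str) -> int:
        m = re.search(r"batches_(\d+)-(\d+)", path)
        return int(m.group(2)) if m else -1
    return max(reports, key=end_batch, default=None)
```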
### 15. Final Test Run
- After all batches are complete, run the full test suite once
- Read and execute `.cursor/skills/test-run/SKILL.md` (detect runner, run suite, diagnose failures, present blocking choices)
- Test failures are a blocking gate — do not proceed until the test-run skill completes with a user decision
- When tests pass, report the final summary
## Batch Report Persistence
After each batch completes, save the batch report to `_docs/03_implementation/batch_[NN]_cycle[N]_report.md` for feature implementation (or `batch_[NN]_report.md` for test/refactor runs). Create the directory if it doesn't exist. When all tasks are complete, produce a FINAL implementation report with a summary of all batches. The filename depends on context:
- Test implementation (tasks from test decomposition): `_docs/03_implementation/implementation_report_tests.md`
- Feature implementation: `_docs/03_implementation/implementation_report_{feature_slug}_cycle{N}.md`, where `{feature_slug}` is derived from the batch task names (e.g., `implementation_report_core_api_cycle2.md`) and `{N}` is the current `state.cycle` from `_docs/_autodev_state.md`. If `state.cycle` is absent (pre-migration), default to `cycle1`
- Refactoring: `_docs/03_implementation/implementation_report_refactor_{run_name}.md`

Determine the context from the task files being implemented: if all tasks have test-related names or belong to a test epic, use the tests filename; otherwise derive the feature slug from the component names and append the cycle suffix.

Batch report filenames must also include the cycle counter when running feature implementation: `_docs/03_implementation/batch_{NN}_cycle{N}_report.md` (test and refactor runs may use the plain `batch_{NN}_report.md` form, since they are not cycle-scoped).
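
Final-report filename selection could look like this (a sketch; the argument names are illustrative, and the cycle default mirrors the pre-migration rule above):

```python
def final_report_name(context: str, feature_slug: str = "",
                      cycle: int = 1, run_name: str = "") -> str:
    base = "_docs/03_implementation/"
    if context == "tests":
        return base + "implementation_report_tests.md"
    if context == "refactor":
        return base + f"implementation_report_refactor_{run_name}.md"
    # Feature implementation: cycle defaults to 1 when state.cycle is absent.
    return base + f"implementation_report_{feature_slug}_cycle{cycle}.md"
```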
## Batch Report
After each batch, produce a structured report in this form:
# Batch Report
**Batch**: [N]
**Tasks**: [list]
**Date**: [YYYY-MM-DD]
## Task Results
| Task | Status | Files Modified | Tests | AC Coverage | Issues |
|------|--------|---------------|-------|-------------|--------|
| [TRACKER-ID]_[name] | Done | [count] files | [pass/fail] | [N/N ACs covered] | [count or None] |
## AC Test Coverage: [All covered / X of Y covered]
## Code Review Verdict: [PASS/FAIL/PASS_WITH_WARNINGS]
## Auto-Fix Attempts: [0/1/2]
## Stuck Agents: [count or None]
## Next Batch: [task list] or "All tasks complete"
## Stop Conditions and Escalation
| Situation | Action |
|---|---|
| Implementer fails the same approach 3+ times | Stop it, escalate to user |
| Task blocked on external dependency (not in task list) | Report and skip |
| File ownership conflict unresolvable | ASK user |
| Test failure after final test run | Delegate to test-run skill — blocking gate |
| All tasks complete | Report final summary, suggest final commit |
| `_dependencies_table.md` missing | STOP — run `/decompose` first |
## Recovery
Each batch commit serves as a rollback checkpoint. If recovery is needed:
- Tests fail after the final test run: `git revert <batch-commit-hash>`, using hashes from the batch reports in `_docs/03_implementation/`
- Resuming after interruption: read the `_docs/03_implementation/batch_*_report.md` files (filtered by the current `state.cycle` for feature implementation) to determine which batches completed, then continue from the next batch (see the sketch after this list)
- Multiple consecutive batches fail: stop and escalate to the user with links to batch reports and commit hashes
## Safety Rules
- Never launch tasks whose dependencies are not yet completed
- Never allow two parallel agents to write to the same file
- If a subagent fails or is flagged as stuck, stop it and report — do not let it loop indefinitely
- Always run the full test suite after all batches complete (step 15)