---
name: implement
description: >
  Orchestrate task implementation with dependency-aware batching, parallel
  subagents, and integrated code review. Reads flat task files and
  _dependencies_table.md from TASKS_DIR, computes execution batches via
  topological sort, launches up to 4 implementer subagents in parallel, runs
  the code-review skill after each batch, and loops until done. Use after
  /decompose has produced task files. Trigger phrases: "implement",
  "start implementation", "implement tasks", "run implementers", "execute tasks".
category: build
tags: []
disable-model-invocation: true
---
# Implementation Orchestrator
Orchestrate the implementation of all tasks produced by the /decompose skill. This skill is a pure orchestrator — it does NOT write implementation code itself. It reads task specs, computes execution order, delegates to implementer subagents, validates results via the /code-review skill, and escalates issues.
The implementer agent is the specialist that writes all the code — it receives a task spec, analyzes the codebase, implements the feature, writes tests, and verifies acceptance criteria.
## Core Principles
- Orchestrate, don't implement: this skill delegates all coding to `implementer` subagents
- Dependency-aware batching: tasks run only when all their dependencies are satisfied
- Max 4 parallel agents: never launch more than 4 implementer subagents simultaneously
- File isolation: no two parallel agents may write to the same file
- Integrated review: the `/code-review` skill runs automatically after each batch
- Auto-start: batches launch immediately, with no user confirmation before a batch
- Gate on failure: user confirmation is required only when code review returns FAIL
- Commit and push per batch: after each batch is confirmed, commit and push to remote
## Context Resolution
- TASKS_DIR: `_docs/02_tasks/`
- Task files: all `*.md` files in TASKS_DIR (excluding files starting with `_`)
- Dependency table: `TASKS_DIR/_dependencies_table.md`
## Prerequisite Checks (BLOCKING)
- TASKS_DIR exists and contains at least one task file; STOP if missing
- `_dependencies_table.md` exists; STOP if missing
- At least one task is not yet completed; STOP if all done
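A minimal sketch of these blocking checks (the function name, return value, and error messages are illustrative assumptions, not part of the skill contract; the not-yet-completed check needs the progress scan from step 2 and is left to the caller):

```python
from pathlib import Path

def check_prerequisites(tasks_dir: str = "_docs/02_tasks") -> list[Path]:
    """Return the task files, or stop with a blocking error."""
    d = Path(tasks_dir)
    # Task files: all *.md files, excluding files starting with "_"
    tasks = [p for p in d.glob("*.md") if not p.name.startswith("_")]
    if not tasks:
        raise SystemExit("STOP: TASKS_DIR is missing or contains no task files")
    if not (d / "_dependencies_table.md").exists():
        raise SystemExit("STOP: _dependencies_table.md missing; run /decompose first")
    return tasks
```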
## Algorithm
### 1. Parse
- Read all `*.md` task files from TASKS_DIR (excluding files starting with `_`)
- Read `_dependencies_table.md` and parse it into a dependency graph (DAG)
- Validate: no circular dependencies, all referenced dependencies exist
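Assuming the dependency table parses into a dict mapping each task ID to its set of prerequisites (the parsing itself depends on the table's exact layout and is omitted), the validation step can be sketched with the standard library's graphlib; the helper name is hypothetical:

```python
from graphlib import TopologicalSorter, CycleError

def validate_dag(deps: dict[str, set[str]]) -> None:
    """Raise ValueError if the graph has unknown references or cycles."""
    known = set(deps)
    for task, prereqs in deps.items():
        missing = prereqs - known
        if missing:
            raise ValueError(f"{task} references unknown tasks: {sorted(missing)}")
    # TopologicalSorter raises CycleError on circular dependencies.
    try:
        tuple(TopologicalSorter(deps).static_order())
    except CycleError as e:
        raise ValueError(f"circular dependency detected: {e.args[1]}") from e
```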
### 2. Detect Progress
- Scan the codebase to determine which tasks are already completed
- Match implemented code against task acceptance criteria
- Mark completed tasks as done in the DAG
- Report progress to user: "X of Y tasks completed"
### 3. Compute Next Batch
- Topologically sort the remaining tasks
- Select tasks whose dependencies are ALL satisfied (completed)
- If a ready task depends on any task currently being worked on in this batch, it must wait for the next batch
- Cap the batch at 4 parallel agents
- If the batch would exceed 20 total complexity points, suggest splitting and let the user decide
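A sketch of the batch computation under the rules above, assuming a `done` set from step 2 and a per-task `complexity` mapping taken from the task specs (the function name and signature are illustrative):

```python
def next_batch(deps: dict[str, set[str]], done: set[str],
               complexity: dict[str, int], max_agents: int = 4):
    """Select up to max_agents tasks whose dependencies are all completed."""
    # A task that depends on anything not yet done (including a task that
    # would land in this same batch) is not ready and must wait.
    ready = sorted(t for t in deps if t not in done and deps[t] <= done)
    batch, points = [], 0
    for task in ready:
        if len(batch) == max_agents:    # hard cap: 4 parallel agents
            break
        batch.append(task)
        points += complexity.get(task, 0)
    return batch, points
```

The caller compares the returned point total against the 20-point threshold and, if exceeded, suggests splitting and lets the user decide.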
### 4. Assign File Ownership
For each task in the batch:
- Parse the task spec's Component field and Scope section
- Map the component to directories/files in the project
- Determine: files OWNED (exclusive write), files READ-ONLY (shared interfaces, types), files FORBIDDEN (other agents' owned files)
- If two tasks in the same batch would modify the same file, schedule them sequentially instead of in parallel
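The conflict rule can be sketched as a greedy split: a task whose owned files overlap an earlier task's claim is deferred to a sequential follow-up run (the function name is an assumption):

```python
def split_conflicting(batch: list[str], owned_files: dict[str, set[str]]):
    """Keep tasks with disjoint owned files in parallel; defer the rest."""
    claimed: set[str] = set()
    parallel, deferred = [], []
    for task in batch:
        files = owned_files[task]
        if claimed & files:
            deferred.append(task)   # overlaps an already-claimed file: run later
        else:
            parallel.append(task)
            claimed |= files
    return parallel, deferred
```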
### 5. Update Jira Status → In Progress
For each task in the batch, transition its Jira ticket status to In Progress via Jira MCP before launching the implementer.
### 6. Launch Implementer Subagents
For each task in the batch, launch an implementer subagent with:
- Path to the task spec file
- List of files OWNED (exclusive write access)
- List of files READ-ONLY
- List of files FORBIDDEN
Launch all subagents immediately — no user confirmation.
### 7. Monitor
- Wait for all subagents to complete
- Collect structured status reports from each implementer
- If any implementer reports "Blocked", log the blocker and continue with others
Stuck detection — while monitoring, watch for these signals per subagent:
- Same file modified 3+ times without test pass rate improving → flag as stuck, stop the subagent, report as Blocked
- Subagent has not produced new output for an extended period → flag as potentially hung
- If a subagent is flagged as stuck, do NOT let it continue looping — stop it and record the blocker in the batch report
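One way to sketch the stuck-detection counter, assuming each subagent reports file edits together with its current test pass rate (the class and its API are illustrative):

```python
from collections import Counter

class StuckDetector:
    """Flag a subagent that edits the same file repeatedly without progress."""

    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.edits: Counter[str] = Counter()
        self.best_pass_rate = 0.0

    def record(self, path: str, pass_rate: float) -> bool:
        """Return True if the subagent should be stopped as stuck."""
        if pass_rate > self.best_pass_rate:
            self.best_pass_rate = pass_rate
            self.edits.clear()          # genuine progress resets the counters
        self.edits[path] += 1
        return self.edits[path] >= self.threshold
```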
### 8. Code Review
- Run the `/code-review` skill on the batch's changed files plus the corresponding task specs
- The code-review skill produces a verdict: PASS, PASS_WITH_WARNINGS, or FAIL
### 9. Auto-Fix Gate
Auto-fix loop with bounded retries (max 2 attempts) before escalating to user:
- If verdict is PASS or PASS_WITH_WARNINGS: show findings as info, continue automatically to step 10
- If verdict is FAIL (attempt 1 or 2):
  - Parse the code review findings (Critical and High severity items)
  - For each finding, attempt an automated fix using the finding's location, description, and suggestion
  - Re-run `/code-review` on the modified files
  - If now PASS or PASS_WITH_WARNINGS → continue to step 10
  - If still FAIL → increment the retry counter and repeat the auto-fix cycle, up to max 2 attempts
- If still FAIL after 2 auto-fix attempts: present all findings to user (BLOCKING). User must confirm fixes or accept before proceeding.
Track the `auto_fix_attempts` count in the batch report for retrospective analysis.
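The bounded retry loop might look like this, with `run_review` and `apply_fixes` standing in for the /code-review invocation and the automated fixer (both hypothetical callables):

```python
def review_with_autofix(run_review, apply_fixes, max_attempts: int = 2):
    """Bounded auto-fix loop; a FAIL surviving max_attempts goes to the user."""
    attempts = 0
    verdict, findings = run_review()
    while verdict == "FAIL" and attempts < max_attempts:
        attempts += 1
        apply_fixes(findings)             # fix Critical/High findings
        verdict, findings = run_review()  # re-review the modified files
    # Caller: verdict != "FAIL" proceeds; "FAIL" here is BLOCKING escalation.
    return verdict, findings, attempts
```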
### 10. Test
- Run the full test suite
- If failures: report to user with details
### 11. Commit and Push
- After user confirms the batch (explicitly for FAIL, implicitly for PASS/PASS_WITH_WARNINGS):
- `git add` all changed files from the batch
- `git commit` with a message that includes ALL JIRA-IDs of tasks implemented in the batch, followed by a summary of what was implemented. Format: `[JIRA-ID-1] [JIRA-ID-2] ... Summary of changes`
- `git push` to the remote branch
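The commit message format above can be sketched as (the helper name is an assumption):

```python
def batch_commit_message(jira_ids: list[str], summary: str) -> str:
    """Build the batch commit message: all JIRA-IDs, then the summary."""
    tags = " ".join(f"[{jid}]" for jid in jira_ids)
    return f"{tags} {summary}"
```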
### 12. Update Jira Status → In Testing
After the batch is committed and pushed, transition the Jira ticket status of each task in the batch to In Testing via Jira MCP.
### 13. Loop
- Go back to step 2 until all tasks are done
- When all tasks are complete, report final summary
## Batch Report Persistence
After each batch completes, save the batch report to `_docs/03_implementation/batch_[NN]_report.md`. Create the directory if it doesn't exist. When all tasks are complete, produce `_docs/03_implementation/FINAL_implementation_report.md` with a summary of all batches.
## Batch Report
After each batch, produce a structured report:
# Batch Report
**Batch**: [N]
**Tasks**: [list]
**Date**: [YYYY-MM-DD]
## Task Results
| Task | Status | Files Modified | Tests | Issues |
|------|--------|---------------|-------|--------|
| [JIRA-ID]_[name] | Done | [count] files | [pass/fail] | [count or None] |
## Code Review Verdict: [PASS/FAIL/PASS_WITH_WARNINGS]
## Auto-Fix Attempts: [0/1/2]
## Stuck Agents: [count or None]
## Next Batch: [task list] or "All tasks complete"
## Stop Conditions and Escalation
| Situation | Action |
|---|---|
| Implementer fails same approach 3+ times | Stop it, escalate to user |
| Task blocked on external dependency (not in task list) | Report and skip |
| File ownership conflict unresolvable | ASK user |
| Test failures exceed 50% of suite after a batch | Stop and escalate |
| All tasks complete | Report final summary, suggest final commit |
| `_dependencies_table.md` missing | STOP; run /decompose first |
## Recovery
Each batch commit serves as a rollback checkpoint. If recovery is needed:
- Tests fail after a batch commit: `git revert <batch-commit-hash>` using the hash from the batch report in `_docs/03_implementation/`
- Resuming after interruption: read the `_docs/03_implementation/batch_*_report.md` files to determine which batches completed, then continue from the next batch
- Multiple consecutive batches fail: stop and escalate to user with links to batch reports and commit hashes
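Resuming from the saved reports could be sketched as follows; the helper is illustrative and assumes the `batch_[NN]_report.md` naming from Batch Report Persistence:

```python
import re
from pathlib import Path

def last_completed_batch(report_dir: str = "_docs/03_implementation") -> int:
    """Return the highest batch number with a saved report; resume after it."""
    pattern = re.compile(r"batch_(\d+)_report\.md$")
    numbers = [
        int(m.group(1))
        for p in Path(report_dir).glob("batch_*_report.md")
        if (m := pattern.search(p.name))
    ]
    return max(numbers, default=0)  # 0 means no batch has completed yet
```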
## Safety Rules
- Never launch tasks whose dependencies are not yet completed
- Never allow two parallel agents to write to the same file
- If a subagent fails or is flagged as stuck, stop it and report — do not let it loop indefinitely
- Always run tests after each batch completes