chore: sync .cursor from suite

Made-with: Cursor
Oleksandr Bezdieniezhnykh
2026-04-25 19:44:40 +03:00
parent b6516274f6
commit 3482173e7d
26 changed files with 807 additions and 288 deletions
+1 -1
@@ -52,7 +52,7 @@ Determine which flow to use (check in order — first match wins):
After selecting the flow, apply its detection rules (first match wins) to determine the current step.
**Note**: the meta-repo flow uses a different artifact layout — its source of truth is `_docs/_repo-config.yaml`, not `_docs/NN_*/` folders. Other detection rules assume the BUILD-SHIP artifact layout; they don't apply to meta-repos.
**Note**: the meta-repo flow uses a different artifact layout — its source of truth is `_docs/_repo-config.yaml`, not `_docs/NN_*/` folders. After Step 2.5 it also produces `_docs/glossary.md` and a `## Architecture Vision` section in the cross-cutting architecture doc identified by `docs.cross_cutting`. Other detection rules assume the BUILD-SHIP artifact layout; they don't apply to meta-repos.
## Execution Loop
@@ -13,7 +13,7 @@ A first-time run executes Phase A then Phase B; every subsequent invocation re-e
| Step | Name | Sub-Skill | Internal SubSteps |
|------|------|-----------|-------------------|
| 1 | Document | document/SKILL.md | Steps 1–8 |
| 1 | Document | document/SKILL.md | Steps 0–7 incl. inline 2.5 (module-layout) and 4.5 (glossary + arch vision) |
| 2 | Architecture Baseline Scan | code-review/SKILL.md (baseline mode) | Phase 1 + Phase 7 |
| 3 | Test Spec | test-spec/SKILL.md | Phases 1–4 |
| 4 | Code Testability Revision | refactor/SKILL.md (guided mode) | Phases 0–7 (conditional) |
@@ -53,6 +53,8 @@ Action: An existing codebase without documentation was detected. Read and execut
The document skill's Step 2.5 produces `_docs/02_document/module-layout.md`, which is required by every downstream step that assigns file ownership (`/implement` Step 4, `/code-review` Phase 7, `/refactor` discovery). If this file is missing after Step 1 completes (e.g., a pre-existing `_docs/` dir predates the 2.5 addition), re-invoke `/document` in resume mode — it will pick up at Step 2.5.
The document skill's Step 4.5 produces `_docs/02_document/glossary.md` and prepends a confirmed `## Architecture Vision` section to `architecture.md`. Both are user-confirmed artifacts; downstream skills (refactor, decompose, new-task) treat them as authoritative for terminology and structural intent. If `glossary.md` is missing after Step 1 (pre-existing `_docs/` dir from before the 4.5 addition), re-invoke `/document` in resume mode — it will pick up at Step 4.5 without redoing module/component analysis.
---
**Step 2 — Architecture Baseline Scan**
+159 -22
@@ -15,8 +15,10 @@ This flow differs fundamentally from `greenfield` and `existing-code`:
|------|------|-----------|-------------------|
| 1 | Discover | monorepo-discover/SKILL.md | Phases 1–10 |
| 2 | Config Review | (human checkpoint, no sub-skill) | — |
| 2.5 | Glossary & Architecture Vision | (inline, no sub-skill) | Steps 1–5 |
| 3 | Status | monorepo-status/SKILL.md | Sections 1–5 |
| 4 | Document Sync | monorepo-document/SKILL.md | Phases 1–7 (conditional on doc drift) |
| 4.5 | Integration Test Sync | monorepo-e2e/SKILL.md | Phases 1–6 (conditional on suite-e2e drift; skipped if `suite_e2e:` block absent in config) |
| 5 | CICD Sync | monorepo-cicd/SKILL.md | Phases 1–7 (conditional on CI drift) |
| 6 | Loop | (auto-return to Step 3 on next invocation) | — |
@@ -58,17 +60,121 @@ Action: This is a **hard session boundary**. The skill cannot proceed until a hu
══════════════════════════════════════
```
- If user picks A → verify `confirmed_by_user: true` is now set in the config. If still `false`, re-ask. If true, auto-chain to **Step 3 (Status)**.
- If user picks A → verify `confirmed_by_user: true` is now set in the config. If still `false`, re-ask. If true, auto-chain to **Step 2.5 (Glossary & Architecture Vision)**.
- If user picks B → mark Step 2 as `in_progress`, update state file, end the session. Tell the user to invoke `/autodev` again after reviewing.
**Do NOT auto-flip `confirmed_by_user`.** Only the human does that.
---
**Step 2.5 — Glossary & Architecture Vision** (one-shot)
Condition (folder fallback): `_docs/_repo-config.yaml` exists AND `confirmed_by_user: true` AND (`_docs/glossary.md` does NOT exist OR the cross-cutting architecture doc identified in `docs.cross_cutting` does NOT contain a `## Architecture Vision` section).
State-driven: reached by auto-chain from Step 2 (user picked A).
**Goal**: Capture meta-repo-wide terminology and the user's architecture vision **once**, after the config is confirmed but before any sync skill runs. Without this, `monorepo-document` will faithfully propagate per-component changes but never surface a unified mental model of the meta-repo to the user, and the AI will keep re-inferring the same project terminology on every invocation.
**Why inline (no sub-skill)**: `monorepo-discover` is hard-guarded to write only `_repo-config.yaml`; `monorepo-document` only edits *existing* docs. Glossary and architecture-vision creation is a first-time, user-confirmed write that crosses both guarantees, so it lives directly in the flow.
**Inputs**:
- `_docs/_repo-config.yaml` (component list, doc map, conventions, assumptions log)
- Cross-cutting docs listed under `docs.cross_cutting` (existing architecture doc, if any)
- Each component's `primary_doc` (read-only, for terminology + responsibility extraction)
- Root `README.md` if `repo.root_readme` is referenced
**Outputs**:
- `_docs/glossary.md` (or `<docs.root>/glossary.md` if `docs.root` ≠ `_docs/`) — NEW
- The cross-cutting architecture doc updated in place: a `## Architecture Vision` section is prepended (or merged into an existing "Vision" / "Overview" heading)
- One new entry appended to `_docs/_repo-config.yaml` under `assumptions_log:` recording the run
- A new top-level config entry: `glossary_doc: <path>` so future `monorepo-status` and `monorepo-document` runs treat the glossary as a known cross-cutting doc
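As a sketch, the two config additions might look like this (key names come from the output list above; the date, component name, and example assumption are illustrative):

```yaml
# Appended to _docs/_repo-config.yaml by Step 2.5 (values illustrative)
glossary_doc: _docs/glossary.md        # sibling of docs.root

assumptions_log:
  # ...existing entries preserved...
  - date: 2026-04-25
    skill: autodev-meta-repo Step 2.5
    run_notes: "Captured glossary + architecture vision"
    assumptions:
      - "Term 'detection' sourced from components.detections.primary_doc"
```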
**Procedure**:
1. **Draft glossary** from `_repo-config.yaml` + each component's primary doc. Include:
- Component codenames as they appear in the config (`name` field) and any rename pairs the user noted in `unresolved:` resolutions
- Domain terms that recur across ≥2 component docs
- Acronyms / abbreviations
- Convention names from `conventions:` (e.g., commit prefix, deployment tier names)
- Stakeholder personas if cross-cutting docs reference them
Each entry: one-line definition + source (`source: components.<name>.primary_doc` or `source: _repo-config.yaml conventions`). Skip generic terms.
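A drafted entry, sketched in the same template style used elsewhere in this flow (terms and sources are illustrative):

```
- **Tier**: Deployment grouping of components released and promoted together.
  source: _repo-config.yaml conventions
- **Detection**: A model-produced event persisted by the pipeline for review.
  source: components.detections.primary_doc
```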
2. **Draft architecture vision** from the meta-repo perspective:
- **One paragraph**: what the system as a whole is, what each component contributes, the runtime topology (one binary / N services / N clients + 1 server / hybrid), how components communicate (REST / gRPC / queue / DB-shared / file-shared)
- **Components & responsibilities** (one-line each), pulled directly from `_repo-config.yaml` `components:` list
- **Cross-cutting concerns ownership**: which doc owns which concern (auth, schema, deployment, etc.) — pulled from `docs.cross_cutting[].owns`
- **Architectural principles / non-negotiables** the user has implied across components (e.g., "all components share a single Postgres", "submodules own their own CI", "deployment is per-tier, not per-component")
- **Open questions / structural drift signals**: components missing from `docs.cross_cutting`, components in registry but not in config (registry mismatch), or contradictions between component primary docs
3. **Present condensed view** to the user (NOT the full draft files):
```
══════════════════════════════════════
REVIEW: Meta-Repo Glossary + Architecture Vision
══════════════════════════════════════
Glossary (N terms drafted from config + component docs):
- <Term>: <one-line definition>
- ...
Architecture Vision — meta-repo level:
<one-paragraph synopsis>
Components / responsibilities:
- <component>: <one-line>
- ...
Cross-cutting ownership:
- <concern> → <doc>
- ...
Principles / non-negotiables:
- <principle>
- ...
Open questions / drift signals:
- <q1>
- <q2>
══════════════════════════════════════
A) Looks correct — write the files
B) Add / correct entries (provide diffs)
C) Resolve open questions / drift signals first
══════════════════════════════════════
Recommendation: pick C if drift signals exist;
otherwise B if components or principles
don't match your intent; A only when
the inferred vision is exactly right.
══════════════════════════════════════
```
4. **Iterate**:
- On B → integrate the user's diffs/additions, re-present, loop until A.
- On C → ask the listed open questions in one batch, integrate answers, re-present.
- **Do NOT proceed to step 5 until the user picks A.**
5. **Save**:
- Write `_docs/glossary.md` (alphabetical) with `**Status**: confirmed-by-user` + date.
- Update the cross-cutting architecture doc identified in `docs.cross_cutting` (or create one at `_docs/00_architecture.md` if none exists and the user's option-B input named one): prepend `## Architecture Vision` with the confirmed paragraph + components + ownership + principles. Preserve every existing H2 below verbatim.
- Append to `_docs/_repo-config.yaml`:
- Top-level `glossary_doc: <path-relative-to-repo-root>` (sibling of `docs.root`)
- New `assumptions_log:` entry: `{ date: <today>, skill: autodev-meta-repo Step 2.5, run_notes: "Captured glossary + architecture vision", assumptions: [...] }`
- Do NOT flip any `confirmed: false` → `confirmed: true` in the config; this step writes its own confirmed artifact, it does not retroactively confirm config inferences.
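The resulting architecture-doc shape, sketched (the sub-heading names mirror the condensed view above and are illustrative; everything below the first pre-existing H2 is untouched):

```
# <cross-cutting architecture doc>

## Architecture Vision            <- prepended by this step
<confirmed paragraph>
### Components & responsibilities
### Cross-cutting ownership
### Principles / non-negotiables

## <existing H2>                  <- preserved verbatim
## <existing H2>                  <- preserved verbatim
```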
**Self-verification**:
- [ ] Every glossary entry traces to either the config or a component primary doc
- [ ] Every component listed in the vision matches a `components:` entry in the config
- [ ] All open questions are answered or explicitly deferred (with the user's acknowledgement)
- [ ] The cross-cutting architecture doc still contains every H2 it had before this step
- [ ] User picked option A on the latest condensed view
**Idempotency**: if both `_docs/glossary.md` exists AND the architecture doc already has a `## Architecture Vision` section, this step is **skipped on re-invocation**. To refresh, the user invokes `/autodev` after deleting `glossary.md` (or running `monorepo-discover` with structural changes that justify a re-confirmation).
After completion, auto-chain to **Step 3 (Status)**.
---
**Step 3 — Status**
Condition (folder fallback): `_docs/_repo-config.yaml` exists AND `confirmed_by_user: true`.
State-driven: reached by auto-chain from Step 2 (user picked A), or entered on any re-invocation after a completed cycle.
Condition (folder fallback): `_docs/_repo-config.yaml` exists AND `confirmed_by_user: true` AND (`_docs/glossary.md` exists OR `glossary_doc:` is recorded in the config).
State-driven: reached by auto-chain from Step 2.5, or entered on any re-invocation after a completed cycle.
Action: Read and execute `.cursor/skills/monorepo-status/SKILL.md`.
@@ -115,6 +221,28 @@ The skill:
3. Applies doc edits
4. Skips any component with unconfirmed mapping (M5), reports
After completion:
- If the status report ALSO flagged suite-e2e drift → auto-chain to **Step 4.5 (Integration Test Sync)**
- Else if the status report ALSO flagged CI drift → auto-chain to **Step 5 (CICD Sync)**
- Else → end cycle, report done
---
**Step 4.5 — Integration Test Sync**
State-driven: reached by auto-chain from Step 3 (when status report flagged suite-e2e drift and no doc drift) or from Step 4 (when both doc and suite-e2e drift were flagged).
**Skip condition**: if `_docs/_repo-config.yaml` has no `suite_e2e:` block, this step is skipped entirely — there's no harness to sync. The status report should not flag suite-e2e drift in that case; if it does, that's a status-skill bug.
Action: Read and execute `.cursor/skills/monorepo-e2e/SKILL.md` with scope = components flagged by status.
The skill:
1. Verifies every path under `suite_e2e.*` exists (binary fixtures excepted — see the skill's Phase 1)
2. Classifies each flagged change against the suite-e2e impact table
3. Applies edits to `e2e/docker-compose.suite-e2e.yml`, `e2e/fixtures/init.sql`, `e2e/fixtures/expected_detections.json` metadata, and `e2e/runner/tests/*.spec.ts` selectors as needed
4. Bumps baseline `fixture_version` with a `-stale` suffix and appends a `_docs/_process_leftovers/` entry whenever the detection model revision changes (binary fixture cannot be regenerated automatically)
5. Reports synced files; does not run the suite e2e itself
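For orientation, a `suite_e2e:` config block consistent with the paths this step edits might look like the following; the nested key names are assumptions, only the file paths appear in the text above:

```yaml
# Hypothetical shape of the suite_e2e block in _docs/_repo-config.yaml
suite_e2e:
  compose: e2e/docker-compose.suite-e2e.yml
  fixtures:
    init_sql: e2e/fixtures/init.sql
    expected_detections: e2e/fixtures/expected_detections.json
    fixture_version: "3"          # becomes "3-stale" when the detection model revision changes
  runner_tests: e2e/runner/tests/
```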
After completion:
- If the status report ALSO flagged CI drift → auto-chain to **Step 5 (CICD Sync)**
- Else → end cycle, report done
@@ -123,11 +251,11 @@ After completion:
**Step 5 — CICD Sync**
State-driven: reached by auto-chain from Step 3 (when status report flagged CI drift and no doc drift) or from Step 4 (when both doc and CI drift were flagged).
State-driven: reached by auto-chain from Step 3 (when status report flagged CI drift and no doc/suite-e2e drift), Step 4, or Step 4.5.
Action: Read and execute `.cursor/skills/monorepo-cicd/SKILL.md` with scope = components flagged by status.
After completion, end cycle. Report files updated across both doc and CI sync.
After completion, end cycle. Report files updated across doc, suite-e2e, and CI sync.
---
@@ -156,14 +284,19 @@ After onboarding completes, the config is updated. Auto-chain back to **Step 3 (
| Completed Step | Next Action |
|---------------|-------------|
| Discover (1) | Auto-chain → Config Review (2) |
| Config Review (2, user picked A, confirmed_by_user: true) | Auto-chain → Status (3) |
| Config Review (2, user picked A, confirmed_by_user: true) | Auto-chain → Glossary & Architecture Vision (2.5) |
| Config Review (2, user picked B) | **Session boundary** — end session, await re-invocation |
| Glossary & Architecture Vision (2.5) | Auto-chain → Status (3) |
| Status (3, doc drift) | Auto-chain → Document Sync (4) |
| Status (3, suite-e2e drift only) | Auto-chain → Integration Test Sync (4.5) |
| Status (3, CI drift only) | Auto-chain → CICD Sync (5) |
| Status (3, no drift) | **Cycle complete** — end session, await re-invocation |
| Status (3, registry mismatch) | Ask user (A: discover, B: onboard, C: continue) |
| Document Sync (4) + CI drift pending | Auto-chain → CICD Sync (5) |
| Document Sync (4) + no CI drift | **Cycle complete** |
| Document Sync (4) + suite-e2e drift pending | Auto-chain → Integration Test Sync (4.5) |
| Document Sync (4) + CI drift only pending | Auto-chain → CICD Sync (5) |
| Document Sync (4) + no further drift | **Cycle complete** |
| Integration Test Sync (4.5) + CI drift pending | Auto-chain → CICD Sync (5) |
| Integration Test Sync (4.5) + no CI drift | **Cycle complete** |
| CICD Sync (5) | **Cycle complete** |
## Status Summary — Step List
@@ -178,29 +311,33 @@ Flow-specific slot values:
Config: _docs/_repo-config.yaml [confirmed_by_user: <true|false>, last_updated: <date>]
```
| # | Step Name | Extra state tokens (beyond the shared set) |
|---|------------------|--------------------------------------------|
| 1 | Discover | — |
| 2 | Config Review | `IN PROGRESS (awaiting human)` |
| 3 | Status | `DONE (no drift)`, `DONE (N drifts)` |
| 4 | Document Sync | `DONE (N docs)`, `SKIPPED (no doc drift)` |
| 5 | CICD Sync | `DONE (N files)`, `SKIPPED (no CI drift)` |
| # | Step Name | Extra state tokens (beyond the shared set) |
|---|------------------------------------|--------------------------------------------|
| 1 | Discover | — |
| 2 | Config Review | `IN PROGRESS (awaiting human)` |
| 2.5 | Glossary & Architecture Vision | `SKIPPED (already captured)` |
| 3 | Status | `DONE (no drift)`, `DONE (N drifts)` |
| 4 | Document Sync | `DONE (N docs)`, `SKIPPED (no doc drift)` |
| 4.5 | Integration Test Sync | `DONE (N files)`, `SKIPPED (no suite-e2e drift)`, `SKIPPED (no suite_e2e config block)` |
| 5 | CICD Sync | `DONE (N files)`, `SKIPPED (no CI drift)` |
All rows accept the shared state tokens (`DONE`, `IN PROGRESS`, `NOT STARTED`, `FAILED (retry N/3)`); rows 4 and 5 additionally accept `SKIPPED`.
All rows accept the shared state tokens (`DONE`, `IN PROGRESS`, `NOT STARTED`, `FAILED (retry N/3)`); rows 2.5, 4, 4.5, and 5 additionally accept `SKIPPED`.
Row rendering format:
```
Step 1 Discover [<state token>]
Step 2 Config Review [<state token>]
Step 3 Status [<state token>]
Step 4 Document Sync [<state token>]
Step 5 CICD Sync [<state token>]
Step 1 Discover [<state token>]
Step 2 Config Review [<state token>]
Step 2.5 Glossary & Architecture Vision [<state token>]
Step 3 Status [<state token>]
Step 4 Document Sync [<state token>]
Step 4.5 Integration Test Sync [<state token>]
Step 5 CICD Sync [<state token>]
```
## Notes for the meta-repo flow
- **No session boundary except Step 2**: unlike existing-code flow (which has boundaries around decompose), meta-repo flow only pauses at config review. Syncing is fast enough to complete in one session.
- **No session boundary except Step 2 and Step 2.5**: unlike existing-code flow (which has boundaries around decompose), meta-repo flow only pauses at config review and the one-shot glossary/vision capture. Once both are confirmed, syncing is fast enough to complete in one session and Step 2.5 idempotently no-ops on every subsequent invocation.
- **Cyclical, not terminal**: no "done forever" state. Each invocation completes a drift cycle; next invocation starts fresh.
- **No tracker integration**: this flow does NOT create Jira/ADO tickets. Maintenance is not a feature — if a feature-level ticket spans the meta-repo's concerns, it lives in the per-component workspace.
- **Onboarding is opt-in**: never auto-onboarded. User must explicitly request.
+2 -2
@@ -138,7 +138,7 @@ One retry ladder covers all failure modes: explicit failure returned by a sub-sk
Treat the sub-skill as **failed** when ANY of the following is observed:
- The sub-skill explicitly returns a failed result (including blocked subagents, auto-fix loop exhaustion, prerequisite violations).
- The sub-skill explicitly returns a failed result (including blocked tasks, auto-fix loop exhaustion, prerequisite violations).
- **Stuck signals**: the same artifact is rewritten 3+ times without meaningful change; the sub-skill re-asks a question that was already answered; no new artifact has been saved despite active execution.
### Retry ladder
@@ -291,7 +291,7 @@ For steps that produce `_docs/` artifacts (problem, research, plan, decompose, d
## Debug Protocol
When the implement skill's auto-fix loop fails (code review FAIL after 2 auto-fix attempts) or an implementer subagent reports a blocker, the user is asked to intervene. This protocol guides the debugging process. (Retry budget and escalation are covered by Failure Handling above; this section is about *how* to diagnose once the user has been looped in.)
When the implement skill's auto-fix loop fails (code review FAIL after 2 auto-fix attempts) or a task reports a blocker, the user is asked to intervene. This protocol guides the debugging process. (Retry budget and escalation are covered by Failure Handling above; this section is about *how* to diagnose once the user has been looped in.)
### Structured Debugging Workflow
+2 -2
@@ -209,7 +209,7 @@ Bug, Spec-Gap, Security, Performance, Maintainability, Style, Scope, Architectur
The `/implement` skill invokes this skill after each batch completes:
1. Collects changed files from all implementer agents in the batch
1. Collects changed files from all tasks implemented in the batch
2. Passes task spec paths + changed files to this skill
3. If verdict is FAIL — presents findings to user (BLOCKING), user fixes or confirms
4. If verdict is PASS or PASS_WITH_WARNINGS — proceeds automatically (findings shown as info)
@@ -221,7 +221,7 @@ The `/implement` skill invokes this skill after each batch completes:
| Input | Type | Source | Required |
|-------|------|--------|----------|
| `task_specs` | list of file paths | Task `.md` files from `_docs/02_tasks/todo/` for the current batch | Yes |
| `changed_files` | list of file paths | Files modified by implementer agents (from `git diff` or agent reports) | Yes |
| `changed_files` | list of file paths | Files modified by the tasks in the batch (from `git diff`) | Yes |
| `batch_number` | integer | Current batch number (for report naming) | Yes |
| `project_restrictions` | file path | `_docs/00_problem/restrictions.md` | If exists |
| `solution_overview` | file path | `_docs/01_solution/solution.md` | If exists |
+2 -1
@@ -80,7 +80,8 @@ Announce the detected mode and resolved paths to the user before proceeding.
| `_docs/00_problem/restrictions.md` | Constraints and limitations |
| `_docs/00_problem/acceptance_criteria.md` | Measurable acceptance criteria |
| `_docs/01_solution/solution.md` | Finalized solution |
| `DOCUMENT_DIR/architecture.md` | Architecture from plan skill |
| `DOCUMENT_DIR/architecture.md` | Architecture from plan/document skill (must contain a `## Architecture Vision` H2 — confirmed user intent) |
| `DOCUMENT_DIR/glossary.md` | Project terminology (confirmed by user in plan Phase 2a.0 or document Step 4.5). Use it to keep task names, component references, and AC wording consistent with the user's vocabulary |
| `DOCUMENT_DIR/system-flows.md` | System flows from plan skill |
| `DOCUMENT_DIR/components/[##]_[name]/description.md` | Component specs from plan skill |
| `DOCUMENT_DIR/tests/` | Blackbox test specs from plan skill |
@@ -1,6 +1,6 @@
# Module Layout Template
The module layout is the **authoritative file-ownership map** used by the `/implement` skill to assign OWNED / READ-ONLY / FORBIDDEN files to implementer subagents. It is derived from `_docs/02_document/architecture.md` and the component specs at `_docs/02_document/components/`, and it follows the target language's standard project-layout conventions.
The module layout is the **authoritative file-ownership map** used by the `/implement` skill to assign OWNED / READ-ONLY / FORBIDDEN files to each task. It is derived from `_docs/02_document/architecture.md` and the component specs at `_docs/02_document/components/`, and it follows the target language's standard project-layout conventions.
Save as `_docs/02_document/module-layout.md`. This file is produced by the decompose skill (Step 1.5 module layout) and consumed by the implement skill (Step 4 file ownership). Task specs remain purely behavioral — they do NOT carry file paths. The layout is the single place where component → filesystem mapping lives.
@@ -104,4 +104,4 @@ The implement skill's Step 4 (File Ownership) reads this file and, for each task
3. Set READ-ONLY = the Public API files of every component listed in `Imports from`, plus `shared/*` Public API files.
4. Set FORBIDDEN = every other component's Owns glob.
If two tasks in the same batch map to the same component, the implement skill schedules them sequentially (one implementer at a time for that component) to avoid file conflicts on shared internal files.
Execution inside a batch is already sequential (one task at a time). This mapping is still required because it enforces scope discipline per task — preventing a task from drifting into files that belong to another component.
@@ -26,7 +26,8 @@
- Application components under test
- Test runner container (black-box, no internal imports)
- Isolated database with seed data
- All tests runnable via `docker compose -f docker-compose.test.yml up --abort-on-container-exit`
- All tests runnable via `docker compose -f docker-compose.test.yml up --abort-on-container-exit --exit-code-from e2e-runner`
- See the Woodpecker two-workflow contract in [`../templates/ci_cd_pipeline.md`](../templates/ci_cd_pipeline.md) — the test runner entry point defined here becomes the first step of `.woodpecker/01-test.yml`.
7. Define image tagging strategy: `<registry>/<project>/<component>:<git-sha>` for CI, `latest` for local dev only
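A minimal compose sketch consistent with that runner command (service names, images, and fixture paths are illustrative; `--exit-code-from e2e-runner` makes this service's exit status the command's exit status):

```yaml
# docker-compose.test.yml (sketch)
services:
  app:
    build: ..                     # application component under test
  db:
    image: postgres:16
    volumes:
      - ./fixtures/init.sql:/docker-entrypoint-initdb.d/init.sql
  e2e-runner:
    build: .                      # black-box test runner, no internal imports
    depends_on: [app, db]
```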
## Self-verification
@@ -85,3 +85,140 @@ Save as `_docs/04_deploy/ci_cd_pipeline.md`.
| Deploy success | [Slack] | [team] |
| Deploy failure | [Slack/email + PagerDuty] | [on-call] |
```
---
## Reference Implementation: Woodpecker CI two-workflow contract
Use this when the project's CI is **Woodpecker** and the test layout follows the autodev e2e contract from [`../../decompose/templates/test-infrastructure-task.md`](../../decompose/templates/test-infrastructure-task.md) (an `e2e/` folder containing `Dockerfile`, `docker-compose.test.yml`, `conftest.py`, `requirements.txt`, `mocks/`, `fixtures/`, `tests/`).
The contract is **two workflows in `.woodpecker/`**, scheduled on the same agent label, with the build workflow gated on a successful test run:
- `.woodpecker/01-test.yml` — runs the e2e contract, publishes `results/report.csv` as an artifact, fails the pipeline on any test failure.
- `.woodpecker/02-build-push.yml` — `depends_on: [01-test]`. Builds the image, tags it `${CI_COMMIT_BRANCH}-${TAG_SUFFIX}`, pushes it to the registry. Skipped automatically if test failed.
The agent label is parameterized via `matrix:` so a single workflow file fans out across architectures: `labels: platform: ${PLATFORM}` routes each matrix entry to the matching agent. Both workflows for a repo must use the same matrix so test and build run on the same machine and share Docker layer cache. New architectures = new matrix entries; never new files.
### Multi-arch matrix conventions
| Variable | Meaning | Typical values |
|----------|---------|----------------|
| `PLATFORM` | Woodpecker agent label — selects which physical machine runs the entry. | `arm64`, `amd64` |
| `TAG_SUFFIX` | Image tag suffix appended after the branch name. | `arm`, `amd` |
| `DOCKERFILE` *(only when arches need different Dockerfiles)* | Path to the Dockerfile for this entry. | `Dockerfile`, `Dockerfile.jetson` |
Most repos use the same `Dockerfile` for both arches (multi-arch base images handle the rest), so `DOCKERFILE` can be omitted from the matrix and hardcoded in the build command. Repos with split per-arch Dockerfiles (e.g., `detections` uses `Dockerfile.jetson` on Jetson with TensorRT/CUDA-on-L4T) declare `DOCKERFILE` as a matrix var.
When only one architecture is currently in use, keep the matrix block with a single entry and the second entry commented out — adding a new arch is then a one-line uncomment, not a structural change.
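A matrix entry declaring the optional `DOCKERFILE` variable, sketched (the Jetson path mirrors the example above; treat the concrete values as illustrative):

```yaml
matrix:
  include:
    - PLATFORM: arm64
      TAG_SUFFIX: arm
      DOCKERFILE: Dockerfile.jetson   # only when arches need different Dockerfiles
    # - PLATFORM: amd64
    #   TAG_SUFFIX: amd
    #   DOCKERFILE: Dockerfile
```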
### `.woodpecker/01-test.yml`
```yaml
when:
event: [push, pull_request, manual]
branch: [dev, stage, main]
matrix:
include:
- PLATFORM: arm64
TAG_SUFFIX: arm
# - PLATFORM: amd64
# TAG_SUFFIX: amd
labels:
platform: ${PLATFORM}
steps:
- name: e2e
image: docker
commands:
- cd e2e
- docker compose -f docker-compose.test.yml up --abort-on-container-exit --exit-code-from e2e-runner --build
- docker compose -f docker-compose.test.yml down -v
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- name: report
image: docker
when:
status: [success, failure]
commands:
- test -f e2e/results/report.csv && cat e2e/results/report.csv || echo "no report"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
```
Notes:
- `--abort-on-container-exit` shuts the whole compose down as soon as ANY service exits, so a crashed dependency surfaces immediately instead of hanging the runner.
- `--exit-code-from e2e-runner` ensures the pipeline's exit code reflects the test runner's, not the SUT's.
- The `report` step runs on `[success, failure]` so the report is always published; without this the CSV is lost on red builds.
- `down -v` between runs drops mock state and DB volumes — every test run starts clean.
### `.woodpecker/02-build-push.yml`
```yaml
when:
event: [push, manual]
branch: [dev, stage, main]
depends_on:
- 01-test
matrix:
include:
- PLATFORM: arm64
TAG_SUFFIX: arm
# - PLATFORM: amd64
# TAG_SUFFIX: amd
labels:
platform: ${PLATFORM}
steps:
- name: build-push
image: docker
environment:
REGISTRY_HOST:
from_secret: registry_host
REGISTRY_USER:
from_secret: registry_user
REGISTRY_TOKEN:
from_secret: registry_token
commands:
- echo "$REGISTRY_TOKEN" | docker login "$REGISTRY_HOST" -u "$REGISTRY_USER" --password-stdin
- export TAG=${CI_COMMIT_BRANCH}-${TAG_SUFFIX}
- export BUILD_DATE=$(date -u +%Y-%m-%dT%H:%M:%SZ)
- |
docker build -f Dockerfile \
--build-arg CI_COMMIT_SHA=$CI_COMMIT_SHA \
--label org.opencontainers.image.revision=$CI_COMMIT_SHA \
--label org.opencontainers.image.created=$BUILD_DATE \
--label org.opencontainers.image.source=$CI_REPO_URL \
-t $REGISTRY_HOST/azaion/<service>:$TAG .
- docker push $REGISTRY_HOST/azaion/<service>:$TAG
volumes:
- /var/run/docker.sock:/var/run/docker.sock
```
Notes:
- `depends_on: [01-test]` is enforced by Woodpecker — a failed `01-test` (any matrix entry) skips this workflow.
- The build workflow does NOT trigger on `pull_request` events: PRs get test signal only; pushes to `dev`/`stage`/`main` produce images. Avoids polluting the registry with PR images.
- Replace `<service>` with the actual service name (matches the registry namespace pattern `azaion/<service>`).
- For repos with split per-arch Dockerfiles, add `DOCKERFILE: Dockerfile.jetson` (or similar) to the matrix entry and substitute `${DOCKERFILE}` for `Dockerfile` in the `docker build -f` line.
### Variations by stack
The contract is language-agnostic because the runner is `docker compose`. The Dockerfile inside `e2e/` selects the test framework:
| Stack | `e2e/Dockerfile` runs |
|-------|----------------------|
| Python | `pytest --csv=/results/report.csv -v` |
| .NET | `dotnet test --logger:"trx;LogFileName=/results/report.trx"` (convert to CSV in a final step if needed) |
| Node/UI | `npm test -- --reporters=default --reporters=jest-junit --outputDirectory=/results` |
| Rust | `cargo test --no-fail-fast -- --format json > /results/report.json` |
When the repo has **only unit tests** (no `e2e/docker-compose.test.yml`), drop the compose orchestration and run the native test command directly inside a stack-appropriate image. Keep the same two-workflow split — `01-test.yml` runs unit tests, `02-build-push.yml` is unchanged.
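A sketch of that unit-test-only `01-test.yml` variant (Python stack assumed; the image tag and install command are illustrative, and `02-build-push.yml` stays exactly as shown above):

```yaml
# .woodpecker/01-test.yml, unit-tests-only variant (sketch)
when:
  event: [push, pull_request, manual]
  branch: [dev, stage, main]
labels:
  platform: arm64
steps:
  - name: unit-tests
    image: python:3.12-slim       # stack-appropriate image instead of compose orchestration
    commands:
      - pip install -r requirements.txt
      - pytest -v
```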
### Manual-trigger override (test infrastructure not yet validated)
If a repo ships a complete `e2e/` layout but the test fixtures are not yet validated end-to-end (e.g., expected-results data is still being authored), gate `01-test.yml` on `event: [manual]` only and add a TODO comment pointing to the unblocking task. The `02-build-push.yml` workflow drops its `depends_on` clause for the manual-only window — an explicit and reversible exception, not a permanent split.
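The override, sketched (the TODO reference is a placeholder for the real unblocking task):

```yaml
# .woodpecker/01-test.yml during the manual-only window (sketch)
when:
  event: [manual]   # TODO(<unblocking-task>): restore [push, pull_request, manual] once fixtures are validated
```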
@@ -31,6 +31,7 @@ _docs/
│ ├── components.md
│ └── flows/
├── 04_verification_log.md # Step 4
├── glossary.md # Step 4.5 (confirmed-by-user)
├── FINAL_report.md # Step 7
└── state.json # Resumability
```
@@ -49,6 +50,7 @@ Maintained in `DOCUMENT_DIR/state.json` for resumability:
"modules_remaining": ["services/auth", "api/endpoints"],
"module_batch": 1,
"components_written": [],
"step_4_5_glossary_vision": "not_started",
"last_updated": "2026-03-21T14:00:00Z"
}
```
+102 -2
@@ -15,7 +15,7 @@ Covers three related modes that share the same 8-step pipeline:
## Progress Tracking
Create a TodoWrite with all steps (0 through 7). Update status as each step completes.
Create a TodoWrite with all steps (0 through 7, including the inline Step 2.5 Module Layout Derivation and Step 4.5 Glossary & Architecture Vision). Update status as each step completes.
## Steps
@@ -251,7 +251,107 @@ Apply corrections inline to the documents that need them.
**BLOCKING**: Present verification summary to user. Do NOT proceed until user confirms corrections are acceptable or requests additional fixes.
**Session boundary**: After verification is confirmed, suggest a session break before proceeding to the synthesis steps (5–7). These steps produce different artifact types and benefit from fresh context:
---
### Step 4.5: Glossary & Architecture Vision (BLOCKING)
**Role**: Software architect + business analyst
**Goal**: Reconcile the AI's verified understanding of the codebase with the user's intended terminology and architecture vision. Existing-code projects often carry domain language and structural intent that is invisible from code alone (synonyms, deprecated names, modules that are "supposed to" be split, components the user thinks of as one logical unit even though they live in two folders). This step makes that intent explicit before any downstream skill (refactor, decompose, new-task) acts on the docs.
**When this step runs**:
- Always, after Step 4 (Verification Pass) — for Full and Resume modes.
- **Skipped** in Focus Area mode (the glossary/vision is system-wide; running it on a partial scan would produce a partial glossary). Prompt the user to run it again once a full pass exists.
**Inputs** (already on disk after Step 4):
- `DOCUMENT_DIR/architecture.md`, `system-flows.md`, `data_model.md`, `deployment/*`
- `DOCUMENT_DIR/components/*/description.md`
- `DOCUMENT_DIR/modules/*.md`
- `DOCUMENT_DIR/04_verification_log.md` (so the AI knows which doc parts are confirmed vs. flagged)
**Outputs**:
- `DOCUMENT_DIR/glossary.md` (NEW)
- `DOCUMENT_DIR/architecture.md` updated in place: a new `## Architecture Vision` section is prepended (or merged into an existing "Overview" / "Vision" heading if already present); existing technical sections are preserved verbatim
**Procedure**:
1. **Draft glossary** from verified docs:
- Domain entities, processes, roles named in module/component docs
- Acronyms / abbreviations
- Internal codenames (project, service, model names) that recur in the codebase
- Synonym pairs the AI noticed (e.g., the codebase uses "flight" but module comments say "mission")
- Stakeholder personas if any docs reference them
Each entry: one-line definition + source reference (`source: components/03_flights/description.md`). Skip generic CS/industry terms.
2. **Draft architecture vision** as the AI currently understands the codebase:
- **One paragraph**: what the system is, who runs it, the runtime topology shape (monolith / services / pipeline / library / hybrid), and the dominant pattern (e.g., "submodule-based meta-repo with REST + SSE between UI and backend").
- **Components & responsibilities** (one-line each), pulled from `components/*/description.md`.
- **Major data flows** (one or two sentences each), pulled from `system-flows.md`.
- **Architectural principles / non-negotiables** the AI inferred from the code (e.g., "DB-driven config", "all UI traffic via REST + SSE only", "no per-component shared state"). Mark each with `inferred-from: <source>`.
- **Open questions / drift signals**: places where the code disagrees with itself, or where the AI cannot tell intent from implementation (e.g., two components doing similar work — is that legacy duplication or deliberate?).
3. **Present condensed view** to the user (NOT the full draft files — a synopsis only):
```
══════════════════════════════════════
REVIEW: Glossary + Architecture Vision (existing code)
══════════════════════════════════════
Glossary (N terms drafted from verified docs):
- <Term>: <one-line definition>
- ...
Architecture Vision — as inferred from the codebase:
<one-paragraph synopsis>
Components / responsibilities:
- <component>: <one-line>
- ...
Principles / non-negotiables (inferred):
- <principle> [inferred-from: <source>]
- ...
Open questions / drift signals:
- <q1>
- <q2>
══════════════════════════════════════
A) Inferred vision matches my intent — write the files
B) Add / correct entries (provide diffs — terms, components,
principles, or rename pairs)
C) Resolve the open questions / drift signals first
══════════════════════════════════════
Recommendation: pick C if any drift signals exist;
otherwise B if the vision misses
project-specific intent; A only when
the inferred vision is exactly right.
══════════════════════════════════════
```
4. **Iterate**:
- On B → integrate the user's diffs/additions, re-present, loop until A.
- On C → ask the listed open questions in one batch (M4-style), integrate answers, re-present.
- **Do NOT proceed to step 5 until the user picks A.**
5. **Save**:
- Write `DOCUMENT_DIR/glossary.md`, alphabetical, with a top-line `**Status**: confirmed-by-user` and the date.
- Update `DOCUMENT_DIR/architecture.md`:
- If a `## Architecture Vision` (or `## Vision` / `## Overview`) section already exists at the top, replace its body with the confirmed paragraph + components + principles.
- Otherwise, insert `## Architecture Vision` as the first H2 after the title; preserve every existing H2 below.
- Do NOT delete or re-order existing technical sections (Tech Stack, Deployment Model, Data Model, NFRs, ADRs).
6. **Update `state.json`**: mark `step_4_5_glossary_vision: confirmed`. Resume on rerun must skip this step unless the user explicitly invokes `/document --refresh-vision`.
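The merge rule in step 5 can be sketched as follows — a minimal sketch assuming H2-style markdown headings, not the skill's literal implementation (the function name is illustrative):

```python
import re

VISION_HEADINGS = ("## Architecture Vision", "## Vision", "## Overview")

def upsert_vision(arch_md: str, vision_body: str) -> str:
    """Insert or replace the Architecture Vision section, preserving every other H2."""
    # Split on H2 headings, keeping the headings themselves:
    # parts = [preamble, "## Heading", body, "## Heading", body, ...]
    parts = re.split(r"(?m)^(## .+)$", arch_md)
    for i in range(1, len(parts), 2):
        if parts[i].strip() in VISION_HEADINGS:
            # Replace the existing vision-like section's body in place.
            parts[i] = "## Architecture Vision"
            parts[i + 1] = "\n\n" + vision_body + "\n\n"
            return "".join(parts)
    # No existing section: insert as the first H2 after the title/preamble,
    # leaving all existing H2 sections below, untouched and in order.
    return (
        parts[0]
        + "## Architecture Vision\n\n"
        + vision_body
        + "\n\n"
        + "".join(parts[1:])
    )
```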
**Self-verification**:
- [ ] Every glossary entry traces to at least one file under `DOCUMENT_DIR/`
- [ ] Every component listed in the vision matches a folder under `DOCUMENT_DIR/components/`
- [ ] All open questions are answered or explicitly deferred (with the user's acknowledgement)
- [ ] `architecture.md` still contains all H2 sections it had before this step
- [ ] User picked option A on the latest condensed view
**BLOCKING**: Do NOT proceed to the session boundary / Step 5 until both files are saved and the user has picked A.
---
**Session boundary**: After Step 4.5 is confirmed, suggest a session break before proceeding to the synthesis steps (5–7). These steps produce different artifact types and benefit from fresh context:
```
══════════════════════════════════════
+41 -42
View File
@@ -1,32 +1,31 @@
---
name: implement
description: |
Orchestrate task implementation with dependency-aware batching, parallel subagents, and integrated code review.
Implement tasks sequentially with dependency-aware batching and integrated code review.
Reads flat task files and _dependencies_table.md from TASKS_DIR, computes execution batches via topological sort,
launches up to 4 implementer subagents in parallel, runs code-review skill after each batch, and loops until done.
implements tasks one at a time in dependency order, runs code-review skill after each batch, and loops until done.
Use after /decompose has produced task files.
Trigger phrases:
- "implement", "start implementation", "implement tasks"
- "run implementers", "execute tasks"
- "execute tasks"
category: build
tags: [implementation, orchestration, batching, parallel, code-review]
tags: [implementation, batching, code-review]
disable-model-invocation: true
---
# Implementation Orchestrator
# Implementation Runner
Orchestrate the implementation of all tasks produced by the `/decompose` skill. This skill is a **pure orchestrator** — it does NOT write implementation code itself. It reads task specs, computes execution order, delegates to `implementer` subagents, validates results via the `/code-review` skill, and escalates issues.
Implement all tasks produced by the `/decompose` skill. This skill reads task specs, computes execution order, writes the code and tests for each task **sequentially** (no subagents, no parallel execution), validates results via the `/code-review` skill, and escalates issues.
The `implementer` agent is the specialist that writes all the code — it receives a task spec, analyzes the codebase, implements the feature, writes tests, and verifies acceptance criteria.
For each task the main agent receives a task spec, analyzes the codebase, implements the feature, writes tests, and verifies acceptance criteria — then moves on to the next task.
## Core Principles
- **Orchestrate, don't implement**: this skill delegates all coding to `implementer` subagents
- **Dependency-aware batching**: tasks run only when all their dependencies are satisfied
- **Max 4 parallel agents**: never launch more than 4 implementer subagents simultaneously
- **File isolation**: no two parallel agents may write to the same file
- **Sequential execution**: implement one task at a time. Do NOT spawn subagents and do NOT run tasks in parallel. (See `.cursor/rules/no-subagents.mdc`.)
- **Dependency-aware ordering**: tasks run only when all their dependencies are satisfied
- **Batching for review, not parallelism**: tasks are grouped into batches so `/code-review` and commits operate on a coherent unit of work — all tasks inside a batch are still implemented one after the other
- **Integrated review**: `/code-review` skill runs automatically after each batch
- **Auto-start**: batches launch immediately — no user confirmation before a batch
- **Auto-start**: batches start immediately — no user confirmation before a batch
- **Gate on failure**: user confirmation is required only when code review returns FAIL
- **Commit per batch**: after each batch is confirmed, commit. Ask the user whether to push to remote unless the user previously opted into auto-push for this session.
@@ -56,7 +55,7 @@ TASKS_DIR/
- A) Commit or stash stray changes manually, then re-invoke `/implement`
- B) Agent commits stray changes as a single `chore: WIP pre-implement` commit and proceeds
- C) Abort
- Rationale: implementer subagents edit files in parallel and commit per batch. Unrelated uncommitted changes get silently folded into batch commits otherwise.
- Rationale: each batch ends with a commit. Unrelated uncommitted changes would get silently folded into batch commits otherwise.
- This check is repeated at the start of each batch iteration (see step 6 / step 14 Loop).
## Algorithm
@@ -78,8 +77,8 @@ TASKS_DIR/
- Topological sort remaining tasks
- Select tasks whose dependencies are ALL satisfied (completed)
- If a ready task depends on any task currently being worked on in this batch, it must wait for the next batch
- Cap the batch at 4 parallel agents
- A batch is simply a coherent group of tasks for review + commit. Within the batch, tasks are implemented sequentially in topological order.
- Cap the batch size at a reasonable review scope (default: 4 tasks)
- If the batch would exceed 20 total complexity points, suggest splitting and let the user decide
### 4. Assign File Ownership
@@ -89,11 +88,12 @@ The authoritative file-ownership map is `_docs/02_document/module-layout.md` (pr
For each task in the batch:
- Read the task spec's **Component** field.
- Look up the component in `_docs/02_document/module-layout.md` → Per-Component Mapping.
- Set **OWNED** = the component's `Owns` glob (exclusive write for the duration of the batch).
- Set **OWNED** = the component's `Owns` glob (the files this task is allowed to write).
- Set **READ-ONLY** = Public API files of every component in the component's `Imports from` list, plus all `shared/*` Public API files.
- Set **FORBIDDEN** = every other component's `Owns` glob, and every other component's internal (non-Public API) files.
- If the task is a shared / cross-cutting task (lives under `shared/*`), OWNED = that shared directory; READ-ONLY = nothing; FORBIDDEN = every component directory.
- If two tasks in the same batch map to the same component or overlapping `Owns` globs, schedule them sequentially instead of in parallel.
Since execution is sequential, there is no parallel-write conflict to resolve; ownership here is a **scope discipline** check — it stops a task from drifting into unrelated components even when alone.
If `_docs/02_document/module-layout.md` is missing or the component is not found:
- STOP the batch.
@@ -102,31 +102,30 @@ If `_docs/02_document/module-layout.md` is missing or the component is not found
### 5. Update Tracker Status → In Progress
For each task in the batch, transition its ticket status to **In Progress** via the configured work item tracker (see `protocols.md` for tracker detection) before launching the implementer. If `tracker: local`, skip this step.
For each task in the batch, transition its ticket status to **In Progress** via the configured work item tracker (see `protocols.md` for tracker detection) before starting work. If `tracker: local`, skip this step.
### 6. Launch Implementer Subagents
### 6. Implement Tasks Sequentially
**Per-batch dirty-tree re-check**: before launching subagents, run `git status --porcelain`. On the first batch this is guaranteed clean by the prerequisite check. On subsequent batches, the previous batch ended with a commit so the tree should still be clean. If the tree is dirty at this point, STOP and surface the dirty files to the user using the same A/B/C choice as the prerequisite check. The most likely causes are a failed commit in the previous batch, a user who edited files mid-loop, or a pre-commit hook that re-wrote files and was not captured.
**Per-batch dirty-tree re-check**: before starting the batch, run `git status --porcelain`. On the first batch this is guaranteed clean by the prerequisite check. On subsequent batches, the previous batch ended with a commit so the tree should still be clean. If the tree is dirty at this point, STOP and surface the dirty files to the user using the same A/B/C choice as the prerequisite check. The most likely causes are a failed commit in the previous batch, a user who edited files mid-loop, or a pre-commit hook that re-wrote files and was not captured.
For each task in the batch, launch an `implementer` subagent with:
- Path to the task spec file
- List of files OWNED (exclusive write access)
- List of files READ-ONLY
- List of files FORBIDDEN
- **Explicit instruction**: the implementer must write or update tests that validate each acceptance criterion in the task spec. If a test cannot run in the current environment (e.g., TensorRT requires GPU), the test must still be written and skip with a clear reason.
For each task in the batch **in topological order, one at a time**:
1. Read the task spec file.
2. Respect the file-ownership envelope computed in Step 4 (OWNED / READ-ONLY / FORBIDDEN).
3. Implement the feature and write/update tests for every acceptance criterion in the spec. If a test cannot run in the current environment (e.g., TensorRT requires GPU), the test must still be written and skip with a clear reason.
4. Run the relevant tests locally before moving on to the next task in the batch. If tests fail, fix in-place — do not defer.
5. Capture a short per-task status line (files changed, tests pass/fail, any blockers) for the batch report.
Launch all subagents immediately — no user confirmation.
Do NOT spawn subagents and do NOT attempt to implement two tasks simultaneously, even if they touch disjoint files. See `.cursor/rules/no-subagents.mdc`.
### 7. Monitor
### 7. Collect Status
- Wait for all subagents to complete
- Collect structured status reports from each implementer
- If any implementer reports "Blocked", log the blocker and continue with others
- After all tasks in the batch are finished, aggregate the per-task status lines into a structured batch status.
- If any task reported "Blocked", log the blocker with the failing task's ID and continue — the batch report will surface it.
**Stuck detection** — while monitoring, watch for these signals per subagent:
- Same file modified 3+ times without test pass rate improving → flag as stuck, stop the subagent, report as Blocked
- Subagent has not produced new output for an extended period → flag as potentially hung
- If a subagent is flagged as stuck, do NOT let it continue looping — stop it and record the blocker in the batch report
**Stuck detection** — while implementing a task, watch for these signals in your own progress:
- The same file has been rewritten 3+ times without tests going green → stop, mark the task Blocked, and move to the next task in the batch (the user will be asked at the end of the batch).
- You have tried 3+ distinct approaches without evidence-driven progress → stop, mark Blocked, move on.
- Do NOT loop indefinitely on a single task. Record the blocker and proceed.
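One way to make the rewrite counter concrete — a sketch only, with thresholds mirroring the rules above:

```python
from collections import Counter

class StuckDetector:
    """Flags a task as stuck when the same file is rewritten `limit`+ times
    without the tests going green. A green run resets all counters."""

    def __init__(self, limit: int = 3):
        self.limit = limit
        self.rewrites_since_green = Counter()

    def record(self, path: str, tests_green: bool) -> bool:
        """Record one rewrite of `path`; return True if the task is now stuck."""
        if tests_green:
            self.rewrites_since_green.clear()
            return False
        self.rewrites_since_green[path] += 1
        return self.rewrites_since_green[path] >= self.limit
```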
### 8. AC Test Coverage Verification
@@ -139,8 +138,8 @@ Before code review, verify that every acceptance criterion in each task spec has
- **Not covered**: no test exists for this AC
If any AC is **Not covered**:
- This is a **BLOCKING** failure — the implementer must write the missing test before proceeding
- Re-launch the implementer with the specific ACs that need tests
- This is a **BLOCKING** failure — the missing test must be written before proceeding
- Go back to the offending task, add tests for the specific ACs that lack coverage, then re-run this check
- If the test cannot run in the current environment (GPU required, platform-specific, external service), the test must still exist and skip with `pytest.mark.skipif` or `pytest.skip()` explaining the prerequisite
- A skipped test counts as **Covered** — the test exists and will run when the environment allows
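The environment-gated skip can be sketched like this — the AC number, test name, and GPU probe are illustrative, not taken from any task spec:

```python
import shutil

import pytest

# Hypothetical AC-3: TensorRT inference requires a GPU. The test must still
# exist; it skips with a clear reason when the prerequisite is absent.
requires_gpu = pytest.mark.skipif(
    shutil.which("nvidia-smi") is None,
    reason="AC-3 needs a CUDA GPU; runs only in the GPU-enabled environment",
)

@requires_gpu
def test_ac3_tensorrt_inference_produces_detections():
    # Real assertions would load the engine and check outputs against the spec.
    assert True
```

Because the test exists and carries an explicit skip reason, the coverage check above counts it as **Covered**.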
@@ -264,9 +263,9 @@ After each batch, produce a structured report:
| Situation | Action |
|-----------|--------|
| Implementer fails same approach 3+ times | Stop it, escalate to user |
| Same task rewritten 3+ times without green tests | Mark Blocked, continue batch, escalate at batch end |
| Task blocked on external dependency (not in task list) | Report and skip |
| File ownership conflict unresolvable | ASK user |
| File ownership violated (task wrote outside OWNED) | ASK user |
| Test failure after final test run | Delegate to test-run skill — blocking gate |
| All tasks complete | Report final summary, suggest final commit |
| `_dependencies_table.md` missing | STOP — run `/decompose` first |
@@ -281,7 +280,7 @@ Each batch commit serves as a rollback checkpoint. If recovery is needed:
## Safety Rules
- Never launch tasks whose dependencies are not yet completed
- Never allow two parallel agents to write to the same file
- If a subagent fails or is flagged as stuck, stop it and report — do not let it loop indefinitely
- Never start a task whose dependencies are not yet completed
- Never run tasks in parallel and never spawn subagents — see `.cursor/rules/no-subagents.mdc`
- If a task is flagged as stuck, stop working on it and report — do not let it loop indefinitely
- Always run the full test suite after all batches complete (step 15)
@@ -3,29 +3,31 @@
## Topological Sort with Batch Grouping
The `/implement` skill uses a topological sort to determine execution order,
then groups tasks into batches for parallel execution.
then groups tasks into batches for code review and commit. Execution within a
batch is **sequential** — see `.cursor/rules/no-subagents.mdc`.
## Algorithm
1. Build adjacency list from `_dependencies_table.md`
2. Compute in-degree for each task node
3. Initialize batch 0 with all nodes that have in-degree 0
3. Initialize the ready set with all nodes that have in-degree 0
4. For each batch:
a. Select up to 4 tasks from the ready set
b. Check file ownership — if two tasks would write the same file, defer one to the next batch
c. Launch selected tasks as parallel implementer subagents
d. When all complete, remove them from the graph and decrement in-degrees of dependents
e. Add newly zero-in-degree nodes to the next batch's ready set
a. Select up to 4 tasks from the ready set (default batch size cap)
b. Implement the selected tasks one at a time in topological order
c. When all tasks in the batch complete, remove them from the graph and
decrement in-degrees of dependents
d. Add newly zero-in-degree nodes to the ready set
5. Repeat until the graph is empty; if tasks remain with nonzero in-degree, the dependency table contains a cycle — stop and report it
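Assuming `_dependencies_table.md` has already been parsed into a task → prerequisites map, the batching loop is essentially Kahn's algorithm — a minimal sketch, not the skill's actual implementation:

```python
from collections import defaultdict

def plan_batches(deps: dict[str, list[str]], cap: int = 4) -> list[list[str]]:
    """Group tasks into review batches; execution inside a batch stays sequential."""
    indegree = {task: len(requires) for task, requires in deps.items()}
    dependents = defaultdict(list)
    for task, requires in deps.items():
        for req in requires:
            dependents[req].append(task)
    ready = sorted(t for t, d in indegree.items() if d == 0)
    batches = []
    while ready:
        batch, ready = ready[:cap], ready[cap:]  # cap = default batch size
        batches.append(batch)
        for done in batch:  # batch complete: release dependents
            for nxt in dependents[done]:
                indegree[nxt] -= 1
                if indegree[nxt] == 0:
                    ready.append(nxt)
        ready.sort()  # lower-numbered (more foundational) tasks first
    if any(d > 0 for d in indegree.values()):
        raise ValueError("dependency cycle detected in _dependencies_table.md")
    return batches
```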
## File Ownership Conflict Resolution
## Ordering Inside a Batch
When two tasks in the same batch map to overlapping files:
- Prefer to run the lower-numbered task first (it's more foundational)
- Defer the higher-numbered task to the next batch
- If both have equal priority, ask the user
Tasks inside a batch are executed in topological order — a task is only
started after every task it depends on (inside the batch or in a previous
batch) is done. When two tasks have the same topological rank, prefer the
lower-numbered (more foundational) task first.
## Complexity Budget
Each batch should not exceed 20 total complexity points.
If it does, split the batch and let the user choose which tasks to include.
The budget exists to keep the per-batch code review scope reviewable.
+2 -1
View File
@@ -129,7 +129,8 @@ If `_docs/_repo-config.yaml` already exists:
- Entries removed (component removed from registry)
4. **Ask the user** whether to apply the diff.
5. If applied, **preserve `confirmed: true` flags** for entries that still match — don't reset human-approved mappings.
6. If user declines, stop — leave config untouched.
6. **Preserve user-owned top-level keys verbatim**: `glossary_doc:` (written by autodev meta-repo Step 2.5) and any `assumptions_log:` entries are NEVER edited or removed by this skill. Carry them through unchanged. If the file referenced by `glossary_doc:` no longer exists on disk, surface as an `unresolved:` question — do not auto-clear the field.
7. If user declines, stop — leave config untouched.
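The carry-through of user-owned keys can be sketched as — key names come from the config layout above; the function name is illustrative:

```python
USER_OWNED_KEYS = ("glossary_doc", "assumptions_log")

def carry_user_owned(existing: dict, regenerated: dict) -> dict:
    """Return the regenerated config with user-owned top-level keys restored verbatim."""
    merged = dict(regenerated)
    for key in USER_OWNED_KEYS:
        if key in existing:
            merged[key] = existing[key]  # never edited or removed by this skill
    return merged
```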
### Phase 8: Batch question checkpoint (M4)
@@ -15,6 +15,8 @@ Propagates component changes into the unified documentation set. Strictly scoped
| Root `README.md` **only** if `_repo-config.yaml` lists it as a doc target (e.g., services table) | Install scripts (`ci-*.sh`) → use `monorepo-cicd` |
| Docs index (`_docs/README.md` or similar) cross-reference tables | Component-internal docs (`<component>/README.md`, `<component>/docs/*`) |
| Cross-cutting docs listed in `docs.cross_cutting` | `_docs/_repo-config.yaml` itself (only `monorepo-discover` and `monorepo-onboard` write it) |
| Body of cross-cutting docs **except** the `## Architecture Vision` section (preserved verbatim — owned by autodev meta-repo Step 2.5) | The file at `glossary_doc:` (user-confirmed; only autodev meta-repo Step 2.5 rewrites it). New project terms surfaced during sync are reported back to the user, not silently appended |
| `## Architecture Vision` body — read-only, may be referenced for terminology consistency but never edited | — |
If a component change requires CI/env updates too, tell the user to also run `monorepo-cicd`. This skill does NOT cross domains.
@@ -166,6 +168,8 @@ Append to `_docs/_repo-config.yaml` under `assumptions_log:`:
- Change `confirmed_by_user` or any `confirmed: <bool>` flag
- Auto-commit or push
- Guess a mapping not in the config
- Edit `glossary_doc:` (the file recorded under the config's `glossary_doc:` key)
- Edit the `## Architecture Vision` section of any cross-cutting doc; if a sync would conflict with that section, surface the conflict to the user and skip — do not silently rewrite user-confirmed content
## Edge cases
+152
View File
@@ -0,0 +1,152 @@
---
name: monorepo-e2e
description: Syncs the suite-level integration e2e harness (`e2e/docker-compose.suite-e2e.yml`, fixtures, Playwright runner) when component contracts drift in ways that affect the cross-service scenario. Reads `_docs/_repo-config.yaml` to know which suite-e2e artifacts are in play. Touches ONLY suite-e2e files — never per-component CI, docs, or component internals. Use when a component changes a port, env var, public API endpoint, DB schema column, or detection model that the suite e2e exercises.
---
# Monorepo Suite-E2E
Propagates component changes into the suite-level integration e2e harness. Strictly scoped — never edits docs, component internals, per-component CI configs, or the production deploy compose.
## Scope — explicit
| In scope | Out of scope |
| -------- | ------------ |
| `e2e/docker-compose.suite-e2e.yml` (overlay, healthchecks, seed services) | Production `_infra/deploy/<target>/docker-compose.yml` — `monorepo-cicd` owns it |
| `e2e/fixtures/init.sql` (seeded rows that the spec depends on) | Component DB migrations — owned by each component |
| `e2e/fixtures/expected_detections.json` (detection baseline) | Detection model itself — owned by `detections/` |
| `e2e/runner/tests/*.spec.ts` selector / contract-driven edits | New scenarios (user-driven, not drift-driven) |
| `e2e/runner/Dockerfile` / `package.json` Playwright version bumps | Net-new e2e infrastructure (use `monorepo-onboard` or initial scaffolding) |
| `.woodpecker/suite-e2e.yml` (suite-level pipeline) | Per-component `.woodpecker/01-test.yml` / `02-build-push.yml` — `monorepo-cicd` owns those |
| Suite-e2e leftover entries under `_docs/_process_leftovers/` | Per-component leftovers — owned by each component |
If a component change needs doc updates too, tell the user to also run `monorepo-document`. If it needs production-deploy or per-component CI updates, run `monorepo-cicd`. This skill **only** updates the suite-e2e surface.
## Preconditions (hard gates)
1. `_docs/_repo-config.yaml` exists.
2. Top-level `confirmed_by_user: true`.
3. `suite_e2e.*` section is populated in config (see "Required config block" below). If absent, abort and ask the user to extend the config via `monorepo-discover`.
4. Components-in-scope have confirmed contract mappings (port, public API path, DB tables touched), OR user explicitly approves inferred ones.
## Required config block
This skill expects `_docs/_repo-config.yaml` to carry:
```yaml
suite_e2e:
overlay: e2e/docker-compose.suite-e2e.yml
fixtures:
init_sql: e2e/fixtures/init.sql
baseline_json: e2e/fixtures/expected_detections.json
binary_fixtures:
- e2e/fixtures/sample.mp4
- e2e/fixtures/model.tar.gz
runner:
dockerfile: e2e/runner/Dockerfile
package_json: e2e/runner/package.json
spec_dir: e2e/runner/tests
pipeline: .woodpecker/suite-e2e.yml
scenario:
description: "Upload video → detect → overlays → dataset → DB persistence"
components_exercised:
- ui
- annotations
- detections
- postgres-local
api_contracts:
- component: ui
path: /api/admin/auth/login
- component: annotations
path: /api/annotations/media/batch
- component: annotations
path: /api/annotations/media/{id}/annotations
db_tables:
- media
- annotations
- detection
- detection_classes
model_pin:
detections_repo_path: <path-to-model-config-or-classes-source>
classes_source: annotations/src/Database/DatabaseMigrator.cs
```
If `suite_e2e:` is missing the skill **stops** — it does not invent a default mapping.
## Mitigations (M1–M7)
- **M1** Separation: this skill only touches suite-e2e files; no production deploy compose, no per-component CI, no docs, no component internals.
- **M3** Factual vs. interpretive: port, env var, API path, DB column — FACTUAL, read from the components' code. Whether a baseline still matches the model — DEFERRED to the user (the skill flags drift, never silently re-records).
- **M4** Batch questions at checkpoints.
- **M5** Skip over guess: a component change that doesn't map cleanly to one of the in-scope artifacts → skip and report.
- **M6** Assumptions footer + append to `_repo-config.yaml` `assumptions_log`.
- **M7** Drift detection: verify every path under `suite_e2e.*` exists on disk; stop if not.
## Workflow
### Phase 1: Drift check (M7)
Verify every file listed under `suite_e2e.*` (excluding `binary_fixtures`, which are gitignored) exists on disk. Missing file → stop and ask:
- Run `monorepo-discover` to refresh, OR
- Skip the missing artifact (recorded in report)
For `binary_fixtures` paths that are absent (expected — they live in S3/LFS), check whether `expected_detections.json._meta.video_sha256` is still a `TBD-...` placeholder. If yes, surface this as a known leftover (`_docs/_process_leftovers/2026-04-22_suite-e2e-binary-fixtures.md`) and continue.
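Assuming the config has been parsed into nested dicts (e.g., with a YAML loader), the M7 existence check is essentially — a sketch, with key names taken from the Required config block above:

```python
from pathlib import Path

def drift_check(cfg: dict) -> list[str]:
    """Phase 1 (M7): return suite_e2e paths missing on disk.
    binary_fixtures are exempt — they are gitignored by design."""
    se = cfg["suite_e2e"]
    paths = [
        se["overlay"],
        se["fixtures"]["init_sql"],
        se["fixtures"]["baseline_json"],
        se["runner"]["dockerfile"],
        se["runner"]["package_json"],
        se["runner"]["spec_dir"],
        se["pipeline"],
    ]
    return [p for p in paths if not Path(p).exists()]
```

A non-empty result stops the skill and triggers the refresh-or-skip question above.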
### Phase 2: Determine scope
Same as `monorepo-cicd` Phase 2 — ask the user, or auto-detect. For **auto-detect**, flag commits that touch suite-e2e-relevant concerns:
| Commit pattern | Suite-e2e impact |
| -------------- | ---------------- |
| New port exposed by `<component>` | Healthcheck override may change in `e2e/docker-compose.suite-e2e.yml` |
| New required env var on `<component>` | `e2e/docker-compose.suite-e2e.yml` `e2e-runner` env block + `init.sql` seed |
| Public API path renamed / removed | Spec selector / API call path in `e2e/runner/tests/*.spec.ts` |
| DB schema column renamed in a `db_tables` entry | `init.sql` column reference + spec `pg.query` text |
| New required DB table referenced by spec | `init.sql` insert block (skip if owned by component migration) |
| Detection model rev change in `detections/` | `expected_detections.json` `_meta.model.revision` + flag baseline as stale |
| New canonical detection class added | `expected_detections.json._meta` annotation |
Present the flagged list; confirm.
### Phase 3: Classify changes per component
| Change type | Target suite-e2e files |
| ----------- | ---------------------- |
| Port / env var change | `e2e/docker-compose.suite-e2e.yml` |
| API path / contract change | `e2e/runner/tests/*.spec.ts` |
| DB schema reference change | `e2e/fixtures/init.sql` and spec SQL queries |
| Model / class catalog change | `e2e/fixtures/expected_detections.json` (mark `_meta.fixture_version` bump + leftover entry for binary refresh) |
| Playwright dependency drift | `e2e/runner/package.json` + `e2e/runner/Dockerfile` |
| Suite scenario steps gone stale | **Stop and ask** — scenario edits are user-driven, not drift-driven |
### Phase 4: Apply edits
Edit each in-scope file. After each batch, run `ReadLints` on touched files. Do NOT run the suite e2e itself — that's a downstream pipeline operation, not a sync-skill responsibility.
For `expected_detections.json`: when the model revision changes, the skill **does not** re-record the baseline — the binary fixture cannot be regenerated from the dev environment. Instead:
1. Set `_meta.model.revision` to the new revision.
2. Set `_meta.fixture_version` to a new bumped version with a `-stale` suffix (e.g., `0.2.0-stale`).
3. Append a new entry to `_docs/_process_leftovers/` describing the required re-record.
4. Leave `expected.by_class` untouched — the spec's tolerance check will fail loudly until the binary refresh lands.
### Phase 5: Update assumptions log
Append a new `assumptions_log:` entry to `_docs/_repo-config.yaml` recording:
- Date, components in scope, which suite-e2e files were touched
- Any inferred contract mappings still tagged `confirmed: false`
- Any leftover entries created
### Phase 6: Report
Render a Choose-format summary of the synced files, surface any `_process_leftovers/` entries created, and end. Do NOT auto-commit.
## Self-verification
- [ ] No file outside `e2e/`, `.woodpecker/suite-e2e.yml`, or `_docs/_process_leftovers/` was edited
- [ ] `_docs/_repo-config.yaml` `suite_e2e:` block was not silently mutated except for `assumptions_log` append
- [ ] `expected_detections.json` was not re-recorded (only metadata bumped + leftover added)
- [ ] Every spec edit traces to a flagged commit pattern in Phase 2
- [ ] `ReadLints` clean on every touched file
## Failure handling
Same retry / escalation protocol as `monorepo-cicd` — see `protocols.md`. The most common failure mode is the binary-fixture leftover (sample.mp4 missing or SHA-mismatched); this skill does not attempt to resolve it, only surfaces it.
+4
View File
@@ -59,6 +59,8 @@ Mark each as `complete` / `partial` / `missing` and explain.
- Every component in `components:` appears in the registry — flag mismatches
- Every `docs.root` file cross-referenced in config exists on disk — flag missing
- Every `ci.orchestration_files` and `ci.install_scripts` exists — flag missing
- `glossary_doc:` (if recorded in config) points to a file that exists on disk — flag missing
- The cross-cutting architecture doc identified by `docs.cross_cutting` contains a `## Architecture Vision` section — flag missing (signals the meta-repo flow's Step 2.5 was skipped or the section was removed)
### Section 5: Unresolved questions
@@ -113,6 +115,8 @@ In registry, not in config: [list or "(none)"]
In config, not in registry: [list or "(none)"]
Config-referenced docs missing: [list or "(none)"]
Config-referenced CI files missing: [list or "(none)"]
glossary_doc: [path or "not recorded — run /autodev to capture"]
Architecture Vision section: [present | missing in <doc>]
═══════════════════════════════════════════════════
Unresolved questions
+3 -2
@@ -75,7 +75,7 @@ Record the description verbatim for use in subsequent steps.
**Role**: Technical analyst
**Goal**: Determine whether deep research is needed.
Read the user's description and the existing codebase documentation from DOCUMENT_DIR (architecture.md including its `## Architecture Vision` section, glossary.md, components/, system-flows.md). Use `glossary.md` to keep the new task's name, acceptance-criteria wording, and component references aligned with the user's confirmed vocabulary. If the request appears to violate an Architecture Vision principle, flag the task to the user; do not silently allow it.
**Consult LESSONS.md**: if `_docs/LESSONS.md` exists, read it and look for entries in categories `estimation`, `architecture`, `dependencies` that might apply to the task under consideration. If a relevant lesson exists (e.g., "estimation: auth-related changes historically take 2x estimate"), bias the classification and recommendation accordingly. Note in the output which lessons (if any) were applied.
@@ -134,7 +134,8 @@ The `<task_slug>` is a short kebab-case name derived from the feature descriptio
**Goal**: Determine where and how to insert the new functionality, and whether existing tests cover the new requirements.
1. Read the codebase documentation from DOCUMENT_DIR:
- `architecture.md` — overall structure (the `## Architecture Vision` H2 is user-confirmed intent and must not be violated by the new task without explicit approval)
- `glossary.md` — project terminology; reuse the user's vocabulary in task names, AC, and component references
- `components/` — component specs
- `system-flows.md` — data flows (if exists)
- `data_model.md` — data model (if exists)
+6 -3
@@ -69,7 +69,7 @@ Capture any new questions, findings, or insights that arise during test specific
### Step 2: Solution Analysis
Read and follow `steps/02_solution-analysis.md`. The step opens with **Phase 2a.0: Glossary & Architecture Vision** (BLOCKING), which drafts `_docs/02_document/glossary.md` and a one-paragraph architecture vision, presents the condensed view to the user, iterates until confirmed, and then proceeds into the architecture, data-model, and deployment phases. The confirmed vision becomes the first `## Architecture Vision` H2 of `architecture.md`.
---
@@ -107,6 +107,7 @@ Read and follow `steps/07_quality-checklist.md`.
- **Coding during planning**: this workflow produces documents, never code
- **Multi-responsibility components**: if a component does two things, split it
- **Skipping BLOCKING gates**: never proceed past a BLOCKING marker without user confirmation
- **Skipping the glossary/vision gate (Phase 2a.0)**: drafting `architecture.md` from raw `solution.md` without confirming terminology and vision means the AI's mental model is not aligned with the user's; every downstream artifact will inherit that drift
- **Diagrams without data**: generate diagrams only after the underlying structure is documented
- **Copy-pasting problem.md**: the architecture doc should analyze and transform, not repeat the input
- **Vague interfaces**: "component A talks to component B" is not enough; define the method, input, output
@@ -137,8 +138,10 @@ Read and follow `steps/07_quality-checklist.md`.
│ │
│ 1. Blackbox Tests → test-spec/SKILL.md │
│ [BLOCKING: user confirms test coverage] │
│ 2. Solution Analysis → glossary + vision, architecture, │
│ data model, deployment │
│ [BLOCKING 2a.0: user confirms glossary + vision] │
│ [BLOCKING 2a: user confirms architecture] │
│ 3. Component Decomp → component specs + interfaces │
│ [BLOCKING: user confirms components] │
│ 4. Review & Risk → risk register, iterations │
@@ -4,20 +4,105 @@
**Goal**: Produce `architecture.md`, `system-flows.md`, `data_model.md`, and `deployment/` from the solution draft
**Constraints**: No code, no component-level detail yet; focus on system-level view
### Phase 2a.0: Glossary & Architecture Vision (BLOCKING)
**Role**: Software architect + business analyst
**Goal**: Align the AI's mental model of the project with the user's intent BEFORE drafting `architecture.md`. Capture domain terminology and the user's high-level architecture vision so every downstream artifact (architecture, components, flows, tests, epics) is grounded in confirmed user intent — not in AI inference.
**Inputs**:
- `_docs/00_problem/problem.md`, `acceptance_criteria.md`, `restrictions.md`
- `_docs/00_problem/input_data/*`
- `_docs/01_solution/solution.md` (and any earlier `solution_draft*.md` siblings)
- Any blackbox-test findings produced in Step 1
**Outputs**:
- `_docs/02_document/glossary.md` (NEW)
- A confirmed "Architecture Vision" paragraph + bullet list held in working memory and used as the spine of Phase 2a's `architecture.md`
**Procedure**:
1. **Draft glossary** — extract project-specific terminology from inputs (NOT generic software terms). Include:
- Domain entities, processes, and roles
- Acronyms / abbreviations
- Internal codenames or product names
- Synonym pairs in active use (e.g., "flight" vs. "mission")
- Stakeholder personas referenced in problem.md
Each entry: one-line definition, plus a parenthetical source (`source: problem.md`, `source: solution.md §3`).
Skip terms that have a single well-known industry meaning (REST, JSON, etc.).
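A couple of entries in that shape might look like the following — the terms themselves are invented for illustration, and the `**Status**` line is the one step 5 adds on save:

```markdown
# Glossary

**Status**: confirmed-by-user — 2026-04-25

- **Mission**: a single scheduled flight executed by one drone; synonym of
  "flight" in older docs — prefer "mission" (source: problem.md)
- **Tiling**: splitting a source image into fixed-size crops before
  detection (source: solution.md §3)
```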
2. **Draft architecture vision** — synthesize from inputs:
- **One paragraph**: what the system is, who uses it, the shape of the runtime topology (monolith / services / pipeline / library / hybrid).
- **Components & responsibilities** (one-line each). At this stage these are *intent-level*, not the formal decomposition that Step 3 produces.
- **Major data flows** (one or two sentences each).
- **Architectural principles / non-negotiables** the user has implied (e.g., "DB-driven config", "no per-component state outside Redis", "all UI traffic via REST + SSE only").
- **Open architectural questions** the AI cannot resolve from inputs alone.
3. **Present condensed view** to the user (NOT the full draft files — a synopsis only):
```
══════════════════════════════════════
REVIEW: Glossary + Architecture Vision
══════════════════════════════════════
Glossary (N terms drafted):
- <Term>: <one-line definition>
- ...
Architecture Vision:
<one-paragraph synopsis>
Components / responsibilities:
- <component>: <one-line>
- ...
Principles / non-negotiables:
- <principle>
- ...
Open questions (AI could not resolve):
- <q1>
- <q2>
══════════════════════════════════════
A) Looks correct — write glossary.md, use vision for Phase 2a
B) I want to add / correct entries (provide diffs)
C) Answer the open questions first, then re-present
══════════════════════════════════════
Recommendation: pick C if open questions exist, otherwise A
══════════════════════════════════════
```
4. **Iterate**:
- On B → integrate the user's diffs/additions, re-present the condensed view, loop until A.
- On C → ask the listed open questions in a single round (M4-style batch), integrate answers, re-present.
- **Do NOT proceed to step 5 until the user picks A.**
5. **Save**:
- Write `_docs/02_document/glossary.md` with terms in alphabetical order. Include a top-line `**Status**: confirmed-by-user` and the date.
- Hold the confirmed vision (paragraph + components + principles) in working memory; Phase 2a will materialize it into `architecture.md` and **must** preserve every confirmed principle and component intent verbatim.
**Self-verification**:
- [ ] Every glossary entry traces to at least one input file (no invented terms)
- [ ] Every component listed in the vision is one the inputs reference
- [ ] All open questions are either answered or explicitly deferred (with the user's acknowledgement)
- [ ] User picked option A on the latest condensed view
**BLOCKING**: Do NOT proceed to Phase 2a until `glossary.md` is saved and the user has confirmed the architecture vision.
### Phase 2a: Architecture & Flows
1. Read all input files thoroughly
2. Incorporate findings, questions, and insights discovered during Step 1 (blackbox tests)
3. **Apply confirmed vision from Phase 2a.0**: the architecture document must include a top-level `## Architecture Vision` section that contains the user-confirmed paragraph, components, and principles verbatim. The rest of `architecture.md` (tech stack, deployment model, NFRs, ADRs) builds on top of that section, never contradicts it
4. Research unknown or questionable topics via internet; ask user about ambiguities
5. Document architecture using `templates/architecture.md` as structure
6. Document system flows using `templates/system-flows.md` as structure
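Under the confirmed vision, the resulting `architecture.md` could open like this — the section names below the Vision H2 are illustrative; `templates/architecture.md` governs the actual structure:

```markdown
# Architecture

## Architecture Vision
<user-confirmed paragraph, verbatim from Phase 2a.0>

### Components & responsibilities
- <component>: <one-line intent>

### Principles / non-negotiables
- <principle, verbatim>

## Tech Stack
<builds on the Vision; never contradicts it>

## Deployment Model
...
```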
**Self-verification**:
- [ ] `architecture.md` opens with a `## Architecture Vision` section matching Phase 2a.0
- [ ] Architecture covers all capabilities mentioned in solution.md
- [ ] System flows cover all main user/system interactions
- [ ] No contradictions with problem.md, restrictions.md, or the confirmed vision
- [ ] Technology choices are justified
- [ ] Blackbox test findings are reflected in architecture decisions
- [ ] Every term used in `architecture.md` that is project-specific appears in `glossary.md`
**Save action**: Write `architecture.md` and `system-flows.md`
@@ -95,7 +95,7 @@ Also copy to project standard locations:
**Critical step — do not skip.** Before producing the change list, cross-reference documented business flows against actual implementation. This catches issues that static code inspection alone misses.
1. **Read documented flows**: Load `DOCUMENT_DIR/system-flows.md`, `DOCUMENT_DIR/architecture.md` (paying special attention to its `## Architecture Vision` section — that's the user-confirmed structural intent), `DOCUMENT_DIR/glossary.md`, `DOCUMENT_DIR/module-layout.md`, every file under `DOCUMENT_DIR/contracts/`, and `SOLUTION_DIR/solution.md` (whichever exist). Extract every documented business flow, data path, architectural decision, module ownership boundary, and contract shape. Any refactor change that contradicts a confirmed Architecture Vision principle must either be rejected or surfaced to the user before being added to `list-of-changes.md` — those principles are not refactor targets without explicit user approval.
2. **Trace each flow through code**: For every documented flow (e.g., "video batch processing", "image tiling", "engine initialization"), walk the actual code path line by line. At each decision point ask:
- Does the code match the documented/intended behavior?
@@ -21,7 +21,7 @@ The implement skill will:
1. Parse task files and dependency graph from TASKS_DIR
2. Detect already-completed tasks (skip non-refactoring tasks from prior workflow steps)
3. Compute execution batches for the refactoring tasks
4. Implement tasks sequentially in topological order (no subagents, no parallelism)
5. Run code review after each batch
6. Commit and push per batch
7. Update work item ticket status
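The sequential ordering in step 4 can be sketched with Kahn's algorithm; the task ids and graph shape here are illustrative, not the skill's actual task-file format:

```python
from collections import deque

def topo_order(tasks: dict[str, list[str]]) -> list[str]:
    """Return task ids so every dependency precedes its dependents.

    `tasks` maps task id -> list of dependency ids. Raises on cycles,
    which indicate a broken task graph.
    """
    indegree = {t: 0 for t in tasks}
    dependents: dict[str, list[str]] = {t: [] for t in tasks}
    for task, deps in tasks.items():
        for dep in deps:
            indegree[task] += 1
            dependents[dep].append(task)
    ready = deque(sorted(t for t, d in indegree.items() if d == 0))
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for nxt in dependents[t]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    if len(order) != len(tasks):
        raise ValueError("dependency cycle in task graph")
    return order
```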
@@ -32,7 +32,7 @@ For each component doc affected:
## 7d. Update System-Level Documentation
If structural changes were made (new modules, removed modules, changed interfaces):
1. Update `_docs/02_document/architecture.md` if architecture changed — but **never edit the `## Architecture Vision` section**. That section is user-confirmed (plan Phase 2a.0 / document Step 4.5); if a refactor invalidates a vision principle, surface it to the user and let them update the vision themselves before continuing. Update only the technical sections below the Vision H2.
2. Update `_docs/02_document/system-flows.md` if flow sequences changed
3. Update `_docs/02_document/diagrams/components.md` if component relationships changed