Sync .cursor from suite (autodev orchestrator + monorepo skills)

Oleksandr Bezdieniezhnykh
2026-04-18 22:04:12 +03:00
parent c51cb9b4a5
commit d8ac1606d6
60 changed files with 4232 additions and 1728 deletions
@@ -0,0 +1,36 @@
# Step 1.5: Module Layout (default mode only)
**Role**: Professional software architect
**Goal**: Produce `_docs/02_document/module-layout.md` — the authoritative file-ownership map used by the implement skill. Separates **behavioral** task specs (no file paths) from **structural** file mapping (no behavior).
**Constraints**: Follow the target language's standard project-layout conventions. Do not invent non-standard directory structures.
## Steps
1. Detect the target language from `DOCUMENT_DIR/architecture.md` and the bootstrap structure plan produced in Step 1.
2. Apply the language's conventional layout (see table in `templates/module-layout.md`):
- Python → `src/<pkg>/<component>/`
- C# → `src/<Component>/`
- Rust → `crates/<component>/`
- TypeScript / React → `src/<component>/` with `index.ts` barrel
- Go → `internal/<component>/` or `pkg/<component>/`
3. Each component owns ONE top-level directory. Shared code goes under `<root>/shared/` (or language equivalent).
4. Public API surface = files in the layout's `public:` list for each component; everything else is internal and MUST NOT be imported by other components.
5. Cross-cutting concerns (logging, error handling, config, telemetry, auth middleware, feature flags, i18n) each get ONE entry under Shared / Cross-Cutting; per-component tasks consume them (see Step 2 cross-cutting rule).
6. Write `_docs/02_document/module-layout.md` using `templates/module-layout.md` format.
## Self-verification
- [ ] Every component in `DOCUMENT_DIR/components/` has a Per-Component Mapping entry
- [ ] Every shared / cross-cutting concern has a Shared section entry
- [ ] Layering table covers every component (shared at the bottom)
- [ ] No component's `Imports from` list points at a higher layer
- [ ] Paths follow the detected language's convention
- [ ] No two components own overlapping paths
## Save action
Write `_docs/02_document/module-layout.md`.
## Blocking
**BLOCKING**: Present layout summary to user. Do NOT proceed to Step 2 until user confirms. The implement skill depends on this file; inconsistencies here cause file-ownership conflicts at batch time.
@@ -0,0 +1,57 @@
# Step 1: Bootstrap Structure Plan (default mode only)
**Role**: Professional software architect
**Goal**: Produce `01_initial_structure.md` — the first task describing the project skeleton.
**Constraints**: This is a plan document, not code. The `/implement` skill executes it.
## Steps
1. Read `architecture.md`, all component specs, `system-flows.md`, `data_model.md`, and `deployment/` from DOCUMENT_DIR
2. Read problem, solution, and restrictions from `_docs/00_problem/` and `_docs/01_solution/`
3. Research best implementation patterns for the identified tech stack
4. Document the structure plan using `templates/initial-structure-task.md`
The bootstrap structure plan must include:
- Project folder layout with all component directories
- Shared models, interfaces, and DTOs
- Dockerfile per component (multi-stage, non-root, health checks, pinned base images)
- `docker-compose.yml` for local development (all components + database + dependencies)
- `docker-compose.test.yml` for the blackbox test environment, including the blackbox test runner service
- `.dockerignore`
- CI/CD pipeline file (`.github/workflows/ci.yml` or `azure-pipelines.yml`) with stages from `deployment/ci_cd_pipeline.md`
- Database migration setup and initial seed data scripts
- Observability configuration: structured logging setup, health check endpoints (`/health/live`, `/health/ready`), metrics endpoint (`/metrics`)
- Environment variable documentation (`.env.example`)
- Test structure with unit and blackbox test locations
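For reference, a Dockerfile meeting the constraints above (multi-stage, non-root, health check, pinned base image) might look like this sketch for a hypothetical Python service; the base image tag, port, user name, and paths are illustrative, not prescribed:

```dockerfile
# Pinned base image for reproducible builds
FROM python:3.12-slim AS build
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir --prefix=/install -r requirements.txt

FROM python:3.12-slim
WORKDIR /app
COPY --from=build /install /usr/local
COPY src/ ./src/
# Run as a non-root user
RUN useradd --create-home appuser
USER appuser
# Liveness probe against the /health/live endpoint defined below
HEALTHCHECK --interval=30s --timeout=3s \
  CMD python -c "import urllib.request; urllib.request.urlopen('http://localhost:8000/health/live')"
CMD ["python", "-m", "src.main"]
```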
## Self-verification
- [ ] All components have corresponding folders in the layout
- [ ] All inter-component interfaces have DTOs defined
- [ ] Dockerfile defined for each component
- [ ] `docker-compose.yml` covers all components and dependencies
- [ ] `docker-compose.test.yml` enables blackbox testing
- [ ] CI/CD pipeline file defined with lint, test, security, build, deploy stages
- [ ] Database migration setup included
- [ ] Health check endpoints specified for each service
- [ ] Structured logging configuration included
- [ ] `.env.example` with all required environment variables
- [ ] Environment strategy covers dev, staging, production
- [ ] Test structure includes unit and blackbox test locations
## Save action
Write `todo/01_initial_structure.md` (temporary numeric name).
## Tracker action
Create a work item ticket for this task under the "Bootstrap & Initial Structure" epic. Write the work item ticket ID and Epic ID back into the task header.
## Rename action
Rename the file from `todo/01_initial_structure.md` to `todo/[TRACKER-ID]_initial_structure.md` (e.g., `todo/AZ-42_initial_structure.md`). Update the **Task** field inside the file to match the new filename.
## Blocking
**BLOCKING**: Present structure plan summary to user. Do NOT proceed until user confirms.
@@ -0,0 +1,45 @@
# Step 1t: Test Infrastructure Bootstrap (tests-only mode only)
**Role**: Professional Quality Assurance Engineer
**Goal**: Produce `01_test_infrastructure.md` — the first task describing the test project scaffold.
**Constraints**: This is a plan document, not code. The `/implement` skill executes it.
## Steps
1. Read `TESTS_DIR/environment.md` and `TESTS_DIR/test-data.md`
2. Read `problem.md`, `restrictions.md`, `acceptance_criteria.md` for domain context
3. Document the test infrastructure plan using `templates/test-infrastructure-task.md`
The test infrastructure bootstrap must include:
- Test project folder layout (`e2e/` directory structure)
- Mock/stub service definitions for each external dependency
- `docker-compose.test.yml` structure from `environment.md`
- Test runner configuration (framework, plugins, fixtures)
- Test data fixture setup from `test-data.md` seed data sets
- Test reporting configuration (format, output path)
- Data isolation strategy
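The compose structure above can be sketched as follows; service names, images, and paths are placeholders, not values taken from `environment.md`:

```yaml
# Shape sketch only: one mock service per external dependency,
# a health-gated app, and a dedicated test-runner service.
services:
  app:
    build: .
    depends_on:
      db:
        condition: service_healthy
      payments-mock:
        condition: service_started
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_PASSWORD: test
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      retries: 10
  payments-mock:
    build: ./e2e/mocks/payments   # mock for a hypothetical external dependency
  test-runner:
    build: ./e2e
    depends_on:
      app:
        condition: service_started
    volumes:
      - ./e2e/reports:/reports    # test reporting output path
```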
## Self-verification
- [ ] Every external dependency from `environment.md` has a mock service defined
- [ ] Docker Compose structure covers all services from `environment.md`
- [ ] Test data fixtures cover all seed data sets from `test-data.md`
- [ ] Test runner configuration matches the consumer app tech stack from `environment.md`
- [ ] Data isolation strategy is defined
## Save action
Write `todo/01_test_infrastructure.md` (temporary numeric name).
## Tracker action
Create a work item ticket for this task under the "Blackbox Tests" epic. Write the work item ticket ID and Epic ID back into the task header.
## Rename action
Rename the file from `todo/01_test_infrastructure.md` to `todo/[TRACKER-ID]_test_infrastructure.md`. Update the **Task** field inside the file to match the new filename.
## Blocking
**BLOCKING**: Present test infrastructure plan summary to user. Do NOT proceed until user confirms.
@@ -0,0 +1,59 @@
# Step 2: Task Decomposition (default and single component modes)
**Role**: Professional software architect
**Goal**: Decompose each component into atomic, implementable task specs — numbered sequentially starting from 02.
**Constraints**: Behavioral specs only — describe what, not how. No implementation code.
## Numbering
Tasks are numbered sequentially across all components in dependency order. Start from 02 (01 is `initial_structure`). In single component mode, start from the next available number in TASKS_DIR.
## Component ordering
Process components in dependency order — foundational components first (shared models, database), then components that depend on them.
## Consult LESSONS.md once at the start of Step 2
If `_docs/LESSONS.md` exists, read it and note `estimation`, `architecture`, or `dependencies` lessons that may bias task sizing in this pass (e.g., "auth-related changes historically take 2x estimate" → bump any auth task up one complexity tier). Apply the bias when filling the Complexity field in step 7 below. Record which lessons informed estimation in a comment in `_dependencies_table.md` (Step 4).
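The one-tier bump can be made concrete. A sketch, assuming lessons and tasks carry matchable tags (the tag extraction itself is hypothetical):

```python
# Complexity tiers used by step 7 below.
TIERS = [1, 2, 3, 5, 8]

def bump_one_tier(points: int) -> int:
    """Raise an estimate by one tier, capped at 8 (anything above 8 is split)."""
    i = TIERS.index(points)
    return TIERS[min(i + 1, len(TIERS) - 1)]

def apply_lessons(task_tags: set[str], points: int, lesson_tags: set[str]) -> int:
    """Bump the estimate when a recorded lesson touches one of the task's tags.

    `task_tags` and `lesson_tags` are illustrative labels (e.g. {"auth"})
    extracted from the task spec and from _docs/LESSONS.md entries.
    """
    return bump_one_tier(points) if task_tags & lesson_tags else points
```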
## Steps
For each component (or the single provided component):
1. Read the component's `description.md` and `tests.md` (if available)
2. Decompose into atomic tasks; create a single task if the component is simple or atomic
3. Split into multiple tasks only when splitting is necessary and makes implementation easier
4. Do not create tasks for other components — only tasks for the current component
5. Each task should be atomic, covering one API or a set of semantically connected APIs
6. Write each task spec using `templates/task.md`
7. Estimate complexity per task (1, 2, 3, 5, 8 points); no task should exceed 8 points — split if it does
8. Note task dependencies (referencing tracker IDs of already-created dependency tasks, e.g., `AZ-42_initial_structure`)
9. **Cross-cutting rule**: if a concern spans ≥2 components (logging, config loading, auth/authZ, error envelope, telemetry, feature flags, i18n), create ONE shared task under the cross-cutting epic. Per-component tasks declare it as a dependency and consume it; they MUST NOT re-implement it locally. Duplicate local implementations are an `Architecture` finding (High) in code-review Phase 7 and a `Maintainability` finding in Phase 6.
10. **Shared-models / shared-API rule**: classify the task as shared if ANY of the following is true:
- The component is listed under `shared/*` in `module-layout.md`.
- The task's Scope.Included mentions "public interface", "DTO", "schema", "event", "contract", "API endpoint", or "shared model".
- The task is parented to a cross-cutting epic.
- The task is depended on by ≥2 other tasks across different components.
For every shared task:
- Produce a contract file at `_docs/02_document/contracts/<component>/<name>.md` using `templates/api-contract.md`. Fill Shape, Invariants, Non-Goals, Versioning Rules, and at least 3 Test Cases.
- Add a mandatory `## Contract` section to the task spec pointing at the contract file.
- For every consuming task, add the contract path to its `## Dependencies` section as a document dependency (separate from task dependencies).
Consumers read the contract file, not the producer's task spec. This prevents interface drift when the producer's implementation details leak into consumers.
11. **Immediately after writing each task file**: create a work item ticket, link it to the component's epic, write the work item ticket ID and Epic ID back into the task header, then rename the file from `todo/[##]_[short_name].md` to `todo/[TRACKER-ID]_[short_name].md`.
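The shared-task classification in step 10 is a plain disjunction of four tests. A sketch with the keyword list copied from the rule above; the parameter names are hypothetical stand-ins for fields read out of the task spec and `module-layout.md`:

```python
SHARED_KEYWORDS = ("public interface", "dto", "schema", "event",
                   "contract", "api endpoint", "shared model")

def is_shared_task(component_path: str, scope_included: str,
                   parent_epic: str, cross_component_dependents: int) -> bool:
    """Apply the four shared-task tests from step 10; any single match suffices."""
    # 1. Component lives under shared/* in module-layout.md.
    if component_path.startswith("shared/"):
        return True
    # 2. Scope.Included mentions one of the interface keywords.
    text = scope_included.lower()
    if any(kw in text for kw in SHARED_KEYWORDS):
        return True
    # 3. Task is parented to a cross-cutting epic.
    if parent_epic == "cross-cutting":
        return True
    # 4. Depended on by >= 2 tasks across different components.
    return cross_component_dependents >= 2
```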
## Self-verification (per component)
- [ ] Every task is atomic (single concern)
- [ ] No task exceeds 8 complexity points
- [ ] Task dependencies reference correct tracker IDs
- [ ] Tasks cover all interfaces defined in the component spec
- [ ] No tasks duplicate work from other components
- [ ] Every task has a work item ticket linked to the correct epic
- [ ] Every shared-models / shared-API task has a contract file at `_docs/02_document/contracts/<component>/<name>.md` and a `## Contract` section linking to it
- [ ] Every cross-cutting concern appears exactly once as a shared task, not N per-component copies
## Save action
Write each `todo/[##]_[short_name].md` (temporary numeric name), create work item ticket inline, then rename to `todo/[TRACKER-ID]_[short_name].md`. Update the **Task** field inside the file to match the new filename. Update **Dependencies** references in the file to use tracker IDs of the dependency tasks.
@@ -0,0 +1,35 @@
# Step 3: Blackbox Test Task Decomposition (default and tests-only modes)
**Role**: Professional Quality Assurance Engineer
**Goal**: Decompose blackbox test specs into atomic, implementable task specs.
**Constraints**: Behavioral specs only — describe what, not how. No test code.
## Numbering
- In default mode: continue sequential numbering from where Step 2 left off.
- In tests-only mode: start from 02 (01 is the test infrastructure bootstrap from Step 1t).
## Steps
1. Read all test specs from `DOCUMENT_DIR/tests/` (`blackbox-tests.md`, `performance-tests.md`, `resilience-tests.md`, `security-tests.md`, `resource-limit-tests.md`)
2. Group related test scenarios into atomic tasks (e.g., one task per test category or per component under test)
3. Each task should reference the specific test scenarios it implements and the environment/test-data specs
4. Dependencies:
- In default mode: blackbox test tasks depend on the component implementation tasks they exercise
- In tests-only mode: blackbox test tasks depend on the test infrastructure bootstrap task (Step 1t)
5. Write each task spec using `templates/task.md`
6. Estimate complexity per task (1, 2, 3, 5, 8 points); no task should exceed 8 points — split if it does
7. Note task dependencies (referencing tracker IDs of already-created dependency tasks)
8. **Immediately after writing each task file**: create a work item ticket under the "Blackbox Tests" epic, write the work item ticket ID and Epic ID back into the task header, then rename the file from `todo/[##]_[short_name].md` to `todo/[TRACKER-ID]_[short_name].md`.
## Self-verification
- [ ] Every scenario from `tests/blackbox-tests.md` is covered by a task
- [ ] Every scenario from `tests/performance-tests.md`, `tests/resilience-tests.md`, `tests/security-tests.md`, and `tests/resource-limit-tests.md` is covered by a task
- [ ] No task exceeds 8 complexity points
- [ ] Dependencies correctly reference the dependency tasks (component tasks in default mode, test infrastructure in tests-only mode)
- [ ] Every task has a work item ticket linked to the "Blackbox Tests" epic
## Save action
Write each `todo/[##]_[short_name].md` (temporary numeric name), create work item ticket inline, then rename to `todo/[TRACKER-ID]_[short_name].md`.
@@ -0,0 +1,39 @@
# Step 4: Cross-Task Verification (default and tests-only modes)
**Role**: Professional software architect and analyst
**Goal**: Verify task consistency and produce `_dependencies_table.md`.
**Constraints**: Review step — fix gaps found, do not add new tasks.
## Steps
1. Verify task dependencies across all tasks are consistent
2. Check no gaps:
- In default mode: every interface in `architecture.md` has tasks covering it
- In tests-only mode: every test scenario in `traceability-matrix.md` is covered by a task
3. Check no overlaps: tasks don't duplicate work
4. Check no circular dependencies in the task graph
5. Produce `_dependencies_table.md` using `templates/dependencies-table.md`
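The no-circular-dependencies check in step 4 amounts to a topological sort over the dependency table. A sketch using Kahn's algorithm, assuming every referenced dependency also appears as a key (the task IDs are invented):

```python
from collections import deque

def find_cycle_members(deps: dict[str, set[str]]) -> set[str]:
    """Return the tasks stuck on a cycle; an empty set means the graph is acyclic.

    `deps` maps a task ID to the set of task IDs it depends on.
    """
    indegree = {t: 0 for t in deps}
    dependents: dict[str, list[str]] = {t: [] for t in deps}
    for task, requirements in deps.items():
        for req in requirements:
            indegree[task] += 1
            dependents[req].append(task)
    # Seed with tasks that have no unresolved dependencies.
    ready = deque(t for t, d in indegree.items() if d == 0)
    while ready:
        t = ready.popleft()
        for dep in dependents[t]:
            indegree[dep] -= 1
            if indegree[dep] == 0:
                ready.append(dep)
    # Tasks never reaching indegree 0 sit on (or behind) a cycle.
    return {t for t, d in indegree.items() if d > 0}
```

Reporting the offending task IDs, rather than just pass/fail, makes the fix-up in this review step direct.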
## Self-verification
### Default mode
- [ ] Every architecture interface is covered by at least one task
- [ ] No circular dependencies in the task graph
- [ ] Cross-component dependencies are explicitly noted in affected task specs
- [ ] `_dependencies_table.md` contains every task with correct dependencies
### Tests-only mode
- [ ] Every test scenario from `traceability-matrix.md` "Covered" entries has a corresponding task
- [ ] No circular dependencies in the task graph
- [ ] Test task dependencies reference the test infrastructure bootstrap
- [ ] `_dependencies_table.md` contains every task with correct dependencies
## Save action
Write `_dependencies_table.md`.
## Blocking
**BLOCKING**: Present dependency summary to user. Do NOT proceed until user confirms.