Update project structure and documentation: Add new entries to .gitignore for standalone outputs and MCP config, delete obsolete design_skill.md file, and revise README and various skills to reflect updated workflow steps, including UI design and performance testing. Adjust paths in retrospective and metrics documentation to align with new directory structure.

Oleksandr Bezdieniezhnykh
2026-03-24 04:48:08 +02:00
parent e609586c7c
commit 749217bbb6
22 changed files with 286 additions and 165 deletions
+7 -5
@@ -35,7 +35,7 @@ Auto-chaining execution engine that drives the full BUILD → SHIP workflow. Det
- **State from disk**: all progress is persisted to `_docs/_autopilot_state.md` and cross-checked against `_docs/` folder structure
- **Rich re-entry**: on every invocation, read the state file for full context before continuing
- **Delegate, don't duplicate**: read and execute each sub-skill's SKILL.md; never inline their logic here
- - **Sound on pause**: follow `.cursor/rules/human-input-sound.mdc` — play a notification sound before every pause that requires human input
+ - **Sound on pause**: follow `.cursor/rules/human-attention-sound.mdc` — play a notification sound before every pause that requires human input
- **Minimize interruptions**: only ask the user when the decision genuinely cannot be resolved automatically
## Flow Resolution
@@ -114,15 +114,17 @@ This skill activates when the user wants to:
│ │
│ GREENFIELD FLOW (flows/greenfield.md): │
│ Step 0 Problem → Step 1 Research → Step 2 Plan │
- │   → Step 3 Decompose → [SESSION] → Step 4 Implement       │
- │   → Step 5 Run Tests → 5b Security (opt) → Step 6 Deploy  │
+ │   → 2a UI Design (if UI) → Step 3 Decompose → [SESSION]   │
+ │   → Step 4 Implement → Step 5 Run Tests                   │
+ │   → 5b Security (opt) → 5c Perf Test (opt) → Step 6 Deploy │
│ → DONE │
│ │
│ EXISTING CODE FLOW (flows/existing-code.md): │
│ Pre-Step Document → 2b Test Spec → 2c Decompose Tests │
│ → [SESSION] → 2d Implement Tests → 2e Refactor │
- │   → 2f New Task → [SESSION] → 2g Implement                │
- │   → 2h Run Tests → 2hb Security (opt) → 2i Deploy → DONE  │
+ │   → 2ea UI Design (if UI) → 2f New Task → [SESSION]       │
+ │   → 2g Implement → 2h Run Tests → 2hb Security (opt)      │
+ │   → 2hc Perf Test (opt) → 2i Deploy → DONE                │
│ │
│ STATE: _docs/_autopilot_state.md (see state.md) │
│ PROTOCOLS: choice format, Jira auth, errors (see protocols.md) │
@@ -11,10 +11,12 @@ Workflow for projects with an existing codebase. Starts with documentation, prod
| 2c | Decompose Tests | decompose/SKILL.md (tests-only) | Step 1t + Step 3 + Step 4 |
| 2d | Implement Tests | implement/SKILL.md | (batch-driven, no fixed sub-steps) |
| 2e | Refactor | refactor/SKILL.md | Phases 0–5 (6-phase method) |
| 2ea | UI Design | ui-design/SKILL.md | Phases 0–8 (conditional — UI projects only) |
| 2f | New Task | new-task/SKILL.md | Steps 1–8 (loop) |
| 2g | Implement | implement/SKILL.md | (batch-driven, no fixed sub-steps) |
| 2h | Run Tests | (autopilot-managed) | Unit tests → Blackbox tests |
| 2hb | Security Audit | security/SKILL.md | Phases 1–5 (optional) |
| 2hc | Performance Test | (autopilot-managed) | Load/stress tests (optional) |
| 2i | Deploy | deploy/SKILL.md | Steps 1–7 |
After Step 2i, the existing-code workflow is complete.
@@ -91,8 +93,37 @@ If `_docs/04_refactor/` has phase reports, the refactor skill detects completed
---
**Step 2ea — UI Design (conditional)**
Condition: the autopilot state shows Step 2e (Refactor) is completed AND the autopilot state does NOT show Step 2ea (UI Design) as completed or skipped
**UI Project Detection** — the project is a UI project if ANY of the following are true:
- `package.json` exists in the workspace root or any subdirectory
- `*.html`, `*.jsx`, `*.tsx` files exist in the workspace
- `_docs/02_document/components/` contains a component whose `description.md` mentions UI, frontend, page, screen, dashboard, form, or view
- `_docs/02_document/architecture.md` mentions frontend, UI layer, SPA, or client-side rendering
If the project is NOT a UI project → mark Step 2ea as `skipped` in the state file and auto-chain to Step 2f.
If the project IS a UI project → present using Choose format:
```
══════════════════════════════════════
DECISION REQUIRED: UI project detected — generate/update mockups?
══════════════════════════════════════
A) Generate UI mockups before new task planning (recommended)
B) Skip — proceed directly to new task
══════════════════════════════════════
Recommendation: A — mockups inform better frontend task specs
══════════════════════════════════════
```
- If user picks A → Read and execute `.cursor/skills/ui-design/SKILL.md`. After completion, auto-chain to Step 2f (New Task).
- If user picks B → Mark Step 2ea as `skipped` in the state file, auto-chain to Step 2f (New Task).
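In pseudocode terms, the detection heuristics above amount to something like the following sketch (the `is_ui_project` helper is illustrative, not part of the skill, and the keyword match is deliberately naive):

```python
from pathlib import Path

# Keyword lists mirror the detection bullets; tune per project.
UI_KEYWORDS = ("ui", "frontend", "page", "screen", "dashboard", "form", "view")
ARCH_KEYWORDS = ("frontend", "ui layer", "spa", "client-side rendering")

def is_ui_project(root: str) -> bool:
    root = Path(root)
    # Heuristic 1: any package.json in the workspace
    if any(root.rglob("package.json")):
        return True
    # Heuristic 2: typical frontend source files
    for pattern in ("*.html", "*.jsx", "*.tsx"):
        if any(root.rglob(pattern)):
            return True
    # Heuristic 3: component descriptions mentioning UI terms
    for desc in root.glob("_docs/02_document/components/*/description.md"):
        text = desc.read_text(encoding="utf-8").lower()
        if any(kw in text for kw in UI_KEYWORDS):
            return True
    # Heuristic 4: architecture doc mentioning a frontend layer
    arch = root / "_docs/02_document/architecture.md"
    if arch.exists():
        text = arch.read_text(encoding="utf-8").lower()
        return any(kw in text for kw in ARCH_KEYWORDS)
    return False
```

Any single heuristic firing is enough; the real check should probably use word-boundary matching to avoid false positives like "ui" inside "build".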
---
**Step 2f — New Task**
- Condition: `_docs/04_refactor/FINAL_refactor_report.md` exists AND the autopilot state shows Step 2e (Refactor) is completed AND the autopilot state does NOT show Step 2f (New Task) as completed
+ Condition: (the autopilot state shows Step 2ea (UI Design) is completed or skipped) AND the autopilot state does NOT show Step 2f (New Task) as completed
Action: Read and execute `.cursor/skills/new-task/SKILL.md`
@@ -159,8 +190,36 @@ Action: Present using Choose format:
---
**Step 2hc — Performance Test (optional)**
Condition: the autopilot state shows Step 2hb (Security Audit) is completed or skipped AND the autopilot state does NOT show Step 2hc (Performance Test) as completed or skipped AND (`_docs/04_deploy/` does not exist or is incomplete)
Action: Present using Choose format:
```
══════════════════════════════════════
DECISION REQUIRED: Run performance/load tests before deploy?
══════════════════════════════════════
A) Run performance tests (recommended for latency-sensitive or high-load systems)
B) Skip — proceed directly to deploy
══════════════════════════════════════
Recommendation: [A or B — base on whether acceptance criteria
include latency, throughput, or load requirements]
══════════════════════════════════════
```
- If user picks A → Run performance tests:
1. Check if `_docs/02_document/tests/performance-tests.md` exists for test scenarios
2. Detect appropriate load testing tool (k6, locust, artillery, wrk, or built-in benchmarks)
3. Execute performance test scenarios against the running system
4. Present results vs acceptance criteria thresholds
5. If thresholds fail → present Choose format: A) Fix and re-run, B) Proceed anyway, C) Abort
6. After completion, auto-chain to Step 2i (Deploy)
- If user picks B → Mark Step 2hc as `skipped` in the state file, auto-chain to Step 2i (Deploy).
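Tool detection in sub-step 2 can be as simple as probing the PATH; a minimal sketch (the preference order is an assumption, adjust per project conventions):

```python
import shutil

# Candidate load-testing CLIs, checked in order of preference.
LOAD_TOOLS = ["k6", "locust", "artillery", "wrk"]

def detect_load_tool() -> str:
    """Return the first available load-testing CLI, or 'builtin' as fallback."""
    for tool in LOAD_TOOLS:
        if shutil.which(tool):
            return tool
    return "builtin"  # fall back to language-native benchmarks
```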
---
**Step 2i — Deploy**
- Condition: the autopilot state shows Step 2h (Run Tests) is completed AND (Step 2hb is completed or skipped) AND (`_docs/04_deploy/` does not exist or is incomplete)
+ Condition: the autopilot state shows Step 2h (Run Tests) is completed AND (Step 2hb is completed or skipped) AND (Step 2hc is completed or skipped) AND (`_docs/04_deploy/` does not exist or is incomplete)
Action: Read and execute `.cursor/skills/deploy/SKILL.md`
@@ -196,9 +255,11 @@ Action: The project completed a full cycle. Present status and loop back to New
| Blackbox Test Spec (Step 2b) | Auto-chain → Decompose Tests (Step 2c) |
| Decompose Tests (Step 2c) | **Session boundary** — suggest new conversation before Implement Tests |
| Implement Tests (Step 2d) | Auto-chain → Refactor (Step 2e) |
- | Refactor (Step 2e) | Auto-chain → New Task (Step 2f) |
+ | Refactor (Step 2e) | Auto-chain → UI Design detection (Step 2ea) |
| UI Design (Step 2ea, done or skipped) | Auto-chain → New Task (Step 2f) |
| New Task (Step 2f) | **Session boundary** — suggest new conversation before Implement |
| Implement (Step 2g) | Auto-chain → Run Tests (Step 2h) |
| Run Tests (Step 2h, all pass) | Auto-chain → Security Audit choice (Step 2hb) |
- | Security Audit (Step 2hb, done or skipped) | Auto-chain → Deploy (Step 2i) |
+ | Security Audit (Step 2hb, done or skipped) | Auto-chain → Performance Test choice (Step 2hc) |
| Performance Test (Step 2hc, done or skipped) | Auto-chain → Deploy (Step 2i) |
| Deploy (Step 2i) | **Workflow complete** — existing-code flow done |
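The auto-chain column above is effectively a lookup table; it could be sketched as follows (step IDs from this flow, helper and dict names illustrative):

```python
# Maps each completed step to (next step, transition kind). "session" marks
# a boundary where a fresh conversation is suggested before continuing.
NEXT_STEP = {
    "2b": ("2c", "auto"),
    "2c": ("2d", "session"),
    "2d": ("2e", "auto"),
    "2e": ("2ea", "auto"),
    "2ea": ("2f", "auto"),
    "2f": ("2g", "session"),
    "2g": ("2h", "auto"),
    "2h": ("2hb", "auto"),
    "2hb": ("2hc", "auto"),
    "2hc": ("2i", "auto"),
    "2i": (None, "done"),
}

def resolve_next(step: str):
    """Return (next_step, kind) for a completed step, or (None, 'unknown')."""
    return NEXT_STEP.get(step, (None, "unknown"))
```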
+67 -4
@@ -1,6 +1,6 @@
# Greenfield Workflow
- Workflow for new projects built from scratch. Flows linearly: Problem → Research → Plan → Decompose → Implement → Run Tests → Security Audit (optional) → Deploy.
+ Workflow for new projects built from scratch. Flows linearly: Problem → Research → Plan → UI Design (if applicable) → Decompose → Implement → Run Tests → Security Audit (optional) → Deploy.
## Step Reference Table
@@ -9,10 +9,12 @@ Workflow for new projects built from scratch. Flows linearly: Problem → Resear
| 0 | Problem | problem/SKILL.md | Phases 1–4 |
| 1 | Research | research/SKILL.md | Mode A: Phases 1–4 · Mode B: Steps 0–8 |
| 2 | Plan | plan/SKILL.md | Steps 1–6 + Final |
| 2a | UI Design | ui-design/SKILL.md | Phases 0–8 (conditional — UI projects only) |
| 3 | Decompose | decompose/SKILL.md | Steps 1–14 |
| 4 | Implement | implement/SKILL.md | (batch-driven, no fixed sub-steps) |
| 5 | Run Tests | (autopilot-managed) | Unit tests → Blackbox tests |
| 5b | Security Audit | security/SKILL.md | Phases 1–5 (optional) |
| 5c | Performance Test | (autopilot-managed) | Load/stress tests (optional) |
| 6 | Deploy | deploy/SKILL.md | Steps 1–7 |
## Detection Rules
@@ -76,6 +78,37 @@ If `_docs/02_document/` exists but is incomplete (has some artifacts but no `FIN
---
**Step 2a — UI Design (conditional)**
Condition: `_docs/02_document/architecture.md` exists AND the autopilot state does NOT show Step 2a (UI Design) as completed or skipped AND the project is a UI project
**UI Project Detection** — the project is a UI project if ANY of the following are true:
- `package.json` exists in the workspace root or any subdirectory
- `*.html`, `*.jsx`, `*.tsx` files exist in the workspace
- `_docs/02_document/components/` contains a component whose `description.md` mentions UI, frontend, page, screen, dashboard, form, or view
- `_docs/02_document/architecture.md` mentions frontend, UI layer, SPA, or client-side rendering
- `_docs/01_solution/solution.md` mentions frontend, web interface, or user-facing UI
If the project is NOT a UI project → mark Step 2a as `skipped` in the state file and auto-chain to Step 3.
If the project IS a UI project → present using Choose format:
```
══════════════════════════════════════
DECISION REQUIRED: UI project detected — generate mockups?
══════════════════════════════════════
A) Generate UI mockups before decomposition (recommended)
B) Skip — proceed directly to decompose
══════════════════════════════════════
Recommendation: A — mockups before decomposition
produce better task specs for frontend components
══════════════════════════════════════
```
- If user picks A → Read and execute `.cursor/skills/ui-design/SKILL.md`. After completion, auto-chain to Step 3 (Decompose).
- If user picks B → Mark Step 2a as `skipped` in the state file, auto-chain to Step 3 (Decompose).
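Marking a step `skipped` and advancing the pointer could look like this simplified sketch (the real state file has more sections than the key/value lines handled here; the helper name is illustrative):

```python
from pathlib import Path

def mark_skipped(state_path: str, step: str, next_step: str) -> None:
    """Record a skipped step and advance the current-step pointer."""
    path = Path(state_path)
    lines = path.read_text(encoding="utf-8").splitlines() if path.exists() else []
    # Drop the old pointer lines, then write the new ones plus an audit note.
    lines = [l for l in lines if not l.startswith(("step:", "status:"))]
    lines += [f"step: {next_step}", "status: not_started", f"# step {step}: skipped"]
    path.write_text("\n".join(lines) + "\n", encoding="utf-8")
```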
---
**Step 3 — Decompose**
Condition: `_docs/02_document/` contains `architecture.md` AND `_docs/02_document/components/` has at least one component AND `_docs/02_tasks/` does not exist or has no task files (excluding `_dependencies_table.md`)
@@ -142,8 +175,36 @@ Action: Present using Choose format:
---
**Step 5c — Performance Test (optional)**
Condition: the autopilot state shows Step 5b (Security Audit) is completed or skipped AND the autopilot state does NOT show Step 5c (Performance Test) as completed or skipped AND (`_docs/04_deploy/` does not exist or is incomplete)
Action: Present using Choose format:
```
══════════════════════════════════════
DECISION REQUIRED: Run performance/load tests before deploy?
══════════════════════════════════════
A) Run performance tests (recommended for latency-sensitive or high-load systems)
B) Skip — proceed directly to deploy
══════════════════════════════════════
Recommendation: [A or B — base on whether acceptance criteria
include latency, throughput, or load requirements]
══════════════════════════════════════
```
- If user picks A → Run performance tests:
1. Check if `_docs/02_document/tests/performance-tests.md` exists for test scenarios
2. Detect appropriate load testing tool (k6, locust, artillery, wrk, or built-in benchmarks)
3. Execute performance test scenarios against the running system
4. Present results vs acceptance criteria thresholds
5. If thresholds fail → present Choose format: A) Fix and re-run, B) Proceed anyway, C) Abort
6. After completion, auto-chain to Step 6 (Deploy)
- If user picks B → Mark Step 5c as `skipped` in the state file, auto-chain to Step 6 (Deploy).
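Step 4's results-versus-thresholds comparison could be evaluated mechanically; a sketch (metric names and the lower/higher-is-better convention are illustrative assumptions):

```python
def check_thresholds(results: dict, criteria: dict):
    """Compare measured metrics against acceptance-criteria thresholds.

    results / criteria: dicts like {"p95_latency_ms": 180, "rps": 520}.
    Returns (passed, failures) where failures lists human-readable misses.
    """
    failures = []
    for metric, limit in criteria.items():
        measured = results.get(metric)
        if measured is None:
            failures.append(f"{metric}: no measurement")
        elif metric.endswith("_ms") and measured > limit:  # latency: lower is better
            failures.append(f"{metric}: {measured} > {limit}")
        elif not metric.endswith("_ms") and measured < limit:  # throughput: higher is better
            failures.append(f"{metric}: {measured} < {limit}")
    return (not failures, failures)
```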
---
**Step 6 — Deploy**
- Condition: the autopilot state shows Step 5 (Run Tests) is completed AND (Step 5b is completed or skipped) AND (`_docs/04_deploy/` does not exist or is incomplete)
+ Condition: the autopilot state shows Step 5 (Run Tests) is completed AND (Step 5b is completed or skipped) AND (Step 5c is completed or skipped) AND (`_docs/04_deploy/` does not exist or is incomplete)
Action: Read and execute `.cursor/skills/deploy/SKILL.md`
@@ -161,9 +222,11 @@ Action: Report project completion with summary. If the user runs autopilot again
| Problem Gathering | Auto-chain → Research (Mode A) |
| Research (any round) | Auto-chain → Research Decision (ask user: another round or proceed?) |
| Research Decision → proceed | Auto-chain → Plan |
- | Plan | Auto-chain → Decompose |
+ | Plan | Auto-chain → UI Design detection (Step 2a) |
| UI Design (done or skipped) | Auto-chain → Decompose |
| Decompose | **Session boundary** — suggest new conversation before Implement |
| Implement | Auto-chain → Run Tests (Step 5) |
| Run Tests (all pass) | Auto-chain → Security Audit choice (Step 5b) |
- | Security Audit (done or skipped) | Auto-chain → Deploy (Step 6) |
+ | Security Audit (done or skipped) | Auto-chain → Performance Test choice (Step 5c) |
| Performance Test (done or skipped) | Auto-chain → Deploy (Step 6) |
| Deploy | Report completion |
+30 -19
@@ -50,49 +50,56 @@ Rules:
6. Record every user decision in the state file's `Key Decisions` section
7. After the user picks, proceed immediately — no follow-up confirmation unless the choice was destructive
- ## Jira MCP Authentication
+ ## Work Item Tracker Authentication
- Several workflow steps create Jira artifacts (epics, tasks, links). The Jira MCP server must be authenticated **before** any step that writes to Jira.
+ Several workflow steps create work items (epics, tasks, links). The system supports **Jira MCP** and **Azure DevOps MCP** as interchangeable backends. Detect which is configured by listing available MCP servers.
- ### Steps That Require Jira MCP
+ ### Tracker Detection
- | Step | Sub-Step | Jira Action |
- |------|----------|-------------|
- | 2 (Plan) | Step 6 — Jira Epics | Create epics for each component |
- | 2c (Decompose Tests) | Step 1t + Step 3 — All test tasks | Create Jira ticket per task, link to epic |
- | 2f (New Task) | Step 7 — Jira ticket | Create Jira ticket per task, link to epic |
- | 3 (Decompose) | Step 13 — All tasks | Create Jira ticket per task, link to epic |
+ 1. Check for available MCP servers: Jira MCP (`user-Jira-MCP-Server`) or Azure DevOps MCP (`user-AzureDevops`)
+ 2. If both are available, ask the user which to use (Choose format)
+ 3. Record the choice in the state file: `tracker: jira` or `tracker: ado`
+ 4. If neither is available, set `tracker: local` and proceed without external tracking
+ ### Steps That Require Work Item Tracker
+ | Step | Sub-Step | Tracker Action |
+ |------|----------|----------------|
+ | 2 (Plan) | Step 6 — Epics | Create epics for each component |
+ | 2c (Decompose Tests) | Step 1t + Step 3 — All test tasks | Create ticket per task, link to epic |
+ | 2f (New Task) | Step 7 — Ticket | Create ticket per task, link to epic |
+ | 3 (Decompose) | Step 13 — All tasks | Create ticket per task, link to epic |
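The detection steps above reduce to a small decision function; a sketch (server names as listed, with `ask_user` standing in for the Choose-format prompt):

```python
# MCP server names follow the detection list above.
JIRA = "user-Jira-MCP-Server"
ADO = "user-AzureDevops"

def resolve_tracker(available_servers, ask_user=None):
    """Return 'jira', 'ado', or 'local'; defer to the user when both exist."""
    has_jira = JIRA in available_servers
    has_ado = ADO in available_servers
    if has_jira and has_ado:
        return ask_user(["jira", "ado"])  # Choose-format prompt in the real flow
    if has_jira:
        return "jira"
    if has_ado:
        return "ado"
    return "local"  # no tracker configured; proceed with local task files
```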
### Authentication Gate
Before entering **Step 2 (Plan)**, **Step 2c (Decompose Tests)**, **Step 2f (New Task)**, or **Step 3 (Decompose)** for the first time, the autopilot must:
- 1. Call `mcp_auth` on the Jira MCP server
+ 1. Call `mcp_auth` on the detected tracker's MCP server
2. If authentication succeeds → proceed normally
3. If the user **skips** or authentication fails → present using Choose format:
```
══════════════════════════════════════
- Jira MCP authentication failed
+ Tracker authentication failed
══════════════════════════════════════
A) Retry authentication (retry mcp_auth)
- B) Continue without Jira (tasks saved locally only)
+ B) Continue without tracker (tasks saved locally only)
══════════════════════════════════════
- Recommendation: A — Jira IDs drive task referencing,
+ Recommendation: A — Tracker IDs drive task referencing,
dependency tracking, and implementation batching.
- Without Jira, task files use numeric prefixes instead.
+ Without tracker, task files use numeric prefixes instead.
══════════════════════════════════════
```
- If user picks **B** (continue without Jira):
- - Set a flag in the state file: `jira_enabled: false`
- - All skills that would create Jira tickets instead save metadata locally in the task/epic files with `Jira: pending` status
- - Task files keep numeric prefixes (e.g., `01_initial_structure.md`) instead of Jira ID prefixes
+ If user picks **B** (continue without tracker):
+ - Set a flag in the state file: `tracker: local`
+ - All skills that would create tickets instead save metadata locally in the task/epic files with `Tracker: pending` status
+ - Task files keep numeric prefixes (e.g., `01_initial_structure.md`) instead of tracker ID prefixes
- The workflow proceeds normally in all other respects
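The gate plus its local fallback can be sketched as follows (the callables stand in for `mcp_auth` and the Choose prompt, which here returns the picked letter; the retry cap mirrors the flow's retry N/3 convention):

```python
def run_auth_gate(authenticate, ask_choice, state, max_retries=3):
    """Authenticate the tracker MCP before the first tracker-writing step.

    `authenticate` wraps mcp_auth; `ask_choice` presents the Choose prompt
    and returns the picked letter ("A" = retry, "B" = continue locally).
    """
    for _ in range(max_retries + 1):
        if authenticate():
            state["tracker_authenticated"] = True
            return True
        if ask_choice(["A", "B"]) == "B":
            break
    # User gave up or retries exhausted: fall back to local-only tracking.
    state["tracker"] = "local"
    return False
```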
### Re-Authentication
- If Jira MCP was already authenticated in a previous invocation (verify by listing available Jira tools beyond `mcp_auth`), skip the auth gate.
+ If the tracker MCP was already authenticated in a previous invocation (verify by listing available tools beyond `mcp_auth`), skip the auth gate.
## Error Handling
@@ -284,10 +291,12 @@ On every invocation, before executing any skill, present a status summary built
Step 0 Problem [DONE / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 1 Research [DONE (N drafts) / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 2 Plan [DONE / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 2a UI Design [DONE / SKIPPED / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 3 Decompose [DONE (N tasks) / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 4 Implement [DONE / IN PROGRESS (batch M of ~N) / NOT STARTED / FAILED (retry N/3)]
Step 5 Run Tests [DONE (N passed, M failed) / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 5b Security Audit [DONE / SKIPPED / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 5c Performance Test [DONE / SKIPPED / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 6 Deploy [DONE / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
═══════════════════════════════════════════════════
Current: Step N — Name
@@ -308,10 +317,12 @@ On every invocation, before executing any skill, present a status summary built
Step 2c Decompose Tests [DONE (N tasks) / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 2d Implement Tests [DONE / IN PROGRESS (batch M) / NOT STARTED / FAILED (retry N/3)]
Step 2e Refactor [DONE / IN PROGRESS (phase N) / NOT STARTED / FAILED (retry N/3)]
Step 2ea UI Design [DONE / SKIPPED / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 2f New Task [DONE (N tasks) / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 2g Implement [DONE / IN PROGRESS (batch M of ~N) / NOT STARTED / FAILED (retry N/3)]
Step 2h Run Tests [DONE (N passed, M failed) / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 2hb Security Audit [DONE / SKIPPED / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 2hc Performance Test [DONE / SKIPPED / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
Step 2i Deploy [DONE / IN PROGRESS / NOT STARTED / FAILED (retry N/3)]
═══════════════════════════════════════════════════
Current: Step N — Name
+2 -2
@@ -10,8 +10,8 @@ The autopilot persists its state to `_docs/_autopilot_state.md`. This file is th
# Autopilot State
## Current Step
- step: [0-6 or "2b" / "2c" / "2d" / "2e" / "2f" / "2g" / "2h" / "2hb" / "2i" or "5b" or "done"]
- name: [Problem / Research / Plan / Blackbox Test Spec / Decompose Tests / Implement Tests / Refactor / New Task / Implement / Run Tests / Security Audit / Deploy / Decompose / Done]
+ step: [0-6 or "2a" / "2b" / "2c" / "2d" / "2e" / "2ea" / "2f" / "2g" / "2h" / "2hb" / "2hc" / "2i" or "5b" / "5c" or "done"]
+ name: [Problem / Research / Plan / UI Design / Blackbox Test Spec / Decompose Tests / Implement Tests / Refactor / UI Design / New Task / Implement / Run Tests / Security Audit / Performance Test / Deploy / Decompose / Done]
status: [not_started / in_progress / completed / skipped / failed]
sub_step: [optional — sub-skill internal step number + name if interrupted mid-step]
retry_count: [0-3 — number of consecutive auto-retry attempts for current step, reset to 0 on success]
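A filled-in header under the greenfield flow might read like this (the values are illustrative, and only the `Current Step` section of the state file is shown):

```markdown
# Autopilot State

## Current Step
step: 2a
name: UI Design
status: in_progress
sub_step: 2, wireframe review
retry_count: 0
```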
+1 -1
@@ -212,7 +212,7 @@ At the start of execution, create a TodoWrite with all steps (1 through 7). Upda
| Stage | Trigger | Steps | Quality Gate |
|-------|---------|-------|-------------|
| **Lint** | Every push | Run linters per language (black, rustfmt, prettier, dotnet format) | Zero errors |
- | **Test** | Every push | Unit tests, blackbox tests, coverage report | 75%+ coverage |
+ | **Test** | Every push | Unit tests, blackbox tests, coverage report | 75%+ coverage (see `.cursor/rules/cursor-meta.mdc` Quality Thresholds) |
| **Security** | Every push | Dependency audit, SAST scan (Semgrep/SonarQube), image scan (Trivy) | Zero critical/high CVEs |
| **Build** | PR merge to dev | Build Docker images, tag with git SHA | Build succeeds |
| **Push** | After build | Push to container registry | Push succeeds |
+1 -1
@@ -196,7 +196,7 @@ Present using the Choose format for each decision that has meaningful alternativ
**Goal**: Produce the task specification file.
1. Determine the next numeric prefix by scanning TASKS_DIR for existing files
- 2. Write the task file using `templates/task.md`:
+ 2. Write the task file using `.cursor/skills/decompose/templates/task.md`:
- Fill all fields from the gathered information
- Set **Complexity** based on the assessment from Step 2
- Set **Dependencies** by cross-referencing existing tasks in TASKS_DIR
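The prefix scan in sub-step 1 might look like the following sketch (the helper name and zero-padding width are assumptions):

```python
import re
from pathlib import Path

def next_numeric_prefix(tasks_dir: str) -> str:
    """Scan TASKS_DIR for NN_*.md files and return the next zero-padded prefix."""
    highest = 0
    for f in Path(tasks_dir).glob("*.md"):
        m = re.match(r"^(\d+)_", f.name)  # skips _dependencies_table.md etc.
        if m:
            highest = max(highest, int(m.group(1)))
    return f"{highest + 1:02d}"
```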
+2 -113
@@ -1,113 +1,2 @@
# Task Specification Template
Create a focused behavioral specification that describes **what** the system should do, not **how** it should be built.
Save as `TASKS_DIR/[##]_[short_name].md` initially, then rename to `TASKS_DIR/[JIRA-ID]_[short_name].md` after Jira ticket creation.
---
```markdown
# [Feature Name]
**Task**: [JIRA-ID]_[short_name]
**Name**: [short human name]
**Description**: [one-line description of what this task delivers]
**Complexity**: [1|2|3|5] points
**Dependencies**: [AZ-43_shared_models, AZ-44_db_migrations] or "None"
**Component**: [component name for context]
**Jira**: [TASK-ID]
**Epic**: [EPIC-ID]
## Problem
Clear, concise statement of the problem users are facing.
## Outcome
- Measurable or observable goal 1
- Measurable or observable goal 2
- ...
## Scope
### Included
- What's in scope for this task
### Excluded
- Explicitly what's NOT in scope
## Acceptance Criteria
**AC-1: [Title]**
Given [precondition]
When [action]
Then [expected result]
**AC-2: [Title]**
Given [precondition]
When [action]
Then [expected result]
## Non-Functional Requirements
**Performance**
- [requirement if relevant]
**Compatibility**
- [requirement if relevant]
**Reliability**
- [requirement if relevant]
## Unit Tests
| AC Ref | What to Test | Required Outcome |
|--------|-------------|-----------------|
| AC-1 | [test subject] | [expected result] |
## Blackbox Tests
| AC Ref | Initial Data/Conditions | What to Test | Expected Behavior | NFR References |
|--------|------------------------|-------------|-------------------|----------------|
| AC-1 | [setup] | [test subject] | [expected behavior] | [NFR if any] |
## Constraints
- [Architectural pattern constraint if critical]
- [Technical limitation]
- [Integration requirement]
## Risks & Mitigation
**Risk 1: [Title]**
- *Risk*: [Description]
- *Mitigation*: [Approach]
```
---
## Complexity Points Guide
- 1 point: Trivial, self-contained, no dependencies
- 2 points: Non-trivial, low complexity, minimal coordination
- 3 points: Multi-step, moderate complexity, potential alignment needed
- 5 points: Difficult, interconnected logic, medium-high risk
- 8 points: Too complex — split into smaller tasks
## Output Guidelines
**DO:**
- Focus on behavior and user experience
- Use clear, simple language
- Keep acceptance criteria testable (Gherkin format)
- Include realistic scope boundaries
- Write from the user's perspective
- Include complexity estimation
- Reference dependencies by Jira ID (e.g., AZ-43_shared_models)
**DON'T:**
- Include implementation details (file paths, classes, methods)
- Prescribe technical solutions or libraries
- Add architectural diagrams or code examples
- Specify exact API endpoints or data structures
- Include step-by-step implementation instructions
- Add "how to build" guidance
<!-- This skill uses the shared task template at .cursor/skills/decompose/templates/task.md -->
<!-- See that file for the full template structure. -->
@@ -7,7 +7,7 @@ All artifacts are written directly under DOCUMENT_DIR:
```
DOCUMENT_DIR/
├── tests/
- │   ├── test-environment.md
+ │   ├── environment.md
│ ├── test-data.md
│ ├── blackbox-tests.md
│ ├── performance-tests.md
@@ -50,7 +50,7 @@ DOCUMENT_DIR/
| Step | Save immediately after | Filename |
|------|------------------------|----------|
- | Step 1 | Blackbox test environment spec | `tests/test-environment.md` |
+ | Step 1 | Blackbox test environment spec | `tests/environment.md` |
| Step 1 | Blackbox test data spec | `tests/test-data.md` |
| Step 1 | Blackbox tests | `tests/blackbox-tests.md` |
| Step 1 | Blackbox performance tests | `tests/performance-tests.md` |
+2 -1
@@ -46,7 +46,7 @@ The interview is complete when the AI can write ALL of these:
| `problem.md` | Clear problem statement: what is being built, why, for whom, what it does |
| `restrictions.md` | All constraints identified: hardware, software, environment, operational, regulatory, budget, timeline |
| `acceptance_criteria.md` | Measurable success criteria with specific numeric targets grouped by category |
- | `input_data/` | At least one reference data file or detailed data description document |
+ | `input_data/` | At least one reference data file or detailed data description document. Must include `expected_results.md` with input→output pairs for downstream test specification |
| `security_approach.md` | (optional) Security requirements identified, or explicitly marked as not applicable |
## Interview Protocol
@@ -187,6 +187,7 @@ At least one file. Options:
- User provides actual data files (CSV, JSON, images, etc.) — save as-is
- User describes data parameters — save as `data_parameters.md`
- User provides URLs to data — save as `data_sources.md` with links and descriptions
- `expected_results.md` — expected outputs for given inputs (required by downstream test-spec skill). During the Acceptance Criteria dimension, probe for concrete input→output pairs and save them here. Format: use the template from `.cursor/skills/test-spec/templates/expected-results.md`.
### security_approach.md (optional)
+1 -1
@@ -276,7 +276,7 @@ Write `REFACTOR_DIR/analysis/refactoring_roadmap.md`:
#### 3a. Design Test Specs
- Coverage requirements (must meet before refactoring):
+ Coverage requirements (must meet before refactoring — see `.cursor/rules/cursor-meta.mdc` Quality Thresholds):
- Minimum overall coverage: 75%
- Critical path coverage: 90%
- All public APIs must have blackbox tests
+1 -1
@@ -1,5 +1,5 @@
---
- name: deep-research
+ name: research
description: |
Deep Research Methodology (8-Step Method) with two execution modes:
- Mode A (Initial Research): Assess acceptance criteria, then research problem and produce solution draft
+3 -3
@@ -4,7 +4,7 @@ description: |
Collect metrics from implementation batch reports and code review findings, analyze trends across cycles,
and produce improvement reports with actionable recommendations.
3-step workflow: collect metrics, analyze trends, produce report.
- Outputs to _docs/05_metrics/.
+ Outputs to _docs/06_metrics/.
Trigger phrases:
- "retrospective", "retro", "run retro"
- "metrics review", "feedback loop"
@@ -31,7 +31,7 @@ Collect metrics from implementation artifacts, analyze trends across development
Fixed paths:
- IMPL_DIR: `_docs/03_implementation/`
- - METRICS_DIR: `_docs/05_metrics/`
+ - METRICS_DIR: `_docs/06_metrics/`
- TASKS_DIR: `_docs/02_tasks/`
Announce the resolved paths to the user before proceeding.
@@ -166,7 +166,7 @@ Present the report summary to the user.
│ │
│ 1. Collect Metrics → parse batch reports, compute metrics │
│ 2. Analyze Trends → patterns, comparison, improvement areas │
- │  3. Produce Report → _docs/05_metrics/retro_[date].md          │
+ │  3. Produce Report → _docs/06_metrics/retro_[date].md          │
├────────────────────────────────────────────────────────────────┤
│ Principles: Data-driven · Actionable · Cumulative │
│ Non-judgmental · Save immediately │
+1 -1
@@ -175,7 +175,7 @@ At the start of execution, create a TodoWrite with all three phases. Update stat
|------------|--------------------------|---------------|----------------|
| [file/data] | Yes/No | Yes/No | [missing, vague, no tolerance, etc.] |
- 9. Threshold: at least 70% coverage of scenarios AND every covered scenario has a quantifiable expected result
+ 9. Threshold: at least 70% coverage of scenarios AND every covered scenario has a quantifiable expected result (see `.cursor/rules/cursor-meta.mdc` Quality Thresholds table)
10. If coverage is low, search the internet for supplementary data, assess quality with user, and if user agrees, add to `input_data/` and update `input_data/expected_results.md`
11. If expected results are missing or not quantifiable, ask user to provide them before proceeding
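The threshold in step 9 could be evaluated mechanically over the coverage table; a sketch (the row tuples mirror the table's covered/quantifiable columns, and the helper name is illustrative):

```python
def coverage_gate(rows, threshold=0.70):
    """rows: list of (scenario, covered: bool, quantifiable: bool) tuples.

    Passes when at least `threshold` of scenarios are covered AND every
    covered scenario has a quantifiable expected result.
    """
    covered = [r for r in rows if r[1]]
    ratio = len(covered) / len(rows) if rows else 0.0
    all_quantifiable = all(r[2] for r in covered)
    return ratio >= threshold and all_quantifiable
```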