Merge research-skill-approach into dev (taking research-skill-approach tree)

Made-with: Cursor
Oleksandr Bezdieniezhnykh
2026-03-17 18:45:01 +02:00
464 changed files with 15486 additions and 39790 deletions
@@ -0,0 +1,179 @@
## Developer TODO (Project Mode)
### BUILD (green-field or new features)
```
1. Create _docs/00_problem/ — describe what you're building
- problem.md (required)
- restrictions.md (required)
- acceptance_criteria.md (required)
- security_approach.md (optional)
2. /research — produces solution drafts in _docs/01_solution/
Run multiple times: Mode A → draft, Mode B → assess & revise
Finalize as solution.md
3. /plan — architecture, components, risks, tests → _docs/02_plans/
4. /decompose — feature specs, implementation order → _docs/02_tasks/
5. /implement-initial — scaffold project from initial_structure.md (once)
6. /implement-wave — implement next wave of features (repeat per wave)
7. /implement-code-review — review implemented code (after each wave or at the end)
8. /implement-black-box-tests — E2E tests via Docker consumer app (after all waves)
9. commit & push
```
### SHIP (deploy and operate)
```
10. /implement-cicd — validate/enhance CI/CD pipeline
11. /deploy — deployment strategy per environment
12. /observability — monitoring, logging, alerting plan
```
### EVOLVE (maintenance and improvement)
```
13. /refactor — structured refactoring (skill, 6-phase workflow)
```
## Implementation Flow
### `/implement-initial`
Reads `_docs/02_tasks/<topic>/initial_structure.md` and scaffolds the project skeleton: folder structure, shared models, interfaces, stubs, .gitignore, .env.example, CI/CD config, DB migrations setup, test structure.
Run once after decompose.
### `/implement-wave`
Reads `SUMMARY.md` and `cross_dependencies.md` from `_docs/02_tasks/<topic>/`.
1. Detects which features are already implemented
2. Identifies the next wave (phase) of independent features
3. Presents the wave for confirmation (blocks until user confirms)
4. Launches parallel `implementer` subagents (max 4 concurrent; same-component features run sequentially)
5. Runs tests, reports results
6. Suggests commit
Repeat `/implement-wave` until all phases are done.
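A minimal sketch of the wave-selection logic in step 2, assuming a dependency map distilled from `cross_dependencies.md` and an implemented-feature set detected in step 1 (the feature IDs below are hypothetical):
```python
# Hypothetical wave selection: a feature is ready when it is not yet
# implemented and all of its dependencies are already implemented.

def next_wave(dependencies: dict[str, set[str]], implemented: set[str]) -> set[str]:
    return {
        feature
        for feature, deps in dependencies.items()
        if feature not in implemented and deps <= implemented
    }

dependencies = {
    "01.01_feature_models": set(),
    "01.02_feature_repository": {"01.01_feature_models"},
    "02.01_feature_api": {"01.02_feature_repository"},
}
print(next_wave(dependencies, {"01.01_feature_models"}))
# {'01.02_feature_repository'}
```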
### `/implement-code-review`
Reviews implemented code against specs. Reports issues by type (Bug/Security/Performance/Style/Debt) with priorities and suggested fixes.
### `/implement-black-box-tests`
Reads `_docs/02_plans/<topic>/e2e_test_infrastructure.md` (produced by plan skill). Builds a separate Docker-based consumer app that exercises the system as a black box — no internal imports, no direct DB access. Runs E2E scenarios, produces a CSV test report.
Run after all waves are done.
### `/implement-cicd`
Reviews existing CI/CD pipeline configuration, validates all stages work, optimizes performance (parallelization, caching), ensures quality gates are enforced (coverage, linting, security scanning).
Run after `/implement-initial` or after all waves.
### `/deploy`
Defines deployment strategy per environment: deployment procedures, rollback procedures, health checks, deployment checklist. Outputs `_docs/02_components/deployment_strategy.md`.
Run before first production release.
### `/observability`
Plans logging strategy, metrics collection, distributed tracing, alerting rules, and dashboards. Outputs `_docs/02_components/observability_plan.md`.
Run before first production release.
### Commit
After each wave or review — standard `git add && git commit`. The wave command suggests a commit message.
## Available Skills
| Skill | Triggers | Purpose |
|-------|----------|---------|
| **research** | "research", "investigate", "assess solution" | 8-step research → solution drafts |
| **plan** | "plan", "decompose solution" | Architecture, components, risks, tests, epics |
| **decompose** | "decompose", "task decomposition" | Feature specs + implementation order |
| **refactor** | "refactor", "refactoring", "improve code" | 6-phase structured refactoring workflow |
| **security** | "security audit", "OWASP" | OWASP-based security testing |
## Project Folder Structure
```
_docs/
├── 00_problem/
│ ├── problem.md
│ ├── restrictions.md
│ ├── acceptance_criteria.md
│ └── security_approach.md
├── 01_solution/
│ ├── solution_draft01.md
│ ├── solution_draft02.md
│ ├── solution.md
│ ├── tech_stack.md
│ └── security_analysis.md
├── 01_research/
│ └── <topic>/
├── 02_plans/
│ └── <topic>/
│ ├── architecture.md
│ ├── system-flows.md
│ ├── components/
│ └── FINAL_report.md
├── 02_tasks/
│ └── <topic>/
│ ├── initial_structure.md
│ ├── cross_dependencies.md
│ ├── SUMMARY.md
│ └── [##]_[component]/
│ └── [##].[##]_feature_[name].md
└── 04_refactoring/
├── baseline_metrics.md
├── discovery/
├── analysis/
├── test_specs/
├── coupling_analysis.md
├── execution_log.md
├── hardening/
└── FINAL_report.md
```
## Implementation Tools
| Tool | Type | Purpose |
|------|------|---------|
| `implementer` | Subagent | Implements a single feature from its spec. Launched by implement-wave. |
| `/implement-initial` | Command | Scaffolds project skeleton from `initial_structure.md`. Run once. |
| `/implement-wave` | Command | Detects next wave, launches parallel implementers. Repeatable. |
| `/implement-code-review` | Command | Reviews code against specs. |
| `/implement-black-box-tests` | Command | E2E tests via Docker consumer app. After all waves. |
| `/implement-cicd` | Command | Validate and enhance CI/CD pipeline. |
| `/deploy` | Command | Plan deployment strategy per environment. |
| `/observability` | Command | Plan logging, metrics, tracing, alerting. |
## Standalone Mode (Reference)
Any skill can run in standalone mode by passing an explicit file:
```
/research @my_problem.md
/plan @my_design.md
/decompose @some_spec.md
/refactor @some_component.md
```
Output goes to `_standalone/<topic>/` (git-ignored) instead of `_docs/`. Standalone mode relaxes guardrails — only the provided file is required; restrictions and acceptance criteria are optional.
Single component decompose is also supported:
```
/decompose @_docs/02_plans/<topic>/components/03_parser/description.md
```
@@ -0,0 +1,49 @@
---
name: implementer
description: |
Implements a single feature from its spec file. Use when implementing features from _docs/02_tasks/.
Reads the feature spec, analyzes the codebase, implements the feature with tests, and verifies acceptance criteria.
---
You are a professional software developer implementing a single feature.
## Input
You receive a path to a feature spec file (e.g., `_docs/02_tasks/<topic>/[##]_[name]/[##].[##]_feature_[name].md`).
## Context
Read these files for project context:
- `_docs/00_problem/problem.md`
- `_docs/00_problem/restrictions.md`
- `_docs/00_problem/acceptance_criteria.md`
- `_docs/01_solution/solution.md`
## Process
1. Read the feature spec thoroughly — understand acceptance criteria, scope, constraints
2. Analyze the existing codebase: conventions, patterns, related code, shared interfaces
3. Research best implementation approaches for the tech stack if needed
4. If the feature has a dependency on an unimplemented component, create a temporary mock
5. Implement the feature following existing code conventions
6. Implement error handling per the project's defined strategy
7. Implement unit tests (use //Arrange //Act //Assert comments; see the sketch after this list)
8. Implement integration tests — analyze existing tests, add to them or create new
9. Run all tests, fix any failures
10. Verify the implementation satisfies every acceptance criterion from the spec
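A minimal sketch of step 7's convention in Python, where the section markers are `#` comments rather than `//`; `apply_discount` is a hypothetical function under test:
```python
def apply_discount(price: float, percent: float) -> float:
    # Hypothetical stand-in for real feature code
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_reduces_price():
    # Arrange
    price, percent = 100.0, 15.0
    # Act
    result = apply_discount(price, percent)
    # Assert
    assert result == 85.0
```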
## After completion
Report:
- What was implemented
- Which acceptance criteria are satisfied
- Test results (passed/failed)
- Any mocks created for unimplemented dependencies
- Any concerns or deviations from the spec
## Principles
- Follow SOLID, KISS, DRY
- Dumb code, smart data
- No unnecessary comments or logs (only exceptions)
- Ask if requirements are ambiguous — do not assume
@@ -69,4 +69,3 @@ Store output to `_docs/02_components/deployment_strategy.md`
- Zero-downtime deployments for production
- Always have a rollback plan
- Ask questions about infrastructure constraints
@@ -0,0 +1,45 @@
# Implement E2E Black-Box Tests
Build a separate Docker-based consumer application that exercises the main system as a black box, validating end-to-end use cases.
## Input
- E2E test infrastructure spec: `_docs/02_plans/<topic>/e2e_test_infrastructure.md` (produced by plan skill Step 4b)
## Context
- Problem description: `@_docs/00_problem/problem.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Solution: `@_docs/01_solution/solution.md`
- Architecture: `@_docs/02_plans/<topic>/architecture.md`
## Role
You are a professional QA engineer and developer
## Task
- Read the E2E test infrastructure spec thoroughly
- Build the Docker test environment:
- Create docker-compose.yml with all services (system under test, test DB, consumer app, dependency mocks)
- Configure networks and volumes per spec
- Implement the consumer application:
- Separate project/folder that communicates with the main system only through its public interfaces
- No internal imports from the main system, no direct DB access
- Use the tech stack and entry point defined in the spec
- Implement each E2E test scenario from the spec:
- Check existing E2E tests; update if a similar test already exists
- Prepare seed data and fixtures per the test data management section
- Implement teardown/cleanup procedures
- Run the full E2E suite via `docker compose up`
- If tests fail:
- Fix issues iteratively until all pass
- If a failure is caused by missing external data, API access, or environment config, ask the user
- Ensure the E2E suite integrates into the CI pipeline per the spec
- Produce a CSV test report (test ID, name, execution time, result, error message) at the output path defined in the spec (see the report-writer sketch after this list)
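A minimal sketch of the report writer, assuming the columns listed above; the file name and sample row are hypothetical:
```python
import csv

# Columns follow the spec: test ID, name, execution time, result, error message
FIELDS = ["test_id", "name", "execution_time_s", "result", "error_message"]

def write_report(rows: list[dict], path: str = "e2e_report.csv") -> None:
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        writer.writerows(rows)

write_report([{"test_id": "E2E-01", "name": "create order",
               "execution_time_s": 2.4, "result": "PASS", "error_message": ""}])
```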
## Safety Rules
- The consumer app must treat the main system as a true black box
- Never import internal modules or access the main system's database directly
- Docker environment must be self-contained — no host dependencies beyond Docker itself
- If external services need mocking, implement mock/stub services as Docker containers
## Notes
- Ask questions if the spec is ambiguous or incomplete
- If `e2e_test_infrastructure.md` is missing, stop and inform the user to run the plan skill first
@@ -36,4 +36,3 @@
## Notes
- Can also use Cursor's built-in review feature - Can also use Cursor's built-in review feature
- Focus on critical issues first - Focus on critical issues first
@@ -0,0 +1,53 @@
# Implement Initial Structure
## Input
- Structure plan: `_docs/02_tasks/<topic>/initial_structure.md` (produced by decompose skill)
## Context
- Problem description: `@_docs/00_problem/problem.md`
- Restrictions: `@_docs/00_problem/restrictions.md`
- Solution: `@_docs/01_solution/solution.md`
## Role
You are a professional software architect
## Task
- Carefully read the structure plan in `initial_structure.md`
- Execute the plan — create the project skeleton:
- DTOs and shared models
- Component interfaces
- Empty implementations (stubs)
- Helpers — empty implementations or interfaces
- Add .gitignore appropriate for the project's language/framework
- Add .env.example with required environment variables
- Configure CI/CD pipeline per the structure plan stages
- Apply environment strategy (dev, staging, production) per the structure plan
- Add database migration setup if applicable
- Add README.md describing the project based on the solution
- Create test folder structure per the structure plan
- Document branch protection rule recommendations
## Example
The structure should roughly look like this (varies by tech stack):
- .gitignore
- .env.example
- .github/workflows/ (or .gitlab-ci.yml or azure-pipelines.yml)
- api/
- components/
- component1_folder/
- component2_folder/
- db/
- migrations/
- helpers/
- models/
- tests/
- unit/
- integration/
- test_data/
Semantically coherent components may have their own project or subfolder. Common interfaces can be in a shared layer or per-component — follow language conventions.
## Notes
- Follow SOLID, KISS, DRY
- Follow conventions of the project's programming language
- Ask as many questions as needed
+62
View File
@@ -0,0 +1,62 @@
# Implement Next Wave
Identify the next batch of independent features and implement them in parallel using the implementer subagent.
## Prerequisites
- Project scaffolded (`/implement-initial` completed)
- `_docs/02_tasks/<topic>/SUMMARY.md` exists
- `_docs/02_tasks/<topic>/cross_dependencies.md` exists
## Wave Sizing
- One wave = one phase from SUMMARY.md (features whose dependencies are all satisfied)
- Max 4 subagents run concurrently; features in the same component run sequentially
- If a phase has more than 8 features or more than 20 complexity points, suggest splitting into smaller waves and let the user cherry-pick which features to include (see the sizing sketch below)
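A minimal sketch of this sizing rule, with hypothetical feature IDs and point values:
```python
MAX_FEATURES, MAX_POINTS = 8, 20

def needs_split(phase: dict[str, int]) -> bool:
    # phase maps feature ID -> complexity points
    return len(phase) > MAX_FEATURES or sum(phase.values()) > MAX_POINTS

phase = {"02.01_feature_api": 5, "02.02_feature_auth": 3, "03.01_feature_ui": 5}
print(needs_split(phase))  # False: 3 features, 13 points
```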
## Task
1. **Read the implementation plan**
- Read `SUMMARY.md` for the phased implementation order
- Read `cross_dependencies.md` for the dependency graph
2. **Detect current progress**
- Analyze the codebase to determine which features are already implemented
- Match implemented code against feature specs in `_docs/02_tasks/<topic>/`
- Identify the next incomplete wave/phase from the implementation order
3. **Present the wave**
- List all features in this wave with their complexity points
- Show which component each feature belongs to
- Confirm total features and estimated complexity
- If the phase exceeds 8 features or 20 complexity points, recommend splitting and let user select a subset
- **BLOCKING**: Do NOT proceed until user confirms
4. **Launch parallel implementation**
- For each feature in the wave, launch an `implementer` subagent in background
- Each subagent receives the path to its feature spec file
- Features within different components can run in parallel
- Features within the same component should run sequentially to avoid file conflicts
5. **Monitor and report**
- Wait for all subagents to complete
- Collect results from each: what was implemented, test results, any issues
- Run the full test suite
- Report summary:
- Features completed successfully
- Features that failed or need manual attention
- Test results (passed/failed/skipped)
- Any mocks created for future-wave dependencies
6. **Post-wave actions**
- Suggest: `git add . && git commit` with a wave-level commit message
- If all features passed: "Ready for next wave. Run `/implement-wave` again."
- If some failed: "Fix the failing features before proceeding to the next wave."
## Safety Rules
- Never launch features whose dependencies are not yet implemented
- Features within the same component run sequentially, not in parallel (see the scheduling sketch below)
- If a subagent fails, do NOT retry automatically — report and let user decide
- Always run tests after the wave completes, before suggesting commit
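A minimal sketch of these scheduling constraints in Python; `implement` is a hypothetical stand-in for launching an implementer subagent:
```python
import asyncio
from collections import defaultdict

MAX_CONCURRENT = 4  # at most 4 implementers at once

async def implement(feature: str) -> None:
    await asyncio.sleep(0.1)  # placeholder for real subagent work
    print(f"done: {feature}")

async def run_wave(features: dict[str, str]) -> None:
    # features maps feature ID -> component; same-component features serialize
    semaphore = asyncio.Semaphore(MAX_CONCURRENT)
    locks: dict[str, asyncio.Lock] = defaultdict(asyncio.Lock)

    async def run(feature: str, component: str) -> None:
        async with semaphore, locks[component]:
            await implement(feature)

    await asyncio.gather(*(run(f, c) for f, c in features.items()))

asyncio.run(run_wave({"01.01_models": "01_core", "01.02_repo": "01_core",
                      "02.01_api": "02_api"}))
```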
## Notes
- Ask questions if the implementation order is ambiguous
- If SUMMARY.md or cross_dependencies.md is missing, stop and inform the user to run the decompose skill first
@@ -120,4 +120,3 @@ Store output to `_docs/02_components/observability_plan.md`
- Balance verbosity with cost
- Ensure PII is not logged
- Plan for log rotation and retention
@@ -0,0 +1,22 @@
---
description: Coding rules
alwaysApply: true
---
# Coding preferences
- Always prefer the simplest solution
- Generate concise code
- Do not put comments in the code
- Do not add logs unless handling an exception, or specifically asked
- Do not add code annotations unless specifically asked
- Write code that accounts for the different environments: development and production
- Only make changes that are requested, or that you are confident are well understood and related to the requested change
- Mock data only in tests; never mock data for the dev or prod environment
- When adding new libraries or dependencies, use the same version already used elsewhere in the codebase
- Focus on the areas of code relevant to the task
- Do not touch code that is unrelated to the task
- Always think about what other methods and areas of code might be affected by your changes
- When you think you are done with changes, run the tests and make sure they pass
- Do not rename databases, tables, or table columns without confirmation; avoid such renames if possible
- Do not create diagrams unless I ask explicitly
- Never commit binaries: create and keep .gitignore up to date, and delete binaries after you finish the task
@@ -0,0 +1,9 @@
---
description: Techstack
alwaysApply: true
---
# Tech Stack
- Use a Postgres database
- For the backend, prefer .NET or Python depending on the task; Rust for more specialized needs
- For the frontend, use React with Tailwind CSS (or plain CSS for a simple project)
- Document APIs with OpenAPI
@@ -0,0 +1,281 @@
---
name: decompose
description: |
Decompose planned components into atomic implementable features with bootstrap structure plan.
4-step workflow: bootstrap structure plan, feature decomposition, cross-component verification, and Jira task creation.
Supports project mode (_docs/ structure), single component mode, and standalone mode (@file.md).
Trigger phrases:
- "decompose", "decompose features", "feature decomposition"
- "task decomposition", "break down components"
- "prepare for implementation"
disable-model-invocation: true
---
# Feature Decomposition
Decompose planned components into atomic, implementable feature specs with a bootstrap structure plan through a systematic workflow.
## Core Principles
- **Atomic features**: each feature does one thing; if it exceeds 5 complexity points, split it
- **Behavioral specs, not implementation plans**: describe what the system should do, not how to build it
- **Save immediately**: write artifacts to disk after each component; never accumulate unsaved work
- **Ask, don't assume**: when requirements are ambiguous, ask the user before proceeding
- **Plan, don't code**: this workflow produces documents and Jira tasks, never implementation code
## Context Resolution
Determine the operating mode based on invocation before any other logic runs.
**Full project mode** (no explicit input file provided):
- PLANS_DIR: `_docs/02_plans/`
- TASKS_DIR: `_docs/02_tasks/`
- Reads from: `_docs/00_problem/`, `_docs/01_solution/`, PLANS_DIR
- Runs Step 1 (bootstrap) + Step 2 (all components) + Step 3 (cross-verification) + Step 4 (Jira)
**Single component mode** (provided file is within `_docs/02_plans/` and inside a `components/` subdirectory):
- PLANS_DIR: `_docs/02_plans/`
- TASKS_DIR: `_docs/02_tasks/`
- Derive `<topic>`, component number, and component name from the file path
- Ask user for the parent Epic ID
- Runs Step 2 (that component only) + Step 4 (Jira)
- Overwrites existing feature files in that component's TASKS_DIR subdirectory
**Standalone mode** (explicit input file provided, not within `_docs/02_plans/`):
- INPUT_FILE: the provided file (treated as a component spec)
- Derive `<topic>` from the input filename (without extension)
- TASKS_DIR: `_standalone/<topic>/tasks/`
- Guardrails relaxed: only INPUT_FILE must exist and be non-empty
- Ask user for the parent Epic ID
- Runs Step 2 (that component only) + Step 4 (Jira)
Announce the detected mode and resolved paths to the user before proceeding.
## Input Specification
### Required Files
**Full project mode:**
| File | Purpose |
|------|---------|
| `_docs/00_problem/problem.md` | Problem description and context |
| `_docs/00_problem/restrictions.md` | Constraints and limitations (if available) |
| `_docs/00_problem/acceptance_criteria.md` | Measurable acceptance criteria (if available) |
| `_docs/01_solution/solution.md` | Finalized solution |
| `PLANS_DIR/<topic>/architecture.md` | Architecture from plan skill |
| `PLANS_DIR/<topic>/system-flows.md` | System flows from plan skill |
| `PLANS_DIR/<topic>/components/[##]_[name]/description.md` | Component specs from plan skill |
**Single component mode:**
| File | Purpose |
|------|---------|
| The provided component `description.md` | Component spec to decompose |
| Corresponding `tests.md` in the same directory (if available) | Test specs for context |
**Standalone mode:**
| File | Purpose |
|------|---------|
| INPUT_FILE (the provided file) | Component spec to decompose |
### Prerequisite Checks (BLOCKING)
**Full project mode:**
1. At least one `<topic>/` directory exists under PLANS_DIR with `architecture.md` and `components/` — **STOP if missing**
2. If multiple topics exist, ask user which one to decompose
3. Create TASKS_DIR if it does not exist
4. If `TASKS_DIR/<topic>/` already exists, ask user: **resume from last checkpoint or start fresh?**
**Single component mode:**
1. The provided component file exists and is non-empty — **STOP if missing**
2. Create the component's subdirectory under TASKS_DIR if it does not exist
**Standalone mode:**
1. INPUT_FILE exists and is non-empty — **STOP if missing**
2. Create TASKS_DIR if it does not exist
## Artifact Management
### Directory Structure
```
TASKS_DIR/<topic>/
├── initial_structure.md (Step 1, full mode only)
├── cross_dependencies.md (Step 3, full mode only)
├── SUMMARY.md (final)
├── [##]_[component_name]/
│ ├── [##].[##]_feature_[feature_name].md
│ ├── [##].[##]_feature_[feature_name].md
│ └── ...
├── [##]_[component_name]/
│ └── ...
└── ...
```
### Save Timing
| Step | Save immediately after | Filename |
|------|------------------------|----------|
| Step 1 | Bootstrap structure plan complete | `initial_structure.md` |
| Step 2 | Each component decomposed | `[##]_[name]/[##].[##]_feature_[feature_name].md` |
| Step 3 | Cross-component verification complete | `cross_dependencies.md` |
| Step 4 | Jira tasks created | Jira via MCP |
| Final | All steps complete | `SUMMARY.md` |
### Resumability
If `TASKS_DIR/<topic>/` already contains artifacts:
1. List existing files and match them to the save timing table
2. Identify the last completed component based on which feature files exist
3. Resume from the next incomplete component
4. Inform the user which components are being skipped
## Progress Tracking
At the start of execution, create a TodoWrite with all applicable steps. Update status as each step/component completes.
## Workflow
### Step 1: Bootstrap Structure Plan (full project mode only)
**Role**: Professional software architect
**Goal**: Produce `initial_structure.md` describing the project skeleton for implementation
**Constraints**: This is a plan document, not code. The `implement-initial` command executes it.
1. Read architecture.md, all component specs, and system-flows.md from PLANS_DIR
2. Read problem, solution, and restrictions from `_docs/00_problem/` and `_docs/01_solution/`
3. Research best implementation patterns for the identified tech stack
4. Document the structure plan using `templates/initial-structure.md`
**Self-verification**:
- [ ] All components have corresponding folders in the layout
- [ ] All inter-component interfaces have DTOs defined
- [ ] CI/CD stages cover build, lint, test, security, deploy
- [ ] Environment strategy covers dev, staging, production
- [ ] Test structure includes unit and integration test locations
**Save action**: Write `initial_structure.md`
**BLOCKING**: Present structure plan summary to user. Do NOT proceed until user confirms.
---
### Step 2: Feature Decomposition (all modes)
**Role**: Professional software architect
**Goal**: Decompose each component into atomic, implementable feature specs
**Constraints**: Behavioral specs only — describe what, not how. No implementation code.
For each component (or the single provided component):
1. Read the component's `description.md` and `tests.md` (if available)
2. Decompose into atomic features; create only 1 feature if the component is simple or atomic
3. Split into multiple features only when necessary and when doing so makes implementation easier
4. Do not create features of other components — only features of the current component
5. Each feature should be atomic, exposing either no APIs or a set of semantically connected APIs
6. Write each feature spec using `templates/feature-spec.md`
7. Estimate complexity per feature (1, 2, 3, 5 points); no feature should exceed 5 points — split if it does
8. Note feature dependencies (within component and cross-component)
**Self-verification** (per component):
- [ ] Every feature is atomic (single concern)
- [ ] No feature exceeds 5 complexity points
- [ ] Feature dependencies are noted
- [ ] Features cover all interfaces defined in the component spec
- [ ] No features duplicate work from other components
**Save action**: Write each `[##]_[name]/[##].[##]_feature_[feature_name].md`
---
### Step 3: Cross-Component Verification (full project mode only)
**Role**: Professional software architect and analyst
**Goal**: Verify feature consistency across all components
**Constraints**: Review step — fix gaps found, do not add new features
1. Verify feature dependencies across all components are consistent
2. Check no gaps: every interface in architecture.md has features covering it
3. Check no overlaps: features don't duplicate work across components
4. Produce dependency matrix showing cross-component feature dependencies
5. Determine recommended implementation order based on dependencies
**Self-verification**:
- [ ] Every architecture interface is covered by at least one feature
- [ ] No circular feature dependencies across components (see the cycle-check sketch below)
- [ ] Cross-component dependencies are explicitly noted in affected feature specs
**Save action**: Write `cross_dependencies.md`
**BLOCKING**: Present cross-component summary to user. Do NOT proceed until user confirms.
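A minimal sketch of the circular-dependency check, assuming the matrix is loaded as a mapping from feature ID to its dependencies (the IDs are hypothetical):
```python
def has_cycle(graph: dict[str, set[str]]) -> bool:
    visiting: set[str] = set()
    done: set[str] = set()

    def visit(node: str) -> bool:
        if node in done:
            return False
        if node in visiting:
            return True  # back edge: a dependency cycle exists
        visiting.add(node)
        if any(visit(dep) for dep in graph.get(node, ())):
            return True
        visiting.remove(node)
        done.add(node)
        return False

    return any(visit(node) for node in graph)

graph = {"01.01": {"02.01"}, "02.01": {"01.01"}}  # mutual dependency
print(has_cycle(graph))  # True
```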
---
### Step 4: Jira Tasks (all modes)
**Role**: Professional product manager
**Goal**: Create Jira tasks from feature specs under the appropriate parent epics
**Constraints**: Be concise — fewer words with the same meaning is better
1. For each feature spec, create a Jira task following the parsing rules and field mapping from `gen_jira_task_and_branch.md` (skip branch creation and file renaming — those happen during implementation)
2. In full mode: search Jira for epics matching component names/labels to find parent epic IDs
3. In single component mode: use the Epic ID obtained during context resolution
4. In standalone mode: use the Epic ID obtained during context resolution
5. Do NOT create git branches or rename files — that happens during implementation
**Self-verification**:
- [ ] Every feature has a corresponding Jira task
- [ ] Every task is linked to the correct parent epic
- [ ] Task descriptions match feature spec content
**Save action**: Jira tasks created via MCP
---
## Summary Report
After all steps complete, write `SUMMARY.md` using `templates/summary.md` as structure.
## Common Mistakes
- **Coding during decomposition**: this workflow produces specs, never code
- **Over-splitting**: don't create many features if the component is simple — 1 feature is fine
- **Features exceeding 5 points**: split them; no feature should be too complex for a single task
- **Cross-component features**: each feature belongs to exactly one component
- **Skipping BLOCKING gates**: never proceed past a BLOCKING marker without user confirmation
- **Creating git branches**: branch creation is an implementation concern, not a decomposition one
## Escalation Rules
| Situation | Action |
|-----------|--------|
| Ambiguous component boundaries | ASK user |
| Feature complexity exceeds 5 points after splitting | ASK user |
| Missing component specs in PLANS_DIR | ASK user |
| Cross-component dependency conflict | ASK user |
| Jira epic not found for a component | ASK user for Epic ID |
| Component naming | PROCEED, confirm at next BLOCKING gate |
## Methodology Quick Reference
```
┌────────────────────────────────────────────────────────────────┐
│ Feature Decomposition (4-Step Method) │
├────────────────────────────────────────────────────────────────┤
│ CONTEXT: Resolve mode (full / single component / standalone) │
│ 1. Bootstrap Structure → initial_structure.md (full only) │
│ [BLOCKING: user confirms structure] │
│ 2. Feature Decompose → [##]_[name]/[##].[##]_feature_* │
│ 3. Cross-Verification → cross_dependencies.md (full only) │
│ [BLOCKING: user confirms dependencies] │
│ 4. Jira Tasks → Jira via MCP │
│ ───────────────────────────────────────────────── │
│ Summary → SUMMARY.md │
├────────────────────────────────────────────────────────────────┤
│ Principles: Atomic features · Behavioral specs · Save now │
│ Ask don't assume · Plan don't code │
└────────────────────────────────────────────────────────────────┘
```
@@ -0,0 +1,108 @@
# Feature Specification Template
Create a focused behavioral specification that describes **what** the system should do, not **how** it should be built.
Save as `TASKS_DIR/<topic>/[##]_[component_name]/[##].[##]_feature_[feature_name].md`.
---
```markdown
# [Feature Name]
**Status**: Draft | **Date**: [YYYY-MM-DD] | **Feature**: [Brief Feature Description]
**Complexity**: [1|2|3|5] points
**Dependencies**: [List dependent features or "None"]
**Component**: [##]_[component_name]
## Problem
Clear, concise statement of the problem users are facing.
## Outcome
- Measurable or observable goal 1
- Measurable or observable goal 2
## Scope
### Included
- What's in scope for this feature
### Excluded
- Explicitly what's NOT in scope
## Acceptance Criteria
**AC-1: [Title]**
Given [precondition]
When [action]
Then [expected result]
**AC-2: [Title]**
Given [precondition]
When [action]
Then [expected result]
## Non-Functional Requirements
**Performance**
- [requirement if relevant]
**Compatibility**
- [requirement if relevant]
**Reliability**
- [requirement if relevant]
## Unit Tests
| AC Ref | What to Test | Required Outcome |
|--------|-------------|-----------------|
| AC-1 | [test subject] | [expected result] |
## Integration Tests
| AC Ref | Initial Data/Conditions | What to Test | Expected Behavior | NFR References |
|--------|------------------------|-------------|-------------------|----------------|
| AC-1 | [setup] | [test subject] | [expected behavior] | [NFR if any] |
## Constraints
- [Architectural pattern constraint if critical]
- [Technical limitation]
- [Integration requirement]
## Risks & Mitigation
**Risk 1: [Title]**
- *Risk*: [Description]
- *Mitigation*: [Approach]
```
---
## Complexity Points Guide
- 1 point: Trivial, self-contained, no dependencies
- 2 points: Non-trivial, low complexity, minimal coordination
- 3 points: Multi-step, moderate complexity, potential alignment needed
- 5 points: Difficult, interconnected logic, medium-high risk
- 8 points: Too complex — split into smaller features
## Output Guidelines
**DO:**
- Focus on behavior and user experience
- Use clear, simple language
- Keep acceptance criteria testable (Gherkin format)
- Include realistic scope boundaries
- Write from the user's perspective
- Include complexity estimation
- Note dependencies on other features
**DON'T:**
- Include implementation details (file paths, classes, methods)
- Prescribe technical solutions or libraries
- Add architectural diagrams or code examples
- Specify exact API endpoints or data structures
- Include step-by-step implementation instructions
- Add "how to build" guidance
@@ -0,0 +1,113 @@
# Initial Structure Plan Template
Use this template for the bootstrap structure plan. Save as `TASKS_DIR/<topic>/initial_structure.md`.
---
```markdown
# Initial Project Structure Plan
**Date**: [YYYY-MM-DD]
**Tech Stack**: [language, framework, database, etc.]
**Source**: architecture.md, component specs from _docs/02_plans/<topic>/
## Project Folder Layout
```
project-root/
├── [folder structure based on tech stack and components]
└── ...
```
### Layout Rationale
[Brief explanation of why this structure was chosen — language conventions, framework patterns, etc.]
## DTOs and Interfaces
### Shared DTOs
| DTO Name | Used By Components | Fields Summary |
|----------|-------------------|---------------|
| [name] | [component list] | [key fields] |
### Component Interfaces
| Component | Interface | Methods | Exposed To |
|-----------|-----------|---------|-----------|
| [##]_[name] | [InterfaceName] | [method list] | [consumers] |
## CI/CD Pipeline
| Stage | Purpose | Trigger |
|-------|---------|---------|
| Build | Compile/bundle the application | Every push |
| Lint / Static Analysis | Code quality and style checks | Every push |
| Unit Tests | Run unit test suite | Every push |
| Integration Tests | Run integration test suite | Every push |
| Security Scan | SAST / dependency check | Every push |
| Deploy to Staging | Deploy to staging environment | Merge to staging branch |
### Pipeline Configuration Notes
[Framework-specific notes: CI tool, runners, caching, parallelism, etc.]
## Environment Strategy
| Environment | Purpose | Configuration Notes |
|-------------|---------|-------------------|
| Development | Local development | [local DB, mock services, debug flags] |
| Staging | Pre-production testing | [staging DB, staging services, production-like config] |
| Production | Live system | [production DB, real services, optimized config] |
### Environment Variables
| Variable | Dev | Staging | Production | Description |
|----------|-----|---------|------------|-------------|
| [VAR_NAME] | [value/source] | [value/source] | [value/source] | [purpose] |
## Database Migration Approach
**Migration tool**: [tool name]
**Strategy**: [migration strategy — e.g., versioned scripts, ORM migrations]
### Initial Schema
[Key tables/collections that need to be created, referencing component data access patterns]
## Test Structure
```
tests/
├── unit/
│ ├── [component_1]/
│ ├── [component_2]/
│ └── ...
├── integration/
│ ├── test_data/
│ └── [test files]
└── ...
```
### Test Configuration Notes
[Test runner, fixtures, test data management, isolation strategy]
## Implementation Order
| Order | Component | Reason |
|-------|-----------|--------|
| 1 | [##]_[name] | [why first — foundational, no dependencies] |
| 2 | [##]_[name] | [depends on #1] |
| ... | ... | ... |
```
---
## Guidance Notes
- This is a PLAN document, not code. The `/implement-initial` command executes it.
- Focus on structure and organization decisions, not implementation details.
- Reference component specs for interface and DTO details — don't repeat everything.
- The folder layout should follow conventions of the identified tech stack.
- Environment strategy should account for secrets management and configuration.
@@ -0,0 +1,59 @@
# Decomposition Summary Template
Use this template after all steps complete. Save as `TASKS_DIR/<topic>/SUMMARY.md`.
---
```markdown
# Decomposition Summary
**Date**: [YYYY-MM-DD]
**Topic**: [topic name]
**Total Components**: [N]
**Total Features**: [N]
**Total Complexity Points**: [N]
## Component Breakdown
| # | Component | Features | Total Points | Jira Epic |
|---|-----------|----------|-------------|-----------|
| 01 | [name] | [count] | [sum] | [EPIC-ID] |
| 02 | [name] | [count] | [sum] | [EPIC-ID] |
| ... | ... | ... | ... | ... |
## Feature List
| Component | Feature | Complexity | Jira Task | Dependencies |
|-----------|---------|-----------|-----------|-------------|
| [##]_[name] | [##].[##]_feature_[name] | [points] | [TASK-ID] | [deps or "None"] |
| ... | ... | ... | ... | ... |
## Implementation Order
Recommended sequence based on dependency analysis:
| Phase | Components / Features | Rationale |
|-------|----------------------|-----------|
| 1 | [list] | [foundational, no dependencies] |
| 2 | [list] | [depends on phase 1] |
| 3 | [list] | [depends on phase 1-2] |
| ... | ... | ... |
### Parallelization Opportunities
[Features/components that can be implemented concurrently within each phase]
## Cross-Component Dependencies
| From (Feature) | To (Feature) | Dependency Type |
|----------------|-------------|-----------------|
| [comp.feature] | [comp.feature] | [data / API / event] |
| ... | ... | ... |
## Artifacts Produced
- `initial_structure.md` — project skeleton plan
- `cross_dependencies.md` — dependency matrix
- `[##]_[name]/[##].[##]_feature_*.md` — feature specs per component
- Jira tasks created under respective epics
```
@@ -0,0 +1,393 @@
---
name: plan
description: |
Decompose a solution into architecture, system flows, components, tests, and Jira epics.
Systematic 5-step planning workflow with BLOCKING gates, self-verification, and structured artifact management.
Supports project mode (_docs/ + _docs/02_plans/ structure) and standalone mode (@file.md).
Trigger phrases:
- "plan", "decompose solution", "architecture planning"
- "break down the solution", "create planning documents"
- "component decomposition", "solution analysis"
disable-model-invocation: true
---
# Solution Planning
Decompose a problem and solution into architecture, system flows, components, tests, and Jira epics through a systematic 5-step workflow.
## Core Principles
- **Single Responsibility**: each component does one thing well; do not spread related logic across components
- **Dumb code, smart data**: keep logic simple, push complexity into data structures and configuration
- **Save immediately**: write artifacts to disk after each step; never accumulate unsaved work
- **Ask, don't assume**: when requirements are ambiguous, ask the user before proceeding
- **Plan, don't code**: this workflow produces documents and specs, never implementation code
## Context Resolution
Determine the operating mode based on invocation before any other logic runs.
**Project mode** (no explicit input file provided):
- PROBLEM_FILE: `_docs/00_problem/problem.md`
- SOLUTION_FILE: `_docs/01_solution/solution.md`
- PLANS_DIR: `_docs/02_plans/`
- All existing guardrails apply as-is.
**Standalone mode** (explicit input file provided, e.g. `/plan @some_doc.md`):
- INPUT_FILE: the provided file (treated as combined problem + solution context)
- Derive `<topic>` from the input filename (without extension)
- PLANS_DIR: `_standalone/<topic>/plans/`
- Guardrails relaxed: only INPUT_FILE must exist and be non-empty
- `acceptance_criteria.md` and `restrictions.md` are optional — warn if absent
Announce the detected mode and resolved paths to the user before proceeding.
## Input Specification
### Required Files
**Project mode:**
| File | Purpose |
|------|---------|
| PROBLEM_FILE (`_docs/00_problem/problem.md`) | Problem description and context |
| `_docs/00_problem/input_data/` | Reference data examples (if available) |
| `_docs/00_problem/restrictions.md` | Constraints and limitations (if available) |
| `_docs/00_problem/acceptance_criteria.md` | Measurable acceptance criteria (if available) |
| SOLUTION_FILE (`_docs/01_solution/solution.md`) | Solution draft to decompose |
**Standalone mode:**
| File | Purpose |
|------|---------|
| INPUT_FILE (the provided file) | Combined problem + solution context |
### Prerequisite Checks (BLOCKING)
**Project mode:**
1. PROBLEM_FILE exists and is non-empty — **STOP if missing**
2. SOLUTION_FILE exists and is non-empty — **STOP if missing**
3. Create PLANS_DIR if it does not exist
4. If `PLANS_DIR/<topic>/` already exists, ask user: **resume from last checkpoint or start fresh?**
**Standalone mode:**
1. INPUT_FILE exists and is non-empty — **STOP if missing**
2. Warn if no `restrictions.md` or `acceptance_criteria.md` provided alongside INPUT_FILE
3. Create PLANS_DIR if it does not exist
4. If `PLANS_DIR/<topic>/` already exists, ask user: **resume from last checkpoint or start fresh?**
## Artifact Management
### Directory Structure
At the start of planning, create a topic-named working directory under PLANS_DIR:
```
PLANS_DIR/<topic>/
├── architecture.md
├── system-flows.md
├── risk_mitigations.md
├── risk_mitigations_02.md (iterative, ## as sequence)
├── components/
│ ├── 01_[name]/
│ │ ├── description.md
│ │ └── tests.md
│ ├── 02_[name]/
│ │ ├── description.md
│ │ └── tests.md
│ └── ...
├── common-helpers/
│ ├── 01_helper_[name]/
│ ├── 02_helper_[name]/
│ └── ...
├── e2e_test_infrastructure.md
├── diagrams/
│ ├── components.drawio
│ └── flows/
│ ├── flow_[name].md (Mermaid)
│ └── ...
└── FINAL_report.md
```
### Save Timing
| Step | Save immediately after | Filename |
|------|------------------------|----------|
| Step 1 | Architecture analysis complete | `architecture.md` |
| Step 1 | System flows documented | `system-flows.md` |
| Step 2 | Each component analyzed | `components/[##]_[name]/description.md` |
| Step 2 | Common helpers generated | `common-helpers/[##]_helper_[name].md` |
| Step 2 | Diagrams generated | `diagrams/` |
| Step 3 | Risk assessment complete | `risk_mitigations.md` |
| Step 4 | Tests written per component | `components/[##]_[name]/tests.md` |
| Step 4b | E2E test infrastructure spec | `e2e_test_infrastructure.md` |
| Step 5 | Epics created in Jira | Jira via MCP |
| Final | All steps complete | `FINAL_report.md` |
### Save Principles
1. **Save immediately**: write to disk as soon as a step completes; do not wait until the end
2. **Incremental updates**: same file can be updated multiple times; append or replace
3. **Preserve process**: keep all intermediate files even after integration into final report
4. **Enable recovery**: if interrupted, resume from the last saved artifact (see Resumability)
### Resumability
If `PLANS_DIR/<topic>/` already contains artifacts:
1. List existing files and match them to the save timing table above
2. Identify the last completed step based on which artifacts exist
3. Resume from the next incomplete step
4. Inform the user which steps are being skipped
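A minimal sketch of how this detection could work, mapping saved artifacts to steps per the save timing table; the glob patterns are illustrative:
```python
from pathlib import Path

# Artifact patterns per step, mirroring the save timing table
STEP_ARTIFACTS = [
    ("Step 1", "architecture.md"),
    ("Step 1", "system-flows.md"),
    ("Step 2", "components/*/description.md"),
    ("Step 3", "risk_mitigations.md"),
    ("Step 4", "components/*/tests.md"),
    ("Step 4b", "e2e_test_infrastructure.md"),
]

def resume_point(topic_dir: Path) -> str:
    # Return the first step whose artifact is missing
    for step, pattern in STEP_ARTIFACTS:
        if not list(topic_dir.glob(pattern)):
            return step
    return "Step 5 (Jira), then FINAL_report.md"

print(resume_point(Path("_docs/02_plans/my_topic")))
```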
## Progress Tracking
At the start of execution, create a TodoWrite with all steps (1 through 5, including 4b). Update status as each step completes.
## Workflow
### Step 1: Solution Analysis
**Role**: Professional software architect
**Goal**: Produce `architecture.md` and `system-flows.md` from the solution draft
**Constraints**: No code, no component-level detail yet; focus on system-level view
1. Read all input files thoroughly
2. Research unknown or questionable topics on the internet; ask the user about ambiguities
3. Document architecture using `templates/architecture.md` as structure
4. Document system flows using `templates/system-flows.md` as structure
**Self-verification**:
- [ ] Architecture covers all capabilities mentioned in solution.md
- [ ] System flows cover all main user/system interactions
- [ ] No contradictions with problem.md or restrictions.md
- [ ] Technology choices are justified
**Save action**: Write `architecture.md` and `system-flows.md`
**BLOCKING**: Present architecture summary to user. Do NOT proceed until user confirms.
---
### Step 2: Component Decomposition
**Role**: Professional software architect
**Goal**: Decompose the architecture into components with detailed specs
**Constraints**: No code; only names, interfaces, inputs/outputs. Follow SRP strictly.
1. Identify components from the architecture; think about separation, reusability, and communication patterns
2. If additional components are needed (data preparation, shared helpers), create them
3. For each component, write a spec using `templates/component-spec.md` as structure
4. Generate diagrams:
- draw.io component diagram showing relations (minimize line intersections, group semantically coherent components, place external users near their components)
- Mermaid flowchart per main control flow
5. Components can share and reuse common logic; place such shared logic, used by multiple components, in the `common-helpers/` folder
**Self-verification**:
- [ ] Each component has a single, clear responsibility
- [ ] No functionality is spread across multiple components
- [ ] All inter-component interfaces are defined (who calls whom, with what)
- [ ] Component dependency graph has no circular dependencies
- [ ] All components from architecture.md are accounted for
**Save action**: Write:
- each component spec `components/[##]_[name]/description.md`
- common helpers `common-helpers/[##]_helper_[name].md`
- diagrams `diagrams/`
**BLOCKING**: Present component list with one-line summaries to user. Do NOT proceed until user confirms.
---
### Step 3: Architecture Review & Risk Assessment
**Role**: Professional software architect and analyst
**Goal**: Validate all artifacts for consistency, then identify and mitigate risks
**Constraints**: This is a review step — fix problems found, do not add new features
#### 3a. Evaluator Pass (re-read ALL artifacts)
Review checklist:
- [ ] All components follow Single Responsibility Principle
- [ ] All components follow dumb code / smart data principle
- [ ] Inter-component interfaces are consistent (caller's output matches callee's input)
- [ ] No circular dependencies in the dependency graph
- [ ] No missing interactions between components
- [ ] No over-engineering — is there a simpler decomposition?
- [ ] Security considerations addressed in component design
- [ ] Performance bottlenecks identified
- [ ] API contracts are consistent across components
Fix any issues found before proceeding to risk identification.
#### 3b. Risk Identification
1. Identify technical and project risks
2. Assess probability and impact using `templates/risk-register.md`
3. Define mitigation strategies
4. Apply mitigations to architecture, flows, and component documents where applicable
**Self-verification**:
- [ ] Every High/Critical risk has a concrete mitigation strategy
- [ ] Mitigations are reflected in the relevant component or architecture docs
- [ ] No new risks introduced by the mitigations themselves
**Save action**: Write `risk_mitigations.md`
**BLOCKING**: Present risk summary to user. Ask whether assessment is sufficient.
**Iterative**: If user requests another round, repeat Step 3 and write `risk_mitigations_##.md` (## as sequence number). Continue until user confirms.
---
### Step 4: Test Specifications
**Role**: Professional Quality Assurance Engineer
**Goal**: Write test specs for each component achieving minimum 75% acceptance criteria coverage
**Constraints**: Test specs only — no test code. Each test must trace to an acceptance criterion.
1. For each component, write tests using `templates/test-spec.md` as structure
2. Cover all 4 types: integration, performance, security, acceptance
3. Include test data management (setup, teardown, isolation)
4. Verify traceability: every acceptance criterion from `acceptance_criteria.md` must be covered by at least one test
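A minimal sketch of that traceability check; the criterion IDs and test-to-criteria mapping are hypothetical:
```python
# Every acceptance criterion must be referenced by at least one test
def uncovered(criteria: set[str], tests: dict[str, set[str]]) -> set[str]:
    covered: set[str] = set().union(*tests.values()) if tests else set()
    return criteria - covered

criteria = {"AC-1", "AC-2", "AC-3"}
tests = {"test_login": {"AC-1"}, "test_rate_limit": {"AC-2"}}
print(uncovered(criteria, tests))  # {'AC-3'}
```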
**Self-verification**:
- [ ] Every acceptance criterion has at least one test covering it
- [ ] Test inputs are realistic and well-defined
- [ ] Expected results are specific and measurable
- [ ] No component is left without tests
**Save action**: Write each `components/[##]_[name]/tests.md`
---
### Step 4b: E2E Black-Box Test Infrastructure
**Role**: Professional Quality Assurance Engineer
**Goal**: Specify a separate consumer application and Docker environment for black-box end-to-end testing of the main system
**Constraints**: Spec only — no test code. Consumer must treat the main system as a black box (no internal imports, no direct DB access).
1. Define Docker environment: services (system under test, test DB, consumer app, dependencies), networks, volumes
2. Specify consumer application: tech stack, entry point, communication interfaces with the main system
3. Define E2E test scenarios from acceptance criteria — focus on critical end-to-end use cases that cross component boundaries
4. Specify test data management: seed data, isolation strategy, external dependency mocks
5. Define CI/CD integration: when to run, gate behavior, timeout
6. Define reporting format (CSV: test ID, name, execution time, result, error message)
Use `templates/e2e-test-infrastructure.md` as structure.
**Self-verification**:
- [ ] Critical acceptance criteria are covered by at least one E2E scenario
- [ ] Consumer app has no direct access to system internals
- [ ] Docker environment is self-contained (`docker compose up` sufficient)
- [ ] External dependencies have mock/stub services defined
**Save action**: Write `e2e_test_infrastructure.md`
---
### Step 5: Jira Epics
**Role**: Professional product manager
**Goal**: Create Jira epics from components, ordered by dependency
**Constraints**: Be concise — fewer words with the same meaning is better
1. Generate Jira Epics from components using Jira MCP, structured per `templates/epic-spec.md`
2. Order epics by dependency (which must be done first)
3. Include effort estimation per epic (T-shirt size or story points range)
4. Ensure each epic has clear acceptance criteria cross-referenced with component specs
5. Generate updated draw.io diagram showing component-to-epic mapping
**Self-verification**:
- [ ] Every component maps to exactly one epic
- [ ] Dependency order is respected (no epic depends on a later one)
- [ ] Acceptance criteria are measurable
- [ ] Effort estimates are realistic
**Save action**: Epics created in Jira via MCP
---
## Quality Checklist (before FINAL_report.md)
Before writing the final report, verify ALL of the following:
### Architecture
- [ ] Covers all capabilities from solution.md
- [ ] Technology choices are justified
- [ ] Deployment model is defined
### Components
- [ ] Every component follows SRP
- [ ] No circular dependencies
- [ ] All inter-component interfaces are defined and consistent
- [ ] No orphan components (unused by any flow)
### Risks
- [ ] All High/Critical risks have mitigations
- [ ] Mitigations are reflected in component/architecture docs
- [ ] User has confirmed risk assessment is sufficient
### Tests
- [ ] Every acceptance criterion is covered by at least one test
- [ ] All 4 test types are represented per component (where applicable)
- [ ] Test data management is defined
### E2E Test Infrastructure
- [ ] Critical use cases covered by E2E scenarios
- [ ] Docker environment is self-contained
- [ ] Consumer app treats main system as black box
- [ ] CI/CD integration and reporting defined
### Epics
- [ ] Every component maps to an epic
- [ ] Dependency order is correct
- [ ] Acceptance criteria are measurable
**Save action**: Write `FINAL_report.md` using `templates/final-report.md` as structure
## Common Mistakes
- **Coding during planning**: this workflow produces documents, never code
- **Multi-responsibility components**: if a component does two things, split it
- **Skipping BLOCKING gates**: never proceed past a BLOCKING marker without user confirmation
- **Diagrams without data**: generate diagrams only after the underlying structure is documented
- **Copy-pasting problem.md**: the architecture doc should analyze and transform, not repeat the input
- **Vague interfaces**: "component A talks to component B" is not enough; define the method, input, output
- **Ignoring restrictions.md**: every constraint must be traceable in the architecture or risk register
## Escalation Rules
| Situation | Action |
|-----------|--------|
| Ambiguous requirements | ASK user |
| Missing acceptance criteria | ASK user |
| Technology choice with multiple valid options | ASK user |
| Component naming | PROCEED, confirm at next BLOCKING gate |
| File structure within templates | PROCEED |
| Contradictions between input files | ASK user |
| Risk mitigation requires architecture change | ASK user |
## Methodology Quick Reference
```
┌────────────────────────────────────────────────────────────────┐
│ Solution Planning (5-Step Method) │
├────────────────────────────────────────────────────────────────┤
│ CONTEXT: Resolve mode (project vs standalone) + set paths │
│ 1. Solution Analysis → architecture.md, system-flows.md │
│ [BLOCKING: user confirms architecture] │
│ 2. Component Decompose → components/[##]_[name]/description │
│ [BLOCKING: user confirms decomposition] │
│ 3. Review & Risk Assess → risk_mitigations.md │
│ [BLOCKING: user confirms risks, iterative] │
│ 4. Test Specifications → components/[##]_[name]/tests.md │
│ 4b.E2E Test Infra → e2e_test_infrastructure.md │
│ 5. Jira Epics → Jira via MCP │
│ ───────────────────────────────────────────────── │
│ Quality Checklist → FINAL_report.md │
├────────────────────────────────────────────────────────────────┤
│ Principles: SRP · Dumb code/smart data · Save immediately │
│ Ask don't assume · Plan don't code │
└────────────────────────────────────────────────────────────────┘
```
@@ -0,0 +1,128 @@
# Architecture Document Template
Use this template for the architecture document. Save as `_docs/02_plans/<topic>/architecture.md`.
---
```markdown
# [System Name] — Architecture
## 1. System Context
**Problem being solved**: [One paragraph summarizing the problem from problem.md]
**System boundaries**: [What is inside the system vs. external]
**External systems**:
| System | Integration Type | Direction | Purpose |
|--------|-----------------|-----------|---------|
| [name] | REST / Queue / DB / File | Inbound / Outbound / Both | [why] |
## 2. Technology Stack
| Layer | Technology | Version | Rationale |
|-------|-----------|---------|-----------|
| Language | | | |
| Framework | | | |
| Database | | | |
| Cache | | | |
| Message Queue | | | |
| Hosting | | | |
| CI/CD | | | |
**Key constraints from restrictions.md**:
- [Constraint 1 and how it affects technology choices]
- [Constraint 2]
## 3. Deployment Model
**Environments**: Development, Staging, Production
**Infrastructure**:
- [Cloud provider / On-prem / Hybrid]
- [Container orchestration if applicable]
- [Scaling strategy: horizontal / vertical / auto]
**Environment-specific configuration**:
| Config | Development | Production |
|--------|-------------|------------|
| Database | [local/docker] | [managed service] |
| Secrets | [.env file] | [secret manager] |
| Logging | [console] | [centralized] |
## 4. Data Model Overview
> High-level data model covering the entire system. Detailed per-component models go in component specs.
**Core entities**:
| Entity | Description | Owned By Component |
|--------|-------------|--------------------|
| [entity] | [what it represents] | [component ##] |
**Key relationships**:
- [Entity A] → [Entity B]: [relationship description]
**Data flow summary**:
- [Source] → [Transform] → [Destination]: [what data and why]
## 5. Integration Points
### Internal Communication
| From | To | Protocol | Pattern | Notes |
|------|----|----------|---------|-------|
| [component] | [component] | Sync REST / Async Queue / Direct call | Request-Response / Event / Command | |
### External Integrations
| External System | Protocol | Auth | Rate Limits | Failure Mode |
|----------------|----------|------|-------------|--------------|
| [system] | [REST/gRPC/etc] | [API key/OAuth/etc] | [limits] | [retry/circuit breaker/fallback] |
## 6. Non-Functional Requirements
| Requirement | Target | Measurement | Priority |
|------------|--------|-------------|----------|
| Availability | [e.g., 99.9%] | [how measured] | High/Medium/Low |
| Latency (p95) | [e.g., <200ms] | [endpoint/operation] | |
| Throughput | [e.g., 1000 req/s] | [peak/sustained] | |
| Data retention | [e.g., 90 days] | [which data] | |
| Recovery (RPO/RTO) | [e.g., RPO 1hr, RTO 4hr] | | |
| Scalability | [e.g., 10x current load] | [timeline] | |
## 7. Security Architecture
**Authentication**: [mechanism — JWT / session / API key]
**Authorization**: [RBAC / ABAC / per-resource]
**Data protection**:
- At rest: [encryption method]
- In transit: [TLS version]
- Secrets management: [tool/approach]
**Audit logging**: [what is logged, where, retention]
## 8. Key Architectural Decisions
Record significant decisions that shaped the architecture.
### ADR-001: [Decision Title]
**Context**: [Why this decision was needed]
**Decision**: [What was decided]
**Alternatives considered**:
1. [Alternative 1] — rejected because [reason]
2. [Alternative 2] — rejected because [reason]
**Consequences**: [Trade-offs accepted]
### ADR-002: [Decision Title]
...
```
@@ -0,0 +1,156 @@
# Component Specification Template
Use this template for each component. Save as `components/[##]_[name]/description.md`.
---
```markdown
# [Component Name]
## 1. High-Level Overview
**Purpose**: [One sentence: what this component does and its role in the system]
**Architectural Pattern**: [e.g., Repository, Event-driven, Pipeline, Facade, etc.]
**Upstream dependencies**: [Components that this component calls or consumes from]
**Downstream consumers**: [Components that call or consume from this component]
## 2. Internal Interfaces
For each interface this component exposes internally:
### Interface: [InterfaceName]
| Method | Input | Output | Async | Error Types |
|--------|-------|--------|-------|-------------|
| `method_name` | `InputDTO` | `OutputDTO` | Yes/No | `ErrorType1`, `ErrorType2` |
**Input DTOs**:
```
[DTO name]:
field_1: type (required/optional) — description
field_2: type (required/optional) — description
```
**Output DTOs**:
```
[DTO name]:
field_1: type — description
field_2: type — description
```
## 3. External API Specification
> Include this section only if the component exposes an external HTTP/gRPC API.
> Skip if the component is internal-only.
| Endpoint | Method | Auth | Rate Limit | Description |
|----------|--------|------|------------|-------------|
| `/api/v1/...` | GET/POST/PUT/DELETE | Required/Public | X req/min | Brief description |
**Request/Response schemas**: define per endpoint using OpenAPI-style notation.
**Example request/response**:
```json
// Request
{ }
// Response
{ }
```
## 4. Data Access Patterns
### Queries
| Query | Frequency | Hot Path | Index Needed |
|-------|-----------|----------|--------------|
| [describe query] | High/Medium/Low | Yes/No | Yes/No |
### Caching Strategy
| Data | Cache Type | TTL | Invalidation |
|------|-----------|-----|-------------|
| [data item] | In-memory / Redis / None | [duration] | [trigger] |
### Storage Estimates
| Table/Collection | Est. Row Count (1yr) | Row Size | Total Size | Growth Rate |
|-----------------|---------------------|----------|------------|-------------|
| [table_name] | | | | /month |
### Data Management
**Seed data**: [Required seed data and how to load it]
**Rollback**: [Rollback procedure for this component's data changes]
## 5. Implementation Details
**Algorithmic Complexity**: [Big O for critical methods — only if non-trivial]
**State Management**: [Local state / Global state / Stateless — explain how state is handled]
**Key Dependencies**: [External libraries and their purpose]
| Library | Version | Purpose |
|---------|---------|---------|
| [name] | [version] | [why needed] |
**Error Handling Strategy**:
- [How errors are caught, propagated, and reported]
- [Retry policy if applicable]
- [Circuit breaker if applicable]
## 6. Extensions and Helpers
> List any shared utilities this component needs that should live in a `helpers/` folder.
| Helper | Purpose | Used By |
|--------|---------|---------|
| [helper_name] | [what it does] | [list of components] |
## 7. Caveats & Edge Cases
**Known limitations**:
- [Limitation 1]
**Potential race conditions**:
- [Race condition scenario, if any]
**Performance bottlenecks**:
- [Bottleneck description and mitigation approach]
## 8. Dependency Graph
**Must be implemented after**: [list of component numbers/names]
**Can be implemented in parallel with**: [list of component numbers/names]
**Blocks**: [list of components that depend on this one]
## 9. Logging Strategy
| Log Level | When | Example |
|-----------|------|---------|
| ERROR | Unrecoverable failures | `Failed to process order {id}: {error}` |
| WARN | Recoverable issues | `Retry attempt {n} for {operation}` |
| INFO | Key business events | `Order {id} created by user {uid}` |
| DEBUG | Development diagnostics | `Query returned {n} rows in {ms}ms` |
**Log format**: [structured JSON / plaintext — match system standard]
**Log storage**: [stdout / file / centralized logging service]
```
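A minimal structured-logging sketch matching the level table in section 9, assuming JSON-to-stdout output (field names are illustrative; match whatever system standard the template records):
```typescript
// Illustrative sketch — one JSON object per line, parseable by a centralized collector.
type Level = 'ERROR' | 'WARN' | 'INFO' | 'DEBUG';

function log(level: Level, message: string, fields: Record<string, unknown> = {}): void {
  console.log(JSON.stringify({
    ts: new Date().toISOString(),
    level,
    message,
    ...fields, // structured context instead of string interpolation
  }));
}

// Usage mirroring the table's examples:
log('INFO', 'Order created', { orderId: 'ord_123', userId: 'u_42' });
log('WARN', 'Retry attempt', { attempt: 2, operation: 'chargeCard' });
```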
---
## Guidance Notes
- **Section 3 (External API)**: skip entirely for internal-only components. Include for any component that exposes HTTP endpoints, WebSocket connections, or gRPC services.
- **Section 4 (Storage Estimates)**: critical for components that manage persistent data. Skip for stateless components.
- **Section 5 (Algorithmic Complexity)**: only document if the algorithm is non-trivial (O(n^2) or worse, recursive, etc.). Simple CRUD operations don't need this.
- **Section 6 (Helpers)**: if the helper is used by only one component, keep it inside that component. Only extract to `helpers/` if shared by 2+ components.
- **Section 8 (Dependency Graph)**: this is essential for determining implementation order. Be precise about what "depends on" means — data dependency, API dependency, or shared infrastructure.
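For section 4's caching table, a cache-aside sketch with TTL expiry and trigger-based invalidation; the in-memory `Map` stands in for Redis or whatever store the table declares (all names are illustrative):
```typescript
// Illustrative cache-aside sketch, not a production cache.
interface Entry<V> { value: V; expiresAt: number; }

class TtlCache<V> {
  private store = new Map<string, Entry<V>>();

  constructor(private readonly ttlMs: number) {}

  async getOrLoad(key: string, load: () => Promise<V>): Promise<V> {
    const hit = this.store.get(key);
    if (hit && hit.expiresAt > Date.now()) return hit.value; // fresh hit
    const value = await load();                              // miss: load from source of truth
    this.store.set(key, { value, expiresAt: Date.now() + this.ttlMs });
    return value;
  }

  invalidate(key: string): void {
    this.store.delete(key); // the "Invalidation" trigger from the table
  }
}
```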
@@ -0,0 +1,141 @@
# E2E Black-Box Test Infrastructure Template
Describes a separate consumer application that tests the main system as a black box.
Save as `_docs/02_plans/<topic>/e2e_test_infrastructure.md`.
---
```markdown
# E2E Test Infrastructure
## Overview
**System under test**: [main system name and entry points — API URLs, message queues, etc.]
**Consumer app purpose**: Standalone application that exercises the main system through its public interfaces, validating end-to-end use cases without access to internals.
## Docker Environment
### Services
| Service | Image / Build | Purpose | Ports |
|---------|--------------|---------|-------|
| system-under-test | [main app image or build context] | The main system being tested | [ports] |
| test-db | [postgres/mysql/etc.] | Database for the main system | [ports] |
| e2e-consumer | [build context for consumer app] | Black-box test runner | — |
| [dependency] | [image] | [purpose — cache, queue, etc.] | [ports] |
### Networks
| Network | Services | Purpose |
|---------|----------|---------|
| e2e-net | all | Isolated test network |
### Volumes
| Volume | Mounted to | Purpose |
|--------|-----------|---------|
| [name] | [service:path] | [test data, DB persistence, etc.] |
### docker-compose structure
```yaml
# Outline only — not runnable code
services:
system-under-test:
# main system
test-db:
# database
e2e-consumer:
# consumer test app
depends_on:
- system-under-test
```
## Consumer Application
**Tech stack**: [language, framework, test runner]
**Entry point**: [how it starts — e.g., pytest, jest, custom runner]
### Communication with system under test
| Interface | Protocol | Endpoint / Topic | Authentication |
|-----------|----------|-----------------|----------------|
| [API name] | [HTTP/gRPC/AMQP/etc.] | [URL or topic] | [method] |
### What the consumer does NOT have access to
- No direct database access to the main system
- No internal module imports
- No shared memory or file system with the main system
## E2E Test Scenarios
### Acceptance Criteria Traceability
| AC ID | Acceptance Criterion | E2E Test IDs | Coverage |
|-------|---------------------|-------------|----------|
| AC-01 | [criterion] | E2E-01 | Covered |
| AC-02 | [criterion] | E2E-02, E2E-03 | Covered |
| AC-03 | [criterion] | — | NOT COVERED — [reason] |
### E2E-01: [Scenario Name]
**Summary**: [One sentence: what end-to-end use case this validates]
**Traces to**: AC-01
**Preconditions**:
- [System state required before test]
**Steps**:
| Step | Consumer Action | Expected System Response |
|------|----------------|------------------------|
| 1 | [call / send] | [response / event] |
| 2 | [call / send] | [response / event] |
**Max execution time**: [e.g., 10s]
---
### E2E-02: [Scenario Name]
(repeat structure)
---
## Test Data Management
**Seed data**:
| Data Set | Description | How Loaded | Cleanup |
|----------|-------------|-----------|---------|
| [name] | [what it contains] | [SQL script / API call / fixture file] | [how removed after test] |
**Isolation strategy**: [e.g., each test run gets a fresh DB via container restart, or transactions are rolled back, or namespaced data]
**External dependencies**: [any external APIs that need mocking or sandbox environments]
## CI/CD Integration
**When to run**: [e.g., on PR merge to dev, nightly, before production deploy]
**Pipeline stage**: [where in the CI pipeline this fits]
**Gate behavior**: [block merge / warning only / manual approval]
**Timeout**: [max total suite duration before considered failed]
## Reporting
**Format**: CSV
**Columns**: Test ID, Test Name, Execution Time (ms), Result (PASS/FAIL/SKIP), Error Message (if FAIL)
**Output path**: [where the CSV is written — e.g., ./e2e-results/report.csv]
```
---
## Guidance Notes
- Every E2E test MUST trace to at least one acceptance criterion. If it doesn't, question whether it's needed.
- The consumer app must treat the main system as a true black box — no internal imports, no direct DB queries against the main system's database (a consumer-side test is sketched after these notes).
- Keep the number of E2E tests focused on critical use cases. Exhaustive testing belongs in per-component tests (Step 4).
- Docker environment should be self-contained — `docker compose up` must be sufficient to run the full suite.
- If the main system requires external services (payment gateways, third-party APIs), define mock/stub services in the Docker environment.
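As a sketch of what a consumer-app test looks like under these rules (HTTP only, no internal imports), assuming Node 18+ `fetch`, a Jest-style runner, and a base URL injected by docker-compose; the route and payload are illustrative:
```typescript
// Black-box E2E sketch: talks to the system only through its public API.
// SUT_BASE_URL is injected by docker-compose; no internal modules are imported.
const BASE_URL = process.env.SUT_BASE_URL ?? 'http://system-under-test:8080';

test('E2E-01: created resource is readable through the public API', async () => {
  const createRes = await fetch(`${BASE_URL}/api/v1/items`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ name: 'e2e-item' }),
  });
  expect(createRes.status).toBe(201);

  const { id } = await createRes.json() as { id: string };
  const readRes = await fetch(`${BASE_URL}/api/v1/items/${id}`);
  expect(readRes.status).toBe(200);
}, 10_000); // max execution time from the scenario table
```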
@@ -0,0 +1,127 @@
# Jira Epic Template
Use this template for each Jira epic. Create epics via Jira MCP.
---
```markdown
## Epic: [Component Name] — [Outcome]
**Example**: Data Ingestion — Near-real-time pipeline
### Epic Summary
[1-2 sentences: what we are building + why it matters]
### Problem / Context
[Current state, pain points, constraints, business opportunities.
Link to architecture.md and relevant component spec.]
### Scope
**In Scope**:
- [Capability 1 — describe what, not how]
- [Capability 2]
- [Capability 3]
**Out of Scope**:
- [Explicit exclusion 1 — prevents scope creep]
- [Explicit exclusion 2]
### Assumptions
- [System design assumption]
- [Data structure assumption]
- [Infrastructure assumption]
### Dependencies
**Epic dependencies** (must be completed first):
- [Epic name / ID]
**External dependencies**:
- [Services, hardware, environments, certificates, data sources]
### Effort Estimation
**T-shirt size**: S / M / L / XL
**Story points range**: [min]-[max]
### Users / Consumers
| Type | Who | Key Use Cases |
|------|-----|--------------|
| Internal | [team/role] | [use case] |
| External | [user type] | [use case] |
| System | [service name] | [integration point] |
### Requirements
**Functional**:
- [API expectations, events, data handling]
- [Idempotency, retry behavior]
**Non-functional**:
- [Availability, latency, throughput targets]
- [Scalability, processing limits, data retention]
**Security / Compliance**:
- [Authentication, encryption, secrets management]
- [Logging, audit trail]
- [SOC2 / ISO / GDPR if applicable]
### Design & Architecture
- Architecture doc: `_docs/02_plans/<topic>/architecture.md`
- Component spec: `_docs/02_plans/<topic>/components/[##]_[name]/description.md`
- System flows: `_docs/02_plans/<topic>/system-flows.md`
### Definition of Done
- [ ] All in-scope capabilities implemented
- [ ] Automated tests pass (unit + integration + e2e)
- [ ] Minimum coverage threshold met (75%)
- [ ] Runbooks written (if applicable)
- [ ] Documentation updated
### Acceptance Criteria
| # | Criterion | Measurable Condition |
|---|-----------|---------------------|
| 1 | [criterion] | [how to verify] |
| 2 | [criterion] | [how to verify] |
### Risks & Mitigations
| # | Risk | Mitigation | Owner |
|---|------|------------|-------|
| 1 | [top risk] | [mitigation] | [owner] |
| 2 | | | |
| 3 | | | |
### Labels
- `component:[name]`
- `env:prod` / `env:stg`
- `type:platform` / `type:data` / `type:integration`
### Child Issues
| Type | Title | Points |
|------|-------|--------|
| Spike | [research/investigation task] | [1-3] |
| Task | [implementation task] | [1-5] |
| Task | [implementation task] | [1-5] |
| Enabler | [infrastructure/setup task] | [1-3] |
```
---
## Guidance Notes
- Be concise. Fewer words with the same meaning = better epic.
- Capabilities in scope are "what", not "how" — avoid describing implementation details.
- Dependency order matters: epics that must be done first should be listed earlier in the backlog.
- Every epic maps to exactly one component. If a component is too large for one epic, split the component first.
- Complexity points for child issues follow the project standard: 1, 2, 3, 5, 8. Do not create issues above 5 points — split them.
@@ -0,0 +1,104 @@
# Final Planning Report Template
Use this template after completing all 5 steps and the quality checklist. Save as `_docs/02_plans/<topic>/FINAL_report.md`.
---
```markdown
# [System Name] — Planning Report
## Executive Summary
[2-3 sentences: what was planned, the core architectural approach, and the key outcome (number of components, epics, estimated effort)]
## Problem Statement
[Brief restatement from problem.md — transformed, not copy-pasted]
## Architecture Overview
[Key architectural decisions and technology stack summary. Reference `architecture.md` for full details.]
**Technology stack**: [language, framework, database, hosting — one line]
**Deployment**: [environment strategy — one line]
## Component Summary
| # | Component | Purpose | Dependencies | Epic |
|---|-----------|---------|-------------|------|
| 01 | [name] | [one-line purpose] | — | [Jira ID] |
| 02 | [name] | [one-line purpose] | 01 | [Jira ID] |
| ... | | | | |
**Implementation order** (based on dependency graph):
1. [Phase 1: components that can start immediately]
2. [Phase 2: components that depend on Phase 1]
3. [Phase 3: ...]
## System Flows
| Flow | Description | Key Components |
|------|-------------|---------------|
| [name] | [one-line summary] | [component list] |
[Reference `system-flows.md` for full diagrams and details.]
## Risk Summary
| Level | Count | Key Risks |
|-------|-------|-----------|
| Critical | [N] | [brief list] |
| High | [N] | [brief list] |
| Medium | [N] | — |
| Low | [N] | — |
**Iterations completed**: [N]
**All Critical/High risks mitigated**: Yes / No — [details if No]
[Reference `risk_mitigations.md` for full register.]
## Test Coverage
| Component | Integration | Performance | Security | Acceptance | AC Coverage |
|-----------|-------------|-------------|----------|------------|-------------|
| [name] | [N tests] | [N tests] | [N tests] | [N tests] | [X/Y ACs] |
| ... | | | | | |
**Overall acceptance criteria coverage**: [X / Y total ACs covered] ([percentage]%)
## Epic Roadmap
| Order | Epic | Component | Effort | Dependencies |
|-------|------|-----------|--------|-------------|
| 1 | [Jira ID]: [name] | [component] | [S/M/L/XL] | — |
| 2 | [Jira ID]: [name] | [component] | [S/M/L/XL] | Epic 1 |
| ... | | | | |
**Total estimated effort**: [sum or range]
## Key Decisions Made
| # | Decision | Rationale | Alternatives Rejected |
|---|----------|-----------|----------------------|
| 1 | [decision] | [why] | [what was rejected] |
| 2 | | | |
## Open Questions
| # | Question | Impact | Assigned To |
|---|----------|--------|-------------|
| 1 | [unresolved question] | [what it blocks or affects] | [who should answer] |
## Artifact Index
| File | Description |
|------|-------------|
| `architecture.md` | System architecture |
| `system-flows.md` | System flows and diagrams |
| `components/01_[name]/description.md` | Component spec |
| `components/01_[name]/tests.md` | Test spec |
| `risk_mitigations.md` | Risk register |
| `diagrams/components.drawio` | Component diagram |
| `diagrams/flows/flow_[name].md` | Flow diagrams |
```
@@ -0,0 +1,99 @@
# Risk Register Template
Use this template for risk assessment. Save as `_docs/02_plans/<topic>/risk_mitigations.md`.
Subsequent iterations: `risk_mitigations_02.md`, `risk_mitigations_03.md`, etc.
---
```markdown
# Risk Assessment — [Topic] — Iteration [##]
## Risk Scoring Matrix
| | Low Impact | Medium Impact | High Impact |
|--|------------|---------------|-------------|
| **High Probability** | Medium | High | Critical |
| **Medium Probability** | Low | Medium | High |
| **Low Probability** | Low | Low | Medium |
## Acceptance Criteria by Risk Level
| Level | Action Required |
|-------|----------------|
| Low | Accepted, monitored quarterly |
| Medium | Mitigation plan required before implementation |
| High | Mitigation + contingency plan required, reviewed weekly |
| Critical | Must be resolved before proceeding to next planning step |
## Risk Register
| ID | Risk | Category | Probability | Impact | Score | Mitigation | Owner | Status |
|----|------|----------|-------------|--------|-------|------------|-------|--------|
| R01 | [risk description] | [category] | High/Med/Low | High/Med/Low | Critical/High/Med/Low | [mitigation strategy] | [owner] | Open/Mitigated/Accepted |
| R02 | | | | | | | | |
## Risk Categories
### Technical Risks
- Technology choices may not meet requirements
- Integration complexity underestimated
- Performance targets unachievable
- Security vulnerabilities in design
- Data model cannot support future requirements
### Schedule Risks
- Dependencies delayed
- Scope creep from ambiguous requirements
- Underestimated complexity
### Resource Risks
- Key person dependency
- Team lacks experience with chosen technology
- Infrastructure not available in time
### External Risks
- Third-party API changes or deprecation
- Vendor reliability or pricing changes
- Regulatory or compliance changes
- Data source availability
## Detailed Risk Analysis
### R01: [Risk Title]
**Description**: [Detailed description of the risk]
**Trigger conditions**: [What would cause this risk to materialize]
**Affected components**: [List of components impacted]
**Mitigation strategy**:
1. [Action 1]
2. [Action 2]
**Contingency plan**: [What to do if mitigation fails]
**Residual risk after mitigation**: [Low/Medium/High]
**Documents updated**: [List architecture/component docs that were updated to reflect this mitigation]
---
### R02: [Risk Title]
(repeat structure above)
## Architecture/Component Changes Applied
| Risk ID | Document Modified | Change Description |
|---------|------------------|--------------------|
| R01 | `architecture.md` §3 | [what changed] |
| R01 | `components/02_[name]/description.md` §5 | [what changed] |
## Summary
**Total risks identified**: [N]
**Critical**: [N] | **High**: [N] | **Medium**: [N] | **Low**: [N]
**Risks mitigated this iteration**: [N]
**Risks requiring user decision**: [list]
```
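---
The scoring matrix above is mechanical, so it can be transcribed directly into code to keep scores consistent across iterations; a sketch (type and function names are illustrative):
```typescript
type Grade = 'Low' | 'Medium' | 'High';
type Score = 'Low' | 'Medium' | 'High' | 'Critical';

// Direct transcription of the matrix: rows are probability, columns are impact.
const MATRIX: Record<Grade, Record<Grade, Score>> = {
  High:   { Low: 'Medium', Medium: 'High',   High: 'Critical' },
  Medium: { Low: 'Low',    Medium: 'Medium', High: 'High' },
  Low:    { Low: 'Low',    Medium: 'Low',    High: 'Medium' },
};

function riskScore(probability: Grade, impact: Grade): Score {
  return MATRIX[probability][impact];
}

// riskScore('High', 'Medium') === 'High' → mitigation + contingency required
```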
@@ -0,0 +1,108 @@
# System Flows Template
Use this template for the system flows document. Save as `_docs/02_plans/<topic>/system-flows.md`.
Individual flow diagrams go in `_docs/02_plans/<topic>/diagrams/flows/flow_[name].md`.
---
```markdown
# [System Name] — System Flows
## Flow Inventory
| # | Flow Name | Trigger | Primary Components | Criticality |
|---|-----------|---------|-------------------|-------------|
| F1 | [name] | [user action / scheduled / event] | [component list] | High/Medium/Low |
| F2 | [name] | | | |
| ... | | | | |
## Flow Dependencies
| Flow | Depends On | Shares Data With |
|------|-----------|-----------------|
| F1 | — | F2 (via [entity]) |
| F2 | F1 must complete first | F3 |
---
## Flow F1: [Flow Name]
### Description
[1-2 sentences: what this flow does, who triggers it, what the outcome is]
### Preconditions
- [Condition 1]
- [Condition 2]
### Sequence Diagram
```mermaid
sequenceDiagram
participant User
participant ComponentA
participant ComponentB
participant Database
User->>ComponentA: [action]
ComponentA->>ComponentB: [call with params]
ComponentB->>Database: [query/write]
Database-->>ComponentB: [result]
ComponentB-->>ComponentA: [response]
ComponentA-->>User: [result]
```
### Flowchart
```mermaid
flowchart TD
Start([Trigger]) --> Step1[Step description]
Step1 --> Decision{Condition?}
Decision -->|Yes| Step2[Step description]
Decision -->|No| Step3[Step description]
Step2 --> EndNode([Result])
Step3 --> EndNode
```
### Data Flow
| Step | From | To | Data | Format |
|------|------|----|------|--------|
| 1 | [source] | [destination] | [what data] | [DTO/event/etc] |
| 2 | | | | |
### Error Scenarios
| Error | Where | Detection | Recovery |
|-------|-------|-----------|----------|
| [error type] | [which step] | [how detected] | [what happens] |
### Performance Expectations
| Metric | Target | Notes |
|--------|--------|-------|
| End-to-end latency | [target] | [conditions] |
| Throughput | [target] | [peak/sustained] |
---
## Flow F2: [Flow Name]
(repeat structure above)
```
---
## Mermaid Diagram Conventions
Follow these conventions for consistency across all flow diagrams:
- **Participants**: use component names matching `components/[##]_[name]`
- **Node IDs**: camelCase, no spaces (e.g., `validateInput`, `saveOrder`)
- **Decision nodes**: use `{Question?}` format
- **Start/End**: use `([label])` stadium shape
- **External systems**: use `[[label]]` subroutine shape
- **Subgraphs**: group by component or bounded context
- **No styling**: do not add colors or CSS classes — let the renderer theme handle it
- **Edge labels**: wrap special characters in quotes (e.g., `-->|"O(n) check"|`)
@@ -0,0 +1,172 @@
# Test Specification Template
Use this template for each component's test spec. Save as `components/[##]_[name]/tests.md`.
---
```markdown
# Test Specification — [Component Name]
## Acceptance Criteria Traceability
| AC ID | Acceptance Criterion | Test IDs | Coverage |
|-------|---------------------|----------|----------|
| AC-01 | [criterion from acceptance_criteria.md] | IT-01, AT-01 | Covered |
| AC-02 | [criterion] | PT-01 | Covered |
| AC-03 | [criterion] | — | NOT COVERED — [reason] |
---
## Integration Tests
### IT-01: [Test Name]
**Summary**: [One sentence: what this test verifies]
**Traces to**: AC-01, AC-03
**Description**: [Detailed test scenario]
**Input data**:
```
[specific input data for this test]
```
**Expected result**:
```
[specific expected output or state]
```
**Max execution time**: [e.g., 5s]
**Dependencies**: [other components/services that must be running]
---
### IT-02: [Test Name]
(repeat structure)
---
## Performance Tests
### PT-01: [Test Name]
**Summary**: [One sentence: what performance aspect is tested]
**Traces to**: AC-02
**Load scenario**:
- Concurrent users: [N]
- Request rate: [N req/s]
- Duration: [N minutes]
- Ramp-up: [strategy]
**Expected results**:
| Metric | Target | Failure Threshold |
|--------|--------|-------------------|
| Latency (p50) | [target] | [max] |
| Latency (p95) | [target] | [max] |
| Latency (p99) | [target] | [max] |
| Throughput | [target req/s] | [min req/s] |
| Error rate | [target %] | [max %] |
**Resource limits**:
- CPU: [max %]
- Memory: [max MB/GB]
- Database connections: [max pool size]
---
### PT-02: [Test Name]
(repeat structure)
---
## Security Tests
### ST-01: [Test Name]
**Summary**: [One sentence: what security aspect is tested]
**Traces to**: AC-04
**Attack vector**: [e.g., SQL injection on search endpoint, privilege escalation via direct ID access]
**Test procedure**:
1. [Step 1]
2. [Step 2]
**Expected behavior**: [what the system should do — reject, sanitize, log, etc.]
**Pass criteria**: [specific measurable condition]
**Fail criteria**: [what constitutes a failure]
---
### ST-02: [Test Name]
(repeat structure)
---
## Acceptance Tests
### AT-01: [Test Name]
**Summary**: [One sentence: what user-facing behavior is verified]
**Traces to**: AC-01
**Preconditions**:
- [Precondition 1]
- [Precondition 2]
**Steps**:
| Step | Action | Expected Result |
|------|--------|-----------------|
| 1 | [user action] | [expected outcome] |
| 2 | [user action] | [expected outcome] |
| 3 | [user action] | [expected outcome] |
---
### AT-02: [Test Name]
(repeat structure)
---
## Test Data Management
**Required test data**:
| Data Set | Description | Source | Size |
|----------|-------------|--------|------|
| [name] | [what it contains] | [generated / fixture / copy of prod subset] | [approx size] |
**Setup procedure**:
1. [How to prepare the test environment]
2. [How to load test data]
**Teardown procedure**:
1. [How to clean up after tests]
2. [How to restore initial state]
**Data isolation strategy**: [How tests are isolated from each other — separate DB, transactions, namespacing]
```
---
## Guidance Notes
- Every test MUST trace back to at least one acceptance criterion (AC-XX). If a test doesn't trace to any, question whether it's needed (one way to carry AC IDs into test code is sketched after these notes).
- If an acceptance criterion has no test covering it, mark it as NOT COVERED and explain why (e.g., "requires manual verification", "deferred to phase 2").
- Performance test targets should come from the NFR section in `architecture.md`.
- Security tests should cover at minimum: authentication bypass, authorization escalation, injection attacks relevant to this component.
- Not every component needs all 4 test types. A stateless utility component may only need integration tests.
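One lightweight way to keep the traceability table honest is to carry the AC IDs into the test code itself, so coverage claims can be checked with a grep. A sketch assuming Node 18+ `fetch` and a Jest-style runner; the base URL, route, and field names are assumptions:
```typescript
// Sketch only. The AC ID lives in the test name so that
// `grep -r "AC-01" tests/` cross-checks the traceability table against real code.
const BASE_URL = process.env.API_URL ?? 'http://localhost:8080';

test('IT-01 [AC-01] order total includes tax', async () => {
  const res = await fetch(`${BASE_URL}/api/v1/orders`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ items: [{ sku: 'A', qty: 2 }] }),
  });
  expect(res.status).toBe(201);

  const body = await res.json() as { total: number; subtotal: number };
  expect(body.total).toBeGreaterThan(body.subtotal);
});
```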
@@ -0,0 +1,470 @@
---
name: refactor
description: |
Structured refactoring workflow (6-phase method) with three execution modes:
- Full Refactoring: all 6 phases — baseline, discovery, analysis, safety net, execution, hardening
- Targeted Refactoring: skip discovery if docs exist, focus on a specific component/area
- Quick Assessment: phases 0-2 only, outputs a refactoring plan without execution
Supports project mode (_docs/ structure) and standalone mode (@file.md).
Trigger phrases:
- "refactor", "refactoring", "improve code"
- "analyze coupling", "decoupling", "technical debt"
- "refactoring assessment", "code quality improvement"
disable-model-invocation: true
---
# Structured Refactoring (6-Phase Method)
Transform existing codebases through a systematic refactoring workflow: capture baseline, document current state, research improvements, build safety net, execute changes, and harden.
## Core Principles
- **Preserve behavior first**: never refactor without a passing test suite
- **Measure before and after**: every change must be justified by metrics
- **Small incremental changes**: commit frequently, never break tests
- **Save immediately**: write artifacts to disk after each phase; never accumulate unsaved work
- **Ask, don't assume**: when scope or priorities are unclear, STOP and ask the user
## Context Resolution
Determine the operating mode based on invocation before any other logic runs.
**Project mode** (no explicit input file provided):
- PROBLEM_DIR: `_docs/00_problem/`
- SOLUTION_DIR: `_docs/01_solution/`
- COMPONENTS_DIR: `_docs/02_components/`
- TESTS_DIR: `_docs/02_tests/`
- REFACTOR_DIR: `_docs/04_refactoring/`
- All existing guardrails apply.
**Standalone mode** (explicit input file provided, e.g. `/refactor @some_component.md`):
- INPUT_FILE: the provided file (treated as component/area description)
- Derive `<topic>` from the input filename (without extension)
- REFACTOR_DIR: `_standalone/<topic>/refactoring/`
- Guardrails relaxed: only INPUT_FILE must exist and be non-empty
- `acceptance_criteria.md` is optional — warn if absent
Announce the detected mode and resolved paths to the user before proceeding.
## Mode Detection
After context resolution, determine the execution mode:
1. **User explicitly says** "quick assessment" or "just assess" → **Quick Assessment**
2. **User explicitly says** "refactor [component/file/area]" with a specific target → **Targeted Refactoring**
3. **Default** → **Full Refactoring**
| Mode | Phases Executed | When to Use |
|------|----------------|-------------|
| **Full Refactoring** | 0 → 1 → 2 → 3 → 4 → 5 | Complete refactoring of a system or major area |
| **Targeted Refactoring** | 0 → (skip 1 if docs exist) → 2 → 3 → 4 → 5 | Refactor a specific component; docs already exist |
| **Quick Assessment** | 0 → 1 → 2 | Produce a refactoring roadmap without executing changes |
Inform the user which mode was detected and confirm before proceeding.
## Prerequisite Checks (BLOCKING)
**Project mode:**
1. PROBLEM_DIR exists with `problem.md` (or `problem_description.md`) — **STOP if missing**, ask user to create it
2. If `acceptance_criteria.md` is missing: **warn** and ask whether to proceed
3. Create REFACTOR_DIR if it does not exist
4. If REFACTOR_DIR already contains artifacts, ask user: **resume from last checkpoint or start fresh?**
**Standalone mode:**
1. INPUT_FILE exists and is non-empty — **STOP if missing**
2. Warn if no `acceptance_criteria.md` provided
3. Create REFACTOR_DIR if it does not exist
## Artifact Management
### Directory Structure
```
REFACTOR_DIR/
├── baseline_metrics.md (Phase 0)
├── discovery/
│ ├── components/
│ │ └── [##]_[name].md (Phase 1)
│ ├── solution.md (Phase 1)
│ └── system_flows.md (Phase 1)
├── analysis/
│ ├── research_findings.md (Phase 2)
│ └── refactoring_roadmap.md (Phase 2)
├── test_specs/
│ └── [##]_[test_name].md (Phase 3)
├── coupling_analysis.md (Phase 4)
├── execution_log.md (Phase 4)
├── hardening/
│ ├── technical_debt.md (Phase 5)
│ ├── performance.md (Phase 5)
│ └── security.md (Phase 5)
└── FINAL_report.md (after all phases)
```
### Save Timing
| Phase | Save immediately after | Filename |
|-------|------------------------|----------|
| Phase 0 | Baseline captured | `baseline_metrics.md` |
| Phase 1 | Each component documented | `discovery/components/[##]_[name].md` |
| Phase 1 | Solution synthesized | `discovery/solution.md`, `discovery/system_flows.md` |
| Phase 2 | Research complete | `analysis/research_findings.md` |
| Phase 2 | Roadmap produced | `analysis/refactoring_roadmap.md` |
| Phase 3 | Test specs written | `test_specs/[##]_[test_name].md` |
| Phase 4 | Coupling analyzed | `coupling_analysis.md` |
| Phase 4 | Execution complete | `execution_log.md` |
| Phase 5 | Each hardening track | `hardening/<track>.md` |
| Final | All phases done | `FINAL_report.md` |
### Resumability
If REFACTOR_DIR already contains artifacts:
1. List existing files and match to the save timing table
2. Identify the last completed phase based on which artifacts exist (see the sketch below)
3. Resume from the next incomplete phase
4. Inform the user which phases are being skipped
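A sketch of that detection logic; the artifact names follow the save-timing table, while the function itself is illustrative, not part of the skill contract:
```typescript
import * as fs from 'fs';
import * as path from 'path';

// Mirrors the save-timing table: if a phase's final artifact exists on disk,
// that phase is treated as complete.
const PHASE_ARTIFACTS: Array<[phase: number, artifact: string]> = [
  [0, 'baseline_metrics.md'],
  [1, 'discovery/solution.md'],
  [2, 'analysis/refactoring_roadmap.md'],
  [3, 'test_specs'],
  [4, 'execution_log.md'],
];

function lastCompletedPhase(refactorDir: string): number {
  let last = -1;
  for (const [phase, artifact] of PHASE_ARTIFACTS) {
    if (fs.existsSync(path.join(refactorDir, artifact))) last = phase;
  }
  return last; // resume from last + 1; -1 means start fresh
}
```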
## Progress Tracking
At the start of execution, create a TodoWrite with all applicable phases. Update status as each phase completes.
## Workflow
### Phase 0: Context & Baseline
**Role**: Software engineer preparing for refactoring
**Goal**: Collect refactoring goals and capture baseline metrics
**Constraints**: Measurement only — no code changes
#### 0a. Collect Goals
If PROBLEM_DIR files do not yet exist, help the user create them:
1. `problem.md` — what the system currently does, what changes are needed, pain points
2. `acceptance_criteria.md` — success criteria for the refactoring
3. `security_approach.md` — security requirements (if applicable)
Store in PROBLEM_DIR.
#### 0b. Capture Baseline
1. Read problem description and acceptance criteria
2. Measure current system metrics using project-appropriate tools:
| Metric Category | What to Capture |
|----------------|-----------------|
| **Coverage** | Overall, unit, integration, critical paths |
| **Complexity** | Cyclomatic complexity (avg + top 5 functions), LOC, tech debt ratio |
| **Code Smells** | Total, critical, major |
| **Performance** | Response times (P50/P95/P99), CPU/memory, throughput |
| **Dependencies** | Total count, outdated, security vulnerabilities |
| **Build** | Build time, test execution time, deployment time |
3. Create functionality inventory: all features/endpoints with status and coverage
**Self-verification**:
- [ ] All metric categories measured (or noted as N/A with reason)
- [ ] Functionality inventory is complete
- [ ] Measurements are reproducible
**Save action**: Write `REFACTOR_DIR/baseline_metrics.md`
**BLOCKING**: Present baseline summary to user. Do NOT proceed until user confirms.
---
### Phase 1: Discovery
**Role**: Principal software architect
**Goal**: Generate documentation from existing code and form solution description
**Constraints**: Document what exists, not what should be. No code changes.
**Skip condition** (Targeted mode): If `COMPONENTS_DIR` and `SOLUTION_DIR` already contain documentation for the target area, skip to Phase 2. Ask user to confirm skip.
#### 1a. Document Components
For each component in the codebase:
1. Analyze project structure, directories, files
2. Go file by file, analyze each method
3. Analyze connections between components
Write per component to `REFACTOR_DIR/discovery/components/[##]_[name].md`:
- Purpose and architectural patterns
- Mermaid diagrams for logic flows
- API reference table (name, description, input, output)
- Implementation details: algorithmic complexity, state management, dependencies
- Caveats, edge cases, known limitations
#### 1b. Synthesize Solution & Flows
1. Review all generated component documentation
2. Synthesize into a cohesive solution description
3. Create flow diagrams showing component interactions
Write:
- `REFACTOR_DIR/discovery/solution.md` — product description, component overview, interaction diagram
- `REFACTOR_DIR/discovery/system_flows.md` — Mermaid flowcharts per major use case
Also copy to project standard locations if in project mode:
- `SOLUTION_DIR/solution.md`
- `COMPONENTS_DIR/system_flows.md`
**Self-verification**:
- [ ] Every component in the codebase is documented
- [ ] Solution description covers all components
- [ ] Flow diagrams cover all major use cases
- [ ] Mermaid diagrams are syntactically correct
**Save action**: Write discovery artifacts
**BLOCKING**: Present discovery summary to user. Do NOT proceed until user confirms documentation accuracy.
---
### Phase 2: Analysis
**Role**: Researcher and software architect
**Goal**: Research improvements and produce a refactoring roadmap
**Constraints**: Analysis only — no code changes
#### 2a. Deep Research
1. Analyze current implementation patterns
2. Research modern approaches for similar systems
3. Identify what could be done differently
4. Suggest improvements based on state-of-the-art practices
Write `REFACTOR_DIR/analysis/research_findings.md`:
- Current state analysis: patterns used, strengths, weaknesses
- Alternative approaches per component: current vs alternative, pros/cons, migration effort
- Prioritized recommendations: quick wins + strategic improvements
#### 2b. Solution Assessment
1. Assess current implementation against acceptance criteria
2. Identify weak points in codebase, map to specific code areas
3. Perform gap analysis: acceptance criteria vs current state
4. Prioritize changes by impact and effort
Write `REFACTOR_DIR/analysis/refactoring_roadmap.md`:
- Weak points assessment: location, description, impact, proposed solution
- Gap analysis: what's missing, what needs improvement
- Phased roadmap: Phase 1 (critical fixes), Phase 2 (major improvements), Phase 3 (enhancements)
**Self-verification**:
- [ ] All acceptance criteria are addressed in gap analysis
- [ ] Recommendations are grounded in actual code, not abstract
- [ ] Roadmap phases are prioritized by impact
- [ ] Quick wins are identified separately
**Save action**: Write analysis artifacts
**BLOCKING**: Present refactoring roadmap to user. Do NOT proceed until user confirms.
**Quick Assessment mode stops here.** Present final summary and write `FINAL_report.md` with phases 0-2 content.
---
### Phase 3: Safety Net
**Role**: QA engineer and developer
**Goal**: Design and implement tests that capture current behavior before refactoring
**Constraints**: Tests must all pass on the current codebase before proceeding
#### 3a. Design Test Specs
Coverage requirements (must be met before refactoring):
- Minimum overall coverage: 75%
- Critical path coverage: 90%
- All public APIs must have integration tests
- All error handling paths must be tested
For each critical area, write test specs to `REFACTOR_DIR/test_specs/[##]_[test_name].md`:
- Integration tests: summary, current behavior, input data, expected result, max expected time
- Acceptance tests: summary, preconditions, steps with expected results
- Coverage analysis: current %, target %, uncovered critical paths
#### 3b. Implement Tests
1. Set up test environment and infrastructure if not exists
2. Implement each test from specs
3. Run tests, verify all pass on current codebase
4. Document any discovered issues
**Self-verification**:
- [ ] Coverage requirements met (75% overall, 90% critical paths)
- [ ] All tests pass on current codebase
- [ ] All public APIs have integration tests
- [ ] Test data fixtures are configured
**Save action**: Write test specs; implemented tests go into the project's test folder
**GATE (BLOCKING)**: ALL tests must pass before proceeding to Phase 4. If tests fail, fix the tests (not the code) or ask user for guidance. Do NOT proceed to Phase 4 with failing tests.
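If the project emits an Istanbul-style `coverage-summary.json` (what Jest produces with the `json-summary` reporter; other tools need an adapted reader), the coverage part of the gate can be enforced mechanically, for example:
```typescript
import * as fs from 'fs';

// Reads Istanbul's json-summary output and fails the gate below the skill's
// 75% overall floor. Path and threshold match this skill; adjust per project.
const summary = JSON.parse(fs.readFileSync('coverage/coverage-summary.json', 'utf8'));
const overallPct: number = summary.total.lines.pct;

if (overallPct < 75) {
  console.error(`Coverage gate failed: ${overallPct}% < 75% overall`);
  process.exit(1);
}
console.log(`Coverage gate passed: ${overallPct}%`);
```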
---
### Phase 4: Execution
**Role**: Software architect and developer
**Goal**: Analyze coupling and execute decoupling changes
**Constraints**: Small incremental changes; tests must stay green after every change
#### 4a. Analyze Coupling
1. Analyze coupling between components/modules
2. Map dependencies (direct and transitive)
3. Identify circular dependencies
4. Form decoupling strategy
Write `REFACTOR_DIR/coupling_analysis.md`:
- Dependency graph (Mermaid)
- Coupling metrics per component
- Problem areas: components involved, coupling type, severity, impact
- Decoupling strategy: priority order, proposed interfaces/abstractions, effort estimates
**BLOCKING**: Present coupling analysis to user. Do NOT proceed until user confirms strategy.
#### 4b. Execute Decoupling
For each change in the decoupling strategy:
1. Implement the change
2. Run integration tests
3. Fix any failures
4. Commit with descriptive message
Address code smells encountered: long methods, large classes, duplicate code, dead code, magic numbers.
Write `REFACTOR_DIR/execution_log.md`:
- Change description, files affected, test status per change
- Before/after metrics comparison against baseline
**Self-verification**:
- [ ] All tests still pass after execution
- [ ] No circular dependencies remain (or reduced per plan)
- [ ] Code smells addressed
- [ ] Metrics improved compared to baseline
**Save action**: Write execution artifacts
**BLOCKING**: Present execution summary to user. Do NOT proceed until user confirms.
---
### Phase 5: Hardening (Optional, Parallel Tracks)
**Role**: Varies per track
**Goal**: Address technical debt, performance, and security
**Constraints**: Each track is optional; user picks which to run
Present the three tracks and let user choose which to execute:
#### Track A: Technical Debt
**Role**: Technical debt analyst
1. Identify and categorize debt items: design, code, test, documentation
2. Assess each: location, description, impact, effort, interest (cost of not fixing)
3. Prioritize: quick wins → strategic debt → tolerable debt
4. Create actionable plan with prevention measures
Write `REFACTOR_DIR/hardening/technical_debt.md`
#### Track B: Performance Optimization
**Role**: Performance engineer
1. Profile current performance, identify bottlenecks
2. For each bottleneck: location, symptom, root cause, impact
3. Propose optimizations with expected improvement and risk
4. Implement one at a time, benchmark after each change
5. Verify tests still pass
Write `REFACTOR_DIR/hardening/performance.md` with before/after benchmarks
#### Track C: Security Review
**Role**: Security engineer
1. Review code against OWASP Top 10
2. Verify security requirements from `security_approach.md` are met
3. Check: authentication, authorization, input validation, output encoding, encryption, logging
Write `REFACTOR_DIR/hardening/security.md`:
- Vulnerability assessment: location, type, severity, exploit scenario, fix
- Security controls review
- Compliance check against `security_approach.md`
- Recommendations: critical fixes, improvements, hardening
**Self-verification** (per track):
- [ ] All findings are grounded in actual code
- [ ] Recommendations are actionable with effort estimates
- [ ] All tests still pass after any changes
**Save action**: Write hardening artifacts
---
## Final Report
After all executed phases complete, write `REFACTOR_DIR/FINAL_report.md`:
- Refactoring mode used and phases executed
- Baseline metrics vs final metrics comparison
- Changes made summary
- Remaining items (deferred to future)
- Lessons learned
## Escalation Rules
| Situation | Action |
|-----------|--------|
| Unclear refactoring scope | **ASK user** |
| Ambiguous acceptance criteria | **ASK user** |
| Tests failing before refactoring | **ASK user** — fix tests or fix code? |
| Coupling change risks breaking external contracts | **ASK user** |
| Performance optimization vs readability trade-off | **ASK user** |
| Missing baseline metrics (no test suite, no CI) | **WARN user**, suggest building safety net first |
| Security vulnerability found during refactoring | **WARN user** immediately, don't defer |
## Trigger Conditions
When the user wants to:
- Improve existing code structure or quality
- Reduce technical debt or coupling
- Prepare codebase for new features
- Assess code health before major changes
**Keywords**: "refactor", "refactoring", "improve code", "reduce coupling", "technical debt", "code quality", "decoupling"
## Methodology Quick Reference
```
┌────────────────────────────────────────────────────────────────┐
│ Structured Refactoring (6-Phase Method) │
├────────────────────────────────────────────────────────────────┤
│ CONTEXT: Resolve mode (project vs standalone) + set paths │
│ MODE: Full / Targeted / Quick Assessment │
│ │
│ 0. Context & Baseline → baseline_metrics.md │
│ [BLOCKING: user confirms baseline] │
│ 1. Discovery → discovery/ (components, solution) │
│ [BLOCKING: user confirms documentation] │
│ 2. Analysis → analysis/ (research, roadmap) │
│ [BLOCKING: user confirms roadmap] │
│ ── Quick Assessment stops here ── │
│ 3. Safety Net → test_specs/ + implemented tests │
│ [GATE: all tests must pass] │
│ 4. Execution → coupling_analysis, execution_log │
│ [BLOCKING: user confirms changes] │
│ 5. Hardening → hardening/ (debt, perf, security) │
│ [optional, user picks tracks] │
│ ───────────────────────────────────────────────── │
│ FINAL_report.md │
├────────────────────────────────────────────────────────────────┤
│ Principles: Preserve behavior · Measure before/after │
│ Small changes · Save immediately · Ask don't assume│
└────────────────────────────────────────────────────────────────┘
```
File diff suppressed because it is too large
@@ -0,0 +1,37 @@
# Solution Draft
## Product Solution Description
[Short description of the proposed solution. Brief component interaction diagram.]
## Existing/Competitor Solutions Analysis
[Analysis of existing solutions for similar problems, if any.]
## Architecture
[Architecture solution that meets restrictions and acceptance criteria.]
### Component: [Component Name]
| Solution | Tools | Advantages | Limitations | Requirements | Security | Cost | Fit |
|----------|-------|-----------|-------------|-------------|----------|------|-----|
| [Option 1] | [lib/platform] | [pros] | [cons] | [reqs] | [security] | [cost] | [fit assessment] |
| [Option 2] | [lib/platform] | [pros] | [cons] | [reqs] | [security] | [cost] | [fit assessment] |
[Repeat per component]
## Testing Strategy
### Integration / Functional Tests
- [Test 1]
- [Test 2]
### Non-Functional Tests
- [Performance test 1]
- [Security test 1]
## References
[All cited source links]
## Related Artifacts
- Tech stack evaluation: `_docs/01_solution/tech_stack.md` (if Phase 3 was executed)
- Security analysis: `_docs/01_solution/security_analysis.md` (if Phase 4 was executed)
@@ -0,0 +1,40 @@
# Solution Draft
## Assessment Findings
| Old Component Solution | Weak Point (functional/security/performance) | New Solution |
|------------------------|----------------------------------------------|-------------|
| [old] | [weak point] | [new] |
## Product Solution Description
[Short description. Brief component interaction diagram. Written as if from scratch — no "updated" markers.]
## Architecture
[Architecture solution that meets restrictions and acceptance criteria.]
### Component: [Component Name]
| Solution | Tools | Advantages | Limitations | Requirements | Security | Performance | Fit |
|----------|-------|-----------|-------------|-------------|----------|------------|-----|
| [Option 1] | [lib/platform] | [pros] | [cons] | [reqs] | [security] | [perf] | [fit assessment] |
| [Option 2] | [lib/platform] | [pros] | [cons] | [reqs] | [security] | [perf] | [fit assessment] |
[Repeat per component]
## Testing Strategy
### Integration / Functional Tests
- [Test 1]
- [Test 2]
### Non-Functional Tests
- [Performance test 1]
- [Security test 1]
## References
[All cited source links]
## Related Artifacts
- Tech stack evaluation: `_docs/01_solution/tech_stack.md` (if Phase 3 was executed)
- Security analysis: `_docs/01_solution/security_analysis.md` (if Phase 4 was executed)
@@ -0,0 +1,311 @@
---
name: security-testing
description: "Test for security vulnerabilities using OWASP principles. Use when conducting security audits, testing auth, or implementing security practices."
category: specialized-testing
priority: critical
tokenEstimate: 1200
agents: [qe-security-scanner, qe-api-contract-validator, qe-quality-analyzer]
implementation_status: optimized
optimization_version: 1.0
last_optimized: 2025-12-02
dependencies: []
quick_reference_card: true
tags: [security, owasp, sast, dast, vulnerabilities, auth, injection]
trust_tier: 3
validation:
schema_path: schemas/output.json
validator_path: scripts/validate-config.json
eval_path: evals/security-testing.yaml
---
# Security Testing
<default_to_action>
When testing security or conducting audits:
1. TEST OWASP Top 10 vulnerabilities systematically
2. VALIDATE authentication and authorization on every endpoint
3. SCAN dependencies for known vulnerabilities (npm audit)
4. CHECK for injection attacks (SQL, XSS, command)
5. VERIFY secrets aren't exposed in code/logs
**Quick Security Checks:**
- Access control → Test horizontal/vertical privilege escalation
- Crypto → Verify password hashing, HTTPS, no sensitive data exposed
- Injection → Test SQL injection, XSS, command injection
- Auth → Test weak passwords, session fixation, MFA enforcement
- Config → Check error messages don't leak info
**Critical Success Factors:**
- Think like an attacker, build like a defender
- Security is built in, not added at the end
- Test continuously in CI/CD, not just before release
</default_to_action>
## Quick Reference Card
### When to Use
- Security audits and penetration testing
- Testing authentication/authorization
- Validating input sanitization
- Reviewing security configuration
### OWASP Top 10 (2021)
| # | Vulnerability | Key Test |
|---|---------------|----------|
| 1 | Broken Access Control | User A accessing User B's data |
| 2 | Cryptographic Failures | Plaintext passwords, HTTP |
| 3 | Injection | SQL/XSS/command injection |
| 4 | Insecure Design | Rate limiting, session timeout |
| 5 | Security Misconfiguration | Verbose errors, exposed /admin |
| 6 | Vulnerable Components | npm audit, outdated packages |
| 7 | Auth Failures | Weak passwords, no MFA |
| 8 | Integrity Failures | Unsigned updates, malware |
| 9 | Logging Failures | No audit trail for breaches |
| 10 | SSRF | Server fetching internal URLs |
### Tools
| Type | Tool | Purpose |
|------|------|---------|
| SAST | SonarQube, Semgrep | Static code analysis |
| DAST | OWASP ZAP, Burp | Dynamic scanning |
| Deps | npm audit, Snyk | Dependency vulnerabilities |
| Secrets | git-secrets, TruffleHog | Secret scanning |
### Agent Coordination
- `qe-security-scanner`: Multi-layer SAST/DAST scanning
- `qe-api-contract-validator`: API security testing
- `qe-quality-analyzer`: Security code review
---
## Key Vulnerability Tests
### 1. Broken Access Control
```javascript
// Horizontal escalation - User A accessing User B's data
test('user cannot access another user\'s order', async () => {
const userAToken = await login('userA');
const userBOrder = await createOrder('userB');
const response = await api.get(`/orders/${userBOrder.id}`, {
headers: { Authorization: `Bearer ${userAToken}` }
});
expect(response.status).toBe(403);
});
// Vertical escalation - Regular user accessing admin
test('regular user cannot access admin', async () => {
const userToken = await login('regularUser');
expect((await api.get('/admin/users', {
headers: { Authorization: `Bearer ${userToken}` }
})).status).toBe(403);
});
```
### 2. Injection Attacks
```javascript
// SQL Injection
test('prevents SQL injection', async () => {
const malicious = "' OR '1'='1";
  const response = await api.get(`/products?search=${encodeURIComponent(malicious)}`);
  expect(response.body.length).toBeLessThan(100); // injected OR-clause must not dump the whole table
});
// XSS
test('sanitizes HTML output', async () => {
const xss = '<script>alert("XSS")</script>';
await api.post('/comments', { text: xss });
const html = (await api.get('/comments')).body;
expect(html).toContain('&lt;script&gt;');
expect(html).not.toContain('<script>');
});
```
### 3. Cryptographic Failures
```javascript
test('passwords are hashed', async () => {
await db.users.create({ email: 'test@example.com', password: 'MyPassword123' });
const user = await db.users.findByEmail('test@example.com');
expect(user.password).not.toBe('MyPassword123');
expect(user.password).toMatch(/^\$2[aby]\$\d{2}\$/); // bcrypt
});
test('no sensitive data in API response', async () => {
const response = await api.get('/users/me');
expect(response.body).not.toHaveProperty('password');
expect(response.body).not.toHaveProperty('ssn');
});
```
### 4. Security Misconfiguration
```javascript
test('errors don\'t leak sensitive info', async () => {
const response = await api.post('/login', { email: 'nonexistent@test.com', password: 'wrong' });
expect(response.body.error).toBe('Invalid credentials'); // Generic message
});
test('sensitive endpoints not exposed', async () => {
const endpoints = ['/debug', '/.env', '/.git', '/admin'];
  for (const ep of endpoints) {
expect((await fetch(`https://example.com${ep}`)).status).not.toBe(200);
}
});
```
### 5. Rate Limiting
```javascript
test('rate limiting prevents brute force', async () => {
const responses = [];
for (let i = 0; i < 20; i++) {
responses.push(await api.post('/login', { email: 'test@example.com', password: 'wrong' }));
}
expect(responses.filter(r => r.status === 429).length).toBeGreaterThan(0);
});
```
---
## Security Checklist
### Authentication
- [ ] Strong password requirements (12+ chars)
- [ ] Password hashing (bcrypt, scrypt, Argon2)
- [ ] MFA for sensitive operations
- [ ] Account lockout after failed attempts
- [ ] Session ID changes after login
- [ ] Session timeout
### Authorization
- [ ] Check authorization on every request
- [ ] Least privilege principle
- [ ] No horizontal escalation
- [ ] No vertical escalation
### Data Protection
- [ ] HTTPS everywhere
- [ ] Encrypted at rest
- [ ] Secrets not in code/logs
- [ ] PII compliance (GDPR)
### Input Validation
- [ ] Server-side validation
- [ ] Parameterized queries (no SQL injection)
- [ ] Output encoding (no XSS)
- [ ] Rate limiting
---
## CI/CD Integration
```yaml
# GitHub Actions
security-checks:
steps:
- name: Dependency audit
run: npm audit --audit-level=high
- name: SAST scan
run: npm run sast
- name: Secret scan
uses: trufflesecurity/trufflehog@main
- name: DAST scan
if: github.ref == 'refs/heads/main'
run: docker run owasp/zap2docker-stable zap-baseline.py -t https://staging.example.com
```
**Pre-commit hooks:**
```bash
#!/bin/sh
git-secrets --scan
npm run lint:security
```
---
## Agent-Assisted Security Testing
```typescript
// Comprehensive multi-layer scan
await Task("Security Scan", {
target: 'src/',
layers: { sast: true, dast: true, dependencies: true, secrets: true },
severity: ['critical', 'high', 'medium']
}, "qe-security-scanner");
// OWASP Top 10 testing
await Task("OWASP Scan", {
categories: ['broken-access-control', 'injection', 'cryptographic-failures'],
depth: 'comprehensive'
}, "qe-security-scanner");
// Validate fix
await Task("Validate Fix", {
vulnerability: 'CVE-2024-12345',
expectedResolution: 'upgrade package to v2.0.0',
retestAfterFix: true
}, "qe-security-scanner");
```
---
## Agent Coordination Hints
### Memory Namespace
```
aqe/security/
├── scans/* - Scan results
├── vulnerabilities/* - Found vulnerabilities
├── fixes/* - Remediation tracking
└── compliance/* - Compliance status
```
### Fleet Coordination
```typescript
const securityFleet = await FleetManager.coordinate({
strategy: 'security-testing',
agents: [
'qe-security-scanner',
'qe-api-contract-validator',
'qe-quality-analyzer',
'qe-deployment-readiness'
],
topology: 'parallel'
});
```
---
## Common Mistakes
### ❌ Security by Obscurity
Hiding admin at `/super-secret-admin` → **Use proper auth**
### ❌ Client-Side Validation Only
JavaScript validation can be bypassed → **Always validate server-side**
### ❌ Trusting User Input
Assuming input is safe → **Sanitize, validate, escape all input**
### ❌ Hardcoded Secrets
API keys in code → **Environment variables, secret management**
---
## Related Skills
- [agentic-quality-engineering](../agentic-quality-engineering/) - Security with agents
- [api-testing-patterns](../api-testing-patterns/) - API security testing
- [compliance-testing](../compliance-testing/) - GDPR, HIPAA, SOC2
---
## Remember
**Think like an attacker:** What would you try to break? Test that.
**Build like a defender:** Assume input is malicious until proven otherwise.
**Test continuously:** Security testing is ongoing, not one-time.
**With Agents:** Agents automate vulnerability scanning, track remediation, and validate fixes. Use agents to maintain security posture at scale.
@@ -0,0 +1,789 @@
# =============================================================================
# AQE Skill Evaluation Test Suite: Security Testing v1.0.0
# =============================================================================
#
# Comprehensive evaluation suite for the security-testing skill per ADR-056.
# Tests OWASP Top 10 2021 detection, severity classification, remediation
# quality, and cross-model consistency.
#
# Schema: .claude/skills/.validation/schemas/skill-eval.schema.json
# Validator: .claude/skills/security-testing/scripts/validate-config.json
#
# Coverage:
# - OWASP A01:2021 - Broken Access Control
# - OWASP A02:2021 - Cryptographic Failures
# - OWASP A03:2021 - Injection (SQL, XSS, Command)
# - OWASP A07:2021 - Identification and Authentication Failures
# - Negative tests (no false positives on secure code)
#
# =============================================================================
skill: security-testing
version: 1.0.0
description: >
Comprehensive evaluation suite for the security-testing skill.
Tests OWASP Top 10 2021 detection capabilities, CWE classification accuracy,
CVSS scoring, severity classification, and remediation quality.
Supports multi-model testing and integrates with ReasoningBank for
continuous improvement.
# =============================================================================
# Multi-Model Configuration
# =============================================================================
models_to_test:
- claude-3.5-sonnet # Primary model (high accuracy expected)
- claude-3-haiku # Fast model (minimum quality threshold)
- gpt-4o # Cross-vendor validation
# =============================================================================
# MCP Integration Configuration
# =============================================================================
mcp_integration:
enabled: true
namespace: skill-validation
# Query existing security patterns before running evals
query_patterns: true
# Track each test outcome for learning feedback loop
track_outcomes: true
# Store successful patterns after evals complete
store_patterns: true
# Share learning with fleet coordinator agents
share_learning: true
# Update quality gate with validation metrics
update_quality_gate: true
# Target agents for learning distribution
target_agents:
- qe-learning-coordinator
- qe-queen-coordinator
- qe-security-scanner
- qe-security-auditor
# =============================================================================
# ReasoningBank Learning Configuration
# =============================================================================
learning:
store_success_patterns: true
store_failure_patterns: true
pattern_ttl_days: 90
min_confidence_to_store: 0.7
cross_model_comparison: true
# =============================================================================
# Result Format Configuration
# =============================================================================
result_format:
json_output: true
markdown_report: true
include_raw_output: false
include_timing: true
include_token_usage: true
# =============================================================================
# Environment Setup
# =============================================================================
setup:
required_tools:
- jq # JSON parsing (required)
- npm # Dependency audit (optional but recommended)
environment_variables:
SECURITY_SCAN_DEPTH: "deep"
OWASP_ENABLED: "true"
SEVERITY_THRESHOLD: "medium"
fixtures:
- name: vulnerable_express_app
path: fixtures/vulnerable-express-app.js
content: |
const express = require('express');
const app = express();
// SQL Injection vulnerability
app.get('/user/:id', (req, res) => {
const query = `SELECT * FROM users WHERE id = ${req.params.id}`;
db.query(query);
});
// XSS vulnerability
app.get('/profile', (req, res) => {
res.send(`<h1>Hello ${req.query.name}</h1>`);
});
// Path Traversal vulnerability
app.get('/file', (req, res) => {
const path = './uploads/' + req.query.filename;
res.sendFile(path);
});
# =============================================================================
# TEST CASES
# =============================================================================
test_cases:
# ---------------------------------------------------------------------------
# CATEGORY: SQL Injection (OWASP A03:2021)
# ---------------------------------------------------------------------------
- id: tc001_sql_injection_string_concat
description: "Detect SQL injection via string concatenation in Node.js"
category: injection
priority: critical
input:
code: |
const express = require('express');
const mysql = require('mysql');
const app = express();
app.get('/api/users/:id', (req, res) => {
const userId = req.params.id;
const query = `SELECT * FROM users WHERE id = ${userId}`;
db.query(query, (err, results) => {
res.json(results);
});
});
context:
language: javascript
framework: express
environment: production
expected_output:
must_contain:
- "SQL injection"
- "parameterized"
must_not_contain:
- "no vulnerabilities"
- "secure"
must_match_regex:
- "CWE-89|CWE-564"
- "A03:20[21][0-9]"
severity_classification: critical
finding_count:
min: 1
max: 3
recommendation_count:
min: 1
validation:
schema_check: true
keyword_match_threshold: 0.8
reasoning_quality_min: 0.7
grading_rubric:
completeness: 0.3
accuracy: 0.5
actionability: 0.2
timeout_ms: 30000
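  # For reference, a minimal sketch (not part of the eval input) of the
  # parameterized fix tc001 expects the skill to recommend; tc002 below
  # uses the same pattern as its known-safe input:
  #   app.get('/api/users/:id', (req, res) => {
  #     db.query('SELECT * FROM users WHERE id = ?', [req.params.id],
  #       (err, results) => res.json(results));
  #   });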
- id: tc002_sql_injection_parameterized_safe
description: "Verify parameterized queries are NOT flagged as vulnerable"
category: injection
priority: high
input:
code: |
app.get('/api/users/:id', (req, res) => {
const userId = parseInt(req.params.id, 10);
db.query('SELECT * FROM users WHERE id = ?', [userId], (err, results) => {
res.json(results);
});
});
context:
language: javascript
framework: express
expected_output:
must_contain:
- "parameterized"
- "secure"
must_not_contain:
- "SQL injection"
- "critical"
- "vulnerable"
severity_classification: info
finding_count:
max: 1
validation:
schema_check: true
keyword_match_threshold: 0.7
allow_partial: true
# ---------------------------------------------------------------------------
# CATEGORY: Cross-Site Scripting (OWASP A03:2021)
# ---------------------------------------------------------------------------
- id: tc003_xss_reflected_html_output
description: "Detect reflected XSS in unescaped HTML output"
category: injection
priority: critical
input:
code: |
app.get('/profile', (req, res) => {
const name = req.query.name;
res.send(`
<html>
<body>
<h1>Welcome, ${name}!</h1>
<p>Your profile has been loaded.</p>
</body>
</html>
`);
});
context:
language: javascript
framework: express
expected_output:
must_contain:
- "XSS"
- "cross-site scripting"
- "sanitize"
- "escape"
must_match_regex:
- "CWE-79"
severity_classification: high
finding_count:
min: 1
validation:
schema_check: true
keyword_match_threshold: 0.8
reasoning_quality_min: 0.75
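  # For reference, a hedged sketch of the escaping fix tc003 expects the
  # skill to recommend (assumes the 'escape-html' npm package; any
  # HTML-escaping helper would do):
  #   const escapeHtml = require('escape-html');
  #   app.get('/profile', (req, res) => {
  #     res.send(`<h1>Welcome, ${escapeHtml(req.query.name)}!</h1>`);
  #   });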
- id: tc004_xss_dom_based_innerhtml
description: "Detect DOM-based XSS via innerHTML assignment"
category: injection
priority: high
input:
code: |
// Client-side JavaScript
const params = new URLSearchParams(window.location.search);
const message = params.get('msg');
document.getElementById('output').innerHTML = message;
context:
language: javascript
framework: vanilla
environment: production
expected_output:
must_contain:
- "DOM"
- "XSS"
- "innerHTML"
- "textContent"
must_match_regex:
- "CWE-79"
severity_classification: high
validation:
schema_check: true
keyword_match_threshold: 0.7
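  # For reference, a minimal sketch of the fix tc004 expects: assign
  # untrusted input via textContent so it is rendered as text, not parsed
  # as HTML:
  #   document.getElementById('output').textContent = message;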
# ---------------------------------------------------------------------------
# CATEGORY: Authentication Failures (OWASP A07:2021)
# ---------------------------------------------------------------------------
- id: tc005_hardcoded_credentials
description: "Detect hardcoded credentials and API keys"
category: authentication
priority: critical
input:
code: |
const ADMIN_PASSWORD = 'admin123';
const API_KEY = 'sk-1234567890abcdef';
const DATABASE_URL = 'postgres://admin:password123@localhost/db';
app.post('/login', (req, res) => {
if (req.body.password === ADMIN_PASSWORD) {
req.session.isAdmin = true;
res.send('Login successful');
}
});
context:
language: javascript
framework: express
expected_output:
must_contain:
- "hardcoded"
- "credentials"
- "secret"
- "environment variable"
must_match_regex:
- "CWE-798|CWE-259"
severity_classification: critical
finding_count:
min: 2
validation:
schema_check: true
keyword_match_threshold: 0.8
reasoning_quality_min: 0.8
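  # For reference, a hedged remediation sketch for tc005 (assumes secrets
  # are injected via environment variables, e.g. from a secret manager or
  # an untracked .env file):
  #   const ADMIN_PASSWORD = process.env.ADMIN_PASSWORD;
  #   const API_KEY = process.env.API_KEY;
  #   const DATABASE_URL = process.env.DATABASE_URL;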
- id: tc006_weak_password_hashing
description: "Detect weak password hashing algorithms (MD5, SHA1)"
category: authentication
priority: high
input:
code: |
const crypto = require('crypto');
function hashPassword(password) {
return crypto.createHash('md5').update(password).digest('hex');
}
function verifyPassword(password, hash) {
return hashPassword(password) === hash;
}
context:
language: javascript
framework: nodejs
expected_output:
must_contain:
- "MD5"
- "weak"
- "bcrypt"
- "argon2"
must_match_regex:
- "CWE-327|CWE-328|CWE-916"
severity_classification: high
finding_count:
min: 1
validation:
schema_check: true
keyword_match_threshold: 0.8
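  # For reference, a minimal sketch of the fix tc006 expects (the 'bcrypt'
  # npm package; argon2 would satisfy the expected output as well):
  #   const bcrypt = require('bcrypt');
  #   const hashPassword = (password) => bcrypt.hash(password, 12);
  #   const verifyPassword = (password, hash) => bcrypt.compare(password, hash);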
# ---------------------------------------------------------------------------
# CATEGORY: Broken Access Control (OWASP A01:2021)
# ---------------------------------------------------------------------------
- id: tc007_idor_missing_authorization
description: "Detect IDOR vulnerability with missing authorization check"
category: authorization
priority: critical
input:
code: |
app.get('/api/users/:id/profile', (req, res) => {
// No authorization check - any user can access any profile
const userId = req.params.id;
db.query('SELECT * FROM profiles WHERE user_id = ?', [userId])
.then(profile => res.json(profile));
});
app.delete('/api/users/:id', (req, res) => {
// No check if requesting user owns this account
db.query('DELETE FROM users WHERE id = ?', [req.params.id]);
res.send('User deleted');
});
context:
language: javascript
framework: express
expected_output:
must_contain:
- "authorization"
- "access control"
- "IDOR"
- "ownership"
must_match_regex:
- "CWE-639|CWE-284|CWE-862"
- "A01:2021"
severity_classification: critical
validation:
schema_check: true
keyword_match_threshold: 0.7
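  # For reference, a hedged sketch of the ownership check tc007 expects
  # (assumes a hypothetical requireAuth middleware that populates req.user):
  #   app.delete('/api/users/:id', requireAuth, (req, res) => {
  #     if (String(req.user.id) !== req.params.id) {
  #       return res.status(403).send('Forbidden');
  #     }
  #     db.query('DELETE FROM users WHERE id = ?', [req.params.id]);
  #     res.send('User deleted');
  #   });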
# ---------------------------------------------------------------------------
# CATEGORY: Cryptographic Failures (OWASP A02:2021)
# ---------------------------------------------------------------------------
- id: tc008_weak_encryption_des
description: "Detect use of weak encryption algorithms (DES, RC4)"
category: cryptography
priority: high
input:
code: |
const crypto = require('crypto');
function encryptData(data, key) {
const cipher = crypto.createCipher('des', key);
return cipher.update(data, 'utf8', 'hex') + cipher.final('hex');
}
function decryptData(data, key) {
const decipher = crypto.createDecipher('des', key);
return decipher.update(data, 'hex', 'utf8') + decipher.final('utf8');
}
context:
language: javascript
framework: nodejs
expected_output:
must_contain:
- "DES"
- "weak"
- "deprecated"
- "AES"
must_match_regex:
- "CWE-327|CWE-328"
- "A02:2021"
severity_classification: high
validation:
schema_check: true
keyword_match_threshold: 0.7
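  # For reference, a minimal sketch of the modern replacement tc008 expects
  # (AES-256-GCM via Node's createCipheriv; assumes a 32-byte key and a
  # fresh random IV per message):
  #   const crypto = require('crypto');
  #   function encryptData(data, key) {
  #     const iv = crypto.randomBytes(12);
  #     const cipher = crypto.createCipheriv('aes-256-gcm', key, iv);
  #     const enc = Buffer.concat([cipher.update(data, 'utf8'), cipher.final()]);
  #     return { iv, enc, tag: cipher.getAuthTag() };
  #   }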
- id: tc009_plaintext_password_storage
description: "Detect plaintext password storage"
category: cryptography
priority: critical
input:
code: |
class User {
constructor(email, password) {
this.email = email;
this.password = password; // Stored in plaintext!
}
save() {
db.query('INSERT INTO users (email, password) VALUES (?, ?)',
[this.email, this.password]);
}
}
context:
language: javascript
framework: nodejs
expected_output:
must_contain:
- "plaintext"
- "password"
- "hash"
- "bcrypt"
must_match_regex:
- "CWE-256|CWE-312"
- "A02:2021"
severity_classification: critical
validation:
schema_check: true
keyword_match_threshold: 0.8
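  # For reference, a minimal sketch of the fix tc009 expects: hash the
  # password (e.g. with bcrypt) before persisting it:
  #   async save() {
  #     const passwordHash = await bcrypt.hash(this.password, 12);
  #     db.query('INSERT INTO users (email, password) VALUES (?, ?)',
  #       [this.email, passwordHash]);
  #   }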
# ---------------------------------------------------------------------------
# CATEGORY: Path Traversal (Related to A01:2021)
# ---------------------------------------------------------------------------
- id: tc010_path_traversal_file_access
description: "Detect path traversal vulnerability in file access"
category: injection
priority: critical
input:
code: |
const fs = require('fs');
app.get('/download', (req, res) => {
const filename = req.query.file;
const filepath = './uploads/' + filename;
res.sendFile(filepath);
});
app.get('/read/:name', (req, res) => {
const content = fs.readFileSync('./data/' + req.params.name);
res.send(content);
});
context:
language: javascript
framework: express
expected_output:
must_contain:
- "path traversal"
- "directory traversal"
- "../"
- "sanitize"
must_match_regex:
- "CWE-22|CWE-23"
severity_classification: critical
validation:
schema_check: true
keyword_match_threshold: 0.7
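  # For reference, a hedged sketch of the containment check tc010 expects:
  # resolve the requested path and verify it stays inside the uploads dir:
  #   const path = require('path');
  #   app.get('/download', (req, res) => {
  #     const base = path.resolve('./uploads');
  #     const target = path.resolve(base, req.query.file || '');
  #     if (!target.startsWith(base + path.sep)) {
  #       return res.status(400).send('Invalid filename');
  #     }
  #     res.sendFile(target);
  #   });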
# ---------------------------------------------------------------------------
# CATEGORY: Negative Tests (No False Positives)
# ---------------------------------------------------------------------------
- id: tc011_secure_code_no_false_positives
description: "Verify secure code is NOT flagged as vulnerable"
category: negative
priority: critical
input:
code: |
const express = require('express');
const helmet = require('helmet');
const rateLimit = require('express-rate-limit');
const bcrypt = require('bcrypt');
const validator = require('validator');
const app = express();
app.use(helmet());
app.use(rateLimit({ windowMs: 15 * 60 * 1000, max: 100 }));
app.post('/api/users', async (req, res) => {
const { email, password } = req.body;
// Input validation
if (!validator.isEmail(email)) {
return res.status(400).json({ error: 'Invalid email' });
}
// Secure password hashing
const hashedPassword = await bcrypt.hash(password, 12);
// Parameterized query
await db.query(
'INSERT INTO users (email, password) VALUES ($1, $2)',
[email, hashedPassword]
);
res.status(201).json({ message: 'User created' });
});
context:
language: javascript
framework: express
environment: production
expected_output:
must_contain:
- "secure"
- "best practice"
must_not_contain:
- "SQL injection"
- "XSS"
- "critical vulnerability"
- "high severity"
finding_count:
max: 2 # Allow informational findings only
validation:
schema_check: true
keyword_match_threshold: 0.6
allow_partial: true
- id: tc012_secure_auth_implementation
description: "Verify secure authentication is recognized as safe"
category: negative
priority: high
input:
code: |
const bcrypt = require('bcrypt');
const jwt = require('jsonwebtoken');
async function login(email, password) {
const user = await User.findByEmail(email);
if (!user) {
return { error: 'Invalid credentials' };
}
const match = await bcrypt.compare(password, user.passwordHash);
if (!match) {
return { error: 'Invalid credentials' };
}
const token = jwt.sign(
{ userId: user.id },
process.env.JWT_SECRET,
{ expiresIn: '1h' }
);
return { token };
}
context:
language: javascript
framework: nodejs
expected_output:
must_contain:
- "bcrypt"
- "jwt"
- "secure"
must_not_contain:
- "vulnerable"
- "critical"
- "hardcoded"
severity_classification: info
validation:
schema_check: true
allow_partial: true
# ---------------------------------------------------------------------------
# CATEGORY: Python Security (Multi-language Support)
# ---------------------------------------------------------------------------
- id: tc013_python_sql_injection
description: "Detect SQL injection in Python Flask application"
category: injection
priority: critical
input:
code: |
from flask import Flask, request
import sqlite3
app = Flask(__name__)
@app.route('/user')
def get_user():
user_id = request.args.get('id')
conn = sqlite3.connect('users.db')
cursor = conn.cursor()
cursor.execute(f"SELECT * FROM users WHERE id = {user_id}")
return str(cursor.fetchone())
context:
language: python
framework: flask
expected_output:
must_contain:
- "SQL injection"
- "parameterized"
- "f-string"
must_match_regex:
- "CWE-89"
severity_classification: critical
finding_count:
min: 1
validation:
schema_check: true
keyword_match_threshold: 0.7
- id: tc014_python_ssti_jinja
description: "Detect Server-Side Template Injection in Jinja2"
category: injection
priority: critical
input:
code: |
from flask import Flask, request, render_template_string
app = Flask(__name__)
@app.route('/render')
def render():
template = request.args.get('template')
return render_template_string(template)
context:
language: python
framework: flask
expected_output:
must_contain:
- "SSTI"
- "template injection"
- "render_template_string"
- "Jinja2"
must_match_regex:
- "CWE-94|CWE-1336"
severity_classification: critical
validation:
schema_check: true
keyword_match_threshold: 0.7
- id: tc015_python_pickle_deserialization
description: "Detect insecure deserialization with pickle"
category: injection
priority: critical
input:
code: |
import pickle
from flask import Flask, request
app = Flask(__name__)
@app.route('/load')
def load_data():
data = request.get_data()
obj = pickle.loads(data)
return str(obj)
context:
language: python
framework: flask
expected_output:
must_contain:
- "pickle"
- "deserialization"
- "untrusted"
- "RCE"
must_match_regex:
- "CWE-502"
- "A08:2021"
severity_classification: critical
validation:
schema_check: true
keyword_match_threshold: 0.7
# =============================================================================
# SUCCESS CRITERIA
# =============================================================================
success_criteria:
# Overall pass rate (90% of tests must pass)
pass_rate: 0.9
# Critical tests must ALL pass (100%)
critical_pass_rate: 1.0
# Average reasoning quality score
avg_reasoning_quality: 0.75
# Maximum suite execution time (5 minutes)
max_execution_time_ms: 300000
# Maximum variance between model results (15%)
cross_model_variance: 0.15
# =============================================================================
# METADATA
# =============================================================================
metadata:
author: "qe-security-auditor"
created: "2026-02-02"
last_updated: "2026-02-02"
coverage_target: >
OWASP Top 10 2021: A01 (Broken Access Control), A02 (Cryptographic Failures),
A03 (Injection - SQL, XSS, SSTI, Command), A07 (Authentication Failures),
A08 (Software Integrity - Deserialization). Covers JavaScript/Node.js
Express apps and Python Flask apps. 15 test cases with 90% pass rate
requirement and 100% critical pass rate.
+879
View File
@@ -0,0 +1,879 @@
{
"$schema": "https://json-schema.org/draft/2020-12/schema",
"$id": "https://agentic-qe.dev/schemas/security-testing-output.json",
"title": "AQE Security Testing Skill Output Schema",
"description": "Schema for security-testing skill output validation. Extends the base skill-output template with OWASP Top 10 categories, CWE identifiers, and CVSS scoring.",
"type": "object",
"required": ["skillName", "version", "timestamp", "status", "trustTier", "output"],
"properties": {
"skillName": {
"type": "string",
"const": "security-testing",
"description": "Must be 'security-testing'"
},
"version": {
"type": "string",
"pattern": "^\\d+\\.\\d+\\.\\d+(-[a-zA-Z0-9]+)?$",
"description": "Semantic version of the skill"
},
"timestamp": {
"type": "string",
"format": "date-time",
"description": "ISO 8601 timestamp of output generation"
},
"status": {
"type": "string",
"enum": ["success", "partial", "failed", "skipped"],
"description": "Overall execution status"
},
"trustTier": {
"type": "integer",
"const": 3,
"description": "Trust tier 3 indicates full validation with eval suite"
},
"output": {
"type": "object",
"required": ["summary", "findings", "owaspCategories"],
"properties": {
"summary": {
"type": "string",
"minLength": 50,
"maxLength": 2000,
"description": "Human-readable summary of security findings"
},
"score": {
"$ref": "#/$defs/securityScore",
"description": "Overall security score"
},
"findings": {
"type": "array",
"items": {
"$ref": "#/$defs/securityFinding"
},
"maxItems": 500,
"description": "List of security vulnerabilities discovered"
},
"recommendations": {
"type": "array",
"items": {
"$ref": "#/$defs/securityRecommendation"
},
"maxItems": 100,
"description": "Prioritized remediation recommendations with code examples"
},
"metrics": {
"$ref": "#/$defs/securityMetrics",
"description": "Security scan metrics and statistics"
},
"owaspCategories": {
"$ref": "#/$defs/owaspCategoryBreakdown",
"description": "OWASP Top 10 2021 category breakdown"
},
"artifacts": {
"type": "array",
"items": {
"$ref": "#/$defs/artifact"
},
"maxItems": 50,
"description": "Generated security reports and scan artifacts"
},
"timeline": {
"type": "array",
"items": {
"$ref": "#/$defs/timelineEvent"
},
"description": "Scan execution timeline"
},
"scanConfiguration": {
"$ref": "#/$defs/scanConfiguration",
"description": "Configuration used for the security scan"
}
}
},
"metadata": {
"$ref": "#/$defs/metadata"
},
"validation": {
"$ref": "#/$defs/validationResult"
},
"learning": {
"$ref": "#/$defs/learningData"
}
},
"$defs": {
"securityScore": {
"type": "object",
"required": ["value", "max"],
"properties": {
"value": {
"type": "number",
"minimum": 0,
"maximum": 100,
"description": "Security score (0=critical issues, 100=no issues)"
},
"max": {
"type": "number",
"const": 100,
"description": "Maximum score is always 100"
},
"grade": {
"type": "string",
"pattern": "^[A-F][+-]?$",
"description": "Letter grade: A (90-100), B (80-89), C (70-79), D (60-69), F (<60)"
},
"trend": {
"type": "string",
"enum": ["improving", "stable", "declining", "unknown"],
"description": "Trend compared to previous scans"
},
"riskLevel": {
"type": "string",
"enum": ["critical", "high", "medium", "low", "minimal"],
"description": "Overall risk level assessment"
}
}
},
"securityFinding": {
"type": "object",
"required": ["id", "title", "severity", "owasp"],
"properties": {
"id": {
"type": "string",
"pattern": "^SEC-\\d{3,6}$",
"description": "Unique finding identifier (e.g., SEC-001)"
},
"title": {
"type": "string",
"minLength": 10,
"maxLength": 200,
"description": "Finding title describing the vulnerability"
},
"description": {
"type": "string",
"maxLength": 2000,
"description": "Detailed description of the vulnerability"
},
"severity": {
"type": "string",
"enum": ["critical", "high", "medium", "low", "info"],
"description": "Severity: critical (CVSS 9.0-10.0), high (7.0-8.9), medium (4.0-6.9), low (0.1-3.9), info (0)"
},
"owasp": {
"type": "string",
"pattern": "^A(0[1-9]|10):20(21|25)$",
"description": "OWASP Top 10 category (e.g., A01:2021, A03:2025)"
},
"owaspCategory": {
"type": "string",
"enum": [
"A01:2021-Broken-Access-Control",
"A02:2021-Cryptographic-Failures",
"A03:2021-Injection",
"A04:2021-Insecure-Design",
"A05:2021-Security-Misconfiguration",
"A06:2021-Vulnerable-Components",
"A07:2021-Identification-Authentication-Failures",
"A08:2021-Software-Data-Integrity-Failures",
"A09:2021-Security-Logging-Monitoring-Failures",
"A10:2021-Server-Side-Request-Forgery"
],
"description": "Full OWASP category name"
},
"cwe": {
"type": "string",
"pattern": "^CWE-\\d{1,4}$",
"description": "CWE identifier (e.g., CWE-79 for XSS, CWE-89 for SQLi)"
},
"cvss": {
"type": "object",
"properties": {
"score": {
"type": "number",
"minimum": 0,
"maximum": 10,
"description": "CVSS v3.1 base score"
},
"vector": {
"type": "string",
"pattern": "^CVSS:3\\.1/AV:[NALP]/AC:[LH]/PR:[NLH]/UI:[NR]/S:[UC]/C:[NLH]/I:[NLH]/A:[NLH]$",
"description": "CVSS v3.1 vector string"
},
"severity": {
"type": "string",
"enum": ["None", "Low", "Medium", "High", "Critical"],
"description": "CVSS severity rating"
}
}
},
"location": {
"$ref": "#/$defs/location",
"description": "Location of the vulnerability"
},
"evidence": {
"type": "string",
"maxLength": 5000,
"description": "Evidence: code snippet, request/response, or PoC"
},
"remediation": {
"type": "string",
"maxLength": 2000,
"description": "Specific fix instructions for this finding"
},
"references": {
"type": "array",
"items": {
"type": "object",
"required": ["title", "url"],
"properties": {
"title": { "type": "string" },
"url": { "type": "string", "format": "uri" }
}
},
"maxItems": 10,
"description": "External references (OWASP, CWE, CVE, etc.)"
},
"falsePositive": {
"type": "boolean",
"default": false,
"description": "Potential false positive flag"
},
"confidence": {
"type": "number",
"minimum": 0,
"maximum": 1,
"description": "Confidence in finding accuracy (0.0-1.0)"
},
"exploitability": {
"type": "string",
"enum": ["trivial", "easy", "moderate", "difficult", "theoretical"],
"description": "How easy is it to exploit this vulnerability"
},
"affectedVersions": {
"type": "array",
"items": { "type": "string" },
"description": "Affected package/library versions for dependency vulnerabilities"
},
"cve": {
"type": "string",
"pattern": "^CVE-\\d{4}-\\d{4,}$",
"description": "CVE identifier if applicable"
}
}
},
"securityRecommendation": {
"type": "object",
"required": ["id", "title", "priority", "owaspCategories"],
"properties": {
"id": {
"type": "string",
"pattern": "^REC-\\d{3,6}$",
"description": "Unique recommendation identifier"
},
"title": {
"type": "string",
"minLength": 10,
"maxLength": 200,
"description": "Recommendation title"
},
"description": {
"type": "string",
"maxLength": 2000,
"description": "Detailed recommendation description"
},
"priority": {
"type": "string",
"enum": ["critical", "high", "medium", "low"],
"description": "Remediation priority"
},
"effort": {
"type": "string",
"enum": ["trivial", "low", "medium", "high", "major"],
"description": "Estimated effort: trivial(<1hr), low(1-4hr), medium(1-3d), high(1-2wk), major(>2wk)"
},
"impact": {
"type": "integer",
"minimum": 1,
"maximum": 10,
"description": "Security impact if implemented (1-10)"
},
"relatedFindings": {
"type": "array",
"items": {
"type": "string",
"pattern": "^SEC-\\d{3,6}$"
},
"description": "IDs of findings this addresses"
},
"owaspCategories": {
"type": "array",
"items": {
"type": "string",
"pattern": "^A(0[1-9]|10):20(21|25)$"
},
"description": "OWASP categories this recommendation addresses"
},
"codeExample": {
"type": "object",
"properties": {
"before": {
"type": "string",
"maxLength": 2000,
"description": "Vulnerable code example"
},
"after": {
"type": "string",
"maxLength": 2000,
"description": "Secure code example"
},
"language": {
"type": "string",
"description": "Programming language"
}
},
"description": "Before/after code examples for remediation"
},
"resources": {
"type": "array",
"items": {
"type": "object",
"required": ["title", "url"],
"properties": {
"title": { "type": "string" },
"url": { "type": "string", "format": "uri" }
}
},
"maxItems": 10,
"description": "External resources and documentation"
},
"automatable": {
"type": "boolean",
"description": "Can this fix be automated?"
},
"fixCommand": {
"type": "string",
"description": "CLI command to apply fix if automatable"
}
}
},
"owaspCategoryBreakdown": {
"type": "object",
"description": "OWASP Top 10 2021 category scores and findings",
"properties": {
"A01:2021": {
"$ref": "#/$defs/owaspCategoryScore",
"description": "A01:2021 - Broken Access Control"
},
"A02:2021": {
"$ref": "#/$defs/owaspCategoryScore",
"description": "A02:2021 - Cryptographic Failures"
},
"A03:2021": {
"$ref": "#/$defs/owaspCategoryScore",
"description": "A03:2021 - Injection"
},
"A04:2021": {
"$ref": "#/$defs/owaspCategoryScore",
"description": "A04:2021 - Insecure Design"
},
"A05:2021": {
"$ref": "#/$defs/owaspCategoryScore",
"description": "A05:2021 - Security Misconfiguration"
},
"A06:2021": {
"$ref": "#/$defs/owaspCategoryScore",
"description": "A06:2021 - Vulnerable and Outdated Components"
},
"A07:2021": {
"$ref": "#/$defs/owaspCategoryScore",
"description": "A07:2021 - Identification and Authentication Failures"
},
"A08:2021": {
"$ref": "#/$defs/owaspCategoryScore",
"description": "A08:2021 - Software and Data Integrity Failures"
},
"A09:2021": {
"$ref": "#/$defs/owaspCategoryScore",
"description": "A09:2021 - Security Logging and Monitoring Failures"
},
"A10:2021": {
"$ref": "#/$defs/owaspCategoryScore",
"description": "A10:2021 - Server-Side Request Forgery (SSRF)"
}
},
"additionalProperties": false
},
"owaspCategoryScore": {
"type": "object",
"required": ["tested", "score"],
"properties": {
"tested": {
"type": "boolean",
"description": "Whether this category was tested"
},
"score": {
"type": "number",
"minimum": 0,
"maximum": 100,
"description": "Category score (100 = no issues, 0 = critical)"
},
"grade": {
"type": "string",
"pattern": "^[A-F][+-]?$",
"description": "Letter grade for this category"
},
"findingCount": {
"type": "integer",
"minimum": 0,
"description": "Number of findings in this category"
},
"criticalCount": {
"type": "integer",
"minimum": 0,
"description": "Number of critical findings"
},
"highCount": {
"type": "integer",
"minimum": 0,
"description": "Number of high severity findings"
},
"status": {
"type": "string",
"enum": ["pass", "fail", "warn", "skip"],
"description": "Category status"
},
"description": {
"type": "string",
"description": "Category description and context"
},
"cwes": {
"type": "array",
"items": {
"type": "string",
"pattern": "^CWE-\\d{1,4}$"
},
"description": "CWEs found in this category"
}
}
},
"securityMetrics": {
"type": "object",
"properties": {
"totalFindings": {
"type": "integer",
"minimum": 0,
"description": "Total vulnerabilities found"
},
"criticalCount": {
"type": "integer",
"minimum": 0,
"description": "Critical severity findings"
},
"highCount": {
"type": "integer",
"minimum": 0,
"description": "High severity findings"
},
"mediumCount": {
"type": "integer",
"minimum": 0,
"description": "Medium severity findings"
},
"lowCount": {
"type": "integer",
"minimum": 0,
"description": "Low severity findings"
},
"infoCount": {
"type": "integer",
"minimum": 0,
"description": "Informational findings"
},
"filesScanned": {
"type": "integer",
"minimum": 0,
"description": "Number of files analyzed"
},
"linesOfCode": {
"type": "integer",
"minimum": 0,
"description": "Lines of code scanned"
},
"dependenciesChecked": {
"type": "integer",
"minimum": 0,
"description": "Number of dependencies checked"
},
"owaspCategoriesTested": {
"type": "integer",
"minimum": 0,
"maximum": 10,
"description": "OWASP Top 10 categories tested"
},
"owaspCategoriesPassed": {
"type": "integer",
"minimum": 0,
"maximum": 10,
"description": "OWASP Top 10 categories with no findings"
},
"uniqueCwes": {
"type": "integer",
"minimum": 0,
"description": "Unique CWE identifiers found"
},
"falsePositiveRate": {
"type": "number",
"minimum": 0,
"maximum": 1,
"description": "Estimated false positive rate"
},
"scanDurationMs": {
"type": "integer",
"minimum": 0,
"description": "Total scan duration in milliseconds"
},
"coverage": {
"type": "object",
"properties": {
"sast": {
"type": "boolean",
"description": "Static analysis performed"
},
"dast": {
"type": "boolean",
"description": "Dynamic analysis performed"
},
"dependencies": {
"type": "boolean",
"description": "Dependency scan performed"
},
"secrets": {
"type": "boolean",
"description": "Secret scanning performed"
},
"configuration": {
"type": "boolean",
"description": "Configuration review performed"
}
},
"description": "Scan coverage indicators"
}
}
},
"scanConfiguration": {
"type": "object",
"properties": {
"target": {
"type": "string",
"description": "Scan target (file path, URL, or package)"
},
"targetType": {
"type": "string",
"enum": ["source", "url", "package", "container", "infrastructure"],
"description": "Type of target being scanned"
},
"scanTypes": {
"type": "array",
"items": {
"type": "string",
"enum": ["sast", "dast", "dependency", "secret", "configuration", "container", "iac"]
},
"description": "Types of scans performed"
},
"severity": {
"type": "array",
"items": {
"type": "string",
"enum": ["critical", "high", "medium", "low", "info"]
},
"description": "Severity levels included in scan"
},
"owaspCategories": {
"type": "array",
"items": {
"type": "string",
"pattern": "^A(0[1-9]|10):20(21|25)$"
},
"description": "OWASP categories tested"
},
"tools": {
"type": "array",
"items": { "type": "string" },
"description": "Security tools used"
},
"excludePatterns": {
"type": "array",
"items": { "type": "string" },
"description": "File patterns excluded from scan"
},
"rulesets": {
"type": "array",
"items": { "type": "string" },
"description": "Security rulesets applied"
}
}
},
"location": {
"type": "object",
"properties": {
"file": {
"type": "string",
"maxLength": 500,
"description": "File path relative to project root"
},
"line": {
"type": "integer",
"minimum": 1,
"description": "Line number"
},
"column": {
"type": "integer",
"minimum": 1,
"description": "Column number"
},
"endLine": {
"type": "integer",
"minimum": 1,
"description": "End line for multi-line findings"
},
"endColumn": {
"type": "integer",
"minimum": 1,
"description": "End column"
},
"url": {
"type": "string",
"format": "uri",
"description": "URL for web-based findings"
},
"endpoint": {
"type": "string",
"description": "API endpoint path"
},
"method": {
"type": "string",
"enum": ["GET", "POST", "PUT", "DELETE", "PATCH", "HEAD", "OPTIONS"],
"description": "HTTP method for API findings"
},
"parameter": {
"type": "string",
"description": "Vulnerable parameter name"
},
"component": {
"type": "string",
"description": "Affected component or module"
}
}
},
"artifact": {
"type": "object",
"required": ["type", "path"],
"properties": {
"type": {
"type": "string",
"enum": ["report", "sarif", "data", "log", "evidence"],
"description": "Artifact type"
},
"path": {
"type": "string",
"maxLength": 500,
"description": "Path to artifact"
},
"format": {
"type": "string",
"enum": ["json", "sarif", "html", "md", "txt", "xml", "csv"],
"description": "Artifact format"
},
"description": {
"type": "string",
"maxLength": 500,
"description": "Artifact description"
},
"sizeBytes": {
"type": "integer",
"minimum": 0,
"description": "File size in bytes"
},
"checksum": {
"type": "string",
"pattern": "^sha256:[a-f0-9]{64}$",
"description": "SHA-256 checksum"
}
}
},
"timelineEvent": {
"type": "object",
"required": ["timestamp", "event"],
"properties": {
"timestamp": {
"type": "string",
"format": "date-time",
"description": "Event timestamp"
},
"event": {
"type": "string",
"maxLength": 200,
"description": "Event description"
},
"type": {
"type": "string",
"enum": ["start", "checkpoint", "warning", "error", "complete"],
"description": "Event type"
},
"durationMs": {
"type": "integer",
"minimum": 0,
"description": "Duration since previous event"
},
"phase": {
"type": "string",
"enum": ["initialization", "sast", "dast", "dependency", "secret", "reporting"],
"description": "Scan phase"
}
}
},
"metadata": {
"type": "object",
"properties": {
"executionTimeMs": {
"type": "integer",
"minimum": 0,
"maximum": 3600000,
"description": "Execution time in milliseconds"
},
"toolsUsed": {
"type": "array",
"items": {
"type": "string",
"enum": ["semgrep", "npm-audit", "trivy", "owasp-zap", "bandit", "gosec", "eslint-security", "snyk", "gitleaks", "trufflehog", "bearer"]
},
"uniqueItems": true,
"description": "Security tools used"
},
"agentId": {
"type": "string",
"pattern": "^qe-[a-z][a-z0-9-]*$",
"description": "Agent ID (e.g., qe-security-scanner)"
},
"modelUsed": {
"type": "string",
"description": "LLM model used for analysis"
},
"inputHash": {
"type": "string",
"pattern": "^[a-f0-9]{64}$",
"description": "SHA-256 hash of input"
},
"targetUrl": {
"type": "string",
"format": "uri",
"description": "Target URL if applicable"
},
"targetPath": {
"type": "string",
"description": "Target path if applicable"
},
"environment": {
"type": "string",
"enum": ["development", "staging", "production", "ci"],
"description": "Execution environment"
},
"retryCount": {
"type": "integer",
"minimum": 0,
"maximum": 10,
"description": "Number of retries"
}
}
},
"validationResult": {
"type": "object",
"properties": {
"schemaValid": {
"type": "boolean",
"description": "Passes JSON schema validation"
},
"contentValid": {
"type": "boolean",
"description": "Passes content validation"
},
"confidence": {
"type": "number",
"minimum": 0,
"maximum": 1,
"description": "Confidence score"
},
"warnings": {
"type": "array",
"items": {
"type": "string",
"maxLength": 500
},
"maxItems": 20,
"description": "Validation warnings"
},
"errors": {
"type": "array",
"items": {
"type": "string",
"maxLength": 500
},
"maxItems": 20,
"description": "Validation errors"
},
"validatorVersion": {
"type": "string",
"pattern": "^\\d+\\.\\d+\\.\\d+$",
"description": "Validator version"
}
}
},
"learningData": {
"type": "object",
"properties": {
"patternsDetected": {
"type": "array",
"items": {
"type": "string",
"maxLength": 200
},
"maxItems": 20,
"description": "Security patterns detected (e.g., sql-injection-string-concat)"
},
"reward": {
"type": "number",
"minimum": 0,
"maximum": 1,
"description": "Reward signal for learning (0.0-1.0)"
},
"feedbackLoop": {
"type": "object",
"properties": {
"previousRunId": {
"type": "string",
"format": "uuid",
"description": "Previous run ID for comparison"
},
"improvement": {
"type": "number",
"minimum": -1,
"maximum": 1,
"description": "Improvement over previous run"
}
}
},
"newVulnerabilityPatterns": {
"type": "array",
"items": {
"type": "object",
"properties": {
"pattern": { "type": "string" },
"cwe": { "type": "string" },
"confidence": { "type": "number" }
}
},
"description": "New vulnerability patterns learned"
}
}
}
}
}
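For reference, a minimal sketch of an output document that satisfies this schema's required fields, written as a JavaScript object literal; all names come from the schema above and all values are illustrative only.
```js
// Hypothetical minimal instance; satisfies the required top-level fields
// (skillName, version, timestamp, status, trustTier, output) and the
// required output fields (summary, findings, owaspCategories).
const exampleOutput = {
  skillName: "security-testing",
  version: "1.0.0",
  timestamp: "2026-02-02T12:00:00Z",
  status: "success",
  trustTier: 3,
  output: {
    summary: "Scan found one critical SQL injection finding in the Express user endpoint; switch to parameterized queries.",
    findings: [{
      id: "SEC-001",
      title: "SQL injection via string concatenation",
      severity: "critical",
      owasp: "A03:2021",
      cwe: "CWE-89"
    }],
    owaspCategories: {
      "A03:2021": { tested: true, score: 20, findingCount: 1, criticalCount: 1, status: "fail" }
    }
  }
};
```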
@@ -0,0 +1,45 @@
{
"skillName": "security-testing",
"skillVersion": "1.0.0",
"requiredTools": [
"jq"
],
"optionalTools": [
"npm",
"semgrep",
"trivy",
"ajv",
"jsonschema",
"python3"
],
"schemaPath": "schemas/output.json",
"requiredFields": [
"skillName",
"status",
"output",
"output.summary",
"output.findings",
"output.owaspCategories"
],
"requiredNonEmptyFields": [
"output.summary"
],
"mustContainTerms": [
"OWASP",
"security",
"vulnerability"
],
"mustNotContainTerms": [
"TODO",
"placeholder",
"FIXME"
],
"enumValidations": {
".status": [
"success",
"partial",
"failed",
"skipped"
]
}
}
-21
View File
@@ -1,21 +0,0 @@
# Database Configuration
DATABASE_HOST=localhost
DATABASE_PORT=5432
DATABASE_NAME=gps_denied
DATABASE_USER=postgres
DATABASE_PASSWORD=
# API Configuration
API_HOST=0.0.0.0
API_PORT=8000
API_DEBUG=false
# Model Paths
SUPERPOINT_MODEL_PATH=models/superpoint.engine
LIGHTGLUE_MODEL_PATH=models/lightglue.engine
DINOV2_MODEL_PATH=models/dinov2.engine
LITESAM_MODEL_PATH=models/litesam.engine
# Satellite Data
SATELLITE_CACHE_DIR=satellite_cache
GOOGLE_MAPS_API_KEY=
-47
View File
@@ -1,47 +0,0 @@
.DS_Store
.idea
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
.env
.venv
env/
venv/
ENV/
*.swp
*.swo
*~
*.log
*.sqlite
*.db
satellite_cache/
image_storage/
models/*.engine
models/*.onnx
.coverage
htmlcov/
.pytest_cache/
.mypy_cache/
@@ -1,36 +0,0 @@
# Research Acceptance Criteria
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Input data: `@_docs/00_problem/input_data`. For reference only, but a representative example of the real data
- Restrictions: `@_docs/00_problem/restrictions.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Security approach: `@_docs/00_problem/security_approach.md`
## Role
You are a professional software architect
## Task
- Thoroughly research the problem on the internet and assess how realistic these acceptance criteria are.
- Check how critical each criterion is.
- Find out more acceptance criteria for this specific domain.
- Research the impact of each value in the acceptance criteria on the whole system quality.
- Verify your findings with authoritative sources (official docs, papers, benchmarks).
- Consider cost/budget implications of each criterion.
- Consider timeline implications - how long would it take to meet each criterion.
## Output format
Assess acceptable ranges for each value in each acceptance criterion in state-of-the-art solutions, and propose corrections in the following table:
- Acceptance criterion name
- Our values
- Your researched criterion values
- Cost/Timeline impact
- Status: whether your research adds the criterion to our system, modifies it, or removes it
### Assess the restrictions we've put on the system. Are they realistic? Should we make them stricter, or instead add more requirements for using our system? Propose corrections in the following table:
- Restriction name
- Our values
- Your researched restriction values
- Cost/Timeline impact
- Status: whether your research adds the restriction to our system, modifies it, or removes it
-37
View File
@@ -1,37 +0,0 @@
# Research Problem
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Input data: `@_docs/00_problem/input_data`. For reference only, but a representative example of the real data
- Restrictions: `@_docs/00_problem/restrictions.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Security approach: `@_docs/00_problem/security_approach.md`
## Role
You are a professional researcher and software architect
## Task
- Research existing/competitor solutions for similar problems.
- Thoroughly research the problem on the internet, explore all possible ways to solve it, and split it into components.
- Then research all possible ways to solve each component and identify the most efficient state-of-the-art solutions.
- Verify that suggested tools/libraries actually exist and work as described.
- Include security considerations in each component analysis.
- Provide rough cost estimates for proposed solutions.
Be concise: the fewer words the better, but do not omit any important details.
## Output format
Produce the resulting solution draft in the following format:
- Short Product solution description. Brief component interaction diagram.
- Existing/competitor solutions analysis (if any).
- Architecture solution that meets restrictions and acceptance criteria.
For each component, analyze the best possible solutions, and form a comparison table.
Each possible component solution is a row with the following columns:
- Tools (library, platform) to solve component tasks
- Advantages of this solution
- Limitations of this solution
- Requirements for this solution
- Security considerations
- Estimated cost
- How well it fits the problem component being solved, and the overall solution
- Testing strategy. Research how to cover the system with tests so that all acceptance criteria are met. Form a list of functional integration tests and non-functional tests.
@@ -1,40 +0,0 @@
# Solution Draft Assessment
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Input data: `@_docs/00_problem/input_data`. For reference only, but a representative example of the real data
- Restrictions: `@_docs/00_problem/restrictions.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Security approach: `@_docs/00_problem/security_approach.md`
- Existing solution draft: `@_docs/01_solution/solution_draft.md`
## Role
You are a professional software architect
## Task
- Thoroughly research the problem on the internet and identify all potential weak points and problems.
- Identify security weak points and vulnerabilities.
- Identify performance bottlenecks.
- Address these problems and find out ways to solve them.
- Based on your findings, form a new solution draft in the same format.
## Output format
- List all new findings (what was updated, replaced, or removed from the previous solution) in the following table:
- Old component solution
- Weak point (functional/security/performance)
- Solution (component's new solution)
- Form the new solution draft. In the updated report, do not add "new" marks or compare to the previous draft; write the solution as if from scratch. Use the following format:
- Short Product solution description. Brief component interaction diagram.
- Architecture solution that meets restrictions and acceptance criteria.
For each component, analyze the best possible solutions, and form a comparison table.
Each possible component solution is a row with the following columns:
- Tools (library, platform) to solve component tasks
- Advantages of this solution
- Limitations of this solution
- Requirements for this solution
- Security considerations
- Performance characteristics
- How well it fits the problem component being solved, and the overall solution
- Testing strategy. Research how to cover the system with tests so that all acceptance criteria are met. Form a list of functional integration tests and non-functional tests.
-137
View File
@@ -1,137 +0,0 @@
# Tech Stack Selection
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Restrictions: `@_docs/00_problem/restrictions.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Security approach: `@_docs/00_problem/security_approach.md`
- Solution draft: `@_docs/01_solution/solution.md`
## Role
You are a software architect evaluating technology choices
## Task
- Evaluate technology options against requirements
- Consider team expertise and learning curve
- Assess long-term maintainability
- Document selection rationale
## Output
### Requirements Analysis
#### Functional Requirements
| Requirement | Tech Implications |
|-------------|-------------------|
| [From acceptance criteria] | |
#### Non-Functional Requirements
| Requirement | Tech Implications |
|-------------|-------------------|
| Performance | |
| Scalability | |
| Security | |
| Maintainability | |
#### Constraints
| Constraint | Impact on Tech Choice |
|------------|----------------------|
| [From restrictions] | |
### Technology Evaluation
#### Programming Language
| Option | Pros | Cons | Score (1-5) |
|--------|------|------|-------------|
| | | | |
**Selection**: [Language]
**Rationale**: [Why this choice]
#### Framework
| Option | Pros | Cons | Score (1-5) |
|--------|------|------|-------------|
| | | | |
**Selection**: [Framework]
**Rationale**: [Why this choice]
#### Database
| Option | Pros | Cons | Score (1-5) |
|--------|------|------|-------------|
| | | | |
**Selection**: [Database]
**Rationale**: [Why this choice]
#### Infrastructure/Hosting
| Option | Pros | Cons | Score (1-5) |
|--------|------|------|-------------|
| | | | |
**Selection**: [Platform]
**Rationale**: [Why this choice]
#### Key Libraries/Dependencies
| Category | Library | Version | Purpose | Alternatives Considered |
|----------|---------|---------|---------|------------------------|
| | | | | |
### Evaluation Criteria
Rate each technology option against these criteria:
1. **Fitness for purpose**: Does it meet functional requirements?
2. **Performance**: Can it meet performance requirements?
3. **Security**: Does it have good security track record?
4. **Maturity**: Is it stable and well-maintained?
5. **Community**: Active community and documentation?
6. **Team expertise**: Does team have experience?
7. **Cost**: Licensing, hosting, operational costs?
8. **Scalability**: Can it grow with the project?
### Technology Stack Summary
```
Language: [Language] [Version]
Framework: [Framework] [Version]
Database: [Database] [Version]
Cache: [Cache solution]
Message Queue: [If applicable]
CI/CD: [Platform]
Hosting: [Platform]
Monitoring: [Tools]
```
### Risk Assessment
| Technology | Risk | Mitigation |
|------------|------|------------|
| | | |
### Learning Requirements
| Technology | Team Familiarity | Training Needed |
|------------|-----------------|-----------------|
| | High/Med/Low | Yes/No |
### Decision Record
**Decision**: [Summary of tech stack]
**Date**: [YYYY-MM-DD]
**Participants**: [Who was involved]
**Status**: Approved / Pending Review
Store output to `_docs/01_solution/tech_stack.md`
## Notes
- Avoid over-engineering: choose the simplest solution that meets the requirements
- Consider total cost of ownership, not just initial development
- Prefer proven technologies over cutting-edge unless required
- Document trade-offs for future reference
- Ask questions about team expertise and constraints
-37
View File
@@ -1,37 +0,0 @@
# Security Research
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Restrictions: `@_docs/00_problem/restrictions.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Security approach: `@_docs/00_problem/security_approach.md`
- Solution: `@_docs/01_solution/solution.md`
## Role
You are a security architect
## Task
- Review solution architecture against security requirements from `security_approach.md`
- Identify attack vectors and threat model for the system
- Define security requirements per component
- Propose security controls and mitigations
## Output format
### Threat Model
- Asset inventory (what needs protection)
- Threat actors (who might attack)
- Attack vectors (how they might attack)
### Security Requirements per Component
For each component:
- Component name
- Security requirements
- Proposed controls
- Risk level (High/Medium/Low)
### Security Controls Summary
- Authentication/Authorization approach
- Data protection (encryption, integrity)
- Secure communication
- Logging and monitoring requirements
-82
View File
@@ -1,82 +0,0 @@
# Decompose
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Input data: `@_docs/00_problem/input_data`. For reference only, but a representative example of the real data
- Restrictions: `@_docs/00_problem/restrictions.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Security approach: `@_docs/00_problem/security_approach.md`
- Full Solution Description: `@_docs/01_solution/solution.md`
## Role
You are a professional software architect
## Task
- Read the problem description and solution draft; analyze them thoroughly
- Decompose the complex system solution into components with proper communication between them, so that the system solves the problem.
- Think about the components and their interactions
- For each component, investigate and analyze its requirements in great detail. If additional components are needed (e.g., data preparation), create them
- The solution draft could be incomplete, so add all components necessary to meet the acceptance criteria and restrictions
- Once you have a full understanding of exactly how the components will interact with each other, create them
## Output Format
### Components Decomposition
Store a description of each component to the file `_docs/02_components/[##]_[component_name]/[##]._component_[component_name].md` with the following structure:
1. High-level overview
- **Purpose:** A concise summary of what this component does and its role in the larger system.
- **Architectural Pattern:** Identify the design patterns used (e.g., Singleton, Observer, Factory).
2. API Reference. Create a table for each function or method with the following columns:
- Name
- Description
- Input
- Output
- Description of input and output data where it is not obvious
- Possible test cases for the method
3. Implementation Details
- **Algorithmic Complexity:** Analyze Time (Big O) and Space complexity for critical methods.
- **State Management:** Explain how this component handles state (local vs. global).
- **Dependencies:** List key external libraries and their purpose here.
- **Error Handling:** Define error handling strategy for this component.
4. Tests
- Integration tests for the component if needed.
- Non-functional tests for the component if needed.
5. Extensions and Helpers
- Store extensions and helpers that support functionality across multiple components in a separate folder: `_docs/02_components/helpers`.
6. Caveats & Edge Cases
- Known limitations
- Potential race conditions
- Potential performance bottlenecks.
### Dependency Graph
- Create component dependency graph showing implementation order
- Identify which components can be implemented in parallel
### API Contracts
- Define interfaces/contracts between components
- Specify data formats exchanged
### Logging Strategy
- Define global logging approach for the system
- Log levels, format, storage
For the whole system, create these diagrams and store them in `_docs/02_components`:
### Logic & Architecture
- Generate draw.io component diagrams showing the relations between components.
- Make sure lines do not intersect each other, or at least minimize intersections.
- Group semantically coherent components together
- Leave enough space for nice alignment of the component boxes
- Put external users of the system closer to the component blocks they use
- Generate a Mermaid flowchart diagram for each of the main control flows
- Identify the multiple flows the system can operate in, and generate a flowchart diagram per flow
- Flows can relate to each other
## Notes
- Strictly follow the Single Responsibility Principle when creating components.
- Follow the "dumb code, smart data" principle. Do not overcomplicate.
- Components should be semantically coherent. Do not spread similar functionality across multiple components.
- Do not put any code yet, only names, input and output.
- Ask as many questions as possible to clarify all uncertainties.
@@ -1,30 +0,0 @@
# Component Assessment
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Input data: `@_docs/00_problem/input_data`. For reference only, but a representative example of the real data
- Restrictions: `@_docs/00_problem/restrictions.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Security approach: `@_docs/00_problem/security_approach.md`
- Full Solution Description: `@_docs/01_solution/solution.md`
## Role
You are a professional software architect
## Task
- Carefully read all the documents above
- Check how coherent all the components in `@_docs/02_components` are
- Follow the interaction logic and flows; try to find potential problems there
- Look for missing interactions or circular dependencies
- Check that all components follow the Single Responsibility Principle
- Check that everything follows the "dumb code, smart data" principle, so the resulting code isn't overcomplicated
- Check for security vulnerabilities in component design
- Check for performance bottlenecks
- Verify API contracts are consistent across components
## Output
Form a list of problems with fixes in the following format:
- Component
- Problem type (Architectural/Security/Performance/API)
- Problem, reason
- Fix or potential fixes
-36
View File
@@ -1,36 +0,0 @@
# Security Check
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Restrictions: `@_docs/00_problem/restrictions.md`
- Security approach: `@_docs/00_problem/security_approach.md`
- Full Solution Description: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
## Role
You are a security architect
## Task
- Review each component against security requirements
- Identify security gaps in component design
- Verify security controls are properly distributed across components
- Check for common vulnerabilities (injection, auth bypass, data leaks)
## Output
### Security Assessment per Component
For each component:
- Component name
- Security gaps found
- Required security controls
- Priority (High/Medium/Low)
### Cross-Component Security
- Authentication flow assessment
- Authorization gaps
- Data flow security (encryption in transit/at rest)
- Logging for security events
### Recommendations
- Required changes before implementation
- Security helpers/components to add
-67
View File
@@ -1,67 +0,0 @@
# Generate Jira Epics
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Input data: `@_docs/00_problem/input_data`. For reference only, but a representative example of the real data
- Restrictions: `@_docs/00_problem/restrictions.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Full Solution Description: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
## Role
You are a world-class product manager
## Task
- Generate Jira Epics from the Components using Jira MCP
- Order epics by dependency (which must be done first)
- Include rough effort estimation per epic
- Ensure each epic has a clear goal and acceptance criteria; verify them against the project acceptance criteria
- Generate a draw.io component diagram, based on the previous diagram, showing the relations between components and the Jira Epic number corresponding to each component.
## Output
Epic format:
- Epic Name [Component] [Outcome]
- Example: Data Ingestion Near-real-time pipeline
- Epic Summary (1-2 sentences)
- What we are building + why it matters
- Problem / Context
- Current state, pain points, constraints, business opportunities, links to architecture decision records or diagrams
- Scope. Detailed description
- In Scope. Bullet list of capabilities (not tasks)
- Out-of-scope. Explicit exclusions to prevent scope creep
- Assumptions
- System design specifics, input material quality, data structures, network availability etc
- Dependencies
- Other epics that must be completed first
- Other components, services, hardware, environments, certificates, data sources etc.
- Effort Estimation
- T-shirt size (S/M/L/XL) or story points range
- Users / Consumers
- Internal, external, systems; a short list of the key use cases.
- Requirements
- Functional - API expectations, events, data handling, idempotency, retry behavior etc
- Non-functional - Availability, latency, throughput, scalability, processing limits, data retention etc
- Security/Compliance - Authentication, encryption, secrets, logging, SOC2/ISO if applicable
- Design & Architecture (links)
- High-level diagram link, Data flow, sequence diagrams, schemas etc
- Definition of Done (Epic-level)
- Feature list per epic scope
- Automated tests (unit/integration/e2e) + minimum coverage threshold met
- Runbooks if applicable
- Documentation updated
- Acceptance Criteria (measurable)
- Risks & Mitigations
- Top 5 risks (technical + delivery) with mitigation owners or systems involved
- Label epic
- component:<name>
- env:prod|stg
- type:platform|data|integration
- Jira Issue Breakdown
- Create consistent child issues under the epic
- Spikes
- Tasks
- Technical enablers
## Notes
- Be as concise as possible when formulating epics. The fewer words with the same meaning, the better the epic.
-57
View File
@@ -1,57 +0,0 @@
# Data Model Design
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Restrictions: `@_docs/00_problem/restrictions.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Full Solution Description: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
## Role
You are a professional database architect
## Task
- Analyze solution and components to identify all data entities
- Design database schema that supports all component requirements
- Define relationships, constraints, and indexes
- Consider data access patterns for query optimization
- Plan for data migration if applicable
## Output
### Entity Relationship Diagram
- Create ERD showing all entities and relationships
- Use Mermaid or draw.io format
### Schema Definition
For each entity:
- Table name
- Columns with types, constraints, defaults
- Primary keys
- Foreign keys and relationships
- Indexes (clustered, non-clustered)
- Partitioning strategy (if needed)
### Data Access Patterns
- List common queries per component
- Identify hot paths requiring optimization
- Recommend caching strategy
### Migration Strategy
- Initial schema creation scripts
- Seed data requirements
- Rollback procedures
### Storage Estimates
- Estimated row counts per table
- Storage requirements
- Growth projections
Store output to `_docs/02_components/data_model.md`
## Notes
- Follow database normalization principles (3NF minimum)
- Consider read vs write optimization based on access patterns
- Plan for horizontal scaling if required
- Ask questions to clarify data requirements
-64
View File
@@ -1,64 +0,0 @@
# API Contracts Design
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Restrictions: `@_docs/00_problem/restrictions.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Full Solution Description: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
- Data Model: `@_docs/02_components/data_model.md`
## Role
You are a professional API architect
## Task
- Define API contracts between all components
- Specify external API endpoints (if applicable)
- Define data transfer objects (DTOs)
- Establish error response standards
- Plan API versioning strategy
## Output
### Internal Component Interfaces
For each component boundary:
- Interface name
- Methods with signatures
- Input/Output DTOs
- Error types
- Async/Sync designation
### External API Specification
Generate OpenAPI/Swagger spec including:
- Endpoints with HTTP methods
- Request/Response schemas
- Authentication requirements
- Rate limiting rules
- Example requests/responses
### DTO Definitions
For each data transfer object:
- Name and purpose
- Fields with types
- Validation rules
- Serialization format (JSON, Protobuf, etc.)
### Error Contract
- Standard error response format
- Error codes and messages
- HTTP status code mapping
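A minimal sketch of how a DTO and the standard error contract could look, assuming Pydantic; all field names, error codes, and status mappings here are illustrative, not prescribed by this spec.
```python
# Sketch of a DTO with validation rules plus a shared error contract;
# names and codes are placeholders, assuming Pydantic v2.
from enum import Enum
from pydantic import BaseModel, Field

class CreateOrderRequest(BaseModel):
    """Input DTO, serialized as JSON."""
    user_id: int = Field(gt=0)
    items: list[str] = Field(min_length=1)

class ErrorCode(str, Enum):
    VALIDATION_FAILED = "VALIDATION_FAILED"
    NOT_FOUND = "NOT_FOUND"
    INTERNAL = "INTERNAL"

class ErrorResponse(BaseModel):
    """Standard error response format shared by all endpoints."""
    code: ErrorCode
    message: str
    details: dict[str, str] = Field(default_factory=dict)

# HTTP status code mapping for the error contract.
HTTP_STATUS = {
    ErrorCode.VALIDATION_FAILED: 400,
    ErrorCode.NOT_FOUND: 404,
    ErrorCode.INTERNAL: 500,
}
```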
### Versioning Strategy
- API versioning approach (URL, header, query param)
- Deprecation policy
- Breaking vs non-breaking change definitions
Store output to `_docs/02_components/api_contracts.md`
Store OpenAPI spec to `_docs/02_components/openapi.yaml` (if applicable)
## Notes
- Follow RESTful conventions for external APIs
- Keep internal interfaces minimal and focused
- Design for backward compatibility
- Ask questions to clarify integration requirements
@@ -1,59 +0,0 @@
# Generate Tests
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Input data: `@_docs/00_problem/input_data`. For reference only, but representative of the real data
- Restrictions: `@_docs/00_problem/restrictions.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Security approach: `@_docs/00_problem/security_approach.md`
- Full Solution Description: `@_docs/01_solution/solution.md`
## Role
You are a professional Quality Assurance Engineer
## Task
- Compose tests according to the test strategy
- Cover all the criteria with test specs
- Minimum coverage target: 75%
## Output
Store all test specs in the files `_docs/02_tests/[##]_[test_name]_spec.md`
Types and structures of tests:
- Integration tests
- Summary
- Detailed description
- Input data for this specific test scenario
- Expected result
- Maximum expected time to get result
- Performance tests
- Summary
- Load/stress scenario description
- Expected throughput/latency
- Resource limits
- Security tests
- Summary
- Attack vector being tested
- Expected behavior
- Pass/Fail criteria
- Acceptance tests
- Summary
- Detailed description
- Preconditions for tests
- Steps:
- Step1 - Expected result1
- Step2 - Expected result2
...
- StepN - Expected resultN
- Test Data Management
- Required test data
- Setup/Teardown procedures
- Data isolation strategy
## Notes
- Do not put any code yet
- Ask as many questions as needed.
@@ -1,111 +0,0 @@
# Risk Assessment
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Restrictions: `@_docs/00_problem/restrictions.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Full Solution Description: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
- Estimation: `@_docs/02_components/estimation.md`
## Role
You are a technical risk analyst
## Task
- Identify technical and project risks
- Assess probability and impact
- Define mitigation strategies
- Create risk monitoring plan
## Output
### Risk Register
| ID | Risk | Category | Probability | Impact | Score | Mitigation | Owner |
|----|------|----------|-------------|--------|-------|------------|-------|
| R1 | | Tech/Schedule/Resource/External | High/Med/Low | High/Med/Low | H/M/L | | |
### Risk Scoring Matrix
| | Low Impact | Medium Impact | High Impact |
|--|------------|---------------|-------------|
| High Probability | Medium | High | Critical |
| Medium Probability | Low | Medium | High |
| Low Probability | Low | Low | Medium |
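To keep scores consistent across the register, the matrix can be encoded mechanically; a tiny illustrative sketch in Python:
```python
# Encodes the scoring matrix above so (probability, impact) always
# maps to the same score; purely illustrative.
MATRIX = {
    ("High", "Low"): "Medium",   ("High", "Medium"): "High",     ("High", "High"): "Critical",
    ("Medium", "Low"): "Low",    ("Medium", "Medium"): "Medium", ("Medium", "High"): "High",
    ("Low", "Low"): "Low",       ("Low", "Medium"): "Low",       ("Low", "High"): "Medium",
}

def risk_score(probability: str, impact: str) -> str:
    """Return the matrix score for a (probability, impact) pair."""
    return MATRIX[(probability, impact)]

assert risk_score("High", "Medium") == "High"
```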
### Risk Categories
#### Technical Risks
- Technology choices may not meet requirements
- Integration complexity underestimated
- Performance targets unachievable
- Security vulnerabilities
#### Schedule Risks
- Scope creep
- Dependencies delayed
- Resource unavailability
- Underestimated complexity
#### Resource Risks
- Key person dependency
- Skill gaps
- Team availability
#### External Risks
- Third-party API changes
- Vendor reliability
- Regulatory changes
### Top Risks (Ranked)
#### 1. [Highest Risk]
- **Description**:
- **Probability**: High/Medium/Low
- **Impact**: High/Medium/Low
- **Mitigation Strategy**:
- **Contingency Plan**:
- **Early Warning Signs**:
- **Owner**:
#### 2. [Second Highest Risk]
...
### Risk Mitigation Plan
| Risk ID | Mitigation Action | Timeline | Cost | Responsible |
|---------|-------------------|----------|------|-------------|
| R1 | | | | |
### Risk Monitoring
#### Review Schedule
- Daily standup: Discuss blockers (potential risks materializing)
- Weekly: Review risk register, update probabilities
- Sprint end: Comprehensive risk review
#### Early Warning Indicators
| Risk | Indicator | Threshold | Action |
|------|-----------|-----------|--------|
| | | | |
### Contingency Budget
- Time buffer: 20% of estimated duration
- Scope flexibility: [List features that can be descoped]
- Resource backup: [Backup resources if available]
### Acceptance Criteria for Risks
Define which risks are acceptable:
- Low risks: Accepted, monitored
- Medium risks: Mitigation required
- High risks: Mitigation + contingency required
- Critical risks: Must be resolved before proceeding
Store output to `_docs/02_components/risk_assessment.md`
## Notes
- Update risk register throughout project
- Escalate critical risks immediately
- Consider both likelihood and impact
- Ask questions to uncover hidden risks
@@ -1,40 +0,0 @@
# Generate Features for the provided component spec
## Input parameters
- component_spec.md. Required. Do NOT proceed if it is NOT provided!
- parent Jira Epic in the format AZ-###. Required. Do NOT proceed if it is NOT provided!
## Prerequisites
- Jira Epics must be created first (step 2.20)
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Input data: `@_docs/00_problem/input_data`. For reference only, but representative of the real data
- Restrictions: `@_docs/00_problem/restrictions.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Full Solution Description: `@_docs/01_solution/solution.md`
## Role
You are a professional software architect
## Task
- Read component_spec.md very carefully
- Decompose component_spec.md into features. If the component is simple or atomic, create only 1 feature.
- Split into many features only if necessary and if it makes implementation easier
- Do not create features of other components, create *only* features of this exact component
- Each feature should be atomic and may contain 0 APIs or a list of semantically connected APIs
- After splitting, assess the result yourself
- Add complexity points estimation (1, 2, 3, 5, 8) per feature
- Note feature dependencies (some features may be independent)
- Use `@gen_feature_spec.md` as complete guidance on how to generate the feature spec
- Generate a Jira task for each feature per the spec `@gen_jira_task_and_branch.md`, using the Jira MCP.
## Output
- The file name of the feature specs should follow this format: `[component's number ##].[feature's number ##]_feature_[feature_name].md`.
- The structure of the feature spec should follow this spec `@gen_feature_spec.md`
- The structure of the Jira task should follow this spec: `@gen_jira_task_and_branch.md`
- Include dependency notes (which features can be done in parallel)
## Notes
- Do NOT generate any code yet, only brief explanations of what should be done.
- Ask as many questions as needed.
@@ -1,73 +0,0 @@
# Create Initial Structure
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`.
- Input data: `@_docs/00_problem/input_data`. For reference only, but representative of the real data.
- Restrictions: `@_docs/00_problem/restrictions.md`.
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`.
- Security approach: `@_docs/00_problem/security_approach.md`.
- Full Solution Description: `@_docs/01_solution/solution.md`
- Components with Features specifications: `@_docs/02_components`
## Role
You are a professional software architect
## Task
- Read carefully all the component specs and features in the components folder: `@_docs/02_components`
- Research on the internet the best ways and tools to implement the components and their features
- Make a plan for creating the initial structure:
- DTOs
- component's interfaces
- empty implementations
- helpers - empty implementations or interfaces
- Add .gitignore appropriate for the project's language/framework
- Add .env.example with required environment variables
- Configure CI/CD pipeline with full stages:
- Build stage
- Lint/Static analysis stage
- Unit tests stage
- Integration tests stage
- Security scan stage (SAST/dependency check)
- Deploy to staging stage (triggered on merge to stage branch)
- Define environment strategy based on `@_docs/00_templates/environment_strategy.md`:
- Development environment configuration
- Staging environment configuration
- Production environment configuration (if applicable)
- Add database migration setup if applicable
- Add README.md describing the project based on `@_docs/01_solution/solution.md`
- Create a separate folder for the integration tests (not a separate repo)
- Configure branch protection rules recommendations
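As a rough illustration of what "DTOs, interfaces, empty implementations" can look like at scaffold time, here is a minimal Python sketch; `ReportRequest` and `ReportGenerator` are hypothetical names, not requirements:
```python
# Sketch of scaffold-time artifacts: a shared DTO, a component
# interface, and an empty implementation (hypothetical names).
from dataclasses import dataclass
from typing import Protocol

@dataclass(frozen=True)
class ReportRequest:
    """Shared DTO used across component boundaries."""
    report_id: str
    fmt: str = "pdf"

class ReportGenerator(Protocol):
    """Component interface; implementations arrive in later waves."""
    def generate(self, request: ReportRequest) -> bytes: ...

class ReportGeneratorStub:
    """Empty implementation so dependents compile and tests can mock."""
    def generate(self, request: ReportRequest) -> bytes:
        raise NotImplementedError("implemented in a later feature wave")
```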
## Example
The structure should look roughly like this:
- .gitignore
- .env.example
- .github/workflows/ (or .gitlab-ci.yml)
- api
- components
- component1_folder
- component2_folder
- ...
- db
- migrations/
- helpers
- models
- tests
- unit_test1_project1_folder
- unit_test2_project2_folder
...
- integration_tests_folder
- test data
- test01_file
- test02_file
...
Some semantically coherent components (or one big component) may also live in their own project or project folder
There could be a common layer or project consisting of all the interfaces (for C# or Java), or each interface could live in its component's folder (Python), depending on the language's common conventions
## Notes
- Follow SOLID principles
- Follow the KISS principle: dumb code, smart data.
- Follow DRY principles, but do not overcomplicate things; occasional repetition is fine if it keeps the code simpler
- Follow conventions and rules of the project's programming language
- Ask as many questions as needed; it should be completely clear how to implement each feature
@@ -1,35 +0,0 @@
# Implement Component and Features by Spec
## Input parameter
component_folder
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`.
- Input data: `@_docs/00_problem/input_data`. For reference only, but representative of the real data.
- Restrictions: `@_docs/00_problem/restrictions.md`.
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`.
- Security approach: `@_docs/00_problem/security_approach.md`.
- Full Solution Description: `@_docs/01_solution/solution.md`
## Role
You are a professional software architect and developer
## Task
- Read carefully initial data and component spec in the component_folder: `@_docs/02_components/[##]_[component_name]/[##]._component_[component_name]`
- Read carefully all the component features in the component_folder: `@_docs/02_components/[##]_[component_name]/[##].[##]_feature_[feature_name]`
- Research on the internet the best ways and tools to implement the component and its features
- During the investigation, the solutions found may require architectural reorganization of the features. That is fine: propose it, and if the user agrees, include the reorganization in the feature build plan. Interfaces may likewise be changed, removed, or added.
- Analyze the existing codebase and get the full context for the component's implementation
- Make sure each feature connects and communicates properly with other features and the existing code
- If component has dependency on another one, create temporary mock for the dependency
- For each feature:
- Implement the feature
- Implement error handling per defined strategy
- Implement logging per defined strategy
- Implement all unit tests from the test case descriptions; add test-result checks to the plan steps
- Implement all integration tests for the feature; add test-result checks to the plan steps. Analyze existing tests and decide whether to create new ones or extend existing ones
- Add descriptions of all the component's integration tests to the implementation plan; add test-result checks to the plan steps
- After component is complete, replace mocks with real implementations (mock cleanup)
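A minimal sketch of such a temporary dependency mock, assuming Python; `StorageClient` is a hypothetical dependency used only for illustration:
```python
# The component under construction depends on a StorageClient that
# is not built yet; this in-memory stand-in is removed during mock
# cleanup once the real implementation lands.
from typing import Protocol

class StorageClient(Protocol):
    def put(self, key: str, data: bytes) -> None: ...
    def get(self, key: str) -> bytes: ...

class InMemoryStorageMock:
    """Temporary stand-in satisfying the StorageClient interface."""
    def __init__(self) -> None:
        self._store: dict[str, bytes] = {}
    def put(self, key: str, data: bytes) -> None:
        self._store[key] = data
    def get(self, key: str) -> bytes:
        return self._store[key]
```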
## Notes
- Ask as many questions as needed; it should be completely clear how to implement each feature
@@ -1,39 +0,0 @@
# Implement Tests by Spec
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`.
- Input data: `@_docs/00_problem/input_data`. For reference only, but representative of the real data.
- Restrictions: `@_docs/00_problem/restrictions.md`.
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`.
- Full Solution Description: `@_docs/01_solution/solution.md`
- Tests specifications: `@_docs/02_tests`
## Role
You are a professional software architect and developer
## Task
- Read carefully all the initial data and understand whole system goals
- Check that a separate test folder exists (it should have been generated by @3.05_implement_initial_structure.md)
- Set up Docker environment for testing:
- Create docker-compose.yml for test environment
- Configure test database container
- Configure application container
- For each test description:
- Prepare all the data necessary for testing, or check that it already exists
- Check existing integration tests; if a similar test already exists, update it
- Implement the test by specification
- Implement test data management:
- Setup fixtures/factories
- Teardown/cleanup procedures
- Run system and integration tests in docker containers
- Fix all problems from failing tests until a successful result is reached. If one or more tests fail due to missing data from the user, an API, or another system, request it from the developer.
- Repeat the test cycle, iteratively fixing discovered bugs, until no tests fail. Ask the user for additional information if something new comes up
- Ensure tests run in CI pipeline
- Compose the final test results in a CSV with the following format:
- Test filename
- Execution time
- Result
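Composing that CSV is mechanical; a minimal Python sketch (the rows shown are placeholders):
```python
# Writes the final test report with the columns listed above;
# the result rows here are placeholder data.
import csv

results = [
    ("tests/integration/test_ingest.py", "12.4s", "PASS"),
    ("tests/integration/test_export.py", "3.1s", "FAIL"),
]

with open("test_report.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Test filename", "Execution time", "Result"])
    writer.writerows(results)
```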
## Notes
- Ask as many questions as needed; it should be completely clear how to implement each feature
@@ -1,29 +0,0 @@
# User Input for Refactoring
## Task
Collect and document goals for the refactoring project.
## User should provide:
Create in `_docs/00_problem`:
- `problem_description.md`:
- What the system currently does
- What changes/improvements are needed
- Pain points in current implementation
- `acceptance_criteria.md`: Success criteria for the refactoring
- `security_approach.md`: Security requirements (if applicable)
## Example
- `problem_description.md`
Current system: E-commerce platform with monolithic architecture.
Current issues: Slow deployments, difficult scaling, tightly coupled modules.
Goals: Break into microservices, improve test coverage, reduce deployment time.
- `acceptance_criteria.md`
- All existing functionality preserved
- Test coverage increased from 40% to 75%
- Deployment time reduced by 50%
- No circular dependencies between modules
## Output
Store user input in `_docs/00_problem/` folder for reference by subsequent steps.
@@ -1,92 +0,0 @@
# Capture Baseline Metrics
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Current codebase
## Role
You are a software engineer preparing for refactoring
## Task
- Capture current system metrics as baseline
- Document current behavior
- Establish benchmarks to compare against after refactoring
- Identify critical paths to monitor
## Output
### Code Quality Metrics
#### Coverage
```
Current test coverage: XX%
- Unit test coverage: XX%
- Integration test coverage: XX%
- Critical paths coverage: XX%
```
#### Code Complexity
- Cyclomatic complexity (average):
- Most complex functions (top 5):
- Lines of code:
- Technical debt ratio:
#### Code Smells
- Total code smells:
- Critical issues:
- Major issues:
### Performance Metrics
#### Response Times
| Endpoint/Operation | P50 | P95 | P99 |
|-------------------|-----|-----|-----|
| [endpoint1] | Xms | Xms | Xms |
| [operation1] | Xms | Xms | Xms |
#### Resource Usage
- Average CPU usage:
- Average memory usage:
- Database query count per operation:
#### Throughput
- Requests per second:
- Concurrent users supported:
### Functionality Inventory
List all current features/endpoints:
| Feature | Status | Test Coverage | Notes |
|---------|--------|---------------|-------|
| | | | |
### Dependency Analysis
- Total dependencies:
- Outdated dependencies:
- Security vulnerabilities in dependencies:
### Build Metrics
- Build time:
- Test execution time:
- Deployment time:
Store output to `_docs/04_refactoring/baseline_metrics.md`
## Measurement Commands
Use project-appropriate tools for your tech stack:
| Metric | Python | C#/.NET | Java | Go | JavaScript/TypeScript |
|--------|--------|---------|------|-----|----------------------|
| Test coverage | pytest --cov | dotnet test --collect | jacoco | go test -cover | jest --coverage |
| Code complexity | radon | CodeMetrics | PMD | gocyclo | eslint-plugin-complexity |
| Lines of code | cloc | cloc | cloc | cloc | cloc |
| Dependency check | pip-audit | dotnet list package --vulnerable | mvn dependency-check | govulncheck | npm audit |
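As one illustration of capturing a baseline number, the sketch below assumes a Python project with pytest-cov and its JSON report; adapt the commands per the table above:
```python
# Captures coverage and test execution time into a raw baseline file.
# Assumes pytest-cov; `--cov-report=json` writing coverage.json with a
# totals.percent_covered field is a pytest-cov/coverage.py convention.
import json
import subprocess
import time

started = time.monotonic()
subprocess.run(["pytest", "--cov", "--cov-report=json"], check=True)
elapsed = time.monotonic() - started

with open("coverage.json") as f:
    percent = json.load(f)["totals"]["percent_covered"]

baseline = {"test_coverage_pct": round(percent, 1),
            "test_execution_time_s": round(elapsed, 1)}
with open("baseline_raw.json", "w") as f:
    json.dump(baseline, f, indent=2)  # raw data for later comparison
```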
## Notes
- Run measurements multiple times for accuracy
- Document measurement methodology
- Save raw data for comparison
- Focus on metrics relevant to refactoring goals
@@ -1,48 +0,0 @@
# Create Documentation from Existing Codebase
## Role
You are a Principal Software Architect and Technical Communication Expert.
## Task
Generate production-grade documentation from existing code that serves both maintenance engineers and consuming developers.
## Core Directives:
- Truthfulness: Never invent features. Ground every claim in the provided code.
- Clarity: Use professional, third-person objective tone.
- Completeness: Document every public interface, summarize private internals unless critical.
- Visuals: Visualize complex logic using Mermaid.js.
## Process:
1. Analyze the project structure; form a rough understanding from directories, projects, and files
2. Go file by file, analyze each method, convert it to a short API reference description, and form a rough flow diagram
3. Analyze the summaries and code, map the connections between components, and form the detailed structure
## Output Format
Store description of each component to `_docs/02_components/[##]_[component_name]/[##]._component_[component_name].md`:
1. High-level overview
- **Purpose:** Component role in the larger system.
- **Architectural Pattern:** Design patterns used.
2. Logic & Architecture
- Mermaid `graph TD` or `sequenceDiagram`
- draw.io components diagram
3. API Reference table:
- Name, Description, Input, Output
- Test cases for the method
4. Implementation Details
- **Algorithmic Complexity:** Big O for critical methods.
- **State Management:** Local vs. global state.
- **Dependencies:** External libraries.
5. Tests
- Integration tests needed
- Non-functional tests needed
6. Extensions and Helpers
- Store to `_docs/02_components/helpers`
7. Caveats & Edge Cases
- Known limitations
- Race conditions
- Performance bottlenecks
## Notes
- Verify all parameters are captured
- Verify Mermaid diagrams are syntactically correct
- Explain why the code works, not just how
@@ -1,36 +0,0 @@
# Form Solution with Flows
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Generated component docs: `@_docs/02_components`
## Role
You are a professional software architect
## Task
- Review all generated component documentation
- Synthesize into a cohesive solution description
- Create flow diagrams showing how components interact
- Identify the main use cases and their flows
## Output
### Solution Description
Store to `_docs/01_solution/solution.md`:
- Short Product solution description
- Component interaction diagram (draw.io)
- Components overview and their responsibilities
### Flow Diagrams
Store to `_docs/02_components/system_flows.md`:
- Mermaid Flowchart diagrams for main control flows:
- Create flow diagram per major use case
- Show component interactions
- Note data transformations
- Flows can relate to each other
- Show entry points, decision points, and outputs
## Notes
- Focus on documenting what exists, not what should be
@@ -1,39 +0,0 @@
# Deep Research of Approaches
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Current solution: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
## Role
You are a professional researcher and software architect
## Task
- Analyze current implementation patterns
- Research modern approaches for similar systems
- Identify what could be done differently
- Suggest improvements based on state-of-the-art practices
## Output
### Current State Analysis
- Patterns currently used
- Strengths of current approach
- Weaknesses identified
### Alternative Approaches
For each major component/pattern:
- Current approach
- Alternative approach
- Pros/Cons comparison
- Migration effort (Low/Medium/High)
### Recommendations
- Prioritized list of improvements
- Quick wins (low effort, high impact)
- Strategic improvements (higher effort)
## Notes
- Focus on practical, achievable improvements
- Consider existing codebase constraints
@@ -1,40 +0,0 @@
# Solution Assessment with Codebase
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Current solution: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
- Research findings: from step 4.30
## Role
You are a professional software architect
## Task
- Assess current implementation against acceptance criteria
- Identify weak points in current codebase
- Map research recommendations to specific code areas
- Prioritize changes based on impact and effort
## Output
### Weak Points Assessment
For each issue found:
- Location (component/file)
- Weak point description
- Impact (High/Medium/Low)
- Proposed solution
### Gap Analysis
- Acceptance criteria vs current state
- What's missing
- What needs improvement
### Refactoring Roadmap
- Phase 1: Critical fixes
- Phase 2: Major improvements
- Phase 3: Nice-to-have enhancements
## Notes
- Ground all findings in actual code
- Be specific about locations and changes needed
@@ -1,52 +0,0 @@
# Integration Tests Description
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Current solution: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
## Role
You are a professional Quality Assurance Engineer
## Prerequisites
- Baseline metrics captured (see 4.07_capture_baseline.md)
- Feature parity checklist created (see `@_docs/00_templates/feature_parity_checklist.md`)
## Coverage Requirements (MUST meet before refactoring)
- Minimum overall coverage: 75%
- Critical path coverage: 90%
- All public APIs must have integration tests
- All error handling paths must be tested
## Task
- Analyze existing test coverage
- Define integration tests that capture current system behavior
- Tests should serve as safety net for refactoring
- Cover critical paths and edge cases
- Ensure coverage requirements are met before proceeding to refactoring
## Output
Store test specs to `_docs/02_tests/[##]_[test_name]_spec.md`:
- Integration tests
- Summary
- Current behavior being tested
- Input data
- Expected result
- Maximum expected time
- Acceptance tests
- Summary
- Preconditions
- Steps with expected results
- Coverage Analysis
- Current coverage percentage
- Target coverage (75% minimum)
- Critical paths not covered
## Notes
- Focus on behavior preservation
- These tests validate refactoring doesn't break functionality
@@ -1,34 +0,0 @@
# Implement Tests
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Tests specifications: `@_docs/02_tests`
## Role
You are a professional software developer
## Task
- Implement all tests from specifications
- Ensure all tests pass on current codebase (before refactoring)
- Set up test infrastructure if not exists
- Configure test data fixtures
## Process
1. Set up test environment
2. Implement each test from spec
3. Run tests, verify all pass
4. Document any discovered issues
## Output
- Implemented tests in test folder
- Test execution report:
- Test name
- Status (Pass/Fail)
- Execution time
- Issues discovered (if any)
## Notes
- All tests MUST pass before proceeding to refactoring
- Tests are the safety net for changes
@@ -1,38 +0,0 @@
# Analyze Coupling
## Initial data:
- Current solution: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
- Codebase
## Role
You are a software architect specializing in code quality
## Task
- Analyze coupling between components/modules
- Identify tightly coupled areas
- Map dependencies (direct and transitive)
- Form decoupling strategy
## Output
### Coupling Analysis
- Dependency graph (Mermaid)
- Coupling metrics per component
- Circular dependencies found
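One way to derive the dependency graph and circular dependencies is to scan imports; a rough sketch assuming a Python codebase and the networkx library (module resolution here is deliberately naive):
```python
# Builds a module-level import graph under src/ and reports cycles;
# a coarse sketch, not a full coupling analyzer.
import ast
import pathlib
import networkx as nx

graph = nx.DiGraph()
for path in pathlib.Path("src").rglob("*.py"):
    module = path.stem
    tree = ast.parse(path.read_text())
    for node in ast.walk(tree):
        if isinstance(node, ast.ImportFrom) and node.module:
            graph.add_edge(module, node.module.split(".")[0])
        elif isinstance(node, ast.Import):
            for alias in node.names:
                graph.add_edge(module, alias.name.split(".")[0])

print("circular dependencies:", list(nx.simple_cycles(graph)))
# Fan-out per module is a rough efferent-coupling metric.
print({m: graph.out_degree(m) for m in graph.nodes})
```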
### Problem Areas
For each coupling issue:
- Components involved
- Type of coupling (content, common, control, stamp, data)
- Impact on maintainability
- Severity (High/Medium/Low)
### Decoupling Strategy
- Priority order for decoupling
- Proposed interfaces/abstractions
- Estimated effort per change
## Notes
- Focus on high-impact coupling issues first
- Consider backward compatibility
@@ -1,43 +0,0 @@
# Execute Decoupling
## Initial data:
- Decoupling strategy: from step 4.60
- Tests: implemented in step 4.50
- Codebase
## Role
You are a professional software developer
## Task
- Execute decoupling changes per strategy
- Fix code smells encountered during refactoring
- Run tests after each significant change
- Ensure all tests pass before proceeding
## Process
For each decoupling change:
1. Implement the change
2. Run integration tests
3. Fix any failures
4. Commit with descriptive message
## Code Smells to Address
- Long methods
- Large classes
- Duplicate code
- Dead code
- Magic numbers/strings
## Output
- Refactored code
- Test results after each change
- Summary of changes made:
- Change description
- Files affected
- Tests status
## Notes
- Small, incremental changes
- Never break tests
- Commit frequently
@@ -1,40 +0,0 @@
# Technical Debt
## Initial data:
- Current solution: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
- Codebase
## Role
You are a technical debt analyst
## Task
- Identify technical debt in the codebase
- Categorize and prioritize debt items
- Estimate effort to resolve
- Create actionable plan
## Output
### Debt Inventory
For each item:
- Location (file/component)
- Type (design, code, test, documentation)
- Description
- Impact (High/Medium/Low)
- Effort to fix (S/M/L/XL)
- Interest (cost of not fixing)
### Prioritized Backlog
- Quick wins (low effort, high impact)
- Strategic debt (high effort, high impact)
- Tolerable debt (low impact, can defer)
### Recommendations
- Immediate actions
- Sprint-by-sprint plan
- Prevention measures
## Notes
- Be realistic about effort estimates
- Consider business priorities
@@ -1,49 +0,0 @@
# Performance Optimization
## Initial data:
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Current solution: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
- Codebase
## Role
You are a performance engineer
## Task
- Identify performance bottlenecks
- Profile critical paths
- Propose optimizations
- Implement and verify improvements
## Output
### Bottleneck Analysis
For each bottleneck:
- Location
- Symptom (slow response, high memory, etc.)
- Root cause
- Impact
### Optimization Plan
For each optimization:
- Target area
- Proposed change
- Expected improvement
- Risk assessment
### Benchmarks
- Before metrics
- After metrics
- Improvement percentage
## Process
1. Profile current performance
2. Identify top bottlenecks
3. Implement optimizations one at a time
4. Benchmark after each change
5. Verify tests still pass
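In the spirit of "measure before optimizing", profiling a suspected hot path might look like the sketch below; `handle_request` is a stand-in for real code:
```python
# Profiles a hot path and prints the top offenders by cumulative
# time; candidates for optimization, not a benchmark suite.
import cProfile
import pstats

def handle_request() -> None:
    sum(i * i for i in range(100_000))  # stand-in workload

profiler = cProfile.Profile()
profiler.enable()
for _ in range(50):
    handle_request()
profiler.disable()

pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)
```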
## Notes
- Measure before optimizing
- Optimize the right things (profile first)
- Don't sacrifice readability for micro-optimizations
@@ -1,48 +0,0 @@
# Security Review
## Initial data:
- Security approach: `@_docs/00_problem/security_approach.md`
- Current solution: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
- Codebase
## Role
You are a security engineer
## Task
- Review code for security vulnerabilities
- Check against OWASP Top 10
- Verify security requirements are met
- Recommend fixes for issues found
## Output
### Vulnerability Assessment
For each issue:
- Location
- Vulnerability type (injection, XSS, CSRF, etc.)
- Severity (Critical/High/Medium/Low)
- Exploit scenario
- Recommended fix
### Security Controls Review
- Authentication implementation
- Authorization checks
- Input validation
- Output encoding
- Encryption usage
- Logging/monitoring
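As an example of the kind of finding and fix this review produces, here is a hedged sketch of an injection issue and its parameterized-query remedy (hypothetical sqlite3 example):
```python
# Demonstrates why input validation plus parameter binding matters;
# table, column, and input are all illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT)")

user_input = "alice@example.com' OR '1'='1"

# Vulnerable pattern: attacker-controlled input concatenated into SQL.
# conn.execute(f"SELECT * FROM users WHERE email = '{user_input}'")

# Safe pattern: the driver binds the value; input cannot alter the query.
rows = conn.execute(
    "SELECT * FROM users WHERE email = ?", (user_input,)
).fetchall()
print(rows)  # [] because the injection attempt matches nothing
```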
### Compliance Check
- Requirements from security_approach.md
- Status (Met/Partially Met/Not Met)
- Gaps to address
### Recommendations
- Critical fixes (must do)
- Improvements (should do)
- Hardening (nice to have)
## Notes
- Prioritize critical vulnerabilities
- Provide actionable fix recommendations
@@ -1,189 +0,0 @@
# Generate Feature Specification
Create a focused behavioral specification that describes **what** the system should do, not **how** it should be built.
## Input parameter
building_block.md
Example: `_docs/iterative/building_blocks/01-dashboard-export-example.md`
## Objective
Generate lean specifications with:
- Clear problem statement and desired outcomes
- Behavioral acceptance criteria in Gherkin format
- Essential non-functional requirements
- Complexity estimation
- Feature dependencies
- No implementation prescriptiveness
## Process
1. Read the building_block.md
2. Analyze the codebase to understand context
3. Generate a behavioral specification using the structure below
4. **DO NOT** include implementation details, file structures, or technical architecture
5. Focus on behavior, user experience, and acceptance criteria
6. Save the specification into `_docs/iterative/feature_specs/spec.md`
Example: `_docs/iterative/feature_specs/01-dashboard-export-example.md`
## Specification Structure
### Header
```markdown
# [Feature Name]
**Status**: Draft | **Date**: [YYYY-MM-DD] | **Feature**: [Brief Feature Description]
**Complexity**: [1|2|3|5|8] points
**Dependencies**: [List dependent features or "None"]
```
### Problem
Clear, concise statement of the problem users are facing.
### Outcome
Measurable or observable goals/benefits (use bullet points).
### Scope
#### Included
What's in scope for this feature (bullet points).
#### Excluded
Explicitly what's **NOT** in scope (bullet points).
### Acceptance Criteria
Each acceptance criterion should be:
- Numbered sequentially (AC-1, AC-2, etc.)
- Include a brief title
- Written in Gherkin format (Given/When/Then)
Example:
**AC-1: Export Availability**
Given the user is viewing the dashboard
When the dashboard loads
Then an "Export to Excel" button should be visible in the filter/actions area
### Non-Functional Requirements
Only include essential non-functional requirements:
- Performance (if relevant)
- Compatibility (if relevant)
- Reliability (if relevant)
Use sub-sections with bullet points.
### Unit tests based on Acceptance Criteria
- Acceptance criteria references
- What should be tested
- Required outcome
### Integration tests based on Acceptance Criteria and/or Non-Functional requirements
- Acceptance criteria references
- Initial data and conditions
- What should be tested
- How system should behave
- List of Non-functional requirements to be met
### Constraints
High-level constraints that guide implementation:
- Architectural patterns (if critical)
- Technical limitations
- Integration requirements
- No breaking changes (if applicable)
### Risks & Mitigation
List key risks with mitigation strategies (if applicable).
Each risk should have:
- *Risk*: Description
- *Mitigation*: Approach
## Complexity Points Guide
- 1 point: Trivial, self-contained, no dependencies
- 2 points: Non-trivial, low complexity, minimal coordination
- 3 points: Multi-step, moderate complexity, potential alignment needed
- 5 points: Difficult, interconnected logic, medium-high risk
- 8 points: High ambiguity, multiple components, very high risk (consider splitting)
## Output Guidelines
**DO:**
- Focus on behavior and user experience
- Use clear, simple language
- Keep acceptance criteria testable
- Include realistic scope boundaries
- Write from the user's perspective
- Include complexity estimation
- Note dependencies on other features
**DON'T:**
- Include implementation details (file paths, classes, methods)
- Prescribe technical solutions or libraries
- Add architectural diagrams or code examples
- Specify exact API endpoints or data structures
- Include step-by-step implementation instructions
- Add "how to build" guidance
## Example
```markdown
# Dashboard Export to Excel
**Status**: Draft | **Date**: 2025-01-XX | **Feature**: Export Dashboard Data to Excel
## Problem
Users currently have no efficient way to export dashboard data for offline analysis, reporting, or sharing. Manual copy-paste is time-consuming, error-prone, and lacks context about active filters.
## Outcome
- Eliminate manual copy-paste workflows
- Enable accurate data sharing with proper context
- Measurable time savings (target: <30s vs. several minutes)
- Improved data consistency for offline analysis
## Scope
### Included
- Export filtered dashboard data to Excel
- Single-click export from dashboard view
- Respect all active filters (status, date range)
### Excluded
- CSV or PDF export options
- Scheduled or automated exports
- Email export functionality
## Acceptance Criteria
**AC-1: Export Button Visibility**
Given the user is viewing the dashboard
When the dashboard loads
Then an "Export to Excel" button should be visible in the actions area
**AC-2: Basic Export Functionality**
Given the user is viewing the dashboard with data
When the user clicks the "Export to Excel" button
Then an Excel file should download to their default location
And the filename should include a timestamp
## Non-Functional Requirements
**Performance**
- Export completes in <2 seconds for up to 1000 records
- Support up to 10,000 records per export
**Compatibility**
- Excel files openable in Microsoft Excel, Google Sheets, and LibreOffice
- Standard Excel format (.xlsx)
## Constraints
- Must respect all currently active filters
- Must follow existing hexagonal architecture patterns
- No breaking changes to existing functionality
## Risks & Mitigation
**Risk 1: Excel File Compatibility**
- *Risk*: Generated files don't open correctly in all spreadsheet applications
- *Mitigation*: Use standard Excel format, test with multiple applications
```
## Implementation Notes
- Use descriptive but concise titles
- Keep specifications focused and scoped appropriately
- Remember: This is a **behavioral spec**, not an implementation plan
**CRITICAL**: Generate the spec file ONLY. Do NOT modify code, create files, or make any implementation changes at this stage.
@@ -1,81 +0,0 @@
# Generate Jira Task and Git Branch from Spec
Create a Jira ticket from a specification and set up git branch for development.
## Inputs
- feature_spec.md (required): path to the source spec file.
Example: `@_docs/iterative/feature_specs/spec-export-e2e.md`
- epic <Epic-Id> (required for Jira task creation): create Jira task under parent epic
Example: /gen_jira_task_and_branch @_docs/iterative/feature_specs/spec.md epic AZ-112
- update <Task-Id> (required for Jira task update): update existing Jira task
Example: /gen_jira_task_and_branch @_docs/iterative/feature_specs/spec.md update AZ-151
## Objective
1. Parse the spec to extract **Title**, **Description**, **Acceptance Criteria**, **Technical Details**, **Estimation**.
2. Create a Jira Task under Epic or Update existing Jira Task using **Jira MCP**
3. Create git branch for the task
## Parsing Rules
### Title
Use the first header at the top of the spec.
### Description (Markdown ONLY — no AC/Tech here)
Build from:
- **Purpose & Outcomes → Intent** (bullets)
- **Purpose & Outcomes → Success Signals** (bullets)
- (Optional) one-paragraph summary from **Behavior Change → New Behavior**
> **Do not include** Acceptance Criteria or Technical Details in Description if those fields exist in Jira.
### Estimation
Extract complexity points from spec header and add to Jira task.
### Acceptance Criteria (Gherkin HTML)
From **"Acceptance Criteria (Gherkin)"**, extract the **full Gherkin scenarios** including:
- The `Feature:` line
- Each complete `Scenario:` block with all `Given`, `When`, `Then`, `And` steps
- Convert the entire Gherkin text to **HTML format** preserving structure
- Do NOT create a simple checklist; keep the full Gherkin syntax for test traceability.
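A minimal sketch of such a structure-preserving conversion, assuming plain HTML is acceptable in your Jira fields:
```python
# Converts Gherkin text to HTML while preserving line structure;
# an illustrative sketch, real formatting depends on the Jira setup.
import html

def gherkin_to_html(gherkin: str) -> str:
    lines = [html.escape(line) for line in gherkin.strip().splitlines()]
    return "<p>" + "<br/>".join(lines) + "</p>"

ac = """Feature: Dashboard export
Scenario: Export button visibility
  Given the user is viewing the dashboard
  When the dashboard loads
  Then an "Export to Excel" button should be visible"""
print(gherkin_to_html(ac))
```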
### Technical Details
Bullets composed of:
- **Inputs → Key constraints**
- **Scope → Included/Excluded** (condensed)
- **Interfaces & Contracts** (names only — UI actions, endpoint names, event names)
## Steps (Agent)
1. **Check current branch**
- Verify user is on `dev` branch
- If not on `dev`, notify user: "Please switch to the dev branch before proceeding"
- Stop execution if not on dev
2. Parse **Title**, **Description**, **AC**, **Tech**, **Estimation** per **Parsing Rules**.
3. **Create** or **Update** the Jira Task with the field mapping above.
- If creating a new Task with Epic provided, add the parent relation
- Do NOT modify the parent Epic work item.
4. **Create git branch**
```bash
git stash
git checkout -b {taskId}-{taskNameSlug}
git stash pop
```
- {taskId} is Jira task Id (lowercase), e.g., `az-122`
- {taskNameSlug} is kebab-case slug from task title, e.g., `progressive-search-system`
- Full branch name example: `az-122-progressive-search-system`
5. Rename spec.md and corresponding building block:
- Rename to `_docs/iterative/feature_specs/{taskId}-{taskNameSlug}.md`
- Rename to `_docs/iterative/building_blocks/{taskId}-{taskNameSlug}.md`
## Guardrails
- No source code edits; only Jira task, file moves, and git branch.
- If Jira creation/update fails, do not create branch or move files.
- If AC/Tech fields are absent in Jira, append to Description.
- **CRITICAL**: Extract the FULL Gherkin scenarios with all steps - do NOT create simple checklist items.
- Do not edit parent Epic.
- Always check for dev branch before proceeding.
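The `{taskId}-{taskNameSlug}` branch name from step 4 can be derived mechanically; a hypothetical helper, offered only as a sketch:
```python
# Hypothetical helper for the {taskId}-{taskNameSlug} branch name;
# not part of the spec itself.
import re

def branch_name(task_id: str, title: str) -> str:
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return f"{task_id.lower()}-{slug}"

assert branch_name("AZ-122", "Progressive Search System") == \
    "az-122-progressive-search-system"
```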
@@ -1,120 +0,0 @@
# Merge and Deploy Feature
Complete the feature development cycle by creating PR, merging, and updating documentation.
## Input parameters
- task_id (required): Jira task ID
Example: /gen_merge_and_deploy AZ-122
## Prerequisites
- All tests pass locally
- Code review completed (or ready for review)
- Definition of Done checklist reviewed
## Steps (Agent)
### 1. Verify Branch Status
```bash
git status
git log --oneline -5
```
- Confirm on feature branch (e.g., az-122-feature-name)
- Confirm all changes committed
- If uncommitted changes exist, prompt user to commit first
### 2. Run Pre-merge Checks
**User action required**: Run your project's test and lint commands before proceeding.
```bash
# Check for merge conflicts
git fetch origin dev
git merge origin/dev --no-commit --no-ff || git merge --abort
```
- [ ] All tests pass (run project-specific test command)
- [ ] No linting errors (run project-specific lint command)
- [ ] No merge conflicts (or resolve them)
### 3. Update Documentation
#### CHANGELOG.md
Add entry under "Unreleased" section:
```markdown
### Added/Changed/Fixed
- [TASK_ID] Brief description of change
```
#### Update Jira
- Add comment with summary of implementation
- Link any related PRs or documentation
### 4. Create Pull Request
#### PR Title Format
`[TASK_ID] Brief description`
#### PR Body (from template)
```markdown
## Description
[Summary of changes]
## Related Issue
Jira ticket: [TASK_ID](link)
## Type of Change
- [ ] Bug fix
- [ ] New feature
- [ ] Refactoring
## Checklist
- [ ] Code follows project conventions
- [ ] Self-review completed
- [ ] Tests added/updated
- [ ] All tests pass
- [ ] Documentation updated
## Breaking Changes
[None / List breaking changes]
## Deployment Notes
[None / Special deployment considerations]
## Rollback Plan
[Steps to rollback if issues arise]
## Testing
[How to test these changes]
```
### 5. Post-merge Actions
After PR is approved and merged:
```bash
# Switch to dev branch
git checkout dev
git pull origin dev
# Delete feature branch
git branch -d {feature_branch}
git push origin --delete {feature_branch}
```
### 6. Update Jira Status
- Move ticket to "Done"
- Add link to merged PR
- Log time spent (if tracked)
## Guardrails
- Do NOT merge if tests fail
- Do NOT merge if there are unresolved review comments
- Do NOT delete branch before merge is confirmed
- Always update CHANGELOG before creating PR
## Output
- PR created/URL provided
- CHANGELOG updated
- Jira ticket updated
- Feature branch cleaned up (post-merge)
@@ -1,149 +0,0 @@
# Iterative Implementation Phase
## Prerequisites
### Jira MCP
Add Jira MCP to the list in IDE:
```
"Jira-MCP-Server": {
"url": "https://mcp.atlassian.com/v1/sse"
}
```
### Context7 MCP
Add context7 MCP to the list in IDE:
```
"context7": {
"command": "npx",
"args": [
"-y",
"@upstash/context7-mcp"
]
}
```
### Reference Documents
- Definition of Done: `@_docs/00_templates/definition_of_done.md`
- Quality Gates: `@_docs/00_templates/quality_gates.md`
- PR Template: `@_docs/00_templates/pr_template.md`
- Feature Dependencies: `@_docs/00_templates/feature_dependency_matrix.md`
## 10. **🧑‍💻 Developers**: Form a building block
### Form a building block in the following format:
```
# Building Block: Title
## Problem / Goal
Short description of the problem to solve or the end goal to achieve. 2-3 lines
## Architecture Notes (optional)
How it should be implemented: which subsystem to use, with a short explanation. 3-5 lines
## Outcome
What we want to achieve from the building block
```
### Example
`_docs/iterative/building_blocks/01-dashboard-export-example.md`
## 20. **🤖AI agent**: Generate Feature Specification
### Execute `/gen_feature_spec`
## 25. **🧑‍💻 Developer**: Check Feature Dependencies
### Verify
- Check `@_docs/00_templates/feature_dependency_matrix.md`
- Ensure all dependent features are completed or mocked
- Update dependency matrix with new feature
## 30. **🤖AI agent**: Generate Jira ticket and branch
### Execute `/gen_jira_task_and_branch`
This will:
- Create Jira task under specified epic
- Create git branch from dev (e.g., `az-122-progressive-search-system`)
## 40. **🤖📋AI plan**: Generate Plan
### Execute
generate plan for `@_docs/iterative/feature_specs/spec.md`
Example:
generate plan for `@_docs/iterative/feature_specs/01-dashboard-export-example.md`
## 45. **🧑‍💻 Developer**: Define Test Strategy
### Determine test types needed:
- [ ] Unit tests (always required)
- [ ] Integration tests (if touching external systems/DB)
- [ ] E2E tests (if user workflow changes)
### Document in plan:
- Which tests to write
- Test data requirements
- Mocking strategy
## 50. **🧑‍💻 Developer**: Save the plan
Save the generated plan to `@_docs/iterative/plans`.
(First save it with the built-in mechanism to the .cursor folder, then move it to `@_docs/iterative/plans`)
## 55. **🧑‍💻 Developer**: Review Plan Before Build
### Checklist
- [ ] Plan covers all acceptance criteria
- [ ] Test strategy defined
- [ ] Dependencies identified and available
- [ ] No architectural concerns
- [ ] Estimate seems reasonable
## 60. Build from the plan
## 65. **🤖📋AI plan**: Code Review
### Execute
Use Cursor's built-in review feature or manual review.
### Verify
- All issues addressed
- Code quality standards met
## 70. Check build and tests are successful
**User action required**: Run your project's test, lint, and coverage commands.
- [ ] All tests pass
- [ ] No linting errors
- [ ] Code coverage >= 75%
## 72. **🧑‍💻 Developer**: Run Full Verification
### Local Verification
- [ ] All unit tests pass
- [ ] All integration tests pass
- [ ] Code coverage >= 75%
- [ ] No linting errors
- [ ] Manual testing completed (if UI changes)
### Quality Gate Check
Review `@_docs/00_templates/quality_gates.md` - Iterative Gate 3
## 75. **🤖AI agent**: Create PR and Merge
### Execute `/gen_merge_and_deploy`
This will:
- Verify branch status
- Run pre-merge checks
- Update CHANGELOG
- Create PR using template
- Guide through merge process
## 78. **🧑‍💻 Developer**: Finalize
- Move Jira ticket to Done
- Verify CI pipeline passed on dev
@@ -1,430 +0,0 @@
# 1 Research Phase
## 1.01 **🧑‍💻 Developers**: Problem statement
### Discuss
Discuss the problem and create the following files and folders in `_docs/00_problem`:
- `problem_description.md`: The problem to solve, with the end result we want to achieve.
- `input_data`: Put all the necessary input data and expected results for later tests in this folder. Analyze the input data very thoroughly and derive the system's restrictions and acceptance criteria
- `restrictions.md`: Real-world restrictions, as a dashed list.
- `acceptance_criteria.md`: Acceptance criteria for the solution, as a dashed list.
This is the most important part; it determines how good the system should be.
- `security_approach.md`: Security requirements and constraints for the system.
### Example:
- `problem_description.md`
We have a wing-type UAV (airplane). It should fly autonomously to a predetermined GPS destination. During the flight it relies on the signal from the GPS module.
But when an adversary jams or spoofs GPS, the UAV either doesn't know where to fly or flies in the wrong direction.
So we need the UAV to fly correctly to the destination without GPS, or when GPS is spoofed. We can use a downward-pointing camera and other sensor data, such as altitude, available from the flight controller. The airplane runs ArduPilot.
- `input_data`
- orthophoto images from the UAV for the analysis
- list of expected GPS coordinates for the center of each picture, in CSV format: picture name, lat, lon
- video from the UAV for the analysis
- list of expected GPS coordinates for the video frame centers, in CSV format: timestamp, lat, lon for every 1-2 seconds
- ...
- `restrictions.md`
- We're limiting our solution to airplane type UAVs.
- The additional weight it can carry is under 1 kg.
- The whole system should cost under $2000.
- The flight range is restricted to the eastern and southern parts of Ukraine. And so on.
- `acceptance_criteria.md`
- The UAV should fly without GPS for at least 30 km in sunny weather.
- The UAV should fly with a maximum error of no more than 40 meters from the true GPS position
- The UAV should fly correctly in light fog, with a maximum error of no more than 100 meters from the true GPS position
- The UAV should fly a minimum of 500 meters with internal satellite maps missing, and the drift error should be no more than 50 meters.
- `security_approach.md`
- System runs on embedded platform (Jetson Orin Nano) with secure boot
- Communication with ground station encrypted via AES-256
- No remote API access during flight - fully autonomous
- Firmware signing required for updates
## 1.05 **🧑‍💻 Developers**: Git Init
### Initialize Repository
```bash
git init
git add .
git commit -m "Initial: problem statement and input data"
```
### Branching Strategy
- `main`: Documentation and stable releases
- `stage`: Planning phase artifacts
- `dev`: Implementation code
After research phase completion, all docs stay on `main`.
Before planning phase, create `stage` branch.
Before implementation phase, create `dev` branch from `stage`.
After integration tests pass, merge `dev` → `stage` → `main`.
## 1.10 **✨AI Research**: Restrictions and Acceptance Criteria assessment
### Execute `/1.research/1.10_research_assesment_acceptance_criteria`
If using an external deep-research tool (Gemini, DeepSeek, or other), copy-paste the command's text and include in the research context:
- `problem_description.md`
- `restrictions.md`
- `acceptance_criteria.md`
- `security_approach.md`
- Samples of the input data
### Revise
- Revise the result, discuss it
- Overwrite `acceptance_criteria.md` and `restrictions.md`
### Commit
```bash
git add _docs/00_problem/
git commit -m "Research: acceptance criteria and restrictions assessed"
```
## 1.20 **🤖✨AI Research**: Research the problem in great detail
### Execute `/1.research/1.20_research_problem`
If using an external deep-research tool (Gemini, DeepSeek, or other), copy-paste the command's text and include in the research context:
- `problem_description.md`
- `restrictions.md`
- `acceptance_criteria.md`
- `security_approach.md`
- Samples of the input data
### Revise
- Revise the result from AI.
- Research the problem as well
- Add/modify/remove some solution details in the draft. (Also with AI)
- Store it to the `_docs/01_solution/solution_draft.md`
## 1.30 **🤖✨AI Research**: Solution draft assessment
### Execute `/1.research/1.30_solution_draft_assessment`
If using an external deep-research tool (Gemini, DeepSeek, or other), copy-paste the command's text and include in the research context:
- `problem_description.md`
- `restrictions.md`
- `acceptance_criteria.md`
- `security_approach.md`
- Samples of the input data
### Revise
- Also research on your own how to solve the additional problems the AI identified, and add them to the result.
### Iterate
- Rename previous `solution_draft.md` to `{xx}_solution_draft.md`. Start {xx} from 01
- Store the new revised result draft to the `_docs/01_solution/solution_draft.md`
- Repeat the process 1.30 from the beginning
When the next solution no longer differs much from the previous one, or actually becomes worse, store the last draft as `_docs/01_solution/solution.md`
## 1.35 **🤖📋AI plan**: Tech Stack Selection
### Execute `/1.research/1.35_tech_stack_selection`
### Revise
- Review technology choices against requirements
- Consider team expertise and learning curve
- Document trade-offs and alternatives considered
### Store
- Save output to `_docs/01_solution/tech_stack.md`
## 1.40 **🤖✨AI Research**: Security Research
### Execute `/1.research/1.40_security_research`
### Revise
- Review security approach against solution architecture
- Update `security_approach.md` with specific requirements per component
### Quality Gate: Research Complete
Review `@_docs/00_templates/quality_gates.md` - Gate 1
### Commit
```bash
git add _docs/
git commit -m "Research: solution and security finalized"
```
# 2. Planning phase
> **Note**: If implementation reveals architectural issues, return to Planning phase to revise components.
## 2.05 **🧑‍💻 Developers**: Create stage branch
```bash
git checkout -b stage
```
## 2.10 **🤖📋AI plan**: Generate components
### Execute `/2.planning/2.10_plan_components`
### Revise
- Revise the plan, answer questions, add detailed descriptions
- Make sure stored components are coherent and make sense
### Store plan
- Save plan to `_docs/02_components/00_decomposition_plan.md`
## 2.15 **🤖📋AI plan**: Components assessment
### Execute `/2.planning/2.15_plan_asses_components`
### Revise
- Clarify the proposals and ask for fixes to the issues found
## 2.17 **🤖📋AI plan**: Security Check
### Execute `/2.planning/2.17_plan_security_check`
### Revise
- Review security considerations for each component
- Ensure security requirements from 1.40 are addressed
## 2.20 **🤖AI agent**: Generate Jira Epics
### Jira MCP
Add Jira MCP to the list in IDE:
```
"Jira-MCP-Server": {
"url": "https://mcp.atlassian.com/v1/sse"
}
```
### Execute `/2.planning/2.20_plan_jira_epics`
### Revise
- Revise the epics, answer questions, add detailed descriptions
- Make sure epics are coherent and make sense
## 2.22 **🤖📋AI plan**: Data Model Design
### Execute `/2.planning/2.22_plan_data_model`
### Revise
- Review entity relationships
- Verify data access patterns
- Check migration strategy
### Store
- Save output to `_docs/02_components/data_model.md`
## 2.25 **🤖📋AI plan**: API Contracts Design
### Execute `/2.planning/2.25_plan_api_contracts`
### Revise
- Review interface definitions
- Verify error handling standards
- Check versioning strategy
### Store
- Save output to `_docs/02_components/api_contracts.md`
- Save OpenAPI spec to `_docs/02_components/openapi.yaml` (if applicable)
## 2.30 **🤖📋AI plan**: Generate tests
### Execute `/2.planning/2.30_plan_tests`
### Revise
- Revise the tests, answer questions, add detailed descriptions
- Make sure stored tests are coherent and make sense
## 2.35 **🤖📋AI plan**: Risk Assessment
### Execute `/2.planning/2.37_plan_risk_assessment`
### Revise
- Review identified risks
- Verify mitigation strategies
- Set up risk monitoring
### Store
- Save output to `_docs/02_components/risk_assessment.md`
## 2.40 **🤖📋AI plan**: Component Decomposition To Features
### Execute
For each component in `_docs/02_components` run
`/2.planning/2.40_plan_features_decompose --component @xx__spec_[component_name].md`
### Revise
- Revise the features, answer questions, add detailed descriptions
- Make sure features are coherent and make sense
### Quality Gate: Planning Complete
Review `@_docs/00_templates/quality_gates.md` - Gate 2
### Commit
```bash
git add _docs/
git commit -m "Planning: components, tests, and features defined"
```
# 3. Implementation phase
## 3.05 **🤖📋AI plan**: Initial structure
### Create dev branch
```bash
git checkout -b dev
```
### Context7 MCP
Add context7 MCP to the list in IDE:
```
"context7": {
"command": "npx",
"args": [
"-y",
"@upstash/context7-mcp"
]
}
```
### Execute: `/3.implementation/3.05_implement_initial_structure`
This will create:
- Project structure with CI/CD pipeline
- Environment configurations (see `@_docs/00_templates/environment_strategy.md`)
- Database migrations setup
- Test infrastructure
### Review Plan
- Analyze the proposals, answer questions
- Improve the plan as much as possible so it is clear exactly what to do
### Save Plan
- When the plan is final and ready, save it as `_docs/02_components/structure_plan.md`
### Execute Plan
- Press build and let AI generate the structure
### Revise Code
- Read the code and check that everything is ok
## 3.10 **🤖📋AI plan**: Feature implementation
### Execute
For each component in `_docs/02_components` run
`/3.implementation/3.10_implement_component @component_folder`
### Revise Plan
- Analyze the proposed development plan in great detail, provide all necessary information
- Reorganize the plan if needed; think through and add more input constraints if needed
- Improve the plan as much as possible so it is clear exactly what to do
### Save Plan
- When the plan is final and ready, save it as `[##]._plan_[component_name]` in the component's folder
### Execute Plan
- Press build and let AI generate the code
### Revise Code
- Read the code and check that everything is ok
## 3.20 **🤖📋AI plan**: Code Review
### Execute `/3.implementation/3.20_implement_code_review`
Can also use Cursor's built-in review feature.
### Revise
- Address all found issues
- Ensure code quality standards are met
## 3.30 **🤖📋AI plan**: CI/CD Validation
### Execute `/3.implementation/3.30_implement_cicd`
### Revise
- Review pipeline configuration
- Verify all quality gates are enforced
- Ensure all stages are properly configured
## 3.35 **🤖📋AI plan**: Deployment Strategy
### Execute `/3.implementation/3.35_plan_deployment`
### Revise
- Review deployment procedures per environment
- Verify rollback procedures documented
- Ensure health checks configured
### Store
- Save output to `_docs/02_components/deployment_strategy.md`
## 3.40 **🤖📋AI plan**: Integration tests and solution checks
### Execute `/3.implementation/3.40_implement_tests`
### Revise
- Revise the plan, answer questions, add detailed descriptions
- Make sure tests are coherent and make sense
## 3.42 **🤖📋AI plan**: Observability Setup
### Execute `/3.implementation/3.42_plan_observability`
### Revise
- Review logging strategy
- Verify metrics and alerting
- Check dashboard configuration
### Store
- Save output to `_docs/02_components/observability_plan.md`
## 3.45 **🧑‍💻 Developer**: Final Quality Gate
### Quality Gate: Implementation Complete
Review `@_docs/00_templates/quality_gates.md` - Gate 3
### Checklist
- [ ] All components implemented
- [ ] Code coverage >= 75%
- [ ] All tests pass
- [ ] Code review approved
- [ ] CI/CD pipeline green
- [ ] Deployment tested on staging
- [ ] Observability configured
### Merge after tests pass
```bash
git checkout stage
git merge dev
git checkout main
git merge stage
git push origin main
```
## 3.50 **🧑‍💻 Developer**: Post-Implementation
### Documentation
- [ ] Update README with final setup instructions
- [ ] Create/update runbooks using `@_docs/00_templates/incident_playbook.md`
- [ ] Document rollback procedures using `@_docs/00_templates/rollback_strategy.md`
### Handoff
- [ ] Stakeholders notified of completion
- [ ] Operations team briefed on monitoring
- [ ] Support documentation complete
-182
View File
@@ -1,182 +0,0 @@
# Refactoring Existing Project
This tutorial guides you through analyzing, documenting, and refactoring an existing codebase.
## Reference Documents
- Definition of Done: `@_docs/00_templates/definition_of_done.md`
- Quality Gates: `@_docs/00_templates/quality_gates.md`
- Feature Parity Checklist: `@_docs/00_templates/feature_parity_checklist.md`
- Baseline Metrics: `@_docs/04_refactoring/baseline_metrics.md` (created in 4.07)
## 4.05 **🧑‍💻 Developers**: User Input
### Define Goals
Create in `_docs/00_problem`:
- `problem_description.md`: What system currently does + what you want to change/improve
- `acceptance_criteria.md`: Success criteria for the refactoring
- `security_approach.md`: Security requirements (if applicable)
## 4.07 **🤖📋AI plan**: Capture Baseline Metrics
### Execute `/4.refactoring/4.07_capture_baseline`
### Revise
- Verify all metrics are captured accurately
- Document measurement methodology
- Save raw data for later comparison
### Store
- Create folder `_docs/04_refactoring/`
- Save output to `_docs/04_refactoring/baseline_metrics.md`
### Create Feature Parity Checklist
- Copy `@_docs/00_templates/feature_parity_checklist.md` to `_docs/04_refactoring/`
- Fill in current feature inventory
## 4.10 **🤖📋AI plan**: Build Documentation from Code
### Execute `/4.refactoring/4.10_documentation`
### Revise
- Review generated component docs
- Verify accuracy against actual code behavior
## 4.20 **🤖📋AI plan**: Form Solution with Flows
### Execute `/4.refactoring/4.20_form_solution_flows`
### Revise
- Review solution description
- Verify flow diagrams match actual system behavior
- Store to `_docs/01_solution/solution.md`
## 4.30 **🤖✨AI Research**: Deep Research of Approaches
### Execute `/4.refactoring/4.30_deep_research`
### Revise
- Review suggested improvements
- Prioritize changes based on impact vs effort
## 4.35 **🤖✨AI Research**: Solution Assessment with Codebase
### Execute `/4.refactoring/4.35_solution_assessment`
### Revise
- Review weak points identified in current implementation
- Decide which to address
## 4.40 **🤖📋AI plan**: Integration Tests Description
### Execute `/4.refactoring/4.40_tests_description`
### Prerequisites Check
- Baseline metrics captured (4.07)
- Feature parity checklist created
### Coverage Requirements
- Minimum overall coverage: 75%
- Critical path coverage: 90%
- All public APIs must have integration tests
### Revise
- Ensure tests cover critical functionality
- Add edge cases
## 4.50 **🤖📋AI plan**: Implement Tests
### Execute `/4.refactoring/4.50_implement_tests`
### Verify
- All tests pass on current codebase
- Tests serve as safety net for refactoring
- Coverage meets requirements (75% minimum)
### Quality Gate: Safety Net Ready
Review `@_docs/00_templates/quality_gates.md` - Refactoring Gate 1
## 4.60 **🤖📋AI plan**: Analyze Coupling
### Execute `/4.refactoring/4.60_analyze_coupling`
### Revise
- Review coupling analysis
- Prioritize decoupling strategy
## 4.70 **🤖📋AI plan**: Execute Decoupling
### Execute `/4.refactoring/4.70_execute_decoupling`
### Verify After Each Change
- Run integration tests after each change
- All tests must pass before proceeding
- Update feature parity checklist
### Quality Gate: Refactoring Safe
Review `@_docs/00_templates/quality_gates.md` - Refactoring Gate 2
## 4.80 **🤖📋AI plan**: Technical Debt
### Execute `/4.refactoring/4.80_technical_debt`
### Revise
- Review debt items
- Prioritize by impact
## 4.90 **🤖📋AI plan**: Performance Optimization
### Execute `/4.refactoring/4.90_performance`
### Verify
- Compare against baseline metrics from 4.07
- Performance should be improved or maintained
- Run tests to ensure no regressions
## 4.95 **🤖📋AI plan**: Security Review
### Execute `/4.refactoring/4.95_security`
### Verify
- Address identified vulnerabilities
- Run security tests if applicable
## 4.97 **🧑‍💻 Developer**: Final Verification
### Quality Gate: Refactoring Complete
Review `@_docs/00_templates/quality_gates.md` - Refactoring Gate 3
### Compare Against Baseline
- [ ] Code coverage >= baseline
- [ ] Performance metrics improved or maintained
- [ ] All features preserved (feature parity checklist complete)
- [ ] Technical debt reduced
### Feature Parity Verification
- [ ] All items in feature parity checklist verified
- [ ] No functionality lost
- [ ] All tests pass
### Documentation
- [ ] Update solution.md with changes
- [ ] Document any intentional behavior changes
- [ ] Update README if needed
### Commit
```bash
git add .
git commit -m "Refactoring: complete"
```
-21
View File
@@ -1,21 +0,0 @@
- Follow SOLID principles
- Follow KISS principle.
- Follow principle: Dumb code - smart data.
- Follow DRY principles, but do not overcomplicate things; if small or even medium code duplication (sometimes even twice) makes the solution easier, go for it.
Deduplicate code only when the reuse pattern is clear and the shared logic can be cleanly separated into a reusable component
- Follow conventions and rules of the project's programming language
- Always prefer simple solution
- Generate concise code
- Do not put comments in the code
- Do not put logs unless it is an exception, or was asked specifically
- Do not put code annotations unless it was asked specifically
- Your changes should map directly to what was requested. If anything is uncertain, ask questions.
- Mocking data is needed only for tests
- When you add new libraries or dependencies, make sure you use the same versions as the rest of the codebase
- Focus on the areas of code relevant to the task
- Do not touch code that is unrelated to the task
- Always think about what other methods and areas of code might be affected by the code changes
- When you think you are done with changes, run tests and make sure they are not broken
- Do not rename any databases or tables or table columns without confirmation. Avoid such renaming if possible.
- Do not create diagrams unless I ask explicitly
- Do not commit binaries, create and keep .gitignore up to date and delete binaries after you are done with the task
-169
View File
@@ -1,169 +0,0 @@
# Azaion GPS Denied Desktop
**A Resilient, GNSS-Denied Geo-Localization System for Wing-Type UAVs**
GPS-denied UAV localization system using visual odometry and satellite imagery matching for fixed-wing UAVs operating over Eastern/Southern Ukraine.
## Overview
Azaion GPS-Denied addresses the challenge of autonomous navigation in GNSS-denied environments where traditional GPS is unavailable or unreliable. The system is designed for high-speed, fixed-wing UAVs operating without IMU data over visually homogeneous agricultural terrain.
### Key Features
- **Tri-Layer Localization**: Sequential tracking, global re-localization, and metric refinement
- **Sharp Turn Recovery**: Handles 0% image overlap during banking maneuvers
- **350m Outlier Tolerance**: Robust to large positional errors from airframe tilt
- **Real-time Processing**: <5 seconds per frame on RTX 2060/3070
- **Human-in-the-Loop**: Fallback to user input when automation fails
### Accuracy Targets
| Metric | Target |
|--------|--------|
| Photos within 50m error | 80% |
| Photos within 20m error | 60% |
| Processing time per frame | <5 seconds |
| Image registration rate | >95% |
## Architecture
### Processing Layers
| Layer | Purpose | Algorithm | Latency |
|-------|---------|-----------|---------|
| L1: Sequential Tracking | Frame-to-frame pose | SuperPoint + LightGlue | ~50-100ms |
| L2: Global Re-Localization | Recovery after track loss | DINOv2 + VLAD (AnyLoc) | ~200ms |
| L3: Metric Refinement | Precise GPS anchoring | LiteSAM | ~300-500ms |
### Core Components
- **Factor Graph Optimizer** (GTSAM): Fuses relative and absolute measurements
- **Atlas Multi-Map**: Route chunks as first-class entities for handling disconnected segments
- **Satellite Data Manager**: Google Maps tile caching with Web Mercator projection
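For orientation, tile indexing follows the standard Web Mercator (slippy-map) scheme. A minimal sketch of the lat/lon-to-tile conversion (illustrative only, not necessarily the project's exact implementation):

```python
import math

def latlon_to_tile(lat_deg: float, lon_deg: float, zoom: int) -> tuple[int, int]:
    # Standard Web Mercator / slippy-map tile indices
    n = 2 ** zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat_deg)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y
```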
## Tech Stack
- **Python**: 3.10+ (GTSAM compatibility)
- **Web Framework**: FastAPI (async)
- **Database**: PostgreSQL + SQLAlchemy ORM
- **ML Runtime**: TensorRT (primary), ONNX Runtime (fallback)
- **Graph Optimization**: GTSAM
- **Similarity Search**: Faiss
## Development Setup
### Prerequisites
- Python 3.10+
- PostgreSQL 14+
- NVIDIA GPU with CUDA support (RTX 2060/3070 recommended)
- [uv](https://docs.astral.sh/uv/) package manager
### Installation
1. **Clone the repository**
```bash
git clone <repository-url>
cd gps-denied
```
2. **Install uv** (if not already installed)
```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```
3. **Create virtual environment and install dependencies**
```bash
uv venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
uv pip install -e ".[dev]"
```
4. **Install ML dependencies** (optional, requires CUDA)
```bash
uv pip install -e ".[ml]"
```
5. **Set up PostgreSQL database**
```bash
createdb gps_denied
```
6. **Configure environment**
```bash
cp .env.example .env
# Edit .env with your database credentials
```
### Running the Service
```bash
uvicorn main:app --reload --host 0.0.0.0 --port 8000
```
### API Documentation
Once running, access the interactive API docs at:
- Swagger UI: http://localhost:8000/docs
- ReDoc: http://localhost:8000/redoc
### Running Tests
```bash
pytest
```
With coverage:
```bash
pytest --cov=. --cov-report=html
```
## API Endpoints
| Method | Endpoint | Description |
|--------|----------|-------------|
| POST | `/api/v1/flights` | Create new flight |
| GET | `/api/v1/flights/{id}` | Get flight details |
| GET | `/api/v1/flights/{id}/status` | Get processing status |
| POST | `/api/v1/flights/{id}/batches` | Upload image batch |
| POST | `/api/v1/flights/{id}/user-fix` | Submit user GPS anchor |
| GET | `/api/v1/stream/{id}` | SSE stream for real-time results |
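A minimal usage sketch against a local instance (the request payload and response fields are illustrative assumptions, not the actual schema):

```python
import requests

BASE = "http://localhost:8000/api/v1"

# Create a flight; the payload fields here are hypothetical
flight = requests.post(f"{BASE}/flights", json={"name": "test-flight"}).json()

# Poll processing status, assuming the response carries an "id" field
status = requests.get(f"{BASE}/flights/{flight['id']}/status").json()
print(status)
```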
## Project Structure
```
gps-denied/
├── main.py # FastAPI application entry point
├── pyproject.toml # Project dependencies
├── api/ # REST API routes
│ ├── routes/
│ │ ├── flights.py # Flight management endpoints
│ │ ├── images.py # Image upload endpoints
│ │ └── stream.py # SSE streaming
│ └── dependencies.py # Dependency injection
├── components/ # Core processing components
│ ├── flight_api/ # API layer component
│ ├── flight_processing_engine/
│ ├── sequential_visual_odometry/
│ ├── global_place_recognition/
│ ├── metric_refinement/
│ ├── factor_graph_optimizer/
│ ├── satellite_data_manager/
│ └── ...
├── models/ # Pydantic DTOs
│ ├── core/ # GPS, Camera, Pose models
│ ├── flight/ # Flight, Waypoint models
│ ├── processing/ # VO, Matching results
│ ├── chunks/ # Route chunk models
│ └── ...
├── helpers/ # Utility functions
├── db/ # Database layer
│ ├── models.py # SQLAlchemy models
│ └── connection.py # Async database connection
└── _docs/ # Documentation
```
## License
Proprietary - All rights reserved
@@ -1,36 +0,0 @@
# Research Acceptance Criteria
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Input data: `@_docs/00_problem/input_data`. For reference only, but representative of the real data
- Restrictions: `@_docs/00_problem/restrictions.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Security approach: `@_docs/00_problem/security_approach.md`
## Role
You are a professional software architect
## Task
- Thoroughly research the problem on the internet and assess how realistic these acceptance criteria are.
- Check how critical each criterion is.
- Identify additional acceptance criteria for this specific domain.
- Research the impact of each value in the acceptance criteria on the whole system quality.
- Verify your findings with authoritative sources (official docs, papers, benchmarks).
- Consider cost/budget implications of each criterion.
- Consider timeline implications - how long would it take to meet each criterion.
## Output format
Assess acceptable ranges for each value in each acceptance criterion against state-of-the-art solutions, and propose corrections in a table with the following columns:
- Acceptance criterion name
- Our values
- Your researched criterion values
- Cost/Timeline impact
- Status: whether your research added the criterion to our system, modified it, or removed it
### Restrictions Assessment
Assess the restrictions we've put on the system. Are they realistic? Should we make them stricter, or instead add more requirements for using our system? Propose corrections in a table with the following columns:
- Restriction name
- Our values
- Your researched restriction values
- Cost/Timeline impact
- Status: whether your research added the restriction to our system, modified it, or removed it
@@ -1,37 +0,0 @@
# Research Problem
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Input data: `@_docs/00_problem/input_data`. For reference only, but representative of the real data
- Restrictions: `@_docs/00_problem/restrictions.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Security approach: `@_docs/00_problem/security_approach.md`
## Role
You are a professional researcher and software architect
## Task
- Research existing/competitor solutions for similar problems.
- Thoroughly research the problem on the internet, explore all possible ways to solve it, and split it into components.
- Then research all possible ways to solve each component and identify the most efficient state-of-the-art solutions.
- Verify that suggested tools/libraries actually exist and work as described.
- Include security considerations in each component analysis.
- Provide rough cost estimates for proposed solutions.
Be concise: the fewer words, the better, but do not omit any important details.
## Output format
Produce the resulting solution draft in the following format:
- Short Product solution description. Brief component interaction diagram.
- Existing/competitor solutions analysis (if any).
- Architecture solution that meets restrictions and acceptance criteria.
For each component, analyze the best candidate solutions and form a comparison table.
Each candidate component solution is a row with the following columns:
- Tools (library, platform) to solve component tasks
- Advantages of this solution
- Limitations of this solution
- Requirements for this solution
- Security considerations
- Estimated cost
- How it fits the problem component being solved, and the whole solution
- Testing strategy. Research how to cover the system with tests to meet all the acceptance criteria. Form a list of functional integration tests and non-functional tests.
@@ -1,40 +0,0 @@
# Solution Draft Assessment
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Input data: `@_docs/00_problem/input_data`. For reference only, but representative of the real data
- Restrictions: `@_docs/00_problem/restrictions.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Security approach: `@_docs/00_problem/security_approach.md`
- Existing solution draft: `@_docs/01_solution/solution_draft.md`
## Role
You are a professional software architect
## Task
- Thoroughly research the problem on the internet and identify all potential weak points and problems.
- Identify security weak points and vulnerabilities.
- Identify performance bottlenecks.
- Address these problems and find out ways to solve them.
- Based on your findings, form a new solution draft in the same format.
## Output format
- List all new findings (what was updated, replaced, or removed from the previous solution) in a table with the following columns:
- Old component solution
- Weak point (functional/security/performance)
- Solution (component's new solution)
- Form the new solution draft. In the updated report, do not add "new" marks and do not compare to the previous draft; write the new solution as if from scratch. Put it in the following format:
- Short Product solution description. Brief component interaction diagram.
- Architecture solution that meets restrictions and acceptance criteria.
For each component, analyze the best candidate solutions and form a comparison table.
Each candidate component solution is a row with the following columns:
- Tools (library, platform) to solve component tasks
- Advantages of this solution
- Limitations of this solution
- Requirements for this solution
- Security considerations
- Performance characteristics
- How it fits the problem component being solved, and the whole solution
- Testing strategy. Research how to cover the system with tests to meet all the acceptance criteria. Form a list of functional integration tests and non-functional tests.
@@ -1,137 +0,0 @@
# Tech Stack Selection
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Restrictions: `@_docs/00_problem/restrictions.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Security approach: `@_docs/00_problem/security_approach.md`
- Solution draft: `@_docs/01_solution/solution.md`
## Role
You are a software architect evaluating technology choices
## Task
- Evaluate technology options against requirements
- Consider team expertise and learning curve
- Assess long-term maintainability
- Document selection rationale
## Output
### Requirements Analysis
#### Functional Requirements
| Requirement | Tech Implications |
|-------------|-------------------|
| [From acceptance criteria] | |
#### Non-Functional Requirements
| Requirement | Tech Implications |
|-------------|-------------------|
| Performance | |
| Scalability | |
| Security | |
| Maintainability | |
#### Constraints
| Constraint | Impact on Tech Choice |
|------------|----------------------|
| [From restrictions] | |
### Technology Evaluation
#### Programming Language
| Option | Pros | Cons | Score (1-5) |
|--------|------|------|-------------|
| | | | |
**Selection**: [Language]
**Rationale**: [Why this choice]
#### Framework
| Option | Pros | Cons | Score (1-5) |
|--------|------|------|-------------|
| | | | |
**Selection**: [Framework]
**Rationale**: [Why this choice]
#### Database
| Option | Pros | Cons | Score (1-5) |
|--------|------|------|-------------|
| | | | |
**Selection**: [Database]
**Rationale**: [Why this choice]
#### Infrastructure/Hosting
| Option | Pros | Cons | Score (1-5) |
|--------|------|------|-------------|
| | | | |
**Selection**: [Platform]
**Rationale**: [Why this choice]
#### Key Libraries/Dependencies
| Category | Library | Version | Purpose | Alternatives Considered |
|----------|---------|---------|---------|------------------------|
| | | | | |
### Evaluation Criteria
Rate each technology option against these criteria:
1. **Fitness for purpose**: Does it meet functional requirements?
2. **Performance**: Can it meet performance requirements?
3. **Security**: Does it have good security track record?
4. **Maturity**: Is it stable and well-maintained?
5. **Community**: Active community and documentation?
6. **Team expertise**: Does team have experience?
7. **Cost**: Licensing, hosting, operational costs?
8. **Scalability**: Can it grow with the project?
### Technology Stack Summary
```
Language: [Language] [Version]
Framework: [Framework] [Version]
Database: [Database] [Version]
Cache: [Cache solution]
Message Queue: [If applicable]
CI/CD: [Platform]
Hosting: [Platform]
Monitoring: [Tools]
```
### Risk Assessment
| Technology | Risk | Mitigation |
|------------|------|------------|
| | | |
### Learning Requirements
| Technology | Team Familiarity | Training Needed |
|------------|-----------------|-----------------|
| | High/Med/Low | Yes/No |
### Decision Record
**Decision**: [Summary of tech stack]
**Date**: [YYYY-MM-DD]
**Participants**: [Who was involved]
**Status**: Approved / Pending Review
Store output to `_docs/01_solution/tech_stack.md`
## Notes
- Avoid over-engineering - choose simplest solution that meets requirements
- Consider total cost of ownership, not just initial development
- Prefer proven technologies over cutting-edge unless required
- Document trade-offs for future reference
- Ask questions about team expertise and constraints
@@ -1,37 +0,0 @@
# Security Research
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Restrictions: `@_docs/00_problem/restrictions.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Security approach: `@_docs/00_problem/security_approach.md`
- Solution: `@_docs/01_solution/solution.md`
## Role
You are a security architect
## Task
- Review solution architecture against security requirements from `security_approach.md`
- Identify attack vectors and threat model for the system
- Define security requirements per component
- Propose security controls and mitigations
## Output format
### Threat Model
- Asset inventory (what needs protection)
- Threat actors (who might attack)
- Attack vectors (how they might attack)
### Security Requirements per Component
For each component:
- Component name
- Security requirements
- Proposed controls
- Risk level (High/Medium/Low)
### Security Controls Summary
- Authentication/Authorization approach
- Data protection (encryption, integrity)
- Secure communication
- Logging and monitoring requirements
@@ -1,82 +0,0 @@
# Decompose
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Input data: `@_docs/00_problem/input_data`. For reference only, but representative of the real data
- Restrictions: `@_docs/00_problem/restrictions.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Security approach: `@_docs/00_problem/security_approach.md`
- Full Solution Description: `@_docs/01_solution/solution.md`
## Role
You are a professional software architect
## Task
- Read the problem description and solution draft; analyze them thoroughly
- Decompose the complex system solution into components with proper communication between them, so that the system solves the problem
- Think about the components and their interactions
- For each component, investigate and analyze its requirements in great detail. If additional components are needed (e.g., data preparation), create them
- The solution draft may be incomplete, so add all components necessary to meet the acceptance criteria and restrictions
- Once you have a full understanding of exactly how the components will interact with each other, create the components
## Output Format
### Components Decomposition
Store the description of each component in the file `_docs/02_components/[##]_[component_name]/[##]._component_[component_name].md` with the following structure:
1. High-level overview
- **Purpose:** A concise summary of what this component does and its role in the larger system.
- **Architectural Pattern:** Identify the design patterns used (e.g., Singleton, Observer, Factory).
2. API Reference. Create a table for each function or method with the following columns:
- Name
- Description
- Input
- Output
- Description of input and output data when it is not obvious
- Test cases which could be for the method
3. Implementation Details
- **Algorithmic Complexity:** Analyze Time (Big O) and Space complexity for critical methods.
- **State Management:** Explain how this component handles state (local vs. global).
- **Dependencies:** List key external libraries and their purpose here.
- **Error Handling:** Define error handling strategy for this component.
4. Tests
- Integration tests for the component if needed.
- Non-functional tests for the component if needed.
5. Extensions and Helpers
- Store extensions and helpers that support functionality across multiple components in the separate folder `_docs/02_components/helpers`.
6. Caveats & Edge Cases
- Known limitations
- Potential race conditions
- Potential performance bottlenecks.
### Dependency Graph
- Create component dependency graph showing implementation order
- Identify which components can be implemented in parallel
### API Contracts
- Define interfaces/contracts between components
- Specify data formats exchanged
### Logging Strategy
- Define global logging approach for the system
- Log levels, format, storage
For the whole system, create these diagrams and store them in `_docs/02_components`:
### Logic & Architecture
- Generate draw.io component diagrams showing the relations between components.
- Make sure lines do not intersect, or at least minimize intersections.
- Group semantically coherent components together
- Leave enough space for clean alignment of the component boxes
- Place external users of the system near the component blocks they use
- Generate a Mermaid flowchart diagram for each of the main control flows
- Enumerate the flows the system can operate in, and generate a flowchart diagram per flow
- Flows can relate to each other
## Notes
- Strictly follow the Single Responsibility Principle when creating components.
- Follow the dumb code - smart data principle. Do not overcomplicate
- Components should be semantically coherent. Do not spread similar functionality across multiple components
- Do not put any code yet, only names, input and output.
- Ask as many questions as possible to clarify all uncertainties.
@@ -1,30 +0,0 @@
# Component Assessment
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Input data: `@_docs/00_problem/input_data`. For reference only, but representative of the real data
- Restrictions: `@_docs/00_problem/restrictions.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Security approach: `@_docs/00_problem/security_approach.md`
- Full Solution Description: `@_docs/01_solution/solution.md`
## Role
You are a professional software architect
## Task
- Read carefully all the documents above
- Check how coherent all the components in `@_docs/02_components` are
- Follow the interaction logic and flows and look for potential problems
- Look for missing interactions or circular dependencies
- Check that all components follow the Single Responsibility Principle
- Check that everything follows the dumb code - smart data principle, so the resulting code is not overcomplicated
- Check for security vulnerabilities in component design
- Check for performance bottlenecks
- Verify API contracts are consistent across components
## Output
Form a list of problems with fixes in the following format:
- Component
- Problem type (Architectural/Security/Performance/API)
- Problem, reason
- Fix or potential fixes
@@ -1,36 +0,0 @@
# Security Check
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Restrictions: `@_docs/00_problem/restrictions.md`
- Security approach: `@_docs/00_problem/security_approach.md`
- Full Solution Description: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
## Role
You are a security architect
## Task
- Review each component against security requirements
- Identify security gaps in component design
- Verify security controls are properly distributed across components
- Check for common vulnerabilities (injection, auth bypass, data leaks)
## Output
### Security Assessment per Component
For each component:
- Component name
- Security gaps found
- Required security controls
- Priority (High/Medium/Low)
### Cross-Component Security
- Authentication flow assessment
- Authorization gaps
- Data flow security (encryption in transit/at rest)
- Logging for security events
### Recommendations
- Required changes before implementation
- Security helpers/components to add
@@ -1,67 +0,0 @@
# Generate Jira Epics
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Input data: `@_docs/00_problem/input_data`. For reference only, but representative of the real data
- Restrictions: `@_docs/00_problem/restrictions.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Full Solution Description: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
## Role
You are a world class product manager
## Task
- Generate Jira Epics from the Components using Jira MCP
- Order epics by dependency (which must be done first)
- Include rough effort estimation per epic
- Ensure each epic has a clear goal and acceptance criteria; verify them against the project's acceptance criteria
- Generate a draw.io components diagram, based on the previous diagram, showing the relations between components and the Jira Epic number corresponding to each component.
## Output
Epic format:
- Epic Name [Component] [Outcome]
- Example: Data Ingestion - Near-real-time pipeline
- Epic Summary (1-2 sentences)
- What we are building + why it matters
- Problem / Context
- Current state, pain points, constraints, business opportunities, links to architecture decision records or diagrams
- Scope. Detailed description
- In Scope. Bullet list of capabilities (not tasks)
- Out-of-scope. Explicit exclusions to prevent scope creep
- Assumptions
- System design specifics, input material quality, data structures, network availability etc
- Dependencies
- Other epics that must be completed first
- Other components, services, hardware, environments, certificates, data sources etc.
- Effort Estimation
- T-shirt size (S/M/L/XL) or story points range
- Users / Consumers
- Internal, External, Systems, Short list of the key use cases.
- Requirements
- Functional - API expectations, events, data handling, idempotency, retry behavior etc
- Non-functional - Availability, latency, throughput, scalability, processing limits, data retention etc
- Security/Compliance - Authentication, encryption, secrets, logging, SOC2/ISO if applicable
- Design & Architecture (links)
- High-level diagram link, Data flow, sequence diagrams, schemas etc
- Definition of Done (Epic-level)
- Feature list per epic scope
- Automated tests (unit/integration/e2e) + minimum coverage threshold met
- Runbooks if applicable
- Documentation updated
- Acceptance Criteria (measurable)
- Risks & Mitigations
- Top 5 risks (technical + delivery) with mitigation owners or systems involved
- Label epic
- component:<name>
- env:prod|stg
- type:platform|data|integration
- Jira Issue Breakdown
- Create consistent child issues under the epic
- Spikes
- Tasks
- Technical enablers
## Notes
- Be as concise as possible when formulating epics. The fewer words with the same meaning, the better the epic.
@@ -1,57 +0,0 @@
# Data Model Design
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Restrictions: `@_docs/00_problem/restrictions.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Full Solution Description: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
## Role
You are a professional database architect
## Task
- Analyze solution and components to identify all data entities
- Design database schema that supports all component requirements
- Define relationships, constraints, and indexes
- Consider data access patterns for query optimization
- Plan for data migration if applicable
## Output
### Entity Relationship Diagram
- Create ERD showing all entities and relationships
- Use Mermaid or draw.io format
### Schema Definition
For each entity:
- Table name
- Columns with types, constraints, defaults
- Primary keys
- Foreign keys and relationships
- Indexes (clustered, non-clustered)
- Partitioning strategy (if needed)
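As a shape reference only, a minimal SQLAlchemy entity sketch (table and column names are hypothetical; the real schema is the output of this task):

```python
from sqlalchemy import Column, DateTime, Integer, String, func
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Flight(Base):
    # Hypothetical entity used only to illustrate the expected level of detail
    __tablename__ = "flights"

    id = Column(Integer, primary_key=True)
    name = Column(String(255), nullable=False, index=True)
    created_at = Column(DateTime(timezone=True), server_default=func.now())
```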
### Data Access Patterns
- List common queries per component
- Identify hot paths requiring optimization
- Recommend caching strategy
### Migration Strategy
- Initial schema creation scripts
- Seed data requirements
- Rollback procedures
### Storage Estimates
- Estimated row counts per table
- Storage requirements
- Growth projections
Store output to `_docs/02_components/data_model.md`
## Notes
- Follow database normalization principles (3NF minimum)
- Consider read vs write optimization based on access patterns
- Plan for horizontal scaling if required
- Ask questions to clarify data requirements
@@ -1,64 +0,0 @@
# API Contracts Design
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Restrictions: `@_docs/00_problem/restrictions.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Full Solution Description: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
- Data Model: `@_docs/02_components/data_model.md`
## Role
You are a professional API architect
## Task
- Define API contracts between all components
- Specify external API endpoints (if applicable)
- Define data transfer objects (DTOs)
- Establish error response standards
- Plan API versioning strategy
## Output
### Internal Component Interfaces
For each component boundary:
- Interface name
- Methods with signatures
- Input/Output DTOs
- Error types
- Async/Sync designation
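For example, an internal boundary might be captured as a typed protocol; a minimal sketch where the interface name and DTO fields are hypothetical:

```python
from typing import Protocol
from pydantic import BaseModel

class TileRequest(BaseModel):
    # Hypothetical input DTO
    x: int
    y: int
    zoom: int

class SatelliteTileSource(Protocol):
    # Async designation is explicit in the method signature
    async def fetch_tile(self, request: TileRequest) -> bytes: ...
```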
### External API Specification
Generate OpenAPI/Swagger spec including:
- Endpoints with HTTP methods
- Request/Response schemas
- Authentication requirements
- Rate limiting rules
- Example requests/responses
### DTO Definitions
For each data transfer object:
- Name and purpose
- Fields with types
- Validation rules
- Serialization format (JSON, Protobuf, etc.)
### Error Contract
- Standard error response format
- Error codes and messages
- HTTP status code mapping
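A minimal sketch of such an error envelope, assuming Pydantic DTOs as elsewhere in the stack (field names are illustrative):

```python
from pydantic import BaseModel, Field

class ErrorResponse(BaseModel):
    # Hypothetical standard error envelope
    code: str = Field(description="Machine-readable error code, e.g. FLIGHT_NOT_FOUND")
    message: str = Field(description="Human-readable description")
    details: dict = Field(default_factory=dict, description="Optional structured context")
```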
### Versioning Strategy
- API versioning approach (URL, header, query param)
- Deprecation policy
- Breaking vs non-breaking change definitions
Store output to `_docs/02_components/api_contracts.md`
Store OpenAPI spec to `_docs/02_components/openapi.yaml` (if applicable)
## Notes
- Follow RESTful conventions for external APIs
- Keep internal interfaces minimal and focused
- Design for backward compatibility
- Ask questions to clarify integration requirements
@@ -1,59 +0,0 @@
# Generate Tests
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Input data: `@_docs/00_problem/input_data`. For reference only, but representative of the real data
- Restrictions: `@_docs/00_problem/restrictions.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Security approach: `@_docs/00_problem/security_approach.md`
- Full Solution Description: `@_docs/01_solution/solution.md`
## Role
You are a professional Quality Assurance Engineer
## Task
- Compose tests according to the test strategy
- Cover all the criteria with test specs
- Minimum coverage target: 75%
## Output
Store all test specs in files `_docs/02_tests/[##]_[test_name]_spec.md`
Types and structures of tests:
- Integration tests
- Summary
- Detailed description
- Input data for this specific test scenario
- Expected result
- Maximum expected time to get result
- Performance tests
- Summary
- Load/stress scenario description
- Expected throughput/latency
- Resource limits
- Security tests
- Summary
- Attack vector being tested
- Expected behavior
- Pass/Fail criteria
- Acceptance tests
- Summary
- Detailed description
- Preconditions for tests
- Steps:
- Step1 - Expected result1
- Step2 - Expected result2
...
- StepN - Expected resultN
- Test Data Management
- Required test data
- Setup/Teardown procedures
- Data isolation strategy
## Notes
- Do not put any code yet
- Ask as many questions as needed.
@@ -1,111 +0,0 @@
# Risk Assessment
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Restrictions: `@_docs/00_problem/restrictions.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Full Solution Description: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
- Estimation: `@_docs/02_components/estimation.md`
## Role
You are a technical risk analyst
## Task
- Identify technical and project risks
- Assess probability and impact
- Define mitigation strategies
- Create risk monitoring plan
## Output
### Risk Register
| ID | Risk | Category | Probability | Impact | Score | Mitigation | Owner |
|----|------|----------|-------------|--------|-------|------------|-------|
| R1 | | Tech/Schedule/Resource/External | High/Med/Low | High/Med/Low | H/M/L | | |
### Risk Scoring Matrix
| | Low Impact | Medium Impact | High Impact |
|--|------------|---------------|-------------|
| High Probability | Medium | High | Critical |
| Medium Probability | Low | Medium | High |
| Low Probability | Low | Low | Medium |
### Risk Categories
#### Technical Risks
- Technology choices may not meet requirements
- Integration complexity underestimated
- Performance targets unachievable
- Security vulnerabilities
#### Schedule Risks
- Scope creep
- Dependencies delayed
- Resource unavailability
- Underestimated complexity
#### Resource Risks
- Key person dependency
- Skill gaps
- Team availability
#### External Risks
- Third-party API changes
- Vendor reliability
- Regulatory changes
### Top Risks (Ranked)
#### 1. [Highest Risk]
- **Description**:
- **Probability**: High/Medium/Low
- **Impact**: High/Medium/Low
- **Mitigation Strategy**:
- **Contingency Plan**:
- **Early Warning Signs**:
- **Owner**:
#### 2. [Second Highest Risk]
...
### Risk Mitigation Plan
| Risk ID | Mitigation Action | Timeline | Cost | Responsible |
|---------|-------------------|----------|------|-------------|
| R1 | | | | |
### Risk Monitoring
#### Review Schedule
- Daily standup: Discuss blockers (potential risks materializing)
- Weekly: Review risk register, update probabilities
- Sprint end: Comprehensive risk review
#### Early Warning Indicators
| Risk | Indicator | Threshold | Action |
|------|-----------|-----------|--------|
| | | | |
### Contingency Budget
- Time buffer: 20% of estimated duration
- Scope flexibility: [List features that can be descoped]
- Resource backup: [Backup resources if available]
### Acceptance Criteria for Risks
Define which risks are acceptable:
- Low risks: Accepted, monitored
- Medium risks: Mitigation required
- High risks: Mitigation + contingency required
- Critical risks: Must be resolved before proceeding
Store output to `_docs/02_components/risk_assessment.md`
## Notes
- Update risk register throughout project
- Escalate critical risks immediately
- Consider both likelihood and impact
- Ask questions to uncover hidden risks
@@ -1,40 +0,0 @@
# Generate Features for the provided component spec
## Input parameters
- component_spec.md. Required. Do NOT proceed if it is NOT provided!
- parent Jira Epic in the format AZ-###. Required. Do NOT proceed if it is NOT provided!
## Prerequisites
- Jira Epics must be created first (step 2.20)
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Input data: `@_docs/00_problem/input_data`. For reference only, but representative of the real data
- Restrictions: `@_docs/00_problem/restrictions.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Full Solution Description: `@_docs/01_solution/solution.md`
## Role
You are a professional software architect
## Task
- Read very carefully component_spec.md
- Decompose component_spec.md into features. If the component is simple or atomic, create only one feature.
- Split into many features only if it is necessary and makes implementation easier
- Do not create features of other components, create *only* features of this exact component
- Each feature should be atomic; it can contain no APIs or a list of semantically connected APIs
- After splitting, assess your decomposition
- Add complexity points estimation (1, 2, 3, 5, 8) per feature
- Note feature dependencies (some features may be independent)
- Use `@gen_feature_spec.md` as complete guidance for generating each feature spec
- Generate a Jira task per feature via the Jira MCP, following `@gen_jira_task_and_branch.md`.
## Output
- The file name of the feature specs should follow this format: `[component's number ##].[feature's number ##]_feature_[feature_name].md`.
- The structure of the feature spec should follow this spec `@gen_feature_spec.md`
- The structure of the Jira task should follow this spec: `@gen_jira_task_and_branch.md`
- Include dependency notes (which features can be done in parallel)
## Notes
- Do NOT generate any code yet, only brief explanations what should be done.
- Ask as many questions as needed.
@@ -1,73 +0,0 @@
# Create Initial Structure
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`.
- Input data: `@_docs/00_problem/input_data`. For reference only, but representative of the real data.
- Restrictions: `@_docs/00_problem/restrictions.md`.
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`.
- Security approach: `@_docs/00_problem/security_approach.md`.
- Full Solution Description: `@_docs/01_solution/solution.md`
- Components with Features specifications: `@_docs/02_components`
## Role
You are a professional software architect
## Task
- Read carefully all the component specs and features in the components folder: `@_docs/02_components`
- Investigate on the internet the best ways and tools to implement the components and their features
- Make a plan for creating the initial structure:
- DTOs
- component's interfaces
- empty implementations
- helpers - empty implementations or interfaces
- Add .gitignore appropriate for the project's language/framework
- Add .env.example with required environment variables
- Configure CI/CD pipeline with full stages:
- Build stage
- Lint/Static analysis stage
- Unit tests stage
- Integration tests stage
- Security scan stage (SAST/dependency check)
- Deploy to staging stage (triggered on merge to stage branch)
- Define environment strategy based on `@_docs/00_templates/environment_strategy.md`:
- Development environment configuration
- Staging environment configuration
- Production environment configuration (if applicable)
- Add database migration setup if applicable
- Add README.md, describe the project by @_docs/01_solution/solution.md
- Create a separate folder for the integration tests (not a separate repo)
- Configure branch protection rules recommendations
## Example
The structure should look roughly like this:
- .gitignore
- .env.example
- .github/workflows/ (or .gitlab-ci.yml)
- api
- components
- component1_folder
- component2_folder
- ...
- db
- migrations/
- helpers
- models
- tests
- unit_test1_project1_folder
- unit_test2_project2_folder
...
- integration_tests_folder
- test data
- test01_file
- test02_file
...
Some semantically coherent components (or one big component) may also live in their own project or project folder.
Depending on the language's common conventions, interfaces can live in a common layer or project (C# or Java) or in each component's folder (Python).
## Notes
- Follow SOLID principles
- Follow KISS principle. Dumb code - smart data.
- Follow DRY principles, but do not overcomplicate things; occasional duplication is fine if it keeps the code simpler
- Follow conventions and rules of the project's programming language
- Ask as many questions as needed, everything should be clear how to implement each feature
@@ -1,35 +0,0 @@
# Implement Component and Features by Spec
## Input parameter
component_folder
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`.
- Input data: `@_docs/00_problem/input_data`. For reference only, but representative of the real data.
- Restrictions: `@_docs/00_problem/restrictions.md`.
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`.
- Security approach: `@_docs/00_problem/security_approach.md`.
- Full Solution Description: `@_docs/01_solution/solution.md`
## Role
You are a professional software architect and developer
## Task
- Read carefully initial data and component spec in the component_folder: `@_docs/02_components/[##]_[component_name]/[##]._component_[component_name]`
- Read carefully all the component features in the component_folder: `@_docs/02_components/[##]_[component_name]/[##].[##]_feature_[feature_name]`
- Investigate on the internet the best ways and tools to implement the component and its features
- The solutions you find may require architectural reorganization of the features. That is fine: propose it, and if the user agrees, include the reorganization in the feature build plan. Interfaces may also need to be changed, removed, or added; that is fine too.
- Analyze the existing codebase and get full context for the component's implementation
- Make sure each feature is connected and communicated properly with other features and existing code
- If the component depends on another component, create a temporary mock for the dependency (a minimal mock sketch follows this list)
- For each feature:
- Implement the feature
- Implement error handling per defined strategy
- Implement logging per defined strategy
- Implement all unit tests from the test cases description; add test-result checks to the plan steps
- Implement all integration tests for the feature; add test-result checks to the plan steps. Analyze existing tests and decide whether to create new ones or extend existing ones
- Add descriptions of all the component's integration tests to the implementation plan; add test-result checks to the plan steps
- After component is complete, replace mocks with real implementations (mock cleanup)
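A minimal sketch of such a temporary mock, assuming Python and `typing.Protocol` (the interface and names are hypothetical):

```python
from typing import Protocol

class SatelliteTileProvider(Protocol):
    # Hypothetical interface of the not-yet-implemented dependency
    def get_tile(self, x: int, y: int, zoom: int) -> bytes: ...

class MockTileProvider:
    # Temporary stand-in; replaced with the real component during mock cleanup
    def get_tile(self, x: int, y: int, zoom: int) -> bytes:
        return b""  # deterministic placeholder payload
```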
## Notes
- Ask as many questions as needed, everything should be clear how to implement each feature
@@ -1,39 +0,0 @@
# Code Review
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`.
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`.
- Security approach: `@_docs/00_problem/security_approach.md`.
- Full Solution Description: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
## Role
You are a senior software engineer performing code review
## Task
- Review implemented code against component specifications
- Check code quality: readability, maintainability, SOLID principles
- Check error handling consistency
- Check logging implementation
- Check security requirements are met
- Check test coverage is adequate
- Identify code smells and technical debt
## Output
### Issues Found
For each issue:
- File/Location
- Issue type (Bug/Security/Performance/Style/Debt)
- Description
- Suggested fix
- Priority (High/Medium/Low)
### Summary
- Total issues by type
- Blocking issues that must be fixed
- Recommended improvements
## Notes
- Can also use Cursor's built-in review feature
- Focus on critical issues first
@@ -1,64 +0,0 @@
# CI/CD Pipeline Validation & Enhancement
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Restrictions: `@_docs/00_problem/restrictions.md`
- Full Solution Description: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
- Environment Strategy: `@_docs/00_templates/environment_strategy.md`
## Role
You are a DevOps engineer
## Task
- Review existing CI/CD pipeline configuration
- Validate all stages are working correctly
- Optimize pipeline performance (parallelization, caching)
- Ensure test coverage gates are enforced
- Verify security scanning is properly configured
- Add missing quality gates
## Checklist
### Pipeline Health
- [ ] All stages execute successfully
- [ ] Build time is acceptable (<10 min for most projects)
- [ ] Caching is properly configured (dependencies, build artifacts)
- [ ] Parallel execution where possible
### Quality Gates
- [ ] Code coverage threshold enforced (minimum 75%)
- [ ] Linting errors block merge
- [ ] Security vulnerabilities block merge (critical/high)
- [ ] All tests must pass
### Environment Deployments
- [ ] Staging deployment works on merge to stage branch
- [ ] Environment variables properly configured per environment
- [ ] Secrets are securely managed (not in code)
- [ ] Rollback procedure documented
### Monitoring
- [ ] Build notifications configured (Slack, email, etc.)
- [ ] Failed build alerts
- [ ] Deployment success/failure notifications
## Output
### Pipeline Status Report
- Current pipeline configuration summary
- Issues found and fixes applied
- Performance metrics (build times)
### Recommended Improvements
- Short-term improvements
- Long-term optimizations
### Quality Gate Configuration
- Thresholds configured
- Enforcement rules
## Notes
- Do not break existing functionality
- Test changes in separate branch first
- Document any manual steps required
@@ -1,72 +0,0 @@
# Deployment Strategy Planning
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Restrictions: `@_docs/00_problem/restrictions.md`
- Full Solution Description: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
- Environment Strategy: `@_docs/00_templates/environment_strategy.md`
## Role
You are a DevOps/Platform engineer
## Task
- Define deployment strategy for each environment
- Plan deployment procedures and automation
- Define rollback procedures
- Establish deployment verification steps
- Document manual intervention points
## Output
### Deployment Architecture
- Infrastructure diagram (where components run)
- Network topology
- Load balancing strategy
- Container/VM configuration
### Deployment Procedures
#### Staging Deployment
- Trigger conditions
- Pre-deployment checks
- Deployment steps
- Post-deployment verification
- Smoke tests to run
#### Production Deployment
- Approval workflow
- Deployment window
- Pre-deployment checks
- Deployment steps (blue-green, rolling, canary)
- Post-deployment verification
- Smoke tests to run
### Rollback Procedures
- Rollback trigger criteria
- Rollback steps per environment
- Data rollback considerations
- Communication plan during rollback
### Health Checks
- Liveness probe configuration
- Readiness probe configuration
- Custom health endpoints
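A minimal sketch of liveness/readiness endpoints, assuming a FastAPI service (the dependency check is a placeholder, not the real implementation):

```python
from fastapi import FastAPI, Response

app = FastAPI()

@app.get("/health/live")
def liveness() -> dict:
    # Liveness: the process is up and able to serve requests
    return {"status": "ok"}

@app.get("/health/ready")
def readiness(response: Response) -> dict:
    # Readiness: critical dependencies are reachable (placeholder check)
    db_ok = True  # replace with a real database/connection check
    if not db_ok:
        response.status_code = 503
        return {"status": "unavailable"}
    return {"status": "ready"}
```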
### Deployment Checklist
- [ ] All tests pass in CI
- [ ] Security scan clean
- [ ] Database migrations reviewed
- [ ] Feature flags configured
- [ ] Monitoring alerts configured
- [ ] Rollback plan documented
- [ ] Stakeholders notified
Store output to `_docs/02_components/deployment_strategy.md`
## Notes
- Prefer automated deployments over manual
- Zero-downtime deployments for production
- Always have a rollback plan
- Ask questions about infrastructure constraints
@@ -1,39 +0,0 @@
# Implement Tests by Spec
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`.
- Input data: `@_docs/00_problem/input_data`. For reference only, but representative of the real data.
- Restrictions: `@_docs/00_problem/restrictions.md`.
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`.
- Full Solution Description: `@_docs/01_solution/solution.md`
- Tests specifications: `@_docs/02_tests`
## Role
You are a professional software architect and developer
## Task
- Read all the initial data carefully and understand the whole system's goals
- Check that a separate tests folder exists (it should have been generated by @3.05_implement_initial_structure.md)
- Set up Docker environment for testing:
- Create docker-compose.yml for test environment
- Configure test database container
- Configure application container
- For each test description:
- Prepare all the data necessary for testing, or verify it already exists
- Check existing integration tests; if a similar test already exists, update it
- Implement the test by specification
- Implement test data management:
- Setup fixtures/factories
- Teardown/cleanup procedures
- Run system and integration tests in docker containers
- If tests fail, fix the problems until all tests pass. If one or more tests fail due to data missing from the user, an API, or another system, request it from the developer.
- Repeat the test cycle, iteratively fixing found bugs, until no tests fail. Ask the user for additional information if something new appears
- Ensure tests run in CI pipeline
- Compose the final test results in a CSV with the following columns:
- Test filename
- Execution time
- Result
## Notes
- Ask as many questions as needed, everything should be clear how to implement each feature
@@ -1,123 +0,0 @@
# Observability Planning
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Full Solution Description: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
- Deployment Strategy: `@_docs/02_components/deployment_strategy.md`
## Role
You are a Site Reliability Engineer (SRE)
## Task
- Define logging strategy across all components
- Plan metrics collection and dashboards
- Design distributed tracing (if applicable)
- Establish alerting rules
- Document incident response procedures
## Output
### Logging Strategy
#### Log Levels
| Level | Usage | Example |
|-------|-------|---------|
| ERROR | Exceptions, failures requiring attention | Database connection failed |
| WARN | Potential issues, degraded performance | Retry attempt 2/3 |
| INFO | Significant business events | User registered, Order placed |
| DEBUG | Detailed diagnostic information | Request payload, Query params |
#### Log Format
```json
{
"timestamp": "ISO8601",
"level": "INFO",
"service": "service-name",
"correlation_id": "uuid",
"message": "Event description",
"context": {}
}
```
#### Log Storage
- Development: Console/file
- Staging: Centralized (ELK, CloudWatch, etc.)
- Production: Centralized with retention policy
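A minimal sketch of emitting the format above with the standard library (the service name and field handling are illustrative assumptions):

```python
import json
import logging
import sys
from datetime import datetime, timezone

class JsonFormatter(logging.Formatter):
    # Renders records as the JSON envelope defined above
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "level": record.levelname,
            "service": "flight-api",  # hypothetical service name
            "correlation_id": getattr(record, "correlation_id", None),
            "message": record.getMessage(),
            "context": getattr(record, "context", {}),
        })

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
logging.basicConfig(level=logging.INFO, handlers=[handler])
logging.info("Flight created", extra={"context": {"flight_id": 42}})
```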
### Metrics
#### System Metrics
- CPU usage
- Memory usage
- Disk I/O
- Network I/O
#### Application Metrics
| Metric | Type | Description |
|--------|------|-------------|
| request_count | Counter | Total requests |
| request_duration | Histogram | Response time |
| error_count | Counter | Failed requests |
| active_connections | Gauge | Current connections |
#### Business Metrics
- [Define based on acceptance criteria]
### Distributed Tracing
#### Trace Context
- Correlation ID propagation
- Span naming conventions
- Sampling strategy
#### Integration Points
- HTTP headers
- Message queue metadata
- Database query tagging
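Correlation-ID propagation can be as small as one HTTP middleware; a sketch assuming FastAPI and an `X-Correlation-ID` header convention:

```python
import uuid
from fastapi import FastAPI, Request

app = FastAPI()

@app.middleware("http")
async def correlation_id_middleware(request: Request, call_next):
    # Reuse the caller's correlation ID, or mint one at the edge
    cid = request.headers.get("X-Correlation-ID") or str(uuid.uuid4())
    response = await call_next(request)
    response.headers["X-Correlation-ID"] = cid  # echo it back for tracing
    return response
```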
### Alerting
#### Alert Categories
| Severity | Response Time | Examples |
|----------|---------------|----------|
| Critical | 5 min | Service down, Data loss |
| High | 30 min | High error rate, Performance degradation |
| Medium | 4 hours | Elevated latency, Disk usage high |
| Low | Next business day | Non-critical warnings |
#### Alert Rules
```yaml
alerts:
- name: high_error_rate
condition: error_rate > 5%
duration: 5m
severity: high
- name: service_down
condition: health_check_failed
duration: 1m
severity: critical
```
### Dashboards
#### Operations Dashboard
- Service health status
- Request rate and error rate
- Response time percentiles
- Resource utilization
#### Business Dashboard
- Key business metrics
- User activity
- Transaction volumes
Store output to `_docs/02_components/observability_plan.md`
## Notes
- Follow the principle: "If it's not monitored, it's not in production"
- Balance verbosity with cost
- Ensure PII is not logged
- Plan for log rotation and retention
@@ -1,29 +0,0 @@
# User Input for Refactoring
## Task
Collect and document goals for the refactoring project.
## User should provide:
Create in `_docs/00_problem`:
- `problem_description.md`:
- What the system currently does
- What changes/improvements are needed
- Pain points in current implementation
- `acceptance_criteria.md`: Success criteria for the refactoring
- `security_approach.md`: Security requirements (if applicable)
## Example
- `problem_description.md`
Current system: E-commerce platform with monolithic architecture.
Current issues: Slow deployments, difficult scaling, tightly coupled modules.
Goals: Break into microservices, improve test coverage, reduce deployment time.
- `acceptance_criteria.md`
- All existing functionality preserved
- Test coverage increased from 40% to 75%
- Deployment time reduced by 50%
- No circular dependencies between modules
## Output
Store user input in `_docs/00_problem/` folder for reference by subsequent steps.
@@ -1,92 +0,0 @@
# Capture Baseline Metrics
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Current codebase
## Role
You are a software engineer preparing for refactoring
## Task
- Capture current system metrics as baseline
- Document current behavior
- Establish benchmarks to compare against after refactoring
- Identify critical paths to monitor
## Output
### Code Quality Metrics
#### Coverage
```
Current test coverage: XX%
- Unit test coverage: XX%
- Integration test coverage: XX%
- Critical paths coverage: XX%
```
#### Code Complexity
- Cyclomatic complexity (average):
- Most complex functions (top 5):
- Lines of code:
- Technical debt ratio:
#### Code Smells
- Total code smells:
- Critical issues:
- Major issues:
### Performance Metrics
#### Response Times
| Endpoint/Operation | P50 | P95 | P99 |
|-------------------|-----|-----|-----|
| [endpoint1] | Xms | Xms | Xms |
| [operation1] | Xms | Xms | Xms |
#### Resource Usage
- Average CPU usage:
- Average memory usage:
- Database query count per operation:
#### Throughput
- Requests per second:
- Concurrent users supported:
### Functionality Inventory
List all current features/endpoints:
| Feature | Status | Test Coverage | Notes |
|---------|--------|---------------|-------|
| | | | |
### Dependency Analysis
- Total dependencies:
- Outdated dependencies:
- Security vulnerabilities in dependencies:
### Build Metrics
- Build time:
- Test execution time:
- Deployment time:
Store output to `_docs/04_refactoring/baseline_metrics.md`
## Measurement Commands
Use project-appropriate tools for your tech stack:
| Metric | Python | C#/.NET | Java | Go | JavaScript/TypeScript |
|--------|--------|---------|------|-----|----------------------|
| Test coverage | pytest --cov | dotnet test --collect:"XPlat Code Coverage" | JaCoCo | go test -cover | jest --coverage |
| Code complexity | radon cc | Visual Studio Code Metrics | PMD | gocyclo | ESLint `complexity` rule |
| Lines of code | cloc | cloc | cloc | cloc | cloc |
| Dependency check | pip-audit | dotnet list package --vulnerable | mvn dependency-check:check | govulncheck | npm audit |
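A hypothetical helper showing how the Python column could be captured into the baseline file; it assumes pytest-cov, radon, cloc, and pip-audit are installed:
```python
import subprocess
from pathlib import Path

# Commands mirror the Python column of the table; adjust per tech stack.
COMMANDS = {
    "Test coverage": ["pytest", "--cov"],
    "Code complexity": ["radon", "cc", "-a", "."],
    "Lines of code": ["cloc", "."],
    "Dependency check": ["pip-audit"],
}

def capture_baseline(out_file: str = "_docs/04_refactoring/baseline_metrics.md") -> None:
    sections = ["# Baseline Metrics"]
    for label, cmd in COMMANDS.items():
        # Capture stdout even when a tool exits non-zero (e.g. audit findings).
        result = subprocess.run(cmd, capture_output=True, text=True)
        sections.append(f"## {label}\n\n{result.stdout.strip()}")
    Path(out_file).parent.mkdir(parents=True, exist_ok=True)
    Path(out_file).write_text("\n\n".join(sections))

capture_baseline()
```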
## Notes
- Run measurements multiple times for accuracy
- Document measurement methodology
- Save raw data for comparison
- Focus on metrics relevant to refactoring goals
@@ -1,48 +0,0 @@
# Create Documentation from Existing Codebase
## Role
You are a Principal Software Architect and Technical Communication Expert.
## Task
Generate production-grade documentation from existing code that serves both maintenance engineers and consuming developers.
## Core Directives:
- Truthfulness: Never invent features. Ground every claim in the provided code.
- Clarity: Use professional, third-person objective tone.
- Completeness: Document every public interface, summarize private internals unless critical.
- Visuals: Visualize complex logic using Mermaid.js.
## Process:
1. Analyze the project structure; form a rough understanding of the system from its directories, projects, and files
2. Go file by file: analyze each method, condense it into a short API reference description, and sketch a rough flow diagram
3. Review the summaries alongside the code, trace the connections between components, and form the detailed structure
## Output Format
Store description of each component to `_docs/02_components/[##]_[component_name]/[##]._component_[component_name].md`:
1. High-level overview
- **Purpose:** Component role in the larger system.
- **Architectural Pattern:** Design patterns used.
2. Logic & Architecture
- Mermaid `graph TD` or `sequenceDiagram` (a syntax example follows this list)
- draw.io component diagram
3. API Reference table:
- Name, Description, Input, Output
- Test cases for the method
4. Implementation Details
- **Algorithmic Complexity:** Big O for critical methods.
- **State Management:** Local vs. global state.
- **Dependencies:** External libraries.
5. Tests
- Integration tests needed
- Non-functional tests needed
6. Extensions and Helpers
- Store to `_docs/02_components/helpers`
7. Caveats & Edge Cases
- Known limitations
- Race conditions
- Performance bottlenecks
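For item 2 above, a syntactically valid Mermaid `graph TD` skeleton — the component names are placeholders, not drawn from any real codebase:
```mermaid
graph TD
    Client[Client] --> API[API Gateway]
    API --> Orders[Order Service]
    Orders -->|reads/writes| DB[(Database)]
    Orders -->|publishes events| Queue[[Message Queue]]
```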
## Notes
- Verify all parameters are captured
- Verify Mermaid diagrams are syntactically correct
- Explain why the code works, not just how
@@ -1,36 +0,0 @@
# Form Solution with Flows
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Generated component docs: `@_docs/02_components`
## Role
You are a professional software architect
## Task
- Review all generated component documentation
- Synthesize into a cohesive solution description
- Create flow diagrams showing how components interact
- Identify the main use cases and their flows
## Output
### Solution Description
Store to `_docs/01_solution/solution.md`:
- Short description of the product solution
- Component interaction diagram (draw.io)
- Components overview and their responsibilities
### Flow Diagrams
Store to `_docs/02_components/system_flows.md`:
- Mermaid flowchart diagrams for the main control flows:
  - Create a flow diagram per major use case
  - Show component interactions
  - Note data transformations
  - Flows can relate to each other
  - Show entry points, decision points, and outputs
## Notes
- Focus on documenting what exists, not what should be
@@ -1,39 +0,0 @@
# Deep Research of Approaches
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Current solution: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
## Role
You are a professional researcher and software architect
## Task
- Analyze current implementation patterns
- Research modern approaches for similar systems
- Identify what could be done differently
- Suggest improvements based on state-of-the-art practices
## Output
### Current State Analysis
- Patterns currently used
- Strengths of current approach
- Weaknesses identified
### Alternative Approaches
For each major component/pattern:
- Current approach
- Alternative approach
- Pros/Cons comparison
- Migration effort (Low/Medium/High)
### Recommendations
- Prioritized list of improvements
- Quick wins (low effort, high impact)
- Strategic improvements (higher effort)
## Notes
- Focus on practical, achievable improvements
- Consider existing codebase constraints
@@ -1,40 +0,0 @@
# Solution Assessment with Codebase
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Current solution: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
- Research findings: from step 4.30
## Role
You are a professional software architect
## Task
- Assess current implementation against acceptance criteria
- Identify weak points in current codebase
- Map research recommendations to specific code areas
- Prioritize changes based on impact and effort
## Output
### Weak Points Assessment
For each issue found:
- Location (component/file)
- Weak point description
- Impact (High/Medium/Low)
- Proposed solution
### Gap Analysis
- Acceptance criteria vs current state
- What's missing
- What needs improvement
### Refactoring Roadmap
- Phase 1: Critical fixes
- Phase 2: Major improvements
- Phase 3: Nice-to-have enhancements
## Notes
- Ground all findings in actual code
- Be specific about locations and changes needed