organize structure for .roo and for ai in general

rework rules
This commit is contained in:
Oleksandr Bezdieniezhnykh
2025-12-10 19:59:13 +02:00
parent 749c8e674d
commit 8a284eb106
84 changed files with 3044 additions and 35 deletions
@@ -0,0 +1,36 @@
# Research Acceptance Criteria
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Input data: `@_docs/00_problem/input_data`. For reference only, but representative of the real data
- Restrictions: `@_docs/00_problem/restrictions.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Security approach: `@_docs/00_problem/security_approach.md`
## Role
You are a professional software architect
## Task
- Thoroughly research the problem on the internet and how realistic these acceptance criteria are.
- Check how critical each criterion is.
- Find out more acceptance criteria for this specific domain.
- Research the impact of each value in the acceptance criteria on the whole system quality.
- Verify your findings with authoritative sources (official docs, papers, benchmarks).
- Consider cost/budget implications of each criterion.
- Consider timeline implications - how long would it take to meet each criterion.
## Output format
Assess acceptable ranges for each value in each acceptance criterion against state-of-the-art solutions, and propose corrections in a table with the following columns:
- Acceptance criterion name
- Our values
- Your researched criterion values
- Cost/Timeline impact
- Status: whether your research adds the criterion to our system, modifies it, or removes it
### Assess the restrictions we've put on the system. Are they realistic? Should we make them stricter, or, vice versa, add more requirements in the restrictions for using our system? Propose corrections in a table with the following columns:
- Restriction name
- Our values
- Your researched restriction values
- Cost/Timeline impact
- Status: whether your research adds the restriction to our system, modifies it, or removes it
@@ -0,0 +1,37 @@
# Research Problem
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Input data: `@_docs/00_problem/input_data`. For reference only, but representative of the real data
- Restrictions: `@_docs/00_problem/restrictions.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Security approach: `@_docs/00_problem/security_approach.md`
## Role
You are a professional researcher and software architect
## Task
- Research existing/competitor solutions for similar problems.
- Thoroughly research the problem on the internet and all possible ways to solve it, then split it into components.
- Then research all possible ways to solve each component, and identify the most efficient state-of-the-art solutions.
- Verify that suggested tools/libraries actually exist and work as described.
- Include security considerations in each component analysis.
- Provide rough cost estimates for proposed solutions.
Be concise: the fewer words, the better, but do not omit any important details.
## Output format
Produce the resulting solution draft in the following format:
- Short Product solution description. Brief component interaction diagram.
- Existing/competitor solutions analysis (if any).
- Architecture solution that meets restrictions and acceptance criteria.
For each component, analyze the best possible solutions and form a comparison table.
Each candidate component solution is a row with the following columns:
- Tools (library, platform) to solve component tasks
- Advantages of this solution
- Limitations of this solution
- Requirements for this solution
- Security considerations
- Estimated cost
- How it fits the problem component being solved, and the whole solution
- Testing strategy. Research how to cover the system with tests in order to meet all the acceptance criteria. Form a list of functional integration tests and non-functional tests.
@@ -0,0 +1,40 @@
# Solution Draft Assessment
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Input data: `@_docs/00_problem/input_data`. For reference only, but representative of the real data
- Restrictions: `@_docs/00_problem/restrictions.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Security approach: `@_docs/00_problem/security_approach.md`
- Existing solution draft: `@_docs/01_solution/solution_draft.md`
## Role
You are a professional software architect
## Task
- Thoroughly research the problem on the internet and identify all potential weak points and problems.
- Identify security weak points and vulnerabilities.
- Identify performance bottlenecks.
- Address these problems and find ways to solve them.
- Based on your findings, form a new solution draft in the same format.
## Output format
- List here all new findings: what was updated, replaced, or removed from the previous solution, in a table with the following columns:
- Old component solution
- Weak point (functional/security/performance)
- Solution (component's new solution)
- Form the new solution draft. In the updated report, do not put "new" marks and do not compare with the previous solution draft; just make a new solution as if from scratch. Put it in the following format:
- Short Product solution description. Brief component interaction diagram.
- Architecture solution that meets restrictions and acceptance criteria.
For each component, analyze the best possible solutions and form a comparison table.
Each candidate component solution is a row with the following columns:
- Tools (library, platform) to solve component tasks
- Advantages of this solution
- Limitations of this solution
- Requirements for this solution
- Security considerations
- Performance characteristics
- How it fits the problem component being solved, and the whole solution
- Testing strategy. Research how to cover the system with tests in order to meet all the acceptance criteria. Form a list of functional integration tests and non-functional tests.
@@ -0,0 +1,137 @@
# Tech Stack Selection
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Restrictions: `@_docs/00_problem/restrictions.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Security approach: `@_docs/00_problem/security_approach.md`
- Solution draft: `@_docs/01_solution/solution.md`
## Role
You are a software architect evaluating technology choices
## Task
- Evaluate technology options against requirements
- Consider team expertise and learning curve
- Assess long-term maintainability
- Document selection rationale
## Output
### Requirements Analysis
#### Functional Requirements
| Requirement | Tech Implications |
|-------------|-------------------|
| [From acceptance criteria] | |
#### Non-Functional Requirements
| Requirement | Tech Implications |
|-------------|-------------------|
| Performance | |
| Scalability | |
| Security | |
| Maintainability | |
#### Constraints
| Constraint | Impact on Tech Choice |
|------------|----------------------|
| [From restrictions] | |
### Technology Evaluation
#### Programming Language
| Option | Pros | Cons | Score (1-5) |
|--------|------|------|-------------|
| | | | |
**Selection**: [Language]
**Rationale**: [Why this choice]
#### Framework
| Option | Pros | Cons | Score (1-5) |
|--------|------|------|-------------|
| | | | |
**Selection**: [Framework]
**Rationale**: [Why this choice]
#### Database
| Option | Pros | Cons | Score (1-5) |
|--------|------|------|-------------|
| | | | |
**Selection**: [Database]
**Rationale**: [Why this choice]
#### Infrastructure/Hosting
| Option | Pros | Cons | Score (1-5) |
|--------|------|------|-------------|
| | | | |
**Selection**: [Platform]
**Rationale**: [Why this choice]
#### Key Libraries/Dependencies
| Category | Library | Version | Purpose | Alternatives Considered |
|----------|---------|---------|---------|------------------------|
| | | | | |
### Evaluation Criteria
Rate each technology option against these criteria:
1. **Fitness for purpose**: Does it meet functional requirements?
2. **Performance**: Can it meet performance requirements?
3. **Security**: Does it have good security track record?
4. **Maturity**: Is it stable and well-maintained?
5. **Community**: Active community and documentation?
6. **Team expertise**: Does team have experience?
7. **Cost**: Licensing, hosting, operational costs?
8. **Scalability**: Can it grow with the project?
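As a sketch of how the 1-5 ratings above could be combined into one comparable number, here is a minimal weighted-scoring helper. The weights and criterion keys are illustrative assumptions, not prescribed values:

```python
# Hypothetical weights for the eight criteria above; tune per project.
WEIGHTS = {
    "fitness": 3, "performance": 2, "security": 2, "maturity": 1,
    "community": 1, "team_expertise": 2, "cost": 2, "scalability": 1,
}

def weighted_score(ratings: dict) -> float:
    """Combine 1-5 ratings into a single weighted average."""
    total_weight = sum(WEIGHTS.values())
    return sum(WEIGHTS[k] * ratings[k] for k in WEIGHTS) / total_weight
```

A technology rated 3 on every criterion scores exactly 3.0, so anything above 3 beats a uniformly average option.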
### Technology Stack Summary
```
Language: [Language] [Version]
Framework: [Framework] [Version]
Database: [Database] [Version]
Cache: [Cache solution]
Message Queue: [If applicable]
CI/CD: [Platform]
Hosting: [Platform]
Monitoring: [Tools]
```
### Risk Assessment
| Technology | Risk | Mitigation |
|------------|------|------------|
| | | |
### Learning Requirements
| Technology | Team Familiarity | Training Needed |
|------------|-----------------|-----------------|
| | High/Med/Low | Yes/No |
### Decision Record
**Decision**: [Summary of tech stack]
**Date**: [YYYY-MM-DD]
**Participants**: [Who was involved]
**Status**: Approved / Pending Review
Store output to `_docs/01_solution/tech_stack.md`
## Notes
- Avoid over-engineering: choose the simplest solution that meets the requirements
- Consider total cost of ownership, not just initial development
- Prefer proven technologies over cutting-edge unless required
- Document trade-offs for future reference
- Ask questions about team expertise and constraints
@@ -0,0 +1,37 @@
# Security Research
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Restrictions: `@_docs/00_problem/restrictions.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Security approach: `@_docs/00_problem/security_approach.md`
- Solution: `@_docs/01_solution/solution.md`
## Role
You are a security architect
## Task
- Review solution architecture against security requirements from `security_approach.md`
- Identify attack vectors and threat model for the system
- Define security requirements per component
- Propose security controls and mitigations
## Output format
### Threat Model
- Asset inventory (what needs protection)
- Threat actors (who might attack)
- Attack vectors (how they might attack)
### Security Requirements per Component
For each component:
- Component name
- Security requirements
- Proposed controls
- Risk level (High/Medium/Low)
### Security Controls Summary
- Authentication/Authorization approach
- Data protection (encryption, integrity)
- Secure communication
- Logging and monitoring requirements
@@ -0,0 +1,82 @@
# Decompose
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Input data: `@_docs/00_problem/input_data`. For reference only, but representative of the real data
- Restrictions: `@_docs/00_problem/restrictions.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Security approach: `@_docs/00_problem/security_approach.md`
- Full Solution Description: `@_docs/01_solution/solution.md`
## Role
You are a professional software architect
## Task
- Read the problem description and solution draft, and analyze them thoroughly
- Decompose the complex system solution into components with proper communication between them, so that the system solves the problem.
- Think about the components and their interactions
- For each component, investigate and analyze its requirements in great detail. If additional components are needed, such as data preparation, create them
- The solution draft may be incomplete, so add all components necessary to meet the acceptance criteria and restrictions
- Once you fully understand exactly how the components will interact with each other, create the components
## Output Format
### Components Decomposition
Store the description of each component to the file `_docs/02_components/[##]_[component_name]/[##]._component_[component_name].md` with the following structure:
1. High-level overview
- **Purpose:** A concise summary of what this component does and its role in the larger system.
- **Architectural Pattern:** Identify the design patterns used (e.g., Singleton, Observer, Factory).
2. API Reference. Create a table for each function or method with the following columns:
- Name
- Description
- Input
- Output
- Description of the input and output data, if not obvious
- Possible test cases for the method
3. Implementation Details
- **Algorithmic Complexity:** Analyze Time (Big O) and Space complexity for critical methods.
- **State Management:** Explain how this component handles state (local vs. global).
- **Dependencies:** List key external libraries and their purpose here.
- **Error Handling:** Define error handling strategy for this component.
4. Tests
- Integration tests for the component if needed.
- Non-functional tests for the component if needed.
5. Extensions and Helpers
- Store extensions and helpers that support functionality across multiple components in the separate folder `_docs/02_components/helpers`.
6. Caveats & Edge Cases
- Known limitations
- Potential race conditions
- Potential performance bottlenecks.
### Dependency Graph
- Create component dependency graph showing implementation order
- Identify which components can be implemented in parallel
### API Contracts
- Define interfaces/contracts between components
- Specify data formats exchanged
### Logging Strategy
- Define global logging approach for the system
- Log levels, format, storage
For the whole system, make the following diagrams and store them in `_docs/02_components`:
### Logic & Architecture
- Generate draw.io component diagrams showing the relations between components.
- Make sure lines do not intersect each other, or at least minimize intersections.
- Group semantically coherent components together
- Leave enough space for nice alignment of the component boxes
- Place external users of the system close to the component blocks they use
- Generate a Mermaid flowchart diagram for each of the main control flows
- Identify the multiple flows the system can operate in, and generate a flowchart diagram per flow
- Flows can relate to each other
## Notes
- Strictly follow the Single Responsibility Principle when creating components.
- Follow the "dumb code, smart data" principle. Do not overcomplicate
- Components should be semantically coherent. Do not spread similar functionality across multiple components
- Do not put any code yet, only names, input and output.
- Ask as many questions as possible to clarify all uncertainties.
@@ -0,0 +1,30 @@
# Component Assessment
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Input data: `@_docs/00_problem/input_data`. For reference only, but representative of the real data
- Restrictions: `@_docs/00_problem/restrictions.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Security approach: `@_docs/00_problem/security_approach.md`
- Full Solution Description: `@_docs/01_solution/solution.md`
## Role
You are a professional software architect
## Task
- Read carefully all the documents above
- Check how coherent all the components in `@_docs/02_components` are
- Follow the interaction logic and flows, and try to find potential problems there
- Look for missing interactions or circular dependencies
- Check that all components follow the Single Responsibility Principle
- Check that everything follows the "dumb code, smart data" principle, so that the resulting code is not overcomplicated
- Check for security vulnerabilities in component design
- Check for performance bottlenecks
- Verify API contracts are consistent across components
## Output
Form a list of problems with fixes in the following format:
- Component
- Problem type (Architectural/Security/Performance/API)
- Problem, reason
- Fix or potential fixes
@@ -0,0 +1,36 @@
# Security Check
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Restrictions: `@_docs/00_problem/restrictions.md`
- Security approach: `@_docs/00_problem/security_approach.md`
- Full Solution Description: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
## Role
You are a security architect
## Task
- Review each component against security requirements
- Identify security gaps in component design
- Verify security controls are properly distributed across components
- Check for common vulnerabilities (injection, auth bypass, data leaks)
## Output
### Security Assessment per Component
For each component:
- Component name
- Security gaps found
- Required security controls
- Priority (High/Medium/Low)
### Cross-Component Security
- Authentication flow assessment
- Authorization gaps
- Data flow security (encryption in transit/at rest)
- Logging for security events
### Recommendations
- Required changes before implementation
- Security helpers/components to add
@@ -0,0 +1,67 @@
# Generate Jira Epics
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Input data: `@_docs/00_problem/input_data`. For reference only, but representative of the real data
- Restrictions: `@_docs/00_problem/restrictions.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Full Solution Description: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
## Role
You are a world-class product manager
## Task
- Generate Jira Epics from the Components using Jira MCP
- Order epics by dependency (which must be done first)
- Include rough effort estimation per epic
- Ensure each epic has a clear goal and acceptance criteria; verify them against the project's acceptance criteria
- Generate a draw.io component diagram, based on the previous diagram, showing the relations between components and the Jira Epic number corresponding to each component.
## Output
Epic format:
- Epic Name [Component] [Outcome]
- Example: Data Ingestion Near-real-time pipeline
- Epic Summary (1-2 sentences)
- What we are building + why it matters
- Problem / Context
- Current state, pain points, constraints, business opportunities, links to architecture decision records or diagrams
- Scope. Detailed description
- In Scope. Bullet list of capabilities (not tasks)
- Out-of-scope. Explicit exclusions to prevent scope creep
- Assumptions
- System design specifics, input material quality, data structures, network availability, etc.
- Dependencies
- Other epics that must be completed first
- Other components, services, hardware, environments, certificates, data sources etc.
- Effort Estimation
- T-shirt size (S/M/L/XL) or story points range
- Users / Consumers
- Internal, external, systems; a short list of the key use cases.
- Requirements
- Functional - API expectations, events, data handling, idempotency, retry behavior, etc.
- Non-functional - Availability, latency, throughput, scalability, processing limits, data retention, etc.
- Security/Compliance - Authentication, encryption, secrets, logging, SOC2/ISO if applicable
- Design & Architecture (links)
- High-level diagram link, Data flow, sequence diagrams, schemas etc
- Definition of Done (Epic-level)
- Feature list per epic scope
- Automated tests (unit/integration/e2e) + minimum coverage threshold met
- Runbooks if applicable
- Documentation updated
- Acceptance Criteria (measurable)
- Risks & Mitigations
- Top 5 risks (technical + delivery) with mitigation owners or systems involved
- Label epic
- component:<name>
- env:prod|stg
- type:platform|data|integration
- Jira Issue Breakdown
- Create consistent child issues under the epic
- Spikes
- Tasks
- Technical enablers
## Notes
- Be as concise as possible when formulating epics. The fewer words with the same meaning, the better the epic.
@@ -0,0 +1,57 @@
# Data Model Design
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Restrictions: `@_docs/00_problem/restrictions.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Full Solution Description: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
## Role
You are a professional database architect
## Task
- Analyze solution and components to identify all data entities
- Design database schema that supports all component requirements
- Define relationships, constraints, and indexes
- Consider data access patterns for query optimization
- Plan for data migration if applicable
## Output
### Entity Relationship Diagram
- Create ERD showing all entities and relationships
- Use Mermaid or draw.io format
### Schema Definition
For each entity:
- Table name
- Columns with types, constraints, defaults
- Primary keys
- Foreign keys and relationships
- Indexes (clustered, non-clustered)
- Partitioning strategy (if needed)
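To make the schema items above concrete, here is a small sketch using Python's built-in sqlite3 module. The `users`/`orders` tables are hypothetical examples, not entities from this project:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (
    id         INTEGER PRIMARY KEY,            -- primary key
    email      TEXT NOT NULL UNIQUE,           -- constraint
    created_at TEXT DEFAULT CURRENT_TIMESTAMP  -- default value
);
CREATE TABLE orders (
    id          INTEGER PRIMARY KEY,
    user_id     INTEGER NOT NULL REFERENCES users(id),  -- foreign key
    total_cents INTEGER NOT NULL CHECK (total_cents >= 0)
);
CREATE INDEX idx_orders_user ON orders(user_id);  -- index for a hot path
""")
```

Each bullet in the schema definition (keys, constraints, defaults, indexes) maps to one annotated line here.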
### Data Access Patterns
- List common queries per component
- Identify hot paths requiring optimization
- Recommend caching strategy
### Migration Strategy
- Initial schema creation scripts
- Seed data requirements
- Rollback procedures
### Storage Estimates
- Estimated row counts per table
- Storage requirements
- Growth projections
Store output to `_docs/02_components/data_model.md`
## Notes
- Follow database normalization principles (3NF minimum)
- Consider read vs write optimization based on access patterns
- Plan for horizontal scaling if required
- Ask questions to clarify data requirements
@@ -0,0 +1,64 @@
# API Contracts Design
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Restrictions: `@_docs/00_problem/restrictions.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Full Solution Description: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
- Data Model: `@_docs/02_components/data_model.md`
## Role
You are a professional API architect
## Task
- Define API contracts between all components
- Specify external API endpoints (if applicable)
- Define data transfer objects (DTOs)
- Establish error response standards
- Plan API versioning strategy
## Output
### Internal Component Interfaces
For each component boundary:
- Interface name
- Methods with signatures
- Input/Output DTOs
- Error types
- Async/Sync designation
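One way to express such an internal boundary is with Python's `typing.Protocol`; the `Ingestor` interface and its method below are hypothetical, shown only to illustrate the shape of a contract entry:

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Ingestor(Protocol):
    """Hypothetical component boundary: one sync method with typed I/O."""
    def ingest(self, payload: bytes) -> dict: ...

class FileIngestor:
    """A concrete component satisfying the contract structurally."""
    def ingest(self, payload: bytes) -> dict:
        return {"size": len(payload)}

assert isinstance(FileIngestor(), Ingestor)  # structural typing check
```

Note that `runtime_checkable` only verifies method presence, not signatures, so static type checking remains the real enforcement.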
### External API Specification
Generate OpenAPI/Swagger spec including:
- Endpoints with HTTP methods
- Request/Response schemas
- Authentication requirements
- Rate limiting rules
- Example requests/responses
### DTO Definitions
For each data transfer object:
- Name and purpose
- Fields with types
- Validation rules
- Serialization format (JSON, Protobuf, etc.)
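A DTO entry like the above could be sketched as a frozen dataclass; the `UserDto` name, fields, and validation rule are made-up examples:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UserDto:
    """Hypothetical DTO: typed fields plus one validation rule."""
    id: int
    email: str

    def __post_init__(self) -> None:
        # Validation rule: a minimal plausibility check on the email field
        if "@" not in self.email:
            raise ValueError(f"invalid email: {self.email!r}")
```

For the JSON serialization case, `dataclasses.asdict` plus `json.dumps` covers the round trip.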
### Error Contract
- Standard error response format
- Error codes and messages
- HTTP status code mapping
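A sketch of one standard error envelope; the field names and the code-to-status mapping are assumptions for illustration, not a prescribed contract:

```python
# Hypothetical mapping from internal error codes to HTTP status codes.
STATUS_BY_CODE = {"NOT_FOUND": 404, "VALIDATION_FAILED": 422, "INTERNAL": 500}

def error_response(code: str, message: str):
    """Return (http_status, body) using one standard error format."""
    status = STATUS_BY_CODE.get(code, 500)  # unknown codes fall back to 500
    return status, {"error": {"code": code, "message": message}}
```

Keeping the envelope identical for every endpoint lets clients handle all failures with a single code path.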
### Versioning Strategy
- API versioning approach (URL, header, query param)
- Deprecation policy
- Breaking vs non-breaking change definitions
Store output to `_docs/02_components/api_contracts.md`
Store OpenAPI spec to `_docs/02_components/openapi.yaml` (if applicable)
## Notes
- Follow RESTful conventions for external APIs
- Keep internal interfaces minimal and focused
- Design for backward compatibility
- Ask questions to clarify integration requirements
@@ -0,0 +1,59 @@
# Generate Tests
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Input data: `@_docs/00_problem/input_data`. For reference only, but representative of the real data
- Restrictions: `@_docs/00_problem/restrictions.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Security approach: `@_docs/00_problem/security_approach.md`
- Full Solution Description: `@_docs/01_solution/solution.md`
## Role
You are a professional Quality Assurance Engineer
## Task
- Compose tests according to the test strategy
- Cover all the criteria with test specs
- Minimum coverage target: 75%
## Output
Store all test specs in files named `_docs/02_tests/[##]_[test_name]_spec.md`
Types and structures of tests:
- Integration tests
- Summary
- Detailed description
- Input data for this specific test scenario
- Expected result
- Maximum expected time to get result
- Performance tests
- Summary
- Load/stress scenario description
- Expected throughput/latency
- Resource limits
- Security tests
- Summary
- Attack vector being tested
- Expected behavior
- Pass/Fail criteria
- Acceptance tests
- Summary
- Detailed description
- Preconditions for tests
- Steps:
- Step1 - Expected result1
- Step2 - Expected result2
...
- StepN - Expected resultN
- Test Data Management
- Required test data
- Setup/Teardown procedures
- Data isolation strategy
## Notes
- Do not put any code yet
- Ask as many questions as needed.
@@ -0,0 +1,111 @@
# Risk Assessment
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Restrictions: `@_docs/00_problem/restrictions.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Full Solution Description: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
- Estimation: `@_docs/02_components/estimation.md`
## Role
You are a technical risk analyst
## Task
- Identify technical and project risks
- Assess probability and impact
- Define mitigation strategies
- Create risk monitoring plan
## Output
### Risk Register
| ID | Risk | Category | Probability | Impact | Score | Mitigation | Owner |
|----|------|----------|-------------|--------|-------|------------|-------|
| R1 | | Tech/Schedule/Resource/External | High/Med/Low | High/Med/Low | H/M/L | | |
### Risk Scoring Matrix
| | Low Impact | Medium Impact | High Impact |
|--|------------|---------------|-------------|
| High Probability | Medium | High | Critical |
| Medium Probability | Low | Medium | High |
| Low Probability | Low | Low | Medium |
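The matrix above can be encoded directly as a small helper; the function and names here are an illustrative sketch, not part of the required output:

```python
_LEVEL = {"Low": 0, "Medium": 1, "High": 2}

def risk_score(probability: str, impact: str) -> str:
    """Map probability x impact to Critical/High/Medium/Low per the matrix."""
    if probability == "High" and impact == "High":
        return "Critical"  # the only Critical cell
    total = _LEVEL[probability] + _LEVEL[impact]
    # Remaining cells collapse onto the sum of the two levels.
    return {0: "Low", 1: "Low", 2: "Medium", 3: "High"}[total]
```

This keeps the Score column of the risk register reproducible instead of hand-assigned.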
### Risk Categories
#### Technical Risks
- Technology choices may not meet requirements
- Integration complexity underestimated
- Performance targets unachievable
- Security vulnerabilities
#### Schedule Risks
- Scope creep
- Dependencies delayed
- Resource unavailability
- Underestimated complexity
#### Resource Risks
- Key person dependency
- Skill gaps
- Team availability
#### External Risks
- Third-party API changes
- Vendor reliability
- Regulatory changes
### Top Risks (Ranked)
#### 1. [Highest Risk]
- **Description**:
- **Probability**: High/Medium/Low
- **Impact**: High/Medium/Low
- **Mitigation Strategy**:
- **Contingency Plan**:
- **Early Warning Signs**:
- **Owner**:
#### 2. [Second Highest Risk]
...
### Risk Mitigation Plan
| Risk ID | Mitigation Action | Timeline | Cost | Responsible |
|---------|-------------------|----------|------|-------------|
| R1 | | | | |
### Risk Monitoring
#### Review Schedule
- Daily standup: Discuss blockers (potential risks materializing)
- Weekly: Review risk register, update probabilities
- Sprint end: Comprehensive risk review
#### Early Warning Indicators
| Risk | Indicator | Threshold | Action |
|------|-----------|-----------|--------|
| | | | |
### Contingency Budget
- Time buffer: 20% of estimated duration
- Scope flexibility: [List features that can be descoped]
- Resource backup: [Backup resources if available]
### Acceptance Criteria for Risks
Define which risks are acceptable:
- Low risks: Accepted, monitored
- Medium risks: Mitigation required
- High risks: Mitigation + contingency required
- Critical risks: Must be resolved before proceeding
Store output to `_docs/02_components/risk_assessment.md`
## Notes
- Update risk register throughout project
- Escalate critical risks immediately
- Consider both likelihood and impact
- Ask questions to uncover hidden risks
@@ -0,0 +1,40 @@
# Generate Features for the provided component spec
## Input parameters
- component_spec.md. Required. Do NOT proceed if it is NOT provided!
- parent Jira Epic in the format AZ-###. Required. Do NOT proceed if it is NOT provided!
## Prerequisites
- Jira Epics must be created first (step 2.20)
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Input data: `@_docs/00_problem/input_data`. For reference only, but representative of the real data
- Restrictions: `@_docs/00_problem/restrictions.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Full Solution Description: `@_docs/01_solution/solution.md`
## Role
You are a professional software architect
## Task
- Read component_spec.md very carefully
- Decompose component_spec.md into features. If the component is simple or atomic, create only 1 feature.
- Split into many features only if it is necessary and would make implementation easier
- Do not create features for other components; create *only* features of this exact component
- Each feature should be atomic and may contain no APIs or a list of semantically connected APIs
- After splitting, assess your result
- Add complexity points estimation (1, 2, 3, 5, 8) per feature
- Note feature dependencies (some features may be independent)
- Use `@gen_feature_spec.md` as complete guidance on how to generate the feature spec
- Generate Jira tasks per feature via Jira MCP, following the spec `@gen_jira_task_and_branch.md`.
## Output
- The file name of the feature specs should follow this format: `[component's number ##].[feature's number ##]_feature_[feature_name].md`.
- The structure of the feature spec should follow this spec `@gen_feature_spec.md`
- The structure of the Jira task should follow this spec: `@gen_jira_task_and_branch.md`
- Include dependency notes (which features can be done in parallel)
## Notes
- Do NOT generate any code yet, only brief explanations what should be done.
- Ask as many questions as needed.
@@ -0,0 +1,73 @@
# Create Initial Structure
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`.
- Input data: `@_docs/00_problem/input_data`. For reference only, but representative of the real data.
- Restrictions: `@_docs/00_problem/restrictions.md`.
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`.
- Security approach: `@_docs/00_problem/security_approach.md`.
- Full Solution Description: `@_docs/01_solution/solution.md`
- Components with Features specifications: `@_docs/02_components`
## Role
You are a professional software architect
## Task
- Read carefully all the component specs and features in the components folder: `@_docs/02_components`
- Investigate on the internet the best ways and tools to implement the components and their features
- Make a plan for creating the initial structure:
- DTOs
- component's interfaces
- empty implementations
- helpers - empty implementations or interfaces
- Add .gitignore appropriate for the project's language/framework
- Add .env.example with required environment variables
- Configure CI/CD pipeline with full stages:
- Build stage
- Lint/Static analysis stage
- Unit tests stage
- Integration tests stage
- Security scan stage (SAST/dependency check)
- Deploy to staging stage (triggered on merge to stage branch)
- Define environment strategy based on `@_docs/00_templates/environment_strategy.md`:
- Development environment configuration
- Staging environment configuration
- Production environment configuration (if applicable)
- Add database migration setup if applicable
- Add README.md describing the project based on `@_docs/01_solution/solution.md`
- Create a separate folder for the integration tests (not a separate repo)
- Configure branch protection rules recommendations
## Example
The structure should roughly look like this:
- .gitignore
- .env.example
- .github/workflows/ (or .gitlab-ci.yml)
- api
- components
- component1_folder
- component2_folder
- ...
- db
- migrations/
- helpers
- models
- tests
- unit_test1_project1_folder
- unit_test2_project2_folder
...
- integration_tests_folder
- test data
- test01_file
- test02_file
...
Also, some semantically coherent components (or one big component) may live in their own project or project folder.
There could be a common layer or project consisting of all the interfaces (for C# or Java), or each interface placed in its component's folder (Python), depending on the language's common conventions.
## Notes
- Follow SOLID principles
- Follow KISS principle. Dumb code - smart data.
- Follow the DRY principle, but do not overcomplicate things; occasional repetition is fine if it keeps the code simpler
- Follow conventions and rules of the project's programming language
- Ask as many questions as needed; it should be completely clear how to implement each feature
@@ -0,0 +1,35 @@
# Implement Component and Features by Spec
## Input parameter
component_folder
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`.
- Input data: `@_docs/00_problem/input_data`. They are for reference only, but serve as an example of the real data.
- Restrictions: `@_docs/00_problem/restrictions.md`.
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`.
- Security approach: `@_docs/00_problem/security_approach.md`.
- Full Solution Description: `@_docs/01_solution/solution.md`
## Role
You are a professional software architect and developer
## Task
- Read carefully the initial data and the component spec in the component_folder: `@_docs/02_components/[##]_[component_name]/[##]._component_[component_name]`
- Read carefully all the component features in the component_folder: `@_docs/02_components/[##]_[component_name]/[##].[##]_feature_[feature_name]`
- Research online the best ways and tools to implement the component and its features
- The investigation may reveal solutions that require architectural reorganization of the features. That is fine: propose it, and if the user agrees, include the reorganization in the feature build plan. Interfaces may likewise be changed, removed, or new ones added.
- Analyze the existing codebase and get full context for the component's implementation
- Make sure each feature connects and communicates properly with other features and the existing code
- If the component depends on another one, create a temporary mock for the dependency
- For each feature:
- Implement the feature
- Implement error handling per defined strategy
- Implement logging per defined strategy
- Implement all unit tests from the test cases description, and add test-result checks to the plan steps
- Implement all integration tests for the feature, and add test-result checks to the plan steps. Analyze existing tests and decide whether to create a new test or extend an existing one
- Add descriptions of all the component's integration tests to the implementation plan, and add test-result checks to the plan steps
- After the component is complete, replace the mocks with real implementations (mock cleanup)
## Notes
- Ask as many questions as needed; it should be completely clear how to implement each feature
@@ -0,0 +1,39 @@
# Code Review
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`.
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`.
- Security approach: `@_docs/00_problem/security_approach.md`.
- Full Solution Description: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
## Role
You are a senior software engineer performing code review
## Task
- Review implemented code against component specifications
- Check code quality: readability, maintainability, SOLID principles
- Check error handling consistency
- Check logging implementation
- Check security requirements are met
- Check test coverage is adequate
- Identify code smells and technical debt
## Output
### Issues Found
For each issue:
- File/Location
- Issue type (Bug/Security/Performance/Style/Debt)
- Description
- Suggested fix
- Priority (High/Medium/Low)
### Summary
- Total issues by type
- Blocking issues that must be fixed
- Recommended improvements
## Notes
- Can also use Cursor's built-in review feature
- Focus on critical issues first
@@ -0,0 +1,64 @@
# CI/CD Pipeline Validation & Enhancement
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Restrictions: `@_docs/00_problem/restrictions.md`
- Full Solution Description: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
- Environment Strategy: `@_docs/00_templates/environment_strategy.md`
## Role
You are a DevOps engineer
## Task
- Review existing CI/CD pipeline configuration
- Validate all stages are working correctly
- Optimize pipeline performance (parallelization, caching)
- Ensure test coverage gates are enforced
- Verify security scanning is properly configured
- Add missing quality gates
## Checklist
### Pipeline Health
- [ ] All stages execute successfully
- [ ] Build time is acceptable (<10 min for most projects)
- [ ] Caching is properly configured (dependencies, build artifacts)
- [ ] Parallel execution where possible
### Quality Gates
- [ ] Code coverage threshold enforced (minimum 75%)
- [ ] Linting errors block merge
- [ ] Security vulnerabilities block merge (critical/high)
- [ ] All tests must pass
### Environment Deployments
- [ ] Staging deployment works on merge to stage branch
- [ ] Environment variables properly configured per environment
- [ ] Secrets are securely managed (not in code)
- [ ] Rollback procedure documented
### Monitoring
- [ ] Build notifications configured (Slack, email, etc.)
- [ ] Failed build alerts
- [ ] Deployment success/failure notifications
## Output
### Pipeline Status Report
- Current pipeline configuration summary
- Issues found and fixes applied
- Performance metrics (build times)
### Recommended Improvements
- Short-term improvements
- Long-term optimizations
### Quality Gate Configuration
- Thresholds configured
- Enforcement rules
## Notes
- Do not break existing functionality
- Test changes in separate branch first
- Document any manual steps required
@@ -0,0 +1,72 @@
# Deployment Strategy Planning
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Restrictions: `@_docs/00_problem/restrictions.md`
- Full Solution Description: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
- Environment Strategy: `@_docs/00_templates/environment_strategy.md`
## Role
You are a DevOps/Platform engineer
## Task
- Define deployment strategy for each environment
- Plan deployment procedures and automation
- Define rollback procedures
- Establish deployment verification steps
- Document manual intervention points
## Output
### Deployment Architecture
- Infrastructure diagram (where components run)
- Network topology
- Load balancing strategy
- Container/VM configuration
### Deployment Procedures
#### Staging Deployment
- Trigger conditions
- Pre-deployment checks
- Deployment steps
- Post-deployment verification
- Smoke tests to run
#### Production Deployment
- Approval workflow
- Deployment window
- Pre-deployment checks
- Deployment steps (blue-green, rolling, canary)
- Post-deployment verification
- Smoke tests to run
### Rollback Procedures
- Rollback trigger criteria
- Rollback steps per environment
- Data rollback considerations
- Communication plan during rollback
### Health Checks
- Liveness probe configuration
- Readiness probe configuration
- Custom health endpoints
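A Kubernetes-style sketch of the probe configuration above (endpoint paths, port, and thresholds are assumptions to be tuned per service):

```yaml
# Hypothetical probe settings — adjust paths, ports, and thresholds per service.
livenessProbe:
  httpGet:
    path: /healthz          # assumed liveness endpoint
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 15
  failureThreshold: 3       # restart after 3 consecutive failures
readinessProbe:
  httpGet:
    path: /ready            # assumed readiness endpoint
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
  failureThreshold: 3       # stop routing traffic after 3 failures
```

The same liveness/readiness split maps to load-balancer health checks on non-Kubernetes platforms.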
### Deployment Checklist
- [ ] All tests pass in CI
- [ ] Security scan clean
- [ ] Database migrations reviewed
- [ ] Feature flags configured
- [ ] Monitoring alerts configured
- [ ] Rollback plan documented
- [ ] Stakeholders notified
Store output to `_docs/02_components/deployment_strategy.md`
## Notes
- Prefer automated deployments over manual
- Zero-downtime deployments for production
- Always have a rollback plan
- Ask questions about infrastructure constraints
@@ -0,0 +1,39 @@
# Implement Tests by Spec
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`.
- Input data: `@_docs/00_problem/input_data`. They are for reference only, but serve as an example of the real data.
- Restrictions: `@_docs/00_problem/restrictions.md`.
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`.
- Full Solution Description: `@_docs/01_solution/solution.md`
- Tests specifications: `@_docs/02_tests`
## Role
You are a professional software architect and developer
## Task
- Read all the initial data carefully and understand the whole system's goals
- Check that a separate folder for tests exists (it should have been generated by @3.05_implement_initial_structure.md)
- Set up Docker environment for testing:
- Create docker-compose.yml for test environment
- Configure test database container
- Configure application container
- For each test description:
- Prepare all the data necessary for testing, or check that it already exists
- Check the existing integration tests; if a similar test already exists, update it
- Implement the test by specification
- Implement test data management:
- Setup fixtures/factories
- Teardown/cleanup procedures
- Run system and integration tests in docker containers
- If tests fail, fix all problems until they pass. If one or more tests fail due to missing data from the user, an API, or another system, request it from the developer.
- Repeat the test cycle, iteratively fixing the bugs found, until no tests fail. Ask the user for additional information if anything new comes up
- Ensure tests run in CI pipeline
- Compile the final test results into a CSV with the following columns:
- Test filename
- Execution time
- Result
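A minimal `docker-compose.yml` sketch for the test environment described in the task (the images, credentials, and choice of database engine are placeholders, not requirements):

```yaml
# Hypothetical test environment — swap images and env vars for the real stack.
services:
  test-db:
    image: postgres:16            # assumed database engine
    environment:
      POSTGRES_USER: test
      POSTGRES_PASSWORD: test
      POSTGRES_DB: app_test
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U test"]
      interval: 5s
      retries: 10
  app:
    build: .
    environment:
      DATABASE_URL: postgres://test:test@test-db:5432/app_test
    depends_on:
      test-db:
        condition: service_healthy   # tests start only once the DB is ready
```

The health-check gate keeps integration tests from racing the database startup in CI.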
## Notes
- Ask as many questions as needed; it should be completely clear how to implement each feature
@@ -0,0 +1,123 @@
# Observability Planning
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Full Solution Description: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
- Deployment Strategy: `@_docs/02_components/deployment_strategy.md`
## Role
You are a Site Reliability Engineer (SRE)
## Task
- Define logging strategy across all components
- Plan metrics collection and dashboards
- Design distributed tracing (if applicable)
- Establish alerting rules
- Document incident response procedures
## Output
### Logging Strategy
#### Log Levels
| Level | Usage | Example |
|-------|-------|---------|
| ERROR | Exceptions, failures requiring attention | Database connection failed |
| WARN | Potential issues, degraded performance | Retry attempt 2/3 |
| INFO | Significant business events | User registered, Order placed |
| DEBUG | Detailed diagnostic information | Request payload, Query params |
#### Log Format
```json
{
"timestamp": "ISO8601",
"level": "INFO",
"service": "service-name",
"correlation_id": "uuid",
"message": "Event description",
"context": {}
}
```
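As a sketch of how a service might emit this format with Python's standard `logging` module (the library choice and the fallback defaults are assumptions, not requirements):

```python
import json
import logging
import uuid
from datetime import datetime, timezone

class JsonFormatter(logging.Formatter):
    """Render each record as one JSON line matching the log format above."""

    def __init__(self, service: str):
        super().__init__()
        self.service = service

    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "level": record.levelname,
            "service": self.service,
            # Reuse the caller's correlation ID when present, else mint one.
            "correlation_id": getattr(record, "correlation_id", str(uuid.uuid4())),
            "message": record.getMessage(),
            "context": getattr(record, "context", {}),
        })

logger = logging.getLogger("service-name")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter("service-name"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Extra fields ride along via the `extra` parameter.
logger.info("User registered", extra={"correlation_id": "abc-123",
                                      "context": {"user_id": 42}})
```

Any stack with structured-logging support (Serilog, Logback, zap, etc.) can produce the same shape; the key is one JSON object per line with a stable schema.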
#### Log Storage
- Development: Console/file
- Staging: Centralized (ELK, CloudWatch, etc.)
- Production: Centralized with retention policy
### Metrics
#### System Metrics
- CPU usage
- Memory usage
- Disk I/O
- Network I/O
#### Application Metrics
| Metric | Type | Description |
|--------|------|-------------|
| request_count | Counter | Total requests |
| request_duration | Histogram | Response time |
| error_count | Counter | Failed requests |
| active_connections | Gauge | Current connections |
#### Business Metrics
- [Define based on acceptance criteria]
### Distributed Tracing
#### Trace Context
- Correlation ID propagation
- Span naming conventions
- Sampling strategy
#### Integration Points
- HTTP headers
- Message queue metadata
- Database query tagging
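A hedged sketch of propagating a correlation ID across the integration points above (the `X-Correlation-ID` header name is a common convention, assumed here rather than mandated):

```python
import uuid

CORRELATION_HEADER = "X-Correlation-ID"  # assumed header name

def ensure_correlation_id(incoming_headers: dict) -> str:
    """Reuse the caller's correlation ID, or start a new trace."""
    return incoming_headers.get(CORRELATION_HEADER) or str(uuid.uuid4())

def outgoing_headers(correlation_id: str) -> dict:
    """Headers to attach to downstream HTTP calls or queue message metadata."""
    return {CORRELATION_HEADER: correlation_id}

# Incoming request already carries an ID: propagate it unchanged.
cid = ensure_correlation_id({"X-Correlation-ID": "abc-123"})
print(outgoing_headers(cid))  # → {'X-Correlation-ID': 'abc-123'}
```

The same ID should also appear in the `correlation_id` log field so logs and traces join up.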
### Alerting
#### Alert Categories
| Severity | Response Time | Examples |
|----------|---------------|----------|
| Critical | 5 min | Service down, Data loss |
| High | 30 min | High error rate, Performance degradation |
| Medium | 4 hours | Elevated latency, Disk usage high |
| Low | Next business day | Non-critical warnings |
#### Alert Rules
```yaml
alerts:
- name: high_error_rate
condition: error_rate > 5%
duration: 5m
severity: high
- name: service_down
condition: health_check_failed
duration: 1m
severity: critical
```
### Dashboards
#### Operations Dashboard
- Service health status
- Request rate and error rate
- Response time percentiles
- Resource utilization
#### Business Dashboard
- Key business metrics
- User activity
- Transaction volumes
Store output to `_docs/02_components/observability_plan.md`
## Notes
- Follow the principle: "If it's not monitored, it's not in production"
- Balance verbosity with cost
- Ensure PII is not logged
- Plan for log rotation and retention
@@ -0,0 +1,29 @@
# User Input for Refactoring
## Task
Collect and document goals for the refactoring project.
## User should provide:
Create in `_docs/00_problem`:
- `problem_description.md`:
- What the system currently does
- What changes/improvements are needed
- Pain points in current implementation
- `acceptance_criteria.md`: Success criteria for the refactoring
- `security_approach.md`: Security requirements (if applicable)
## Example
- `problem_description.md`
Current system: E-commerce platform with monolithic architecture.
Current issues: Slow deployments, difficult scaling, tightly coupled modules.
Goals: Break into microservices, improve test coverage, reduce deployment time.
- `acceptance_criteria.md`
- All existing functionality preserved
- Test coverage increased from 40% to 75%
- Deployment time reduced by 50%
- No circular dependencies between modules
## Output
Store user input in `_docs/00_problem/` folder for reference by subsequent steps.
@@ -0,0 +1,92 @@
# Capture Baseline Metrics
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Current codebase
## Role
You are a software engineer preparing for refactoring
## Task
- Capture current system metrics as baseline
- Document current behavior
- Establish benchmarks to compare against after refactoring
- Identify critical paths to monitor
## Output
### Code Quality Metrics
#### Coverage
```
Current test coverage: XX%
- Unit test coverage: XX%
- Integration test coverage: XX%
- Critical paths coverage: XX%
```
#### Code Complexity
- Cyclomatic complexity (average):
- Most complex functions (top 5):
- Lines of code:
- Technical debt ratio:
#### Code Smells
- Total code smells:
- Critical issues:
- Major issues:
### Performance Metrics
#### Response Times
| Endpoint/Operation | P50 | P95 | P99 |
|-------------------|-----|-----|-----|
| [endpoint1] | Xms | Xms | Xms |
| [operation1] | Xms | Xms | Xms |
#### Resource Usage
- Average CPU usage:
- Average memory usage:
- Database query count per operation:
#### Throughput
- Requests per second:
- Concurrent users supported:
### Functionality Inventory
List all current features/endpoints:
| Feature | Status | Test Coverage | Notes |
|---------|--------|---------------|-------|
| | | | |
### Dependency Analysis
- Total dependencies:
- Outdated dependencies:
- Security vulnerabilities in dependencies:
### Build Metrics
- Build time:
- Test execution time:
- Deployment time:
Store output to `_docs/04_refactoring/baseline_metrics.md`
## Measurement Commands
Use project-appropriate tools for your tech stack:
| Metric | Python | C#/.NET | Java | Go | JavaScript/TypeScript |
|--------|--------|---------|------|-----|----------------------|
| Test coverage | pytest --cov | dotnet test --collect | jacoco | go test -cover | jest --coverage |
| Code complexity | radon | CodeMetrics | PMD | gocyclo | eslint-plugin-complexity |
| Lines of code | cloc | cloc | cloc | cloc | cloc |
| Dependency check | pip-audit | dotnet list package --vulnerable | mvn dependency-check | govulncheck | npm audit |
## Notes
- Run measurements multiple times for accuracy
- Document measurement methodology
- Save raw data for comparison
- Focus on metrics relevant to refactoring goals
@@ -0,0 +1,48 @@
# Create Documentation from Existing Codebase
## Role
You are a Principal Software Architect and Technical Communication Expert.
## Task
Generate production-grade documentation from existing code that serves both maintenance engineers and consuming developers.
## Core Directives:
- Truthfulness: Never invent features. Ground every claim in the provided code.
- Clarity: Use professional, third-person objective tone.
- Completeness: Document every public interface, summarize private internals unless critical.
- Visuals: Visualize complex logic using Mermaid.js.
## Process:
1. Analyze the project structure and form a rough understanding from the directories, projects, and files
2. Go file by file: analyze each method, convert it to a short API reference description, and form a rough flow diagram
3. Analyze the summaries and code, analyze the connections between components, and form the detailed structure
## Output Format
Store description of each component to `_docs/02_components/[##]_[component_name]/[##]._component_[component_name].md`:
1. High-level overview
- **Purpose:** Component role in the larger system.
- **Architectural Pattern:** Design patterns used.
2. Logic & Architecture
- Mermaid `graph TD` or `sequenceDiagram`
- draw.io components diagram
3. API Reference table:
- Name, Description, Input, Output
- Test cases for the method
4. Implementation Details
- **Algorithmic Complexity:** Big O for critical methods.
- **State Management:** Local vs. global state.
- **Dependencies:** External libraries.
5. Tests
- Integration tests needed
- Non-functional tests needed
6. Extensions and Helpers
- Store to `_docs/02_components/helpers`
7. Caveats & Edge Cases
- Known limitations
- Race conditions
- Performance bottlenecks
## Notes
- Verify all parameters are captured
- Verify Mermaid diagrams are syntactically correct
- Explain why the code works, not just how
@@ -0,0 +1,36 @@
# Form Solution with Flows
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Generated component docs: `@_docs/02_components`
## Role
You are a professional software architect
## Task
- Review all generated component documentation
- Synthesize into a cohesive solution description
- Create flow diagrams showing how components interact
- Identify the main use cases and their flows
## Output
### Solution Description
Store to `_docs/01_solution/solution.md`:
- Short Product solution description
- Component interaction diagram (draw.io)
- Components overview and their responsibilities
### Flow Diagrams
Store to `_docs/02_components/system_flows.md`:
- Mermaid Flowchart diagrams for main control flows:
- Create flow diagram per major use case
- Show component interactions
- Note data transformations
- Flows can relate to each other
- Show entry points, decision points, and outputs
## Notes
- Focus on documenting what exists, not what should be
@@ -0,0 +1,39 @@
# Deep Research of Approaches
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Current solution: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
## Role
You are a professional researcher and software architect
## Task
- Analyze current implementation patterns
- Research modern approaches for similar systems
- Identify what could be done differently
- Suggest improvements based on state-of-the-art practices
## Output
### Current State Analysis
- Patterns currently used
- Strengths of current approach
- Weaknesses identified
### Alternative Approaches
For each major component/pattern:
- Current approach
- Alternative approach
- Pros/Cons comparison
- Migration effort (Low/Medium/High)
### Recommendations
- Prioritized list of improvements
- Quick wins (low effort, high impact)
- Strategic improvements (higher effort)
## Notes
- Focus on practical, achievable improvements
- Consider existing codebase constraints
@@ -0,0 +1,40 @@
# Solution Assessment with Codebase
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Current solution: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
- Research findings: from step 4.30
## Role
You are a professional software architect
## Task
- Assess current implementation against acceptance criteria
- Identify weak points in current codebase
- Map research recommendations to specific code areas
- Prioritize changes based on impact and effort
## Output
### Weak Points Assessment
For each issue found:
- Location (component/file)
- Weak point description
- Impact (High/Medium/Low)
- Proposed solution
### Gap Analysis
- Acceptance criteria vs current state
- What's missing
- What needs improvement
### Refactoring Roadmap
- Phase 1: Critical fixes
- Phase 2: Major improvements
- Phase 3: Nice-to-have enhancements
## Notes
- Ground all findings in actual code
- Be specific about locations and changes needed
@@ -0,0 +1,52 @@
# Integration Tests Description
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Current solution: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
## Role
You are a professional Quality Assurance Engineer
## Prerequisites
- Baseline metrics captured (see 4.07_capture_baseline.md)
- Feature parity checklist created (see `@_docs/00_templates/feature_parity_checklist.md`)
## Coverage Requirements (MUST meet before refactoring)
- Minimum overall coverage: 75%
- Critical path coverage: 90%
- All public APIs must have integration tests
- All error handling paths must be tested
## Task
- Analyze existing test coverage
- Define integration tests that capture current system behavior
- Tests should serve as safety net for refactoring
- Cover critical paths and edge cases
- Ensure coverage requirements are met before proceeding to refactoring
## Output
Store test specs to `_docs/02_tests/[##]_[test_name]_spec.md`:
- Integration tests
- Summary
- Current behavior being tested
- Input data
- Expected result
- Maximum expected time
- Acceptance tests
- Summary
- Preconditions
- Steps with expected results
- Coverage Analysis
- Current coverage percentage
- Target coverage (75% minimum)
- Critical paths not covered
## Notes
- Focus on behavior preservation
- These tests validate refactoring doesn't break functionality
@@ -0,0 +1,34 @@
# Implement Tests
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Tests specifications: `@_docs/02_tests`
## Role
You are a professional software developer
## Task
- Implement all tests from specifications
- Ensure all tests pass on current codebase (before refactoring)
- Set up test infrastructure if not exists
- Configure test data fixtures
## Process
1. Set up test environment
2. Implement each test from spec
3. Run tests, verify all pass
4. Document any discovered issues
## Output
- Implemented tests in test folder
- Test execution report:
- Test name
- Status (Pass/Fail)
- Execution time
- Issues discovered (if any)
## Notes
- All tests MUST pass before proceeding to refactoring
- Tests are the safety net for changes
@@ -0,0 +1,38 @@
# Analyze Coupling
## Initial data:
- Current solution: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
- Codebase
## Role
You are a software architect specializing in code quality
## Task
- Analyze coupling between components/modules
- Identify tightly coupled areas
- Map dependencies (direct and transitive)
- Form decoupling strategy
## Output
### Coupling Analysis
- Dependency graph (Mermaid)
- Coupling metrics per component
- Circular dependencies found
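Circular dependencies can be found mechanically once the dependency edges are extracted; a minimal depth-first-search sketch (the component names in the example graph are hypothetical):

```python
def find_cycles(graph: dict[str, list[str]]) -> list[list[str]]:
    """Return dependency cycles found by DFS over the component graph."""
    cycles = []
    visited = set()

    def dfs(node, path):
        if node in path:
            # Found a back-edge: record the cycle from its first occurrence.
            cycles.append(path[path.index(node):] + [node])
            return
        if node in visited:
            return
        visited.add(node)
        for dep in graph.get(node, []):
            dfs(dep, path + [node])

    for component in graph:
        dfs(component, [])
    return cycles

# Hypothetical component graph: orders -> billing -> orders is circular.
deps = {
    "orders": ["billing", "catalog"],
    "billing": ["orders"],
    "catalog": [],
}
print(find_cycles(deps))  # → [['orders', 'billing', 'orders']]
```

A real project would extract `deps` from import statements or the build system rather than hard-coding it.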
### Problem Areas
For each coupling issue:
- Components involved
- Type of coupling (content, common, control, stamp, data)
- Impact on maintainability
- Severity (High/Medium/Low)
### Decoupling Strategy
- Priority order for decoupling
- Proposed interfaces/abstractions
- Estimated effort per change
## Notes
- Focus on high-impact coupling issues first
- Consider backward compatibility
@@ -0,0 +1,43 @@
# Execute Decoupling
## Initial data:
- Decoupling strategy: from step 4.60
- Tests: implemented in step 4.50
- Codebase
## Role
You are a professional software developer
## Task
- Execute decoupling changes per strategy
- Fix code smells encountered during refactoring
- Run tests after each significant change
- Ensure all tests pass before proceeding
## Process
For each decoupling change:
1. Implement the change
2. Run integration tests
3. Fix any failures
4. Commit with descriptive message
## Code Smells to Address
- Long methods
- Large classes
- Duplicate code
- Dead code
- Magic numbers/strings
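As an illustration of the last smell, the usual fix for magic numbers is extraction into named constants; the business values below are invented for the example:

```python
# Before: the thresholds are unexplained magic numbers.
def shipping_cost_before(order_total: float) -> float:
    if order_total >= 50.0:
        return 0.0
    return order_total * 0.1

# After: named constants document intent and live in one place.
FREE_SHIPPING_THRESHOLD = 50.0   # assumed business rule
SHIPPING_RATE = 0.1              # assumed business rule

def shipping_cost(order_total: float) -> float:
    if order_total >= FREE_SHIPPING_THRESHOLD:
        return 0.0
    return order_total * SHIPPING_RATE

print(shipping_cost(20.0))  # → 2.0
```

Behavior is unchanged, which is exactly what the test-after-each-change process above should confirm.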
## Output
- Refactored code
- Test results after each change
- Summary of changes made:
- Change description
- Files affected
- Tests status
## Notes
- Small, incremental changes
- Never break tests
- Commit frequently
@@ -0,0 +1,40 @@
# Technical Debt
## Initial data:
- Current solution: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
- Codebase
## Role
You are a technical debt analyst
## Task
- Identify technical debt in the codebase
- Categorize and prioritize debt items
- Estimate effort to resolve
- Create actionable plan
## Output
### Debt Inventory
For each item:
- Location (file/component)
- Type (design, code, test, documentation)
- Description
- Impact (High/Medium/Low)
- Effort to fix (S/M/L/XL)
- Interest (cost of not fixing)
### Prioritized Backlog
- Quick wins (low effort, high impact)
- Strategic debt (high effort, high impact)
- Tolerable debt (low impact, can defer)
### Recommendations
- Immediate actions
- Sprint-by-sprint plan
- Prevention measures
## Notes
- Be realistic about effort estimates
- Consider business priorities
@@ -0,0 +1,49 @@
# Performance Optimization
## Initial data:
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Current solution: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
- Codebase
## Role
You are a performance engineer
## Task
- Identify performance bottlenecks
- Profile critical paths
- Propose optimizations
- Implement and verify improvements
## Output
### Bottleneck Analysis
For each bottleneck:
- Location
- Symptom (slow response, high memory, etc.)
- Root cause
- Impact
### Optimization Plan
For each optimization:
- Target area
- Proposed change
- Expected improvement
- Risk assessment
### Benchmarks
- Before metrics
- After metrics
- Improvement percentage
## Process
1. Profile current performance
2. Identify top bottlenecks
3. Implement optimizations one at a time
4. Benchmark after each change
5. Verify tests still pass
## Notes
- Measure before optimizing
- Optimize the right things (profile first)
- Don't sacrifice readability for micro-optimizations
@@ -0,0 +1,48 @@
# Security Review
## Initial data:
- Security approach: `@_docs/00_problem/security_approach.md`
- Current solution: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
- Codebase
## Role
You are a security engineer
## Task
- Review code for security vulnerabilities
- Check against OWASP Top 10
- Verify security requirements are met
- Recommend fixes for issues found
## Output
### Vulnerability Assessment
For each issue:
- Location
- Vulnerability type (injection, XSS, CSRF, etc.)
- Severity (Critical/High/Medium/Low)
- Exploit scenario
- Recommended fix
### Security Controls Review
- Authentication implementation
- Authorization checks
- Input validation
- Output encoding
- Encryption usage
- Logging/monitoring
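For the injection class specifically, the core check is that user input is bound as a parameter rather than concatenated into the query. A minimal sketch using Python's built-in `sqlite3` (the schema and data are illustrative only):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user_unsafe(name: str):
    # Vulnerable: attacker-controlled input is concatenated into SQL.
    return conn.execute(f"SELECT role FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # Safe: the driver binds the parameter; input is never parsed as SQL.
    return conn.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # leaks every row
print(find_user_safe(payload))    # → []
```

The same rule applies to every driver and ORM: flag any query assembled with string formatting from user input.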
### Compliance Check
- Requirements from security_approach.md
- Status (Met/Partially Met/Not Met)
- Gaps to address
### Recommendations
- Critical fixes (must do)
- Improvements (should do)
- Hardening (nice to have)
## Notes
- Prioritize critical vulnerabilities
- Provide actionable fix recommendations
@@ -0,0 +1,189 @@
# Generate Feature Specification
Create a focused behavioral specification that describes **what** the system should do, not **how** it should be built.
## Input parameter
building_block.md
Example: `_docs/iterative/building_blocks/01-dashboard-export-example.md`
## Objective
Generate lean specifications with:
- Clear problem statement and desired outcomes
- Behavioral acceptance criteria in Gherkin format
- Essential non-functional requirements
- Complexity estimation
- Feature dependencies
- No implementation prescriptiveness
## Process
1. Read the building_block.md
2. Analyze the codebase to understand context
3. Generate a behavioral specification using the structure below
4. **DO NOT** include implementation details, file structures, or technical architecture
5. Focus on behavior, user experience, and acceptance criteria
6. Save the specification into `_docs/iterative/feature_specs/spec.md`
Example: `_docs/iterative/feature_specs/01-dashboard-export-example.md`
## Specification Structure
### Header
```markdown
# [Feature Name]
**Status**: Draft | **Date**: [YYYY-MM-DD] | **Feature**: [Brief Feature Description]
**Complexity**: [1|2|3|5|8] points
**Dependencies**: [List dependent features or "None"]
```
### Problem
Clear, concise statement of the problem users are facing.
### Outcome
Measurable or observable goals/benefits (use bullet points).
### Scope
#### Included
What's in scope for this feature (bullet points).
#### Excluded
Explicitly what's **NOT** in scope (bullet points).
### Acceptance Criteria
Each acceptance criterion should be:
- Numbered sequentially (AC-1, AC-2, etc.)
- Include a brief title
- Written in Gherkin format (Given/When/Then)
Example:
**AC-1: Export Availability**
Given the user is viewing the dashboard
When the dashboard loads
Then an "Export to Excel" button should be visible in the filter/actions area
### Non-Functional Requirements
Only include essential non-functional requirements:
- Performance (if relevant)
- Compatibility (if relevant)
- Reliability (if relevant)
Use sub-sections with bullet points.
### Unit tests based on Acceptance Criteria
- Acceptance criteria references
- What should be tested
- Required outcome
### Integration tests based on Acceptance Criteria and/or Non-Functional requirements
- Acceptance criteria references
- Initial data and conditions
- What should be tested
- How system should behave
- List of Non-functional requirements to be met
### Constraints
High-level constraints that guide implementation:
- Architectural patterns (if critical)
- Technical limitations
- Integration requirements
- No breaking changes (if applicable)
### Risks & Mitigation
List key risks with mitigation strategies (if applicable).
Each risk should have:
- *Risk*: Description
- *Mitigation*: Approach
## Complexity Points Guide
- 1 point: Trivial, self-contained, no dependencies
- 2 points: Non-trivial, low complexity, minimal coordination
- 3 points: Multi-step, moderate complexity, potential alignment needed
- 5 points: Difficult, interconnected logic, medium-high risk
- 8 points: High ambiguity, multiple components, very high risk (consider splitting)
## Output Guidelines
**DO:**
- Focus on behavior and user experience
- Use clear, simple language
- Keep acceptance criteria testable
- Include realistic scope boundaries
- Write from the user's perspective
- Include complexity estimation
- Note dependencies on other features
**DON'T:**
- Include implementation details (file paths, classes, methods)
- Prescribe technical solutions or libraries
- Add architectural diagrams or code examples
- Specify exact API endpoints or data structures
- Include step-by-step implementation instructions
- Add "how to build" guidance
## Example
```markdown
# Dashboard Export to Excel
**Status**: Draft | **Date**: 2025-01-XX | **Feature**: Export Dashboard Data to Excel
## Problem
Users currently have no efficient way to export dashboard data for offline analysis, reporting, or sharing. Manual copy-paste is time-consuming, error-prone, and lacks context about active filters.
## Outcome
- Eliminate manual copy-paste workflows
- Enable accurate data sharing with proper context
- Measurable time savings (target: <30s vs. several minutes)
- Improved data consistency for offline analysis
## Scope
### Included
- Export filtered dashboard data to Excel
- Single-click export from dashboard view
- Respect all active filters (status, date range)
### Excluded
- CSV or PDF export options
- Scheduled or automated exports
- Email export functionality
## Acceptance Criteria
**AC-1: Export Button Visibility**
Given the user is viewing the dashboard
When the dashboard loads
Then an "Export to Excel" button should be visible in the actions area
**AC-2: Basic Export Functionality**
Given the user is viewing the dashboard with data
When the user clicks the "Export to Excel" button
Then an Excel file should download to their default location
And the filename should include a timestamp
## Non-Functional Requirements
**Performance**
- Export completes in <2 seconds for up to 1000 records
- Support up to 10,000 records per export
**Compatibility**
- Excel files openable in Microsoft Excel, Google Sheets, and LibreOffice
- Standard Excel format (.xlsx)
## Constraints
- Must respect all currently active filters
- Must follow existing hexagonal architecture patterns
- No breaking changes to existing functionality
## Risks & Mitigation
**Risk 1: Excel File Compatibility**
- *Risk*: Generated files don't open correctly in all spreadsheet applications
- *Mitigation*: Use standard Excel format, test with multiple applications
```
## Implementation Notes
- Use descriptive but concise titles
- Keep specifications focused and scoped appropriately
- Remember: This is a **behavioral spec**, not an implementation plan
**CRITICAL**: Generate the spec file ONLY. Do NOT modify code, create files, or make any implementation changes at this stage.
---
# Generate Jira Task and Git Branch from Spec
Create a Jira ticket from a specification and set up a git branch for development.
## Inputs
- feature_spec.md (required): path to the source spec file.
Example: `@_docs/iterative/feature_specs/spec-export-e2e.md`
- epic <Epic-Id> (required for Jira task creation): creates the Jira task under the given parent epic
Example: /gen_jira_task_and_branch @_docs/iterative/feature_specs/spec.md epic AZ-112
- update <Task-Id> (required for Jira task update): updates an existing Jira task
Example: /gen_jira_task_and_branch @_docs/iterative/feature_specs/spec.md update AZ-151
## Objective
1. Parse the spec to extract **Title**, **Description**, **Acceptance Criteria**, **Technical Details**, **Estimation**.
2. Create a Jira Task under an Epic, or update an existing Jira Task, using **Jira MCP**.
3. Create a git branch for the task.
## Parsing Rules
### Title
Use the first header at the top of the spec.
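For illustration, a minimal shell sketch of the title extraction. It assumes the spec begins with a level-1 markdown header; the sample file here is hypothetical:

```shell
# Hypothetical sample spec; in practice this is the file passed to the command
cat > spec.md <<'EOF'
# Dashboard Export to Excel
**Status**: Draft
EOF

# Take the first level-1 header and strip the leading "# "
title=$(grep -m1 '^# ' spec.md | sed 's/^# //')
echo "$title"   # Dashboard Export to Excel
```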
### Description (Markdown ONLY — no AC/Tech here)
Build from:
- **Purpose & Outcomes → Intent** (bullets)
- **Purpose & Outcomes → Success Signals** (bullets)
- (Optional) one-paragraph summary from **Behavior Change → New Behavior**
> **Do not include** Acceptance Criteria or Technical Details in Description if those fields exist in Jira.
### Estimation
Extract the complexity points from the spec header and add them to the Jira task.
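A sketch of one way to pull the points, assuming the spec header carries a line such as `**Complexity**: 3 points` (the exact label is an assumption):

```shell
# Hypothetical header fragment for illustration
printf '**Status**: Draft\n**Complexity**: 3 points\n' > header.md

# First number on the first line mentioning "complexity"
points=$(grep -m1 -i 'complexity' header.md | grep -o '[0-9]\+' | head -1)
echo "$points"   # 3
```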
### Acceptance Criteria (Gherkin HTML)
From **"Acceptance Criteria (Gherkin)"**, extract the **full Gherkin scenarios** including:
- The `Feature:` line
- Each complete `Scenario:` block with all `Given`, `When`, `Then`, `And` steps
- Convert the entire Gherkin text to **HTML format** preserving structure
- Do NOT create a simple checklist; keep the full Gherkin syntax for test traceability.
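One minimal way to do the HTML conversion is to escape the text and wrap it in `<pre>`, which preserves the Gherkin structure verbatim. The sample scenario is hypothetical, and Jira's rich-text fields may accept richer markup than this:

```shell
# Hypothetical extracted Gherkin block
cat > ac.feature <<'EOF'
Feature: Export Dashboard Data
  Scenario: Basic export
    Given the user is viewing the dashboard
    When the user clicks "Export to Excel"
    Then an Excel file should download
EOF

# Escape &, <, > (ampersand first), then wrap in <pre> so steps and indentation survive
{ printf '<pre>\n'
  sed -e 's/&/\&amp;/g' -e 's/</\&lt;/g' -e 's/>/\&gt;/g' ac.feature
  printf '</pre>\n'; } > ac.html
head -2 ac.html
```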
### Technical Details
Bullets composed of:
- **Inputs → Key constraints**
- **Scope → Included/Excluded** (condensed)
- **Interfaces & Contracts** (names only — UI actions, endpoint names, event names)
## Steps (Agent)
1. **Check current branch**
   - Verify the user is on the `dev` branch
   - If not on `dev`, notify the user: "Please switch to the dev branch before proceeding"
   - Stop execution if not on `dev`
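The branch guard above can be sketched as follows; the repo setup at the top is only there to make the snippet self-contained:

```shell
# Illustrative setup: a fresh repo already on dev
git init -q demo && cd demo
git checkout -qb dev
git -c user.name=ci -c user.email=ci@example.com commit -q --allow-empty -m init

# The actual guard
current=$(git rev-parse --abbrev-ref HEAD)
if [ "$current" != "dev" ]; then
  echo "Please switch to the dev branch before proceeding"
  exit 1
fi
echo "on dev - proceeding"
```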
2. Parse **Title**, **Description**, **AC**, **Tech**, **Estimation** per **Parsing Rules**.
3. **Create** or **Update** the Jira Task with the field mapping above.
- If creating a new Task with Epic provided, add the parent relation
- Do NOT modify the parent Epic work item.
4. **Create git branch**
```bash
# Stash only if the tree is dirty, so `pop` cannot grab an unrelated stash
if [ -n "$(git status --porcelain)" ]; then dirty=1; git stash push -u; fi
git checkout -b {taskId}-{taskNameSlug}
if [ -n "$dirty" ]; then git stash pop; fi
```
- {taskId} is Jira task Id (lowercase), e.g., `az-122`
- {taskNameSlug} is kebab-case slug from task title, e.g., `progressive-search-system`
- Full branch name example: `az-122-progressive-search-system`
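The naming rules above can be sketched as a small slugify step (collapsing runs of non-alphanumerics to hyphens is an assumption about how punctuation is handled):

```shell
task_id="AZ-122"
task_title="Progressive Search System"

# Lowercase, collapse runs of non-alphanumerics to single hyphens, trim edges
slug=$(printf '%s' "$task_title" | tr '[:upper:]' '[:lower:]' \
  | sed -e 's/[^a-z0-9][^a-z0-9]*/-/g' -e 's/^-//' -e 's/-*$//')
branch="$(printf '%s' "$task_id" | tr '[:upper:]' '[:lower:]')-${slug}"
echo "$branch"   # az-122-progressive-search-system
```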
5. Rename the spec and its corresponding building block:
   - Spec: `_docs/iterative/feature_specs/{taskId}-{taskNameSlug}.md`
   - Building block: `_docs/iterative/building_blocks/{taskId}-{taskNameSlug}.md`
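A self-contained sketch of the rename step. Plain `mv` is used here so the snippet runs anywhere; in a real repo `git mv` keeps the rename tracked. The sample layout is illustrative:

```shell
# Illustrative layout matching the paths above
mkdir -p _docs/iterative/feature_specs _docs/iterative/building_blocks
touch _docs/iterative/feature_specs/spec.md _docs/iterative/building_blocks/spec.md

branch="az-122-progressive-search-system"
mv _docs/iterative/feature_specs/spec.md "_docs/iterative/feature_specs/${branch}.md"
mv _docs/iterative/building_blocks/spec.md "_docs/iterative/building_blocks/${branch}.md"
ls _docs/iterative/feature_specs
```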
## Guardrails
- No source code edits; only the Jira task, file renames, and the git branch.
- If Jira creation/update fails, do not create branch or move files.
- If AC/Tech fields are absent in Jira, append to Description.
- **CRITICAL**: Extract the FULL Gherkin scenarios with all steps - do NOT create simple checklist items.
- Do not edit parent Epic.
- Always check for dev branch before proceeding.
---
# Merge and Deploy Feature
Complete the feature development cycle by creating PR, merging, and updating documentation.
## Input parameters
- task_id (required): Jira task ID
Example: /gen_merge_and_deploy AZ-122
## Prerequisites
- All tests pass locally
- Code review completed (or ready for review)
- Definition of Done checklist reviewed
## Steps (Agent)
### 1. Verify Branch Status
```bash
git status
git log --oneline -5
```
- Confirm you are on the feature branch (e.g., `az-122-feature-name`)
- Confirm all changes are committed
- If uncommitted changes exist, prompt the user to commit first
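These checks can be made programmatic; a sketch assuming the `az-<number>-*` branch convention from the companion command (the repo setup at the top exists only to make the snippet self-contained):

```shell
# Illustrative setup: a repo on a feature branch with everything committed
git init -q wt && cd wt
git checkout -qb az-122-feature-name
git -c user.name=ci -c user.email=ci@example.com commit -q --allow-empty -m init

branch=$(git rev-parse --abbrev-ref HEAD)
case "$branch" in
  az-[0-9]*) echo "on feature branch: $branch" ;;
  *) echo "not a feature branch: $branch"; exit 1 ;;
esac
if [ -n "$(git status --porcelain)" ]; then
  echo "uncommitted changes - commit before proceeding"
  exit 1
fi
echo "clean tree - ready for pre-merge checks"
```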
### 2. Run Pre-merge Checks
**User action required**: Run your project's test and lint commands before proceeding.
```bash
# Dry-run the merge to surface conflicts, then abort so the tree is left unchanged
git fetch origin dev
git merge origin/dev --no-commit --no-ff || echo "Merge conflicts detected - resolve before proceeding"
git merge --abort 2>/dev/null || true
```
- [ ] All tests pass (run project-specific test command)
- [ ] No linting errors (run project-specific lint command)
- [ ] No merge conflicts (or resolve them)
### 3. Update Documentation
#### CHANGELOG.md
Add an entry under the "Unreleased" section:
```markdown
### Added/Changed/Fixed
- [TASK_ID] Brief description of change
```
#### Update Jira
- Add comment with summary of implementation
- Link any related PRs or documentation
### 4. Create Pull Request
#### PR Title Format
`[TASK_ID] Brief description`
#### PR Body (from template)
```markdown
## Description
[Summary of changes]
## Related Issue
Jira ticket: [TASK_ID](link)
## Type of Change
- [ ] Bug fix
- [ ] New feature
- [ ] Refactoring
## Checklist
- [ ] Code follows project conventions
- [ ] Self-review completed
- [ ] Tests added/updated
- [ ] All tests pass
- [ ] Documentation updated
## Breaking Changes
[None / List breaking changes]
## Deployment Notes
[None / Special deployment considerations]
## Rollback Plan
[Steps to rollback if issues arise]
## Testing
[How to test these changes]
```
### 5. Post-merge Actions
After PR is approved and merged:
```bash
# Switch to dev branch
git checkout dev
git pull origin dev
# Delete feature branch
git branch -d {feature_branch}
git push origin --delete {feature_branch}
```
### 6. Update Jira Status
- Move ticket to "Done"
- Add link to merged PR
- Log time spent (if tracked)
## Guardrails
- Do NOT merge if tests fail
- Do NOT merge if there are unresolved review comments
- Do NOT delete branch before merge is confirmed
- Always update CHANGELOG before creating PR
## Output
- PR created/URL provided
- CHANGELOG updated
- Jira ticket updated
- Feature branch cleaned up (post-merge)