review of all AI-dev system #01

add refactoring phase
complete implementation phase
fix wrong links and file names
This commit is contained in:
Oleksandr Bezdieniezhnykh
2025-12-09 12:11:29 +02:00
parent d5c036e6f7
commit 73cbe43397
35 changed files with 1215 additions and 206 deletions
@@ -1,29 +1,36 @@
# research acceptance criteria
# Research Acceptance Criteria
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Input data: `@_docs/00_problem/input_data`. They are for reference only, but serve as an example of the real data
- Restrictions: `@_docs/00_problem/restrictions.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Security approach: `@_docs/00_problem/security_approach.md`
## Role
You are a professional software architect
## Task
- Thorougly research in internet about the problem and how realistic these acceptance criteria are.
- Thoroughly research the problem on the internet and how realistic these acceptance criteria are.
- Check how critical each criterion is.
- Find out more acceptance criteria for this specific domain.
- Research the impact of each value in the acceptance criteria on the whole system quality.
- Verify your findings with authoritative sources (official docs, papers, benchmarks).
- Consider cost/budget implications of each criterion.
- Consider timeline implications - how long would it take to meet each criterion.
## Output format
Assess acceptable ranges for each value in each acceptance criterion in the state-of-the-art solutions, and propose corrections in the next table:
- Acceptance criterion name
- Our values
- Your researched criterion values
- Cost/Timeline impact
- Status: Is the criterion added by your research to our system, modified, or removed
### Assess the restrictions we've put on the system. Are they realistic? Should we add more strict restrictions, or vise versa, add more requirements in restrictions to use our system. Propose corrections in the next table:
### Assess the restrictions we've put on the system. Are they realistic? Should we add more strict restrictions, or vice versa, add more requirements in restrictions to use our system. Propose corrections in the next table:
- Restriction name
- Our values
- Your researched restriction values
- Cost/Timeline impact
- Status: Is a restriction added by your research to our system, modified, or removed
@@ -1,28 +1,37 @@
# research problem
# Research Problem
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Input data: `@_docs/00_problem/input_data`. They are for reference only, but serve as an example of the real data
- Restrictions: `@_docs/00_problem/restrictions.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Security approach: `@_docs/00_problem/security_approach.md`
## Role
You are a professional researcher and software architect
## Task
- Thorougly research in internet about the problem and all the possible ways to solve a problem, and split it to components.
- Research existing/competitor solutions for similar problems.
- Thoroughly research the problem on the internet and all the possible ways to solve it, and split it into components.
- Then research all the possible ways to solve components, and find out the most efficient state-of-the-art solutions.
Be concise in formulating. The fewer words, the better, but do not miss any important details.
- Verify that suggested tools/libraries actually exist and work as described.
- Include security considerations in each component analysis.
- Provide rough cost estimates for proposed solutions.
Be concise in formulating. The fewer words, the better, but do not miss any important details.
## Output format
Produce the resulting solution draft in the next format:
Produce the resulting solution draft in the next format:
- Short Product solution description. Brief component interaction diagram.
- Existing/competitor solutions analysis (if any).
- Architecture solution that meets restrictions and acceptance criteria.
For each component, analyze the best possible solutions, and form a comparison table.
Each possible component solution is a row, with the next columns:
- Tools (library, platform) to solve component tasks
- Advantages of this solution. For example, LiteSAM AI feature is picked for UAV - Satellite matching finding, and it make its job perfectly in milliseconds timeframe.
- Limitations of this solution. For example, LiteSAM AI feature matcher requires to work efficiently on RTX Gpus and since it is sparsed, the quality a bit lower than densed feature matcher.
- Requirements for this solution. For example, LiteSAM AI feature matcher requires that photos it comparing to be aligned by rotation with no more than 45 degree difference. This requires additional preparation step for pre-rotating either UAV either Satellite images in order to be aligned.
- Advantages of this solution
- Limitations of this solution
- Requirements for this solution
- Security considerations
- Estimated cost
- How well it fits the problem component to be solved, and the whole solution
- Testing strategy. Research how to cover the system with tests in order to meet all the acceptance criteria. Form a list of integration functional tests and non-functional tests.
@@ -1,24 +1,27 @@
# Solution draft assesment
# Solution Draft Assessment
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Input data: `@_docs/00_problem/input_data`. They are for reference only, but serve as an example of the real data
- Restrictions: `@_docs/00_problem/restrictions.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Security approach: `@_docs/00_problem/security_approach.md`
- Existing solution draft: `@_docs/01_solution/solution_draft.md`
## Role
You are a professional software architect
## Task
- Thorougly research in internet about the problem and identify all potential weak points and problems.
- Thoroughly research the problem on the internet and identify all potential weak points and problems.
- Identify security weak points and vulnerabilities.
- Identify performance bottlenecks.
- Address these problems and find out ways to solve them.
- Based on your findings, form a new solution draft in the same format.
## Output format
- Put here all new findings, what was updated, replaced, or removed from the previous solution in the next table:
- Old component solution
- Weak point
- Weak point (functional/security/performance)
- Solution (component's new solution)
- Form the new solution draft. In the updated report, do not put "new" marks, do not compare to the previous solution draft, just make a new solution as if from scratch. Put it in the next format:
@@ -27,9 +30,11 @@
For each component, analyze the best possible solutions, and form a comparison table.
Each possible component solution is a row, with the next columns:
- Tools (library, platform) to solve component tasks
- Advantages of this solution. For example, LiteSAM AI feature is picked for UAV - Satellite matching finding, and it make its job perfectly in milliseconds timeframe.
- Limitations of this solution. For example, LiteSAM AI feature matcher requires to work efficiently on RTX Gpus and since it is sparsed, the quality a bit lower than densed feature matcher.
- Requirements for this solution. For example, LiteSAM AI feature matcher requires that photos it comparing to be aligned by rotation with no more than 45 degree difference. This requires additional preparation step for pre-rotating either UAV either Satellite images in order to be aligned.
- Advantages of this solution
- Limitations of this solution
- Requirements for this solution
- Security considerations
- Performance characteristics
- How well it fits the problem component to be solved, and the whole solution
- Testing strategy. Research how to cover the system with tests in order to meet all the acceptance criteria. Form a list of integration functional tests and non-functional tests.
@@ -0,0 +1,37 @@
# Security Research
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Restrictions: `@_docs/00_problem/restrictions.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Security approach: `@_docs/00_problem/security_approach.md`
- Solution: `@_docs/01_solution/solution.md`
## Role
You are a security architect
## Task
- Review solution architecture against security requirements from `security_approach.md`
- Identify attack vectors and threat model for the system
- Define security requirements per component
- Propose security controls and mitigations
## Output format
### Threat Model
- Asset inventory (what needs protection)
- Threat actors (who might attack)
- Attack vectors (how they might attack)
### Security Requirements per Component
For each component:
- Component name
- Security requirements
- Proposed controls
- Risk level (High/Medium/Low)
### Security Controls Summary
- Authentication/Authorization approach
- Data protection (encryption, integrity)
- Secure communication
- Logging and monitoring requirements
@@ -1,10 +1,11 @@
# decompose
# Decompose
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Input data: `@_docs/00_problem/input_data`. They are for reference only, but serve as an example of the real data
- Restrictions: `@_docs/00_problem/restrictions.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Security approach: `@_docs/00_problem/security_approach.md`
- Full Solution Description: `@_docs/01_solution/solution.md`
## Role
@@ -19,38 +20,63 @@
- When you have a full understanding of exactly how the components will interact with each other, create the components
## Output Format
### Components Decomposition
Store description of each component to the file `_docs/02_components/[##]_[component_name]/[##]._component_[component_name].md` with the next structure:
1. High-level overview
- **Purpose:** A concise summary of what this component does and its role in the larger system.
- **Architectural Pattern:** Identify the design patterns used (e.g., Singleton, Observer, Factory).
2. Logic & Architecture
- **Control Flow Diagram:**
- Generate a `graph TD` or `sequenceDiagram` in Mermaid syntax.
- Generate a draw.io components diagram showing relations between components.
3. API Reference. Create a table for eac function or method with the next columns:
2. API Reference. Create a table for each function or method with the next columns:
- Name
- Description
- Input
- Output
- Description of input and output data in case it is not obvious
- Possible test cases for the method
4. Implementation Details
3. Implementation Details
- **Algorithmic Complexity:** Analyze Time (Big O) and Space complexity for critical methods.
- **State Management:** Explain how this component handles state (local vs. global).
- **Dependencies:** List key external libraries and their purpose here.
5. Tests
- **Error Handling:** Define error handling strategy for this component.
4. Tests
- Integration tests for the component if needed.
- Non-functional tests for the component if needed.
6. Extensions and Helpers
5. Extensions and Helpers
- Store Extensions and Helpers that support functionality across multiple components in a separate folder `_docs/02_components/helpers`.
7. Caveats & Edge Cases
6. Caveats & Edge Cases
- Known limitations
- Potential race conditions
- Potential performance bottlenecks.
### Dependency Graph
- Create component dependency graph showing implementation order
- Identify which components can be implemented in parallel
### API Contracts
- Define interfaces/contracts between components
- Specify data formats exchanged
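As an illustration of such a contract, a minimal Python sketch — the component and field names here are hypothetical, not part of the solution:

```python
from dataclasses import dataclass
from typing import Protocol, runtime_checkable

# Hypothetical data format exchanged between two components.
@dataclass(frozen=True)
class MatchResult:
    source_id: str
    target_id: str
    confidence: float  # expected range: 0.0 .. 1.0

# Contract the consuming component depends on; any implementation
# of the matching component must satisfy this interface.
@runtime_checkable
class Matcher(Protocol):
    def match(self, source_id: str, target_id: str) -> MatchResult: ...

class StubMatcher:
    """Trivial stand-in used only to illustrate the contract."""
    def match(self, source_id: str, target_id: str) -> MatchResult:
        return MatchResult(source_id, target_id, confidence=1.0)
```

Because both sides depend only on the interface and the DTO, components can be implemented in parallel once the contract is fixed.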
### Logging Strategy
- Define global logging approach for the system
- Log levels, format, storage
For the whole system, make these diagrams and store them in `_docs/02_components`:
### Logic & Architecture
- Generate draw.io components diagrams showing relations between components.
- Make sure lines do not intersect each other, or at least try to minimize intersections.
- Group semantically coherent components into groups
- Leave enough space for nice alignment of the component boxes
- Put external users of the system closer to the component blocks they use
- Generate a Mermaid flowchart diagram for each of the main control flows
- Identify the multiple flows the system can operate in, and generate a flowchart diagram per flow
- Flows can relate to each other
## Notes
- Strongly follow Single Responsibility Principle during creation of components.
- Follow the dumb code - smart data principle. Do not overcomplicate
- Components should be semantically coherents. Do not spread similar functionality across multiple components
- Components should be semantically coherent. Do not spread similar functionality across multiple components
- Do not put any code yet, only names, input and output.
- Ask as many questions as possible to clarify all uncertainties.
@@ -1,10 +1,11 @@
# component assesment
# Component Assessment
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Input data: `@_docs/00_problem/input_data`. They are for reference only, but serve as an example of the real data
- Restrictions: `@_docs/00_problem/restrictions.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Security approach: `@_docs/00_problem/security_approach.md`
- Full Solution Description: `@_docs/01_solution/solution.md`
## Role
@@ -12,14 +13,18 @@
## Task
- Read carefully all the documents above
- check all the components @02_components how coherent they are
- Check how coherent all the components in @02_components are
- Follow interaction logic and flows, try to find some potential problems there
- Try to find some missing interaction or circular dependencies
- Check that all the components follow the Single Responsibility Principle
- Check that all the components follow the dumb code - smart data principle, so the resulting code is not overcomplicated
- Check for security vulnerabilities in component design
- Check for performance bottlenecks
- Verify API contracts are consistent across components
## Output
Form a list of problems with fixes in the next format:
- Component
- Problem type (Architectural/Security/Performance/API)
- Problem, reason
- Fix or potential fixes
@@ -0,0 +1,36 @@
# Security Check
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Restrictions: `@_docs/00_problem/restrictions.md`
- Security approach: `@_docs/00_problem/security_approach.md`
- Full Solution Description: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
## Role
You are a security architect
## Task
- Review each component against security requirements
- Identify security gaps in component design
- Verify security controls are properly distributed across components
- Check for common vulnerabilities (injection, auth bypass, data leaks)
## Output
### Security Assessment per Component
For each component:
- Component name
- Security gaps found
- Required security controls
- Priority (High/Medium/Low)
### Cross-Component Security
- Authentication flow assessment
- Authorization gaps
- Data flow security (encryption in transit/at rest)
- Logging for security events
### Recommendations
- Required changes before implementation
- Security helpers/components to add
@@ -1,4 +1,4 @@
# generate Jira Epics
# Generate Jira Epics
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
@@ -6,12 +6,15 @@
- Restrictions: `@_docs/00_problem/restrictions.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Full Solution Description: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
## Role
You are a world class product manager
## Task
- Generate Jira Epics from the Components Using Jira MCP
- Generate Jira Epics from the Components using Jira MCP
- Order epics by dependency (which must be done first)
- Include rough effort estimation per epic
- Ensure each epic has clear goal and acceptance criteria, verify it with acceptance criteria
- Generate a draw.io components diagram, based on the previous diagram, showing relations between components and the current Jira Epic number corresponding to each component.
@@ -29,17 +32,20 @@
- Assumptions
- System design specifics, input material quality, data structures, network availability etc
- Dependencies
- Other epics that must be completed first
- Other components, services, hardware, environments, certificates, data sources etc.
- Effort Estimation
- T-shirt size (S/M/L/XL) or story points range
- Users / Consumers
- Internal, External, Systems, Short list of the key use cases.
- Requirements
- Functional - API expectations, events, data handling, idempotency, retry behavior etc
- Non-functional -Availability, latency, throughput, scalability, processing limits, data retention etc
- Security/Compliance - Authentication, encryption, secrets, logging, SOC2/ISO is applicable
- Non-functional - Availability, latency, throughput, scalability, processing limits, data retention etc
- Security/Compliance - Authentication, encryption, secrets, logging, SOC2/ISO if applicable
- Design & Architecture (links)
- High-level diagram link , Data flow, sequence diagrams, schemas etc
- High-level diagram link, Data flow, sequence diagrams, schemas etc
- Definition of Done (Epic-level)
- Feature list per epic scope
- Feature list per epic scope
- Automated tests (unit/integration/e2e) + minimum coverage threshold met
- Runbooks if applicable
- Documentation updated
@@ -57,6 +63,5 @@
- Tasks
- Technical enablers
## Notes
- Be as much concise as possible in formulating epics. The less words with the same meaning - the better epic is.
## Notes
- Be as concise as possible when formulating epics. The fewer words with the same meaning, the better the epic.
@@ -1,10 +1,11 @@
# generate Tests
# Generate Tests
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Input data: `@_docs/00_problem/input_data`. They are for reference only, but serve as an example of the real data
- Restrictions: `@_docs/00_problem/restrictions.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Security approach: `@_docs/00_problem/security_approach.md`
- Full Solution Description: `@_docs/01_solution/solution.md`
## Role
@@ -12,10 +13,11 @@
## Task
- Compose tests according to the test strategy
- Cover all the the criteria with tests specs
- Cover all the criteria with tests specs
- Minimum coverage target: 75%
## Output
Store all tests specs to the files `_docs/03_tests/[##]_[test_name]_spec.md`
Store all tests specs to the files `_docs/02_tests/[##]_[test_name]_spec.md`
Types and structures of tests:
- Integration tests
@@ -25,7 +27,19 @@
- Expected result
- Maximum expected time to get result
- Acceptance tests:
- Performance tests
- Summary
- Load/stress scenario description
- Expected throughput/latency
- Resource limits
- Security tests
- Summary
- Attack vector being tested
- Expected behavior
- Pass/Fail criteria
- Acceptance tests
- Summary
- Detailed description
- Preconditions for tests
@@ -35,6 +49,11 @@
...
- StepN - Expected resultN
- Test Data Management
- Required test data
- Setup/Teardown procedures
- Data isolation strategy
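To make the spec structure above concrete, here is a hedged sketch of one such test using Python's stdlib `unittest` — the `process` function, test data, and time budget are invented for illustration only:

```python
import time
import unittest

MAX_SECONDS = 2.0  # the spec's "maximum expected time to get result"

def process(records):
    """Hypothetical stand-in for the system under test."""
    return [r["id"] for r in records]

class ProcessingIntegrationTest(unittest.TestCase):
    def setUp(self):
        # Setup: create isolated test data (data isolation strategy)
        self.records = [{"id": i} for i in range(3)]

    def tearDown(self):
        # Teardown: clean up so no state leaks between tests
        self.records.clear()

    def test_process_within_time_budget(self):
        start = time.monotonic()
        result = process(self.records)
        self.assertEqual(result, [0, 1, 2])                     # expected result
        self.assertLess(time.monotonic() - start, MAX_SECONDS)  # non-functional bound
```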
## Notes
- Do not put any code yet
- Ask as many questions as needed.
@@ -1,9 +1,12 @@
# generate Features for the provided component spec
# Generate Features for the provided component spec
## Input parameters
- component_spec.md. Required. Do NOT proceed if it is NOT provided!
- parent Jira Epic in the format AZ-###. Required. Do NOT proceed if it is NOT provided!
## Prerequisites
- Jira Epics must be created first (step 2.20)
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Input data: `@_docs/00_problem/input_data`. They are for reference only, but serve as an example of the real data
@@ -20,14 +23,17 @@
- Split into many features only if it is necessary and would make implementation easier
- Do not create features of other components, create *only* features of this exact component
- Each feature should be atomic; it could contain no APIs, or a list of semantically connected APIs
- After splitting asses yourself
- After splitting assess yourself
- Add complexity points estimation (1, 2, 3, 5, 8) per feature
- Note feature dependencies (some features may be independent)
- Use `@gen_feature_spec.md` as complete guidance on how to generate the feature spec
- Generate Jira tasks per each feature using this spec `@gen_jira_task.md` usint Jira MCP.
- Generate Jira tasks per each feature using this spec `@gen_jira_task_and_branch.md` using Jira MCP.
## Output
- The file name of the feature specs should follow this format: `[component's number ##].[feature's number ##]_feature_[feature_name].md`.
- The structure of the feature spec should follow this spec `@gen_feature_spec.md`
- The structure of the Jira task should follow this spec: `@gen_jira_task.md`
- The structure of the Jira task should follow this spec: `@gen_jira_task_and_branch.md`
- Include dependency notes (which features can be done in parallel)
## Notes
- Do NOT generate any code yet, only brief explanations what should be done.
@@ -1,10 +1,11 @@
# Create initial structure
# Create Initial Structure
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`.
- Input data: `@_docs/00_problem/input_data`. They are for reference only, but serve as an example of the real data.
- Restrictions: `@_docs/00_problem/restrictions.md`.
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`.
- Security approach: `@_docs/00_problem/security_approach.md`.
- Full Solution Description: `@_docs/01_solution/solution.md`
- Components with Features specifications: `@_docs/02_components`
@@ -13,24 +14,31 @@
## Task
- Read carefully all the component specs and features in the components folder: `@_docs/02_components`
- Investgate in internet what are the best way and tools to implement components and its features
- Investigate on the internet the best ways and tools to implement the components and their features
- Make a plan for creating the initial structure:
- DTOs
- component's interfaces
- empty implementations
- helpers - empty implementations or interfaces
- add README.md, describe the project by @_docs/01_solution/solution.md
- Create a separate project for the integration tests
- Add .gitignore appropriate for the project's language/framework
- Add .env.example with required environment variables
- Add CI/CD skeleton (GitHub Actions, GitLab CI, or appropriate)
- Add database migration setup if applicable
- Add README.md, describe the project by @_docs/01_solution/solution.md
- Create a separate folder for the integration tests (not a separate repo)
## Example
The structure should look roughly like this:
-
- .gitignore
- .env.example
- .github/workflows/ (or .gitlab-ci.yml)
- api
- components
- component1_folder
- component2_folder
- ...
- db
- migrations/
- helpers
- models
- tests
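For a Python project, for instance, the DTO / interface / empty-implementation split from the plan could start out like this — all names here are hypothetical:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

# DTO: plain data, no behavior ("dumb code - smart data")
@dataclass
class UserDto:
    id: int
    name: str

# Component interface, fixed before any implementation exists
class UserRepository(ABC):
    @abstractmethod
    def get_user(self, user_id: int) -> UserDto: ...

# Empty implementation, filled in later during the implementation phase
class DbUserRepository(UserRepository):
    def get_user(self, user_id: int) -> UserDto:
        raise NotImplementedError("implemented later, per the component spec")
```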
@@ -1,4 +1,4 @@
# Implement component and features by spec
# Implement Component and Features by Spec
## Input parameter
component_folder
@@ -8,6 +8,7 @@
- Input data: `@_docs/00_problem/input_data`. They are for reference only, but serve as an example of the real data.
- Restrictions: `@_docs/00_problem/restrictions.md`.
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`.
- Security approach: `@_docs/00_problem/security_approach.md`.
- Full Solution Description: `@_docs/01_solution/solution.md`
## Role
@@ -16,16 +17,19 @@
## Task
- Read carefully initial data and component spec in the component_folder: `@_docs/02_components/[##]_[component_name]/[##]._component_[component_name]`
- Read carefully all the component features in the component_folder: `@_docs/02_components/[##]_[component_name]/[##].[##]_feature_[feature_name]`
- Investgate in internet what are the best way and tools to implement component and its features
- During the investigation is is possible that found solutions required architecturally reorganization of the features. It is ok, propose that and if user agrees, include reorganization in the build feature plan. Also it is possible that interface could be changed or even removed or added new one. It is ok.
- Investigate on the internet the best ways and tools to implement the component and its features
- During the investigation, it is possible that the solutions found require an architectural reorganization of the features. That is ok: propose it, and if the user agrees, include the reorganization in the feature build plan. It is also possible that an interface could be changed, removed, or a new one added. That is ok.
- Analyze the existing codebase and get full context for the component's implementation
- Make sure each feature is connected and communicates properly with other features and existing code
- If a component has a dependency on another one, create a temporary mock for the dependency
- For each feature:
- Implement the feature
- Implement error handling per defined strategy
- Implement logging per defined strategy
- Implement all unit tests from the Test cases description, and add test result checks to the plan steps
- Implement all integration tests for the feature, and add test result checks to the plan steps. Analyze existing tests, and decide whether to create a new one or add to an existing one
- Add to the implementation plan a description of all the component's integration tests, and add test result checks to the plan steps
- After component is complete, replace mocks with real implementations (mock cleanup)
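A sketch of the temporary-mock approach in Python — the `GeoLookup` dependency and `AccessPolicy` feature are hypothetical examples, not part of the solution:

```python
from typing import Protocol

class GeoLookup(Protocol):
    """Interface of a dependency owned by a not-yet-implemented component."""
    def country_for(self, ip: str) -> str: ...

class MockGeoLookup:
    """Temporary mock; swapped for the real implementation during mock cleanup."""
    def country_for(self, ip: str) -> str:
        return "US"  # canned answer, good enough for early integration

class AccessPolicy:
    """Feature under implementation; depends only on the GeoLookup contract."""
    def __init__(self, geo: GeoLookup) -> None:
        self._geo = geo

    def is_allowed(self, ip: str) -> bool:
        return self._geo.country_for(ip) in {"US", "UA"}
```

Because the feature depends only on the protocol, replacing the mock with the real implementation later requires no changes to the feature's code.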
## Notes
- Ask as many questions as needed; it should be completely clear how to implement each feature
@@ -0,0 +1,39 @@
# Code Review
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`.
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`.
- Security approach: `@_docs/00_problem/security_approach.md`.
- Full Solution Description: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
## Role
You are a senior software engineer performing code review
## Task
- Review implemented code against component specifications
- Check code quality: readability, maintainability, SOLID principles
- Check error handling consistency
- Check logging implementation
- Check security requirements are met
- Check test coverage is adequate
- Identify code smells and technical debt
## Output
### Issues Found
For each issue:
- File/Location
- Issue type (Bug/Security/Performance/Style/Debt)
- Description
- Suggested fix
- Priority (High/Medium/Low)
### Summary
- Total issues by type
- Blocking issues that must be fixed
- Recommended improvements
## Notes
- Can also use Cursor's built-in review feature
- Focus on critical issues first
@@ -0,0 +1,42 @@
# CI/CD Setup
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`.
- Restrictions: `@_docs/00_problem/restrictions.md`.
- Full Solution Description: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
## Role
You are a DevOps engineer
## Task
- Review project structure and dependencies
- Configure CI/CD pipeline with stages:
- Build
- Lint
- Unit tests
- Integration tests
- Security scan (if applicable)
- Deploy to staging (if applicable)
- Configure environment variables handling
- Set up test reporting
- Configure branch protection rules recommendations
## Output
### Pipeline Configuration
- Pipeline file(s) created/updated
- Stages description
- Triggers (on push, PR, etc.)
### Environment Setup
- Required secrets/variables
- Environment-specific configs
### Deployment Strategy
- Staging deployment steps
- Production deployment steps (if applicable)
## Notes
- Use project-appropriate CI/CD tool (GitHub Actions, GitLab CI, Azure DevOps, etc.)
- Keep pipeline fast - parallelize where possible
@@ -1,4 +1,4 @@
# Implement tests by spec
# Implement Tests by Spec
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`.
@@ -13,14 +13,22 @@
## Task
- Read carefully all the initial data and understand whole system goals
- Check that a separate project in a separate folder is existing (should be generated by @3.05_implement_initial_structure.md)
- Check that a separate folder for tests is existing (should be generated by @3.05_implement_initial_structure.md)
- Set up Docker environment for testing:
- Create docker-compose.yml for test environment
- Configure test database container
- Configure application container
- For each test description:
- Prepare all the data necessary for testing, or check that it already exists
- Check existing integration tests, and if a similar test already exists, update it
- Implement the test by specification
- Run system and integration tests and in 2 separate docker containers
- Implement test data management:
- Setup fixtures/factories
- Teardown/cleanup procedures
- Run system and integration tests in docker containers
- Fix all problems until tests pass. If one or more tests failed due to missing data from the user, an API, or another system, ask the developer for it.
- Repeat the test cycle until no tests fail, iteratively fixing found bugs. Ask the user for additional information if something new appears
- Ensure tests run in CI pipeline
- Compose the final test results in a CSV with the next format:
- Test filename
- Execution time
@@ -28,3 +36,4 @@
## Notes
- Ask as many questions as needed; it should be completely clear how to implement each feature
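The final CSV composition step could be sketched as follows — only the two columns visible in the spec are included here, and the exact field names are assumptions:

```python
import csv

RESULT_FIELDS = ["test_filename", "execution_time"]  # columns from the spec

def write_test_results(rows: list[dict], path: str) -> None:
    """Compose the final test results CSV, one row per executed test."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=RESULT_FIELDS)
        writer.writeheader()
        writer.writerows(rows)
```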
@@ -0,0 +1,29 @@
# User Input for Refactoring
## Task
Collect and document goals for the refactoring project.
## User should provide:
Create in `_docs/00_problem`:
- `problem_description.md`:
- What the system currently does
- What changes/improvements are needed
- Pain points in current implementation
- `acceptance_criteria.md`: Success criteria for the refactoring
- `security_approach.md`: Security requirements (if applicable)
## Example
- `problem_description.md`
Current system: E-commerce platform with monolithic architecture.
Current issues: Slow deployments, difficult scaling, tightly coupled modules.
Goals: Break into microservices, improve test coverage, reduce deployment time.
- `acceptance_criteria.md`
- All existing functionality preserved
- Test coverage increased from 40% to 75%
- Deployment time reduced by 50%
- No circular dependencies between modules
## Output
Store user input in `_docs/00_problem/` folder for reference by subsequent steps.
@@ -1,54 +1,48 @@
# Create a comprehensive documentation from existing codebase
# Create Documentation from Existing Codebase
## Role
You are a Principal Software Architect and Technical Communication Expert. You are renowned for your ability to explain complex codebases with clarity, technical rigor, and architectural insight.
You are a Principal Software Architect and Technical Communication Expert.
## Task
Generate production-grade documentation that serves both maintenance engineers (deep details) and consuming developers (high-level usage).
Generate production-grade documentation from existing code that serves both maintenance engineers and consuming developers.
## Core Directives:
- Truthfulness: Never invent features. Ground every claim in the provided code.
- Clarity: Use professional, third-person objective tone. Avoid fluff ("This code does...").
- Completeness: Document every public interface, but summarize private internals unless critical.
- Visuals: Always visualize complex logic using Mermaid.js.
- Clarity: Use professional, third-person objective tone.
- Completeness: Document every public interface, summarize private internals unless critical.
- Visuals: Visualize complex logic using Mermaid.js.
## Process:
1. Analyze the project structure, form rough understanding from directories, projects and files
2. Go file by file, analyze each method, convert to short API reference description, form rough flow diagram
3. Analyze summaries and code, analyze connections between components, form detailed structure
## Output Format
Store description of each component to `_docs/02_components/[##]_[component_name]/[##]._component_[component_name].md`:
1. High-level overview
- **Purpose:** Component role in the larger system.
- **Architectural Pattern:** Design patterns used.
2. Logic & Architecture
- **Control Flow Diagram:**
  - Mermaid `graph TD` or `sequenceDiagram`
  - draw.io components diagram
3. API Reference table:
  - Name, Description, Input, Output
  - Test cases for the method
4. Implementation Details
- **Algorithmic Complexity:** Big O for critical methods.
- **State Management:** Local vs. global state.
- **Dependencies:** External libraries.
5. Tests
- Integration tests needed
- Non-functional tests needed
6. Extensions and Helpers
- Store to `_docs/02_components/helpers`
7. Caveats & Edge Cases
- Known limitations
- Race conditions
- Performance bottlenecks
## Notes
Do final checks:
- Verify all parameters are captured
- Verify Mermaid diagrams are syntactically correct
- Explain why the code works, not just how
@@ -0,0 +1,36 @@
# Form Solution with Flows
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Generated component docs: `@_docs/02_components`
## Role
You are a professional software architect
## Task
- Review all generated component documentation
- Synthesize into a cohesive solution description
- Create flow diagrams showing how components interact
- Identify the main use cases and their flows
## Output
### Solution Description
Store to `_docs/01_solution/solution.md`:
- Short Product solution description
- Component interaction diagram (draw.io)
- Components overview and their responsibilities
### Flow Diagrams
Store to `_docs/02_components/system_flows.md`:
- Mermaid Flowchart diagrams for main control flows:
- Create flow diagram per major use case
- Show component interactions
- Note data transformations
- Flows can relate to each other
- Show entry points, decision points, and outputs
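As an illustration, a hedged sketch of one such use-case flow in Mermaid; all component names here are hypothetical placeholders, not taken from the actual codebase:

```mermaid
flowchart TD
    Entry([User submits order]) --> Validate[OrderValidator]
    Validate -->|invalid| Reject[Return validation errors]
    Validate -->|valid| Pay[PaymentService]
    Pay --> Persist[(Order store)]
    Persist --> Notify[NotificationService]
    Notify --> Done([Order confirmed])
```

Note how the sketch shows an entry point, one decision point, and both outputs, matching the checklist above.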
## Notes
- Focus on documenting what exists, not what should be
@@ -0,0 +1,39 @@
# Deep Research of Approaches
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Current solution: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
## Role
You are a professional researcher and software architect
## Task
- Analyze current implementation patterns
- Research modern approaches for similar systems
- Identify what could be done differently
- Suggest improvements based on state-of-the-art practices
## Output
### Current State Analysis
- Patterns currently used
- Strengths of current approach
- Weaknesses identified
### Alternative Approaches
For each major component/pattern:
- Current approach
- Alternative approach
- Pros/Cons comparison
- Migration effort (Low/Medium/High)
### Recommendations
- Prioritized list of improvements
- Quick wins (low effort, high impact)
- Strategic improvements (higher effort)
## Notes
- Focus on practical, achievable improvements
- Consider existing codebase constraints
@@ -0,0 +1,40 @@
# Solution Assessment with Codebase
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Current solution: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
- Research findings: from step 4.30
## Role
You are a professional software architect
## Task
- Assess current implementation against acceptance criteria
- Identify weak points in current codebase
- Map research recommendations to specific code areas
- Prioritize changes based on impact and effort
## Output
### Weak Points Assessment
For each issue found:
- Location (component/file)
- Weak point description
- Impact (High/Medium/Low)
- Proposed solution
### Gap Analysis
- Acceptance criteria vs current state
- What's missing
- What needs improvement
### Refactoring Roadmap
- Phase 1: Critical fixes
- Phase 2: Major improvements
- Phase 3: Nice-to-have enhancements
## Notes
- Ground all findings in actual code
- Be specific about locations and changes needed
@@ -0,0 +1,41 @@
# Integration Tests Description
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Current solution: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
## Role
You are a professional Quality Assurance Engineer
## Task
- Analyze existing test coverage
- Define integration tests that capture current system behavior
- Tests should serve as safety net for refactoring
- Cover critical paths and edge cases
## Output
Store test specs to `_docs/02_tests/[##]_[test_name]_spec.md`:
- Integration tests
- Summary
- Current behavior being tested
- Input data
- Expected result
- Maximum expected time
- Acceptance tests
- Summary
- Preconditions
- Steps with expected results
- Coverage Analysis
- Current coverage percentage
- Target coverage (75% minimum)
- Critical paths not covered
## Notes
- Focus on behavior preservation
- These tests validate refactoring doesn't break functionality
@@ -0,0 +1,34 @@
# Implement Tests
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Tests specifications: `@_docs/02_tests`
## Role
You are a professional software developer
## Task
- Implement all tests from specifications
- Ensure all tests pass on current codebase (before refactoring)
- Set up test infrastructure if not exists
- Configure test data fixtures
## Process
1. Set up test environment
2. Implement each test from spec
3. Run tests, verify all pass
4. Document any discovered issues
## Output
- Implemented tests in test folder
- Test execution report:
- Test name
- Status (Pass/Fail)
- Execution time
- Issues discovered (if any)
## Notes
- All tests MUST pass before proceeding to refactoring
- Tests are the safety net for changes
@@ -0,0 +1,38 @@
# Analyze Coupling
## Initial data:
- Current solution: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
- Codebase
## Role
You are a software architect specializing in code quality
## Task
- Analyze coupling between components/modules
- Identify tightly coupled areas
- Map dependencies (direct and transitive)
- Form decoupling strategy
## Output
### Coupling Analysis
- Dependency graph (Mermaid)
- Coupling metrics per component
- Circular dependencies found
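To illustrate the expected output, a minimal Mermaid dependency graph with hypothetical component names; the mutual edge between `OrderService` and `PaymentService` is how a circular dependency would surface in such a graph:

```mermaid
graph LR
    UI --> OrderService
    OrderService --> PaymentService
    PaymentService --> OrderService
    OrderService --> Repository
```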
### Problem Areas
For each coupling issue:
- Components involved
- Type of coupling (content, common, control, stamp, data)
- Impact on maintainability
- Severity (High/Medium/Low)
### Decoupling Strategy
- Priority order for decoupling
- Proposed interfaces/abstractions
- Estimated effort per change
## Notes
- Focus on high-impact coupling issues first
- Consider backward compatibility
@@ -0,0 +1,43 @@
# Execute Decoupling
## Initial data:
- Decoupling strategy: from step 4.60
- Tests: implemented in step 4.50
- Codebase
## Role
You are a professional software developer
## Task
- Execute decoupling changes per strategy
- Fix code smells encountered during refactoring
- Run tests after each significant change
- Ensure all tests pass before proceeding
## Process
For each decoupling change:
1. Implement the change
2. Run integration tests
3. Fix any failures
4. Commit with descriptive message
## Code Smells to Address
- Long methods
- Large classes
- Duplicate code
- Dead code
- Magic numbers/strings
## Output
- Refactored code
- Test results after each change
- Summary of changes made:
- Change description
- Files affected
- Tests status
## Notes
- Small, incremental changes
- Never break tests
- Commit frequently
@@ -0,0 +1,40 @@
# Technical Debt
## Initial data:
- Current solution: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
- Codebase
## Role
You are a technical debt analyst
## Task
- Identify technical debt in the codebase
- Categorize and prioritize debt items
- Estimate effort to resolve
- Create actionable plan
## Output
### Debt Inventory
For each item:
- Location (file/component)
- Type (design, code, test, documentation)
- Description
- Impact (High/Medium/Low)
- Effort to fix (S/M/L/XL)
- Interest (cost of not fixing)
### Prioritized Backlog
- Quick wins (low effort, high impact)
- Strategic debt (high effort, high impact)
- Tolerable debt (low impact, can defer)
### Recommendations
- Immediate actions
- Sprint-by-sprint plan
- Prevention measures
## Notes
- Be realistic about effort estimates
- Consider business priorities
@@ -0,0 +1,49 @@
# Performance Optimization
## Initial data:
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Current solution: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
- Codebase
## Role
You are a performance engineer
## Task
- Identify performance bottlenecks
- Profile critical paths
- Propose optimizations
- Implement and verify improvements
## Output
### Bottleneck Analysis
For each bottleneck:
- Location
- Symptom (slow response, high memory, etc.)
- Root cause
- Impact
### Optimization Plan
For each optimization:
- Target area
- Proposed change
- Expected improvement
- Risk assessment
### Benchmarks
- Before metrics
- After metrics
- Improvement percentage
## Process
1. Profile current performance
2. Identify top bottlenecks
3. Implement optimizations one at a time
4. Benchmark after each change
5. Verify tests still pass
## Notes
- Measure before optimizing
- Optimize the right things (profile first)
- Don't sacrifice readability for micro-optimizations
@@ -0,0 +1,48 @@
# Security Review
## Initial data:
- Security approach: `@_docs/00_problem/security_approach.md`
- Current solution: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
- Codebase
## Role
You are a security engineer
## Task
- Review code for security vulnerabilities
- Check against OWASP Top 10
- Verify security requirements are met
- Recommend fixes for issues found
## Output
### Vulnerability Assessment
For each issue:
- Location
- Vulnerability type (injection, XSS, CSRF, etc.)
- Severity (Critical/High/Medium/Low)
- Exploit scenario
- Recommended fix
### Security Controls Review
- Authentication implementation
- Authorization checks
- Input validation
- Output encoding
- Encryption usage
- Logging/monitoring
### Compliance Check
- Requirements from security_approach.md
- Status (Met/Partially Met/Not Met)
- Gaps to address
### Recommendations
- Critical fixes (must do)
- Improvements (should do)
- Hardening (nice to have)
## Notes
- Prioritize critical vulnerabilities
- Provide actionable fix recommendations
@@ -3,13 +3,15 @@ Create a focused behavioral specification that describes **what** the system sho
## Input parameter
building_block.md
Example: `_docs/iterative/building_blocks/01-dashboard-export-example.md`
## Objective
Generate lean specifications with:
- Clear problem statement and desired outcomes
- Behavioral acceptance criteria in Gherkin format
- Essential non-functional requirements
- Complexity estimation
- Feature dependencies
- No implementation prescriptiveness
## Process
@@ -18,8 +20,8 @@ Generate lean specifications with:
3. Generate a behavioral specification using the structure below
4. **DO NOT** include implementation details, file structures, or technical architecture
5. Focus on behavior, user experience, and acceptance criteria
6. Save the specification into `_docs/iterative/feature_specs/spec.md`
Example: `_docs/iterative/feature_specs/01-dashboard-export-example.md`
## Specification Structure
@@ -28,6 +30,8 @@ Generate lean specifications with:
# [Feature Name]
**Status**: Draft | **Date**: [YYYY-MM-DD] | **Feature**: [Brief Feature Description]
**Complexity**: [1|2|3|5|8] points
**Dependencies**: [List dependent features or "None"]
```
### Problem
@@ -72,7 +76,7 @@ Use sub-sections with bullet points.
- Initial data and conditions
- What should be tested
- How system should behave
- List of Non-functional requirements to be met
### Constraints
@@ -90,6 +94,13 @@ Each risk should have:
- *Risk*: Description
- *Mitigation*: Approach
## Complexity Points Guide
- 1 point: Trivial, self-contained, no dependencies
- 2 points: Non-trivial, low complexity, minimal coordination
- 3 points: Multi-step, moderate complexity, potential alignment needed
- 5 points: Difficult, interconnected logic, medium-high risk
- 8 points: High ambiguity, multiple components, very high risk (consider splitting)
## Output Guidelines
**DO:**
- Focus on behavior and user experience
@@ -97,6 +108,8 @@ Each risk should have:
- Keep acceptance criteria testable
- Include realistic scope boundaries
- Write from the user's perspective
- Include complexity estimation
- Note dependencies on other features
**DON'T:**
- Include implementation details (file paths, classes, methods)
@@ -1,64 +0,0 @@
# Generate Jira Task from Spec (via Jira MCP)
Create or update a **Jira ticket** from a specification feature_spec.md
## 🔧 Inputs
- feature_spec.md (required): path to the source spec file.
Example: `@_docs/03_feature_specs/spec-export-e2e.md`
- epic <Epic-Id> (required for Jira task creation): create Jira task under parent epic, for example:
Example: /3.30_generate_jira_task @_docs/03_specs/spec-export-e2e.md epic AZ-112
- update <Task-Id> (required for Jira task update): update existing Jira task, for example:
Example: /3.30_generate_jira_task @_docs/03_specs/spec-export-e2e.md update AZ-151
## 🎯 Objective
1. Parse the spec to extract **Title**, **Description**, **Acceptance Criteria**, **Technical Details**.
2. Create a Jira Task under Epic or Update existing Jira Task using **Jira MCP**
## 🧩 Parsing Rules
### 🏷️ Title
Use the first header (`#` or `##`) at the top of the spec.
### 📝 Description (Markdown ONLY — no AC/Tech here)
Build from:
- **Purpose & Outcomes → Intent** (bullets)
- **Purpose & Outcomes → Success Signals** (bullets)
- (Optional) one-paragraph summary from **Behavior Change → New Behavior**
> **Do not include** Acceptance Criteria or Technical Details in Description if those fields exist in Jira.
### ✅ Acceptance Criteria (Gherkin HTML)
From **"Acceptance Criteria (Gherkin)"**, extract the **full Gherkin scenarios** including:
- The `Feature:` line
- Each complete `Scenario:` block with all `Given`, `When`, `Then`, `And` steps
- Convert the entire Gherkin text to **HTML format** preserving structure (use `<pre>` and `<code>` tags or properly formatted HTML)
- Do NOT create a simple checklist; keep the full Gherkin syntax as it is essential for test traceability.
### ⚙️ Technical Details
Bullets composed of:
- **Inputs → Key constraints**
- **Scope → Included/Excluded** (condensed)
- **Interfaces & Contracts** (names only — UI actions, endpoint names, event names)
After creating the PBI, add a parent-child relation to link the Task under the Epic
**CRITICAL**:
- Do NOT modify the parent Epic - only update the child Task
## 🔂 Steps (Agent)
1. Parse **Title**, **Description**, **AC**, **Tech** per **Parsing Rules**.
- For AC: Extract the COMPLETE Gherkin syntax from the "Acceptance Criteria (Gherkin)" section (including Feature line, all Scenario blocks, and all Given/When/Then/And steps).
2. **Create** or **Update** the Task with the field mapping above.
- If creating a new Task with an Epic provided, add the parent relation
- Do NOT modify the parent Epic work item.
3. Rename spec.md and the corresponding building block
- Rename `_docs/03_specs/{taskId}-{taskNameSlug}.md`
- Rename `_docs/03_building_blocks/{taskId}-{taskNameSlug}.md`
{taskId} is the Jira task Id, either just created or provided in the update argument
{taskNameSlug} is a kebab-case slug from the Jira Task title when the update argument is provided, or derived from the spec title.
## 🛡️ Guardrails
- No source code edits; only one work item and (optional) one file move.
- If Jira creation/update fails, do not move the file.
- If AC/Tech fields are absent, append to Description; otherwise, keep Description clean.
- **CRITICAL**: Extract the FULL Gherkin scenarios with all steps (Given/When/Then/And) - do NOT create simple checklist items. The Gherkin syntax is required for proper test traceability in Jira
- Do not edit parent Epic
@@ -0,0 +1,81 @@
# Generate Jira Task and Git Branch from Spec
Create a Jira ticket from a specification and set up git branch for development.
## Inputs
- feature_spec.md (required): path to the source spec file.
Example: `@_docs/iterative/feature_specs/spec-export-e2e.md`
- epic <Epic-Id> (required for Jira task creation): create Jira task under parent epic
Example: /gen_jira_task_and_branch @_docs/iterative/feature_specs/spec.md epic AZ-112
- update <Task-Id> (required for Jira task update): update existing Jira task
Example: /gen_jira_task_and_branch @_docs/iterative/feature_specs/spec.md update AZ-151
## Objective
1. Parse the spec to extract **Title**, **Description**, **Acceptance Criteria**, **Technical Details**, **Estimation**.
2. Create a Jira Task under Epic or Update existing Jira Task using **Jira MCP**
3. Create git branch for the task
## Parsing Rules
### Title
Use the first header at the top of the spec.
### Description (Markdown ONLY — no AC/Tech here)
Build from:
- **Purpose & Outcomes → Intent** (bullets)
- **Purpose & Outcomes → Success Signals** (bullets)
- (Optional) one-paragraph summary from **Behavior Change → New Behavior**
> **Do not include** Acceptance Criteria or Technical Details in Description if those fields exist in Jira.
### Estimation
Extract complexity points from spec header and add to Jira task.
### Acceptance Criteria (Gherkin HTML)
From **"Acceptance Criteria (Gherkin)"**, extract the **full Gherkin scenarios** including:
- The `Feature:` line
- Each complete `Scenario:` block with all `Given`, `When`, `Then`, `And` steps
- Convert the entire Gherkin text to **HTML format** preserving structure
- Do NOT create a simple checklist; keep the full Gherkin syntax for test traceability.
### Technical Details
Bullets composed of:
- **Inputs → Key constraints**
- **Scope → Included/Excluded** (condensed)
- **Interfaces & Contracts** (names only — UI actions, endpoint names, event names)
## Steps (Agent)
1. **Check current branch**
- Verify user is on `dev` branch
- If not on `dev`, notify user: "Please switch to the dev branch before proceeding"
- Stop execution if not on dev
2. Parse **Title**, **Description**, **AC**, **Tech**, **Estimation** per **Parsing Rules**.
3. **Create** or **Update** the Jira Task with the field mapping above.
- If creating a new Task with Epic provided, add the parent relation
- Do NOT modify the parent Epic work item.
4. **Create git branch**
```bash
git stash
git checkout -b {taskId}-{taskNameSlug}
git stash pop
```
- {taskId} is Jira task Id (lowercase), e.g., `az-122`
- {taskNameSlug} is kebab-case slug from task title, e.g., `progressive-search-system`
- Full branch name example: `az-122-progressive-search-system`
5. Rename spec.md and corresponding building block:
- Rename to `_docs/iterative/feature_specs/{taskId}-{taskNameSlug}.md`
- Rename to `_docs/iterative/building_blocks/{taskId}-{taskNameSlug}.md`
## Guardrails
- No source code edits; only Jira task, file moves, and git branch.
- If Jira creation/update fails, do not create branch or move files.
- If AC/Tech fields are absent in Jira, append to Description.
- **CRITICAL**: Extract the FULL Gherkin scenarios with all steps - do NOT create simple checklist items.
- Do not edit parent Epic.
- Always check for dev branch before proceeding.
@@ -0,0 +1,26 @@
# Pull Request Template
## Description
Brief description of changes.
## Related Issue
Jira ticket: [AZ-XXX](link)
## Type of Change
- [ ] Bug fix
- [ ] New feature
- [ ] Refactoring
- [ ] Documentation
## Checklist
- [ ] Code follows project conventions
- [ ] Self-review completed
- [ ] Tests added/updated
- [ ] All tests pass
- [ ] Documentation updated (if needed)
## Testing
How to test these changes.
## Screenshots (if applicable)
@@ -1,9 +1,12 @@
# Building Block: Dashboard Export to Excel
## Problem / Goal
Users need to export the dashboard data they're currently viewing into Excel for offline analysis and sharing.
## Architecture Notes (optional)
Use existing data fetching layer. Add Excel generation service. Export button in dashboard toolbar triggers download.
## Outcome
- One-click export of filtered dashboard data to Excel file
- File includes timestamp and current filter context
- Supports up to 10,000 records
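Based on the architecture notes above, the export flow could be sketched roughly as follows; participant names are assumptions, not confirmed component names:

```mermaid
sequenceDiagram
    participant U as User
    participant D as Dashboard Toolbar
    participant F as Data Fetching Layer
    participant X as Excel Generation Service
    U->>D: Click Export button
    D->>F: Fetch currently filtered data
    F-->>D: Records (up to 10,000)
    D->>X: Generate workbook with timestamp and filter context
    X-->>U: Download .xlsx file
```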
@@ -1,4 +1,27 @@
# Iterative Implementation Phase
## Prerequisites
### Jira MCP
Add Jira MCP to the list in IDE:
```
"Jira-MCP-Server": {
"url": "https://mcp.atlassian.com/v1/sse"
}
```
### Context7 MCP
Add context7 MCP to the list in IDE:
```
"context7": {
"command": "npx",
"args": [
"-y",
"@upstash/context7-mcp"
]
}
```
## 10. **🧑‍💻 Developers**: Form a building block
@@ -13,16 +36,23 @@
How it should be implemented: which subsystem to use, with a short explanation of 3-5 lines.
## Outcome
What we want to achieve from the building block
```
### Example
`_docs/iterative/building_blocks/01-dashboard-export-example.md`
## 20. **🤖AI agent**: Generate Feature Specification
### Execute `/gen_feature_spec`
## 30. **🤖AI agent**: Generate Jira ticket and branch
### Execute `/gen_jira_task_and_branch`
This will:
- Create Jira task under specified epic
- Create git branch from dev (e.g., `az-122-progressive-search-system`)
## 40. **🤖📋AI plan**: Generate Plan
### Execute
@@ -30,10 +60,22 @@
Example:
generate plan for `@_docs/iterative/feature_specs/01-dashboard-export-example.md`
## 50. **🧑‍💻 Developer**: Save the plan
Save the generated plan to `@_docs/iterative/plans` (first save with the built-in mechanism to the `.cursor` folder, then move it to `@_docs/iterative/plans`).
## 60. Build from the plan
## 65. **🤖📋AI plan**: Code Review
### Execute
Use Cursor's built-in review feature or manual review.
### Verify
- All issues addressed
- Code quality standards met
## 70. Check build and tests are successful.
@@ -1,14 +1,15 @@
# 1. Research Phase
## 1.01 **🧑‍💻 Developers**: Problem statement
### Discuss
Discuss the problem and create the following files and folders in `_docs/00_problem`:
- `problem_description.md`: Our problem to solve with the end result we want to achieve.
- `input_data`: Put in this folder all the necessary input data and expected results for the further tests. Analyze the input data very thoroughly and form the system's restrictions and acceptance criteria
- `restrictions.md`: Restrictions we have in the real world, in dashed-list format.
- `acceptance_criteria.md`: Acceptance criteria for the solution, in dashed-list format.
The most important part; it determines how good the system should be.
- `security_approach.md`: Security requirements and constraints for the system.
### Example:
- `problem_description.md`
@@ -28,32 +29,64 @@
- The flying range is restricted to the eastern and southern parts of Ukraine. And so on.
- `acceptance_criteria.md`
- UAV should fly without GPS for at least 30 km in sunny weather.
- UAV should fly with a maximum error of no more than 40 meters from the real GPS position
- UAV should fly correctly in light fog, with a maximum error of no more than 100 meters from the real GPS position
- UAV should fly for a minimum of 500 meters with missing internal satellite maps, and the drifting error should be no more than 50 meters.
- `security_approach.md`
- System runs on embedded platform (Jetson Orin Nano) with secure boot
- Communication with ground station encrypted via AES-256
- No remote API access during flight - fully autonomous
- Firmware signing required for updates
## 1.05 **🧑‍💻 Developers**: Git Init
### Initialize Repository
```bash
git init
git add .
git commit -m "Initial: problem statement and input data"
```
### Branching Strategy
- `main`: Documentation and stable releases
- `stage`: Planning phase artifacts
- `dev`: Implementation code
After research phase completion, all docs stay on `main`.
Before planning phase, create `stage` branch.
Before implementation phase, create `dev` branch from `stage`.
After integration tests pass, merge `dev` → `stage` → `main`.
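The branching strategy above can be sketched as a Mermaid `gitGraph` (commit messages are illustrative):

```mermaid
gitGraph
    commit id: "problem statement"
    branch stage
    checkout stage
    commit id: "planning artifacts"
    branch dev
    checkout dev
    commit id: "implementation"
    commit id: "integration tests pass"
    checkout stage
    merge dev
    checkout main
    merge stage
```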
## 1.10 **✨AI Research**: Restrictions and Acceptance Criteria assessment
### Execute `/1.research/1.10_research_assesment_acceptance_criteria`
In case of external DeepResearch (Gemini, DeepSeek, or other), copy-paste the command's text and add to the research context:
- `problem_description.md`
- `restrictions.md`
- `acceptance_criteria.md`
- `security_approach.md`
- Samples of the input data
### Revise
- Revise the result, discuss it
- Overwrite `acceptance_criteria.md` and `restrictions.md`
### Commit
```bash
git add _docs/00_problem/
git commit -m "Research: acceptance criteria and restrictions assessed"
```
## 1.20 **🤖✨AI Research**: Research the problem in great detail
### Execute `/1.research/1.20_research_problem`
In case of external DeepResearch (Gemini, DeepSeek, or other), copy-paste the command's text and add to the research context:
- `problem_description.md`
- `restrictions.md`
- `acceptance_criteria.md`
- `security_approach.md`
- Samples of the input data
### Revise
@@ -63,13 +96,14 @@
- Store it to the `_docs/01_solution/solution_draft.md`
## 1.30 **🤖✨AI Research**: Solution draft assessment
### Execute `/1.research/1.30_solution_draft_assessment`
In case of external DeepResearch (Gemini, DeepSeek, or other), copy-paste the command's text and add to the research context:
- `problem_description.md`
- `restrictions.md`
- `acceptance_criteria.md`
- `security_approach.md`
- Samples of the input data
### Revise
@@ -78,18 +112,40 @@
### Iterate
- Rename previous `solution_draft.md` to `{xx}_solution_draft.md`. Start {xx} from 01
- Store the new revised result draft to the `_docs/01_solution/solution_draft.md`
- Repeat the process 1.30 from the beginning
When the next solution no longer differs much from the previous one, or actually becomes worse, store the last draft as `_docs/01_solution/solution.md`
## 1.40 **🤖✨AI Research**: Security Research
### Execute `/1.research/1.40_security_research`
### Revise
- Review security approach against solution architecture
- Update `security_approach.md` with specific requirements per component
### Commit
```bash
git add _docs/
git commit -m "Research: solution and security finalized"
```
# 2. Planning phase
> **Note**: If implementation reveals architectural issues, return to Planning phase to revise components.
## 2.05 **🧑‍💻 Developers**: Create stage branch
```bash
git checkout -b stage
```
## 2.10 **🤖📋AI plan**: Generate components
### Execute `/2.planning/2.10_plan_components`
### Revise
- Revise the plan, answer questions, put detailed descriptions
@@ -99,14 +155,23 @@
- Save plan to `_docs/02_components/00_decomposition_plan.md`
## 2.15 **🤖📋AI plan**: Components assessment
### Execute `/2.planning/2.15_plan_asses_components`
### Revise
- Clarify the proposals and ask to fix found issues
## 2.17 **🤖📋AI plan**: Security Check
### Execute `/2.planning/2.17_plan_security_check`
### Revise
- Review security considerations for each component
- Ensure security requirements from 1.40 are addressed
## 2.20 **🤖AI agent**: Generate Jira Epics
### Jira MCP
@@ -123,7 +188,7 @@
- Make sure epics are coherent and make sense
## 2.30 **🤖📋AI plan**: Generate tests
### Execute `/2.planning/2.30_plan_tests`
@@ -141,14 +206,26 @@
- Revise the features, answer questions, put detailed descriptions
- Make sure features are coherent and make sense
### Commit
```bash
git add _docs/
git commit -m "Planning: components, tests, and features defined"
```
# 3. Implementation phase
## 3.05 **🤖📋AI plan**: Initial structure
### Create dev branch
```bash
git checkout -b dev
```
### Context7 MCP
Add context7 MCP to the list in IDE:
```
"context7": {
"command": "npx",
@@ -195,10 +272,40 @@
- Read the code and check that everything is ok
## 3.20 **🤖📋AI plan**: Code Review
### Execute `/3.implementation/3.20_implement_code_review`
Can also use Cursor's built-in review feature.
### Revise
- Address all found issues
- Ensure code quality standards are met
## 3.30 **🤖📋AI plan**: CI/CD Setup
### Execute `/3.implementation/3.30_implement_cicd`
### Revise
- Review pipeline configuration
- Ensure all stages are properly configured
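The stage ordering can be sketched as a small shell gate: each stage must succeed before the next one runs, which is the property the pipeline configuration should enforce. The `lint`/`test`/`build` commands below are placeholders (`true`), not your project's real tooling:

```shell
#!/bin/sh
# Sketch of a CI stage gate. Each stage runs only if the previous
# one succeeded, mirroring a sequential pipeline.

run_stage() {
  name="$1"; shift
  echo "--- stage: $name ---"
  "$@" || { echo "stage '$name' failed" >&2; return 1; }
}

# Placeholder commands; substitute your real lint/test/build calls.
run_stage lint  true &&
run_stage test  true &&
run_stage build true &&
echo "pipeline ok"
```

Whatever CI system you use, the review in this step is about confirming exactly this fail-fast ordering.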
## 3.40 **🤖📋AI plan**: Integration tests and solution checks
### Execute `/3.implementation/3.40_implement_tests`
### Revise
- Revise the plan, answer open questions, and add detailed descriptions
- Make sure tests are coherent and make sense
### Merge after tests pass
```bash
git checkout stage
git merge dev
git checkout main
git merge stage
git push origin main
```
# Refactoring Existing Project
This tutorial guides you through analyzing, documenting, and refactoring an existing codebase.
## 4.05 **🧑‍💻 Developers**: User Input
### Define Goals
Create in `_docs/00_problem`:
- `problem_description.md`: What system currently does + what you want to change/improve
- `acceptance_criteria.md`: Success criteria for the refactoring
- `security_approach.md`: Security requirements (if applicable)
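A minimal sketch of the scaffolding, assuming the repository root as the working directory; the file names match the paths listed above, and the later steps read from them:

```shell
# Create the problem-definition docs the refactoring steps depend on.
mkdir -p _docs/00_problem
for f in problem_description acceptance_criteria security_approach; do
  touch "_docs/00_problem/$f.md"
done
ls _docs/00_problem
```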
## 4.10 **🤖📋AI plan**: Build Documentation from Code
### Execute `/4.refactoring/4.10_documentation`
### Revise
- Review generated component docs
- Verify accuracy against actual code behavior
## 4.20 **🤖📋AI plan**: Form Solution with Flows
### Execute `/4.refactoring/4.20_form_solution_flows`
### Revise
- Review solution description
- Verify flow diagrams match actual system behavior
- Store to `_docs/01_solution/solution.md`
## 4.30 **🤖✨AI Research**: Deep Research of Approaches
### Execute `/4.refactoring/4.30_deep_research`
### Revise
- Review suggested improvements
- Prioritize changes based on impact vs effort
## 4.35 **🤖✨AI Research**: Solution Assessment with Codebase
### Execute `/4.refactoring/4.35_solution_assessment`
### Revise
- Review weak points identified in current implementation
- Decide which to address
## 4.40 **🤖📋AI plan**: Integration Tests Description
### Execute `/4.refactoring/4.40_tests_description`
### Revise
- Ensure tests cover critical functionality
- Add edge cases
## 4.50 **🤖📋AI plan**: Implement Tests
### Execute `/4.refactoring/4.50_implement_tests`
### Verify
- All tests pass on current codebase
- Tests serve as safety net for refactoring
## 4.60 **🤖📋AI plan**: Analyze Coupling
### Execute `/4.refactoring/4.60_analyze_coupling`
### Revise
- Review coupling analysis
- Prioritize decoupling strategy
## 4.70 **🤖📋AI plan**: Execute Decoupling
### Execute `/4.refactoring/4.70_execute_decoupling`
### Verify
- Run integration tests after each change
- All tests must pass before proceeding
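This gate can be sketched in shell: a decoupling step is only committed when the full suite is green. `run_tests` below is a placeholder, not a real command, and the git commands are left commented out:

```shell
# Gate each decoupling step on the integration test suite.
run_tests() { true; }   # stand-in; replace with e.g. pytest / npm test / go test ./...

if run_tests; then
  echo "tests green - safe to commit this decoupling step"
  # git add -A && git commit -m "refactor: decouple module"
else
  echo "tests failing - revert before proceeding" >&2
  # git checkout -- .
fi
```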
## 4.80 **🤖📋AI plan**: Technical Debt
### Execute `/4.refactoring/4.80_technical_debt`
### Revise
- Review debt items
- Prioritize by impact
## 4.90 **🤖📋AI plan**: Performance Optimization
### Execute `/4.refactoring/4.90_performance`
### Verify
- Benchmark before/after
- Run tests to ensure no regressions
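A rough before/after timing can be sketched as follows. `workload` is a placeholder for the code path under optimization, and the `%N` format assumes GNU `date`; a dedicated benchmarking tool gives more reliable numbers:

```shell
# Coarse wall-clock timing of a workload before/after an optimization.
workload() { sleep 0.1; }   # placeholder; point at the real code path

before=$(date +%s%N)        # nanoseconds since the epoch (GNU date)
workload
after=$(date +%s%N)
echo "elapsed: $(( (after - before) / 1000000 )) ms"
```

Record the same measurement on the pre-optimization commit so the comparison is like-for-like.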
## 4.95 **🤖📋AI plan**: Security Review
### Execute `/4.refactoring/4.95_security`
### Verify
- Address identified vulnerabilities
- Run security tests if applicable