organize structure for .roo and for ai in general

rework rules
Oleksandr Bezdieniezhnykh
2025-12-10 19:59:13 +02:00
parent 749c8e674d
commit 8a284eb106
84 changed files with 3044 additions and 35 deletions
@@ -0,0 +1,29 @@
# User Input for Refactoring
## Task
Collect and document goals for the refactoring project.
## User should provide:
Create in `_docs/00_problem`:
- `problem_description.md`:
  - What the system currently does
  - What changes/improvements are needed
  - Pain points in current implementation
- `acceptance_criteria.md`: Success criteria for the refactoring
- `security_approach.md`: Security requirements (if applicable)
## Example
- `problem_description.md`:
  - Current system: E-commerce platform with monolithic architecture.
  - Current issues: Slow deployments, difficult scaling, tightly coupled modules.
  - Goals: Break into microservices, improve test coverage, reduce deployment time.
- `acceptance_criteria.md`:
  - All existing functionality preserved
  - Test coverage increased from 40% to 75%
  - Deployment time reduced by 50%
  - No circular dependencies between modules
## Output
Store user input in `_docs/00_problem/` folder for reference by subsequent steps.
@@ -0,0 +1,92 @@
# Capture Baseline Metrics
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Current codebase
## Role
You are a software engineer preparing for refactoring
## Task
- Capture current system metrics as baseline
- Document current behavior
- Establish benchmarks to compare against after refactoring
- Identify critical paths to monitor
## Output
### Code Quality Metrics
#### Coverage
```
Current test coverage: XX%
- Unit test coverage: XX%
- Integration test coverage: XX%
- Critical paths coverage: XX%
```
#### Code Complexity
- Cyclomatic complexity (average):
- Most complex functions (top 5):
- Lines of code:
- Technical debt ratio:
#### Code Smells
- Total code smells:
- Critical issues:
- Major issues:
### Performance Metrics
#### Response Times
| Endpoint/Operation | P50 | P95 | P99 |
|-------------------|-----|-----|-----|
| [endpoint1] | Xms | Xms | Xms |
| [operation1] | Xms | Xms | Xms |
#### Resource Usage
- Average CPU usage:
- Average memory usage:
- Database query count per operation:
#### Throughput
- Requests per second:
- Concurrent users supported:
### Functionality Inventory
List all current features/endpoints:
| Feature | Status | Test Coverage | Notes |
|---------|--------|---------------|-------|
| | | | |
### Dependency Analysis
- Total dependencies:
- Outdated dependencies:
- Security vulnerabilities in dependencies:
### Build Metrics
- Build time:
- Test execution time:
- Deployment time:
Store output to `_docs/04_refactoring/baseline_metrics.md`
## Measurement Commands
Use project-appropriate tools for your tech stack:
| Metric | Python | C#/.NET | Java | Go | JavaScript/TypeScript |
|--------|--------|---------|------|-----|----------------------|
| Test coverage | pytest --cov | dotnet test --collect:"XPlat Code Coverage" | JaCoCo | go test -cover | jest --coverage |
| Code complexity | radon | CodeMetrics | PMD | gocyclo | eslint-plugin-complexity |
| Lines of code | cloc | cloc | cloc | cloc | cloc |
| Dependency check | pip-audit | dotnet list package --vulnerable | mvn dependency-check:check | govulncheck | npm audit |
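To make the "cyclomatic complexity (average)" line above concrete, here is a rough Python sketch using the stdlib `ast` module: it counts branch points per function. This is an approximation for illustration, not a substitute for radon or the other stack-specific tools in the table.

```python
import ast

# Node types that add a decision branch (approximate McCabe counting).
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> dict[str, int]:
    """Approximate cyclomatic complexity per function: 1 + branch points."""
    tree = ast.parse(source)
    scores = {}
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            branches = sum(isinstance(n, BRANCH_NODES) for n in ast.walk(node))
            scores[node.name] = 1 + branches
    return scores

sample = """
def grade(score):
    if score >= 90:
        return "A"
    if score >= 80:
        return "B"
    return "C"
"""
print(cyclomatic_complexity(sample))  # {'grade': 3}
```

Averaging the per-function scores gives the single baseline number; keep the raw per-function list so the post-refactoring comparison can spot individual regressions.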
## Notes
- Run measurements multiple times for accuracy
- Document measurement methodology
- Save raw data for comparison
- Focus on metrics relevant to refactoring goals
@@ -0,0 +1,48 @@
# Create Documentation from Existing Codebase
## Role
You are a Principal Software Architect and Technical Communication Expert.
## Task
Generate production-grade documentation from existing code that serves both maintenance engineers and consuming developers.
## Core Directives:
- Truthfulness: Never invent features. Ground every claim in the provided code.
- Clarity: Use a professional, third-person, objective tone.
- Completeness: Document every public interface; summarize private internals unless they are critical.
- Visuals: Visualize complex logic using Mermaid.js.
## Process:
1. Analyze the project structure, forming a rough understanding from directories, projects, and files
2. Go file by file: analyze each method, convert it to a short API reference description, and form a rough flow diagram
3. Analyze the summaries and code, trace the connections between components, and form the detailed structure
## Output Format
Store a description of each component to `_docs/02_components/[##]_[component_name]/[##]._component_[component_name].md`:
1. High-level overview
   - **Purpose:** Component role in the larger system.
   - **Architectural Pattern:** Design patterns used.
2. Logic & Architecture
   - Mermaid `graph TD` or `sequenceDiagram`
   - draw.io component diagram
3. API Reference table:
   - Name, Description, Input, Output
   - Test cases for the method
4. Implementation Details
   - **Algorithmic Complexity:** Big O for critical methods.
   - **State Management:** Local vs. global state.
   - **Dependencies:** External libraries.
5. Tests
   - Integration tests needed
   - Non-functional tests needed
6. Extensions and Helpers
   - Store to `_docs/02_components/helpers`
7. Caveats & Edge Cases
   - Known limitations
   - Race conditions
   - Performance bottlenecks
## Notes
- Verify all parameters are captured
- Verify Mermaid diagrams are syntactically correct
- Explain why the code works, not just how
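As a reference point for the `graph TD` requirement in the output format above, a minimal syntactically valid Mermaid diagram (component names are purely illustrative):

```mermaid
graph TD
    API[API Gateway] --> Auth[Auth Service]
    API --> Orders[Order Service]
    Orders --> DB[(Orders DB)]
    Auth --> Cache[(Session Cache)]
```

Diagrams of this shape can be validated in the Mermaid live editor before being committed to the component docs.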
@@ -0,0 +1,36 @@
# Form Solution with Flows
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Generated component docs: `@_docs/02_components`
## Role
You are a professional software architect
## Task
- Review all generated component documentation
- Synthesize into a cohesive solution description
- Create flow diagrams showing how components interact
- Identify the main use cases and their flows
## Output
### Solution Description
Store to `_docs/01_solution/solution.md`:
- Short Product solution description
- Component interaction diagram (draw.io)
- Components overview and their responsibilities
### Flow Diagrams
Store to `_docs/02_components/system_flows.md`:
- Mermaid Flowchart diagrams for main control flows:
  - Create flow diagram per major use case
  - Show component interactions
  - Note data transformations
  - Flows can relate to each other
  - Show entry points, decision points, and outputs
## Notes
- Focus on documenting what exists, not what should be
@@ -0,0 +1,39 @@
# Deep Research of Approaches
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Current solution: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
## Role
You are a professional researcher and software architect
## Task
- Analyze current implementation patterns
- Research modern approaches for similar systems
- Identify what could be done differently
- Suggest improvements based on state-of-the-art practices
## Output
### Current State Analysis
- Patterns currently used
- Strengths of current approach
- Weaknesses identified
### Alternative Approaches
For each major component/pattern:
- Current approach
- Alternative approach
- Pros/Cons comparison
- Migration effort (Low/Medium/High)
### Recommendations
- Prioritized list of improvements
- Quick wins (low effort, high impact)
- Strategic improvements (higher effort)
## Notes
- Focus on practical, achievable improvements
- Consider existing codebase constraints
@@ -0,0 +1,40 @@
# Solution Assessment with Codebase
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Current solution: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
- Research findings: from step 4.30
## Role
You are a professional software architect
## Task
- Assess current implementation against acceptance criteria
- Identify weak points in current codebase
- Map research recommendations to specific code areas
- Prioritize changes based on impact and effort
## Output
### Weak Points Assessment
For each issue found:
- Location (component/file)
- Weak point description
- Impact (High/Medium/Low)
- Proposed solution
### Gap Analysis
- Acceptance criteria vs current state
- What's missing
- What needs improvement
### Refactoring Roadmap
- Phase 1: Critical fixes
- Phase 2: Major improvements
- Phase 3: Nice-to-have enhancements
## Notes
- Ground all findings in actual code
- Be specific about locations and changes needed
@@ -0,0 +1,52 @@
# Integration Tests Description
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Current solution: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
## Role
You are a professional Quality Assurance Engineer
## Prerequisites
- Baseline metrics captured (see 4.07_capture_baseline.md)
- Feature parity checklist created (see `@_docs/00_templates/feature_parity_checklist.md`)
## Coverage Requirements (MUST meet before refactoring)
- Minimum overall coverage: 75%
- Critical path coverage: 90%
- All public APIs must have integration tests
- All error handling paths must be tested
## Task
- Analyze existing test coverage
- Define integration tests that capture current system behavior
- Tests should serve as a safety net for refactoring
- Cover critical paths and edge cases
- Ensure coverage requirements are met before proceeding to refactoring
## Output
Store test specs to `_docs/02_tests/[##]_[test_name]_spec.md`:
- Integration tests:
  - Summary
  - Current behavior being tested
  - Input data
  - Expected result
  - Maximum expected time
- Acceptance tests:
  - Summary
  - Preconditions
  - Steps with expected results
- Coverage Analysis:
  - Current coverage percentage
  - Target coverage (75% minimum)
  - Critical paths not covered
## Notes
- Focus on behavior preservation
- These tests validate refactoring doesn't break functionality
@@ -0,0 +1,34 @@
# Implement Tests
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Tests specifications: `@_docs/02_tests`
## Role
You are a professional software developer
## Task
- Implement all tests from specifications
- Ensure all tests pass on current codebase (before refactoring)
- Set up test infrastructure if it does not exist
- Configure test data fixtures
## Process
1. Set up test environment
2. Implement each test from spec
3. Run tests, verify all pass
4. Document any discovered issues
## Output
- Implemented tests in test folder
- Test execution report:
  - Test name
  - Status (Pass/Fail)
  - Execution time
  - Issues discovered (if any)
## Notes
- All tests MUST pass before proceeding to refactoring
- Tests are the safety net for changes
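The spec-to-test step might look like the following minimal pytest sketch. `Cart` and the `cart` fixture are hypothetical stand-ins for the system under test; the point is the fixture-plus-behavior-assertion shape.

```python
import pytest

class Cart:
    """Hypothetical system under test: a minimal shopping cart."""
    def __init__(self):
        self.items = {}

    def add(self, sku, qty=1):
        if qty <= 0:
            raise ValueError("qty must be positive")
        self.items[sku] = self.items.get(sku, 0) + qty

    def total_quantity(self):
        return sum(self.items.values())

@pytest.fixture
def cart():
    # A fresh fixture per test keeps tests independent of each other.
    return Cart()

def test_add_accumulates_quantity(cart):
    # Captures current behavior: repeated adds to one SKU accumulate.
    cart.add("sku-1", 2)
    cart.add("sku-1", 3)
    assert cart.total_quantity() == 5

def test_rejects_non_positive_quantity(cart):
    # Error-handling path from the spec must also be pinned down.
    with pytest.raises(ValueError):
        cart.add("sku-1", 0)
```

Each implemented test should trace back to one spec entry so failures after refactoring point directly at the behavior that changed.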
@@ -0,0 +1,38 @@
# Analyze Coupling
## Initial data:
- Current solution: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
- Codebase
## Role
You are a software architect specializing in code quality
## Task
- Analyze coupling between components/modules
- Identify tightly coupled areas
- Map dependencies (direct and transitive)
- Form decoupling strategy
## Output
### Coupling Analysis
- Dependency graph (Mermaid)
- Coupling metrics per component
- Circular dependencies found
### Problem Areas
For each coupling issue:
- Components involved
- Type of coupling (content, common, control, stamp, data)
- Impact on maintainability
- Severity (High/Medium/Low)
### Decoupling Strategy
- Priority order for decoupling
- Proposed interfaces/abstractions
- Estimated effort per change
## Notes
- Focus on high-impact coupling issues first
- Consider backward compatibility
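The circular-dependency check described above can be sketched as a depth-first search over a module dependency map (the module names here are illustrative):

```python
def find_cycles(deps: dict[str, list[str]]) -> list[list[str]]:
    """Depth-first search for dependency cycles in a module graph.

    A cycle may be reported once per module it contains; deduplicate
    downstream if a unique list is needed.
    """
    cycles = []

    def visit(node, path):
        if node in path:
            cycles.append(path[path.index(node):] + [node])
            return
        for dep in deps.get(node, []):
            visit(dep, path + [node])

    for start in deps:
        visit(start, [])
    return cycles

# Illustrative module map: orders and payments depend on each other.
deps = {
    "orders": ["payments", "catalog"],
    "payments": ["orders"],
    "catalog": [],
}
for cycle in find_cycles(deps):
    print(" -> ".join(cycle))
```

In practice the map would be built from import statements or project references; the cycles found feed directly into the decoupling priority list.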
@@ -0,0 +1,43 @@
# Execute Decoupling
## Initial data:
- Decoupling strategy: from step 4.60
- Tests: implemented in step 4.50
- Codebase
## Role
You are a professional software developer
## Task
- Execute decoupling changes per strategy
- Fix code smells encountered during refactoring
- Run tests after each significant change
- Ensure all tests pass before proceeding
## Process
For each decoupling change:
1. Implement the change
2. Run integration tests
3. Fix any failures
4. Commit with descriptive message
## Code Smells to Address
- Long methods
- Large classes
- Duplicate code
- Dead code
- Magic numbers/strings
## Output
- Refactored code
- Test results after each change
- Summary of changes made:
  - Change description
  - Files affected
  - Tests status
## Notes
- Small, incremental changes
- Never break tests
- Commit frequently
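One typical decoupling change, introducing a narrow abstraction so a service no longer depends on a concrete collaborator, can be sketched as follows; `ReportService`, `Notifier`, and `FakeNotifier` are hypothetical names:

```python
from typing import Protocol

class Notifier(Protocol):
    """The narrow interface the service actually needs."""
    def send(self, message: str) -> None: ...

class ReportService:
    # Depends on the abstraction, not a concrete SMTP/Slack client,
    # so the coupling to any one delivery mechanism is removed.
    def __init__(self, notifier: Notifier):
        self.notifier = notifier

    def publish(self, report: str) -> None:
        self.notifier.send(f"Report ready: {report}")

class FakeNotifier:
    """Test double satisfying the protocol structurally."""
    def __init__(self):
        self.sent = []

    def send(self, message: str) -> None:
        self.sent.append(message)

fake = FakeNotifier()
ReportService(fake).publish("Q3 metrics")
print(fake.sent)  # ['Report ready: Q3 metrics']
```

A change of this shape is small enough to fit the implement-test-commit loop above: swap one dependency, run the integration tests, commit.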
@@ -0,0 +1,40 @@
# Technical Debt
## Initial data:
- Current solution: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
- Codebase
## Role
You are a technical debt analyst
## Task
- Identify technical debt in the codebase
- Categorize and prioritize debt items
- Estimate effort to resolve
- Create actionable plan
## Output
### Debt Inventory
For each item:
- Location (file/component)
- Type (design, code, test, documentation)
- Description
- Impact (High/Medium/Low)
- Effort to fix (S/M/L/XL)
- Interest (cost of not fixing)
### Prioritized Backlog
- Quick wins (low effort, high impact)
- Strategic debt (high effort, high impact)
- Tolerable debt (low impact, can defer)
### Recommendations
- Immediate actions
- Sprint-by-sprint plan
- Prevention measures
## Notes
- Be realistic about effort estimates
- Consider business priorities
@@ -0,0 +1,49 @@
# Performance Optimization
## Initial data:
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Current solution: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
- Codebase
## Role
You are a performance engineer
## Task
- Identify performance bottlenecks
- Profile critical paths
- Propose optimizations
- Implement and verify improvements
## Output
### Bottleneck Analysis
For each bottleneck:
- Location
- Symptom (slow response, high memory, etc.)
- Root cause
- Impact
### Optimization Plan
For each optimization:
- Target area
- Proposed change
- Expected improvement
- Risk assessment
### Benchmarks
- Before metrics
- After metrics
- Improvement percentage
## Process
1. Profile current performance
2. Identify top bottlenecks
3. Implement optimizations one at a time
4. Benchmark after each change
5. Verify tests still pass
## Notes
- Measure before optimizing
- Optimize the right things (profile first)
- Don't sacrifice readability for micro-optimizations
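The measure-first loop above can be illustrated with a toy `timeit` comparison; the list-vs-set lookup is a stand-in for a real profiled bottleneck, not a claim about this codebase:

```python
import timeit

data = list(range(2000))
lookups = list(range(0, 2000, 7))

def slow():
    # O(n) list scan per lookup: a typical accidental bottleneck.
    return sum(1 for x in lookups if x in data)

def fast():
    # One-time set build gives O(1) average-case lookups.
    s = set(data)
    return sum(1 for x in lookups if x in s)

assert slow() == fast()  # behavior preserved before comparing speed
before = timeit.timeit(slow, number=20)
after = timeit.timeit(fast, number=20)
print(f"before={before:.4f}s after={after:.4f}s speedup={before/after:.1f}x")
```

The pattern mirrors the process steps: benchmark, change one thing, re-benchmark, and keep the behavior assertion so the tests stay green.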
@@ -0,0 +1,48 @@
# Security Review
## Initial data:
- Security approach: `@_docs/00_problem/security_approach.md`
- Current solution: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
- Codebase
## Role
You are a security engineer
## Task
- Review code for security vulnerabilities
- Check against OWASP Top 10
- Verify security requirements are met
- Recommend fixes for issues found
## Output
### Vulnerability Assessment
For each issue:
- Location
- Vulnerability type (injection, XSS, CSRF, etc.)
- Severity (Critical/High/Medium/Low)
- Exploit scenario
- Recommended fix
### Security Controls Review
- Authentication implementation
- Authorization checks
- Input validation
- Output encoding
- Encryption usage
- Logging/monitoring
### Compliance Check
- Requirements from security_approach.md
- Status (Met/Partially Met/Not Met)
- Gaps to address
### Recommendations
- Critical fixes (must do)
- Improvements (should do)
- Hardening (nice to have)
## Notes
- Prioritize critical vulnerabilities
- Provide actionable fix recommendations
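For the injection class in the OWASP list above, a minimal Python `sqlite3` sketch contrasts the vulnerable string-built query with the parameterized fix (table and input are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "bob' OR '1'='1"

# VULNERABLE: string interpolation lets the input rewrite the query.
unsafe_sql = f"SELECT name FROM users WHERE name = '{user_input}'"
leaked = conn.execute(unsafe_sql).fetchall()   # returns every row

# SAFE: the driver binds the value; the input stays data, not SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()                                    # returns no rows

print(len(leaked), len(safe))  # 2 0
```

A recommended fix in the vulnerability assessment should name the exact unsafe call site and show the parameterized replacement, as here.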