more detailed SDLC plan

This commit is contained in:
Oleksandr Bezdieniezhnykh
2025-12-10 19:05:17 +02:00
parent 73cbe43397
commit fd75243a84
22 changed files with 2087 additions and 34 deletions
@@ -0,0 +1,137 @@
# Tech Stack Selection
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Restrictions: `@_docs/00_problem/restrictions.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Security approach: `@_docs/00_problem/security_approach.md`
- Solution draft: `@_docs/01_solution/solution.md`
## Role
You are a software architect evaluating technology choices
## Task
- Evaluate technology options against requirements
- Consider team expertise and learning curve
- Assess long-term maintainability
- Document selection rationale
## Output
### Requirements Analysis
#### Functional Requirements
| Requirement | Tech Implications |
|-------------|-------------------|
| [From acceptance criteria] | |
#### Non-Functional Requirements
| Requirement | Tech Implications |
|-------------|-------------------|
| Performance | |
| Scalability | |
| Security | |
| Maintainability | |
#### Constraints
| Constraint | Impact on Tech Choice |
|------------|----------------------|
| [From restrictions] | |
### Technology Evaluation
#### Programming Language
| Option | Pros | Cons | Score (1-5) |
|--------|------|------|-------------|
| | | | |
**Selection**: [Language]
**Rationale**: [Why this choice]
#### Framework
| Option | Pros | Cons | Score (1-5) |
|--------|------|------|-------------|
| | | | |
**Selection**: [Framework]
**Rationale**: [Why this choice]
#### Database
| Option | Pros | Cons | Score (1-5) |
|--------|------|------|-------------|
| | | | |
**Selection**: [Database]
**Rationale**: [Why this choice]
#### Infrastructure/Hosting
| Option | Pros | Cons | Score (1-5) |
|--------|------|------|-------------|
| | | | |
**Selection**: [Platform]
**Rationale**: [Why this choice]
#### Key Libraries/Dependencies
| Category | Library | Version | Purpose | Alternatives Considered |
|----------|---------|---------|---------|------------------------|
| | | | | |
### Evaluation Criteria
Rate each technology option against these criteria:
1. **Fitness for purpose**: Does it meet functional requirements?
2. **Performance**: Can it meet performance requirements?
3. **Security**: Does it have a good security track record?
4. **Maturity**: Is it stable and well-maintained?
5. **Community**: Active community and documentation?
6. **Team expertise**: Does team have experience?
7. **Cost**: Licensing, hosting, operational costs?
8. **Scalability**: Can it grow with the project?
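The criteria above can be combined into a single weighted score per option; a minimal sketch (the weights and ratings below are illustrative assumptions, not values prescribed by this template):

```python
# Illustrative weighted scoring for one technology option.
# Weights are assumptions: adjust to the project's priorities.
CRITERIA_WEIGHTS = {
    "fitness": 3, "performance": 2, "security": 3, "maturity": 2,
    "community": 1, "team_expertise": 2, "cost": 1, "scalability": 2,
}

def weighted_score(ratings):
    """Combine per-criterion 1-5 ratings into a weighted average on the same scale."""
    total_weight = sum(CRITERIA_WEIGHTS.values())
    return sum(CRITERIA_WEIGHTS[c] * r for c, r in ratings.items()) / total_weight

# Hypothetical option rated 4 on every criterion.
option = {c: 4 for c in CRITERIA_WEIGHTS}
print(round(weighted_score(option), 2))  # 4.0
```

Record the chosen weights alongside the scores so the rationale stays auditable.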
### Technology Stack Summary
```
Language: [Language] [Version]
Framework: [Framework] [Version]
Database: [Database] [Version]
Cache: [Cache solution]
Message Queue: [If applicable]
CI/CD: [Platform]
Hosting: [Platform]
Monitoring: [Tools]
```
### Risk Assessment
| Technology | Risk | Mitigation |
|------------|------|------------|
| | | |
### Learning Requirements
| Technology | Team Familiarity | Training Needed |
|------------|-----------------|-----------------|
| | High/Med/Low | Yes/No |
### Decision Record
**Decision**: [Summary of tech stack]
**Date**: [YYYY-MM-DD]
**Participants**: [Who was involved]
**Status**: Approved / Pending Review
Store output to `_docs/01_solution/tech_stack.md`
## Notes
- Avoid over-engineering: choose the simplest solution that meets requirements
- Consider total cost of ownership, not just initial development
- Prefer proven technologies over cutting-edge unless required
- Document trade-offs for future reference
- Ask questions about team expertise and constraints
@@ -0,0 +1,57 @@
# Data Model Design
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Restrictions: `@_docs/00_problem/restrictions.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Full Solution Description: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
## Role
You are a professional database architect
## Task
- Analyze solution and components to identify all data entities
- Design database schema that supports all component requirements
- Define relationships, constraints, and indexes
- Consider data access patterns for query optimization
- Plan for data migration if applicable
## Output
### Entity Relationship Diagram
- Create ERD showing all entities and relationships
- Use Mermaid or draw.io format
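A minimal Mermaid ERD sketch showing the expected notation (entity and relationship names are placeholders):

```mermaid
erDiagram
    USER ||--o{ ORDER : places
    ORDER ||--|{ ORDER_ITEM : contains
    PRODUCT ||--o{ ORDER_ITEM : "appears in"
```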
### Schema Definition
For each entity:
- Table name
- Columns with types, constraints, defaults
- Primary keys
- Foreign keys and relationships
- Indexes (clustered, non-clustered)
- Partitioning strategy (if needed)
### Data Access Patterns
- List common queries per component
- Identify hot paths requiring optimization
- Recommend caching strategy
### Migration Strategy
- Initial schema creation scripts
- Seed data requirements
- Rollback procedures
### Storage Estimates
- Estimated row counts per table
- Storage requirements
- Growth projections
Store output to `_docs/02_components/data_model.md`
## Notes
- Follow database normalization principles (3NF minimum)
- Consider read vs write optimization based on access patterns
- Plan for horizontal scaling if required
- Ask questions to clarify data requirements
@@ -0,0 +1,64 @@
# API Contracts Design
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Restrictions: `@_docs/00_problem/restrictions.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Full Solution Description: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
- Data Model: `@_docs/02_components/data_model.md`
## Role
You are a professional API architect
## Task
- Define API contracts between all components
- Specify external API endpoints (if applicable)
- Define data transfer objects (DTOs)
- Establish error response standards
- Plan API versioning strategy
## Output
### Internal Component Interfaces
For each component boundary:
- Interface name
- Methods with signatures
- Input/Output DTOs
- Error types
- Async/Sync designation
### External API Specification
Generate OpenAPI/Swagger spec including:
- Endpoints with HTTP methods
- Request/Response schemas
- Authentication requirements
- Rate limiting rules
- Example requests/responses
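A small OpenAPI 3.0 fragment illustrating the expected level of detail (paths, schemas, and the auth scheme are placeholders to replace with the project's real contract):

```yaml
openapi: 3.0.3
info: { title: Example API, version: 1.0.0 }
paths:
  /api/v1/items/{id}:
    get:
      summary: Fetch a single item
      security: [{ bearerAuth: [] }]
      parameters:
        - { name: id, in: path, required: true, schema: { type: string } }
      responses:
        "200":
          description: Item found
          content:
            application/json:
              schema: { $ref: "#/components/schemas/Item" }
        "404": { description: Item not found }
components:
  securitySchemes:
    bearerAuth: { type: http, scheme: bearer }
  schemas:
    Item:
      type: object
      properties:
        id: { type: string }
        name: { type: string }
```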
### DTO Definitions
For each data transfer object:
- Name and purpose
- Fields with types
- Validation rules
- Serialization format (JSON, Protobuf, etc.)
### Error Contract
- Standard error response format
- Error codes and messages
- HTTP status code mapping
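One possible standard error envelope (field names are a suggestion, not mandated by this template); here the `code` would map to HTTP 404:

```json
{
  "error": {
    "code": "RESOURCE_NOT_FOUND",
    "message": "Item 42 does not exist",
    "correlation_id": "uuid",
    "details": []
  }
}
```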
### Versioning Strategy
- API versioning approach (URL, header, query param)
- Deprecation policy
- Breaking vs non-breaking change definitions
Store output to `_docs/02_components/api_contracts.md`
Store OpenAPI spec to `_docs/02_components/openapi.yaml` (if applicable)
## Notes
- Follow RESTful conventions for external APIs
- Keep internal interfaces minimal and focused
- Design for backward compatibility
- Ask questions to clarify integration requirements
@@ -0,0 +1,111 @@
# Risk Assessment
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Restrictions: `@_docs/00_problem/restrictions.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Full Solution Description: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
- Estimation: `@_docs/02_components/estimation.md`
## Role
You are a technical risk analyst
## Task
- Identify technical and project risks
- Assess probability and impact
- Define mitigation strategies
- Create risk monitoring plan
## Output
### Risk Register
| ID | Risk | Category | Probability | Impact | Score | Mitigation | Owner |
|----|------|----------|-------------|--------|-------|------------|-------|
| R1 | | Tech/Schedule/Resource/External | High/Med/Low | High/Med/Low | H/M/L | | |
### Risk Scoring Matrix
| | Low Impact | Medium Impact | High Impact |
|--|------------|---------------|-------------|
| High Probability | Medium | High | Critical |
| Medium Probability | Low | Medium | High |
| Low Probability | Low | Low | Medium |
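The matrix above can be encoded directly to keep the risk register's scores consistent; a small sketch:

```python
# Risk-level lookup mirroring the scoring matrix above:
# (probability, impact) -> level.
MATRIX = {
    ("High", "Low"): "Medium",   ("High", "Medium"): "High",     ("High", "High"): "Critical",
    ("Medium", "Low"): "Low",    ("Medium", "Medium"): "Medium", ("Medium", "High"): "High",
    ("Low", "Low"): "Low",       ("Low", "Medium"): "Low",       ("Low", "High"): "Medium",
}

def risk_level(probability, impact):
    """Map a probability/impact pair to the level defined in the matrix."""
    return MATRIX[(probability, impact)]

print(risk_level("High", "High"))  # Critical
```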
### Risk Categories
#### Technical Risks
- Technology choices may not meet requirements
- Integration complexity underestimated
- Performance targets unachievable
- Security vulnerabilities
#### Schedule Risks
- Scope creep
- Dependencies delayed
- Resource unavailability
- Underestimated complexity
#### Resource Risks
- Key person dependency
- Skill gaps
- Team availability
#### External Risks
- Third-party API changes
- Vendor reliability
- Regulatory changes
### Top Risks (Ranked)
#### 1. [Highest Risk]
- **Description**:
- **Probability**: High/Medium/Low
- **Impact**: High/Medium/Low
- **Mitigation Strategy**:
- **Contingency Plan**:
- **Early Warning Signs**:
- **Owner**:
#### 2. [Second Highest Risk]
...
### Risk Mitigation Plan
| Risk ID | Mitigation Action | Timeline | Cost | Responsible |
|---------|-------------------|----------|------|-------------|
| R1 | | | | |
### Risk Monitoring
#### Review Schedule
- Daily standup: Discuss blockers (potential risks materializing)
- Weekly: Review risk register, update probabilities
- Sprint end: Comprehensive risk review
#### Early Warning Indicators
| Risk | Indicator | Threshold | Action |
|------|-----------|-----------|--------|
| | | | |
### Contingency Budget
- Time buffer: 20% of estimated duration
- Scope flexibility: [List features that can be descoped]
- Resource backup: [Backup resources if available]
### Acceptance Criteria for Risks
Define which risks are acceptable:
- Low risks: Accepted, monitored
- Medium risks: Mitigation required
- High risks: Mitigation + contingency required
- Critical risks: Must be resolved before proceeding
Store output to `_docs/02_components/risk_assessment.md`
## Notes
- Update risk register throughout project
- Escalate critical risks immediately
- Consider both likelihood and impact
- Ask questions to uncover hidden risks
@@ -22,10 +22,21 @@
- helpers - empty implementations or interfaces
- Add .gitignore appropriate for the project's language/framework
- Add .env.example with required environment variables
- Add CI/CD skeleton (GitHub Actions, GitLab CI, or appropriate)
- Configure CI/CD pipeline with full stages:
- Build stage
- Lint/Static analysis stage
- Unit tests stage
- Integration tests stage
- Security scan stage (SAST/dependency check)
- Deploy to staging stage (triggered on merge to stage branch)
- Define environment strategy based on `@_docs/00_templates/environment_strategy.md`:
- Development environment configuration
- Staging environment configuration
- Production environment configuration (if applicable)
- Add database migration setup if applicable
- Add README.md describing the project per `@_docs/01_solution/solution.md`
- Create a separate folder for the integration tests (not a separate repo)
- Configure branch protection rules recommendations
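If GitHub Actions is the chosen tool, the staged pipeline above could be skeletoned roughly like this (job names and the `./scripts/*` commands are placeholders to adapt to the project):

```yaml
name: ci
on:
  push: { branches: [dev, stage] }
  pull_request:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/build.sh            # placeholder build command
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/lint.sh             # lint / static analysis
  unit-tests:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/test_unit.sh
  integration-tests:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/test_integration.sh
  security-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/security_scan.sh    # SAST / dependency check
  deploy-staging:
    if: github.ref == 'refs/heads/stage'
    needs: [unit-tests, integration-tests, security-scan]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/deploy_staging.sh
```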
## Example
The structure should look roughly like this:
@@ -1,42 +1,64 @@
# CI/CD Pipeline Validation & Enhancement
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Restrictions: `@_docs/00_problem/restrictions.md`
- Full Solution Description: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
- Environment Strategy: `@_docs/00_templates/environment_strategy.md`
## Role
You are a DevOps engineer
## Task
- Review existing CI/CD pipeline configuration
- Validate all stages are working correctly
- Optimize pipeline performance (parallelization, caching)
- Ensure test coverage gates are enforced
- Verify security scanning is properly configured
- Add missing quality gates
## Checklist
### Pipeline Health
- [ ] All stages execute successfully
- [ ] Build time is acceptable (<10 min for most projects)
- [ ] Caching is properly configured (dependencies, build artifacts)
- [ ] Parallel execution where possible
### Quality Gates
- [ ] Code coverage threshold enforced (minimum 75%)
- [ ] Linting errors block merge
- [ ] Security vulnerabilities block merge (critical/high)
- [ ] All tests must pass
### Environment Deployments
- [ ] Staging deployment works on merge to stage branch
- [ ] Environment variables properly configured per environment
- [ ] Secrets are securely managed (not in code)
- [ ] Rollback procedure documented
### Monitoring
- [ ] Build notifications configured (Slack, email, etc.)
- [ ] Failed build alerts
- [ ] Deployment success/failure notifications
## Output
### Pipeline Configuration
- Pipeline file(s) created/updated
- Stages description
- Triggers (on push, PR, etc.)
### Environment Setup
- Required secrets/variables
- Environment-specific configs
### Pipeline Status Report
- Current pipeline configuration summary
- Issues found and fixes applied
- Performance metrics (build times)
### Deployment Strategy
- Staging deployment steps
- Production deployment steps (if applicable)
### Recommended Improvements
- Short-term improvements
- Long-term optimizations
### Quality Gate Configuration
- Thresholds configured
- Enforcement rules
## Notes
- Use project-appropriate CI/CD tool (GitHub Actions, GitLab CI, Azure DevOps, etc.)
- Keep the pipeline fast: parallelize where possible
- Do not break existing functionality
- Test changes in separate branch first
- Document any manual steps required
@@ -0,0 +1,72 @@
# Deployment Strategy Planning
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Restrictions: `@_docs/00_problem/restrictions.md`
- Full Solution Description: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
- Environment Strategy: `@_docs/00_templates/environment_strategy.md`
## Role
You are a DevOps/Platform engineer
## Task
- Define deployment strategy for each environment
- Plan deployment procedures and automation
- Define rollback procedures
- Establish deployment verification steps
- Document manual intervention points
## Output
### Deployment Architecture
- Infrastructure diagram (where components run)
- Network topology
- Load balancing strategy
- Container/VM configuration
### Deployment Procedures
#### Staging Deployment
- Trigger conditions
- Pre-deployment checks
- Deployment steps
- Post-deployment verification
- Smoke tests to run
#### Production Deployment
- Approval workflow
- Deployment window
- Pre-deployment checks
- Deployment steps (blue-green, rolling, canary)
- Post-deployment verification
- Smoke tests to run
### Rollback Procedures
- Rollback trigger criteria
- Rollback steps per environment
- Data rollback considerations
- Communication plan during rollback
### Health Checks
- Liveness probe configuration
- Readiness probe configuration
- Custom health endpoints
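For Kubernetes-based deployments, the probes might be declared like this (paths, port, and timings are illustrative assumptions):

```yaml
livenessProbe:
  httpGet: { path: /healthz, port: 8080 }
  initialDelaySeconds: 10
  periodSeconds: 15
  failureThreshold: 3
readinessProbe:
  httpGet: { path: /ready, port: 8080 }
  initialDelaySeconds: 5
  periodSeconds: 10
  failureThreshold: 3
```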
### Deployment Checklist
- [ ] All tests pass in CI
- [ ] Security scan clean
- [ ] Database migrations reviewed
- [ ] Feature flags configured
- [ ] Monitoring alerts configured
- [ ] Rollback plan documented
- [ ] Stakeholders notified
Store output to `_docs/02_components/deployment_strategy.md`
## Notes
- Prefer automated deployments over manual
- Zero-downtime deployments for production
- Always have a rollback plan
- Ask questions about infrastructure constraints
@@ -0,0 +1,123 @@
# Observability Planning
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Full Solution Description: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
- Deployment Strategy: `@_docs/02_components/deployment_strategy.md`
## Role
You are a Site Reliability Engineer (SRE)
## Task
- Define logging strategy across all components
- Plan metrics collection and dashboards
- Design distributed tracing (if applicable)
- Establish alerting rules
- Document incident response procedures
## Output
### Logging Strategy
#### Log Levels
| Level | Usage | Example |
|-------|-------|---------|
| ERROR | Exceptions, failures requiring attention | Database connection failed |
| WARN | Potential issues, degraded performance | Retry attempt 2/3 |
| INFO | Significant business events | User registered, Order placed |
| DEBUG | Detailed diagnostic information | Request payload, Query params |
#### Log Format
```json
{
"timestamp": "ISO8601",
"level": "INFO",
"service": "service-name",
"correlation_id": "uuid",
"message": "Event description",
"context": {}
}
```
#### Log Storage
- Development: Console/file
- Staging: Centralized (ELK, CloudWatch, etc.)
- Production: Centralized with retention policy
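A library-free sketch of emitting log lines in the format above (a real project would likely use its stack's structured-logging support instead):

```python
import json
import uuid
from datetime import datetime, timezone

def log(level, service, message, correlation_id=None, **context):
    """Emit one JSON log line matching the agreed format; returns the line."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "level": level,
        "service": service,
        "correlation_id": correlation_id or str(uuid.uuid4()),
        "message": message,
        "context": context,  # arbitrary structured fields
    }
    line = json.dumps(entry)
    print(line)
    return line

log("INFO", "order-service", "Order placed", order_id=42)
```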
### Metrics
#### System Metrics
- CPU usage
- Memory usage
- Disk I/O
- Network I/O
#### Application Metrics
| Metric | Type | Description |
|--------|------|-------------|
| request_count | Counter | Total requests |
| request_duration | Histogram | Response time |
| error_count | Counter | Failed requests |
| active_connections | Gauge | Current connections |
#### Business Metrics
- [Define based on acceptance criteria]
### Distributed Tracing
#### Trace Context
- Correlation ID propagation
- Span naming conventions
- Sampling strategy
#### Integration Points
- HTTP headers
- Message queue metadata
- Database query tagging
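Correlation-ID propagation over HTTP can be as simple as reusing or minting one header per request; a sketch (the header name is a common convention, not mandated here):

```python
import uuid

CORRELATION_HEADER = "X-Correlation-ID"  # assumed header-name convention

def propagate_correlation(incoming_headers):
    """Reuse the caller's correlation ID, or mint one at the system edge."""
    cid = incoming_headers.get(CORRELATION_HEADER) or str(uuid.uuid4())
    return {CORRELATION_HEADER: cid}

# Mid-chain service: reuses the ID it received.
print(propagate_correlation({"X-Correlation-ID": "abc-123"}))
```

The same ID is then attached to every log line, outgoing request, and queue message the service produces.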
### Alerting
#### Alert Categories
| Severity | Response Time | Examples |
|----------|---------------|----------|
| Critical | 5 min | Service down, Data loss |
| High | 30 min | High error rate, Performance degradation |
| Medium | 4 hours | Elevated latency, Disk usage high |
| Low | Next business day | Non-critical warnings |
#### Alert Rules
```yaml
alerts:
- name: high_error_rate
condition: error_rate > 5%
duration: 5m
severity: high
- name: service_down
condition: health_check_failed
duration: 1m
severity: critical
```
### Dashboards
#### Operations Dashboard
- Service health status
- Request rate and error rate
- Response time percentiles
- Resource utilization
#### Business Dashboard
- Key business metrics
- User activity
- Transaction volumes
Store output to `_docs/02_components/observability_plan.md`
## Notes
- Follow the principle: "If it's not monitored, it's not in production"
- Balance verbosity with cost
- Ensure PII is not logged
- Plan for log rotation and retention
@@ -0,0 +1,92 @@
# Capture Baseline Metrics
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Current codebase
## Role
You are a software engineer preparing for refactoring
## Task
- Capture current system metrics as baseline
- Document current behavior
- Establish benchmarks to compare against after refactoring
- Identify critical paths to monitor
## Output
### Code Quality Metrics
#### Coverage
```
Current test coverage: XX%
- Unit test coverage: XX%
- Integration test coverage: XX%
- Critical paths coverage: XX%
```
#### Code Complexity
- Cyclomatic complexity (average):
- Most complex functions (top 5):
- Lines of code:
- Technical debt ratio:
#### Code Smells
- Total code smells:
- Critical issues:
- Major issues:
### Performance Metrics
#### Response Times
| Endpoint/Operation | P50 | P95 | P99 |
|-------------------|-----|-----|-----|
| [endpoint1] | Xms | Xms | Xms |
| [operation1] | Xms | Xms | Xms |
#### Resource Usage
- Average CPU usage:
- Average memory usage:
- Database query count per operation:
#### Throughput
- Requests per second:
- Concurrent users supported:
### Functionality Inventory
List all current features/endpoints:
| Feature | Status | Test Coverage | Notes |
|---------|--------|---------------|-------|
| | | | |
### Dependency Analysis
- Total dependencies:
- Outdated dependencies:
- Security vulnerabilities in dependencies:
### Build Metrics
- Build time:
- Test execution time:
- Deployment time:
Store output to `_docs/04_refactoring/baseline_metrics.md`
## Measurement Commands
Use project-appropriate tools for your tech stack:
| Metric | Python | C#/.NET | Java | Go | JavaScript/TypeScript |
|--------|--------|---------|------|-----|----------------------|
| Test coverage | pytest --cov | dotnet test --collect | jacoco | go test -cover | jest --coverage |
| Code complexity | radon | CodeMetrics | PMD | gocyclo | eslint-plugin-complexity |
| Lines of code | cloc | cloc | cloc | cloc | cloc |
| Dependency check | pip-audit | dotnet list package --vulnerable | OWASP dependency-check-maven | govulncheck | npm audit |
## Notes
- Run measurements multiple times for accuracy
- Document measurement methodology
- Save raw data for comparison
- Focus on metrics relevant to refactoring goals
@@ -9,11 +9,22 @@
## Role
You are a professional Quality Assurance Engineer
## Prerequisites
- Baseline metrics captured (see 4.07_capture_baseline.md)
- Feature parity checklist created (see `@_docs/00_templates/feature_parity_checklist.md`)
## Coverage Requirements (MUST meet before refactoring)
- Minimum overall coverage: 75%
- Critical path coverage: 90%
- All public APIs must have integration tests
- All error handling paths must be tested
## Task
- Analyze existing test coverage
- Define integration tests that capture current system behavior
- Tests should serve as safety net for refactoring
- Cover critical paths and edge cases
- Ensure coverage requirements are met before proceeding to refactoring
## Output
Store test specs to `_docs/02_tests/[##]_[test_name]_spec.md`:
@@ -0,0 +1,120 @@
# Merge and Deploy Feature
Complete the feature development cycle by creating PR, merging, and updating documentation.
## Input parameters
- task_id (required): Jira task ID
Example: /gen_merge_and_deploy AZ-122
## Prerequisites
- All tests pass locally
- Code review completed (or ready for review)
- Definition of Done checklist reviewed
## Steps (Agent)
### 1. Verify Branch Status
```bash
git status
git log --oneline -5
```
- Confirm on feature branch (e.g., az-122-feature-name)
- Confirm all changes committed
- If uncommitted changes exist, prompt user to commit first
### 2. Run Pre-merge Checks
**User action required**: Run your project's test and lint commands before proceeding.
```bash
# Check for merge conflicts with a trial merge, then discard it either way
git fetch origin dev
git merge origin/dev --no-commit --no-ff
git merge --abort || true  # nothing to abort if already up to date
```
- [ ] All tests pass (run project-specific test command)
- [ ] No linting errors (run project-specific lint command)
- [ ] No merge conflicts (or resolve them)
### 3. Update Documentation
#### CHANGELOG.md
Add entry under "Unreleased" section:
```markdown
### Added/Changed/Fixed
- [TASK_ID] Brief description of change
```
#### Update Jira
- Add comment with summary of implementation
- Link any related PRs or documentation
### 4. Create Pull Request
#### PR Title Format
`[TASK_ID] Brief description`
#### PR Body (from template)
```markdown
## Description
[Summary of changes]
## Related Issue
Jira ticket: [TASK_ID](link)
## Type of Change
- [ ] Bug fix
- [ ] New feature
- [ ] Refactoring
## Checklist
- [ ] Code follows project conventions
- [ ] Self-review completed
- [ ] Tests added/updated
- [ ] All tests pass
- [ ] Documentation updated
## Breaking Changes
[None / List breaking changes]
## Deployment Notes
[None / Special deployment considerations]
## Rollback Plan
[Steps to rollback if issues arise]
## Testing
[How to test these changes]
```
### 5. Post-merge Actions
After PR is approved and merged:
```bash
# Switch to dev branch
git checkout dev
git pull origin dev
# Delete feature branch
git branch -d {feature_branch}
git push origin --delete {feature_branch}
```
### 6. Update Jira Status
- Move ticket to "Done"
- Add link to merged PR
- Log time spent (if tracked)
## Guardrails
- Do NOT merge if tests fail
- Do NOT merge if there are unresolved review comments
- Do NOT delete branch before merge is confirmed
- Always update CHANGELOG before creating PR
## Output
- PR created/URL provided
- CHANGELOG updated
- Jira ticket updated
- Feature branch cleaned up (post-merge)
@@ -0,0 +1,92 @@
# Definition of Done (DoD)
A feature/task is considered DONE when all applicable items are completed.
---
## Code Complete
- [ ] All acceptance criteria from the spec are implemented
- [ ] Code compiles/builds without errors
- [ ] No new linting errors or warnings
- [ ] Code follows project coding standards and conventions
- [ ] No hardcoded values (use configuration/environment variables)
- [ ] Error handling implemented per project standards
---
## Testing Complete
- [ ] Unit tests written for new code
- [ ] Unit tests pass locally
- [ ] Integration tests written (if applicable)
- [ ] Integration tests pass
- [ ] Code coverage meets minimum threshold (75%)
- [ ] Manual testing performed for UI changes
---
## Code Review Complete
- [ ] Pull request created with proper description
- [ ] PR linked to Jira ticket
- [ ] At least one approval from reviewer
- [ ] All review comments addressed
- [ ] No merge conflicts
---
## Documentation Complete
- [ ] Code comments for complex logic (if needed)
- [ ] API documentation updated (if endpoints changed)
- [ ] README updated (if setup/usage changed)
- [ ] CHANGELOG updated with changes
---
## CI/CD Complete
- [ ] All CI pipeline stages pass
- [ ] Security scan passes (no critical/high vulnerabilities)
- [ ] Build artifacts generated successfully
---
## Deployment Ready
- [ ] Database migrations tested (if applicable)
- [ ] Configuration changes documented
- [ ] Feature flags configured (if applicable)
- [ ] Rollback plan identified
---
## Communication Complete
- [ ] Jira ticket moved to Done
- [ ] Stakeholders notified of completion (if required)
- [ ] Any blockers or follow-up items documented
---
## Quick Reference
| Category | Must Have | Nice to Have |
|----------|-----------|--------------|
| Code | Builds, No lint errors | Optimized |
| Tests | Unit + Integration pass | E2E tests |
| Coverage | >= 75% | >= 85% |
| Review | 1 approval | 2 approvals |
| Docs | CHANGELOG | Full API docs |
---
## Exceptions
If any DoD item cannot be completed, document:
1. Which item is incomplete
2. Reason for exception
3. Plan to address (with timeline)
4. Approval from tech lead
@@ -0,0 +1,139 @@
# Environment Strategy Template
## Overview
Define the environment strategy for the project, including configuration, access, and deployment procedures for each environment.
---
## Environments
### Development (dev)
**Purpose**: Local development and feature testing
| Aspect | Configuration |
|--------|---------------|
| Branch | `dev`, feature branches |
| Database | Local or shared dev instance |
| External Services | Mock/sandbox endpoints |
| Logging Level | DEBUG |
| Access | All developers |
**Configuration**:
```
# .env.development
ENV=development
DATABASE_URL=<dev_database_url>
API_TIMEOUT=30
LOG_LEVEL=DEBUG
```
### Staging (stage)
**Purpose**: Pre-production testing, QA, UAT
| Aspect | Configuration |
|--------|---------------|
| Branch | `stage` |
| Database | Staging instance (production-like) |
| External Services | Sandbox/test endpoints |
| Logging Level | INFO |
| Access | Development team, QA |
**Configuration**:
```
# .env.staging
ENV=staging
DATABASE_URL=<staging_database_url>
API_TIMEOUT=15
LOG_LEVEL=INFO
```
**Deployment Trigger**: Merge to `stage` branch
### Production (prod)
**Purpose**: Live system serving end users
| Aspect | Configuration |
|--------|---------------|
| Branch | `main` |
| Database | Production instance |
| External Services | Production endpoints |
| Logging Level | WARN |
| Access | Restricted (ops team) |
**Configuration**:
```
# .env.production
ENV=production
DATABASE_URL=<production_database_url>
API_TIMEOUT=10
LOG_LEVEL=WARN
```
**Deployment Trigger**: Manual approval after staging validation
---
## Secrets Management
### Secret Categories
- Database credentials
- API keys (internal and external)
- Encryption keys
- Service account credentials
### Storage
| Environment | Secret Storage |
|-------------|----------------|
| Development | .env.local (gitignored) |
| Staging | CI/CD secrets / Vault |
| Production | CI/CD secrets / Vault |
### Rotation Policy
- Database passwords: Every 90 days
- API keys: Every 180 days or on compromise
- Encryption keys: Annually
---
## Environment Parity
### Required Parity
- Same database engine and version
- Same runtime version
- Same dependency versions
- Same configuration structure
### Allowed Differences
- Resource scaling (CPU, memory)
- External service endpoints (sandbox vs production)
- Logging verbosity
- Feature flags
---
## Access Control
| Role | Dev | Staging | Production |
|------|-----|---------|------------|
| Developer | Full | Read + Deploy | Read logs only |
| QA | Read | Full | Read logs only |
| DevOps | Full | Full | Full |
| Stakeholder | None | Read | Read dashboards |
---
## Backup & Recovery
| Environment | Backup Frequency | Retention | RTO | RPO |
|-------------|------------------|-----------|-----|-----|
| Development | None | N/A | N/A | N/A |
| Staging | Daily | 7 days | 4 hours | 24 hours |
| Production | Hourly | 30 days | 1 hour | 1 hour |
---
## Notes
- Never copy production data to lower environments without anonymization
- All environment-specific values must be externalized (no hardcoding)
- Document any environment-specific behaviors in code comments
@@ -0,0 +1,103 @@
# Feature Dependency Matrix
Track feature dependencies to ensure proper implementation order.
---
## Active Features
| Feature ID | Feature Name | Status | Dependencies | Blocks |
|------------|--------------|--------|--------------|--------|
| | | Draft/In Progress/Done | List IDs | List IDs |
---
## Dependency Rules
### Status Definitions
- **Draft**: Spec created, not started
- **In Progress**: Development started
- **Done**: Merged to dev, verified
- **Blocked**: Waiting on dependencies
### Dependency Types
- **Hard**: Cannot start without dependency complete
- **Soft**: Can mock dependency, integrate later
- **API**: Depends on API contract (can parallelize with mock)
- **Data**: Depends on data/schema (must be complete)
---
## Current Dependencies
### [Feature A] depends on:
| Dependency | Type | Status | Blocker? |
|------------|------|--------|----------|
| | Hard/Soft/API/Data | Done/In Progress | Yes/No |
### [Feature B] depends on:
| Dependency | Type | Status | Blocker? |
|------------|------|--------|----------|
| | | | |
---
## Dependency Graph
```
Feature A (Done)
└── Feature B (In Progress)
└── Feature D (Draft)
└── Feature C (Draft)
Feature E (Done)
└── Feature F (In Progress)
```
---
## Implementation Order
Based on dependencies, recommended implementation order:
1. **Phase 1** (No dependencies)
- [ ] Feature X
- [ ] Feature Y
2. **Phase 2** (Depends on Phase 1)
- [ ] Feature Z (after X)
- [ ] Feature W (after Y)
3. **Phase 3** (Depends on Phase 2)
- [ ] Feature V (after Z, W)
---
## Handling Blocked Features
When a feature is blocked:
1. **Identify** the blocking dependency
2. **Escalate** if blocker is delayed
3. **Consider** if feature can proceed with mocks
4. **Document** any workarounds used
5. **Schedule** integration when blocker completes
---
## Mock Strategy
When using mocks for dependencies:
| Feature | Mocked Dependency | Mock Type | Integration Task |
|---------|-------------------|-----------|------------------|
| | | Interface/Data/API | Link to task |
---
## Update Log
| Date | Feature | Change | By |
|------|---------|--------|-----|
| | | Added/Updated/Completed | |
@@ -0,0 +1,129 @@
# Feature Parity Checklist
Use this checklist to ensure all functionality is preserved during refactoring.
---
## Project: [Project Name]
## Refactoring Scope: [Brief description]
## Date: [YYYY-MM-DD]
---
## Feature Inventory
### API Endpoints
| Endpoint | Method | Before | After | Verified |
|----------|--------|--------|-------|----------|
| /api/v1/example | GET | Working | | [ ] |
| | | | | [ ] |
### Core Functions
| Function/Module | Purpose | Before | After | Verified |
|-----------------|---------|--------|-------|----------|
| | | Working | | [ ] |
| | | | | [ ] |
### User Workflows
| Workflow | Steps | Before | After | Verified |
|----------|-------|--------|-------|----------|
| User login | 1. Enter credentials 2. Submit | Working | | [ ] |
| | | | | [ ] |
### Integrations
| External System | Integration Type | Before | After | Verified |
|-----------------|------------------|--------|-------|----------|
| | API/Webhook/DB | Working | | [ ] |
| | | | | [ ] |
---
## Behavioral Parity
### Input Handling
- [ ] Same inputs produce same outputs
- [ ] Error messages unchanged (or improved)
- [ ] Validation rules preserved
- [ ] Edge cases handled identically
### Output Format
- [ ] Response structure unchanged
- [ ] Data types preserved
- [ ] Null handling consistent
- [ ] Date/time formats preserved
### Side Effects
- [ ] Database writes produce same results
- [ ] File operations unchanged
- [ ] External API calls preserved
- [ ] Event emissions maintained
---
## Non-Functional Parity
### Performance
- [ ] Response times within baseline +10%
- [ ] Memory usage within baseline +10%
- [ ] CPU usage within baseline +10%
- [ ] No new N+1 queries introduced
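The "+10%" checks above can be scripted so the comparison is not left to eyeballing. A minimal sketch, assuming the metric values are pulled from your own tooling (the numbers here are illustrative):

```shell
# Compare a current measurement against the recorded baseline.
baseline_ms=120   # e.g. P95 response time before refactoring
current_ms=126    # same metric after refactoring

# Allowed ceiling: baseline + 10%
limit=$(awk -v b="$baseline_ms" 'BEGIN { printf "%d", b * 1.10 }')

if [ "$current_ms" -le "$limit" ]; then
  echo "PASS: ${current_ms}ms is within baseline +10% (limit ${limit}ms)"
else
  echo "FAIL: ${current_ms}ms exceeds the ${limit}ms limit"
fi
```

The same pattern applies to the memory and CPU checks; only the metric source changes.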
### Security
- [ ] Authentication unchanged
- [ ] Authorization rules preserved
- [ ] Input sanitization maintained
- [ ] No new vulnerabilities introduced
### Reliability
- [ ] Error handling preserved
- [ ] Retry logic maintained
- [ ] Timeout behavior unchanged
- [ ] Circuit breakers preserved
---
## Test Coverage
| Test Type | Before | After | Status |
|-----------|--------|-------|--------|
| Unit Tests | X pass | | [ ] Same or better |
| Integration Tests | X pass | | [ ] Same or better |
| E2E Tests | X pass | | [ ] Same or better |
---
## Verification Steps
### Automated Verification
1. [ ] All existing tests pass
2. [ ] No new linting errors
3. [ ] Coverage >= baseline
### Manual Verification
1. [ ] Smoke test critical paths
2. [ ] Verify UI behavior (if applicable)
3. [ ] Test error scenarios
### Stakeholder Sign-off
- [ ] QA approved
- [ ] Product owner approved (if behavior changed)
---
## Discrepancies Found
| Feature | Expected | Actual | Resolution | Status |
|---------|----------|--------|------------|--------|
| | | | | |
---
## Notes
- Any intentional behavior changes must be documented and approved
- Update this checklist as refactoring progresses
- Keep baseline metrics for comparison
# Incident Playbook Template
## Incident Overview
| Field | Value |
|-------|-------|
| Playbook Name | [Name] |
| Severity | Critical / High / Medium / Low |
| Last Updated | [YYYY-MM-DD] |
| Owner | [Team/Person] |
---
## Detection
### Symptoms
- [How will you know this incident is occurring?]
- Alert: [Alert name that triggers]
- User reports: [Expected user complaints]
### Monitoring
- Dashboard: [Link to relevant dashboard]
- Logs: [Log query to investigate]
- Metrics: [Key metrics to watch]
---
## Assessment
### Impact Analysis
- Users affected: [All / Subset / Internal only]
- Data at risk: [Yes / No]
- Revenue impact: [High / Medium / Low / None]
### Severity Determination
| Condition | Severity |
|-----------|----------|
| Service completely down | Critical |
| Partial degradation | High |
| Intermittent issues | Medium |
| Minor impact | Low |
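To keep severity assignment consistent across responders, the table can be encoded in the tooling around the playbook. A hypothetical helper (the condition keywords are illustrative):

```shell
# Maps a condition keyword to the severity table above.
severity_for() {
  case "$1" in
    down)         echo "Critical" ;;  # service completely down
    degraded)     echo "High" ;;      # partial degradation
    intermittent) echo "Medium" ;;    # intermittent issues
    *)            echo "Low" ;;       # minor impact
  esac
}

severity_for down   # prints "Critical"
```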
---
## Response
### Immediate Actions (First 5 minutes)
1. [ ] Acknowledge alert
2. [ ] Verify incident is real (not false positive)
3. [ ] Notify on-call team
4. [ ] Start incident channel/call
### Investigation Steps
1. [ ] Check recent deployments
2. [ ] Review error logs
3. [ ] Check infrastructure metrics
4. [ ] Identify affected components
### Communication
| Audience | Channel | Frequency |
|----------|---------|-----------|
| Engineering | Slack #incidents | Continuous |
| Stakeholders | Email | Every 30 min |
| Users | Status page | Major updates |
---
## Resolution
### Common Fixes
#### Fix 1: [Common issue]
```bash
# Commands to fix
```
Expected outcome: [What should happen]
#### Fix 2: [Another common issue]
```bash
# Commands to fix
```
Expected outcome: [What should happen]
### Rollback Procedure
1. [ ] Identify last known good version
2. [ ] Execute rollback
```bash
# Rollback commands
```
3. [ ] Verify service restored
4. [ ] Monitor for 15 minutes
### Escalation Path
| Time | Action |
|------|--------|
| 0-15 min | On-call engineer |
| 15-30 min | Team lead |
| 30-60 min | Engineering manager |
| 60+ min | Director/VP |
---
## Post-Incident
### Verification
- [ ] Service fully restored
- [ ] All alerts cleared
- [ ] User-facing functionality verified
- [ ] Monitoring back to normal
### Documentation
- [ ] Timeline documented
- [ ] Root cause identified
- [ ] Action items created
- [ ] Post-mortem scheduled
### Post-Mortem Template
```markdown
## Incident Summary
- Date/Time:
- Duration:
- Impact:
- Root Cause:
## Timeline
- [Time] - Event
## What Went Well
-
## What Went Wrong
-
## Action Items
| Action | Owner | Due Date |
|--------|-------|----------|
| | | |
```
---
## Contacts
| Role | Name | Contact |
|------|------|---------|
| On-call | | |
| Team Lead | | |
| Manager | | |
---
## Revision History
| Date | Author | Changes |
|------|--------|---------|
| | | |
- [ ] New feature
- [ ] Refactoring
- [ ] Documentation
- [ ] Performance improvement
- [ ] Security fix
## Checklist
- [ ] Code follows project conventions
- [ ] Self-review completed
- [ ] Tests added/updated
- [ ] All tests pass
- [ ] Code coverage maintained/improved
- [ ] Documentation updated (if needed)
- [ ] CHANGELOG updated
## Breaking Changes
<!-- List any breaking changes, or write "None" -->
- None
## API Changes
<!-- List any API changes (new endpoints, changed signatures, removed endpoints) -->
- None
## Database Changes
<!-- List any database changes (migrations, schema changes) -->
- [ ] No database changes
- [ ] Migration included and tested
- [ ] Rollback migration included
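The checklist above implies every schema change ships as a reversible pair. A minimal sketch of such a pair, assuming a file-based migration tool; the naming convention, table, and column are hypothetical:

```shell
# Hypothetical up/down migration pair for one schema change.
mkdir -p migrations

cat > migrations/20250101_add_status.up.sql <<'EOF'
ALTER TABLE orders ADD COLUMN status TEXT NOT NULL DEFAULT 'new';
EOF

cat > migrations/20250101_add_status.down.sql <<'EOF'
ALTER TABLE orders DROP COLUMN status;
EOF
```

Testing the down migration against staging before merge is what makes the "Rollback migration included" box meaningful.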
## Deployment Notes
<!-- Special considerations for deployment -->
- [ ] No special deployment steps required
- [ ] Environment variables added/changed (documented in .env.example)
- [ ] Feature flags configured
- [ ] External service dependencies
## Rollback Plan
<!-- Steps to rollback if issues arise -->
1. Revert this PR commit
2. [Additional steps if needed]
## Testing
How to test these changes:
1.
2.
3.
## Performance Impact
<!-- Note any performance implications -->
- [ ] No performance impact expected
- [ ] Performance tested (attach results if applicable)
## Security Considerations
<!-- Note any security implications -->
- [ ] No security implications
- [ ] Security review completed
- [ ] Sensitive data handling reviewed
## Screenshots (if applicable)
<!-- Add screenshots for UI changes -->
# Quality Gates
Quality gates are checkpoints that must pass before proceeding to the next phase.
---
## Kickstart Tutorial Quality Gates
### Gate 1: Research Complete (after 1.40)
Before proceeding to Planning phase:
- [ ] Problem description is clear and complete
- [ ] Acceptance criteria are measurable and testable
- [ ] Restrictions are documented
- [ ] Security requirements defined
- [ ] Solution draft reviewed and finalized
- [ ] Tech stack evaluated and selected
### Gate 2: Planning Complete (after 2.40)
Before proceeding to Implementation phase:
- [ ] All components defined with clear boundaries
- [ ] Data model designed and reviewed
- [ ] API contracts defined
- [ ] Test specifications created
- [ ] Jira epics/tasks created
- [ ] Effort estimated
- [ ] Risks identified and mitigated
### Gate 3: Implementation Complete (after 3.40)
Before merging to main:
- [ ] All components implemented
- [ ] Code coverage >= 75%
- [ ] All tests pass (unit, integration)
- [ ] Code review approved
- [ ] Security scan passed
- [ ] CI/CD pipeline green
- [ ] Deployment tested on staging
- [ ] Documentation complete
---
## Iterative Tutorial Quality Gates
### Gate 1: Spec Ready (after step 20)
Before creating Jira task:
- [ ] Building block clearly defines problem/goal
- [ ] Feature spec has measurable acceptance criteria
- [ ] Dependencies identified
- [ ] Complexity estimated
### Gate 2: Implementation Ready (after step 50)
Before starting development:
- [ ] Plan reviewed and approved
- [ ] Test strategy defined
- [ ] Dependencies available or mocked
### Gate 3: Merge Ready (after step 70)
Before creating PR:
- [ ] All acceptance criteria met
- [ ] Tests pass locally
- [ ] Definition of Done checklist completed
- [ ] No unresolved TODOs in code
---
## Refactoring Tutorial Quality Gates
### Gate 1: Safety Net Ready (after 4.50)
Before starting refactoring:
- [ ] Baseline metrics captured
- [ ] Current behavior documented
- [ ] Integration tests pass (>= 75% coverage)
- [ ] Feature parity checklist created
### Gate 2: Refactoring Safe (after each 4.70 cycle)
After each refactoring step:
- [ ] All existing tests still pass
- [ ] No functionality lost (feature parity check)
- [ ] Performance not degraded (compare to baseline)
### Gate 3: Refactoring Complete (after 4.95)
Before declaring refactoring done:
- [ ] All tests pass
- [ ] Performance improved or maintained
- [ ] Security review passed
- [ ] Technical debt reduced
- [ ] Documentation updated
---
## Automated Gate Checks
### CI Pipeline Gates
```yaml
gates:
build:
- compilation_success: true
quality:
- lint_errors: 0
- code_coverage: ">= 75%"
- code_smells: "< 10 new"
security:
- critical_vulnerabilities: 0
- high_vulnerabilities: 0
tests:
- unit_tests_pass: true
- integration_tests_pass: true
```
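A CI step can enforce the numeric gates above directly. A sketch, assuming the three input values come from your coverage, lint, and security tooling (the values shown are illustrative):

```shell
# Enforce the quality gates from the YAML above.
COVERAGE=82
LINT_ERRORS=0
CRITICAL_VULNS=0

fail=0
[ "$COVERAGE" -ge 75 ]      || { echo "GATE FAILED: coverage ${COVERAGE}% < 75%"; fail=1; }
[ "$LINT_ERRORS" -eq 0 ]    || { echo "GATE FAILED: ${LINT_ERRORS} lint errors"; fail=1; }
[ "$CRITICAL_VULNS" -eq 0 ] || { echo "GATE FAILED: ${CRITICAL_VULNS} critical vulnerabilities"; fail=1; }

if [ "$fail" -eq 0 ]; then
  echo "All automated gates passed"
fi
# In CI, finish with: exit "$fail"
```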
### Manual Gate Checks
Some gates require human verification:
- Architecture review
- Security review
- UX review (for UI changes)
- Stakeholder sign-off
---
## Gate Failure Handling
When a gate fails:
1. **Stop** - Do not proceed to next phase
2. **Identify** - Determine which checks failed
3. **Fix** - Address the failures
4. **Re-verify** - Run gate checks again
5. **Document** - If exception needed, get approval and document reason
---
## Exception Process
If a gate must be bypassed:
1. Document the reason
2. Get tech lead approval
3. Create follow-up task to address
4. Set deadline for resolution
5. Add to risk register
# Rollback Strategy Template
## Overview
| Field | Value |
|-------|-------|
| Service/Component | [Name] |
| Last Updated | [YYYY-MM-DD] |
| Owner | [Team/Person] |
| Max Rollback Time | [Target: X minutes] |
---
## Rollback Triggers
### Automatic Rollback Triggers
- [ ] Health check failures > 3 consecutive
- [ ] Error rate > 10% for 5 minutes
- [ ] P99 latency > 2x baseline for 5 minutes
- [ ] Critical alert triggered
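The "more than 3 consecutive health-check failures" trigger can be sketched as a polling loop. The endpoint URL, poll interval, and rollback command below are placeholders, not a real integration:

```shell
# Poll a health endpoint; fire the rollback trigger after >3 consecutive failures.
HEALTH_URL="http://127.0.0.1:1/health"    # hypothetical endpoint
failures=0

for attempt in 1 2 3 4 5; do
  if curl -fsS --max-time 5 "$HEALTH_URL" > /dev/null 2>&1; then
    failures=0                            # any success resets the streak
  else
    failures=$((failures + 1))
  fi
  if [ "$failures" -gt 3 ]; then
    echo "Rollback trigger fired after $failures consecutive failures"
    # ./rollback.sh                       # platform-specific rollback command
    break
  fi
  sleep 1                                 # poll interval shortened for illustration
done
```

In practice this logic usually lives in the deployment platform or monitoring system rather than a hand-rolled script.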
### Manual Rollback Triggers
- [ ] User-reported critical bug
- [ ] Data corruption detected
- [ ] Security vulnerability discovered
- [ ] Stakeholder decision
---
## Pre-Rollback Checklist
- [ ] Incident acknowledged and documented
- [ ] Stakeholders notified of rollback decision
- [ ] Current state captured (logs, metrics snapshot)
- [ ] Rollback target version identified
- [ ] Database state assessed (migrations reversible?)
---
## Rollback Procedures
### Application Rollback
#### Option 1: Revert Deployment (Preferred)
```bash
# Using CI/CD
# Trigger previous successful deployment
# Manual (if needed)
git revert <commit-hash>
git push origin main
```
#### Option 2: Blue-Green Switch
```bash
# Switch traffic to previous version
# [Platform-specific commands]
```
#### Option 3: Feature Flag Disable
```bash
# Disable feature flag
# [Feature flag system commands]
```
### Database Rollback
#### If Migration is Reversible
```bash
# Run down migration
# [Migration tool command]
```
#### If Migration is NOT Reversible
1. [ ] Restore from backup
2. [ ] Point-in-time recovery to pre-deployment
3. [ ] **WARNING**: May cause data loss - requires approval
### Configuration Rollback
```bash
# Restore previous configuration
# [Config management commands]
```
---
## Post-Rollback Verification
### Immediate (0-5 minutes)
- [ ] Service responding to health checks
- [ ] No error spikes in logs
- [ ] Basic functionality verified
### Short-term (5-30 minutes)
- [ ] All critical paths functional
- [ ] Error rate returned to baseline
- [ ] Performance metrics normal
### Extended (30-60 minutes)
- [ ] No delayed issues appearing
- [ ] User reports resolved
- [ ] All alerts cleared
---
## Communication Plan
### During Rollback
| Audience | Message | Channel |
|----------|---------|---------|
| Engineering | "Initiating rollback due to [reason]" | Slack |
| Stakeholders | "Service issue detected, rollback in progress" | Email |
| Users | "We're aware of issues and working on a fix" | Status page |
### After Rollback
| Audience | Message | Channel |
|----------|---------|---------|
| Engineering | "Rollback complete, monitoring" | Slack |
| Stakeholders | "Service restored, post-mortem scheduled" | Email |
| Users | "Issue resolved, service fully operational" | Status page |
---
## Known Limitations
### Cannot Rollback If:
- [ ] Database migration deleted columns with data
- [ ] External API contracts changed
- [ ] Third-party integrations updated
### Partial Rollback Scenarios
- [ ] When only specific components affected
- [ ] When data migration is complex
---
## Recovery After Rollback
### Investigation
1. [ ] Collect all relevant logs
2. [ ] Identify root cause
3. [ ] Document findings
### Re-deployment Planning
1. [ ] Fix identified in development
2. [ ] Additional tests added
3. [ ] Staged rollout planned
4. [ ] Monitoring enhanced
---
## Rollback Testing
### Test Schedule
- [ ] Monthly rollback drill
- [ ] After major infrastructure changes
- [ ] Before critical releases
### Test Scenarios
1. Application rollback
2. Database rollback (in staging)
3. Configuration rollback
---
## Contacts
| Role | Name | Contact |
|------|------|---------|
| On-call | | |
| Database Admin | | |
| Platform Team | | |
### Reference Documents
- Definition of Done: `@_docs/00_templates/definition_of_done.md`
- Quality Gates: `@_docs/00_templates/quality_gates.md`
- PR Template: `@_docs/00_templates/pr_template.md`
- Feature Dependencies: `@_docs/00_templates/feature_dependency_matrix.md`
## 10. **🧑‍💻 Developers**: Form a building block
### Execute `/gen_feature_spec`
## 25. **🧑‍💻 Developer**: Check Feature Dependencies
### Verify
- Check `@_docs/00_templates/feature_dependency_matrix.md`
- Ensure all dependent features are completed or mocked
- Update dependency matrix with new feature
## 30. **🤖AI agent**: Generate Jira ticket and branch
### Execute `/gen_jira_task_and_branch`
generate plan for `@_docs/iterative/feature_specs/01-dashboard-export-example.md`
## 45. **🧑‍💻 Developer**: Define Test Strategy
### Determine test types needed:
- [ ] Unit tests (always required)
- [ ] Integration tests (if touching external systems/DB)
- [ ] E2E tests (if user workflow changes)
### Document in plan:
- Which tests to write
- Test data requirements
- Mocking strategy
## 50. **🧑‍💻 Developer**: Save the plan
Save the generated plan to `@_docs/iterative/plans`.
(First save it with the IDE's built-in mechanism to the `.cursor` folder, then move it to `@_docs/iterative/plans`.)
## 55. **🧑‍💻 Developer**: Review Plan Before Build
### Checklist
- [ ] Plan covers all acceptance criteria
- [ ] Test strategy defined
- [ ] Dependencies identified and available
- [ ] No architectural concerns
- [ ] Estimate seems reasonable
## 60. Build from the plan
- Code quality standards met
## 70. Check build and tests are successful
**User action required**: Run your project's test, lint, and coverage commands.
- [ ] All tests pass
- [ ] No linting errors
- [ ] Code coverage >= 75%
## 72. **🧑‍💻 Developer**: Run Full Verification
### Local Verification
- [ ] All unit tests pass
- [ ] All integration tests pass
- [ ] Code coverage >= 75%
- [ ] No linting errors
- [ ] Manual testing completed (if UI changes)
### Quality Gate Check
Review `@_docs/00_templates/quality_gates.md` - Iterative Gate 3
## 75. **🤖AI agent**: Create PR and Merge
### Execute `/gen_merge_and_deploy`
This will:
- Verify branch status
- Run pre-merge checks
- Update CHANGELOG
- Create PR using template
- Guide through merge process
## 78. **🧑‍💻 Developer**: Finalize
- Move Jira ticket to Done
- Verify CI pipeline passed on dev
When the next solution wouldn't differ much from the previous one, or become actually worse, store the last draft as `_docs/01_solution/solution.md`
## 1.35 **🤖📋AI plan**: Tech Stack Selection
### Execute `/1.research/1.35_tech_stack_selection`
### Revise
- Review technology choices against requirements
- Consider team expertise and learning curve
- Document trade-offs and alternatives considered
### Store
- Save output to `_docs/01_solution/tech_stack.md`
## 1.40 **🤖✨AI Research**: Security Research
### Execute `/1.research/1.40_security_research`
- Review security approach against solution architecture
- Update `security_approach.md` with specific requirements per component
### Quality Gate: Research Complete
Review `@_docs/00_templates/quality_gates.md` - Gate 1
### Commit
```bash
git add _docs/
```
- Make sure epics are coherent and make sense
## 2.22 **🤖📋AI plan**: Data Model Design
### Execute `/2.planning/2.22_plan_data_model`
### Revise
- Review entity relationships
- Verify data access patterns
- Check migration strategy
### Store
- Save output to `_docs/02_components/data_model.md`
## 2.25 **🤖📋AI plan**: API Contracts Design
### Execute `/2.planning/2.25_plan_api_contracts`
### Revise
- Review interface definitions
- Verify error handling standards
- Check versioning strategy
### Store
- Save output to `_docs/02_components/api_contracts.md`
- Save OpenAPI spec to `_docs/02_components/openapi.yaml` (if applicable)
## 2.30 **🤖📋AI plan**: Generate tests
### Execute `/2.planning/2.30_plan_tests`
- Make sure stored tests are coherent and make sense
## 2.35 **🤖📋AI plan**: Risk Assessment
### Execute `/2.planning/2.37_plan_risk_assessment`
### Revise
- Review identified risks
- Verify mitigation strategies
- Set up risk monitoring
### Store
- Save output to `_docs/02_components/risk_assessment.md`
## 2.40 **🤖📋AI plan**: Component Decomposition To Features
### Execute
For each component in `_docs/02_components` run
- Revise the features, answer questions, put detailed descriptions
- Make sure features are coherent and make sense
### Quality Gate: Planning Complete
Review `@_docs/00_templates/quality_gates.md` - Gate 2
### Commit
```bash
git add _docs/
```
### Execute: `/3.implementation/3.05_implement_initial_structure`
This will create:
- Project structure with CI/CD pipeline
- Environment configurations (see `@_docs/00_templates/environment_strategy.md`)
- Database migrations setup
- Test infrastructure
### Review Plan
- Analyze the proposals, answer questions
- Ensure code quality standards are met
## 3.30 **🤖📋AI plan**: CI/CD Validation
### Execute `/3.implementation/3.30_implement_cicd`
### Revise
- Review pipeline configuration
- Verify all quality gates are enforced
- Ensure all stages are properly configured
## 3.35 **🤖📋AI plan**: Deployment Strategy
### Execute `/3.implementation/3.35_plan_deployment`
### Revise
- Review deployment procedures per environment
- Verify rollback procedures documented
- Ensure health checks configured
### Store
- Save output to `_docs/02_components/deployment_strategy.md`
## 3.40 **🤖📋AI plan**: Integration tests and solution checks
### Execute `/3.implementation/3.40_implement_tests`
- Revise the plan, answer questions, put detailed descriptions
- Make sure tests are coherent and make sense
## 3.42 **🤖📋AI plan**: Observability Setup
### Execute `/3.implementation/3.42_plan_observability`
### Revise
- Review logging strategy
- Verify metrics and alerting
- Check dashboard configuration
### Store
- Save output to `_docs/02_components/observability_plan.md`
## 3.45 **🧑‍💻 Developer**: Final Quality Gate
### Quality Gate: Implementation Complete
Review `@_docs/00_templates/quality_gates.md` - Gate 3
### Checklist
- [ ] All components implemented
- [ ] Code coverage >= 75%
- [ ] All tests pass
- [ ] Code review approved
- [ ] CI/CD pipeline green
- [ ] Deployment tested on staging
- [ ] Observability configured
### Merge after tests pass
```bash
git checkout stage
git push origin main
```
## 3.50 **🧑‍💻 Developer**: Post-Implementation
### Documentation
- [ ] Update README with final setup instructions
- [ ] Create/update runbooks using `@_docs/00_templates/incident_playbook.md`
- [ ] Document rollback procedures using `@_docs/00_templates/rollback_strategy.md`
### Handoff
- [ ] Stakeholders notified of completion
- [ ] Operations team briefed on monitoring
- [ ] Support documentation complete
This tutorial guides through analyzing, documenting, and refactoring an existing codebase.
## Reference Documents
- Definition of Done: `@_docs/00_templates/definition_of_done.md`
- Quality Gates: `@_docs/00_templates/quality_gates.md`
- Feature Parity Checklist: `@_docs/00_templates/feature_parity_checklist.md`
- Baseline Metrics: `@_docs/04_refactoring/baseline_metrics.md` (created in 4.07)
## 4.05 **🧑‍💻 Developers**: User Input
- `security_approach.md`: Security requirements (if applicable)
## 4.07 **🤖📋AI plan**: Capture Baseline Metrics
### Execute `/4.refactoring/4.07_capture_baseline`
### Revise
- Verify all metrics are captured accurately
- Document measurement methodology
- Save raw data for later comparison
### Store
- Create folder `_docs/04_refactoring/`
- Save output to `_docs/04_refactoring/baseline_metrics.md`
### Create Feature Parity Checklist
- Copy `@_docs/00_templates/feature_parity_checklist.md` to `_docs/04_refactoring/`
- Fill in current feature inventory
## 4.10 **🤖📋AI plan**: Build Documentation from Code
### Execute `/4.refactoring/4.10_documentation`
### Execute `/4.refactoring/4.40_tests_description`
### Prerequisites Check
- Baseline metrics captured (4.07)
- Feature parity checklist created
### Coverage Requirements
- Minimum overall coverage: 75%
- Critical path coverage: 90%
- All public APIs must have integration tests
### Revise
- Ensure tests cover critical functionality
- Add edge cases
### Verify
- All tests pass on current codebase
- Tests serve as safety net for refactoring
- Coverage meets requirements (75% minimum)
### Quality Gate: Safety Net Ready
Review `@_docs/00_templates/quality_gates.md` - Refactoring Gate 1
## 4.60 **🤖📋AI plan**: Analyze Coupling
### Execute `/4.refactoring/4.70_execute_decoupling`
### Verify After Each Change
- Run integration tests after each change
- All tests must pass before proceeding
- Update feature parity checklist
### Quality Gate: Refactoring Safe
Review `@_docs/00_templates/quality_gates.md` - Refactoring Gate 2
## 4.80 **🤖📋AI plan**: Technical Debt
### Execute `/4.refactoring/4.90_performance`
### Verify
- Benchmark before/after
- Compare against baseline metrics from 4.07
- Performance should be improved or maintained
- Run tests to ensure no regressions
- Address identified vulnerabilities
- Run security tests if applicable
## 4.97 **🧑‍💻 Developer**: Final Verification
### Quality Gate: Refactoring Complete
Review `@_docs/00_templates/quality_gates.md` - Refactoring Gate 3
### Compare Against Baseline
- [ ] Code coverage >= baseline
- [ ] Performance metrics improved or maintained
- [ ] All features preserved (feature parity checklist complete)
- [ ] Technical debt reduced
### Feature Parity Verification
- [ ] All items in feature parity checklist verified
- [ ] No functionality lost
- [ ] All tests pass
### Documentation
- [ ] Update solution.md with changes
- [ ] Document any intentional behavior changes
- [ ] Update README if needed
### Commit
```bash
git add .
git commit -m "Refactoring: complete"
```