more detailed SDLC plan

This commit is contained in:
Oleksandr Bezdieniezhnykh
2025-12-10 19:05:17 +02:00
parent 73cbe43397
commit fd75243a84
22 changed files with 2087 additions and 34 deletions
@@ -0,0 +1,137 @@
# Tech Stack Selection
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Restrictions: `@_docs/00_problem/restrictions.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Security approach: `@_docs/00_problem/security_approach.md`
- Solution draft: `@_docs/01_solution/solution.md`
## Role
You are a software architect evaluating technology choices
## Task
- Evaluate technology options against requirements
- Consider team expertise and learning curve
- Assess long-term maintainability
- Document selection rationale
## Output
### Requirements Analysis
#### Functional Requirements
| Requirement | Tech Implications |
|-------------|-------------------|
| [From acceptance criteria] | |
#### Non-Functional Requirements
| Requirement | Tech Implications |
|-------------|-------------------|
| Performance | |
| Scalability | |
| Security | |
| Maintainability | |
#### Constraints
| Constraint | Impact on Tech Choice |
|------------|----------------------|
| [From restrictions] | |
### Technology Evaluation
#### Programming Language
| Option | Pros | Cons | Score (1-5) |
|--------|------|------|-------------|
| | | | |
**Selection**: [Language]
**Rationale**: [Why this choice]
#### Framework
| Option | Pros | Cons | Score (1-5) |
|--------|------|------|-------------|
| | | | |
**Selection**: [Framework]
**Rationale**: [Why this choice]
#### Database
| Option | Pros | Cons | Score (1-5) |
|--------|------|------|-------------|
| | | | |
**Selection**: [Database]
**Rationale**: [Why this choice]
#### Infrastructure/Hosting
| Option | Pros | Cons | Score (1-5) |
|--------|------|------|-------------|
| | | | |
**Selection**: [Platform]
**Rationale**: [Why this choice]
#### Key Libraries/Dependencies
| Category | Library | Version | Purpose | Alternatives Considered |
|----------|---------|---------|---------|------------------------|
| | | | | |
### Evaluation Criteria
Rate each technology option against these criteria:
1. **Fitness for purpose**: Does it meet functional requirements?
2. **Performance**: Can it meet performance requirements?
3. **Security**: Does it have good security track record?
4. **Maturity**: Is it stable and well-maintained?
5. **Community**: Active community and documentation?
6. **Team expertise**: Does team have experience?
7. **Cost**: Licensing, hosting, operational costs?
8. **Scalability**: Can it grow with the project?
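The eight criteria above can be combined into a single comparable number per option. A minimal sketch, assuming illustrative weights (the weights and criterion keys below are placeholders, not prescribed values):

```python
# Hypothetical weighted scoring for the evaluation criteria above.
# Weights are illustrative and should be agreed per project; they sum to 1.0.
CRITERIA_WEIGHTS = {
    "fitness": 0.20, "performance": 0.15, "security": 0.15, "maturity": 0.10,
    "community": 0.10, "team_expertise": 0.15, "cost": 0.05, "scalability": 0.10,
}

def weighted_score(scores: dict[str, int]) -> float:
    """Combine per-criterion scores (1-5) into one weighted value."""
    if set(scores) != set(CRITERIA_WEIGHTS):
        raise ValueError("score every criterion exactly once")
    return round(sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items()), 2)
```

The resulting number feeds the Score column in the option tables; document the chosen weights alongside the rationale.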
### Technology Stack Summary
```
Language: [Language] [Version]
Framework: [Framework] [Version]
Database: [Database] [Version]
Cache: [Cache solution]
Message Queue: [If applicable]
CI/CD: [Platform]
Hosting: [Platform]
Monitoring: [Tools]
```
### Risk Assessment
| Technology | Risk | Mitigation |
|------------|------|------------|
| | | |
### Learning Requirements
| Technology | Team Familiarity | Training Needed |
|------------|-----------------|-----------------|
| | High/Med/Low | Yes/No |
### Decision Record
**Decision**: [Summary of tech stack]
**Date**: [YYYY-MM-DD]
**Participants**: [Who was involved]
**Status**: Approved / Pending Review
Store output to `_docs/01_solution/tech_stack.md`
## Notes
- Avoid over-engineering: choose the simplest solution that meets requirements
- Consider total cost of ownership, not just initial development
- Prefer proven technologies over cutting-edge unless required
- Document trade-offs for future reference
- Ask questions about team expertise and constraints
@@ -0,0 +1,57 @@
# Data Model Design
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Restrictions: `@_docs/00_problem/restrictions.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Full Solution Description: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
## Role
You are a professional database architect
## Task
- Analyze solution and components to identify all data entities
- Design database schema that supports all component requirements
- Define relationships, constraints, and indexes
- Consider data access patterns for query optimization
- Plan for data migration if applicable
## Output
### Entity Relationship Diagram
- Create ERD showing all entities and relationships
- Use Mermaid or draw.io format
### Schema Definition
For each entity:
- Table name
- Columns with types, constraints, defaults
- Primary keys
- Foreign keys and relationships
- Indexes (clustered, non-clustered)
- Partitioning strategy (if needed)
### Data Access Patterns
- List common queries per component
- Identify hot paths requiring optimization
- Recommend caching strategy
### Migration Strategy
- Initial schema creation scripts
- Seed data requirements
- Rollback procedures
### Storage Estimates
- Estimated row counts per table
- Storage requirements
- Growth projections
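Growth projections are simple compound arithmetic; a sketch, assuming a flat monthly growth rate (the parameters are placeholders to be replaced with measured estimates):

```python
def projected_storage_gb(rows_now: int, bytes_per_row: int,
                         monthly_growth_rate: float, months: int) -> float:
    """Estimate table size after compound monthly row growth."""
    projected_rows = rows_now * (1 + monthly_growth_rate) ** months
    return projected_rows * bytes_per_row / 1024**3
```

Index and replication overhead are not included; add a multiplier if your engine's overhead is known.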
Store output to `_docs/02_components/data_model.md`
## Notes
- Follow database normalization principles (3NF minimum)
- Consider read vs write optimization based on access patterns
- Plan for horizontal scaling if required
- Ask questions to clarify data requirements
@@ -0,0 +1,64 @@
# API Contracts Design
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Restrictions: `@_docs/00_problem/restrictions.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Full Solution Description: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
- Data Model: `@_docs/02_components/data_model.md`
## Role
You are a professional API architect
## Task
- Define API contracts between all components
- Specify external API endpoints (if applicable)
- Define data transfer objects (DTOs)
- Establish error response standards
- Plan API versioning strategy
## Output
### Internal Component Interfaces
For each component boundary:
- Interface name
- Methods with signatures
- Input/Output DTOs
- Error types
- Async/Sync designation
### External API Specification
Generate OpenAPI/Swagger spec including:
- Endpoints with HTTP methods
- Request/Response schemas
- Authentication requirements
- Rate limiting rules
- Example requests/responses
### DTO Definitions
For each data transfer object:
- Name and purpose
- Fields with types
- Validation rules
- Serialization format (JSON, Protobuf, etc.)
### Error Contract
- Standard error response format
- Error codes and messages
- HTTP status code mapping
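One way to pin the error contract down is a small DTO; the field names below are an illustrative assumption, not a fixed standard:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class ApiError:
    """Illustrative error envelope; adapt field names to the agreed contract."""
    code: str          # machine-readable, e.g. "VALIDATION_FAILED"
    message: str       # human-readable summary
    http_status: int   # mapped HTTP status code (transport concern, not body)
    details: dict = field(default_factory=dict)

    def to_payload(self) -> dict:
        body = asdict(self)
        body.pop("http_status")  # status travels in the HTTP response line
        return {"error": body}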
### Versioning Strategy
- API versioning approach (URL, header, query param)
- Deprecation policy
- Breaking vs non-breaking change definitions
Store output to `_docs/02_components/api_contracts.md`
Store OpenAPI spec to `_docs/02_components/openapi.yaml` (if applicable)
## Notes
- Follow RESTful conventions for external APIs
- Keep internal interfaces minimal and focused
- Design for backward compatibility
- Ask questions to clarify integration requirements
@@ -0,0 +1,111 @@
# Risk Assessment
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Restrictions: `@_docs/00_problem/restrictions.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Full Solution Description: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
- Estimation: `@_docs/02_components/estimation.md`
## Role
You are a technical risk analyst
## Task
- Identify technical and project risks
- Assess probability and impact
- Define mitigation strategies
- Create risk monitoring plan
## Output
### Risk Register
| ID | Risk | Category | Probability | Impact | Score | Mitigation | Owner |
|----|------|----------|-------------|--------|-------|------------|-------|
| R1 | | Tech/Schedule/Resource/External | High/Med/Low | High/Med/Low | H/M/L | | |
### Risk Scoring Matrix
| | Low Impact | Medium Impact | High Impact |
|--|------------|---------------|-------------|
| High Probability | Medium | High | Critical |
| Medium Probability | Low | Medium | High |
| Low Probability | Low | Low | Medium |
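The matrix can be encoded directly as a lookup so the register's Score column is derived, not hand-filled (a sketch; the labels mirror the table above):

```python
# Direct encoding of the risk scoring matrix above.
_MATRIX = {
    ("high", "low"): "Medium",    ("high", "medium"): "High",     ("high", "high"): "Critical",
    ("medium", "low"): "Low",     ("medium", "medium"): "Medium", ("medium", "high"): "High",
    ("low", "low"): "Low",        ("low", "medium"): "Low",       ("low", "high"): "Medium",
}

def risk_score(probability: str, impact: str) -> str:
    """Map qualitative probability/impact to the matrix's score label."""
    return _MATRIX[(probability.lower(), impact.lower())]
```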
### Risk Categories
#### Technical Risks
- Technology choices may not meet requirements
- Integration complexity underestimated
- Performance targets unachievable
- Security vulnerabilities
#### Schedule Risks
- Scope creep
- Dependencies delayed
- Resource unavailability
- Underestimated complexity
#### Resource Risks
- Key person dependency
- Skill gaps
- Team availability
#### External Risks
- Third-party API changes
- Vendor reliability
- Regulatory changes
### Top Risks (Ranked)
#### 1. [Highest Risk]
- **Description**:
- **Probability**: High/Medium/Low
- **Impact**: High/Medium/Low
- **Mitigation Strategy**:
- **Contingency Plan**:
- **Early Warning Signs**:
- **Owner**:
#### 2. [Second Highest Risk]
...
### Risk Mitigation Plan
| Risk ID | Mitigation Action | Timeline | Cost | Responsible |
|---------|-------------------|----------|------|-------------|
| R1 | | | | |
### Risk Monitoring
#### Review Schedule
- Daily standup: Discuss blockers (potential risks materializing)
- Weekly: Review risk register, update probabilities
- Sprint end: Comprehensive risk review
#### Early Warning Indicators
| Risk | Indicator | Threshold | Action |
|------|-----------|-----------|--------|
| | | | |
### Contingency Budget
- Time buffer: 20% of estimated duration
- Scope flexibility: [List features that can be descoped]
- Resource backup: [Backup resources if available]
### Acceptance Criteria for Risks
Define which risks are acceptable:
- Low risks: Accepted, monitored
- Medium risks: Mitigation required
- High risks: Mitigation + contingency required
- Critical risks: Must be resolved before proceeding
Store output to `_docs/02_components/risk_assessment.md`
## Notes
- Update risk register throughout project
- Escalate critical risks immediately
- Consider both likelihood and impact
- Ask questions to uncover hidden risks
@@ -22,10 +22,21 @@
- helpers - empty implementations or interfaces
- Add .gitignore appropriate for the project's language/framework
- Add .env.example with required environment variables
- Configure CI/CD pipeline with full stages:
- Build stage
- Lint/Static analysis stage
- Unit tests stage
- Integration tests stage
- Security scan stage (SAST/dependency check)
- Deploy to staging stage (triggered on merge to stage branch)
- Define environment strategy based on `@_docs/00_templates/environment_strategy.md`:
- Development environment configuration
- Staging environment configuration
- Production environment configuration (if applicable)
- Add database migration setup if applicable
- Add README.md describing the project based on `@_docs/01_solution/solution.md`
- Create a separate folder for the integration tests (not a separate repo)
- Configure branch protection rules recommendations
## Example
The structure should look roughly like this:
@@ -1,42 +1,64 @@
# CI/CD Pipeline Validation & Enhancement
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Restrictions: `@_docs/00_problem/restrictions.md`
- Full Solution Description: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
- Environment Strategy: `@_docs/00_templates/environment_strategy.md`
## Role
You are a DevOps engineer
## Task
- Review existing CI/CD pipeline configuration
- Validate all stages are working correctly
- Optimize pipeline performance (parallelization, caching)
- Ensure test coverage gates are enforced
- Verify security scanning is properly configured
- Add missing quality gates
## Checklist
### Pipeline Health
- [ ] All stages execute successfully
- [ ] Build time is acceptable (<10 min for most projects)
- [ ] Caching is properly configured (dependencies, build artifacts)
- [ ] Parallel execution where possible
### Quality Gates
- [ ] Code coverage threshold enforced (minimum 75%)
- [ ] Linting errors block merge
- [ ] Security vulnerabilities block merge (critical/high)
- [ ] All tests must pass
### Environment Deployments
- [ ] Staging deployment works on merge to stage branch
- [ ] Environment variables properly configured per environment
- [ ] Secrets are securely managed (not in code)
- [ ] Rollback procedure documented
### Monitoring
- [ ] Build notifications configured (Slack, email, etc.)
- [ ] Failed build alerts
- [ ] Deployment success/failure notifications
## Output
### Pipeline Configuration
- Pipeline file(s) created/updated
- Stages description
- Triggers (on push, PR, etc.)
### Environment Setup
- Required secrets/variables
- Environment-specific configs
### Pipeline Status Report
- Current pipeline configuration summary
- Issues found and fixes applied
- Performance metrics (build times)
### Deployment Strategy
- Staging deployment steps
- Production deployment steps (if applicable)
### Recommended Improvements
- Short-term improvements
- Long-term optimizations
### Quality Gate Configuration
- Thresholds configured
- Enforcement rules
## Notes
- Use project-appropriate CI/CD tool (GitHub Actions, GitLab CI, Azure DevOps, etc.)
- Keep pipeline fast - parallelize where possible
- Do not break existing functionality
- Test changes in separate branch first
- Document any manual steps required
@@ -0,0 +1,72 @@
# Deployment Strategy Planning
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Restrictions: `@_docs/00_problem/restrictions.md`
- Full Solution Description: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
- Environment Strategy: `@_docs/00_templates/environment_strategy.md`
## Role
You are a DevOps/Platform engineer
## Task
- Define deployment strategy for each environment
- Plan deployment procedures and automation
- Define rollback procedures
- Establish deployment verification steps
- Document manual intervention points
## Output
### Deployment Architecture
- Infrastructure diagram (where components run)
- Network topology
- Load balancing strategy
- Container/VM configuration
### Deployment Procedures
#### Staging Deployment
- Trigger conditions
- Pre-deployment checks
- Deployment steps
- Post-deployment verification
- Smoke tests to run
#### Production Deployment
- Approval workflow
- Deployment window
- Pre-deployment checks
- Deployment steps (blue-green, rolling, canary)
- Post-deployment verification
- Smoke tests to run
### Rollback Procedures
- Rollback trigger criteria
- Rollback steps per environment
- Data rollback considerations
- Communication plan during rollback
### Health Checks
- Liveness probe configuration
- Readiness probe configuration
- Custom health endpoints
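A custom health endpoint usually reduces to aggregating dependency checks; a minimal sketch, in which the check names and callables are placeholders:

```python
def health_status(checks: dict) -> tuple[int, dict]:
    """Run each named dependency check; report 200 only if all pass.

    `checks` maps a dependency name (e.g. "db") to a zero-arg callable
    returning truthy on success -- both are illustrative placeholders.
    """
    results = {name: bool(check()) for name, check in checks.items()}
    status = 200 if all(results.values()) else 503
    body = {"status": "ok" if status == 200 else "degraded", "checks": results}
    return status, body
```

Liveness probes should stay cheaper than this (process-up only); wire the aggregate into the readiness probe.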
### Deployment Checklist
- [ ] All tests pass in CI
- [ ] Security scan clean
- [ ] Database migrations reviewed
- [ ] Feature flags configured
- [ ] Monitoring alerts configured
- [ ] Rollback plan documented
- [ ] Stakeholders notified
Store output to `_docs/02_components/deployment_strategy.md`
## Notes
- Prefer automated deployments over manual
- Zero-downtime deployments for production
- Always have a rollback plan
- Ask questions about infrastructure constraints
@@ -0,0 +1,123 @@
# Observability Planning
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Full Solution Description: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
- Deployment Strategy: `@_docs/02_components/deployment_strategy.md`
## Role
You are a Site Reliability Engineer (SRE)
## Task
- Define logging strategy across all components
- Plan metrics collection and dashboards
- Design distributed tracing (if applicable)
- Establish alerting rules
- Document incident response procedures
## Output
### Logging Strategy
#### Log Levels
| Level | Usage | Example |
|-------|-------|---------|
| ERROR | Exceptions, failures requiring attention | Database connection failed |
| WARN | Potential issues, degraded performance | Retry attempt 2/3 |
| INFO | Significant business events | User registered, Order placed |
| DEBUG | Detailed diagnostic information | Request payload, Query params |
#### Log Format
```json
{
"timestamp": "ISO8601",
"level": "INFO",
"service": "service-name",
"correlation_id": "uuid",
"message": "Event description",
"context": {}
}
```
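A minimal Python formatter emitting this layout might look like the following sketch (the correlation-ID fallback is an illustrative assumption; real services should propagate the ID from the request):

```python
import json
import logging
import uuid
from datetime import datetime, timezone

class JsonFormatter(logging.Formatter):
    """Emit log records in the structured layout documented above."""

    def __init__(self, service: str):
        super().__init__()
        self.service = service

    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "level": record.levelname,
            "service": self.service,
            "correlation_id": getattr(record, "correlation_id", str(uuid.uuid4())),
            "message": record.getMessage(),
            "context": getattr(record, "context", {}),
        })
```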
#### Log Storage
- Development: Console/file
- Staging: Centralized (ELK, CloudWatch, etc.)
- Production: Centralized with retention policy
### Metrics
#### System Metrics
- CPU usage
- Memory usage
- Disk I/O
- Network I/O
#### Application Metrics
| Metric | Type | Description |
|--------|------|-------------|
| request_count | Counter | Total requests |
| request_duration | Histogram | Response time |
| error_count | Counter | Failed requests |
| active_connections | Gauge | Current connections |
#### Business Metrics
- [Define based on acceptance criteria]
### Distributed Tracing
#### Trace Context
- Correlation ID propagation
- Span naming conventions
- Sampling strategy
#### Integration Points
- HTTP headers
- Message queue metadata
- Database query tagging
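Header propagation can be sketched in a few lines; the header name below is an assumption, so align it with whatever your tracing standard mandates:

```python
import uuid

CORRELATION_HEADER = "X-Correlation-ID"  # assumed name; adjust to your tracing standard

def propagate_correlation_id(incoming_headers: dict) -> dict:
    """Reuse the caller's correlation ID if present, else start a new trace."""
    cid = incoming_headers.get(CORRELATION_HEADER) or str(uuid.uuid4())
    return {CORRELATION_HEADER: cid}
```

The returned dict is merged into outbound HTTP headers or message-queue metadata so every hop shares one ID.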
### Alerting
#### Alert Categories
| Severity | Response Time | Examples |
|----------|---------------|----------|
| Critical | 5 min | Service down, Data loss |
| High | 30 min | High error rate, Performance degradation |
| Medium | 4 hours | Elevated latency, Disk usage high |
| Low | Next business day | Non-critical warnings |
#### Alert Rules
```yaml
alerts:
- name: high_error_rate
condition: error_rate > 5%
duration: 5m
severity: high
- name: service_down
condition: health_check_failed
duration: 1m
severity: critical
```
### Dashboards
#### Operations Dashboard
- Service health status
- Request rate and error rate
- Response time percentiles
- Resource utilization
#### Business Dashboard
- Key business metrics
- User activity
- Transaction volumes
Store output to `_docs/02_components/observability_plan.md`
## Notes
- Follow the principle: "If it's not monitored, it's not in production"
- Balance verbosity with cost
- Ensure PII is not logged
- Plan for log rotation and retention
@@ -0,0 +1,92 @@
# Capture Baseline Metrics
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Current codebase
## Role
You are a software engineer preparing for refactoring
## Task
- Capture current system metrics as baseline
- Document current behavior
- Establish benchmarks to compare against after refactoring
- Identify critical paths to monitor
## Output
### Code Quality Metrics
#### Coverage
```
Current test coverage: XX%
- Unit test coverage: XX%
- Integration test coverage: XX%
- Critical paths coverage: XX%
```
#### Code Complexity
- Cyclomatic complexity (average):
- Most complex functions (top 5):
- Lines of code:
- Technical debt ratio:
#### Code Smells
- Total code smells:
- Critical issues:
- Major issues:
### Performance Metrics
#### Response Times
| Endpoint/Operation | P50 | P95 | P99 |
|-------------------|-----|-----|-----|
| [endpoint1] | Xms | Xms | Xms |
| [operation1] | Xms | Xms | Xms |
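These percentiles can be derived from raw latency samples with a simple nearest-rank helper (a sketch; production measurement tooling will vary by stack):

```python
import math

def percentile(samples: list, p: float) -> float:
    """Nearest-rank percentile; adequate for coarse baseline latency reporting."""
    if not samples:
        raise ValueError("no samples")
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))
    return ordered[max(rank - 1, 0)]
```

Run it over several captured batches and record the spread, not just a single run.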
#### Resource Usage
- Average CPU usage:
- Average memory usage:
- Database query count per operation:
#### Throughput
- Requests per second:
- Concurrent users supported:
### Functionality Inventory
List all current features/endpoints:
| Feature | Status | Test Coverage | Notes |
|---------|--------|---------------|-------|
| | | | |
### Dependency Analysis
- Total dependencies:
- Outdated dependencies:
- Security vulnerabilities in dependencies:
### Build Metrics
- Build time:
- Test execution time:
- Deployment time:
Store output to `_docs/04_refactoring/baseline_metrics.md`
## Measurement Commands
Use project-appropriate tools for your tech stack:
| Metric | Python | C#/.NET | Java | Go | JavaScript/TypeScript |
|--------|--------|---------|------|-----|----------------------|
| Test coverage | pytest --cov | dotnet test --collect:"XPlat Code Coverage" | JaCoCo | go test -cover | jest --coverage |
| Code complexity | radon | CodeMetrics | PMD | gocyclo | eslint (complexity rule) |
| Lines of code | cloc | cloc | cloc | cloc | cloc |
| Dependency check | pip-audit | dotnet list package --vulnerable | mvn dependency-check:check | govulncheck | npm audit |
## Notes
- Run measurements multiple times for accuracy
- Document measurement methodology
- Save raw data for comparison
- Focus on metrics relevant to refactoring goals
@@ -9,11 +9,22 @@
## Role
You are a professional Quality Assurance Engineer
## Prerequisites
- Baseline metrics captured (see 4.07_capture_baseline.md)
- Feature parity checklist created (see `@_docs/00_templates/feature_parity_checklist.md`)
## Coverage Requirements (MUST meet before refactoring)
- Minimum overall coverage: 75%
- Critical path coverage: 90%
- All public APIs must have integration tests
- All error handling paths must be tested
## Task
- Analyze existing test coverage
- Define integration tests that capture current system behavior
- Tests should serve as a safety net for refactoring
- Cover critical paths and edge cases
- Ensure coverage requirements are met before proceeding to refactoring
## Output
Store test specs to `_docs/02_tests/[##]_[test_name]_spec.md`:
@@ -0,0 +1,120 @@
# Merge and Deploy Feature
Complete the feature development cycle by creating PR, merging, and updating documentation.
## Input parameters
- task_id (required): Jira task ID
Example: /gen_merge_and_deploy AZ-122
## Prerequisites
- All tests pass locally
- Code review completed (or ready for review)
- Definition of Done checklist reviewed
## Steps (Agent)
### 1. Verify Branch Status
```bash
git status
git log --oneline -5
```
- Confirm on feature branch (e.g., az-122-feature-name)
- Confirm all changes committed
- If uncommitted changes exist, prompt user to commit first
### 2. Run Pre-merge Checks
**User action required**: Run your project's test and lint commands before proceeding.
```bash
# Dry-run merge to surface conflicts, then back out either way
git fetch origin dev
if git merge origin/dev --no-commit --no-ff; then
  echo "No conflicts with dev"
else
  echo "Conflicts detected: resolve before merging"
fi
git merge --abort 2>/dev/null || true
```
- [ ] All tests pass (run project-specific test command)
- [ ] No linting errors (run project-specific lint command)
- [ ] No merge conflicts (or resolve them)
### 3. Update Documentation
#### CHANGELOG.md
Add entry under "Unreleased" section:
```markdown
### Added/Changed/Fixed
- [TASK_ID] Brief description of change
```
#### Update Jira
- Add comment with summary of implementation
- Link any related PRs or documentation
### 4. Create Pull Request
#### PR Title Format
`[TASK_ID] Brief description`
#### PR Body (from template)
```markdown
## Description
[Summary of changes]
## Related Issue
Jira ticket: [TASK_ID](link)
## Type of Change
- [ ] Bug fix
- [ ] New feature
- [ ] Refactoring
## Checklist
- [ ] Code follows project conventions
- [ ] Self-review completed
- [ ] Tests added/updated
- [ ] All tests pass
- [ ] Documentation updated
## Breaking Changes
[None / List breaking changes]
## Deployment Notes
[None / Special deployment considerations]
## Rollback Plan
[Steps to rollback if issues arise]
## Testing
[How to test these changes]
```
### 5. Post-merge Actions
After PR is approved and merged:
```bash
# Switch to dev branch
git checkout dev
git pull origin dev
# Delete feature branch
git branch -d {feature_branch}
git push origin --delete {feature_branch}
```
### 6. Update Jira Status
- Move ticket to "Done"
- Add link to merged PR
- Log time spent (if tracked)
## Guardrails
- Do NOT merge if tests fail
- Do NOT merge if there are unresolved review comments
- Do NOT delete branch before merge is confirmed
- Always update CHANGELOG before creating PR
## Output
- PR created/URL provided
- CHANGELOG updated
- Jira ticket updated
- Feature branch cleaned up (post-merge)