Refactor annotation tool from WPF desktop app to .NET API

Replace the WPF desktop application (Azaion.Suite, Azaion.Annotator,
Azaion.Common, Azaion.Inference, Azaion.Loader, Azaion.LoaderUI,
Azaion.Dataset, Azaion.Test) with a standalone .NET Web API in src/.

Made-with: Cursor
Author: Oleksandr Bezdieniezhnykh
Date: 2026-03-25 04:40:03 +02:00
parent e7ea5a8ded
commit 9e7dc290db
367 changed files with 8840 additions and 16583 deletions
@@ -0,0 +1,71 @@
# Deployment Strategy Planning
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Restrictions: `@_docs/00_problem/restrictions.md`
- Full Solution Description: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
- Environment Strategy: `@_docs/00_templates/environment_strategy.md`
## Role
You are a DevOps/Platform engineer
## Task
- Define deployment strategy for each environment
- Plan deployment procedures and automation
- Define rollback procedures
- Establish deployment verification steps
- Document manual intervention points
## Output
### Deployment Architecture
- Infrastructure diagram (where components run)
- Network topology
- Load balancing strategy
- Container/VM configuration
### Deployment Procedures
#### Staging Deployment
- Trigger conditions
- Pre-deployment checks
- Deployment steps
- Post-deployment verification
- Smoke tests to run
#### Production Deployment
- Approval workflow
- Deployment window
- Pre-deployment checks
- Deployment steps (blue-green, rolling, or canary; see the sketch after this list)
- Post-deployment verification
- Smoke tests to run
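As one illustration of a rolling strategy, a Kubernetes Deployment can bound how many replicas are replaced at once. This is a minimal sketch; the name, image, and replica count are placeholders, and Kubernetes itself is an assumption, not a requirement of this strategy:

```yaml
# Sketch of a rolling update; all names and counts are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api
spec:
  replicas: 4
  selector:
    matchLabels:
      app: my-api
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # never take down more than one replica at a time
      maxSurge: 1         # allow one extra replica during the rollout
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
        - name: my-api
          image: registry.example.com/my-api:1.2.3   # pinned, immutable tag
```

Blue-green and canary usually require extra machinery (two environments plus a traffic switch, or a progressive-delivery controller), which is exactly the choice this section should document.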
### Rollback Procedures
- Rollback trigger criteria
- Rollback steps per environment
- Data rollback considerations
- Communication plan during rollback
### Health Checks
- Liveness probe configuration (see the sketch below)
- Readiness probe configuration
- Custom health endpoints
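A minimal probe sketch, assuming the service exposes HTTP health endpoints on port 8080; paths, port, and timings are placeholders to align with the custom endpoints above:

```yaml
# Hypothetical container probe configuration; endpoints and timings are assumptions.
livenessProbe:
  httpGet:
    path: /healthz        # process is alive; restart the container on repeated failure
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 10
  failureThreshold: 3
readinessProbe:
  httpGet:
    path: /ready          # dependencies reachable; pull from load balancing on failure
    port: 8080
  periodSeconds: 5
  failureThreshold: 1
```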
### Deployment Checklist
- [ ] All tests pass in CI
- [ ] Security scan clean
- [ ] Database migrations reviewed
- [ ] Feature flags configured
- [ ] Monitoring alerts configured
- [ ] Rollback plan documented
- [ ] Stakeholders notified
Store output to `_docs/02_components/deployment_strategy.md`
## Notes
- Prefer automated deployments over manual
- Zero-downtime deployments for production
- Always have a rollback plan
- Ask questions about infrastructure constraints
@@ -0,0 +1,45 @@
# Implement E2E Black-Box Tests
Build a separate Docker-based consumer application that exercises the main system as a black box, validating end-to-end use cases.
## Input
- E2E test infrastructure spec: `_docs/02_plans/<topic>/e2e_test_infrastructure.md` (produced by plan skill Step 4b)
## Context
- Problem description: `@_docs/00_problem/problem.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Solution: `@_docs/01_solution/solution.md`
- Architecture: `@_docs/02_plans/<topic>/architecture.md`
## Role
You are a professional QA engineer and developer
## Task
- Read the E2E test infrastructure spec thoroughly
- Build the Docker test environment:
- Create docker-compose.yml with all services (system under test, test DB, consumer app, dependency mocks; a compose sketch follows this task list)
- Configure networks and volumes per spec
- Implement the consumer application:
- Separate project/folder that communicates with the main system only through its public interfaces
- No internal imports from the main system, no direct DB access
- Use the tech stack and entry point defined in the spec
- Implement each E2E test scenario from the spec:
- Check existing E2E tests; update if a similar test already exists
- Prepare seed data and fixtures per the test data management section
- Implement teardown/cleanup procedures
- Run the full E2E suite via `docker compose up`
- If tests fail:
- Fix issues iteratively until all pass
- If a failure is caused by missing external data, API access, or environment config, ask the user
- Ensure the E2E suite integrates into the CI pipeline per the spec
- Produce a CSV test report (test ID, name, execution time, result, error message) at the output path defined in the spec
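As a minimal sketch of the compose file described above; every service name, image, and variable is a placeholder to be replaced with values from `e2e_test_infrastructure.md`:

```yaml
# Sketch only: names, images, and credentials below are placeholders.
services:
  system-under-test:
    build: .                       # the main system, reachable only on this network
    environment:
      DATABASE_URL: postgres://test:test@test-db:5432/app
    depends_on:
      - test-db
  test-db:
    image: postgres:16
    environment:
      POSTGRES_USER: test
      POSTGRES_PASSWORD: test
      POSTGRES_DB: app
  payment-mock:
    build: ./mocks/payment         # an external dependency stubbed as its own container
  consumer:
    build: ./consumer              # black-box test app; no imports from the main system
    depends_on:
      - system-under-test
```

Note the consumer gets no volume mounts into the main system's source tree and no database credentials, which keeps the black-box boundary enforceable at the infrastructure level.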
## Safety Rules
- The consumer app must treat the main system as a true black box
- Never import internal modules or access the main system's database directly
- Docker environment must be self-contained — no host dependencies beyond Docker itself
- If external services need mocking, implement mock/stub services as Docker containers
## Notes
- Ask questions if the spec is ambiguous or incomplete
- If `e2e_test_infrastructure.md` is missing, stop and inform the user to run the plan skill first
@@ -0,0 +1,64 @@
# CI/CD Pipeline Validation & Enhancement
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Restrictions: `@_docs/00_problem/restrictions.md`
- Full Solution Description: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
- Environment Strategy: `@_docs/00_templates/environment_strategy.md`
## Role
You are a DevOps engineer
## Task
- Review existing CI/CD pipeline configuration
- Validate all stages are working correctly
- Optimize pipeline performance (parallelization, caching)
- Ensure test coverage gates are enforced
- Verify security scanning is properly configured
- Add missing quality gates
## Checklist
### Pipeline Health
- [ ] All stages execute successfully
- [ ] Build time is acceptable (<10 min for most projects)
- [ ] Caching is properly configured (dependencies, build artifacts; see the sketch below)
- [ ] Parallel execution where possible
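As one concrete pattern for the caching and parallelism items above (GitHub Actions and a .NET toolchain are assumed here purely for illustration; GitLab CI and Azure Pipelines have direct equivalents):

```yaml
# Hypothetical workflow excerpt: dependency caching plus jobs that run in parallel.
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/cache@v4
        with:
          path: ~/.nuget/packages                      # cache location varies by ecosystem
          key: nuget-${{ hashFiles('**/*.csproj') }}   # invalidate when dependencies change
          restore-keys: nuget-
      - run: dotnet build --configuration Release
  lint:
    runs-on: ubuntu-latest    # independent job, runs in parallel with build
    steps:
      - uses: actions/checkout@v4
      - run: dotnet format --verify-no-changes
```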
### Quality Gates
- [ ] Code coverage threshold enforced (minimum 75%; see the gate sketch below)
- [ ] Linting errors block merge
- [ ] Security vulnerabilities block merge (critical/high)
- [ ] All tests must pass
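For the coverage gate, most coverage tools can fail the build below a threshold. A sketch assuming coverlet.msbuild in a .NET test project (substitute the equivalent flag for the project's actual stack):

```yaml
# Hypothetical CI step: the step, and therefore the merge, fails
# when line coverage drops below 75% (assumes coverlet.msbuild).
- name: Test with coverage gate
  run: dotnet test /p:CollectCoverage=true /p:Threshold=75 /p:ThresholdType=line
```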
### Environment Deployments
- [ ] Staging deployment works on merge to stage branch
- [ ] Environment variables properly configured per environment
- [ ] Secrets are securely managed (not in code)
- [ ] Rollback procedure documented
### Monitoring
- [ ] Build notifications configured (Slack, email, etc.)
- [ ] Failed build alerts
- [ ] Deployment success/failure notifications
## Output
### Pipeline Status Report
- Current pipeline configuration summary
- Issues found and fixes applied
- Performance metrics (build times)
### Recommended Improvements
- Short-term improvements
- Long-term optimizations
### Quality Gate Configuration
- Thresholds configured
- Enforcement rules
## Notes
- Do not break existing functionality
- Test changes in separate branch first
- Document any manual steps required
@@ -0,0 +1,38 @@
# Code Review
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Acceptance criteria: `@_docs/00_problem/acceptance_criteria.md`
- Security approach: `@_docs/00_problem/security_approach.md`
- Full Solution Description: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
## Role
You are a senior software engineer performing code review
## Task
- Review implemented code against component specifications
- Check code quality: readability, maintainability, SOLID principles
- Check error handling consistency
- Check logging implementation
- Check security requirements are met
- Check test coverage is adequate
- Identify code smells and technical debt
## Output
### Issues Found
For each issue:
- File/Location
- Issue type (Bug/Security/Performance/Style/Debt)
- Description
- Suggested fix
- Priority (High/Medium/Low)
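For example, a single reported issue might look like this (the file path is hypothetical):

```markdown
- File/Location: src/Api/Controllers/AnnotationController.cs:42
- Issue type: Security
- Description: user-supplied file name is used to build a filesystem path without sanitization
- Suggested fix: resolve against a whitelisted base directory and reject path separators
- Priority: High
```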
### Summary
- Total issues by type
- Blocking issues that must be fixed
- Recommended improvements
## Notes
- Can also use Cursor's built-in review feature
- Focus on critical issues first
@@ -0,0 +1,53 @@
# Implement Initial Structure
## Input
- Structure plan: `_docs/02_tasks/<topic>/initial_structure.md` (produced by decompose skill)
## Context
- Problem description: `@_docs/00_problem/problem.md`
- Restrictions: `@_docs/00_problem/restrictions.md`
- Solution: `@_docs/01_solution/solution.md`
## Role
You are a professional software architect
## Task
- Carefully read the structure plan in `initial_structure.md`
- Execute the plan — create the project skeleton:
- DTOs and shared models
- Component interfaces
- Empty implementations (stubs)
- Helpers — empty implementations or interfaces
- Add .gitignore appropriate for the project's language/framework
- Add .env.example with required environment variables
- Configure CI/CD pipeline per the structure plan stages
- Apply environment strategy (dev, staging, production) per the structure plan
- Add database migration setup if applicable
- Add README.md, describe the project based on the solution
- Create test folder structure per the structure plan
- Document recommended branch protection rules
## Example
The structure should roughly look like this (varies by tech stack):
- .gitignore
- .env.example
- .github/workflows/ (or .gitlab-ci.yml or azure-pipelines.yml)
- api/
- components/
- component1_folder/
- component2_folder/
- db/
- migrations/
- helpers/
- models/
- tests/
- unit/
- integration/
- test_data/
Semantically coherent components may have their own project or subfolder. Common interfaces can be in a shared layer or per-component — follow language conventions.
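A minimal `.env.example` sketch, assuming a typical web service; every variable name is a placeholder, and the real list should be derived from the components:

```bash
# .env.example: variable names only; copy to .env and fill in locally.
APP_ENV=development
DATABASE_URL=postgres://user:password@localhost:5432/app
API_PORT=8080
LOG_LEVEL=debug
API_KEY=            # secrets stay empty here and are never committed
```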
## Notes
- Follow SOLID, KISS, DRY
- Follow conventions of the project's programming language
- Ask as many questions as needed
@@ -0,0 +1,62 @@
# Implement Next Wave
Identify the next batch of independent features and implement them in parallel using the implementer subagent.
## Prerequisites
- Project scaffolded (`/implement-initial` completed)
- `_docs/02_tasks/<topic>/SUMMARY.md` exists
- `_docs/02_tasks/<topic>/cross_dependencies.md` exists
## Wave Sizing
- One wave = one phase from SUMMARY.md (features whose dependencies are all satisfied; see the selection sketch below)
- Max 4 subagents run concurrently; features in the same component run sequentially
- If a phase has more than 8 features or more than 20 complexity points, suggest splitting into smaller waves and let the user cherry-pick which features to include
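A sketch of the selection logic in Python; the feature-record shape and the `complexity` field are assumptions about what `cross_dependencies.md` encodes:

```python
# Sketch: pick the next wave from the dependency graph (data shape is assumed).
def next_wave(features, done, max_features=8, max_points=20):
    """features: {feature_id: {"deps": [ids], "component": str, "complexity": int}}
    done: set of feature ids already implemented."""
    ready = [f for f, spec in features.items()
             if f not in done and all(d in done for d in spec["deps"])]
    wave, points = [], 0
    for f in ready:
        if len(wave) >= max_features or points + features[f]["complexity"] > max_points:
            break  # oversized phase: stop here and let the user cherry-pick the rest
        wave.append(f)
        points += features[f]["complexity"]
    return wave
```

The per-component sequencing rule is not modeled here; the agent still groups same-component features so they run one after another.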
## Task
1. **Read the implementation plan**
- Read `SUMMARY.md` for the phased implementation order
- Read `cross_dependencies.md` for the dependency graph
2. **Detect current progress**
- Analyze the codebase to determine which features are already implemented
- Match implemented code against feature specs in `_docs/02_tasks/<topic>/`
- Identify the next incomplete wave/phase from the implementation order
3. **Present the wave**
- List all features in this wave with their complexity points
- Show which component each feature belongs to
- Confirm total features and estimated complexity
- If the phase exceeds 8 features or 20 complexity points, recommend splitting and let user select a subset
- **BLOCKING**: Do NOT proceed until user confirms
4. **Launch parallel implementation**
- For each feature in the wave, launch an `implementer` subagent in background
- Each subagent receives the path to its feature spec file
- Features within different components can run in parallel
- Features within the same component should run sequentially to avoid file conflicts
5. **Monitor and report**
- Wait for all subagents to complete
- Collect results from each: what was implemented, test results, any issues
- Run the full test suite
- Report summary:
- Features completed successfully
- Features that failed or need manual attention
- Test results (passed/failed/skipped)
- Any mocks created for future-wave dependencies
6. **Post-wave actions**
- Suggest: `git add . && git commit` with a wave-level commit message
- If all features passed: "Ready for next wave. Run `/implement-wave` again."
- If some failed: "Fix the failing features before proceeding to the next wave."
## Safety Rules
- Never launch features whose dependencies are not yet implemented
- Features within the same component run sequentially, not in parallel
- If a subagent fails, do NOT retry automatically — report and let user decide
- Always run tests after the wave completes, before suggesting commit
## Notes
- Ask questions if the implementation order is ambiguous
- If SUMMARY.md or cross_dependencies.md is missing, stop and inform the user to run the decompose skill first
@@ -0,0 +1,122 @@
# Observability Planning
## Initial data:
- Problem description: `@_docs/00_problem/problem_description.md`
- Full Solution Description: `@_docs/01_solution/solution.md`
- Components: `@_docs/02_components`
- Deployment Strategy: `@_docs/02_components/deployment_strategy.md`
## Role
You are a Site Reliability Engineer (SRE)
## Task
- Define logging strategy across all components
- Plan metrics collection and dashboards
- Design distributed tracing (if applicable)
- Establish alerting rules
- Document incident response procedures
## Output
### Logging Strategy
#### Log Levels
| Level | Usage | Example |
|-------|-------|---------|
| ERROR | Exceptions, failures requiring attention | Database connection failed |
| WARN | Potential issues, degraded performance | Retry attempt 2/3 |
| INFO | Significant business events | User registered, Order placed |
| DEBUG | Detailed diagnostic information | Request payload, Query params |
#### Log Format
```json
{
"timestamp": "ISO8601",
"level": "INFO",
"service": "service-name",
"correlation_id": "uuid",
"message": "Event description",
"context": {}
}
```
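A concrete instance of this format, with illustrative values:

```json
{
  "timestamp": "2026-03-25T02:40:03Z",
  "level": "INFO",
  "service": "annotation-api",
  "correlation_id": "9b2f6c0e-3d41-4f8a-9c27-5a1e0d7b8f12",
  "message": "Order placed",
  "context": { "order_id": "A-1042", "user_id": "u-778" }
}
```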
#### Log Storage
- Development: Console/file
- Staging: Centralized (ELK, CloudWatch, etc.)
- Production: Centralized with retention policy
### Metrics
#### System Metrics
- CPU usage
- Memory usage
- Disk I/O
- Network I/O
#### Application Metrics
| Metric | Type | Description |
|--------|------|-------------|
| request_count | Counter | Total requests |
| request_duration | Histogram | Response time |
| error_count | Counter | Failed requests |
| active_connections | Gauge | Current connections |
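One way to register these, sketched with Python's prometheus_client (the library choice is an assumption for illustration; every major stack has an equivalent):

```python
# Sketch of the application metrics above (prometheus_client assumed).
from prometheus_client import Counter, Gauge, Histogram

REQUEST_COUNT = Counter("request_count_total", "Total requests", ["route", "status"])
REQUEST_DURATION = Histogram("request_duration_seconds", "Response time in seconds", ["route"])
ERROR_COUNT = Counter("error_count_total", "Failed requests", ["route"])
ACTIVE_CONNECTIONS = Gauge("active_connections", "Current open connections")

# Typical use in a request handler:
#   with REQUEST_DURATION.labels(route="/orders").time():
#       ...  # handle the request
#   REQUEST_COUNT.labels(route="/orders", status="200").inc()
```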
#### Business Metrics
- [Define based on acceptance criteria]
### Distributed Tracing
#### Trace Context
- Correlation ID propagation
- Span naming conventions
- Sampling strategy
#### Integration Points
- HTTP headers
- Message queue metadata
- Database query tagging
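A sketch of correlation-ID propagation over HTTP, using Python's requests library for illustration; the `X-Correlation-ID` header name is a common convention, not a mandate of this plan (W3C `traceparent` is the standards-based alternative):

```python
# Sketch: reuse the inbound correlation id, or mint one, and forward it
# on every outbound call so logs can be stitched together across services.
import uuid
import requests

def correlation_id_from(inbound_headers: dict) -> str:
    return inbound_headers.get("X-Correlation-ID") or str(uuid.uuid4())

def call_downstream(url: str, correlation_id: str) -> requests.Response:
    # The callee logs the same id, so one business transaction can be
    # followed through every service it touches.
    return requests.get(url, headers={"X-Correlation-ID": correlation_id}, timeout=5)
```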
### Alerting
#### Alert Categories
| Severity | Response Time | Examples |
|----------|---------------|----------|
| Critical | 5 min | Service down, Data loss |
| High | 30 min | High error rate, Performance degradation |
| Medium | 4 hours | Elevated latency, Disk usage high |
| Low | Next business day | Non-critical warnings |
#### Alert Rules
```yaml
alerts:
- name: high_error_rate
condition: error_rate > 5%
duration: 5m
severity: high
- name: service_down
condition: health_check_failed
duration: 1m
severity: critical
```
### Dashboards
#### Operations Dashboard
- Service health status
- Request rate and error rate
- Response time percentiles
- Resource utilization
#### Business Dashboard
- Key business metrics
- User activity
- Transaction volumes
Store output to `_docs/02_components/observability_plan.md`
## Notes
- Follow the principle: "If it's not monitored, it's not in production"
- Balance verbosity with cost
- Ensure PII is not logged
- Plan for log rotation and retention