Update .gitignore to include .env and .DS_Store files

Add .cursor autodevelopment system
Author: Oleksandr Bezdieniezhnykh
Date: 2026-03-25 17:41:10 +02:00
Commit: d96971b050 (parent 44e75afc4f)
95 changed files with 11049 additions and 1 deletions
@@ -0,0 +1,31 @@
# Dependencies Table Template
Use this template after cross-task verification. Save as `TASKS_DIR/_dependencies_table.md`.
---
```markdown
# Dependencies Table
**Date**: [YYYY-MM-DD]
**Total Tasks**: [N]
**Total Complexity Points**: [N]
| Task | Name | Complexity | Dependencies | Epic |
|------|------|-----------|-------------|------|
| [JIRA-ID] | initial_structure | [points] | None | [EPIC-ID] |
| [JIRA-ID] | [short_name] | [points] | [JIRA-ID] | [EPIC-ID] |
| [JIRA-ID] | [short_name] | [points] | [JIRA-ID] | [EPIC-ID] |
| [JIRA-ID] | [short_name] | [points] | [JIRA-ID], [JIRA-ID] | [EPIC-ID] |
| ... | ... | ... | ... | ... |
```
---
## Guidelines
- Every task from TASKS_DIR must appear in this table
- Dependencies column lists Jira IDs (e.g., "AZ-43, AZ-44") or "None"
- No circular dependencies allowed
- Tasks should be listed in recommended execution order
- The `/implement` skill reads this table to compute parallel batches
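The batch computation the `/implement` skill performs can be pictured as a level-order topological sort over this table. The sketch below is illustrative only (the task IDs are hypothetical, and the real skill parses the markdown table itself); it also shows why circular dependencies must be rejected:

```python
# Sketch: group tasks into parallel batches from a dependencies table.
# Hypothetical task IDs; the real /implement skill parses the markdown table.
def parallel_batches(deps):
    """deps maps a task ID to the set of task IDs it depends on."""
    remaining = {task: set(d) for task, d in deps.items()}
    batches = []
    while remaining:
        # Tasks whose dependencies are all satisfied can run in parallel.
        ready = sorted(t for t, d in remaining.items() if not d)
        if not ready:
            raise ValueError("Circular dependency detected")
        batches.append(ready)
        for t in ready:
            del remaining[t]
        for d in remaining.values():
            d.difference_update(ready)
    return batches

deps = {
    "AZ-41": set(),                  # initial_structure, no dependencies
    "AZ-43": {"AZ-41"},
    "AZ-44": {"AZ-41"},
    "AZ-45": {"AZ-43", "AZ-44"},
}
print(parallel_batches(deps))  # [['AZ-41'], ['AZ-43', 'AZ-44'], ['AZ-45']]
```

Each inner list is a batch whose tasks have no unmet dependencies, which is exactly the ordering guarantee the guidelines above require.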
@@ -0,0 +1,135 @@
# Initial Structure Task Template
Use this template for the bootstrap structure plan. Save as `TASKS_DIR/01_initial_structure.md` initially, then rename to `TASKS_DIR/[JIRA-ID]_initial_structure.md` after Jira ticket creation.
---
```markdown
# Initial Project Structure
**Task**: [JIRA-ID]_initial_structure
**Name**: Initial Structure
**Description**: Scaffold the project skeleton — folders, shared models, interfaces, stubs, CI/CD, DB migrations, test structure
**Complexity**: [3|5] points
**Dependencies**: None
**Component**: Bootstrap
**Jira**: [TASK-ID]
**Epic**: [EPIC-ID]
## Project Folder Layout
```
project-root/
├── [folder structure based on tech stack and components]
└── ...
```
### Layout Rationale
[Brief explanation of why this structure was chosen — language conventions, framework patterns, etc.]
## DTOs and Interfaces
### Shared DTOs
| DTO Name | Used By Components | Fields Summary |
|----------|-------------------|---------------|
| [name] | [component list] | [key fields] |
### Component Interfaces
| Component | Interface | Methods | Exposed To |
|-----------|-----------|---------|-----------|
| [name] | [InterfaceName] | [method list] | [consumers] |
## CI/CD Pipeline
| Stage | Purpose | Trigger |
|-------|---------|---------|
| Build | Compile/bundle the application | Every push |
| Lint / Static Analysis | Code quality and style checks | Every push |
| Unit Tests | Run unit test suite | Every push |
| Blackbox Tests | Run blackbox test suite | Every push |
| Security Scan | SAST / dependency check | Every push |
| Deploy to Staging | Deploy to staging environment | Merge to staging branch |
### Pipeline Configuration Notes
[Framework-specific notes: CI tool, runners, caching, parallelism, etc.]
## Environment Strategy
| Environment | Purpose | Configuration Notes |
|-------------|---------|-------------------|
| Development | Local development | [local DB, mock services, debug flags] |
| Staging | Pre-production testing | [staging DB, staging services, production-like config] |
| Production | Live system | [production DB, real services, optimized config] |
### Environment Variables
| Variable | Dev | Staging | Production | Description |
|----------|-----|---------|------------|-------------|
| [VAR_NAME] | [value/source] | [value/source] | [value/source] | [purpose] |
## Database Migration Approach
**Migration tool**: [tool name]
**Strategy**: [migration strategy — e.g., versioned scripts, ORM migrations]
### Initial Schema
[Key tables/collections that need to be created, referencing component data access patterns]
## Test Structure
```
tests/
├── unit/
│ ├── [component_1]/
│ ├── [component_2]/
│ └── ...
├── integration/
│ ├── test_data/
│ └── [test files]
└── ...
```
### Test Configuration Notes
[Test runner, fixtures, test data management, isolation strategy]
## Implementation Order
| Order | Component | Reason |
|-------|-----------|--------|
| 1 | [name] | [why first — foundational, no dependencies] |
| 2 | [name] | [depends on #1] |
| ... | ... | ... |
## Acceptance Criteria
**AC-1: Project scaffolded**
Given the structure plan above
When the implementer executes this task
Then all folders, stubs, and configuration files exist
**AC-2: Tests runnable**
Given the scaffolded project
When the test suite is executed
Then all stub tests pass (even if they only assert true)
**AC-3: CI/CD configured**
Given the scaffolded project
When CI pipeline runs
Then build, lint, and test stages complete successfully
```
---
## Guidance Notes
- This is a PLAN document, not code. The `/implement` skill executes it.
- Focus on structure and organization decisions, not implementation details.
- Reference component specs for interface and DTO details — don't repeat everything.
- The folder layout should follow conventions of the identified tech stack.
- Environment strategy should account for secrets management and configuration.
@@ -0,0 +1,113 @@
# Task Specification Template
Create a focused behavioral specification that describes **what** the system should do, not **how** it should be built.
Save as `TASKS_DIR/[##]_[short_name].md` initially, then rename to `TASKS_DIR/[JIRA-ID]_[short_name].md` after Jira ticket creation.
---
```markdown
# [Feature Name]
**Task**: [JIRA-ID]_[short_name]
**Name**: [short human name]
**Description**: [one-line description of what this task delivers]
**Complexity**: [1|2|3|5] points
**Dependencies**: [AZ-43_shared_models, AZ-44_db_migrations] or "None"
**Component**: [component name for context]
**Jira**: [TASK-ID]
**Epic**: [EPIC-ID]
## Problem
Clear, concise statement of the problem users are facing.
## Outcome
- Measurable or observable goal 1
- Measurable or observable goal 2
- ...
## Scope
### Included
- What's in scope for this task
### Excluded
- Explicitly what's NOT in scope
## Acceptance Criteria
**AC-1: [Title]**
Given [precondition]
When [action]
Then [expected result]
**AC-2: [Title]**
Given [precondition]
When [action]
Then [expected result]
## Non-Functional Requirements
**Performance**
- [requirement if relevant]
**Compatibility**
- [requirement if relevant]
**Reliability**
- [requirement if relevant]
## Unit Tests
| AC Ref | What to Test | Required Outcome |
|--------|-------------|-----------------|
| AC-1 | [test subject] | [expected result] |
## Blackbox Tests
| AC Ref | Initial Data/Conditions | What to Test | Expected Behavior | NFR References |
|--------|------------------------|-------------|-------------------|----------------|
| AC-1 | [setup] | [test subject] | [expected behavior] | [NFR if any] |
## Constraints
- [Architectural pattern constraint if critical]
- [Technical limitation]
- [Integration requirement]
## Risks & Mitigation
**Risk 1: [Title]**
- *Risk*: [Description]
- *Mitigation*: [Approach]
```
---
## Complexity Points Guide
- 1 point: Trivial, self-contained, no dependencies
- 2 points: Non-trivial, low complexity, minimal coordination
- 3 points: Multi-step, moderate complexity, potential alignment needed
- 5 points: Difficult, interconnected logic, medium-high risk
- 8 points: Too complex — split into smaller tasks
## Output Guidelines
**DO:**
- Focus on behavior and user experience
- Use clear, simple language
- Keep acceptance criteria testable (Gherkin format)
- Include realistic scope boundaries
- Write from the user's perspective
- Include complexity estimation
- Reference dependencies by Jira ID (e.g., AZ-43_shared_models)
**DON'T:**
- Include implementation details (file paths, classes, methods)
- Prescribe technical solutions or libraries
- Add architectural diagrams or code examples
- Specify exact API endpoints or data structures
- Include step-by-step implementation instructions
- Add "how to build" guidance
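The metadata header in this template is regular enough to lint mechanically, which is one way to enforce the complexity guide above. A minimal sketch, assuming specs follow the `**Field**: value` convention exactly as written here (the regex and function names are illustrative, not part of the system):

```python
import re

# Sketch: lint a task spec's metadata header against the template rules.
# Complexity must be 1, 2, 3, or 5 points; 8 means the task should be split.
FIELD = re.compile(r"^\*\*(\w+)\*\*:\s*(.+)$", re.MULTILINE)

def lint_header(text):
    fields = dict(FIELD.findall(text))
    problems = []
    points = int(re.search(r"\d+", fields.get("Complexity", "0")).group())
    if points not in (1, 2, 3, 5):
        problems.append(f"complexity {points} invalid; 8+ means split the task")
    if not fields.get("Dependencies"):
        problems.append("Dependencies field missing (use 'None' if independent)")
    return problems

spec = "**Task**: AZ-50_example\n**Complexity**: 8 points\n**Dependencies**: None\n"
print(lint_header(spec))  # ['complexity 8 invalid; 8+ means split the task']
```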
@@ -0,0 +1,129 @@
# Test Infrastructure Task Template
Use this template for the test infrastructure bootstrap (Step 1t in tests-only mode). Save as `TASKS_DIR/01_test_infrastructure.md` initially, then rename to `TASKS_DIR/[JIRA-ID]_test_infrastructure.md` after Jira ticket creation.
---
```markdown
# Test Infrastructure
**Task**: [JIRA-ID]_test_infrastructure
**Name**: Test Infrastructure
**Description**: Scaffold the Blackbox test project — test runner, mock services, Docker test environment, test data fixtures, reporting
**Complexity**: [3|5] points
**Dependencies**: None
**Component**: Blackbox Tests
**Jira**: [TASK-ID]
**Epic**: [EPIC-ID]
## Test Project Folder Layout
```
e2e/
├── conftest.py
├── requirements.txt
├── Dockerfile
├── mocks/
│ ├── [mock_service_1]/
│ │ ├── Dockerfile
│ │ └── [entrypoint file]
│ └── [mock_service_2]/
│ ├── Dockerfile
│ └── [entrypoint file]
├── fixtures/
│ └── [test data files]
├── tests/
│ ├── test_[category_1].py
│ ├── test_[category_2].py
│ └── ...
└── docker-compose.test.yml
```
### Layout Rationale
[Brief explanation of directory structure choices — framework conventions, separation of mocks from tests, fixture management]
## Mock Services
| Mock Service | Replaces | Endpoints | Behavior |
|-------------|----------|-----------|----------|
| [name] | [external service] | [endpoints it serves] | [response behavior, configurable via control API] |
### Mock Control API
Each mock service exposes a `POST /mock/config` endpoint for test-time behavior control (e.g., simulate downtime, inject errors). A `GET /mock/[resource]` endpoint returns recorded interactions for assertion.
## Docker Test Environment
### docker-compose.test.yml Structure
| Service | Image / Build | Purpose | Depends On |
|---------|--------------|---------|------------|
| [system-under-test] | [build context] | Main system being tested | [mock services] |
| [mock-1] | [build context] | Mock for [external service] | — |
| [e2e-consumer] | [build from e2e/] | Test runner | [system-under-test] |
### Networks and Volumes
[Isolated test network, volume mounts for test data, model files, results output]
## Test Runner Configuration
**Framework**: [e.g., pytest]
**Plugins**: [e.g., pytest-csv, sseclient-py, requests]
**Entry point**: [e.g., pytest --csv=/results/report.csv]
### Fixture Strategy
| Fixture | Scope | Purpose |
|---------|-------|---------|
| [name] | [session/module/function] | [what it provides] |
## Test Data Fixtures
| Data Set | Source | Format | Used By |
|----------|--------|--------|---------|
| [name] | [volume mount / generated / API seed] | [format] | [test categories] |
### Data Isolation
[Strategy: fresh containers per run, volume cleanup, mock state reset]
## Test Reporting
**Format**: [e.g., CSV]
**Columns**: [e.g., Test ID, Test Name, Execution Time (ms), Result, Error Message]
**Output path**: [e.g., /results/report.csv → mounted to host]
## Acceptance Criteria
**AC-1: Test environment starts**
Given the docker-compose.test.yml
When `docker compose -f docker-compose.test.yml up` is executed
Then all services start and the system-under-test is reachable
**AC-2: Mock services respond**
Given the test environment is running
When the e2e-consumer sends requests to mock services
Then mock services respond with configured behavior
**AC-3: Test runner executes**
Given the test environment is running
When the e2e-consumer starts
Then the test runner discovers and executes test files
**AC-4: Test report generated**
Given tests have been executed
When the test run completes
Then a report file exists at the configured output path with correct columns
```
---
## Guidance Notes
- This is a PLAN document, not code. The `/implement` skill executes it.
- Focus on test infrastructure decisions, not individual test implementations.
- Reference environment.md and test-data.md from the test specs — don't repeat everything.
- Mock services must be deterministic: same input always produces same output.
- The Docker environment must be self-contained: a single `docker compose up` must be sufficient to run the full suite.
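The mock contract described in the template (a config endpoint for behavior control plus recorded interactions for assertion) can be modeled as a small state machine. This is an illustrative sketch under that assumption, not a prescribed implementation; a real mock would wrap this state in an HTTP server exposing `POST /mock/config` and `GET /mock/[resource]`:

```python
# Sketch: the state a deterministic mock service maintains.
# Hypothetical class; real mocks would serve this over HTTP.
class MockService:
    def __init__(self):
        self.config = {"fail": False}   # set via POST /mock/config
        self.recorded = []              # returned via GET /mock/[resource]

    def configure(self, **settings):
        """Test-time behavior control, e.g. simulate downtime."""
        self.config.update(settings)

    def handle(self, request):
        """Deterministic: same request and config always give the same reply."""
        self.recorded.append(request)
        if self.config["fail"]:
            return {"status": 503, "body": "simulated downtime"}
        return {"status": 200, "body": f"echo:{request}"}

mock = MockService()
mock.configure(fail=True)
print(mock.handle("ping"))  # {'status': 503, 'body': 'simulated downtime'}
```

Because the reply depends only on the request and the explicit config, reruns are reproducible, and resetting the state between runs gives the data isolation the template calls for.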