Update test results directory structure and enhance Docker configurations

- Modified `.gitignore` to reflect the new path for test results.
- Updated `docker-compose.test.yml` to mount the correct test results directory.
- Adjusted `Dockerfile.test` to set the `PYTHONPATH` and ensure test results are saved in the updated location.
- Added `boto3` and `netron` to `requirements-test.txt` to support new functionality.
- Updated `pytest.ini` to include the new `pythonpath` for test discovery.

These changes streamline the testing process and ensure compatibility with the updated directory structure.
Author: Oleksandr Bezdieniezhnykh
Date:   2026-03-28 00:13:08 +02:00
Commit: 243b69656b (parent: c20018745b)
48 changed files with 707 additions and 581 deletions
This file: +47 -29
```diff
@@ -1,45 +1,63 @@
 # Phase 4: Execution
-**Role**: Software architect and developer
-**Goal**: Analyze coupling and execute decoupling changes
-**Constraints**: Small incremental changes; tests must stay green after every change
+**Role**: Orchestrator
+**Goal**: Execute all refactoring tasks by delegating to the implement skill
+**Constraints**: No inline code changes — all implementation goes through the implement skill's batching and review pipeline
```
```diff
-## 4a. Analyze Coupling
+## 4a. Pre-Flight Checks
-1. Analyze coupling between components/modules
-2. Map dependencies (direct and transitive)
-3. Identify circular dependencies
-4. Form decoupling strategy
+1. Verify refactoring task files exist in TASKS_DIR (created during Phase 2d):
+   - All `[JIRA-ID]_refactor_*.md` files are present
+   - Each task file has valid header fields (Task, Name, Description, Complexity, Dependencies)
+2. Verify `TASKS_DIR/_dependencies_table.md` includes the refactoring tasks
+3. Verify all tests pass (safety net from Phase 3 is green)
+4. If any check fails, go back to the relevant phase to fix
```
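The new pre-flight checks in 4a could be sketched as a small validation script. This is a minimal sketch, not part of the commit: the `**Field**: value` header syntax, the `_refactor_`-based filename split, and the exact failure messages are assumptions.

```python
import re
from pathlib import Path

# Header fields each task file must declare (from the 4a checklist)
REQUIRED_FIELDS = ("Task", "Name", "Description", "Complexity", "Dependencies")

def preflight(tasks_dir: Path) -> list[str]:
    """Return a list of failed checks; an empty list means 4a passes."""
    failures = []
    task_files = sorted(tasks_dir.glob("*_refactor_*.md"))
    if not task_files:
        failures.append("no [JIRA-ID]_refactor_*.md task files found")
    for path in task_files:
        text = path.read_text(encoding="utf-8")
        for field in REQUIRED_FIELDS:
            # assumption: header fields are written as "**Field**: value"
            if not re.search(rf"^\*\*{field}\*\*\s*:", text, re.MULTILINE):
                failures.append(f"{path.name}: missing header field '{field}'")
    deps_table = tasks_dir / "_dependencies_table.md"
    if not deps_table.exists():
        failures.append("_dependencies_table.md not found")
    else:
        deps_text = deps_table.read_text(encoding="utf-8")
        for path in task_files:
            task_id = path.name.split("_refactor_")[0]  # assumed "[JIRA-ID]_refactor_*.md" naming
            if task_id not in deps_text:
                failures.append(f"{task_id} missing from _dependencies_table.md")
    return failures
```

The third 4a check (the Phase 3 safety net being green) would remain a plain test run; any non-empty result sends the workflow back to the phase that produced the gap.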
```diff
-Write `REFACTOR_DIR/coupling_analysis.md`:
-- Dependency graph (Mermaid)
-- Coupling metrics per component
-- Problem areas: components involved, coupling type, severity, impact
-- Decoupling strategy: priority order, proposed interfaces/abstractions, effort estimates
+## 4b. Delegate to Implement Skill
-**BLOCKING**: Present coupling analysis to user. Do NOT proceed until user confirms strategy.
+Read and execute `.cursor/skills/implement/SKILL.md`.
-## 4b. Execute Decoupling
```
```diff
+The implement skill will:
+1. Parse task files and dependency graph from TASKS_DIR
+2. Detect already-completed tasks (skip non-refactoring tasks from prior workflow steps)
+3. Compute execution batches for the refactoring tasks
+4. Launch implementer subagents (up to 4 in parallel)
+5. Run code review after each batch
+6. Commit and push per batch
+7. Update Jira/ADO ticket status
-For each change in the decoupling strategy:
+Do NOT modify, skip, or abbreviate any part of the implement skill's workflow. The refactor skill is delegating execution, not optimizing it.
-1. Implement the change
-2. Run blackbox tests
-3. Fix any failures
-4. Commit with descriptive message
```
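The batch computation described above (parse the dependency graph, skip completed tasks, run up to 4 implementers in parallel) can be illustrated with a topological layering sketch. This is hypothetical, not the implement skill's actual code; only the 4-subagent cap comes from the source.

```python
MAX_PARALLEL = 4  # the skill launches at most 4 implementer subagents per batch

def compute_batches(deps: dict[str, set[str]], done: set[str]) -> list[list[str]]:
    """Layer tasks so every batch depends only on earlier batches or already-done tasks."""
    remaining = {t for t in deps if t not in done}  # skip already-completed tasks
    completed = set(done)
    batches = []
    while remaining:
        # a task is ready once all of its dependencies are completed
        ready = sorted(t for t in remaining if deps[t] <= completed)
        if not ready:
            raise ValueError(f"circular dependency among: {sorted(remaining)}")
        for i in range(0, len(ready), MAX_PARALLEL):  # cap batch width at the parallel limit
            batch = ready[i:i + MAX_PARALLEL]
            batches.append(batch)
            completed.update(batch)
        remaining.difference_update(ready)
    return batches
```

Each batch would then be handed to implementer subagents, with code review, commit, and push happening per batch before the next layer starts.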
```diff
+## 4c. Capture Results
-Address code smells encountered: long methods, large classes, duplicate code, dead code, magic numbers.
+After the implement skill completes:
-Write `REFACTOR_DIR/execution_log.md`:
-- Change description, files affected, test status per change
-- Before/after metrics comparison against baseline
+1. Read batch reports from `_docs/03_implementation/batch_*_report.md`
+2. Read `_docs/03_implementation/FINAL_implementation_report.md`
+3. Write `RUN_DIR/execution_log.md` summarizing:
+   - Total tasks executed
+   - Batches completed
+   - Code review verdicts per batch
+   - Files modified (aggregate list)
+   - Any blocked or failed tasks
+   - Links to batch reports
```
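Step 3 of 4c, summarizing batch reports into `RUN_DIR/execution_log.md`, might look like the following sketch. The `batch_*_report.md` glob comes from the steps above; the log layout and the relative-link style are assumptions.

```python
from pathlib import Path

def write_execution_log(impl_dir: Path, run_dir: Path) -> Path:
    """Summarize batch reports into RUN_DIR/execution_log.md with links back to each report."""
    reports = sorted(impl_dir.glob("batch_*_report.md"))
    lines = ["# Execution Log", "", f"Batches completed: {len(reports)}", "", "## Batch reports", ""]
    for report in reports:
        # assumption: links relative to the repo root resolve; adjust to the actual layout
        lines.append(f"- [{report.stem}]({report.as_posix()})")
    log_path = run_dir / "execution_log.md"
    log_path.write_text("\n".join(lines) + "\n", encoding="utf-8")
    return log_path
```

The remaining summary fields (review verdicts, modified files, blocked tasks) would be parsed out of the batch reports and the FINAL implementation report in the same pass.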
```diff
+## 4d. Update Task Statuses
+For each successfully completed refactoring task:
+1. Transition the Jira/ADO ticket status to **Done** via the configured tracker MCP
+2. If tracker unavailable, note the pending status transitions in `RUN_DIR/execution_log.md`
+For any failed or blocked tasks, leave their status as-is (the implement skill already set them to In Testing or blocked).
 **Self-verification**:
+- [ ] All refactoring tasks show as completed in batch reports
+- [ ] All completed tasks have Jira/ADO status set to Done
 - [ ] All tests still pass after execution
-- [ ] No circular dependencies remain (or reduced per plan)
-- [ ] Code smells addressed
-- [ ] Metrics improved compared to baseline
+- [ ] No tasks remain in blocked or failed state (or user has acknowledged them)
+- [ ] `RUN_DIR/execution_log.md` written with links to batch reports
-**Save action**: Write execution artifacts
+**Save action**: Write `RUN_DIR/execution_log.md`
-**BLOCKING**: Present execution summary to user. Do NOT proceed until user confirms.
+**GATE**: All refactoring tasks must be implemented. If any tasks failed, present the failures to the user and ask for guidance before proceeding to Phase 5.
```
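The 4d fallback (transition to Done when the tracker is reachable, otherwise record the pending transitions in the execution log) reduces to a sketch like this; `tracker` is a hypothetical stand-in for the configured Jira/ADO MCP call, and `ConnectionError` an assumed failure mode.

```python
def update_statuses(completed_ids, tracker, log_lines):
    """Move completed tasks to Done; when the tracker is unavailable, note pending transitions."""
    pending = []
    for task_id in completed_ids:
        try:
            # `tracker` stands in for the configured Jira/ADO transition call
            tracker(task_id, "Done")
        except ConnectionError:
            pending.append(task_id)
    if pending:
        log_lines.append("Pending status transitions (tracker unavailable): " + ", ".join(pending))
    return pending
```

Failed or blocked tasks are deliberately left out of `completed_ids`; their status stays wherever the implement skill left it.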