Oleksandr Bezdieniezhnykh d818daacd1 Sync .cursor from detections
2026-04-12 05:05:09 +03:00


# Phase 3: Safety Net
**Role**: QA engineer and developer
**Goal**: Ensure tests exist that capture current behavior before refactoring
**Constraints**: Tests must all pass on the current codebase before proceeding
## Skip Condition: Testability Refactoring
If the current run name contains `testability` (e.g., `01-testability-refactoring`), **skip Phase 3 entirely**. The purpose of a testability run is to make the code testable so that tests can be written afterward. Announce the skip and proceed to Phase 4.
## 3a. Check Existing Tests
Before designing or implementing any new tests, check what already exists:
1. Scan the project for existing test files (unit tests, integration tests, blackbox tests)
2. Run the existing test suite — record pass/fail counts
3. Measure current coverage against the areas being refactored (from `RUN_DIR/list-of-changes.md` file paths)
4. Assess coverage against thresholds:
- Minimum overall coverage: 75%
- Critical path coverage: 90%
- All public APIs must have blackbox tests
- All error handling paths must be tested
If existing tests meet all thresholds for the refactoring areas:
- Document the existing coverage in `RUN_DIR/test_specs/existing_coverage.md`
- Skip to the GATE check below
If existing tests partially cover the refactoring areas:
- Document what is covered and what gaps remain
- Proceed to 3b only for the uncovered areas
If no relevant tests exist:
- Proceed to 3b for full test design
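The threshold assessment in step 4 can be automated. Below is a minimal sketch that checks a coverage.py JSON report (produced by `coverage json`) against the 75%/90% thresholds — `assess_coverage`, its inputs, and the example paths are illustrative assumptions, not part of the workflow:

```python
import json

# Thresholds from 3a, step 4.
MIN_OVERALL = 75.0
MIN_CRITICAL = 90.0

def assess_coverage(report, critical_paths):
    """Check a parsed coverage.py JSON report against the Phase 3 thresholds.

    `report` is the dict loaded from coverage.json; `critical_paths` is a
    list of file paths in the refactoring scope (taken, in this workflow,
    from RUN_DIR/list-of-changes.md). Returns (ok, gaps), where `gaps`
    lists every area that falls below its threshold.
    """
    gaps = []
    overall = report["totals"]["percent_covered"]
    if overall < MIN_OVERALL:
        gaps.append(("overall", overall))
    for path in critical_paths:
        summary = report["files"].get(path, {}).get("summary", {})
        pct = summary.get("percent_covered", 0.0)  # missing file => 0% covered
        if pct < MIN_CRITICAL:
            gaps.append((path, pct))
    return (not gaps, gaps)

def load_report(path="coverage.json"):
    """Load the JSON report written by `coverage json`."""
    with open(path) as f:
        return json.load(f)
```

An empty `gaps` list means the existing tests already meet the thresholds and 3b/3c can be skipped for those areas; a non-empty list is exactly the "what gaps remain" record for `RUN_DIR/test_specs/existing_coverage.md`.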
## 3b. Design Test Specs (for uncovered areas only)
For each uncovered critical area, write test specs to `RUN_DIR/test_specs/[##]_[test_name].md`:
- Blackbox tests: summary, current behavior, input data, expected result, max expected time
- Acceptance tests: summary, preconditions, steps with expected results
- Coverage analysis: current %, target %, uncovered critical paths
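A blackbox spec file following the structure above might look like this — the filename, API name, and values are hypothetical placeholders:

```markdown
<!-- RUN_DIR/test_specs/01_parse_order_blackbox.md (hypothetical example) -->

## Summary
Captures the current behavior of the public `parse_order` API before refactoring.

## Current behavior
Returns a normalized dict; unknown fields are silently dropped.

## Input data
`{"id": "A-1", "qty": "3", "extra": true}`

## Expected result
`{"id": "A-1", "qty": 3}`

## Max expected time
50 ms
```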
## 3c. Implement Tests (for uncovered areas only)
1. Set up the test environment and infrastructure if they do not already exist
2. Implement each test from specs
3. Run tests, verify all pass on current codebase
4. Document any discovered issues
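Because these tests must pass on the *current* codebase, they are characterization tests: they assert what the code does today, not what it should do. A minimal pytest-style sketch — `slugify` is a stand-in for a real function in the refactoring scope, not something this workflow defines:

```python
# Characterization ("golden master") tests pin down current behavior,
# whatever it is, so the refactor can be verified against it.

def slugify(title):
    # Placeholder for existing production code under refactor.
    return title.strip().lower().replace(" ", "-")

def test_slugify_current_behavior():
    # Assert what the code does TODAY, not what we wish it did.
    assert slugify("  Hello World ") == "hello-world"

def test_slugify_error_path():
    # Error-handling paths must be tested too (per the 3a thresholds).
    try:
        slugify(None)
    except AttributeError:
        pass  # current behavior: raises on non-string input
    else:
        raise AssertionError("expected AttributeError for None input")
```

If a characterization test reveals surprising behavior, that goes into the step-4 record of discovered issues — the test still asserts the surprising behavior as-is.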
**Self-verification**:
- [ ] Coverage requirements met (75% overall, 90% critical paths) across existing + new tests
- [ ] All tests pass on current codebase
- [ ] All public APIs in refactoring scope have blackbox tests
- [ ] Test data fixtures are configured
**Save action**: Write test specs to `RUN_DIR`; implemented tests go into the project's test folder
**GATE (BLOCKING)**: ALL tests must pass before proceeding to Phase 4. If tests fail, fix the tests (not the code) or ask user for guidance. Do NOT proceed to Phase 4 with failing tests.
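Mechanically, the gate reduces to an exit-code check on the test runner — a sketch assuming a pytest-style CLI; `test_cmd` is an assumed default, substitute the project's real test command:

```python
import subprocess

def safety_net_gate(test_cmd=("pytest", "-q")):
    """Return True only when the whole suite passes (runner exits 0).

    Phase 4 must not start unless this returns True; on False, fix the
    tests (not the code) and re-run.
    """
    result = subprocess.run(test_cmd)
    return result.returncode == 0
```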