mirror of
https://github.com/azaion/detections.git
synced 2026-04-23 02:46:31 +00:00
Generalize tracker references, restructure refactor skill, and strengthen coding rules
- Replace all Jira-specific references with generic tracker/work-item terminology (TRACKER-ID, work item epics); delete project-management.mdc and mcp.json.example
- Restructure refactor skill: extract 8 phases (00–07) and templates into separate files; add guided mode for pre-built change lists
- Add Step 3 "Code Testability Revision" to existing-code workflow (renumber steps 3–12 → 3–13)
- Simplify autopilot state file to minimal current-step pointer
- Strengthen coding rules: AAA test comments per language, test failures as blocking gates, dependency install policy
- Add Docker Suitability Assessment to test-spec and test-run skills (local vs Docker execution)
- Narrow human-attention sound rule to human-input-needed only
- Add AskQuestion fallback to plain text across skills
- Rename FINAL_implementation_report to implementation_report_*
- Simplify cursor-meta (remove _docs numbering table, quality thresholds)
- Make techstackrule alwaysApply, add alwaysApply:false to openapi
This commit is contained in:
@@ -0,0 +1,57 @@
# Phase 3: Safety Net

**Role**: QA engineer and developer

**Goal**: Ensure tests exist that capture current behavior before refactoring

**Constraints**: Tests must all pass on the current codebase before proceeding

## Skip Condition: Testability Refactoring

If the current run name contains `testability` (e.g., `01-testability-refactoring`), **skip Phase 3 entirely**. The purpose of a testability run is to make the code testable so that tests can be written afterward. Announce the skip and proceed to Phase 4.
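The skip condition is a simple substring check on the run name. A minimal sketch, assuming run names follow the `NN-description` convention shown in the example (the function name is illustrative, not part of the skill):

```python
# Sketch of the Phase 3 skip condition. The run-name convention
# ("01-testability-refactoring") comes from the text; the helper name
# should_skip_safety_net is a hypothetical illustration.

def should_skip_safety_net(run_name: str) -> bool:
    # A testability run exists to *make* the code testable, so there is
    # no safety net to build yet.
    return "testability" in run_name

print(should_skip_safety_net("01-testability-refactoring"))  # True
print(should_skip_safety_net("02-extract-service-layer"))    # False
```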
## 3a. Check Existing Tests

Before designing or implementing any new tests, check what already exists:

1. Scan the project for existing test files (unit tests, integration tests, blackbox tests)
2. Run the existing test suite — record pass/fail counts
3. Measure current coverage against the areas being refactored (from `RUN_DIR/list-of-changes.md` file paths)
4. Assess coverage against thresholds:
   - Minimum overall coverage: 75%
   - Critical path coverage: 90%
   - All public APIs must have blackbox tests
   - All error handling paths must be tested

If existing tests meet all thresholds for the refactoring areas:

- Document the existing coverage in `RUN_DIR/test_specs/existing_coverage.md`
- Skip to the GATE check below

If existing tests partially cover the refactoring areas:

- Document what is covered and what gaps remain
- Proceed to 3b only for the uncovered areas

If no relevant tests exist:

- Proceed to 3b for full test design
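The threshold assessment in step 4 above can be sketched as a small check. The function, the data shapes, and the file names below are illustrative assumptions, not part of the skill:

```python
# Hypothetical helper: decide whether existing coverage meets the Phase 3
# thresholds for the files being refactored. Only the 75%/90% thresholds
# come from the text; everything else is illustrative.

OVERALL_MIN = 0.75   # minimum overall coverage
CRITICAL_MIN = 0.90  # minimum coverage on critical paths

def coverage_gaps(per_file, critical_files, overall):
    """Return the list of reasons the existing tests are insufficient."""
    gaps = []
    if overall < OVERALL_MIN:
        gaps.append(f"overall coverage {overall:.0%} < {OVERALL_MIN:.0%}")
    for path in critical_files:
        pct = per_file.get(path, 0.0)
        if pct < CRITICAL_MIN:
            gaps.append(f"critical path {path}: {pct:.0%} < {CRITICAL_MIN:.0%}")
    return gaps

# Example: overall coverage is fine, but one critical file is under-covered.
gaps = coverage_gaps(
    per_file={"src/parser.py": 0.95, "src/auth.py": 0.60},
    critical_files=["src/parser.py", "src/auth.py"],
    overall=0.80,
)
print(gaps)  # ['critical path src/auth.py: 60% < 90%']
```

An empty result means the existing tests suffice and Phase 3 can skip straight to the GATE; a non-empty result is exactly the gap list that scopes 3b.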
## 3b. Design Test Specs (for uncovered areas only)

For each uncovered critical area, write test specs to `RUN_DIR/test_specs/[##]_[test_name].md`:

- Blackbox tests: summary, current behavior, input data, expected result, max expected time
- Acceptance tests: summary, preconditions, steps with expected results
- Coverage analysis: current %, target %, uncovered critical paths
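As a sketch of what an implemented blackbox test from such a spec might look like: the `normalize_id` API and its behavior are hypothetical stand-ins; real tests target the project's actual public API and take their inputs and expected results from the spec file.

```python
# Hypothetical blackbox test implementing one spec from RUN_DIR/test_specs/.
# normalize_id is an illustrative stand-in for a public API under test.

def normalize_id(raw: str) -> str:
    # Stand-in implementation, so the example is self-contained.
    return raw.strip().upper()

def test_normalize_id_current_behavior():
    # Arrange: input data from the spec
    raw = "  abc-123 "
    # Act: call the public API exactly as an external caller would
    result = normalize_id(raw)
    # Assert: the expected result captures *current* behavior,
    # not desired behavior — the test is a safety net, not a wishlist
    assert result == "ABC-123"

test_normalize_id_current_behavior()
```

Note the blackbox framing: the test exercises only the public surface, so it stays valid while internals are refactored underneath it.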
## 3c. Implement Tests (for uncovered areas only)

1. Set up the test environment and infrastructure if they do not already exist
2. Implement each test from its spec
3. Run the tests and verify that all pass on the current codebase
4. Document any discovered issues

**Self-verification**:

- [ ] Coverage requirements met (75% overall, 90% critical paths) across existing + new tests
- [ ] All tests pass on the current codebase
- [ ] All public APIs in refactoring scope have blackbox tests
- [ ] Test data fixtures are configured

**Save action**: Write test specs to `RUN_DIR`; implemented tests go into the project's test folder

**GATE (BLOCKING)**: ALL tests must pass before proceeding to Phase 4. If tests fail, fix the tests (not the code) or ask the user for guidance. Do NOT proceed to Phase 4 with failing tests.