mirror of
https://github.com/azaion/detections.git
synced 2026-04-22 22:46:31 +00:00
Phase 6: Final Verification
Role: QA engineer
Goal: Run all tests end-to-end, compare final metrics against baseline, and confirm the refactoring succeeded
Constraints: No code changes. If failures are found, go back to the appropriate phase (4 or 5) to fix before retrying.
Skip condition: If the run name contains testability, skip Phase 6 entirely — no test suite exists yet to verify against. Proceed directly to Phase 7.
6a. Run Full Test Suite
- Run unit tests, integration tests, and blackbox tests
- Run acceptance tests derived from acceptance_criteria.md
- Record pass/fail counts and any failures
If any test fails:
- Determine whether the failure is a test issue (→ return to Phase 5) or a code issue (→ return to Phase 4)
- Do NOT proceed until all tests pass
6b. Capture Final Metrics
Re-measure all metrics from Phase 0 baseline using the same tools:
| Metric Category | What to Capture |
|---|---|
| Coverage | Overall, unit, blackbox, critical paths |
| Complexity | Cyclomatic complexity (avg + top 5 functions), LOC, tech debt ratio |
| Code Smells | Total, critical, major |
| Performance | Response times (P50/P95/P99), CPU/memory, throughput |
| Dependencies | Total count, outdated, security vulnerabilities |
| Build | Build time, test execution time, deployment time |
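Since 6b requires the same tools as the Phase 0 baseline, one way to enforce that is to drive re-measurement from the baseline record itself. A minimal sketch, assuming the baseline stores each metric with the tool that produced it, and assuming a hypothetical `measure` callable that actually runs that tool:

```python
# Sketch of step 6b. Re-measuring with a different tool would make the
# 6c comparison meaningless, so the tool name is carried over from the
# baseline entry rather than chosen fresh. The dict shape is assumed.

def capture_final(baseline: dict[str, dict], measure) -> dict[str, dict]:
    """Re-measure every baseline metric with its original tool.

    `baseline` maps metric name -> {"tool": ..., "value": ...};
    `measure(name, tool)` is a hypothetical callable returning the
    freshly measured value.
    """
    final = {}
    for name, info in baseline.items():
        final[name] = {"tool": info["tool"],
                       "value": measure(name, info["tool"])}
    return final
```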
6c. Compare Against Baseline
- Read RUN_DIR/baseline_metrics.md
- Produce a side-by-side comparison: baseline vs final for every metric
- Flag any regressions (metrics that got worse)
- Verify acceptance criteria are met
Write RUN_DIR/verification_report.md:
- Test results summary: total, passed, failed, skipped
- Metric comparison table: metric, baseline value, final value, delta, status (improved / unchanged / regressed)
- Acceptance criteria checklist: criterion, status (met / not met), evidence
- Regressions (if any): metric, severity, explanation
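The per-metric comparison row can be sketched as follows. This is an assumption-laden sketch: it treats every metric as numeric and adds a `higher_is_better` flag (not in the original) to capture that coverage improves when it rises while complexity improves when it falls.

```python
# Sketch of one row of the 6c comparison table. Status values match
# the report spec above: improved / unchanged / regressed.

def compare(baseline: float, final: float, higher_is_better: bool) -> dict:
    delta = final - baseline
    if delta == 0:
        status = "unchanged"
    elif (delta > 0) == higher_is_better:
        status = "improved"  # moved in the desired direction
    else:
        status = "regressed"  # moved in the undesired direction
    return {"baseline": baseline, "final": final,
            "delta": delta, "status": status}
```

Rows whose status is "regressed" are the ones the report must flag and explain.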
Self-verification:
- All tests pass (zero failures)
- All acceptance criteria are met
- No critical metric regressions
- Metrics are captured with the same tools/methodology as Phase 0
Save action: Write RUN_DIR/verification_report.md
GATE (BLOCKING): All tests must pass and there must be no critical regressions. Present the verification report to the user. Do NOT proceed to Phase 7 until the user confirms.
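The gate condition can be summarized in one predicate. A minimal sketch, assuming a report dict with `failed` and `regressions` fields (these names are hypothetical, not a fixed schema from this skill):

```python
# Sketch of the blocking gate: proceed to Phase 7 only when the suite
# is green, no regression is critical, and the user has confirmed.

def gate_open(report: dict, user_confirmed: bool) -> bool:
    tests_green = report.get("failed", 0) == 0
    no_critical = not any(r.get("severity") == "critical"
                          for r in report.get("regressions", []))
    return tests_green and no_critical and user_confirmed
```

Note that user confirmation is a hard requirement: even a fully green report does not open the gate on its own.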