detections/.cursor/skills/refactor/phases/06-verification.md
Oleksandr Bezdieniezhnykh d28b9584f2 Generalize tracker references, restructure refactor skill, and strengthen coding rules
- Replace all Jira-specific references with generic tracker/work-item
  terminology (TRACKER-ID, work item epics); delete project-management.mdc
  and mcp.json.example
- Restructure refactor skill: extract 8 phases (00–07) and templates into
  separate files; add guided mode for pre-built change lists
- Add Step 3 "Code Testability Revision" to existing-code workflow
  (renumber steps 3–12 → 3–13)
- Simplify autopilot state file to minimal current-step pointer
- Strengthen coding rules: AAA test comments per language, test failures as
  blocking gates, dependency install policy
- Add Docker Suitability Assessment to test-spec and test-run skills
  (local vs Docker execution)
- Narrow human-attention sound rule to human-input-needed only
- Add AskQuestion fallback to plain text across skills
- Rename FINAL_implementation_report to implementation_report_*
- Simplify cursor-meta (remove _docs numbering table, quality thresholds)
- Make techstackrule alwaysApply, add alwaysApply:false to openapi
2026-03-28 02:42:36 +02:00


Phase 6: Final Verification

Role: QA engineer
Goal: Run all tests end-to-end, compare final metrics against the baseline, and confirm the refactoring succeeded
Constraints: No code changes. If failures are found, return to the appropriate phase (4 or 5) to fix them before retrying.

Skip condition: If the run name contains testability, skip Phase 6 entirely — no test suite exists yet to verify against. Proceed directly to Phase 7.

6a. Run Full Test Suite

  1. Run unit tests, integration tests, and blackbox tests
  2. Run acceptance tests derived from acceptance_criteria.md
  3. Record pass/fail counts and any failures
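Step 3 can be sketched as a small parser over a runner's summary line. This is a minimal illustration assuming pytest-style output ("N passed, N failed, N skipped"); other runners will need their own pattern.

```python
import re

def parse_summary(summary: str) -> dict:
    """Extract pass/fail/skip counts from a pytest-style summary line."""
    counts = {"passed": 0, "failed": 0, "skipped": 0}
    for number, label in re.findall(r"(\d+) (passed|failed|skipped)", summary):
        counts[label] = int(number)
    return counts

# Example: parse_summary("2 failed, 40 passed, 1 skipped in 3.21s")
# yields {"passed": 40, "failed": 2, "skipped": 1}
```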

If any test fails:

  • Determine whether the failure is a test issue (→ return to Phase 5) or a code issue (→ return to Phase 4)
  • Do NOT proceed until all tests pass

6b. Capture Final Metrics

Re-measure all metrics from the Phase 0 baseline using the same tools:

| Metric Category | What to Capture |
| --- | --- |
| Coverage | Overall, unit, blackbox, critical paths |
| Complexity | Cyclomatic complexity (avg + top 5 functions), LOC, tech debt ratio |
| Code Smells | Total, critical, major |
| Performance | Response times (P50/P95/P99), CPU/memory, throughput |
| Dependencies | Total count, outdated, security vulnerabilities |
| Build | Build time, test execution time, deployment time |
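One way to keep the final snapshot comparable to the baseline is to record it in the same machine-readable shape. The sketch below is purely illustrative — the field names and values are assumptions, not a prescribed schema:

```python
import json

# Hypothetical snapshot mirroring the metric categories above;
# every field name and value here is an illustrative assumption.
final_metrics = {
    "coverage": {"overall_pct": 87.4, "unit_pct": 91.0, "blackbox_pct": 72.5},
    "complexity": {"cyclomatic_avg": 4.2, "loc": 12840, "tech_debt_ratio_pct": 3.1},
    "code_smells": {"total": 14, "critical": 0, "major": 3},
    "performance": {"p95_ms": 120, "throughput_rps": 430},
    "dependencies": {"total": 58, "outdated": 4, "vulnerabilities": 0},
    "build": {"build_s": 95, "test_s": 240},
}

# Serialize so Phase 6c can diff this against the baseline snapshot.
snapshot_json = json.dumps(final_metrics, indent=2)
```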

6c. Compare Against Baseline

  1. Read RUN_DIR/baseline_metrics.md
  2. Produce a side-by-side comparison: baseline vs final for every metric
  3. Flag any regressions (metrics that got worse)
  4. Verify acceptance criteria are met
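The comparison in steps 2–3 can be sketched as a per-metric diff that classifies each delta. Note the direction matters: coverage should go up, complexity and latency should go down. This is a minimal sketch; the metric names are hypothetical:

```python
def compare(baseline: dict, final: dict, higher_is_better: set) -> list:
    """Return (metric, baseline, final, delta, status) rows for the report.

    `higher_is_better` names the metrics where an increase is an
    improvement (e.g. coverage); all others improve by decreasing.
    """
    rows = []
    for metric, base in baseline.items():
        cur = final[metric]
        delta = cur - base
        if delta == 0:
            status = "unchanged"
        elif (delta > 0) == (metric in higher_is_better):
            status = "improved"
        else:
            status = "regressed"
        rows.append((metric, base, cur, delta, status))
    return rows
```

Any row whose status is "regressed" gets flagged in the report (step 3) and checked against the acceptance criteria before the gate.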

Write RUN_DIR/verification_report.md:

  • Test results summary: total, passed, failed, skipped
  • Metric comparison table: metric, baseline value, final value, delta, status (improved / unchanged / regressed)
  • Acceptance criteria checklist: criterion, status (met / not met), evidence
  • Regressions (if any): metric, severity, explanation

Self-verification:

  • All tests pass (zero failures)
  • All acceptance criteria are met
  • No critical metric regressions
  • Metrics are captured with the same tools/methodology as Phase 0

Save action: Write RUN_DIR/verification_report.md

GATE (BLOCKING): All tests must pass and there must be no critical regressions. Present the verification report to the user. Do NOT proceed to Phase 7 until the user confirms.