detections/.cursor/skills/refactor/SKILL.md
Oleksandr Bezdieniezhnykh d28b9584f2 Generalize tracker references, restructure refactor skill, and strengthen coding rules
- Replace all Jira-specific references with generic tracker/work-item
  terminology (TRACKER-ID, work item epics); delete project-management.mdc
  and mcp.json.example
- Restructure refactor skill: extract 8 phases (00–07) and templates into
  separate files; add guided mode for pre-built change lists
- Add Step 3 "Code Testability Revision" to existing-code workflow
  (renumber steps 3–12 → 3–13)
- Simplify autopilot state file to minimal current-step pointer
- Strengthen coding rules: AAA test comments per language, test failures as
  blocking gates, dependency install policy
- Add Docker Suitability Assessment to test-spec and test-run skills
  (local vs Docker execution)
- Narrow human-attention sound rule to human-input-needed only
- Add AskQuestion fallback to plain text across skills
- Rename FINAL_implementation_report to implementation_report_*
- Simplify cursor-meta (remove _docs numbering table, quality thresholds)
- Make techstackrule alwaysApply, add alwaysApply:false to openapi
2026-03-28 02:42:36 +02:00


---
name: refactor
description: "Structured 8-phase refactoring workflow with two input modes: Automatic (skill discovers issues) and Guided (input file with change list). Each run gets its own subfolder in _docs/04_refactoring/. Delegates code execution to the implement skill via task files in _docs/02_tasks/. Additional workflow modes: Targeted (skip discovery), Quick Assessment (phases 0-2 only)."
category: evolve
tags:
  - refactoring
  - coupling
  - technical-debt
  - performance
  - testability
trigger_phrases:
  - refactor
  - refactoring
  - improve code
  - analyze coupling
  - decoupling
  - technical debt
  - code quality
disable-model-invocation: true
---

# Structured Refactoring

Phase details live in phases/ — read the relevant file before executing each phase.

## Core Principles

- **Preserve behavior first**: never refactor without a passing test suite (exception: testability runs, where the goal is making code testable)
- **Measure before and after**: every change must be justified by metrics
- **Small incremental changes**: commit frequently, never break tests
- **Save immediately**: write artifacts to disk after each phase
- **Delegate execution**: all code changes go through the implement skill via task files
- **Ask, don't assume**: when scope or priorities are unclear, STOP and ask the user

## Context Resolution

Announce the detected paths and input mode to the user before proceeding.

Fixed paths:

| Path | Location |
|------|----------|
| PROBLEM_DIR | `_docs/00_problem/` |
| SOLUTION_DIR | `_docs/01_solution/` |
| COMPONENTS_DIR | `_docs/02_document/components/` |
| DOCUMENT_DIR | `_docs/02_document/` |
| TASKS_DIR | `_docs/02_tasks/` |
| TASKS_TODO | `_docs/02_tasks/todo/` |
| REFACTOR_DIR | `_docs/04_refactoring/` |
| RUN_DIR | `REFACTOR_DIR/NN-[run-name]/` |

Prereqs: `problem.md` is required; warn if `acceptance_criteria.md` is absent.

RUN_DIR resolution: on start, scan REFACTOR_DIR for existing NN-* folders. Auto-increment the numeric prefix for the new run. The run name is derived from the invocation context (e.g., 01-testability-refactoring, 02-coupling-refactoring). If invoked with a guided input file, derive the name from the input file name or ask the user.

Create REFACTOR_DIR and RUN_DIR if missing. If a RUN_DIR with the same name already exists, ask user: resume or start fresh?
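As a sketch of how that resolution could work (the function name and the two-digit prefix convention are assumptions drawn from the examples above):

```python
import re
from pathlib import Path

def resolve_run_dir(refactor_dir: str, run_name: str) -> Path:
    """Pick the next NN-[run-name] folder under REFACTOR_DIR."""
    root = Path(refactor_dir)
    root.mkdir(parents=True, exist_ok=True)  # create REFACTOR_DIR if missing
    # Collect the numeric prefixes of existing NN-* run folders.
    used = [
        int(m.group(1))
        for p in root.iterdir()
        if p.is_dir() and (m := re.match(r"^(\d{2})-", p.name))
    ]
    next_nn = max(used, default=0) + 1  # auto-increment
    return root / f"{next_nn:02d}-{run_name}"
```

With an existing `01-testability-refactoring/`, a new coupling run would resolve to `02-coupling-refactoring/`; on an empty REFACTOR_DIR the first run gets prefix `01`.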

## Input Modes

| Mode | Trigger | Discovery source |
|------|---------|------------------|
| Automatic | Default, no input file | Skill discovers issues from code analysis |
| Guided | Input file provided (e.g., `/refactor @list-of-changes.md`) | Reads input file + scans code to form validated change list |

Both modes produce RUN_DIR/list-of-changes.md (template: templates/list-of-changes.md). Both modes then convert that file into task files in TASKS_DIR during Phase 2.

Guided mode cleanup: after RUN_DIR/list-of-changes.md is created from the input file, delete the original input file to avoid duplication.

## Workflow

| Phase | File | Summary | Gate |
|-------|------|---------|------|
| 0 | phases/00-baseline.md | Collect goals, create RUN_DIR, capture baseline metrics | BLOCKING: user confirms |
| 1 | phases/01-discovery.md | Document components (scoped for guided mode), produce list-of-changes.md | BLOCKING: user confirms |
| 2 | phases/02-analysis.md | Research improvements, produce roadmap, create epic, decompose into tasks in TASKS_DIR | BLOCKING: user confirms (Quick Assessment stops here) |
| 3 | phases/03-safety-net.md | Check existing tests or implement pre-refactoring tests (skip for testability runs) | GATE: all tests pass |
| 4 | phases/04-execution.md | Delegate task execution to implement skill | GATE: implement completes |
| 5 | phases/05-test-sync.md | Remove obsolete, update broken, add new tests | GATE: all tests pass |
| 6 | phases/06-verification.md | Run full suite, compare metrics vs baseline | GATE: all pass, no regressions |
| 7 | phases/07-documentation.md | Update _docs/ to reflect refactored state | Skip if _docs/02_document/ absent |

Workflow mode detection:

- "quick assessment" / "just assess" → phases 0-2 only
- "refactor [specific target]" → skip Phase 1 if docs exist
- Default → all phases
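A minimal sketch of this detection logic (the function name and the docs-exist flag are illustrative assumptions, not part of the skill):

```python
def detect_phases(request: str, docs_exist: bool) -> list[int]:
    """Map the invocation phrasing to the list of phases to run."""
    text = request.lower()
    if "quick assessment" in text or "just assess" in text:
        return [0, 1, 2]  # Quick Assessment stops after Phase 2
    if text.startswith("refactor ") and docs_exist:
        return [0] + list(range(2, 8))  # targeted run: skip Phase 1 discovery
    return list(range(8))  # default: all phases
```

So "just assess the service" yields phases 0-2, while "refactor payment module" with existing docs skips Phase 1 discovery.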

At the start of execution, create a TodoWrite with all applicable phases.

## Artifact Structure

All artifacts are written to RUN_DIR:

    baseline_metrics.md                      Phase 0
    discovery/components/[##]_[name].md      Phase 1
    discovery/solution.md                    Phase 1
    discovery/system_flows.md                Phase 1
    list-of-changes.md                       Phase 1
    analysis/research_findings.md            Phase 2
    analysis/refactoring_roadmap.md          Phase 2
    test_specs/[##]_[test_name].md           Phase 3
    execution_log.md                         Phase 4
    test_sync/{obsolete_tests,updated_tests,new_tests}.md  Phase 5
    verification_report.md                   Phase 6
    doc_update_log.md                        Phase 7
    FINAL_report.md                          after all phases

Task files produced during Phase 2 go to TASKS_TODO (not RUN_DIR):

    TASKS_TODO/[TRACKER-ID]_refactor_[short_name].md
    TASKS_DIR/_dependencies_table.md (appended)

Resumability: match existing artifacts to phases above, resume from next incomplete phase.

## Final Report

After all phases complete, write RUN_DIR/FINAL_report.md: input mode used (automatic/guided), phases executed, baseline vs final metrics, a summary of changes, remaining items, and lessons learned.

## Escalation Rules

| Situation | Action |
|-----------|--------|
| Unclear scope or ambiguous criteria | ASK user |
| Tests failing before refactoring | ASK user: fix tests or fix code? |
| Risk of breaking external contracts | ASK user |
| Performance vs readability trade-off | ASK user |
| No test suite or CI exists | WARN user, suggest safety net first |
| Security vulnerability found | WARN user immediately |
| Implement skill reports failures | ASK user: review batch reports |