mirror of
https://github.com/azaion/detections.git
synced 2026-04-23 00:26:31 +00:00
Generalize tracker references, restructure refactor skill, and strengthen coding rules

- Replace all Jira-specific references with generic tracker/work-item terminology (TRACKER-ID, work-item epics); delete project-management.mdc and mcp.json.example
- Restructure refactor skill: extract 8 phases (00–07) and templates into separate files; add guided mode for pre-built change lists
- Add Step 3 "Code Testability Revision" to existing-code workflow (renumber steps 3–12 → 3–13)
- Simplify autopilot state file to minimal current-step pointer
- Strengthen coding rules: AAA test comments per language, test failures as blocking gates, dependency install policy
- Add Docker Suitability Assessment to test-spec and test-run skills (local vs Docker execution)
- Narrow human-attention sound rule to human-input-needed only
- Add AskQuestion fallback to plain text across skills
- Rename FINAL_implementation_report to implementation_report_*
- Simplify cursor-meta (remove _docs numbering table, quality thresholds)
- Make techstack rule alwaysApply, add alwaysApply:false to openapi
@@ -1,471 +1,126 @@
---
name: refactor
description: |
  Structured 8-phase refactoring workflow with two input modes:
  Automatic (skill discovers issues) and Guided (input file with change list).
  Each run gets its own subfolder in _docs/04_refactoring/.
  Delegates code execution to the implement skill via task files in _docs/02_tasks/.
  Additional workflow modes: Targeted (skip discovery), Quick Assessment (phases 0-2 only).
category: evolve
tags: [refactoring, coupling, technical-debt, performance, testability]
trigger_phrases: ["refactor", "refactoring", "improve code", "analyze coupling", "decoupling", "technical debt", "code quality"]
disable-model-invocation: true
---

# Structured Refactoring

Phase details live in `phases/` — read the relevant file before executing each phase.

## Core Principles

- **Preserve behavior first**: never refactor without a passing test suite (exception: testability runs, where the goal is making code testable)
- **Measure before and after**: every change must be justified by metrics
- **Small incremental changes**: commit frequently, never break tests
- **Save immediately**: write artifacts to disk after each phase
- **Delegate execution**: all code changes go through the implement skill via task files
- **Ask, don't assume**: when scope or priorities are unclear, STOP and ask the user

## Context Resolution

Determine the operating mode based on invocation before any other logic runs.
Announce detected paths and input mode to the user before proceeding.

**Fixed paths:**

| Path | Location |
|------|----------|
| PROBLEM_DIR | `_docs/00_problem/` |
| SOLUTION_DIR | `_docs/01_solution/` |
| COMPONENTS_DIR | `_docs/02_document/components/` |
| DOCUMENT_DIR | `_docs/02_document/` |
| TASKS_DIR | `_docs/02_tasks/` |
| TASKS_TODO | `_docs/02_tasks/todo/` |
| REFACTOR_DIR | `_docs/04_refactoring/` |
| RUN_DIR | `REFACTOR_DIR/NN-[run-name]/` |

**Prereqs**: `problem.md` is required; warn if `acceptance_criteria.md` is absent.

**RUN_DIR resolution**: on start, scan REFACTOR_DIR for existing `NN-*` folders. Auto-increment the numeric prefix for the new run. The run name is derived from the invocation context (e.g., `01-testability-refactoring`, `02-coupling-refactoring`). If invoked with a guided input file, derive the name from the input file name or ask the user.

Create REFACTOR_DIR and RUN_DIR if missing. If a RUN_DIR with the same name already exists, ask user: **resume or start fresh?**

## Input Modes

| Mode | Trigger | Discovery source |
|------|---------|-----------------|
| Automatic | Default, no input file | Skill discovers issues from code analysis |
| Guided | Input file provided (e.g., `/refactor @list-of-changes.md`) | Reads input file + scans code to form validated change list |

Both modes produce `RUN_DIR/list-of-changes.md` (template: `templates/list-of-changes.md`). Both modes then convert that file into task files in TASKS_DIR during Phase 2.
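The trigger column of the input-modes table can be expressed as a small check. This is a minimal sketch, assuming the invocation arguments arrive as a list of strings and that guided input files use the `@file` convention shown above; the function name is illustrative:

```python
def detect_input_mode(invocation_args):
    """Return (mode, input_file): guided when an @file argument is present."""
    for arg in invocation_args:
        if arg.startswith("@"):
            # e.g. "/refactor @list-of-changes.md" -> guided mode
            return "guided", arg[1:]
    # Default: automatic mode, the skill discovers issues itself
    return "automatic", None
```

Whatever the detection logic, the resolved mode should be announced to the user before Phase 0 starts.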

**Guided mode cleanup**: after `RUN_DIR/list-of-changes.md` is created from the input file, delete the original input file to avoid duplication.

## Workflow

| Phase | File | Summary | Gate |
|-------|------|---------|------|
| 0 | `phases/00-baseline.md` | Collect goals, create RUN_DIR, capture baseline metrics | BLOCKING: user confirms |
| 1 | `phases/01-discovery.md` | Document components (scoped for guided mode), produce list-of-changes.md | BLOCKING: user confirms |
| 2 | `phases/02-analysis.md` | Research improvements, produce roadmap, create epic, decompose into tasks in TASKS_DIR | BLOCKING: user confirms |
| | | *Quick Assessment stops here* | |
| 3 | `phases/03-safety-net.md` | Check existing tests or implement pre-refactoring tests (skip for testability runs) | GATE: all tests pass |
| 4 | `phases/04-execution.md` | Delegate task execution to implement skill | GATE: implement completes |
| 5 | `phases/05-test-sync.md` | Remove obsolete, update broken, add new tests | GATE: all tests pass |
| 6 | `phases/06-verification.md` | Run full suite, compare metrics vs baseline | GATE: all pass, no regressions |
| 7 | `phases/07-documentation.md` | Update `_docs/` to reflect refactored state | Skip if `_docs/02_document/` absent |

**Workflow mode detection:**
- "quick assessment" / "just assess" → phases 0–2
- "refactor [specific target]" → skip phase 1 if docs exist
- Default → all phases

At the start of execution, create a TodoWrite with all applicable phases.
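The workflow mode detection above can be sketched as a phase-selection helper. This is only a heuristic illustration, assuming the 00–07 numbering from the phase table; the function name and keyword matching are hypothetical:

```python
ALL_PHASES = list(range(8))  # phases 00-07 from the workflow table

def phases_for(request, target_docs_exist=False):
    """Map a user request to the list of phases to execute."""
    text = request.lower()
    if "quick assessment" in text or "just assess" in text:
        return [0, 1, 2]          # Quick Assessment stops after analysis
    phases = ALL_PHASES.copy()
    if target_docs_exist:
        phases.remove(1)          # Targeted mode: skip discovery
    return phases                 # Default: all phases
```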

## Artifact Structure

All artifacts are written to RUN_DIR:

```
baseline_metrics.md                                    Phase 0
discovery/components/[##]_[name].md                    Phase 1
discovery/solution.md                                  Phase 1
discovery/system_flows.md                              Phase 1
list-of-changes.md                                     Phase 1
analysis/research_findings.md                          Phase 2
analysis/refactoring_roadmap.md                        Phase 2
test_specs/[##]_[test_name].md                         Phase 3
execution_log.md                                       Phase 4
test_sync/{obsolete_tests,updated_tests,new_tests}.md  Phase 5
verification_report.md                                 Phase 6
doc_update_log.md                                      Phase 7
FINAL_report.md                                        after all phases
```

Task files produced during Phase 2 go to TASKS_TODO (not RUN_DIR):

```
TASKS_TODO/[TRACKER-ID]_refactor_[short_name].md
TASKS_DIR/_dependencies_table.md   (appended)
```
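A task filename in the `TASKS_TODO/[TRACKER-ID]_refactor_[short_name].md` pattern might be derived from a change entry like this. The helper and its slug rules are hypothetical, not part of the skill:

```python
import re

def task_filename(tracker_id, short_name):
    """Build a [TRACKER-ID]_refactor_[short_name].md filename (slug rules assumed)."""
    # Lowercase the title and collapse non-alphanumeric runs into underscores
    slug = re.sub(r"[^a-z0-9]+", "_", short_name.lower()).strip("_")
    return f"{tracker_id}_refactor_{slug}.md"
```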

**Resumability**: match existing artifacts to the phases above; resume from the next incomplete phase.
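The resumability check above can be sketched by mapping each phase to a marker artifact from the structure listing. This is a simplified illustration: phases 3 and 5 write multiple globbed files (`test_specs/`, `test_sync/`) and are omitted here, and the function name is hypothetical:

```python
from pathlib import Path

# First artifact that proves each phase completed (paths relative to RUN_DIR)
PHASE_MARKERS = {
    0: "baseline_metrics.md",
    1: "list-of-changes.md",
    2: "analysis/refactoring_roadmap.md",
    4: "execution_log.md",
    6: "verification_report.md",
    7: "doc_update_log.md",
}

def resume_phase(run_dir):
    """Return the first phase whose marker artifact is missing, or None if done."""
    run = Path(run_dir)
    for phase in sorted(PHASE_MARKERS):
        if not (run / PHASE_MARKERS[phase]).exists():
            return phase
    return None
```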

## Final Report

After all executed phases complete, write `RUN_DIR/FINAL_report.md`:

- Input mode used (automatic/guided) and phases executed
- Baseline vs final metrics comparison
- Changes summary
- Remaining items (deferred to future runs)
- Lessons learned

## Escalation Rules

| Situation | Action |
|-----------|--------|
| Unclear scope or ambiguous criteria | **ASK user** |
| Tests failing before refactoring | **ASK user** — fix tests or fix code? |
| Risk of breaking external contracts | **ASK user** |
| Performance vs readability trade-off | **ASK user** |
| No test suite or CI exists | **WARN user**, suggest safety net first |
| Security vulnerability found | **WARN user** immediately |
| Implement skill reports failures | **ASK user** — review batch reports |
@@ -0,0 +1,52 @@

# Phase 0: Context & Baseline

**Role**: Software engineer preparing for refactoring
**Goal**: Collect refactoring goals, create run directory, capture baseline metrics
**Constraints**: Measurement only — no code changes

## 0a. Collect Goals

If PROBLEM_DIR files do not yet exist, help the user create them:

1. `problem.md` — what the system currently does, what changes are needed, pain points
2. `acceptance_criteria.md` — success criteria for the refactoring
3. `security_approach.md` — security requirements (if applicable)

Store in PROBLEM_DIR.

## 0b. Create RUN_DIR

1. Scan REFACTOR_DIR for existing `NN-*` folders
2. Auto-increment the numeric prefix (e.g., if `01-testability-refactoring` exists, next is `02-...`)
3. Determine the run name:
   - If guided mode with input file: derive from input file name or context (e.g., `01-testability-refactoring`)
   - If automatic mode: ask user for a short run name, or derive from goals (e.g., `01-coupling-refactoring`)
4. Create `REFACTOR_DIR/NN-[run-name]/` — this is RUN_DIR for the rest of the workflow

Announce RUN_DIR path to user.

## 0c. Capture Baseline

1. Read problem description and acceptance criteria
2. Measure current system metrics using project-appropriate tools:

| Metric Category | What to Capture |
|----------------|-----------------|
| **Coverage** | Overall, unit, blackbox, critical paths |
| **Complexity** | Cyclomatic complexity (avg + top 5 functions), LOC, tech debt ratio |
| **Code Smells** | Total, critical, major |
| **Performance** | Response times (P50/P95/P99), CPU/memory, throughput |
| **Dependencies** | Total count, outdated, security vulnerabilities |
| **Build** | Build time, test execution time, deployment time |

3. Create functionality inventory: all features/endpoints with status and coverage

**Self-verification**:
- [ ] RUN_DIR created with correct auto-incremented prefix
- [ ] All metric categories measured (or noted as N/A with reason)
- [ ] Functionality inventory is complete
- [ ] Measurements are reproducible

**Save action**: Write `RUN_DIR/baseline_metrics.md`

**BLOCKING**: Present baseline summary to user. Do NOT proceed until user confirms.
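The scan-and-increment in step 0b can be sketched as follows. A minimal illustration, assuming two-digit `NN-` prefixes as in the examples above; the function name is hypothetical:

```python
import re
from pathlib import Path

def next_run_dir(refactor_dir, run_name):
    """Scan for NN-* folders and create the next NN-[run-name] directory."""
    root = Path(refactor_dir)
    root.mkdir(parents=True, exist_ok=True)
    # Collect existing numeric prefixes, e.g. "01-testability-refactoring" -> 1
    prefixes = [int(m.group(1))
                for p in root.iterdir() if p.is_dir()
                for m in [re.match(r"(\d{2})-", p.name)] if m]
    run_dir = root / f"{max(prefixes, default=0) + 1:02d}-{run_name}"
    run_dir.mkdir()
    return run_dir
```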
@@ -0,0 +1,119 @@

# Phase 1: Discovery

**Role**: Principal software architect
**Goal**: Analyze existing code and produce `RUN_DIR/list-of-changes.md`
**Constraints**: Document what exists, identify what needs to change. No code changes.

**Skip condition** (Targeted mode): If `COMPONENTS_DIR` and `SOLUTION_DIR` already contain documentation for the target area, skip to Phase 2. Ask user to confirm skip.

## Mode Branch

Determine the input mode set during Context Resolution (see SKILL.md):

- **Guided mode**: input file provided → start with 1g below
- **Automatic mode**: no input file → start with 1a below

---

## Guided Mode

### 1g. Read and Validate Input File

1. Read the provided input file (e.g., `list-of-changes.md` from the autopilot testability revision step, or a user-provided file)
2. Extract file paths, problem descriptions, and proposed changes from each entry
3. For each entry, verify against the actual codebase:
   - Referenced files exist
   - Described problems are accurate (read the code, confirm the issue)
   - Proposed changes are feasible
4. Flag any entries that reference nonexistent files or describe inaccurate problems — ASK user
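The file-existence part of the validation above can be sketched like this. The entry shape (a dict with a `file` key holding a repo-relative path) and the function name are assumptions for illustration; accuracy and feasibility checks still require reading the code:

```python
from pathlib import Path

def validate_entries(entries, repo_root):
    """Split change-list entries into verified and flagged (missing file) lists."""
    verified, flagged = [], []
    for entry in entries:
        target = Path(repo_root) / entry["file"]
        # Entries whose referenced file is missing must be raised with the user
        (verified if target.exists() else flagged).append(entry)
    return verified, flagged
```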
### 1h. Scoped Component Analysis

For each file/area referenced in the input file:

1. Analyze the specific modules and their immediate dependencies
2. Document component structure, interfaces, and coupling points relevant to the proposed changes
3. Identify additional issues not in the input file but discovered while analyzing the same areas

Write per-component docs to `RUN_DIR/discovery/components/[##]_[name].md` (same format as automatic mode, but scoped to the affected areas only).

### 1i. Produce List of Changes

1. Start from the validated input file entries
2. Enrich each entry with:
   - Exact file paths confirmed from code
   - Risk assessment (low/medium/high)
   - Dependencies between changes
3. Add any additional issues discovered during scoped analysis (1h)
4. Write `RUN_DIR/list-of-changes.md` using the `templates/list-of-changes.md` format
   - Set **Mode**: `guided`
   - Set **Source**: path to the original input file

Skip to **Save action** below.

---

## Automatic Mode

### 1a. Document Components

For each component in the codebase:

1. Analyze project structure, directories, files
2. Go file by file, analyze each method
3. Analyze connections between components

Write per component to `RUN_DIR/discovery/components/[##]_[name].md`:
- Purpose and architectural patterns
- Mermaid diagrams for logic flows
- API reference table (name, description, input, output)
- Implementation details: algorithmic complexity, state management, dependencies
- Caveats, edge cases, known limitations

### 1b. Synthesize Solution & Flows

1. Review all generated component documentation
2. Synthesize into a cohesive solution description
3. Create flow diagrams showing component interactions

Write:
- `RUN_DIR/discovery/solution.md` — product description, component overview, interaction diagram
- `RUN_DIR/discovery/system_flows.md` — Mermaid flowcharts per major use case

Also copy to project standard locations:
- `SOLUTION_DIR/solution.md`
- `DOCUMENT_DIR/system_flows.md`

### 1c. Produce List of Changes

From the component analysis and solution synthesis, identify all issues that need refactoring:

1. Hardcoded values (paths, config, magic numbers)
2. Tight coupling between components
3. Missing dependency injection / non-configurable parameters
4. Global mutable state
5. Code duplication
6. Missing error handling
7. Testability blockers (code that cannot be exercised in isolation)
8. Security concerns
9. Performance bottlenecks

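A heuristic first pass over category 1 can be scripted before the manual review. The patterns below are illustrative assumptions, not a complete detector; tune them per codebase:

```python
import re

# Heuristic patterns for likely hardcoded values; extend or narrow per project.
HARDCODED_PATTERNS = [
    (r"/(?:home|tmp|var|usr|etc)/[A-Za-z0-9_./-]+", "absolute-path"),
    (r"https?://[^\s\"']+", "hardcoded-url"),
    (r"\b\d{4,}\b", "magic-number"),
]

def scan_source(text):
    """Return (line_number, kind, excerpt) tuples for suspected hardcoded values."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for pattern, kind in HARDCODED_PATTERNS:
            match = re.search(pattern, line)
            if match:
                findings.append((lineno, kind, match.group(0)))
    return findings
```

Hits are candidates for change entries, not automatic entries; each still needs the Problem/Change/Risk assessment from the template.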
Write `RUN_DIR/list-of-changes.md` using `templates/list-of-changes.md` format:
- Set **Mode**: `automatic`
- Set **Source**: `self-discovered`

---

## Save action (both modes)

Write all discovery artifacts to RUN_DIR.

**Self-verification**:
- [ ] Every referenced file in list-of-changes.md exists in the codebase
- [ ] Each change entry has file paths, problem, change description, risk, and dependencies
- [ ] Component documentation covers all areas affected by the changes
- [ ] In guided mode: all input file entries are validated or flagged
- [ ] In automatic mode: solution description covers all components
- [ ] Mermaid diagrams are syntactically correct

**BLOCKING**: Present discovery summary and list-of-changes.md to user. Do NOT proceed until user confirms documentation accuracy and change list completeness.

@@ -0,0 +1,94 @@
# Phase 2: Analysis & Task Decomposition

**Role**: Researcher, software architect, and task planner
**Goal**: Research improvements, produce a refactoring roadmap, and decompose it into implementable tasks
**Constraints**: Analysis and planning only — no code changes

## 2a. Deep Research

1. Analyze current implementation patterns
2. Research modern approaches for similar systems
3. Identify what could be done differently
4. Suggest improvements based on state-of-the-art practices

Write `RUN_DIR/analysis/research_findings.md`:
- Current state analysis: patterns used, strengths, weaknesses
- Alternative approaches per component: current vs alternative, pros/cons, migration effort
- Prioritized recommendations: quick wins + strategic improvements

## 2b. Solution Assessment & Hardening Tracks

1. Assess the current implementation against acceptance criteria
2. Identify weak points in the codebase, map them to specific code areas
3. Perform gap analysis: acceptance criteria vs current state
4. Prioritize changes by impact and effort

Present optional hardening tracks for the user to include in the roadmap:

```
══════════════════════════════════════
DECISION REQUIRED: Include hardening tracks?
══════════════════════════════════════
A) Technical Debt — identify and address design/code/test debt
B) Performance Optimization — profile, identify bottlenecks, optimize
C) Security Review — OWASP Top 10, auth, encryption, input validation
D) All of the above
E) None — proceed with structural refactoring only
══════════════════════════════════════
```

For each selected track, add entries to `RUN_DIR/list-of-changes.md` (append to the file produced in Phase 1):
- **Track A**: tech debt items with location, impact, effort
- **Track B**: performance bottlenecks with profiling data
- **Track C**: security findings with severity and fix description

Write `RUN_DIR/analysis/refactoring_roadmap.md`:
- Weak points assessment: location, description, impact, proposed solution
- Gap analysis: what's missing, what needs improvement
- Phased roadmap: Phase 1 (critical fixes), Phase 2 (major improvements), Phase 3 (enhancements)
- Selected hardening tracks and their items

## 2c. Create Epic

Create a work item tracker epic for this refactoring run:

1. Epic name: the RUN_DIR name (e.g., `01-testability-refactoring`)
2. Create the epic via the configured tracker MCP
3. Record the Epic ID — all tasks in 2d will be linked under this epic
4. If the tracker is unavailable, use a `PENDING` placeholder and note it for later

## 2d. Task Decomposition

Convert the finalized `RUN_DIR/list-of-changes.md` into implementable task files.

1. Read `RUN_DIR/list-of-changes.md`
2. For each change entry (or group of related entries), create an atomic task file in TASKS_DIR:
   - Use the standard task template format (`.cursor/skills/decompose/templates/task.md`)
   - File naming: `[##]_refactor_[short_name].md` (temporary numeric prefix)
   - **Task**: `PENDING_refactor_[short_name]`
   - **Description**: derived from the change entry's Problem + Change fields
   - **Complexity**: estimate 1-5 points; split into multiple tasks if >5
   - **Dependencies**: map change-level dependencies (C01, C02) to task-level tracker IDs
   - **Component**: from the change entry's File(s) field
   - **Epic**: the epic created in 2c
   - **Acceptance Criteria**: derived from the change entry — verify the problem is resolved
3. Create a work item ticket for each task under the epic from 2c
4. Rename each file to `[TRACKER-ID]_refactor_[short_name].md` after ticket creation
5. Update or append to `TASKS_DIR/_dependencies_table.md` with the refactoring tasks

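The dependency consistency required for `_dependencies_table.md` can be checked mechanically with a depth-first search over the task dependency map. A sketch, with hypothetical task IDs; the table parsing itself is assumed to have happened already:

```python
def find_cycle(dependencies):
    """Return a list of task IDs forming a cycle, or None if the graph is acyclic.

    `dependencies` maps each task ID to the list of IDs it depends on.
    """
    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / on current path / fully explored
    color = {task: WHITE for task in dependencies}
    stack = []

    def visit(task):
        color[task] = GRAY
        stack.append(task)
        for dep in dependencies.get(task, []):
            if color.get(dep, WHITE) == GRAY:
                # dep is already on the current path: close the loop for reporting
                return stack[stack.index(dep):] + [dep]
            if color.get(dep, WHITE) == WHITE and dep in dependencies:
                cycle = visit(dep)
                if cycle:
                    return cycle
        color[task] = BLACK
        stack.pop()
        return None

    for task in dependencies:
        if color[task] == WHITE:
            cycle = visit(task)
            if cycle:
                return cycle
    return None
```

A non-None result means the task files must be revised before decomposition can be accepted.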
**Self-verification**:
- [ ] All acceptance criteria are addressed in gap analysis
- [ ] Recommendations are grounded in actual code, not abstract
- [ ] Roadmap phases are prioritized by impact
- [ ] Epic created and all tasks linked to it
- [ ] Every entry in list-of-changes.md has a corresponding task file in TASKS_DIR
- [ ] No task exceeds 5 complexity points
- [ ] Task dependencies are consistent (no circular dependencies)
- [ ] `_dependencies_table.md` includes all refactoring tasks
- [ ] Every task has a work item ticket (or PENDING placeholder)

**Save action**: Write analysis artifacts to RUN_DIR, task files to TASKS_DIR

**BLOCKING**: Present refactoring roadmap and task list to user. Do NOT proceed until user confirms.

**Quick Assessment mode stops here.** Present final summary and write `FINAL_report.md` with phases 0-2 content.

@@ -0,0 +1,57 @@
# Phase 3: Safety Net

**Role**: QA engineer and developer
**Goal**: Ensure tests exist that capture current behavior before refactoring
**Constraints**: Tests must all pass on the current codebase before proceeding

## Skip Condition: Testability Refactoring

If the current run name contains `testability` (e.g., `01-testability-refactoring`), **skip Phase 3 entirely**. The purpose of a testability run is to make the code testable so that tests can be written afterward. Announce the skip and proceed to Phase 4.

## 3a. Check Existing Tests

Before designing or implementing any new tests, check what already exists:

1. Scan the project for existing test files (unit tests, integration tests, blackbox tests)
2. Run the existing test suite — record pass/fail counts
3. Measure current coverage against the areas being refactored (the file paths in `RUN_DIR/list-of-changes.md`)
4. Assess coverage against thresholds:
   - Minimum overall coverage: 75%
   - Critical path coverage: 90%
   - All public APIs must have blackbox tests
   - All error handling paths must be tested

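The threshold assessment in step 4 can be sketched once the numbers have been extracted from your coverage tool's report. The function and its argument shapes are assumptions for illustration, not part of any coverage tool's API:

```python
def assess_coverage(overall, critical_paths, min_overall=75.0, min_critical=90.0):
    """Return (ok, gaps) for the Phase 3 thresholds.

    `overall` is the whole-suite coverage percentage; `critical_paths` maps
    each critical path name to its coverage percentage.
    """
    gaps = []
    if overall < min_overall:
        gaps.append(f"overall {overall:.1f}% < {min_overall:.0f}%")
    for path, pct in critical_paths.items():
        if pct < min_critical:
            gaps.append(f"{path} {pct:.1f}% < {min_critical:.0f}%")
    return (not gaps), gaps
```

An empty gap list corresponds to the "skip to the GATE check" branch below; a non-empty list names exactly the areas that 3b must cover.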
If existing tests meet all thresholds for the refactoring areas:
- Document the existing coverage in `RUN_DIR/test_specs/existing_coverage.md`
- Skip to the GATE check below

If existing tests partially cover the refactoring areas:
- Document what is covered and what gaps remain
- Proceed to 3b only for the uncovered areas

If no relevant tests exist:
- Proceed to 3b for full test design

## 3b. Design Test Specs (for uncovered areas only)

For each uncovered critical area, write test specs to `RUN_DIR/test_specs/[##]_[test_name].md`:
- Blackbox tests: summary, current behavior, input data, expected result, max expected time
- Acceptance tests: summary, preconditions, steps with expected results
- Coverage analysis: current %, target %, uncovered critical paths

## 3c. Implement Tests (for uncovered areas only)

1. Set up the test environment and infrastructure if they do not exist
2. Implement each test from the specs
3. Run the tests, verify all pass on the current codebase
4. Document any discovered issues

**Self-verification**:
- [ ] Coverage requirements met (75% overall, 90% critical paths) across existing + new tests
- [ ] All tests pass on current codebase
- [ ] All public APIs in refactoring scope have blackbox tests
- [ ] Test data fixtures are configured

**Save action**: Write test specs to RUN_DIR; implemented tests go into the project's test folder

**GATE (BLOCKING)**: ALL tests must pass before proceeding to Phase 4. If tests fail, fix the tests (not the code) or ask the user for guidance. Do NOT proceed to Phase 4 with failing tests.

@@ -0,0 +1,63 @@
# Phase 4: Execution

**Role**: Orchestrator
**Goal**: Execute all refactoring tasks by delegating to the implement skill
**Constraints**: No inline code changes — all implementation goes through the implement skill's batching and review pipeline

## 4a. Pre-Flight Checks

1. Verify refactoring task files exist in TASKS_DIR (created during Phase 2d):
   - All `[TRACKER-ID]_refactor_*.md` files are present
   - Each task file has valid header fields (Task, Name, Description, Complexity, Dependencies)
2. Verify `TASKS_DIR/_dependencies_table.md` includes the refactoring tasks
3. Verify all tests pass (the safety net from Phase 3 is green)
4. If any check fails, go back to the relevant phase to fix it

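Check 1 can be partially automated. A sketch, assuming task filenames follow the `[TRACKER-ID]_refactor_[short_name].md` convention above and assuming the task template renders header fields as `**Field**:` lines (the field format is an assumption about the template, not a verified contract):

```python
import re

REQUIRED_FIELDS = ("Task", "Name", "Description", "Complexity", "Dependencies")

def preflight_check(task_files):
    """Return a list of problems found in refactoring task files.

    `task_files` maps filename -> file content. Filenames are expected to
    look like 'PROJ-123_refactor_extract_config.md'.
    """
    problems = []
    for name, content in task_files.items():
        if not re.match(r"^[A-Z]+-\d+_refactor_[a-z0-9_]+\.md$", name):
            problems.append(f"{name}: unexpected filename")
        for field in REQUIRED_FIELDS:
            if f"**{field}**" not in content:
                problems.append(f"{name}: missing field {field}")
    return problems
```

An empty result satisfies check 1; any problems send the run back to Phase 2d per check 4.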
## 4b. Delegate to Implement Skill

Read and execute `.cursor/skills/implement/SKILL.md`.

The implement skill will:
1. Parse task files and dependency graph from TASKS_DIR
2. Detect already-completed tasks (skip non-refactoring tasks from prior workflow steps)
3. Compute execution batches for the refactoring tasks
4. Launch implementer subagents (up to 4 in parallel)
5. Run code review after each batch
6. Commit and push per batch
7. Update work item ticket status

Do NOT modify, skip, or abbreviate any part of the implement skill's workflow. The refactor skill is delegating execution, not optimizing it.

## 4c. Capture Results

After the implement skill completes:

1. Read batch reports from `_docs/03_implementation/batch_*_report.md`
2. Read the latest `_docs/03_implementation/implementation_report_*.md` file
3. Write `RUN_DIR/execution_log.md` summarizing:
   - Total tasks executed
   - Batches completed
   - Code review verdicts per batch
   - Files modified (aggregate list)
   - Any blocked or failed tasks
   - Links to batch reports

## 4d. Update Task Statuses

For each successfully completed refactoring task:

1. Transition the work item ticket status to **Done** via the configured tracker MCP
2. If tracker unavailable, note the pending status transitions in `RUN_DIR/execution_log.md`

For any failed or blocked tasks, leave their status as-is (the implement skill already set them to In Testing or blocked).

**Self-verification**:
- [ ] All refactoring tasks show as completed in batch reports
- [ ] All completed tasks have work item tracker status set to Done
- [ ] All tests still pass after execution
- [ ] No tasks remain in blocked or failed state (or user has acknowledged them)
- [ ] `RUN_DIR/execution_log.md` written with links to batch reports

**Save action**: Write `RUN_DIR/execution_log.md`

**GATE**: All refactoring tasks must be implemented. If any tasks failed, present the failures to the user and ask for guidance before proceeding to Phase 5.

@@ -0,0 +1,53 @@
# Phase 5: Test Synchronization

**Role**: QA engineer and developer
**Goal**: Reconcile the test suite with the refactored codebase — remove obsolete tests, update broken tests, add tests for new code
**Constraints**: All tests must pass at the end of this phase. Do not change production code here — only tests.

**Skip condition**: If the run name contains `testability`, skip Phase 5 entirely — no test suite exists yet to synchronize. Proceed directly to Phase 6.

## 5a. Identify Obsolete Tests

1. Compare the pre-refactoring codebase structure (from Phase 0 inventory) with the current state
2. Find tests that reference removed functions, classes, modules, or endpoints
3. Find tests that duplicate coverage due to merged/consolidated code
4. Decide per test: **delete** (functionality removed) or **merge** (duplicates)

Write `RUN_DIR/test_sync/obsolete_tests.md`:
- Test file, test name, reason (target removed / target merged / duplicate coverage), action taken (deleted / merged into)

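Step 2 can be sketched as a symbol search, assuming the set of removed names has already been derived by diffing the Phase 0 inventory against the current codebase:

```python
import re

def find_obsolete_tests(test_sources, removed_symbols):
    """Map each test file to the removed symbols it still references.

    `test_sources` maps test file path -> source text; `removed_symbols` is
    the set of function/class names deleted during refactoring. Word-boundary
    matching avoids flagging partial name overlaps.
    """
    obsolete = {}
    for path, source in test_sources.items():
        hits = sorted(
            sym for sym in removed_symbols
            if re.search(rf"\b{re.escape(sym)}\b", source)
        )
        if hits:
            obsolete[path] = hits
    return obsolete
```

Each hit still needs the manual delete-vs-merge decision from step 4; the search only narrows the candidate list.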
## 5b. Update Existing Tests

1. Run the full test suite — collect failures and errors
2. For each failing test, determine the cause:
   - Renamed/moved function or module → update import paths and references
   - Changed function signature → update call sites and assertions
   - Changed behavior (intentional per refactoring plan) → update expected values
   - Changed data structures → update fixtures and assertions
3. Fix each test, re-run to confirm it passes

Write `RUN_DIR/test_sync/updated_tests.md`:
- Test file, test name, change type (import path / signature / assertion / fixture), description of update

## 5c. Add New Tests

1. Identify new code introduced during Phase 4 that lacks test coverage:
   - New public functions, classes, or modules
   - New interfaces or abstractions introduced during decoupling
   - New error handling paths
2. Write tests following the same patterns and conventions as the existing test suite
3. Ensure coverage targets from Phase 3 are maintained or improved

Write `RUN_DIR/test_sync/new_tests.md`:
- Test file, test name, target function/module, coverage type (unit / integration / blackbox)

**Self-verification**:
- [ ] All obsolete tests removed or merged
- [ ] All pre-existing tests pass after updates
- [ ] New code from Phase 4 has test coverage
- [ ] Overall coverage meets or exceeds Phase 3 baseline (75% overall, 90% critical paths)
- [ ] No tests reference removed or renamed code

**Save action**: Write test_sync artifacts; implemented tests go into the project's test folder

**GATE (BLOCKING)**: ALL tests must pass before proceeding to Phase 6. If tests fail, fix the tests or ask user for guidance.

@@ -0,0 +1,53 @@
# Phase 6: Final Verification

**Role**: QA engineer
**Goal**: Run all tests end-to-end, compare final metrics against baseline, and confirm the refactoring succeeded
**Constraints**: No code changes. If failures are found, go back to the appropriate phase (4/5) to fix before retrying.

**Skip condition**: If the run name contains `testability`, skip Phase 6 entirely — no test suite exists yet to verify against. Proceed directly to Phase 7.

## 6a. Run Full Test Suite

1. Run unit tests, integration tests, and blackbox tests
2. Run acceptance tests derived from `acceptance_criteria.md`
3. Record pass/fail counts and any failures

If any test fails:
- Determine whether the failure is a test issue (→ return to Phase 5) or a code issue (→ return to Phase 4)
- Do NOT proceed until all tests pass

## 6b. Capture Final Metrics

Re-measure all metrics from the Phase 0 baseline using the same tools:

| Metric Category | What to Capture |
|-----------------|-----------------|
| **Coverage** | Overall, unit, blackbox, critical paths |
| **Complexity** | Cyclomatic complexity (avg + top 5 functions), LOC, tech debt ratio |
| **Code Smells** | Total, critical, major |
| **Performance** | Response times (P50/P95/P99), CPU/memory, throughput |
| **Dependencies** | Total count, outdated, security vulnerabilities |
| **Build** | Build time, test execution time, deployment time |

## 6c. Compare Against Baseline

1. Read `RUN_DIR/baseline_metrics.md`
2. Produce a side-by-side comparison: baseline vs final for every metric
3. Flag any regressions (metrics that got worse)
4. Verify acceptance criteria are met

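The comparison in steps 2-3 can be sketched as below. Whether a positive delta counts as an improvement depends on the metric (coverage up is good, latency up is bad), so the sketch takes an explicit direction map; the metric names and function are illustrative:

```python
def compare_metrics(baseline, final, higher_is_better):
    """Produce comparison rows: (metric, baseline, final, delta, status).

    All three arguments are keyed by metric name; `higher_is_better` says,
    per metric, whether an increase counts as an improvement.
    """
    rows = []
    for metric, base in baseline.items():
        new = final[metric]
        delta = new - base
        if delta == 0:
            status = "unchanged"
        elif (delta > 0) == higher_is_better[metric]:
            status = "improved"
        else:
            status = "regressed"
        rows.append((metric, base, new, delta, status))
    return rows
```

Rows with status `regressed` feed the regressions section of the verification report.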
Write `RUN_DIR/verification_report.md`:
- Test results summary: total, passed, failed, skipped
- Metric comparison table: metric, baseline value, final value, delta, status (improved / unchanged / regressed)
- Acceptance criteria checklist: criterion, status (met / not met), evidence
- Regressions (if any): metric, severity, explanation

**Self-verification**:
- [ ] All tests pass (zero failures)
- [ ] All acceptance criteria are met
- [ ] No critical metric regressions
- [ ] Metrics are captured with the same tools/methodology as Phase 0

**Save action**: Write `RUN_DIR/verification_report.md`

**GATE (BLOCKING)**: All tests must pass, with no critical regressions. Present the verification report to user. Do NOT proceed to Phase 7 until user confirms.

@@ -0,0 +1,45 @@
# Phase 7: Documentation Update

**Role**: Technical writer
**Goal**: Update existing `_docs/` artifacts to reflect all changes made during refactoring
**Constraints**: Documentation only — no code changes. Only update docs that are affected by refactoring changes.

**Skip condition**: If no `_docs/02_document/` directory exists, skip this phase entirely.

## 7a. Identify Affected Documentation

1. Review `RUN_DIR/execution_log.md` to list all files changed during Phase 4
2. Review test changes from Phase 5
3. Map changed files to their corresponding module docs in `_docs/02_document/modules/`
4. Map changed modules to their parent component docs in `_docs/02_document/components/`
5. Determine whether system-level docs need updates (`architecture.md`, `system-flows.md`, `data_model.md`)
6. Determine whether test documentation needs updates (`_docs/02_document/tests/`)

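Steps 3-4 can be sketched as a path mapping. The mirroring rule below (module docs named after source file stems, component docs after the top-level package under `src/`) is an assumption for illustration; adjust it to the repo's actual `_docs` layout:

```python
from pathlib import Path

def affected_docs(changed_files, docs_root="_docs/02_document"):
    """Map each changed source file to its expected (module doc, component doc).

    Assumes module docs mirror source stems ('src/auth/login.py' ->
    'modules/login.md') and component docs use the package directory name.
    """
    mapping = {}
    for changed in changed_files:
        path = Path(changed)
        module_doc = f"{docs_root}/modules/{path.stem}.md"
        component = path.parts[1] if len(path.parts) > 2 else path.stem
        component_doc = f"{docs_root}/components/{component}.md"
        mapping[changed] = (module_doc, component_doc)
    return mapping
```

The resulting doc paths are candidates to open and update in 7b/7c; files that map to no existing doc indicate new modules needing fresh documentation.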
## 7b. Update Module Documentation

For each module doc affected by refactoring changes:
1. Re-read the current source file
2. Update the module doc to reflect new/changed interfaces, dependencies, internal logic
3. Remove documentation for deleted code; add documentation for new code

## 7c. Update Component Documentation

For each component doc affected:
1. Re-read the updated module docs within the component
2. Update inter-module interfaces, dependency graphs, caveats
3. Update the component relationship diagram if component boundaries changed

## 7d. Update System-Level Documentation

If structural changes were made (new modules, removed modules, changed interfaces):
1. Update `_docs/02_document/architecture.md` if architecture changed
2. Update `_docs/02_document/system-flows.md` if flow sequences changed
3. Update `_docs/02_document/diagrams/components.md` if component relationships changed

**Self-verification**:
- [ ] Every changed source file has an up-to-date module doc
- [ ] Component docs reflect the refactored structure
- [ ] No stale references to removed code in any doc
- [ ] Dependency graphs in docs match actual imports

**Save action**: Updated docs written in-place to `_docs/02_document/`

@@ -0,0 +1,49 @@
# List of Changes Template

Save as `RUN_DIR/list-of-changes.md`. Produced during Phase 1 (Discovery).

---

```markdown
# List of Changes

**Run**: [NN-run-name]
**Mode**: [automatic | guided]
**Source**: [self-discovered | path/to/input-file.md]
**Date**: [YYYY-MM-DD]

## Summary

[1-2 sentence overview of what this refactoring run addresses]

## Changes

### C01: [Short Title]
- **File(s)**: [file paths, comma-separated]
- **Problem**: [what makes this problematic / untestable / coupled]
- **Change**: [what to do — behavioral description, not implementation steps]
- **Rationale**: [why this change is needed]
- **Risk**: [low | medium | high]
- **Dependencies**: [other change IDs this depends on, or "None"]

### C02: [Short Title]
- **File(s)**: [file paths]
- **Problem**: [description]
- **Change**: [description]
- **Rationale**: [description]
- **Risk**: [low | medium | high]
- **Dependencies**: [C01, or "None"]
```

---

## Guidelines

- **Change IDs** use format `C##` (C01, C02, ...) — sequential within the run
- Each change should map to one atomic task (1-5 complexity points); split if larger
- **File(s)** must reference actual files verified to exist in the codebase
- **Problem** describes the current state, not the desired state
- **Change** describes what the system should do differently — behavioral, not prescriptive
- **Dependencies** reference other change IDs within this list; cross-run dependencies use tracker IDs
- In guided mode, the input file entries are validated against actual code and enriched with file paths, risk, and dependencies before writing
- In automatic mode, entries are derived from Phase 1 component analysis and Phase 2 research findings
