Mirror of https://github.com/azaion/detections.git (synced 2026-04-23 10:36:32 +00:00, commit d28b9584f2)

Commit message:

- Replace all Jira-specific references with generic tracker/work-item terminology (TRACKER-ID, work item epics); delete project-management.mdc and mcp.json.example
- Restructure refactor skill: extract 8 phases (00–07) and templates into separate files; add guided mode for pre-built change lists
- Add Step 3 "Code Testability Revision" to existing-code workflow (renumber steps 3–12 → 3–13)
- Simplify autopilot state file to minimal current-step pointer
- Strengthen coding rules: AAA test comments per language, test failures as blocking gates, dependency install policy
- Add Docker Suitability Assessment to test-spec and test-run skills (local vs Docker execution)
- Narrow human-attention sound rule to human-input-needed only
- Add AskQuestion fallback to plain text across skills
- Rename FINAL_implementation_report to implementation_report_*
- Simplify cursor-meta (remove _docs numbering table, quality thresholds)
- Make techstackrule alwaysApply, add alwaysApply:false to openapi
53 lines
2.1 KiB
Markdown
# Phase 0: Context & Baseline

**Role**: Software engineer preparing for refactoring

**Goal**: Collect refactoring goals, create run directory, capture baseline metrics

**Constraints**: Measurement only — no code changes
## 0a. Collect Goals

If PROBLEM_DIR files do not yet exist, help the user create them:

1. `problem.md` — what the system currently does, what changes are needed, pain points
2. `acceptance_criteria.md` — success criteria for the refactoring
3. `security_approach.md` — security requirements (if applicable)

Store the files in PROBLEM_DIR.
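If it helps to seed these files, a minimal `problem.md` skeleton might look like the following (the headings are illustrative, not prescribed by this workflow):

```markdown
# Problem

## Current Behavior
What the system does today, in a few sentences.

## Desired Changes
What needs to change and why.

## Pain Points
Known friction, e.g. slow builds, brittle tests, tangled modules.
```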
## 0b. Create RUN_DIR

1. Scan REFACTOR_DIR for existing `NN-*` folders
2. Auto-increment the numeric prefix (e.g., if `01-testability-refactoring` exists, next is `02-...`)
3. Determine the run name:
   - If guided mode with input file: derive from input file name or context (e.g., `01-testability-refactoring`)
   - If automatic mode: ask user for a short run name, or derive from goals (e.g., `01-coupling-refactoring`)
4. Create `REFACTOR_DIR/NN-[run-name]/` — this is RUN_DIR for the rest of the workflow

Announce RUN_DIR path to user.
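The scan-and-increment logic above can be sketched in Python. This is a sketch under the stated `NN-*` naming convention; `next_run_dir` is a hypothetical helper name, not part of the workflow:

```python
import re
from pathlib import Path


def next_run_dir(refactor_dir: str, run_name: str) -> Path:
    """Compute the next auto-incremented NN-[run-name] path under REFACTOR_DIR.

    Scans for existing folders matching the NN-* convention and picks
    max(NN) + 1, zero-padded to two digits (01 when none exist yet).
    """
    root = Path(refactor_dir)
    prefixes = [
        int(m.group(1))
        for p in sorted(root.iterdir())
        if p.is_dir() and (m := re.match(r"(\d{2})-", p.name))
    ]
    nn = max(prefixes, default=0) + 1
    return root / f"{nn:02d}-{run_name}"
```

For example, with an existing `01-testability-refactoring` folder, `next_run_dir(refactor_dir, "coupling-refactoring")` yields `02-coupling-refactoring`.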
## 0c. Capture Baseline

1. Read problem description and acceptance criteria
2. Measure current system metrics using project-appropriate tools:

   | Metric Category | What to Capture |
   |-----------------|-----------------|
   | **Coverage** | Overall, unit, blackbox, critical paths |
   | **Complexity** | Cyclomatic complexity (avg + top 5 functions), LOC, tech debt ratio |
   | **Code Smells** | Total, critical, major |
   | **Performance** | Response times (P50/P95/P99), CPU/memory, throughput |
   | **Dependencies** | Total count, outdated, security vulnerabilities |
   | **Build** | Build time, test execution time, deployment time |

3. Create functionality inventory: all features/endpoints with status and coverage
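The measurement step can feed a simple renderer that writes the table into `baseline_metrics.md`, marking any category that could not be captured as N/A. A Python sketch; the function name and value format are illustrative assumptions:

```python
def render_baseline(metrics: dict) -> str:
    """Render collected baseline metrics as a Markdown table.

    `metrics` maps a category name to a dict of measured values; a missing
    or empty category is rendered as N/A. Categories mirror the table above.
    """
    categories = ["Coverage", "Complexity", "Code Smells",
                  "Performance", "Dependencies", "Build"]
    lines = ["| Metric Category | Measured |",
             "|-----------------|----------|"]
    for cat in categories:
        values = metrics.get(cat) or {}
        cell = ", ".join(f"{k}: {v}" for k, v in values.items()) or "N/A"
        lines.append(f"| **{cat}** | {cell} |")
    return "\n".join(lines)
```

For instance, `render_baseline({"Coverage": {"overall": "72%"}})` produces a table where the Coverage row reads `overall: 72%` and every unmeasured category reads `N/A`.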
**Self-verification**:

- [ ] RUN_DIR created with correct auto-incremented prefix
- [ ] All metric categories measured (or noted as N/A with reason)
- [ ] Functionality inventory is complete
- [ ] Measurements are reproducible

**Save action**: Write `RUN_DIR/baseline_metrics.md`

**BLOCKING**: Present baseline summary to user. Do NOT proceed until user confirms.