mirror of
https://github.com/azaion/loader.git
synced 2026-04-22 22:26:33 +00:00
Sync .cursor from detections
@@ -46,18 +46,16 @@ Rules:
 2. Always include a recommendation with a brief justification
 3. Keep option descriptions to one line each
 4. If only 2 options make sense, use A/B only — do not pad with filler options
-5. Play the notification sound (per `human-attention-sound.mdc`) before presenting the choice
-6. Record every user decision in the state file's `Key Decisions` section
-7. After the user picks, proceed immediately — no follow-up confirmation unless the choice was destructive
+5. Play the notification sound (per `.cursor/rules/human-attention-sound.mdc`) before presenting the choice
+6. After the user picks, proceed immediately — no follow-up confirmation unless the choice was destructive
 
 ## Work Item Tracker Authentication
 
-Several workflow steps create work items (epics, tasks, links). The system supports **Jira MCP** and **Azure DevOps MCP** as interchangeable backends. Detect which is configured by listing available MCP servers.
+Several workflow steps create work items (epics, tasks, links). The system requires a task tracker MCP; supported trackers are interchangeable backends.
 
 ### Tracker Detection
 
-1. Check for available MCP servers: Jira MCP (`user-Jira-MCP-Server`) or Azure DevOps MCP (`user-AzureDevops`)
-2. If both are available, ask the user which to use (Choose format)
+1. If there is no task tracker MCP, or it is not authorized, ask the user how to proceed
 3. Record the choice in the state file: `tracker: jira` or `tracker: ado`
 4. If neither is available, set `tracker: local` and proceed without external tracking
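The detection steps above can be sketched as follows. This is an illustrative sketch only: the server IDs come from this document, but `detect_tracker` and the way the server list is obtained are assumptions, not part of the actual tooling.

```python
# Hypothetical sketch of Tracker Detection. The MCP server IDs are the ones
# named in this doc; how the server list is obtained is an assumption.
def detect_tracker(available_servers):
    """Map the available MCP servers to a `tracker:` value for the state file."""
    has_jira = "user-Jira-MCP-Server" in available_servers
    has_ado = "user-AzureDevops" in available_servers
    if has_jira and has_ado:
        return "ask-user"  # present a Choose prompt: jira vs ado
    if has_jira:
        return "jira"
    if has_ado:
        return "ado"
    return "local"  # no external tracker: proceed without external tracking
```

If neither server is present, the function falls through to `local`, matching step 4.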
@@ -124,16 +122,12 @@ Skill execution → FAILED
 │
 ├─ retry_count < 3 ?
 │    YES → increment retry_count in state file
-│        → log failure reason in state file (Retry Log section)
 │        → re-read the sub-skill's SKILL.md
 │        → re-execute from the current sub_step
 │        → (loop back to check result)
 │
 │    NO (retry_count = 3) →
 │        → set status: failed in Current Step
-│        → add entry to Blockers section:
-│          "[Skill Name] failed 3 consecutive times at sub_step [M].
-│           Last failure: [reason]. Auto-retry exhausted."
 │        → present warning to user (see Escalation below)
 │        → do NOT auto-retry again until user intervenes
 ```
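The retry policy in the flowchart above can be sketched as a loop. This is a minimal sketch, not the real skill runner: `execute_sub_skill` and the `state` dict stand in for the actual skill execution and the state file.

```python
# Hypothetical sketch of the auto-retry loop. execute_sub_skill and the
# state dict are stand-ins for the real skill runner and state file.
def run_with_retries(execute_sub_skill, state, max_retries=3):
    while True:
        ok, reason = execute_sub_skill(state["sub_step"])
        if ok:
            state["retry_count"] = 0      # reset on success
            state["status"] = "completed"
            return True
        if state["retry_count"] < max_retries:
            state["retry_count"] += 1     # retry from the current sub_step
            continue
        state["status"] = "failed"        # retries exhausted: escalate to user
        state["blocker"] = f"failed {max_retries} times: {reason}"
        return False
```

Note that this gives one initial attempt plus three auto-retries before escalation, matching the `retry_count < 3` check in the chart.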
@@ -143,18 +137,14 @@ Skill execution → FAILED
 1. **Auto-retry immediately**: when a skill fails, retry it without asking the user — the failure is often transient (missing user confirmation in a prior step, docker not running, file lock, etc.)
 2. **Preserve sub_step**: retry from the last recorded `sub_step`, not from the beginning of the skill — unless the failure indicates corruption, in which case restart from sub_step 1
 3. **Increment `retry_count`**: update `retry_count` in the state file's `Current Step` section on each retry attempt
-4. **Log each failure**: append the failure reason and timestamp to the state file's `Retry Log` section
-5. **Reset on success**: when the skill eventually succeeds, reset `retry_count: 0` and clear the `Retry Log` for that step
+4. **Reset on success**: when the skill eventually succeeds, reset `retry_count: 0`
 
 ### Escalation (after 3 consecutive failures)
 
 After 3 failed auto-retries of the same skill, the failure is likely not user-related. Stop retrying and escalate:
 
-1. Update the state file:
-   - Set `status: failed` in `Current Step`
-   - Set `retry_count: 3`
-   - Add a blocker entry describing the repeated failure
-2. Play notification sound (per `human-attention-sound.mdc`)
+1. Update the state file: set `status: failed` and `retry_count: 3` in `Current Step`
+2. Play notification sound (per `.cursor/rules/human-attention-sound.mdc`)
 3. Present using Choose format:
 
 ```
@@ -215,9 +205,8 @@ When executing a sub-skill, monitor for these signals:
 
 If the same autopilot step fails 3 consecutive times across conversations:
 
 - Record the failure pattern in the state file's `Blockers` section
 - Do NOT auto-retry on next invocation
-- Present the blocker and ask user for guidance before attempting again
+- Present the failure pattern and ask user for guidance before attempting again
 
 ## Context Management Protocol
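For illustration, the state-file sections referenced above (`Current Step`, `Blockers`) might look like the fragment below. This is an assumed shape: the exact schema is defined by the state file template, which is not shown here.

```
## Current Step
- step: implement
- sub_step: 4
- status: failed
- retry_count: 3

## Blockers
- Implement skill failed 3 consecutive times at sub_step 4.
  Last failure: ConnectionRefused (docker not running). Auto-retry exhausted.
```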
@@ -304,11 +293,73 @@ For steps that produce `_docs/` artifacts (problem, research, plan, decompose, d
 3. **Git safety net**: artifacts are committed with each autopilot step completion. To roll back: `git log --oneline _docs/` to find the commit, then `git checkout <commit> -- _docs/<folder>/`
 4. **State file rollback**: when rolling back artifacts, also update `_docs/_autopilot_state.md` to reflect the rolled-back step (set it to `in_progress`, clear completed date)
 
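The rollback in step 3 can be exercised end to end as below. The throwaway repo, folder name, and commit messages are illustrative; in the real flow you run the last two commands against the project repository.

```shell
set -e
# Throwaway repo to demonstrate the _docs/ rollback (illustrative only)
cd "$(mktemp -d)" && git init -q . \
  && git config user.email a@example.com && git config user.name a
mkdir -p _docs/plan && echo v1 > _docs/plan/plan.md
git add -A && git commit -qm "plan v1"
echo v2 > _docs/plan/plan.md && git add -A && git commit -qm "plan v2"
# Step 3: find the commit to roll back to (oldest commit touching _docs/)
good=$(git log --oneline _docs/ | tail -1 | cut -d' ' -f1)
# Restore the artifact folder from that commit
git checkout "$good" -- _docs/plan/
cat _docs/plan/plan.md   # the v1 content is back
```

After the checkout, remember step 4: update `_docs/_autopilot_state.md` to match the rolled-back step.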
+## Debug / Error Recovery Protocol
+
+When the implement skill's auto-fix loop fails (code review FAIL after 2 auto-fix attempts) or an implementer subagent reports a blocker, the user is asked to intervene. This protocol guides the recovery process.
+
+### Structured Debugging Workflow
+
+When escalated to the user after implementation failure:
+
+1. **Classify the failure** — determine the category:
+   - **Missing dependency**: a package, service, or module the task needs but isn't available
+   - **Logic error**: code runs but produces wrong results (assertion failures, incorrect output)
+   - **Integration mismatch**: interfaces between components don't align (type errors, missing methods, wrong signatures)
+   - **Environment issue**: Docker, database, network, or configuration problem
+   - **Spec ambiguity**: the task spec is unclear or contradictory
+
+2. **Reproduce** — isolate the failing behavior:
+   - Run the specific failing test(s) in isolation
+   - Check whether the failure is deterministic or intermittent
+   - Capture the exact error message, stack trace, and relevant file:line
+
+3. **Narrow scope** — focus on the minimal reproduction:
+   - For logic errors: trace the data flow from input to the point of failure
+   - For integration mismatches: compare the caller's expectations against the callee's actual interface
+   - For environment issues: verify Docker services are running, DB is accessible, env vars are set
+
+4. **Fix and verify** — apply the fix and confirm:
+   - Make the minimal change that fixes the root cause
+   - Re-run the failing test(s) to confirm the fix
+   - Run the full test suite to check for regressions
+   - If the fix changes a shared interface, check all consumers
+
+5. **Report** — update the batch report with:
+   - Root cause category
+   - Fix applied (file:line, description)
+   - Tests that now pass
+
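The determinism check in step 2 can be sketched like this. `run_failing_test` is a stand-in for invoking the real failing test in isolation; the helper itself is illustrative, not part of the workflow tooling.

```python
# Sketch of the "deterministic or intermittent" check from step 2:
# rerun the failing callable a few times and compare outcomes.
def failure_mode(run_failing_test, runs=5):
    outcomes = []
    for _ in range(runs):
        try:
            run_failing_test()
            outcomes.append("pass")
        except Exception as exc:  # capture exact error type and message
            outcomes.append(f"fail: {type(exc).__name__}: {exc}")
    distinct = set(outcomes)
    if distinct == {"pass"}:
        return "not reproducible"
    return "deterministic" if len(distinct) == 1 else "intermittent"
```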
+### Common Recovery Patterns
+
+| Failure Pattern | Typical Root Cause | Recovery Action |
+|-----------------|--------------------|-----------------|
+| ImportError / ModuleNotFoundError | Missing dependency or wrong path | Install dependency or fix import path |
+| TypeError on method call | Interface mismatch between tasks | Align caller with callee's actual signature |
+| AssertionError in test | Logic bug or wrong expected value | Fix logic or update test expectations |
+| ConnectionRefused | Service not running | Start Docker services, check docker-compose |
+| Timeout | Blocking I/O or infinite loop | Add timeout, fix blocking call |
+| FileNotFoundError | Hardcoded path or missing fixture | Make path configurable, add fixture |
+
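The recovery-pattern table maps directly onto Python's built-in exception types, so it can be sketched as a lookup from a caught exception to the suggested action. The rows are copied from the table; the helper itself is illustrative.

```python
# The recovery table above as a lookup. ModuleNotFoundError is listed before
# ImportError because it is a subclass of it.
RECOVERY = {
    ModuleNotFoundError: "Install dependency or fix import path",
    ImportError: "Install dependency or fix import path",
    TypeError: "Align caller with callee's actual signature",
    AssertionError: "Fix logic or update test expectations",
    ConnectionRefusedError: "Start Docker services, check docker-compose",
    TimeoutError: "Add timeout, fix blocking call",
    FileNotFoundError: "Make path configurable, add fixture",
}

def suggest_recovery(exc):
    """Return the table's recovery action for a caught exception."""
    for exc_type, action in RECOVERY.items():
        if isinstance(exc, exc_type):
            return action
    return "Unclassified: debug manually"
```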
+### Escalation
+
+If debugging does not resolve the issue after 2 focused attempts:
+
+```
+══════════════════════════════════════
+DEBUG ESCALATION: [failure description]
+══════════════════════════════════════
+Root cause category: [category]
+Attempted fixes: [list]
+Current state: [what works, what doesn't]
+══════════════════════════════════════
+A) Continue debugging with more context
+B) Revert this batch and skip the task (move to backlog)
+C) Simplify the task scope and retry
+══════════════════════════════════════
+```
+
 ## Status Summary
 
 On every invocation, before executing any skill, present a status summary built from the state file (with folder scan fallback). Use the Status Summary Template from the active flow file (`flows/greenfield.md` or `flows/existing-code.md`).
 
-For re-entry (state file exists), also include:
-- Key decisions from the state file's `Key Decisions` section
-- Last session context from the `Last Session` section
-- Any blockers from the `Blockers` section
+For re-entry (state file exists), cross-check the current step against the `_docs/` folder structure and present any `status: failed` state to the user before continuing.
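A rendered re-entry summary might then look like the fragment below. This is purely illustrative; the actual layout comes from the Status Summary Template in the flow files, which are not shown here.

```
## Autopilot Status
- Flow: existing-code
- Current step: implement (sub_step 4) — status: failed
- Tracker: ado
- Blocker: Implement skill failed 3 consecutive times at sub_step 4
```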