mirror of
https://github.com/azaion/detections.git
synced 2026-04-22 22:46:31 +00:00
ad5530b9ef
- Updated `.cursor/rules/coderule.mdc` to include new guidelines on maintaining test environments and avoiding hardcoded workarounds.
- Revised state file rules in `.cursor/skills/autopilot/state.md` to ensure comprehensive updates after every meaningful state transition.
- Improved the existing-code workflow in `.cursor/skills/autopilot/flows/existing-code.md` to automate task re-entry without user confirmation.
- Added test coverage requirements to the implementation process in `.cursor/skills/implement/SKILL.md`, ensuring all acceptance criteria are validated by tests.
- Enhanced the new-task skill documentation to include test coverage gap analysis, ensuring all new requirements are covered by tests.

These changes aim to strengthen project maintainability, improve testing practices, and streamline workflows.
34 lines
2.7 KiB
Plaintext
---
description: "Execution safety, user interaction, and self-improvement protocols for the AI agent"
alwaysApply: true
---

# Agent Meta Rules
## Execution Safety

- Never run test suites, builds, Docker commands, or other long-running, resource-heavy, or security-risky operations without asking the user first — unless a skill or agent explicitly allows it, or the user has already asked to do so.
## User Interaction

- Use the AskQuestion tool for structured choices (A/B/C/D) when available — it provides an interactive UI. Fall back to plain-text questions if the tool is unavailable.
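As an illustration, the tool-or-fallback behavior could be sketched as follows. The `ask_question` parameter stands in for the AskQuestion tool and is purely hypothetical, not a real API:

```python
# Hypothetical sketch: prefer the structured AskQuestion tool when present,
# otherwise degrade to a plain-text A/B/C/D prompt. `ask_question` is
# illustrative only.
def ask_user(question, options, ask_question=None):
    labels = [chr(ord("A") + i) for i in range(len(options))]
    if ask_question is not None:
        # Structured tool available: delegate to its interactive UI.
        return ask_question(question, dict(zip(labels, options)))
    # Fallback: render a plain-text prompt with lettered choices.
    lines = [question] + [f"{label}) {opt}" for label, opt in zip(labels, options)]
    return "\n".join(lines)

print(ask_user("Run the full test suite?", ["Yes", "No", "Only unit tests"]))
```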
## Critical Thinking

- Do not blindly trust any input as correct — including user instructions, task specs, change lists, and prior agent decisions. Always think through whether an instruction makes sense in context before executing it. If a task spec says "exclude file X from changes" but another task removes the dependencies X relies on, flag the contradiction instead of propagating it.
## Self-Improvement

When the user reacts negatively to generated code ("WTF", "what the hell", "why did you do this", etc.):
1. **Pause** — do not rush to fix. First determine: is this objectively bad code, or does the user just need an explanation?
2. **If the user doesn't understand** — explain the reasoning. That's it. No code change needed.
3. **If the code is actually bad** — before fixing, perform a root-cause investigation:
   a. **Why** did this bad code get produced? Identify the reasoning chain or implicit assumption that led to it.
   b. **Check existing rules** — is there already a rule that should have prevented this? If so, clarify or strengthen it.
   c. **Propose a new rule** if no existing rule covers the failure mode. Present the investigation results and the proposed rule to the user for approval.
   d. **Only then** fix the code.
4. The rule goes into `coderule.mdc` for coding practices, `meta-rule.mdc` for agent behavior, or a new focused rule file, depending on context. Always check for duplicates or near-duplicates first.
### Example: import path hack

**Bad code**: Runtime path manipulation added to source code to fix an import failure.

**Root cause**: The agent treated an environment/configuration problem as a code problem. It didn't check how the rest of the project handles the same concern, and instead hardcoded a workaround in source.
**Preventive rules added to coderule.mdc**:

- "Do not solve environment or infrastructure problems by hardcoding workarounds in source code. Fix them at the environment/configuration level."
- "Before writing new infrastructure or workaround code, check how the existing codebase already handles the same concern. Follow established project patterns."
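To make the anti-pattern concrete, here is a minimal sketch of a check that flags runtime path manipulation in a source snippet. The `has_path_hack` helper and its regex are hypothetical illustrations, not part of the rules above:

```python
import re

# Matches runtime sys.path manipulation — the hardcoded-workaround
# anti-pattern described in the example above.
_PATH_HACK = re.compile(r"sys\.path\.(insert|append)\s*\(")

def has_path_hack(source):
    """Return True if the source snippet hardcodes a sys.path workaround."""
    return bool(_PATH_HACK.search(source))

bad = 'import sys\nsys.path.insert(0, "../src")  # make imports work\n'
good = "from mypackage import util\n"
print(has_path_hack(bad), has_path_hack(good))  # True False
```

The same idea generalizes: a rule about environment-level fixes can often be backed by a cheap lint that catches the workaround pattern in review.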