Refine coding standards and testing guidelines

- Updated the coding rule descriptions to emphasize readability, meaningful comments, and test verification.
- Revised guidelines to clarify the importance of avoiding boilerplate while maintaining readability.
- Enhanced the testing rules to set a minimum coverage threshold of 75% for business logic and specified criteria for test scenarios.
- Introduced a mechanism for handling skipped tests, categorizing them as legitimate or illegitimate, and outlined resolution steps.

These changes aim to improve code quality, maintainability, and testing effectiveness.
Oleksandr Bezdieniezhnykh
2026-04-17 20:27:45 +03:00
parent 4b52c0be3b
commit 06b47c17c3
17 changed files with 275 additions and 90 deletions
@@ -0,0 +1,10 @@
---
description: Rules for installation and provisioning scripts
globs: scripts/**/*.sh
alwaysApply: false
---
# Automation Scripts
- Automate everything that can be automated. If a dependency can be downloaded and installed, do it automatically — never require the user to manually download and set up prerequisites.
- Use sensible defaults for paths and configuration (e.g. `/opt/` for system-wide tools). Allow overrides via environment variables for users who need non-standard locations.
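The default-plus-override pattern above can be sketched in a few lines of POSIX shell (the tool name, variables, and paths are invented for this example):

```shell
#!/usr/bin/env sh
# Hypothetical installer fragment: sensible system-wide defaults,
# overridable via environment variables for non-standard setups.
TOOL_HOME="${TOOL_HOME:-/opt/mytool}"    # default install location
TOOL_VERSION="${TOOL_VERSION:-1.2.3}"    # pinned default, override to test newer builds
echo "installing mytool ${TOOL_VERSION} into ${TOOL_HOME}"
```

A user who needs a non-standard location simply runs `TOOL_HOME=/usr/local/mytool ./install.sh`; everyone else gets the default with zero configuration.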
@@ -1,17 +1,17 @@
---
-description: "Enforces concise, comment-free, environment-aware coding standards with strict scope discipline and test verification"
description: "Enforces readable, environment-aware coding standards with scope discipline, meaningful comments, and test verification"
alwaysApply: true
---
# Coding preferences
-- Always prefer simple solution
- Prefer the simplest solution that satisfies all requirements, including maintainability. When in doubt between two approaches, choose the one with fewer moving parts — but never sacrifice correctness, error handling, or readability for brevity.
- Follow the Single Responsibility Principle — a class or method should have one reason to change:
- If a method is hard to name precisely from the caller's perspective, its responsibility is misplaced. Vague names like "candidate", "data", or "item" are a signal — fix the design, not just the name.
- Logic specific to a platform, variant, or environment belongs in the class that owns that variant, not in the general coordinator. Passing a dependency through is preferable to leaking variant-specific concepts into shared code.
- Only use static methods for pure, self-contained computations (constants, simple math, stateless lookups). If a static method involves resource access, side effects, OS interaction, or logic that varies across subclasses or environments — use an instance method or factory class instead. Before implementing a non-trivial static method, ask the user.
-- Generate concise code
- Avoid boilerplate and unnecessary indirection, but never sacrifice readability for brevity.
- Never suppress errors silently — no `2>/dev/null`, empty `catch` blocks, bare `except: pass`, or discarded error returns. These hide the information you need most when something breaks. If an error is truly safe to ignore, log it or comment why.
-- Do not put comments in the code, except in tests: every test must use the Arrange / Act / Assert pattern with language-appropriate comment syntax (`# Arrange` for Python, `// Arrange` for C#/Rust/JS/TS). Omit any section that is not needed (e.g. if there is no setup, skip Arrange; if act and assert are the same line, keep only Assert)
-- Do not put logs unless it is an exception, or was asked specifically
- Do not add comments that merely narrate what the code does. Comments are appropriate for: non-obvious business rules, workarounds with references to issues/bugs, safety invariants, and public API contracts. Make comments as short and concise as possible. Exception: every test must use the Arrange / Act / Assert pattern with language-appropriate comment syntax (`# Arrange` for Python, `// Arrange` for C#/Rust/JS/TS). Omit any section that is not needed (e.g. if there is no setup, skip Arrange; if act and assert are the same line, keep only Assert)
- Do not add verbose debug/trace logs by default. Log exceptions, security events (auth failures, permission denials), and business-critical state transitions. Add debug-level logging only when asked.
- Do not add code annotations unless asked specifically
- Write code that accounts for the different environments: development and production
- Only make changes that are explicitly requested, or that you are confident are well understood and directly related to the requested change
@@ -22,16 +22,25 @@ alwaysApply: true
- When a test fails due to a missing dependency, install it — do not fake or stub the module system. For normal packages, add them to the project's dependency file (requirements-test.txt, package.json devDependencies, test csproj, etc.) and install. Only consider stubbing if the dependency is heavy (e.g. hardware-specific SDK, large native toolchain) — and even then, ask the user first before choosing to stub.
- Do not solve environment or infrastructure problems (dependency resolution, import paths, service discovery, connection config) by hardcoding workarounds in source code. Fix them at the environment/configuration level.
- Before writing new infrastructure or workaround code, check how the existing codebase already handles the same concern. Follow established project patterns.
-- If a file, class, or function has no remaining usages — delete it. Do not keep dead code "just in case"; git history preserves everything. Dead code rots: its dependencies drift, it misleads readers, and it breaks when the code it depends on evolves.
- If a file, class, or function has no remaining usages — delete it. Dead code rots: its dependencies drift, it misleads readers, and it breaks when the code it depends on evolves. However, before deletion verify that the symbol is not used via any of the following. If any applies, do NOT delete — leave it or ASK the user:
- Public API surface exported from the package and potentially consumed outside the workspace (see `workspace-boundary.mdc`)
- Reflection, dependency injection, or service registration (scan DI container registrations, `appsettings.json` / equivalent config, attribute-based discovery, plugin manifests)
- Dynamic dispatch from config/data (YAML/JSON references, string-based class lookups, route tables, command dispatchers)
- Test fixtures used only by currently-skipped tests — temporary skips may become active again
- Cross-repo references — if this workspace is part of a multi-repo system, grep sibling repos for shared contracts before deleting
-- Focus on the areas of code relevant to the task
-- Do not touch code that is unrelated to the task
-- Always think about what other methods and areas of code might be affected by the code changes
-- When you think you are done with changes, run the full test suite. Every failure — including pre-existing ones, collection errors, and import errors — is a **blocking gate**. Never silently ignore, skip, or proceed past a failing test. On any failure, stop and ask the user to choose one of:
- **Scope discipline**: focus edits on the task scope. The "scope" is:
- Files the task explicitly names
- Files that define interfaces the task changes
- Files that directly call, implement, or test the changed code
- **Adjacent hygiene is permitted** without asking: fixing imports you caused to break, updating obvious stale references within a file you already modify, deleting code that became dead because of your change.
- **Unrelated issues elsewhere**: do not silently fix them as part of this task. Either note them to the user at end of turn and ASK before expanding scope, or record in `_docs/_process_leftovers/` for later handling.
- Always think about what other methods and areas of code might be affected by the code changes, and surface the list to the user before modifying.
- When you think you are done with changes, run the full test suite. Every failure in tests that cover code you modified or that depend on code you modified is a **blocking gate**. For pre-existing failures in unrelated areas, report them to the user but do not block on them. Never silently ignore or skip a failure without reporting it. On any blocking failure, stop and ask the user to choose one of:
- **Investigate and fix** the failing test or source code
- **Remove the test** if it is obsolete or no longer relevant
- Do not rename databases, tables, or table columns without confirmation; avoid such renames when possible
- Never commit binaries: create and keep `.gitignore` up to date, and delete any generated binaries once the task is done
- Never force-push to main or dev branches
-- Place all source code under the `src/` directory; keep project-level config, tests, and tooling at the repo root
- For new projects, place source code under `src/` (this works for all stacks including .NET). For existing projects, follow the established directory structure. Keep project-level config, tests, and tooling at the repo root.
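As a concrete illustration of the Arrange / Act / Assert comment rule above, a minimal Python test might look like this (the `Cart` class and its methods are invented so the example is self-contained):

```python
class Cart:
    """Minimal stand-in class so the example runs on its own."""

    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)


def test_total_two_items_returns_sum():
    # Arrange
    cart = Cart()
    cart.add("apple", 2)
    cart.add("pear", 3)

    # Act
    result = cart.total()

    # Assert
    assert result == 5
```

Per the rule, a test with no setup would simply omit the `# Arrange` section rather than leave an empty placeholder.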
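The "never suppress errors silently" rule can be illustrated with a small Python sketch (the function, file path, and defaults behavior are hypothetical):

```python
import json
import logging

logger = logging.getLogger(__name__)


def load_config(path):
    # Bad: `except Exception: pass` would hide the root cause of any failure.
    # Good: let unexpected errors propagate; handle only the one case that is
    # genuinely safe, and log why it is safe instead of swallowing it.
    try:
        with open(path) as fh:
            return json.load(fh)
    except FileNotFoundError:
        # Safe to ignore only because a missing file means "use defaults" here,
        # and we say so in the log rather than discarding the information.
        logger.info("config %s not found, using defaults", path)
        return {}
```

Note that a malformed JSON file still raises: that failure carries exactly the information needed when something breaks, so it is not caught.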
@@ -23,3 +23,17 @@ globs: [".cursor/**"]
## Security
- All `.cursor/` files must be scanned for hidden Unicode before committing (see cursor-security.mdc)
## Quality Thresholds (canonical reference)
All rules and skills must reference the single source of truth below. Do NOT restate different numeric thresholds in individual rule or skill files.
| Concern | Threshold | Enforcement |
|---------|-----------|-------------|
| Test coverage on business logic | 75% | Aim (warn below); 100% on critical paths |
| Test scenario coverage (vs AC + restrictions) | 75% | Blocking in test-spec Phase 1 and Phase 3 |
| CI coverage gate | 75% | Fail build below |
| Lint errors (Critical/High) | 0 | Blocking pre-commit |
| Code-review auto-fix | Low + Medium (Style/Maint/Perf) + High (Style/Scope) | Critical and Security always escalate |
When a skill or rule needs to cite a threshold, link to this table instead of hardcoding a different number.
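One way to wire the CI coverage gate to this table — assuming the project uses Python's coverage.py with `pyproject.toml` configuration; other stacks would use their own equivalent:

```toml
# Hypothetical pyproject.toml fragment: fail the build below the canonical 75%.
[tool.coverage.report]
fail_under = 75
```

With pytest-cov the same gate can be expressed on the command line as `pytest --cov=src --cov-fail-under=75`.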
@@ -5,6 +5,7 @@ alwaysApply: true
# Git Workflow
- Work on the `dev` branch
-- Commit message format: `[TRACKER-ID-1] [TRACKER-ID-2] Summary of changes`
-- Commit message total length must not exceed 30 characters
- Commit message subject line format: `[TRACKER-ID-1] [TRACKER-ID-2] Summary of changes`
- Subject line must not exceed 72 characters (standard Git convention for the first line). The 72-char limit applies to the subject ONLY, not the full commit message.
- A commit message body is optional. Add one when the subject alone cannot convey the why of the change. Wrap the body at 72 chars per line.
- Do NOT push or merge unless the user explicitly asks you to. Always ask first if there is a need.
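The subject-line rules above could be checked mechanically with a sketch like this (the function is a hypothetical pre-commit helper, not an existing hook):

```python
def check_subject(message):
    """Return a list of problems with a commit message's subject line."""
    subject = message.splitlines()[0] if message else ""
    problems = []
    # The 72-char limit applies to the subject only, not the body.
    if len(subject) > 72:
        problems.append("subject exceeds 72 characters")
    # Subjects must lead with at least one [TRACKER-ID] prefix.
    if not subject.startswith("["):
        problems.append("subject is missing a [TRACKER-ID] prefix")
    return problems
```

For instance, `check_subject("[AZ-101] Fix login")` returns an empty list, while a subject with no tracker prefix or an over-long first line is flagged.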
@@ -4,21 +4,43 @@ alwaysApply: true
---
# Sound Notification on Human Input
Whenever you are about to ask the user a question, request confirmation, present options for a decision, or otherwise pause and wait for human input, you MUST first run the appropriate shell command for the current OS:
## Sound commands per OS
-Detect the OS from user system info or `uname -s`:
- **macOS**: `afplay /System/Library/Sounds/Glass.aiff &`
- **Linux**: `paplay /usr/share/sounds/freedesktop/stereo/bell.oga 2>/dev/null || aplay /usr/share/sounds/freedesktop/stereo/bell.oga 2>/dev/null || echo -e '\a' &`
- **Windows (PowerShell)**: `[System.Media.SystemSounds]::Exclamation.Play()`
Detect the OS from the user's system info or by running `uname -s` if unknown.
## When to play (play exactly once per trigger)
-This applies to:
-- Asking clarifying questions
-- Presenting choices (e.g. via AskQuestion tool)
-- Requesting approval for destructive actions
-- Reporting that you are blocked and need guidance
-- Any situation where the conversation will stall without user response
-- Completing a task (final answer / deliverable ready for review)
Play the sound when your turn will end in one of these states:
-Do NOT play the sound when:
-- You are in the middle of executing a multi-step task and just providing a status update
1. You are about to call the AskQuestion tool — sound BEFORE the AskQuestion call
2. Your text ends with a direct question to the user that cannot be answered without their input (e.g., "Which option do you prefer?", "What is the database name?", "Confirm before I push?")
3. You are reporting that you are BLOCKED and cannot continue without user input (missing credentials, conflicting requirements, external approval required)
4. You have just completed a destructive or irreversible action the user asked to review (commit, push, deploy, data migration, file deletion)
## When NOT to play
- You are mid-execution and returning a progress update (the conversation is not stalling)
- You are answering a purely informational or factual question and no follow-up is required
- You have already played the sound once this turn for the same pause point
- Your response only contains text describing what you did or found, with no question, no block, no irreversible action
## "Trivial" definition
A response is trivial (no sound) when ALL of the following are true:
- No explicit question to the user
- No "I am blocked" report
- No destructive/irreversible action that needs review
If any one of those is present, the response is non-trivial — play the sound.
## Ordering
The sound command is a normal Shell tool call. Place it:
- **Immediately before an AskQuestion tool call** in the same message, or
- **As the last Shell call of the turn** if ending with a text-based question, block report, or post-destructive-action review
Do not play the sound as part of routine command execution — only at the pause points listed under "When to play".
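The per-OS dispatch above can be sketched as a small shell function (this is an illustrative helper, not part of the rule; it only selects the command string and leaves running it to the caller):

```shell
#!/usr/bin/env sh
# Map the detected OS to the appropriate notification command.
sound_command() {
  case "$(uname -s)" in
    Darwin) echo "afplay /System/Library/Sounds/Glass.aiff &" ;;
    Linux)  echo "paplay /usr/share/sounds/freedesktop/stereo/bell.oga" ;;
    *)      echo "printf '\\a'" ;;  # terminal bell as a portable fallback
  esac
}

sound_command
```

The Windows/PowerShell case is handled separately since `uname` is typically unavailable there.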
@@ -5,7 +5,7 @@ alwaysApply: true
# Agent Meta Rules
## Execution Safety
-- Never run test suites, builds, Docker commands, or other long-running/resource-heavy/security-risky operations without asking the user first — unless it is explicitly stated in a skill or agent, or the user already asked to do so.
- Run the full test suite automatically when you believe code changes are complete (as required by coderule.mdc). For other long-running/resource-heavy/security-risky operations (builds, Docker commands, deployments, performance tests), ask the user first — unless explicitly stated in a skill or the user already asked to do so.
## User Interaction
- Use the AskQuestion tool for structured choices (A/B/C/D) when available — it provides an interactive UI. Fall back to plain-text questions if the tool is unavailable.
@@ -33,18 +33,30 @@ When the user reacts negatively to generated code ("WTF", "what the hell", "why
- "Before writing new infrastructure or workaround code, check how the existing codebase already handles the same concern. Follow established project patterns."
## Debugging Over Contemplation
-When the root cause of a bug is not clear after ~5 minutes of reasoning, analysis, and assumption-making — **stop speculating and add debugging logs**. Observe actual runtime behavior before forming another theory. The pattern to follow:
Agents cannot measure wall-clock time between turns. Use observable counts from your own transcript instead.
**Trigger: stop speculating and instrument.** When you've formed **3 or more distinct hypotheses** about a bug without confirming any against runtime evidence (logs, stderr, debugger state, actual test failure messages) — stop and add debugging output. Re-reading the same code hoping to "spot it this time" counts as a new hypothesis that still has zero evidence.
Steps:
1. Identify the last known-good boundary (e.g., "request enters handler") and the known-bad result (e.g., "callback never fires").
-2. Add targeted `print(..., flush=True)` or log statements at each intermediate step to narrow the gap.
-3. Read the output. Let evidence drive the next step — not inference chains built on unverified assumptions.
2. Add targeted `print(..., flush=True)`, `console.error`, or logger statements at each intermediate step to narrow the gap.
3. Run the instrumented code. Read the output. Let evidence drive the next hypothesis — not inference chains.
-Prolonged mental contemplation without evidence is a time sink. A 15-minute instrumented run beats 45 minutes of "could it be X? but then Y... unless Z..." reasoning.
An instrumented run producing real output beats any amount of "could it be X? but then Y..." reasoning.
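The instrumentation steps above can be sketched like this — the pipeline functions (`parse`, `transform`, `run`) are invented placeholders standing in for whatever code is under investigation:

```python
def parse(raw):
    return raw.strip().split(",")


def transform(items):
    return [item.upper() for item in items]


def run(raw):
    # Step 1: last known-good boundary — the input arrives here.
    print(f"run: raw={raw!r}", flush=True)
    items = parse(raw)
    # Step 2: a targeted print at each intermediate step narrows the gap
    # between known-good and known-bad.
    print(f"run: parsed={items!r}", flush=True)
    result = transform(items)
    print(f"run: transformed={result!r}", flush=True)
    # Step 3: run it, read the real output, and let the evidence — not an
    # inference chain — drive the next hypothesis.
    return result
```

`flush=True` matters: a crash between steps would otherwise discard buffered output, hiding exactly the evidence being collected.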
## Long Investigation Retrospective
-When a problem takes significantly longer than expected (>30 minutes), perform a post-mortem before closing out:
-1. **Identify the bottleneck**: Was the delay caused by assumptions that turned out wrong? Missing visibility into runtime state? Incorrect mental model of a framework or language boundary?
-2. **Extract the general lesson**: What category of mistake was this? (e.g., "Python cannot call Cython `cdef` methods", "engine errors silently swallowed", "wrong layer to fix the problem")
-3. **Propose a preventive rule**: Formulate it as a short, actionable statement. Present it to the user for approval.
-4. **Write it down**: Add the approved rule to the appropriate `.mdc` file so it applies to all future sessions.
Trigger a post-mortem when ANY of the following is true (all are observable in your own transcript):
- **10+ tool calls** were used to diagnose a single issue
- **Same file modified 3+ times** without tests going green
- **3+ distinct approaches** attempted before arriving at the fix
- Any phrase like "let me try X instead" appeared **more than twice**
- A fix was eventually found by reading docs/source the agent had dismissed earlier
Post-mortem steps:
1. **Identify the bottleneck**: wrong assumption? missing runtime visibility? incorrect mental model of a framework/language boundary? ignored evidence?
2. **Extract the general lesson**: what category of mistake was this? (e.g., "Python cannot call Cython `cdef` methods", "engine errors silently swallowed", "wrong layer to fix the problem")
3. **Propose a preventive rule**: short, actionable. Present to user for approval.
4. **Write it down**: add approved rule to the appropriate `.mdc` so it applies to future sessions.
@@ -8,7 +8,7 @@ globs: ["**/*test*", "**/*spec*", "**/*Test*", "**/tests/**", "**/test/**"]
- One assertion per test when practical; name tests descriptively: `MethodName_Scenario_ExpectedResult`
- Test boundary conditions, error paths, and happy paths
- Use mocks only for external dependencies; prefer real implementations for internal code
-- Aim for 80%+ coverage on business logic; 100% on critical paths
- Aim for 75%+ coverage on business logic; 100% on critical paths (code paths where a bug would cause data loss, security breaches, financial errors, or system outages — identify from acceptance criteria marked as must-have or from security_approach.md). The 75% threshold is canonical — see `cursor-meta.mdc` Quality Thresholds.
- Integration tests use real database (Postgres testcontainers or dedicated test DB)
- Never use `Thread.Sleep` or fixed delays in tests; use polling or async waits
- Keep test data factories/builders for reusable test setup
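The "no fixed delays" rule usually translates into a small polling helper; a minimal sketch (the helper name and defaults are illustrative, not an existing project utility):

```python
import time


def wait_until(condition, timeout=5.0, interval=0.05):
    """Poll `condition` until it returns True or `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)  # short poll interval, not a fixed wait
    return condition()  # one final check at the deadline
```

In a test this replaces a fixed delay: instead of `time.sleep(2)` followed by an assertion, write `assert wait_until(lambda: queue.empty(), timeout=2.0)` — it returns as soon as the condition holds, and only consumes the full timeout on failure.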
@@ -12,3 +12,39 @@ alwaysApply: true
- Project name: AZAION
- All task IDs follow the format `AZ-<number>`
- Issue types: Epic, Story, Task, Bug, Subtask
## Tracker Availability Gate
- If Jira MCP returns **Unauthorized**, **errored**, **connection refused**, or any non-success response: **STOP** tracker operations and notify the user.
- The user must fix the Jira MCP connection before any further ticket creation/transition/query is attempted.
- Do NOT silently create local-only tasks, skip Jira steps, or pretend the write succeeded. The tracker is the source of truth — if a status transition is lost, the team loses visibility.
## Leftovers Mechanism (non-user-input blockers only)
When a **non-user** blocker prevents a tracker write (MCP down, network error, transient failure, ticket linkage recoverable later), record the deferred write in `_docs/_process_leftovers/<YYYY-MM-DD>_<topic>.md` and continue non-tracker work. Each entry must include:
- Timestamp (ISO 8601)
- What was blocked (ticket creation, status transition, comment, link)
- Full payload that would have been written (summary, description, story points, epic, target status) — so the write can be replayed later
- Reason for the blockage (MCP unavailable, auth expired, unknown epic ID pending user clarification, etc.)
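A leftover entry following this format might look like the sketch below (the ticket ID, timestamp, and payload are invented for illustration):

```markdown
<!-- _docs/_process_leftovers/2026-04-17_jira-status-transition.md -->
- Timestamp: 2026-04-17T17:45:00+03:00
- Blocked: status transition for AZ-142 (In Progress → In Review)
- Payload: target status "In Review"; comment "Implementation complete,
  tests green"; no summary/description/points changes
- Reason: Jira MCP returned Unauthorized (auth token appears expired)
```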
### Hard gates that CANNOT be deferred to leftovers
Anything requiring user input MUST still block:
- Clarifications about requirements, scope, or priority
- Approval for destructive actions or irreversible changes
- Choice between alternatives (A/B/C decisions)
- Confirmation of assumptions that change task outcome
If a blocker of this kind appears, STOP and ASK — do not write to leftovers.
### Replay obligation
At the start of every `/autopilot` invocation, and before any new tracker write in any skill, check `_docs/_process_leftovers/` for pending entries. For each entry:
1. Attempt to replay the deferred write against the tracker
2. If replay succeeds → delete the leftover entry
3. If replay still fails → update the entry's timestamp and reason, continue
4. If the blocker now requires user input (e.g., MCP still down after N retries) → surface to the user
Autopilot must not progress past its own step 0 until all leftovers that CAN be replayed have been replayed.
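The replay loop described in steps 1–4 can be sketched as follows — the `replay` callable and the flat one-file-per-entry layout are assumptions for the sketch, not an existing API:

```python
from pathlib import Path


def replay_leftovers(directory, replay):
    """Attempt each deferred tracker write; delete entries that succeed.

    `replay` takes an entry's text and returns True if the tracker write
    was successfully replayed. Entries that still fail are returned so the
    caller can update them or surface them to the user.
    """
    pending = []
    for entry in sorted(Path(directory).glob("*.md")):
        if replay(entry.read_text()):
            entry.unlink()          # replay succeeded → drop the entry
        else:
            pending.append(entry)   # still blocked → keep for the next run
    return pending
```

A real implementation would also refresh each failed entry's timestamp and reason, and escalate to the user once retries are exhausted.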
@@ -0,0 +1,7 @@
# Workspace Boundary
- Only modify files within the current repository (workspace root).
- Never write, edit, or delete files in sibling repositories or parent directories outside the workspace.
- When a task requires changes in another repository (e.g., admin API, flights, UI), **document** the required changes in the task's implementation notes or a dedicated cross-repo doc — do not implement them.
- The mock API at `e2e/mocks/mock_api/` may be updated to reflect the expected contract of external services, but this is a test mock — not the real implementation.
- If a task is entirely scoped to another repository, mark it as out-of-scope for this workspace and note the target repository.