mirror of
https://github.com/azaion/ai-training.git
synced 2026-04-22 07:06:36 +00:00
Update configuration and test structure for improved clarity and functionality
- Modified `.gitignore` to include test fixture data while excluding test results.
- Updated `config.yaml` to change the model from 'yolo11m.yaml' to 'yolo26m.pt'.
- Enhanced `.cursor/rules/coderule.mdc` with additional guidelines for test environment consistency and infrastructure handling.
- Revised autopilot state management in `_docs/_autopilot_state.md` to reflect current progress and tasks.
- Removed outdated augmentation tests and adjusted dataset formation tests to align with the new structure.

These changes streamline the configuration and testing processes, ensuring better organization and clarity in the project.
@@ -11,8 +11,11 @@ alwaysApply: true
- Write code that takes into account the different environments: development, production
- Only make changes that are requested, or that you are confident are well understood and related to the change being requested
- Mock data only in tests; never mock data in the dev or prod environment
- Make the test environment (files, db and so on) as close as possible to the production environment
- When you add new libraries or dependencies, make sure you use the same version as other parts of the code
- When a test fails due to a missing dependency, install it — do not fake or stub the module system. For normal packages, add them to the project's dependency file (requirements-test.txt, package.json devDependencies, test csproj, etc.) and install. Only consider stubbing if the dependency is heavy (e.g. hardware-specific SDK, large native toolchain) — and even then, ask the user first before choosing to stub.
- Do not solve environment or infrastructure problems (dependency resolution, import paths, service discovery, connection config) by hardcoding workarounds in source code. Fix them at the environment/configuration level.
- Before writing new infrastructure or workaround code, check how the existing codebase already handles the same concern. Follow established project patterns.

- Focus on the areas of code relevant to the task
- Do not touch code that is unrelated to the task
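The missing-dependency rule above could be sketched as follows. This is a minimal illustration under assumptions: the helper name, a pip-based project, and a `requirements-test.txt` file; it is not part of the actual repo.

```python
# Hypothetical sketch of the "install, don't stub" rule: when a test import
# fails, record the package in the test requirements file and install it for
# real instead of faking the module system.
import subprocess
from pathlib import Path

def add_test_dependency(package: str, req_file: str = "requirements-test.txt",
                        install: bool = True) -> list[str]:
    """Append `package` to the test requirements file (if missing) and install it."""
    path = Path(req_file)
    lines = path.read_text().splitlines() if path.exists() else []
    if package not in lines:
        lines.append(package)
        path.write_text("\n".join(lines) + "\n")
    if install:
        # Real install, not a stub — per the rule above.
        subprocess.run(["pip", "install", package], check=True)
    return lines
```

Heavy dependencies (hardware SDKs, native toolchains) would bypass this path entirely and go to the user first, as the rule states.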
@@ -17,11 +17,5 @@ globs: [".cursor/**"]
## Agent Files (.cursor/agents/)
- Must have `name` and `description` in frontmatter

## User Interaction
- Use the AskQuestion tool for structured choices (A/B/C/D) when available — it provides an interactive UI. Fall back to plain-text questions if the tool is unavailable.

## Execution Safety
- Never run test suites, builds, Docker commands, or other long-running/resource-heavy/security-risky operations without asking the user first — unless it is explicitly stated in a skill or agent, or the user already asked to do so.

## Security
- All `.cursor/` files must be scanned for hidden Unicode before committing (see cursor-security.mdc)
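The hidden-Unicode scan required by the Security rule might look like the sketch below. The exact character set checked by cursor-security.mdc is not shown in this diff, so the list here is an assumption covering common invisible and bidi control characters.

```python
# Hypothetical hidden-Unicode scanner for `.cursor/` files. The SUSPECT set
# is an assumption: zero-width characters and bidi controls commonly used to
# hide or reorder text in source files.
SUSPECT = {
    "\u200b", "\u200c", "\u200d", "\u2060", "\ufeff",   # zero-width chars / BOM
    "\u202a", "\u202b", "\u202c", "\u202d", "\u202e",   # bidi embeddings/overrides
    "\u2066", "\u2067", "\u2068", "\u2069",             # bidi isolates
}

def find_hidden_unicode(text: str) -> list[tuple[int, str]]:
    """Return (index, codepoint) pairs for suspicious characters in `text`."""
    return [(i, f"U+{ord(ch):04X}") for i, ch in enumerate(text) if ch in SUSPECT]
```

A pre-commit hook could run this over every staged `.cursor/` file and block the commit on any hit.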
@@ -0,0 +1,30 @@
---
description: "Execution safety, user interaction, and self-improvement protocols for the AI agent"
alwaysApply: true
---
# Agent Meta Rules

## Execution Safety
- Never run test suites, builds, Docker commands, or other long-running/resource-heavy/security-risky operations without asking the user first — unless it is explicitly stated in a skill or agent, or the user already asked to do so.

## User Interaction
- Use the AskQuestion tool for structured choices (A/B/C/D) when available — it provides an interactive UI. Fall back to plain-text questions if the tool is unavailable.

## Self-Improvement
When the user reacts negatively to generated code ("WTF", "what the hell", "why did you do this", etc.):

1. **Pause** — do not rush to fix. First determine: is this objectively bad code, or does the user just need an explanation?
2. **If the user doesn't understand** — explain the reasoning. That's it. No code change needed.
3. **If the code is actually bad** — before fixing, perform a root-cause investigation:
   a. **Why** did this bad code get produced? Identify the reasoning chain or implicit assumption that led to it.
   b. **Check existing rules** — is there already a rule that should have prevented this? If so, clarify or strengthen it.
   c. **Propose a new rule** if no existing rule covers the failure mode. Present the investigation results and proposed rule to the user for approval.
   d. **Only then** fix the code.
4. The rule goes into `coderule.mdc` for coding practices, `meta-rule.mdc` for agent behavior, or a new focused rule file — depending on context. Always check for duplicates or near-duplicates first.

### Example: import path hack

**Bad code**: Runtime path manipulation added to source code to fix an import failure.

**Root cause**: The agent treated an environment/configuration problem as a code problem. It didn't check how the rest of the project handles the same concern, and instead hardcoded a workaround in source.

**Preventive rules added to coderule.mdc**:
- "Do not solve environment or infrastructure problems by hardcoding workarounds in source code. Fix them at the environment/configuration level."
- "Before writing new infrastructure or workaround code, check how the existing codebase already handles the same concern. Follow established project patterns."
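A guard for the import-path-hack failure mode described in the example above could be sketched like this. The pattern list is an assumption; a real linter would cover more manipulation idioms.

```python
# Hypothetical check flagging runtime sys.path manipulation in source files —
# the "environment problem patched in code" smell from the example above.
import re

PATH_HACK = re.compile(r"sys\.path\.(insert|append)\s*\(")

def find_path_hacks(source: str) -> list[int]:
    """Return 1-based line numbers containing sys.path manipulation."""
    return [i for i, line in enumerate(source.splitlines(), start=1)
            if PATH_HACK.search(line)]
```

The environment-level fix for such a hit would be an editable install or test-runner path configuration, not a source-code change.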
@@ -0,0 +1,9 @@
# Work Item Tracker

- Use **Jira** as the sole work item tracker (MCP server: `user-Jira-MCP-Server`)
- Do NOT use Azure DevOps for work item management
- Jira cloud ID: `denyspopov.atlassian.net`
- Project key: `AZ`
- Project name: AZAION
- All task IDs follow the format `AZ-<number>`
- Issue types: Epic, Story, Task, Bug, Subtask
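The `AZ-<number>` ID convention above is easy to enforce mechanically; a minimal sketch (helper names are hypothetical, not part of the tracker config):

```python
# Hypothetical validators for the AZ-<number> task ID format.
import re

TASK_ID = re.compile(r"^AZ-\d+$")

def is_valid_task_id(task_id: str) -> bool:
    """True if `task_id` exactly matches AZ-<number>."""
    return bool(TASK_ID.match(task_id))

def extract_task_ids(text: str) -> list[str]:
    """Pull every AZ-<number> reference out of free text, e.g. a commit message."""
    return re.findall(r"\bAZ-\d+\b", text)
```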
@@ -41,7 +41,7 @@ retry_count: 3
### State File Rules

1. **Create** on the first autopilot invocation (after state detection determines Step 1)
-2. **Update** after every step completion, session boundary, or failed retry
+2. **Update** after every change — this includes: batch completion, sub-step progress, step completion, session boundary, failed retry, or any meaningful state transition. The state file must always reflect the current reality.
3. **Read** as the first action on every invocation — before folder scanning
4. **Cross-check**: verify against actual `_docs/` folder contents. If they disagree, trust the folder structure and update the state file
5. **Never delete** the state file
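Rule 4's cross-check could be sketched as below. The state-file field name and the batch-report layout under `_docs/` are assumptions for illustration.

```python
# Hypothetical cross-check: reconcile the recorded state with what actually
# exists in the _docs/ folder, trusting the folder when they disagree.
from pathlib import Path

def cross_check_state(state: dict, docs_dir: str) -> dict:
    """Reconcile recorded completed batches with batch reports on disk."""
    on_disk = {p.stem for p in Path(docs_dir).glob("03_implementation/batch_*_report.md")}
    recorded = set(state.get("completed_batches", []))
    if recorded != on_disk:
        # Folder structure wins; update the state file's view of reality.
        state["completed_batches"] = sorted(on_disk)
    return state
```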
@@ -128,31 +128,31 @@ Auto-fix loop with bounded retries (max 2 attempts) before escalating to user:

Track `auto_fix_attempts` count in the batch report for retrospective analysis.

### 10. Test

- Read and execute `.cursor/skills/test-run/SKILL.md` (detect runner, run suite, diagnose failures, present blocking choices)
- Test failures are a **blocking gate** — do not proceed to commit until the test-run skill completes with a user decision
- Note: the autopilot also runs a separate full test suite after all implementation batches complete (greenfield Step 7, existing-code Steps 6/10). This is intentional — per-batch tests are regression checks, the post-implement run is final validation.
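The bounded auto-fix loop named in the hunk header (max 2 attempts, then escalate) can be sketched generically. The `check`/`fix` callables and return fields are placeholders, not real project APIs.

```python
# Hypothetical bounded auto-fix loop: retry at most `max_attempts` times,
# keep the attempt count for the batch report, escalate if still failing.
from typing import Callable

def auto_fix(check: Callable[[], bool], fix: Callable[[], None],
             max_attempts: int = 2) -> dict:
    attempts = 0
    while not check() and attempts < max_attempts:
        fix()
        attempts += 1
    return {"auto_fix_attempts": attempts,
            "escalate_to_user": not check()}
```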
-### 11. Commit and Push
+### 10. Commit and Push

- After user confirms the batch (explicitly for FAIL, implicitly for PASS/PASS_WITH_WARNINGS):
  - `git add` all changed files from the batch
  - `git commit` with a message that includes ALL task IDs (tracker IDs or numeric prefixes) of tasks implemented in the batch, followed by a summary of what was implemented. Format: `[TASK-ID-1] [TASK-ID-2] ... Summary of changes`
  - `git push` to the remote branch
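The commit-message format in this step is mechanical enough to sketch (the helper name is hypothetical):

```python
# Hypothetical builder for the batch commit message format:
# `[TASK-ID-1] [TASK-ID-2] ... Summary of changes`
def batch_commit_message(task_ids: list[str], summary: str) -> str:
    return " ".join(f"[{tid}]" for tid in task_ids) + f" {summary}"
```

The result would be passed to `git commit -m` for the batch.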
-### 12. Update Tracker Status → In Testing
+### 11. Update Tracker Status → In Testing

After the batch is committed and pushed, transition the ticket status of each task in the batch to **In Testing** via the configured work item tracker. If `tracker: local`, skip this step.
-### 13. Archive Completed Tasks
+### 12. Archive Completed Tasks

Move each completed task file from `TASKS_DIR/todo/` to `TASKS_DIR/done/`.
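The archive step is a plain file move; a minimal sketch, assuming only the `todo/`/`done/` layout stated above:

```python
# Hypothetical archive step: move completed task files from TASKS_DIR/todo/
# to TASKS_DIR/done/, creating done/ on first use.
import shutil
from pathlib import Path

def archive_tasks(tasks_dir: str, completed: list[str]) -> list[str]:
    """Move each named file from todo/ to done/; return the files actually moved."""
    done = Path(tasks_dir) / "done"
    done.mkdir(parents=True, exist_ok=True)
    moved = []
    for name in completed:
        src = Path(tasks_dir) / "todo" / name
        if src.exists():
            shutil.move(str(src), str(done / name))
            moved.append(name)
    return moved
```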
-### 14. Loop
+### 13. Loop

- Go back to step 2 until all tasks in `todo/` are done
- When all tasks are complete, report final summary
### 14. Final Test Run

- After all batches are complete, run the full test suite once
- Read and execute `.cursor/skills/test-run/SKILL.md` (detect runner, run suite, diagnose failures, present blocking choices)
- Test failures are a **blocking gate** — do not proceed until the test-run skill completes with a user decision
- When tests pass, report final summary

## Batch Report Persistence
@@ -195,7 +195,7 @@ After each batch, produce a structured report:

| Implementer fails same approach 3+ times | Stop it, escalate to user |
| Task blocked on external dependency (not in task list) | Report and skip |
| File ownership conflict unresolvable | ASK user |
| Any test failure after a batch | Delegate to test-run skill — blocking gate |
| Test failure after final test run | Delegate to test-run skill — blocking gate |
| All tasks complete | Report final summary, suggest final commit |
| `_dependencies_table.md` missing | STOP — run `/decompose` first |
@@ -203,7 +203,7 @@ After each batch, produce a structured report:

Each batch commit serves as a rollback checkpoint. If recovery is needed:

- **Tests fail after a batch commit**: `git revert <batch-commit-hash>` using the hash from the batch report in `_docs/03_implementation/`
- **Tests fail after final test run**: `git revert <batch-commit-hash>` using hashes from the batch reports in `_docs/03_implementation/`
- **Resuming after interruption**: Read `_docs/03_implementation/batch_*_report.md` files to determine which batches completed, then continue from the next batch
- **Multiple consecutive batches fail**: Stop and escalate to user with links to batch reports and commit hashes
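The revert path above depends on reading the batch commit hash back out of a batch report. A minimal sketch; the `commit_hash:` field name is an assumption about the report format, not documented here.

```python
# Hypothetical rollback helper: extract the batch commit hash from a batch
# report and build the corresponding `git revert` command.
import re

def revert_command_from_report(report_text: str):
    """Return the git revert argv for the report's commit hash, or None."""
    m = re.search(r"commit_hash:\s*([0-9a-f]{7,40})", report_text)
    return ["git", "revert", m.group(1)] if m else None
```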
@@ -212,4 +212,4 @@ Each batch commit serves as a rollback checkpoint. If recovery is needed:

- Never launch tasks whose dependencies are not yet completed
- Never allow two parallel agents to write to the same file
- If a subagent fails or is flagged as stuck, stop it and report — do not let it loop indefinitely
-- Always run tests after each batch completes
+- Always run the full test suite after all batches complete (step 14)
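The first two invariants above (dependency-complete before launch, no shared file ownership between parallel agents) can be sketched as a scheduling filter. The task-dict shape here is an assumption, not the real `_dependencies_table.md` format.

```python
# Hypothetical scheduler check enforcing the two parallelism invariants:
# a task launches only when all its dependencies are done, and no two
# tasks in the same wave claim the same file.
def launchable(tasks: dict, done: set) -> list:
    ready, claimed_files = [], set()
    for name, spec in tasks.items():
        if name in done:
            continue
        if not set(spec["deps"]) <= done:
            continue                       # dependency not yet completed
        if claimed_files & set(spec["files"]):
            continue                       # file ownership conflict
        claimed_files |= set(spec["files"])
        ready.append(name)
    return ready
```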