---
name: test-run
description: |
  Run the project's test suite, report results, and handle failures.
  Detects test runners automatically (pytest, dotnet test, cargo test, npm test)
  or uses scripts/run-tests.sh if available.
  Trigger phrases:
  - "run tests", "test suite", "verify tests"
category: build
tags: [testing, verification, test-suite]
disable-model-invocation: true
---

# Test Run

Run the project's test suite and report results. This skill is invoked by the autopilot at verification checkpoints — after implementing tests, after implementing features, or at any point where the test suite must pass before proceeding.

## Workflow

### 1. Detect Test Runner

Check in order — first match wins:

1. `scripts/run-tests.sh` exists → use it
2. `docker-compose.test.yml` or an equivalent test environment exists → spin it up first, then detect the runner below
3. Auto-detect from project files:
   - `pytest.ini`, `pyproject.toml` with `[tool.pytest]`, or `conftest.py` → `pytest`
   - `*.csproj` or `*.sln` → `dotnet test`
   - `Cargo.toml` → `cargo test`
   - `package.json` with a `test` script → `npm test`
   - `Makefile` with a `test` target → `make test`

If no runner is detected → report failure and ask the user to specify one.

### 2. Run Tests

1. Execute the detected test runner
2. Capture output: passed, failed, skipped, errors
3. If a test environment was spun up, tear it down after tests complete

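Steps 1 and 2 above might look like the following. This is an assumed sketch, not part of the skill: it relies only on the exit code, which every runner listed (pytest, dotnet test, cargo test, npm test) sets to non-zero on failure, and it skips the runner-specific output parsing needed to count passed/failed/skipped.

```python
import subprocess

def run_tests(command: str) -> tuple[bool, str]:
    """Run the detected test command; return (passed, combined output).

    Hypothetical sketch. Counting passed/failed/skipped requires parsing
    each runner's own output format and is omitted here.
    """
    proc = subprocess.run(command, shell=True, capture_output=True, text=True)
    return proc.returncode == 0, proc.stdout + proc.stderr
```
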
### 3. Report Results

Present a summary:

```
══════════════════════════════════════
TEST RESULTS: [N passed, M failed, K skipped]
══════════════════════════════════════
```

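The summary box can be produced by a small formatter. `format_summary` and the bar width of 38 characters are illustrative assumptions, not defined by the skill.

```python
BAR = "═" * 38  # assumed width; match the box shown above

def format_summary(passed: int, failed: int, skipped: int) -> str:
    """Render the TEST RESULTS summary box."""
    line = f"TEST RESULTS: [{passed} passed, {failed} failed, {skipped} skipped]"
    return "\n".join([BAR, line, BAR])
```
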
### 4. Handle Outcome

**All tests pass** → return success to the autopilot for auto-chain.

**Tests fail** → present using Choose format:

```
══════════════════════════════════════
TEST RESULTS: [N passed, M failed, K skipped]
══════════════════════════════════════
A) Fix failing tests and re-run
B) Proceed anyway (not recommended)
C) Abort — fix manually
══════════════════════════════════════
Recommendation: A — fix failures before proceeding
══════════════════════════════════════
```

- If the user picks A → attempt to fix the failures, then re-run (loop back to step 2)
- If the user picks B → return success with a warning to the autopilot
- If the user picks C → return failure to the autopilot

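The outcome handling above reduces to a small mapping from the user's choice to what is reported back to the autopilot. Everything here is an illustrative assumption: `handle_choice` and the status labels `"success"`, `"success-with-warning"`, `"failure"`, and `"retry"` are not defined by the skill.

```python
def handle_choice(choice: str, all_passed: bool) -> str:
    """Map the Choose-format selection to an assumed status label.

    Hypothetical sketch: status strings are illustrative, not part of
    the skill's contract with the autopilot.
    """
    if all_passed:
        return "success"  # auto-chain without prompting
    return {
        "A": "retry",                  # fix failures, loop back to step 2
        "B": "success-with-warning",   # proceed anyway
        "C": "failure",                # abort, fix manually
    }[choice.upper()]
```
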
## Trigger Conditions

This skill is invoked by the autopilot at test verification checkpoints. It is not typically invoked directly by the user.