---
name: test-run
description: |
  Run the project's test suite, report results, and handle failures.
  Detects test runners automatically (pytest, dotnet test, cargo test, npm test)
  or uses scripts/run-tests.sh if available.
  Trigger phrases:
  - "run tests", "test suite", "verify tests"
category: build
tags: [testing, verification, test-suite]
disable-model-invocation: true
---

# Test Run

Run the project's test suite and report results. This skill is invoked by the autopilot at verification checkpoints — after implementing tests, after implementing features, or at any point where the test suite must pass before proceeding.

## Workflow

### 1. Detect Test Runner

Check in order — first match wins:

1. `scripts/run-tests.sh` exists → use it
2. `docker-compose.test.yml` or equivalent test environment exists → spin it up first, then detect the runner below
3. Auto-detect from project files:
   - `pytest.ini`, `pyproject.toml` with `[tool.pytest]`, or `conftest.py` → `pytest`
   - `*.csproj` or `*.sln` → `dotnet test`
   - `Cargo.toml` → `cargo test`
   - `package.json` with a `test` script → `npm test`
   - `Makefile` with a `test` target → `make test`

If no runner is detected → report failure and ask the user to specify one.

### 2. Run Tests

1. Execute the detected test runner
2. Capture output: passed, failed, skipped, errors
3. If a test environment was spun up, tear it down after tests complete

### 3. Report Results

Present a summary:

```
══════════════════════════════════════
TEST RESULTS: [N passed, M failed, K skipped]
══════════════════════════════════════
```
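
One way to build this banner from runner output, assuming pytest-style "N passed, M failed" phrasing (the parsing is illustrative, not part of the skill):

```python
import re

def summarize(output: str) -> str:
    """Format the TEST RESULTS banner from pytest-style runner output."""
    def count(label: str) -> int:
        # e.g. "4 passed" -> 4; missing labels count as 0
        m = re.search(rf"(\d+) {label}", output)
        return int(m.group(1)) if m else 0
    passed, failed, skipped = count("passed"), count("failed"), count("skipped")
    bar = "═" * 38
    return (f"{bar}\n"
            f"TEST RESULTS: [{passed} passed, {failed} failed, {skipped} skipped]\n"
            f"{bar}")
```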

### 4. Handle Outcome

**All tests pass** → return success to the autopilot for auto-chain.

**Tests fail** → present using Choose format:

```
══════════════════════════════════════
TEST RESULTS: [N passed, M failed, K skipped]
══════════════════════════════════════
A) Fix failing tests and re-run
B) Proceed anyway (not recommended)
C) Abort — fix manually
══════════════════════════════════════
Recommendation: A — fix failures before proceeding
══════════════════════════════════════
```

- If the user picks A → attempt to fix the failures, then re-run (loop back to step 2)
- If the user picks B → return success with a warning to the autopilot
- If the user picks C → return failure to the autopilot
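
The branching above reduces to a small dispatch. Here `choose` is a hypothetical callback that presents the A/B/C options and returns the user's pick; the return strings are illustrative stand-ins for whatever signal the autopilot expects.

```python
from typing import Callable

def handle_outcome(failed: int, choose: Callable[[list[str]], str]) -> str:
    """Map test results and the user's Choose pick to the autopilot signal."""
    if failed == 0:
        return "success"                 # all tests pass: auto-chain
    pick = choose(["A", "B", "C"])
    if pick == "A":
        return "retry"                   # fix failures, loop back to step 2
    if pick == "B":
        return "success-with-warning"    # proceed anyway (not recommended)
    return "failure"                     # C: abort, user fixes manually
```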

## Trigger Conditions

This skill is invoked by the autopilot at test verification checkpoints. It is not typically invoked directly by the user.