---
name: test-run
description: Run the project's test suite, report results, and handle failures. Detects test runners automatically (pytest, dotnet test, cargo test, npm test) or uses scripts/run-tests.sh if available. Trigger phrases: "run tests", "test suite", "verify tests"
category: build
tags:
  - testing
  - verification
  - test-suite
disable-model-invocation: true
---

Test Run

Run the project's test suite and report results. This skill is invoked by the autopilot at verification checkpoints — after implementing tests, after implementing features, or at any point where the test suite must pass before proceeding.

Workflow

1. Detect Test Runner

Check in order — first match wins:

  1. scripts/run-tests.sh exists → use it (the script already encodes the correct execution strategy)
  2. docker-compose.test.yml exists → run the Execution Environment Check (see below). Docker is preferred; use it unless hardware constraints prevent it.
  3. Auto-detect from project files:
    • pytest.ini, pyproject.toml with [tool.pytest], or conftest.py → pytest
    • *.csproj or *.sln → dotnet test
    • Cargo.toml → cargo test
    • package.json with test script → npm test
    • Makefile with test target → make test

If no runner detected → report failure and ask user to specify.
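
A minimal sketch of this detection order, assuming a Python helper (the function name and the exact Docker invocation are illustrative; the real compose command depends on the services defined in docker-compose.test.yml):

```python
from pathlib import Path
import json

def detect_test_command(root: Path) -> str | None:
    """Return a test command using the detection order above; first match wins."""
    if (root / "scripts/run-tests.sh").exists():
        return "bash scripts/run-tests.sh"
    if (root / "docker-compose.test.yml").exists():
        # Exact service/invocation depends on the compose file; this is a placeholder.
        return "docker compose -f docker-compose.test.yml up --build --abort-on-container-exit"
    if (root / "pytest.ini").exists() or (root / "conftest.py").exists():
        return "pytest"
    pyproject = root / "pyproject.toml"
    if pyproject.exists() and "[tool.pytest" in pyproject.read_text():
        return "pytest"
    if any(root.rglob("*.csproj")) or any(root.rglob("*.sln")):
        return "dotnet test"
    if (root / "Cargo.toml").exists():
        return "cargo test"
    package_json = root / "package.json"
    if package_json.exists():
        scripts = json.loads(package_json.read_text()).get("scripts", {})
        if "test" in scripts:
            return "npm test"
    makefile = root / "Makefile"
    if makefile.exists() and "test:" in makefile.read_text():
        return "make test"
    return None  # no runner detected -> report failure and ask the user
```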

Execution Environment Check

  1. Check _docs/02_document/tests/environment.md for a "Test Execution" section. If the test-spec skill already assessed hardware dependencies and recorded a decision (local / docker / both), follow that decision.
  2. If the "Test Execution" section says local → run tests directly on host (no Docker).
  3. If the "Test Execution" section says docker → use Docker (docker-compose).
  4. If the "Test Execution" section says both → run local first, then Docker (or vice versa), and merge results.
  5. If no prior decision exists → fall back to the hardware-dependency detection logic from the test-spec skill's "Hardware-Dependency & Execution Environment Assessment" section. Ask the user if hardware indicators are found.
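
A sketch of how the recorded decision might be read, assuming the "Test Execution" section is a markdown heading whose body names one of local / docker / both (the exact file format is an assumption):

```python
import re
from pathlib import Path

ENV_DOC = Path("_docs/02_document/tests/environment.md")

def read_execution_decision() -> str | None:
    """Return 'local', 'docker', or 'both' if the Test Execution section records a decision."""
    if not ENV_DOC.exists():
        return None  # no prior decision -> fall back to hardware-dependency detection
    text = ENV_DOC.read_text(encoding="utf-8")
    # Grab the "Test Execution" section up to the next heading.
    match = re.search(r"#+\s*Test Execution\b(.*?)(?=\n#|\Z)", text, re.S | re.I)
    if not match:
        return None
    section = match.group(1).lower()
    # Check "both" first so a section mentioning local and docker is not misread.
    for decision in ("both", "docker", "local"):
        if decision in section:
            return decision
    return None
```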

2. Run Tests

  1. Execute the detected test runner
  2. Capture output: passed, failed, skipped, errors
  3. If a test environment was spun up, tear it down after tests complete
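
For the pytest case, step 2 can be sketched as follows (the summary parsing is a heuristic; dotnet, cargo, and npm emit different output formats and would need their own parsers):

```python
import re
import subprocess

def run_pytest(used_docker: bool = False) -> dict:
    """Run pytest, parse its summary counts, and tear down the environment if one was spun up."""
    try:
        proc = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
        output = proc.stdout + proc.stderr
        counts = {"passed": 0, "failed": 0, "skipped": 0, "errors": 0}
        # Parse the trailing summary, e.g. "3 passed, 1 failed, 2 skipped, 1 error in 4.21s".
        for number, label in re.findall(r"(\d+) (passed|failed|skipped|error)", output):
            key = "errors" if label == "error" else label
            counts[key] += int(number)
        counts["exit_code"] = proc.returncode  # non-zero also covers collection errors
        return counts
    finally:
        if used_docker:
            # Tear down the compose environment regardless of the test outcome.
            subprocess.run(["docker", "compose", "-f", "docker-compose.test.yml", "down"],
                           capture_output=True)
```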

3. Report Results

Present a summary:

══════════════════════════════════════
 TEST RESULTS: [N passed, M failed, K skipped, E errors]
══════════════════════════════════════

Important: Collection errors (import failures, missing dependencies, syntax errors) count as failures — they are not "skipped" or ignorable.

4. Diagnose Failures

Before presenting choices, list every failing/erroring test with a one-line root cause:

Failures:
 1. test_foo.py::test_bar — missing dependency 'netron' (not installed)
 2. test_baz.py::test_qux — AssertionError: expected 5, got 3 (logic error)
 3. test_old.py::test_legacy — ImportError: no module 'removed_module' (possibly obsolete)

Categorize each as: missing dependency, broken import, logic/assertion error, possibly obsolete, or environment-specific.
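
One possible heuristic for this categorization, keyed on common Python error signatures (the patterns are illustrative, not exhaustive; real diagnosis still requires reading the traceback):

```python
def categorize_failure(error_text: str) -> str:
    """Map a failing test's output to one of the categories above (heuristic only)."""
    text = error_text.lower()
    if "modulenotfounderror" in text or "no module named" in text:
        return "missing dependency"
    if "importerror" in text:
        return "broken import (possibly obsolete)"
    if "assertionerror" in text:
        return "logic/assertion error"
    if any(hint in text for hint in ("connection refused", "permission denied", "no such device")):
        return "environment-specific"
    return "needs manual diagnosis"
```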

5. Handle Outcome

All tests pass → return success to the autopilot for auto-chain.

Any test fails or errors → this is a blocking gate. Never silently ignore or skip failures. Present using Choose format:

══════════════════════════════════════
 TEST RESULTS: [N passed, M failed, K skipped, E errors]
══════════════════════════════════════
 A) Investigate and fix failing tests/code, then re-run
 B) Remove obsolete tests (if diagnosis shows they are no longer relevant)
 C) Abort — fix manually
══════════════════════════════════════
 Recommendation: A — fix failures before proceeding
══════════════════════════════════════
  • If user picks A → investigate root causes, attempt fixes, then re-run (loop back to step 2)
  • If user picks B → confirm which tests to remove, delete them, then re-run (loop back to step 2)
  • If user picks C → return failure to the autopilot

Trigger Conditions

This skill is invoked by the autopilot at test verification checkpoints. It is not typically invoked directly by the user.