# Test Runner Script Structure

Reference for generating `scripts/run-tests.sh` and `scripts/run-performance-tests.sh`.

## `scripts/run-tests.sh`

```bash
#!/usr/bin/env bash
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
UNIT_ONLY=false
RESULTS_DIR="$PROJECT_ROOT/test-results"

for arg in "$@"; do
  case $arg in
    --unit-only) UNIT_ONLY=true ;;
  esac
done

cleanup() {
  : # tear down docker-compose if it was started (":" keeps the body valid until filled in)
}
trap cleanup EXIT

mkdir -p "$RESULTS_DIR"

# --- Unit Tests ---
# [detect runner: pytest / dotnet test / cargo test / npm test]
# [run and capture exit code]
# [save results to $RESULTS_DIR/unit-results.*]

# --- Blackbox Tests (skip if --unit-only) ---
# if ! $UNIT_ONLY; then
#   [docker compose -f <compose-file> up -d]
#   [wait for health checks]
#   [run blackbox test suite]
#   [save results to $RESULTS_DIR/blackbox-results.*]
# fi

# --- Summary ---
# [print passed / failed / skipped counts]
# [exit 0 if all passed, exit 1 otherwise]
```
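The runner-detection placeholder might, as one illustration, expand along these lines. This is a sketch, not part of the skill itself: the marker-file names and their precedence are assumptions to adjust for the detected stack.

```bash
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical sketch: pick a unit-test runner by probing for the
# conventional marker files of each ecosystem.
detect_runner() {
  local root="$1"
  if [ -f "$root/pytest.ini" ] || [ -f "$root/pyproject.toml" ]; then
    echo "pytest"
  elif compgen -G "$root/*.csproj" > /dev/null || compgen -G "$root/*.sln" > /dev/null; then
    echo "dotnet test"
  elif [ -f "$root/Cargo.toml" ]; then
    echo "cargo test"
  elif [ -f "$root/package.json" ]; then
    echo "npm test"
  else
    echo "unknown"
  fi
}
```

The save-results step would then dispatch on the returned name, e.g. for pytest something like `pytest --junitxml="$RESULTS_DIR/unit-results.xml"`.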

## `scripts/run-performance-tests.sh`

```bash
#!/usr/bin/env bash
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
RESULTS_DIR="$PROJECT_ROOT/test-results"

cleanup() {
  : # tear down test environment if started (":" keeps the body valid until filled in)
}
trap cleanup EXIT

mkdir -p "$RESULTS_DIR"

# --- Start System Under Test ---
# [docker compose up -d or start local server]
# [wait for health checks]

# --- Run Performance Scenarios ---
# [detect tool: k6 / locust / artillery / wrk / built-in]
# [run each scenario from performance-tests.md]
# [capture metrics: latency P50/P95/P99, throughput, error rate]

# --- Compare Against Thresholds ---
# [read thresholds from test spec or CLI args]
# [print per-scenario pass/fail]

# --- Summary ---
# [exit 0 if all thresholds met, exit 1 otherwise]
```
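For the threshold-comparison step, a small helper like the following could print the per-scenario pass/fail. This is a sketch under assumptions: the metric names and millisecond units are illustrative, and `awk` is used because plain bash cannot compare floating-point values.

```bash
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical sketch: compare one measured metric against its threshold.
# awk does the floating-point comparison; exit status drives pass/fail.
check_threshold() {
  local scenario="$1" metric="$2" measured="$3" limit="$4"
  if awk -v m="$measured" -v l="$limit" 'BEGIN { exit !(m <= l) }'; then
    echo "PASS $scenario: $metric=${measured}ms (limit ${limit}ms)"
  else
    echo "FAIL $scenario: $metric=${measured}ms (limit ${limit}ms)"
    return 1
  fi
}
```

The summary step can then count failures and map "any FAIL" to exit code 1, matching the exit-code contract below.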

## Key Requirements

- Both scripts must be idempotent (safe to run multiple times)
- Both scripts must work in CI (no interactive prompts, no GUI)
- Use `trap cleanup EXIT` to ensure teardown even on failure
- Exit codes: 0 = all pass, 1 = failures detected
- Write results to the `test-results/` directory (add it to `.gitignore` if not already present)
- The actual commands depend on the detected tech stack; fill them in during Phase 4 of the test-spec skill
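The idempotency and `trap` requirements combine naturally into a pattern where cleanup only tears down what this run actually started. A sketch, with the real `docker compose` teardown left as a comment since the command depends on the stack:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical sketch: track what was started so cleanup is safe to run
# zero, one, or many times.
STACK_STARTED=false

start_stack() {
  # docker compose -f "$COMPOSE_FILE" up -d   # real startup goes here
  STACK_STARTED=true
}

cleanup() {
  if [ "$STACK_STARTED" = true ]; then
    # docker compose -f "$COMPOSE_FILE" down --volumes || true   # real teardown goes here
    STACK_STARTED=false   # guard: a second invocation becomes a no-op
  fi
}
trap cleanup EXIT
```

Because the trap fires on both success and failure, and the guard flag makes repeat invocations harmless, the script stays safe to re-run in CI even after a partial failure.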