Mirror of https://github.com/azaion/admin.git (synced 2026-04-22 22:26:34 +00:00)

Commit: Sync .cursor from detections
@@ -147,7 +147,7 @@ If TESTS_OUTPUT_DIR already contains files:
 
 ## Progress Tracking
 
-At the start of execution, create a TodoWrite with all three phases. Update status as each phase completes.
+At the start of execution, create a TodoWrite with all four phases. Update status as each phase completes.
 
 ## Workflow
 
@@ -209,7 +209,7 @@ Based on all acquired data, acceptance_criteria, and restrictions, form detailed
 - [ ] Expected results use comparison methods from `.cursor/skills/test-spec/templates/expected-results.md`
 - [ ] Positive and negative scenarios are balanced
 - [ ] Consumer app has no direct access to system internals
-- [ ] Docker environment is self-contained (`docker compose up` sufficient)
+- [ ] Test environment matches project constraints (see Hardware-Dependency & Execution Environment Assessment below)
 - [ ] External dependencies have mock/stub services defined
 - [ ] Traceability matrix has no uncovered AC or restrictions
 
@@ -337,11 +337,90 @@ When coverage ≥ 70% and all remaining tests have validated data AND quantifiable
 
 ---
 
+### Hardware-Dependency & Execution Environment Assessment (BLOCKING — runs before Phase 4)
+
+Docker is the **preferred** test execution environment (reproducibility, isolation, CI parity). However, hardware-dependent projects may require local execution to exercise the real code paths. This assessment determines the right execution strategy by scanning both documentation and source code.
+
+#### Step 1 — Documentation scan
+
+Check the following files for mentions of hardware-specific requirements:
+
+| File | Look for |
+|------|----------|
+| `_docs/00_problem/restrictions.md` | Platform requirements, hardware constraints, OS-specific features |
+| `_docs/01_solution/solution.md` | Engine selection logic, platform-dependent paths, hardware acceleration |
+| `_docs/02_document/architecture.md` | Component diagrams showing hardware layers, engine adapters |
+| `_docs/02_document/components/*/description.md` | Per-component hardware mentions |
+| `TESTS_OUTPUT_DIR/environment.md` | Existing environment decisions |
+
+#### Step 2 — Code scan
+
+Search the project source for indicators of hardware dependence. The project is **hardware-dependent** if ANY of the following are found:
+
+| Category | Code indicators (imports, APIs, config) |
+|----------|-----------------------------------------|
+| GPU / CUDA | `import pycuda`, `import tensorrt`, `import pynvml`, `torch.cuda`, `nvidia-smi`, `CUDA_VISIBLE_DEVICES`, `runtime: nvidia` |
+| Apple Neural Engine / CoreML | `import coremltools`, `CoreML`, `MLModel`, `ComputeUnit`, `MPS`, `sys.platform == "darwin"`, `platform.machine() == "arm64"` |
+| OpenCL / Vulkan | `import pyopencl`, `clCreateContext`, Vulkan headers |
+| TPU / FPGA | `tensorflow.distribute.TPUStrategy`, FPGA bitstream loaders |
+| Sensors / Cameras | `cv2.VideoCapture(0)` (device index), serial port access, GPIO, V4L2 |
+| OS-specific services | Kernel modules (`modprobe`), host-level drivers, platform-gated code (`sys.platform` branches selecting different backends) |
+
+Also check dependency files (`requirements.txt`, `setup.py`, `pyproject.toml`, `Cargo.toml`, `*.csproj`) for hardware-specific packages.
+
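The code scan above can be sketched as a recursive grep over the indicator patterns. This is a minimal sketch: the pattern list covers only a subset of the table and should be extended per project.

```shell
#!/usr/bin/env bash
# Sketch: Step 2 code scan for hardware-dependence indicators.
# Patterns mirror a subset of the table above — extend as needed.
set -euo pipefail

scan_hardware_indicators() {
  local root="${1:-.}"
  local patterns=(
    'import pycuda' 'import tensorrt' 'import pynvml' 'torch\.cuda'
    'CUDA_VISIBLE_DEVICES' 'runtime: nvidia'
    'import coremltools' 'import pyopencl'
    'cv2\.VideoCapture\(' 'modprobe'
  )
  local found=1   # shell convention: non-zero = not found
  for p in "${patterns[@]}"; do
    # -r recurse, -I skip binaries, -n line numbers, -E extended regex
    if grep -rInE "$p" "$root" 2>/dev/null; then
      found=0
    fi
  done
  return "$found"   # exit 0 if any indicator was found
}
```

Each hit is printed as `file:line:match`, which doubles as the `file:line` evidence Step 4 asks for.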
+#### Step 3 — Classify the project
+
+Based on Steps 1–2, classify the project:
+
+- **Not hardware-dependent**: no indicators found → use Docker (the preferred default), skip to "Record the decision" below
+- **Hardware-dependent**: one or more indicators found → proceed to Step 4
+
+#### Step 4 — Present execution environment choice
+
+Present the findings and ask the user using the Choose format:
+
+```
+══════════════════════════════════════
+DECISION REQUIRED: Test execution environment
+══════════════════════════════════════
+Hardware dependencies detected:
+- [list each indicator found, with file:line]
+══════════════════════════════════════
+Running in Docker means these hardware code paths
+are NOT exercised — Docker provides a Linux environment
+where [specific hardware, e.g. CoreML / CUDA] is unavailable.
+The system would fall back to [fallback engine/path].
+══════════════════════════════════════
+A) Local execution only (tests the real hardware path)
+B) Docker execution only (tests the fallback path)
+C) Both local and Docker (tests both paths, requires
+   two test runs — recommended for CI with heterogeneous
+   runners)
+══════════════════════════════════════
+Recommendation: [A, B, or C] — [reason]
+══════════════════════════════════════
+```
+
+#### Step 5 — Record the decision
+
+Write or update a **"Test Execution"** section in `TESTS_OUTPUT_DIR/environment.md` with:
+
+1. **Decision**: local / docker / both
+2. **Hardware dependencies found**: list with file references
+3. **Execution instructions** per chosen mode:
+   - **Local mode**: prerequisites (OS, SDK, hardware), how to start services, how to run the test runner, environment variables
+   - **Docker mode**: docker-compose profile/command, required images, how results are collected
+   - **Both mode**: instructions for each, plus guidance on which CI runner type runs which mode
+
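A recorded "Test Execution" section might look like the following. All values, paths, and the `tests` service name are illustrative, not taken from any real project:

```markdown
## Test Execution

**Decision**: both

**Hardware dependencies found**:
- `torch.cuda` engine selection — `src/engine/select.py:42` (hypothetical path)
- `runtime: nvidia` — `docker-compose.yml:17` (hypothetical path)

**Execution instructions**:
- **Local mode**: Linux host with NVIDIA driver and CUDA toolkit; run `./scripts/run-tests.sh`
- **Docker mode**: `docker compose -f docker-compose.test.yml up --build --exit-code-from tests`
- **Both mode**: GPU-equipped CI runners use local mode; standard runners use Docker mode
```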
+---
+
 ### Phase 4: Test Runner Script Generation
 
+**Skip condition**: If this skill was invoked from the `/plan` skill (planning context, no code exists yet), skip Phase 4 entirely. Script creation should instead be planned as a task during decompose — the decomposer creates a task for creating these scripts. Phase 4 only runs when invoked from the existing-code flow (where source code already exists) or standalone.
+
 **Role**: DevOps engineer
 **Goal**: Generate executable shell scripts that run the specified tests, so the autopilot and CI can invoke them consistently.
-**Constraints**: Scripts must be idempotent, portable across dev/CI, and exit with non-zero on failure.
+**Constraints**: Scripts must be idempotent, portable across dev/CI, and exit with non-zero on failure. Respect the Hardware-Dependency & Execution Environment Assessment decision above.
 
 #### Step 1 — Detect test infrastructure
 
@@ -350,28 +429,34 @@ When coverage ≥ 70% and all remaining tests have validated data AND quantifiable
 - .NET: `dotnet test` (*.csproj, *.sln)
 - Rust: `cargo test` (Cargo.toml)
 - Node: `npm test` or `vitest` / `jest` (package.json)
-2. Identify docker-compose files for integration/blackbox tests (`docker-compose.test.yml`, `e2e/docker-compose*.yml`)
+2. Check the Hardware-Dependency & Execution Environment Assessment result:
+   - If **local execution** was chosen → do NOT generate docker-compose test files; scripts run directly on the host
+   - If **Docker execution** was chosen → identify/generate docker-compose files for integration/blackbox tests
 3. Identify performance/load testing tools from dependencies (k6, locust, artillery, wrk, or built-in benchmarks)
 4. Read `TESTS_OUTPUT_DIR/environment.md` for infrastructure requirements
 
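The marker-file detection in step 1 can be sketched as below. The pytest fallback for Python projects is an assumption, since the Python line of the list falls outside this diff hunk:

```shell
#!/usr/bin/env bash
# Sketch: detect the project's unit-test runner from marker files (Step 1).
# Checks in a fixed priority order; real detection may need to handle
# monorepos with several runners.
set -euo pipefail

detect_test_runner() {
  local root="${1:-.}"
  if compgen -G "$root/*.sln" > /dev/null || compgen -G "$root/*.csproj" > /dev/null; then
    echo "dotnet test"          # .NET
  elif [ -f "$root/Cargo.toml" ]; then
    echo "cargo test"           # Rust
  elif [ -f "$root/package.json" ]; then
    echo "npm test"             # Node
  elif [ -f "$root/pyproject.toml" ] || [ -f "$root/requirements.txt" ]; then
    echo "pytest"               # Python (assumed mapping)
  else
    echo "no known test runner markers in $root" >&2
    return 1
  fi
}
```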
-#### Step 2 — Generate `scripts/run-tests.sh`
+#### Step 2 — Generate test runner
 
-Create `scripts/run-tests.sh` at the project root using `.cursor/skills/test-spec/templates/run-tests-script.md` as structural guidance. The script must:
+**Docker is the default.** Only generate a local `scripts/run-tests.sh` if the Hardware-Dependency Assessment determined **local** or **both** execution (i.e., the project requires real hardware such as GPU/CoreML/TPU/sensors). For all other projects, use `docker-compose.test.yml` — it provides reproducibility, isolation, and CI parity without a custom shell script.
+
+**If a local script is needed** — create `scripts/run-tests.sh` at the project root using `.cursor/skills/test-spec/templates/run-tests-script.md` as structural guidance. The script must:
 
 1. Set `set -euo pipefail` and trap cleanup on EXIT
-2. Optionally accept a `--unit-only` flag to skip blackbox tests
-3. Run unit tests using the detected test runner
-4. If blackbox tests exist: spin up the docker-compose environment, wait for health checks, run the blackbox test suite, tear down
+2. **Install all project and test dependencies** (e.g. `pip install -q -r requirements.txt -r e2e/requirements.txt`, `dotnet restore`, `npm ci`). This prevents collection-time import errors on fresh environments.
+3. Optionally accept a `--unit-only` flag to skip blackbox tests
+4. Run unit/blackbox tests using the detected test runner (activate the virtualenv if present; run the test runner directly on the host)
 5. Print a summary of passed/failed/skipped tests
 6. Exit 0 on all pass, exit 1 on any failure
 
+**If Docker** — generate or update `docker-compose.test.yml` so that it builds the test image, installs all dependencies inside the container, runs the test suite, and exits with the test runner's exit code.
 
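The six requirements for the local script can be sketched as a skeleton. The runner (pytest), dependency files, and directory layout are illustrative assumptions, and `main` is left uncalled so the sketch stays inert:

```shell
#!/usr/bin/env bash
# Sketch of scripts/run-tests.sh — a skeleton for the six requirements above.
set -euo pipefail                                   # 1. fail fast

cleanup() { echo "cleanup on exit" >&2; }           # 1. trap cleanup on EXIT
trap cleanup EXIT

parse_flags() {                                     # 3. optional --unit-only flag
  UNIT_ONLY=false
  if [ "${1:-}" = "--unit-only" ]; then UNIT_ONLY=true; fi
}

main() {
  parse_flags "$@"
  # 2. install all project and test dependencies up front
  pip install -q -r requirements.txt -r e2e/requirements.txt
  # 4. run unit tests directly on the host with the detected runner
  pytest tests/unit
  # ... and the blackbox suite unless --unit-only was given
  if [ "$UNIT_ONLY" = false ]; then pytest e2e; fi
  echo "all selected suites passed"                 # 5. summary; set -e gives 6.
}

# main "$@"   # uncomment in the real script
```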
 #### Step 3 — Generate `scripts/run-performance-tests.sh`
 
 Create `scripts/run-performance-tests.sh` at the project root. The script must:
 
 1. Set `set -euo pipefail` and trap cleanup on EXIT
 2. Read thresholds from `_docs/02_document/tests/performance-tests.md` (or accept them as CLI args)
-3. Spin up the system under test (docker-compose or local)
+3. Start the system under test (local or docker-compose, matching the Hardware-Dependency & Execution Environment Assessment decision)
 4. Run load/performance scenarios using the detected tool
 5. Compare results against threshold values from the test spec
 6. Print a pass/fail summary per scenario
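The comparison in steps 2 and 5 can be sketched as follows; the metric name and millisecond unit are illustrative:

```shell
#!/usr/bin/env bash
# Sketch: compare a measured metric against a spec threshold (steps 2 and 5).
set -euo pipefail

# Usage: check_threshold <name> <measured_ms> <threshold_ms>
check_threshold() {
  local name="$1" measured="$2" threshold="$3"
  # awk handles the floating-point comparison portably
  if awk -v m="$measured" -v t="$threshold" 'BEGIN { exit !(m <= t) }'; then
    echo "PASS $name: ${measured}ms <= ${threshold}ms"
  else
    echo "FAIL $name: ${measured}ms > ${threshold}ms"
    return 1
  fi
}
```

The real script would loop over scenarios read from `performance-tests.md` and aggregate the per-scenario results into the step 6 summary.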