Sync .cursor from detections

Oleksandr Bezdieniezhnykh
2026-04-12 05:05:08 +03:00
parent 416e559e8b
commit 4b52c0be3b
14 changed files with 847 additions and 572 deletions
@@ -209,7 +209,7 @@ Based on all acquired data, acceptance_criteria, and restrictions, form detailed
- [ ] Expected results use comparison methods from `.cursor/skills/test-spec/templates/expected-results.md`
- [ ] Positive and negative scenarios are balanced
- [ ] Consumer app has no direct access to system internals
- [ ] Test environment matches project constraints (see Hardware-Dependency & Execution Environment Assessment below)
- [ ] External dependencies have mock/stub services defined
- [ ] Traceability matrix has no uncovered AC or restrictions
@@ -337,43 +337,80 @@ When coverage ≥ 70% and all remaining tests have validated data AND quantifiab
---
### Hardware-Dependency & Execution Environment Assessment (BLOCKING — runs before Phase 4)
Docker is the **preferred** test execution environment (reproducibility, isolation, CI parity). However, hardware-dependent projects may require local execution to exercise the real code paths. This assessment determines the right execution strategy by scanning both documentation and source code.
#### Step 1 — Documentation scan
Check the following files for mentions of hardware-specific requirements:
| File | Look for |
|------|----------|
| `_docs/00_problem/restrictions.md` | Platform requirements, hardware constraints, OS-specific features |
| `_docs/01_solution/solution.md` | Engine selection logic, platform-dependent paths, hardware acceleration |
| `_docs/02_document/architecture.md` | Component diagrams showing hardware layers, engine adapters |
| `_docs/02_document/components/*/description.md` | Per-component hardware mentions |
| `TESTS_OUTPUT_DIR/environment.md` | Existing environment decisions |
#### Step 2 — Code scan
Search the project source for indicators of hardware dependence. The project is **hardware-dependent** if ANY of the following are found:
| Category | Code indicators (imports, APIs, config) |
|----------|-----------------------------------------|
| GPU / CUDA | `import pycuda`, `import tensorrt`, `import pynvml`, `torch.cuda`, `nvidia-smi`, `CUDA_VISIBLE_DEVICES`, `runtime: nvidia` |
| Apple Neural Engine / CoreML | `import coremltools`, `CoreML`, `MLModel`, `ComputeUnit`, `MPS`, `sys.platform == "darwin"`, `platform.machine() == "arm64"` |
| OpenCL / Vulkan | `import pyopencl`, `clCreateContext`, vulkan headers |
| TPU / FPGA | `tf.distribute.TPUStrategy` (TensorFlow), FPGA bitstream loaders |
| Sensors / Cameras | `cv2.VideoCapture(0)` (camera device index), serial port access, GPIO, V4L2 |
| OS-specific services | Kernel modules (`modprobe`), host-level drivers, platform-gated code (`sys.platform` branches selecting different backends) |
Also check dependency files (`requirements.txt`, `setup.py`, `pyproject.toml`, `Cargo.toml`, `*.csproj`) for hardware-specific packages.
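The Step 2 scan can be sketched as a grep pass. The pattern list and file globs below are illustrative, not exhaustive; extend them with the indicators from the table above that apply to the project's languages.

```shell
#!/usr/bin/env bash
# Sketch of a grep-based hardware-dependency scan (patterns are illustrative).
set -euo pipefail

# Extended-regex patterns; parentheses are escaped to match literally.
PATTERNS=(
  'import pycuda' 'import tensorrt' 'torch\.cuda' 'CUDA_VISIBLE_DEVICES'
  'import coremltools' 'MLModel' 'ComputeUnits?'
  'import pyopencl' 'clCreateContext'
  'VideoCapture\(0\)' 'modprobe'
)

scan_dir() {  # prints each match as file:line:text, then a classification line
  local dir="$1" found=0 p
  for p in "${PATTERNS[@]}"; do
    grep -rnE "$p" "$dir" --include='*.py' --include='*.yml' 2>/dev/null && found=1
  done
  if [ "$found" -eq 1 ]; then
    echo "classification: hardware-dependent"
  else
    echo "classification: not hardware-dependent"
  fi
}

scan_dir "${1:-.}"
```

The `-n` flag gives the `file:line` references the Choose-format report in Step 4 asks for.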
#### Step 3 — Classify the project
Based on Steps 1–2, classify the project:
- **Not hardware-dependent**: no indicators found → use Docker (preferred default), skip to "Record the decision" below
- **Hardware-dependent**: one or more indicators found → proceed to Step 4
#### Step 4 — Present execution environment choice
Present the findings and ask the user using Choose format:
```
══════════════════════════════════════
DECISION REQUIRED: Test execution environment
══════════════════════════════════════
Hardware dependencies detected:
- [list each indicator found, with file:line]
══════════════════════════════════════
Running in Docker means these hardware code paths
are NOT exercised — Docker uses a Linux VM where
[specific hardware, e.g. CoreML / CUDA] is unavailable.
The system would fall back to [fallback engine/path].
══════════════════════════════════════
A) Local execution only (tests the real hardware path)
B) Docker execution only (tests the fallback path)
C) Both local and Docker (tests both paths, requires
two test runs — recommended for CI with heterogeneous
runners)
══════════════════════════════════════
Recommendation: [A, B, or C] — [reason]
══════════════════════════════════════
```
#### Step 5 — Record the decision
Write or update a **"Test Execution"** section in `TESTS_OUTPUT_DIR/environment.md` with:
1. **Decision**: local / docker / both
2. **Hardware dependencies found**: list with file references
3. **Execution instructions** per chosen mode:
- **Local mode**: prerequisites (OS, SDK, hardware), how to start services, how to run the test runner, environment variables
- **Docker mode**: docker-compose profile/command, required images, how results are collected
- **Both mode**: instructions for each, plus guidance on which CI runner type runs which mode
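As a sketch, a recorded decision for a hypothetical CoreML-dependent project might look like this. All file paths, package names, and runner details below are invented for illustration:

```markdown
## Test Execution

**Decision**: both

**Hardware dependencies found**:
- `src/engines/selector.py:42` — `torch.cuda.is_available()` gates engine choice
- `requirements.txt` — `coremltools` (macOS-only acceleration)

**Execution instructions**:
- **Local**: macOS on Apple Silicon; run `./scripts/run-tests.sh`
- **Docker**: `docker compose -f docker-compose.test.yml up --exit-code-from tests`
- **CI**: macOS runner executes local mode; Linux runner executes Docker mode
```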
---
@@ -398,17 +435,20 @@ Docker is the **preferred** test execution environment (reproducibility, isolati
3. Identify performance/load testing tools from dependencies (k6, locust, artillery, wrk, or built-in benchmarks)
4. Read `TESTS_OUTPUT_DIR/environment.md` for infrastructure requirements
#### Step 2 — Generate test runner
**Docker is the default.** Only generate a local `scripts/run-tests.sh` if the Hardware-Dependency Assessment determined **local** or **both** execution (i.e., the project requires real hardware like GPU/CoreML/TPU/sensors). For all other projects, use `docker-compose.test.yml` — it provides reproducibility, isolation, and CI parity without a custom shell script.
**If local script is needed** — create `scripts/run-tests.sh` at the project root using `.cursor/skills/test-spec/templates/run-tests-script.md` as structural guidance. The script must:
1. Set `set -euo pipefail` and trap cleanup on EXIT
2. **Install all project and test dependencies** (e.g. `pip install -q -r requirements.txt -r e2e/requirements.txt`, `dotnet restore`, `npm ci`). This prevents collection-time import errors on fresh environments.
3. Optionally accept a `--unit-only` flag to skip blackbox tests
4. Run unit/blackbox tests using the detected test runner (activate virtualenv if present, run test runner directly on host)
5. Print a summary of passed/failed/skipped tests
6. Exit 0 on all pass, exit 1 on any failure
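The requirements above can be sketched as follows. The runner (`pytest`), the test directories, and the virtualenv path are assumptions; substitute whatever the detection in Step 1 found:

```shell
#!/usr/bin/env bash
# Sketch of scripts/run-tests.sh (local mode); runner and paths are assumptions.
set -euo pipefail

cleanup() { echo "cleanup: tearing down test services"; }
trap cleanup EXIT

STATUS=0

run_suite() {  # run one suite if its directory and the runner exist
  local dir="$1"
  if [ -d "$dir" ] && command -v pytest >/dev/null 2>&1; then
    pytest "$dir" || STATUS=1
  else
    echo "skip: $dir (missing directory or runner)"
  fi
}

main() {
  local unit_only=0
  [ "${1:-}" = "--unit-only" ] && unit_only=1

  # Install project and test dependencies up front
  if [ -f requirements.txt ] && command -v pip >/dev/null 2>&1; then
    pip install -q -r requirements.txt
  fi

  # Activate a local virtualenv when present
  [ -f .venv/bin/activate ] && . .venv/bin/activate

  run_suite tests/unit
  [ "$unit_only" -eq 0 ] && run_suite tests/blackbox

  echo "summary: status=$STATUS"
  return "$STATUS"
}

main "$@"
```

Because `main` returns the accumulated `STATUS`, the script exits 0 only when every suite passed, satisfying requirement 6.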
**If Docker** — generate or update `docker-compose.test.yml` that builds the test image, installs all dependencies inside the container, runs the test suite, and exits with the test runner's exit code.
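A minimal shape for such a compose file might be the following. The service name, `Dockerfile.test`, and the `pytest` command are assumptions, not project facts:

```yaml
# Sketch of docker-compose.test.yml
services:
  tests:
    build:
      context: .
      dockerfile: Dockerfile.test   # installs project + test deps at build time
    command: pytest -q tests/       # container exits with pytest's exit code
    volumes:
      - ./test-results:/app/test-results   # collect reports on the host
```

Invoking it with `docker compose -f docker-compose.test.yml up --build --exit-code-from tests` makes the compose run propagate the test suite's exit code, which is what CI needs.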
#### Step 3 — Generate `scripts/run-performance-tests.sh`