mirror of
https://github.com/azaion/detections.git
synced 2026-04-22 11:36:32 +00:00
Update .gitignore and refine documentation for execution environment
- Added Cython generated files to .gitignore to prevent unnecessary tracking.
- Updated paths in `inference.c` and `coreml_engine.c` to reflect the correct virtual environment.
- Revised the execution environment documentation to clarify hardware dependency checks and local execution instructions, ensuring accurate guidance for users.
- Removed outdated Docker suitability checks and streamlined the assessment process for test execution environments.
@@ -32,36 +32,13 @@ Check in order — first match wins:
 If no runner detected → report failure and ask user to specify.

-#### Docker Suitability Check
+#### Execution Environment Check

-Docker is the preferred test environment. Before using it, verify no constraints prevent easy Docker execution:
-
-1. Check `_docs/02_document/tests/environment.md` for a "Test Execution" decision (if the test-spec skill already assessed this, follow that decision)
-2. If no prior decision exists, check for disqualifying factors:
-   - Hardware bindings: GPU, MPS, CUDA, TPU, FPGA, sensors, cameras, serial devices, host-level drivers
-   - Host dependencies: licensed software, OS-specific services, kernel modules, proprietary SDKs
-   - Data/volume constraints: large files (> 100MB) impractical to copy into a container
-   - Network/environment: host networking, VPN, specific DNS/firewall rules
-   - Performance: Docker overhead would invalidate benchmarks or latency measurements
-3. If any disqualifying factor found → fall back to local test runner. Present to user using Choose format:
-
-```
-══════════════════════════════════════
-DECISION REQUIRED: Docker is preferred but factors
-preventing easy Docker execution detected
-══════════════════════════════════════
-Factors detected:
-- [list factors]
-══════════════════════════════════════
-A) Run tests locally (recommended)
-B) Run tests in Docker anyway
-══════════════════════════════════════
-Recommendation: A — detected constraints prevent
-easy Docker execution
-══════════════════════════════════════
-```
-
-4. If no disqualifying factors → use Docker (preferred default)
+1. Check `_docs/02_document/tests/environment.md` for a "Test Execution" section. If the test-spec skill already assessed hardware dependencies and recorded a decision (local / docker / both), **follow that decision**.
+2. If the "Test Execution" section says **local** → run tests directly on host (no Docker).
+3. If the "Test Execution" section says **docker** → use Docker (docker-compose).
+4. If the "Test Execution" section says **both** → run local first, then Docker (or vice versa), and merge results.
+5. If no prior decision exists → fall back to the hardware-dependency detection logic from the test-spec skill's "Hardware-Dependency & Execution Environment Assessment" section. Ask the user if hardware indicators are found.

 ### 2. Run Tests
@@ -209,7 +209,7 @@ Based on all acquired data, acceptance_criteria, and restrictions, form detailed
 - [ ] Expected results use comparison methods from `.cursor/skills/test-spec/templates/expected-results.md`
 - [ ] Positive and negative scenarios are balanced
 - [ ] Consumer app has no direct access to system internals
-- [ ] Test environment matches project constraints (see Docker Suitability Assessment below)
+- [ ] Test environment matches project constraints (see Hardware-Dependency & Execution Environment Assessment below)
 - [ ] External dependencies have mock/stub services defined
 - [ ] Traceability matrix has no uncovered AC or restrictions
@@ -337,43 +337,80 @@ When coverage ≥ 70% and all remaining tests have validated data AND quantifiab
 ---

-### Docker Suitability Assessment (BLOCKING — runs before Phase 4)
+### Hardware-Dependency & Execution Environment Assessment (BLOCKING — runs before Phase 4)

-Docker is the **preferred** test execution environment (reproducibility, isolation, CI parity). Before generating scripts, check whether the project has any constraints that prevent easy Docker usage.
+Docker is the **preferred** test execution environment (reproducibility, isolation, CI parity). However, hardware-dependent projects may require local execution to exercise the real code paths. This assessment determines the right execution strategy by scanning both documentation and source code.

-**Disqualifying factors** (any one is sufficient to fall back to local):
-- Hardware bindings: GPU, MPS, TPU, FPGA, accelerators, sensors, cameras, serial devices, host-level drivers (CUDA, Metal, OpenCL, etc.)
-- Host dependencies: licensed software, OS-specific services, kernel modules, proprietary SDKs not installable in a container
-- Data/volume constraints: large files (> 100MB) that would be impractical to copy into a container, databases that must run on the host
-- Network/environment: tests that require host networking, VPN access, or specific DNS/firewall rules
-- Performance: Docker overhead would invalidate benchmarks or latency-sensitive measurements
-**Assessment steps**:
-1. Scan project source, config files, and dependencies for indicators of the factors above
-2. Check `TESTS_OUTPUT_DIR/environment.md` for environment requirements
-3. Check `_docs/00_problem/restrictions.md` and `_docs/01_solution/solution.md` for constraints
-
-**Decision**:
-- If ANY disqualifying factor is found → recommend **local test execution** as fallback. Present to user using Choose format:
+#### Step 1 — Documentation scan
+
+Check the following files for mentions of hardware-specific requirements:
+
+| File | Look for |
+|------|----------|
+| `_docs/00_problem/restrictions.md` | Platform requirements, hardware constraints, OS-specific features |
+| `_docs/01_solution/solution.md` | Engine selection logic, platform-dependent paths, hardware acceleration |
+| `_docs/02_document/architecture.md` | Component diagrams showing hardware layers, engine adapters |
+| `_docs/02_document/components/*/description.md` | Per-component hardware mentions |
+| `TESTS_OUTPUT_DIR/environment.md` | Existing environment decisions |
+#### Step 2 — Code scan
+
+Search the project source for indicators of hardware dependence. The project is **hardware-dependent** if ANY of the following are found:
+
+| Category | Code indicators (imports, APIs, config) |
+|----------|-----------------------------------------|
+| GPU / CUDA | `import pycuda`, `import tensorrt`, `import pynvml`, `torch.cuda`, `nvidia-smi`, `CUDA_VISIBLE_DEVICES`, `runtime: nvidia` |
+| Apple Neural Engine / CoreML | `import coremltools`, `CoreML`, `MLModel`, `ComputeUnit`, `MPS`, `sys.platform == "darwin"`, `platform.machine() == "arm64"` |
+| OpenCL / Vulkan | `import pyopencl`, `clCreateContext`, vulkan headers |
+| TPU / FPGA | `tf.distribute.TPUStrategy`, FPGA bitstream loaders |
+| Sensors / Cameras | `cv2.VideoCapture(0)` (device index), serial port access, GPIO, V4L2 |
+| OS-specific services | Kernel modules (`modprobe`), host-level drivers, platform-gated code (`sys.platform` branches selecting different backends) |
+
+Also check dependency files (`requirements.txt`, `setup.py`, `pyproject.toml`, `Cargo.toml`, `*.csproj`) for hardware-specific packages.
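The Step 2 scan above amounts to a pattern search over the source tree. A minimal sketch, assuming a Python project and using a trimmed subset of the indicator table (the patterns, file globs, and report shape are illustrative, not part of the skill):

```python
import re
from pathlib import Path

# Trimmed, illustrative subset of the indicator table above.
HW_INDICATORS = {
    "GPU / CUDA": [r"\bimport pycuda\b", r"\bimport tensorrt\b",
                   r"torch\.cuda", r"CUDA_VISIBLE_DEVICES"],
    "CoreML / ANE": [r"\bimport coremltools\b", r"\bMLModel\b", r"\bMPS\b"],
    "Sensors / Cameras": [r"cv2\.VideoCapture\(\s*\d+\s*\)", r"\bimport serial\b"],
}

def scan_for_hardware(root: str) -> list[tuple[str, str, int]]:
    """Return (category, file, line_no) for every indicator hit."""
    hits = []
    for path in Path(root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        for line_no, line in enumerate(text.splitlines(), 1):
            for category, patterns in HW_INDICATORS.items():
                if any(re.search(p, line) for p in patterns):
                    hits.append((category, str(path), line_no))
    return hits

# Per Step 3: the project is hardware-dependent if the scan returns any hits,
# and the file:line tuples feed the "list each indicator found" box in Step 4.
```

A non-empty result classifies the project as hardware-dependent; an empty one means Docker stays the default.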
+#### Step 3 — Classify the project
+
+Based on Steps 1–2, classify the project:
+
+- **Not hardware-dependent**: no indicators found → use Docker (preferred default), skip to "Record the decision" below
+- **Hardware-dependent**: one or more indicators found → proceed to Step 4
+
+#### Step 4 — Present execution environment choice
+
+Present the findings and ask the user using Choose format:
+
 ```
 ══════════════════════════════════════
 DECISION REQUIRED: Test execution environment
 ══════════════════════════════════════
-Docker is preferred, but factors preventing easy
-Docker execution detected:
-- [list factors found]
+Hardware dependencies detected:
+- [list each indicator found, with file:line]
 ══════════════════════════════════════
-A) Local execution (recommended)
-B) Docker execution (constraints may cause issues)
+Running in Docker means these hardware code paths
+are NOT exercised — Docker uses a Linux VM where
+[specific hardware, e.g. CoreML / CUDA] is unavailable.
+The system would fall back to [fallback engine/path].
 ══════════════════════════════════════
-Recommendation: A — detected constraints prevent
-easy Docker execution
+A) Local execution only (tests the real hardware path)
+B) Docker execution only (tests the fallback path)
+C) Both local and Docker (tests both paths, requires
+   two test runs — recommended for CI with heterogeneous
+   runners)
+══════════════════════════════════════
+Recommendation: [A, B, or C] — [reason]
 ══════════════════════════════════════
 ```
-- If NO disqualifying factors → use Docker (preferred default)
-- Record the decision in `TESTS_OUTPUT_DIR/environment.md` under a "Test Execution" section
+#### Step 5 — Record the decision
+
+Write or update a **"Test Execution"** section in `TESTS_OUTPUT_DIR/environment.md` with:
+
+1. **Decision**: local / docker / both
+2. **Hardware dependencies found**: list with file references
+3. **Execution instructions** per chosen mode:
+   - **Local mode**: prerequisites (OS, SDK, hardware), how to start services, how to run the test runner, environment variables
+   - **Docker mode**: docker-compose profile/command, required images, how results are collected
+   - **Both mode**: instructions for each, plus guidance on which CI runner type runs which mode
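For illustration, a recorded "Test Execution" section might look like the sketch below. The file paths and commands are placeholders, not a format mandated by the skill:

```
## Test Execution

Decision: both

Hardware dependencies found:
- src/coreml_engine.c — CoreML / ANE usage
- pyproject.toml — coremltools dependency

Execution instructions:
- Local: macOS arm64 host with Xcode CLT; run the project's local test command
- Docker: docker-compose test profile; collect results from the mounted results dir
- Both: local runs on a macOS CI runner, Docker runs on a Linux CI runner
```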
---