# Hardware-Dependency & Execution Environment Assessment (BLOCKING)

Runs between Phase 3 and Phase 4.

Docker is the **preferred** test execution environment (reproducibility, isolation, CI parity). However, hardware-dependent projects may require local execution to exercise the real code paths. This assessment determines the right execution strategy by scanning both documentation and source code.

## Step 1 — Documentation scan

Check the following files for mentions of hardware-specific requirements:

| File | Look for |
|------|----------|
| `_docs/00_problem/restrictions.md` | Platform requirements, hardware constraints, OS-specific features |
| `_docs/01_solution/solution.md` | Engine selection logic, platform-dependent paths, hardware acceleration |
| `_docs/02_document/architecture.md` | Component diagrams showing hardware layers, engine adapters |
| `_docs/02_document/components/*/description.md` | Per-component hardware mentions |
| `TESTS_OUTPUT_DIR/environment.md` | Existing environment decisions |

## Step 2 — Code scan

Search the project source for indicators of hardware dependence.
The project is **hardware-dependent** if ANY of the following are found:

| Category | Code indicators (imports, APIs, config) |
|----------|-----------------------------------------|
| GPU / CUDA | `import pycuda`, `import tensorrt`, `import pynvml`, `torch.cuda`, `nvidia-smi`, `CUDA_VISIBLE_DEVICES`, `runtime: nvidia` |
| Apple Neural Engine / CoreML | `import coremltools`, `CoreML`, `MLModel`, `ComputeUnit`, `MPS`, `sys.platform == "darwin"`, `platform.machine() == "arm64"` |
| OpenCL / Vulkan | `import pyopencl`, `clCreateContext`, Vulkan headers |
| TPU / FPGA | `tf.distribute.TPUStrategy`, FPGA bitstream loaders |
| Sensors / Cameras | `cv2.VideoCapture(0)` (device index), serial port access, GPIO, V4L2 |
| OS-specific services | Kernel modules (`modprobe`), host-level drivers, platform-gated code (`sys.platform` branches selecting different backends) |

Also check dependency files (`requirements.txt`, `setup.py`, `pyproject.toml`, `Cargo.toml`, `*.csproj`) for hardware-specific packages.

## Step 3 — Classify the project

Based on Steps 1–2, classify the project:

- **Not hardware-dependent**: no indicators found → use Docker (preferred default), skip to Step 5 "Record the decision"
- **Hardware-dependent**: one or more indicators found → proceed to Step 4

## Step 4 — Present execution environment choice

Present the findings and ask the user using the Choose format:

```
══════════════════════════════════════
DECISION REQUIRED: Test execution environment
══════════════════════════════════════
Hardware dependencies detected:
- [list each indicator found, with file:line]
══════════════════════════════════════
Running in Docker means these hardware code paths are NOT exercised — on macOS and Windows hosts, Docker runs containers in a Linux VM where [specific hardware, e.g. CoreML / CUDA] is unavailable. The system would fall back to [fallback engine/path].
══════════════════════════════════════
A) Local execution only (tests the real hardware path)
B) Docker execution only (tests the fallback path)
C) Both local and Docker (tests both paths, requires two test runs — recommended for CI with heterogeneous runners)
══════════════════════════════════════
Recommendation: [A, B, or C] — [reason]
══════════════════════════════════════
```

## Step 5 — Record the decision

Write or update a **"Test Execution"** section in `TESTS_OUTPUT_DIR/environment.md` with:

1. **Decision**: local / docker / both
2. **Hardware dependencies found**: list with file references
3. **Execution instructions** per chosen mode:
   - **Local mode**: prerequisites (OS, SDK, hardware), how to start services, how to run the test runner, environment variables
   - **Docker mode**: docker-compose profile/command, required images, how results are collected
   - **Both mode**: instructions for each, plus guidance on which CI runner type runs which mode

The decision is consumed by Phase 4 to choose between local `scripts/run-tests.sh` and `docker-compose.test.yml`.