mirror of
https://github.com/azaion/detections.git
synced 2026-04-22 09:16:33 +00:00
Update .gitignore and refine documentation for execution environment
- Added Cython generated files to .gitignore to prevent unnecessary tracking.
- Updated paths in `inference.c` and `coreml_engine.c` to reflect the correct virtual environment.
- Revised the execution environment documentation to clarify hardware dependency checks and local execution instructions, ensuring accurate guidance for users.
- Removed outdated Docker suitability checks and streamlined the assessment process for test execution environments.
@@ -32,36 +32,13 @@ Check in order — first match wins:
If no runner detected → report failure and ask user to specify.

#### Docker Suitability Check

#### Execution Environment Check

Docker is the preferred test environment. Before using it, verify no constraints prevent easy Docker execution:
1. Check `_docs/02_document/tests/environment.md` for a "Test Execution" decision (if the test-spec skill already assessed this, follow that decision)
2. If no prior decision exists, check for disqualifying factors:
   - Hardware bindings: GPU, MPS, CUDA, TPU, FPGA, sensors, cameras, serial devices, host-level drivers
   - Host dependencies: licensed software, OS-specific services, kernel modules, proprietary SDKs
   - Data/volume constraints: large files (> 100MB) impractical to copy into a container
   - Network/environment: host networking, VPN, specific DNS/firewall rules
   - Performance: Docker overhead would invalidate benchmarks or latency measurements
3. If any disqualifying factor is found → fall back to the local test runner. Present to the user using the Choose format:

```
══════════════════════════════════════
DECISION REQUIRED: Docker is preferred but factors
preventing easy Docker execution detected
══════════════════════════════════════
Factors detected:
- [list factors]
══════════════════════════════════════
A) Run tests locally (recommended)
B) Run tests in Docker anyway
══════════════════════════════════════
Recommendation: A — detected constraints prevent
easy Docker execution
══════════════════════════════════════
```
4. If no disqualifying factors → use Docker (preferred default)
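As a rough sketch of the disqualifying-factor scan in step 2 above (the category names echo the factor list; the keywords, function name, and the idea of scanning free-form project notes are assumptions, not part of the documented skill):

```python
# Illustrative keyword map for the disqualifying-factor categories listed
# above; the exact keywords and this scanning approach are assumptions.
DISQUALIFYING_FACTORS = {
    "Hardware bindings": ["gpu", "mps", "cuda", "tpu", "fpga", "serial device"],
    "Host dependencies": ["kernel module", "proprietary sdk", "licensed software"],
    "Network/environment": ["host networking", "vpn", "firewall"],
    "Performance": ["benchmark", "latency measurement"],
}

def find_disqualifying_factors(notes: str) -> list[str]:
    """Return 'category: keyword' entries for every factor mentioned in notes."""
    lowered = notes.lower()
    return [
        f"{category}: {keyword}"
        for category, keywords in DISQUALIFYING_FACTORS.items()
        for keyword in keywords
        if keyword in lowered
    ]
```

A non-empty result would trigger the Choose-format prompt shown above; an empty result means Docker remains the default.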
1. Check `_docs/02_document/tests/environment.md` for a "Test Execution" section. If the test-spec skill already assessed hardware dependencies and recorded a decision (local / docker / both), **follow that decision**.
2. If the "Test Execution" section says **local** → run tests directly on the host (no Docker).
3. If the "Test Execution" section says **docker** → use Docker (docker-compose).
4. If the "Test Execution" section says **both** → run local first, then Docker (or vice versa), and merge results.
5. If no prior decision exists → fall back to the hardware-dependency detection logic from the test-spec skill's "Hardware-Dependency & Execution Environment Assessment" section. Ask the user if hardware indicators are found.
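Steps 1–5 could be resolved programmatically along these lines; the file path and the local/docker/both vocabulary come from this document, while the function name and the regex-based parsing are assumptions:

```python
import re
from pathlib import Path

# Path documented above; the parsing approach below is an assumption.
ENV_DOC = Path("_docs/02_document/tests/environment.md")

def resolve_test_environment(doc_path: Path = ENV_DOC) -> str:
    """Return 'local', 'docker', or 'both' if a Test Execution decision is
    recorded; 'unknown' means fall back to hardware-dependency detection
    and ask the user (step 5)."""
    if not doc_path.is_file():
        return "unknown"
    text = doc_path.read_text(encoding="utf-8")
    # Find a "Test Execution" heading, then the first recorded decision word.
    match = re.search(
        r"#+\s*Test Execution\b.*?\b(local|docker|both)\b",
        text,
        flags=re.IGNORECASE | re.DOTALL,
    )
    return match.group(1).lower() if match else "unknown"
```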
### 2. Run Tests