mirror of
https://github.com/azaion/detections.git
synced 2026-04-22 11:06:32 +00:00
[AZ-178] Implement streaming video detection endpoint
- Added `/detect/video` endpoint for true streaming video detection, allowing inference to start as upload bytes arrive.
- Introduced `run_detect_video_stream` method in the inference module to handle video processing from a file-like object.
- Updated media hashing to include a new function for computing hashes directly from files with minimal I/O.
- Enhanced documentation to reflect changes in video processing and API behavior.

Made-with: Cursor
@@ -56,7 +56,7 @@ Present a summary:

══════════════════════════════════════

```

**Important**: Collection errors (import failures, missing dependencies, syntax errors) count as failures — they are not "skipped" or ignorable. If a collection error is caused by a missing dependency, install it (add to the project's dependency file and install) before re-running. The test runner script (`run-tests.sh`) should install all dependencies automatically — if it doesn't, fix the script to do so.
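The rule above can be encoded directly in the runner wrapper. This is a minimal sketch; `run_tests` is a hypothetical helper name, shown only to make the policy concrete:

```bash
# Hypothetical wrapper: treat ANY nonzero exit from the test runner as a
# failure, including collection-time import errors -- never swallow them.
run_tests() {
  if "$@"; then
    echo "tests passed"
  else
    echo "tests FAILED (collection errors count as failures)" >&2
    return 1
  fi
}

# Usage: run_tests pytest -q    (or: run_tests dotnet test)
```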

### 4. Diagnose Failures and Skips

@@ -435,17 +435,20 @@ Write or update a **"Test Execution"** section in `TESTS_OUTPUT_DIR/environment.

3. Identify performance/load testing tools from dependencies (k6, locust, artillery, wrk, or built-in benchmarks)
4. Read `TESTS_OUTPUT_DIR/environment.md` for infrastructure requirements

#### Step 2 — Generate `scripts/run-tests.sh`

#### Step 2 — Generate test runner

Create `scripts/run-tests.sh` at the project root using `.cursor/skills/test-spec/templates/run-tests-script.md` as structural guidance. The script must:

**Docker is the default.** Only generate a local `scripts/run-tests.sh` if the Hardware-Dependency Assessment determined **local** or **both** execution (i.e., the project requires real hardware like GPU/CoreML/TPU/sensors). For all other projects, use `docker-compose.test.yml` — it provides reproducibility, isolation, and CI parity without a custom shell script.

**If local script is needed** — create `scripts/run-tests.sh` at the project root using `.cursor/skills/test-spec/templates/run-tests-script.md` as structural guidance. The script must:

1. Set `set -euo pipefail` and trap cleanup on EXIT
2. Optionally accept a `--unit-only` flag to skip blackbox tests
3. Run unit/blackbox tests using the detected test runner:
   - **Local mode**: activate virtualenv (if present), run test runner directly on host
   - **Docker mode**: spin up docker-compose environment, wait for health checks, run test suite, tear down
4. Print a summary of passed/failed/skipped tests
5. Exit 0 on all pass, exit 1 on any failure

2. **Install all project and test dependencies** (e.g. `pip install -q -r requirements.txt -r e2e/requirements.txt`, `dotnet restore`, `npm ci`). This prevents collection-time import errors on fresh environments.
3. Optionally accept a `--unit-only` flag to skip blackbox tests
4. Run unit/blackbox tests using the detected test runner (activate virtualenv if present, run test runner directly on host)
5. Print a summary of passed/failed/skipped tests
6. Exit 0 on all pass, exit 1 on any failure
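The summary and exit-code steps in the list above can be sketched in bash. This is illustrative only: `print_summary` and `exit_code_for` are hypothetical helper names, and in a real script the counts would come from parsing the runner's report:

```bash
# Illustrative sketch of the summary + exit-code steps.
print_summary() {
  # $1=passed $2=failed $3=skipped
  printf 'passed: %d  failed: %d  skipped: %d\n' "$1" "$2" "$3"
}

exit_code_for() {
  # succeed (exit 0) only when the failure count is zero
  [ "$1" -eq 0 ]
}

print_summary 10 0 2   # prints: passed: 10  failed: 0  skipped: 2
exit_code_for 0        # a real script would end with: exit $?
```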
**If Docker** — generate or update `docker-compose.test.yml` that builds the test image, installs all dependencies inside the container, runs the test suite, and exits with the test runner's exit code.
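A minimal sketch of that Docker path, under stated assumptions: the compose file is named `docker-compose.test.yml` and defines a `tests` service (both names are assumptions, not requirements of the template):

```bash
# Hedged sketch: build the test image, run the suite, and propagate the
# test runner's exit code. File and service names are assumptions.
run_docker_tests() {
  local compose_file=${1:-docker-compose.test.yml}
  docker compose -f "$compose_file" build || return
  docker compose -f "$compose_file" run --rm tests
  local status=$?
  docker compose -f "$compose_file" down -v
  return "$status"
}

# Usage (from CI or a thin wrapper script):
#   run_docker_tests || exit 1
```

`run --rm tests` makes the container's exit code the command's exit code, which is what lets the wrapper forward the runner's result.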

#### Step 3 — Generate `scripts/run-performance-tests.sh`

@@ -2,7 +2,13 @@

Reference for generating `scripts/run-tests.sh` and `scripts/run-performance-tests.sh`.

## `scripts/run-tests.sh`

## When to generate a local `run-tests.sh`

A local shell script is needed **only** for projects that require real hardware (GPU, CoreML, TPU, sensors, etc.) to exercise the actual code paths. If the Hardware-Dependency Assessment (Phase 4 prerequisite) determined **local** or **both** execution, generate this script.

For all other projects, **use Docker** (`docker-compose.test.yml` / `Dockerfile.test`). Docker is the default — it provides reproducibility, isolation, and CI parity. Do not generate a local `run-tests.sh` when Docker is sufficient.

## `scripts/run-tests.sh` (local / hardware-dependent only)

```bash
#!/usr/bin/env bash
@@ -20,23 +26,33 @@ for arg in "$@"; do
done

cleanup() {
  # tear down docker-compose if it was started
  # tear down services started by this script
}
trap cleanup EXIT

mkdir -p "$RESULTS_DIR"

# --- Install Dependencies ---
# MANDATORY: install all project + test dependencies before building or running.
# A fresh clone or CI runner may have nothing installed.
# Python: pip install -q -r requirements.txt -r e2e/requirements.txt
# .NET: dotnet restore
# Rust: cargo fetch
# Node: npm ci

# --- Build (if needed) ---
# [e.g. Cython: python setup.py build_ext --inplace]

# --- Unit Tests ---
# [detect runner: pytest / dotnet test / cargo test / npm test]
# [run and capture exit code]
# [save results to $RESULTS_DIR/unit-results.*]

# --- Blackbox Tests (skip if --unit-only) ---
# if ! $UNIT_ONLY; then
#   [docker compose -f <compose-file> up -d]
#   [start mock services]
#   [start system under test]
#   [wait for health checks]
#   [run blackbox test suite]
#   [save results to $RESULTS_DIR/blackbox-results.*]
# fi

# --- Summary ---
@@ -61,6 +77,9 @@ trap cleanup EXIT

mkdir -p "$RESULTS_DIR"

# --- Install Dependencies ---
# [same as above — always install first]

# --- Start System Under Test ---
# [docker compose up -d or start local server]
# [wait for health checks]
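# The "wait for health checks" step above can be sketched as a small
# polling helper. The URL, port, and timeout below are illustrative
# assumptions, not part of the template:
wait_for_health() {
  local url=$1 timeout=${2:-60} elapsed=0
  until curl -fsS "$url" >/dev/null 2>&1; do
    sleep 1
    elapsed=$((elapsed + 1))
    if [ "$elapsed" -ge "$timeout" ]; then
      echo "health check timed out: $url" >&2
      return 1
    fi
  done
}
# Usage: wait_for_health "http://localhost:8080/healthz" 60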

@@ -80,6 +99,8 @@ mkdir -p "$RESULTS_DIR"

## Key Requirements

- **Docker is the default**: only generate a local `run-tests.sh` for hardware-dependent projects. Otherwise use `docker-compose.test.yml`.
- **Always install dependencies first**: the script must install all project and test dependencies before building or running tests. A fresh clone or CI runner may have nothing installed. Missing a single dependency causes collection errors that abort the entire test run.
- Both scripts must be idempotent (safe to run multiple times)
- Both scripts must work in CI (no interactive prompts, no GUI)
- Use `trap cleanup EXIT` to ensure teardown even on failure
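The last two requirements can be sketched together. In this sketch the `STARTED_COMPOSE` flag and the compose file name are illustrative assumptions:

```bash
# Idempotent cleanup via trap: runs on any exit path (success, failure,
# or interrupt) and is safe to invoke more than once.
STARTED_COMPOSE=0

cleanup() {
  if [ "$STARTED_COMPOSE" -eq 1 ]; then
    docker compose -f docker-compose.test.yml down -v || true
    STARTED_COMPOSE=0   # a second invocation becomes a no-op
  fi
}
trap cleanup EXIT
```

Guarding on the flag (rather than calling `down` unconditionally) keeps the teardown cheap and re-runnable even when the script exits before anything was started.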