[AZ-178] Implement streaming video detection endpoint

- Added `/detect/video` endpoint for true streaming video detection, allowing inference to start as upload bytes arrive.
- Introduced `run_detect_video_stream` method in the inference module to handle video processing from a file-like object.
- Updated media hashing to include a new function for computing hashes directly from files with minimal I/O.
- Updated documentation to describe the new streaming upload flow and endpoint behavior.

Made-with: Cursor
Oleksandr Bezdieniezhnykh
2026-04-01 03:11:43 +03:00
parent e65d8da6a3
commit be4cab4fcb
42 changed files with 2983 additions and 29 deletions
# True Streaming Video Detection
**Task**: AZ-178_true_streaming_video_detect
**Name**: Start inference as upload bytes arrive — no buffering
**Description**: Replace the fully-buffered `/detect` upload flow with a true streaming pipeline where video bytes flow simultaneously to disk and to PyAV for frame decoding + inference. First detection must appear within ~500ms of first decodable frames arriving at the API.
**Complexity**: 5 points
**Dependencies**: AZ-173 (stream-based run_detect)
**Component**: Main, Inference, MediaHash
**Jira**: AZ-178
**Parent**: AZ-172
## Problem
The current `/detect` endpoint has three sequential blocking stages before any detection runs:
1. **Starlette multipart buffering**: `UploadFile = File(...)` causes Starlette to consume the entire HTTP body and spool it to a `SpooledTemporaryFile` before the handler is called. For 2 GB → user waits for full upload.
2. **Full RAM load**: `await file.read()` copies the entire spooled file into a `bytes` object in RAM. For 2 GB → ~2 GB+ allocated.
3. **BytesIO + writer thread**: `run_detect_video(video_bytes, ...)` wraps `bytes` in `io.BytesIO` for PyAV and spawns a separate thread to write the same bytes to disk. For 2 GB → ~4 GB RAM total + double disk write.
Net result: zero detection output until the entire file is uploaded AND loaded into RAM.
## Target State
```
HTTP chunks ──┬──▸ StreamingBuffer (temp file) ──▸ PyAV decode ──▸ inference ──▸ SSE
└──▸ (same temp file serves as permanent storage after rename)
```
- Bytes flow chunk-by-chunk from the network into a `StreamingBuffer`
- PyAV reads from the same buffer concurrently — blocks when ahead of the writer, resumes as new data arrives
- No intermediate `bytes` object holds the full file in RAM
- Peak memory: ~model batch size × frame size (tens of MB), not file size
## Technical Design
### 1. StreamingBuffer (`src/streaming_buffer.py`)
A file-like object backed by a temp file with concurrent append + read:
- `append(data)` — called from the async HTTP handler (via executor); writes to temp file, flushes, notifies readers
- `read(size)` — called by PyAV; blocks via `Condition.wait()` when data not yet available
- `seek(offset, whence)` — supports SEEK_SET/SEEK_CUR normally; SEEK_END blocks until writer signals EOF (graceful degradation for non-faststart MP4)
- `tell()`, `seekable()`, `readable()` — standard file protocol
- `close_writer()` — signals EOF
- Thread-safe via `threading.Condition`
**Format compatibility:**
- Faststart MP4, MKV, WebM → true streaming (moov/header at start)
- Standard MP4 (moov at end) → SEEK_END blocks until upload completes, then decoding starts (correct, just not streaming)
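The interface above can be sketched with the stdlib alone. This is a minimal illustration, not the production class: error handling, reader/writer timeouts, and temp-file cleanup are omitted, and the exact implementation details (e.g. separate read/write file handles) are assumptions of this sketch.

```python
import os
import tempfile
import threading

class StreamingBuffer:
    """Sketch: temp-file-backed file-like object with one appending writer
    and one reader that blocks until requested bytes arrive."""

    def __init__(self):
        fd, self.path = tempfile.mkstemp(suffix=".part")
        self._writer = os.fdopen(fd, "wb")
        self._reader = open(self.path, "rb")  # separate handle, same file
        self._cond = threading.Condition()
        self._size = 0          # bytes flushed and visible to the reader
        self._eof = False

    def append(self, data: bytes) -> None:
        # Writer side (HTTP handler via executor): persist, then wake readers.
        with self._cond:
            self._writer.write(data)
            self._writer.flush()
            self._size += len(data)
            self._cond.notify_all()

    def close_writer(self) -> None:
        # Signal EOF so blocked read()/seek(SEEK_END) calls can proceed.
        with self._cond:
            self._writer.close()
            self._eof = True
            self._cond.notify_all()

    def read(self, size: int = -1) -> bytes:
        # Reader side (PyAV): block until enough bytes exist or EOF arrives.
        with self._cond:
            want = self._reader.tell() + size
            while not self._eof and (size < 0 or self._size < want):
                self._cond.wait()
            return self._reader.read(size)

    def seek(self, offset: int, whence: int = os.SEEK_SET) -> int:
        with self._cond:
            if whence == os.SEEK_END:
                # File length is unknown until upload ends (non-faststart MP4).
                while not self._eof:
                    self._cond.wait()
            return self._reader.seek(offset, whence)

    def tell(self) -> int:
        return self._reader.tell()

    def seekable(self) -> bool:
        return True

    def readable(self) -> bool:
        return True
```

A reader calling `read()` past the flushed size simply parks on the condition variable and resumes on the next `append`, which is what lets PyAV start decoding mid-upload.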
### 2. `run_detect_video_stream` in `inference.pyx`
New method accepting a file-like `readable` instead of `bytes`:
```python
cpdef run_detect_video_stream(self, object readable, AIRecognitionConfig ai_config,
str media_name, object annotation_callback,
object status_callback=None)
```
- Opens `av.open(readable)` directly — PyAV calls `read()`/`seek()` on the StreamingBuffer
- Reuses existing `_process_video_pyav` for frame decode → batch inference
- No writer thread needed (StreamingBuffer already persists to disk)
### 3. `compute_media_content_hash_from_file` in `media_hash.py`
File-based variant of `compute_media_content_hash` that reads only 3 sampling regions (3 KB) from disk instead of loading the entire file:
```python
def compute_media_content_hash_from_file(path: str) -> str
```
Produces identical hashes to the existing `compute_media_content_hash(data)`.
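A file-based variant might look like the sketch below. The sampling offsets, region size, and size-mixing step here are illustrative assumptions only; the real function must mirror whatever `compute_media_content_hash(data)` samples, byte for byte, to satisfy the identical-hash requirement.

```python
import hashlib
import os

_REGION = 1024  # illustrative: 3 regions x 1 KB = the spec's ~3 KB of I/O

def compute_media_content_hash_from_file(path: str) -> str:
    """Hash a media file by seeking to a few small regions instead of
    loading the whole file. Offsets below are placeholders."""
    size = os.path.getsize(path)
    digest = hashlib.sha256(str(size).encode())  # assumption: length is mixed in
    with open(path, "rb") as f:
        # Head, middle, and tail of the file (clamped for small files).
        for off in (0, max(0, size // 2 - _REGION // 2), max(0, size - _REGION)):
            f.seek(off)
            digest.update(f.read(_REGION))
    return digest.hexdigest()
```

Because only three `seek`/`read` pairs touch the disk, hashing a 2 GB file costs roughly the same as hashing a 10 KB one.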
### 4. `POST /detect/video` endpoint in `main.py`
New endpoint — raw binary body (not multipart), bypassing Starlette's buffering:
- Filename via `X-Filename` header, config via `X-Config` header
- Auth via `Authorization` / `X-Refresh-Token` headers (same as existing)
- Uses `request.stream()` for async chunk iteration
- Creates `StreamingBuffer`, starts inference in executor thread
- Feeds chunks to buffer via `run_in_executor` (non-blocking event loop)
- After upload completes: compute hash from file, rename to permanent path, create media record
- Returns `{"status": "started", "mediaId": "<hash>"}` — inference continues in background
- Detections flow via existing SSE `/detect/stream`
## Acceptance Criteria
- [ ] Video detection starts as soon as first frames are decodable (~500ms for faststart formats)
- [ ] 2 GB video never loads entirely into RAM (peak memory < 100 MB for the streaming pipeline)
- [ ] Video bytes written to disk exactly once (no double-write)
- [ ] Standard MP4 (moov at end) still works correctly (graceful degradation)
- [ ] Detections delivered via SSE in real-time during upload
- [ ] Content hash identical to existing `compute_media_content_hash`
- [ ] All existing tests pass
- [ ] Existing `/detect` endpoint unchanged (images and legacy callers unaffected)
## File Changes
| File | Action | Description |
|------|--------|-------------|
| `src/streaming_buffer.py` | New | StreamingBuffer class |
| `src/inference.pyx` | Modified | Add `run_detect_video_stream` method |
| `src/media_hash.py` | Modified | Add `compute_media_content_hash_from_file` |
| `src/main.py` | Modified | Add `POST /detect/video` endpoint |
| `tests/test_streaming_buffer.py` | New | Unit tests for StreamingBuffer |