[AZ-137] [AZ-138] Decompose test tasks and scaffold E2E test infrastructure

Made-with: Cursor
Author: Oleksandr Bezdieniezhnykh
Date: 2026-03-23 14:07:54 +02:00
parent 091d9a8fb0
commit 86d8e7e22d
47 changed files with 1883 additions and 88 deletions
@@ -85,6 +85,29 @@
| Annotations Service | HTTP POST | Bearer JWT | None observed | Exception silently caught |
| Annotations Auth | HTTP POST | Refresh token | None observed | Exception silently caught |
#### Annotations Service Contract
Detections → Annotations is the primary outbound integration. During async media detection (`POST /detect/{media_id}`), each detection batch is posted to the Annotations service for persistence and downstream sync.
**Endpoint:** `POST {ANNOTATIONS_URL}/annotations`
**Trigger:** Each valid annotation batch during F3 (async media detection), only when the original client request included an Authorization header.
**Payload sent by Detections:** `mediaId`, `source` (AI=0), `videoTime`, list of Detection objects (`centerX`, `centerY`, `width`, `height`, `classNum`, `label`, `confidence`), and optional base64 `image`. `userId` is not included — resolved from the JWT by Annotations. The Annotations API contract also accepts `description`, `affiliation`, and `combatReadiness` on each Detection, but Detections does not populate these.
**Responses:** 201 Created, 400 Bad Request (neither image nor mediaId provided), 404 Not Found (mediaId unknown to Annotations).
**Auth:** Bearer JWT forwarded from the client. For long-running video, auto-refreshed via `POST {ANNOTATIONS_URL}/auth/refresh` (TokenManager, 60s pre-expiry window).
**Downstream effect (Annotations side):**
1. Annotation persisted to local PostgreSQL (image hashed to XxHash64 ID)
2. SSE event published to UI subscribers
3. Annotation ID enqueued to `annotations_queue_records` → FailsafeProducer → RabbitMQ Stream (`azaion-annotations`) for central DB sync and AI training
**Failure isolation:** All POST failures are silently caught. Detection processing and SSE streaming continue regardless of Annotations service availability.
See `_docs/02_document/modules/main.md` § "Annotations Service Integration" for field-level schema detail.
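The failure-isolation behavior above can be sketched as follows. This is a minimal illustration, not the actual Detections implementation: the function name `post_annotation` and the use of `urllib.request` are assumptions; only the contract (Bearer forwarding, JSON body, silent catch) comes from the text.

```python
import json
import urllib.request

def post_annotation(payload: dict, access_token: str, annotations_url: str) -> None:
    """Post one detection batch to Annotations; all failures are silently caught."""
    req = urllib.request.Request(
        f"{annotations_url}/annotations",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {access_token}",  # forwarded from the client
            "Content-Type": "application/json",
        },
        method="POST",
    )
    try:
        urllib.request.urlopen(req, timeout=10)
    except Exception:
        # Failure isolation: Annotations being unavailable must not
        # interrupt detection processing or SSE streaming.
        pass
```

Because nothing is retried or re-raised, a failed batch is simply lost to the Annotations sync pipeline while the SSE stream to the client continues unaffected.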
## 6. Non-Functional Requirements
| Requirement | Target | Measurement | Priority |
@@ -2,16 +2,22 @@
## Seed Data Sets
| Data Set | Description | Used by Tests | How Loaded | Cleanup |
|----------|-------------|---------------|-----------|---------|
| onnx-model | Small YOLO ONNX model (valid architecture, 1280×1280 input, 19 classes) | All detection tests | Volume mount to mock-loader `/models/azaion.onnx` | Container restart |
| classes-json | classes.json with 19 detection classes, 3 weather modes, MaxSizeM values | All tests | Volume mount to detections `/app/classes.json` | Container restart |
| small-image | JPEG image 640×480 — below 1.5× model size (1920×1920 threshold) | FT-P-03, FT-P-05, FT-P-06, FT-P-07, FT-N-01, FT-N-02, NFT-PERF-01 | Volume mount to consumer `/media/` | N/A (read-only) |
| large-image | JPEG image 4000×3000 — above 1.5× model size, triggers tiling | FT-P-04, FT-P-16, NFT-PERF-03 | Volume mount to consumer `/media/` | N/A (read-only) |
| test-video | MP4 video, 10s duration, 30fps — contains objects across frames | FT-P-10, FT-P-11, FT-P-12, NFT-PERF-04 | Volume mount to consumer `/media/` | N/A (read-only) |
| empty-image | Zero-byte file | FT-N-01 | Volume mount to consumer `/media/` | N/A (read-only) |
| corrupt-image | Binary garbage (not valid image format) | FT-N-02 | Volume mount to consumer `/media/` | N/A (read-only) |
| jwt-token | Valid JWT with exp claim (not signature-verified by detections) | FT-P-08, FT-P-09 | Generated by consumer at runtime | N/A |
| Data Set | Source File | Description | Used by Tests | How Loaded | Cleanup |
|----------|------------|-------------|---------------|-----------|---------|
| onnx-model | `input_data/azaion.onnx` | YOLO ONNX model (1280×1280 input, 19 classes, 81MB) | All detection tests | Volume mount to mock-loader `/models/azaion.onnx` | Container restart |
| classes-json | `classes.json` (repo root) | 19 detection classes with Id, Name, Color, MaxSizeM | All tests | Volume mount to detections `/app/classes.json` | Container restart |
| image-small | `input_data/image_small.jpg` | JPEG 1280×720 — below tiling threshold (1920×1920) | FT-P-01..03, 05, 07, 13..15, FT-N-03, 06, NFT-PERF-01..02, NFT-RES-01, 03, NFT-SEC-01, NFT-RES-LIM-01 | Volume mount to consumer `/media/` | N/A (read-only) |
| image-large | `input_data/image_large.JPG` | JPEG 6252×4168 — above tiling threshold, triggers GSD tiling | FT-P-04, 16, NFT-PERF-03 | Volume mount to consumer `/media/` | N/A (read-only) |
| image-dense-01 | `input_data/image_dense01.jpg` | JPEG 1280×720 — dense scene with many clustered objects | FT-P-06, NFT-RES-LIM-03 | Volume mount to consumer `/media/` | N/A (read-only) |
| image-dense-02 | `input_data/image_dense02.jpg` | JPEG 1920×1080 — dense scene variant, borderline tiling | FT-P-06 (variant) | Volume mount to consumer `/media/` | N/A (read-only) |
| image-different-types | `input_data/image_different_types.jpg` | JPEG 900×1600 — varied object classes for class variant tests | FT-P-13 | Volume mount to consumer `/media/` | N/A (read-only) |
| image-empty-scene | `input_data/image_empty_scene.jpg` | JPEG 1920×1080 — clean scene with no detectable objects | Edge case (zero detections) | Volume mount to consumer `/media/` | N/A (read-only) |
| video-short-01 | `input_data/video_short01.mp4` | MP4 video — standard async/SSE/video detection tests | FT-P-08..12, FT-N-04, 07, NFT-PERF-04, NFT-RES-02, NFT-SEC-03 | Volume mount to consumer `/media/` | N/A (read-only) |
| video-short-02 | `input_data/video_short02.mp4` | MP4 video — variant for concurrent and resilience tests | NFT-RES-02 (variant), NFT-RES-04 | Volume mount to consumer `/media/` | N/A (read-only) |
| video-long-03 | `input_data/video_long03.mp4` | MP4 long video (288MB) — generates >100 SSE events for overflow tests | FT-N-08, NFT-RES-LIM-02 | Volume mount to consumer `/media/` | N/A (read-only) |
| empty-image | Generated at build time | Zero-byte file | FT-N-01 | Generated in e2e/fixtures/ | N/A |
| corrupt-image | Generated at build time | Random binary garbage (not valid image format) | FT-N-02 | Generated in e2e/fixtures/ | N/A |
| jwt-token | Generated at runtime | Valid JWT with exp claim (not signature-verified by detections) | FT-P-08, 09, FT-N-04, 07, NFT-SEC-03 | Generated by consumer at runtime | N/A |
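The two build-time fixtures in the table (empty-image and corrupt-image) can be generated with a few lines of Python. This is a sketch only: the output file names `empty.jpg` and `corrupt.jpg` and the 4 KB garbage size are illustrative assumptions; the table confirms only "zero-byte file" and "random binary garbage" under `e2e/fixtures/`.

```python
import os
import secrets

FIXTURES_DIR = "e2e/fixtures"

def generate_fixtures() -> None:
    """Create the negative-test fixtures: a zero-byte file and invalid image bytes."""
    os.makedirs(FIXTURES_DIR, exist_ok=True)
    # empty-image (FT-N-01): zero-byte file
    open(os.path.join(FIXTURES_DIR, "empty.jpg"), "wb").close()
    # corrupt-image (FT-N-02): random bytes that no image decoder will accept
    with open(os.path.join(FIXTURES_DIR, "corrupt.jpg"), "wb") as f:
        f.write(secrets.token_bytes(4096))
```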
## Data Isolation Strategy
@@ -22,6 +28,17 @@ Each test run starts with fresh containers (`docker compose down -v && docker co
| Input Data File | Source Location | Description | Covers Scenarios |
|-----------------|----------------|-------------|-----------------|
| data_parameters.md | `_docs/00_problem/input_data/data_parameters.md` | API parameter schemas, config defaults, classes.json structure | Informs all test input construction |
| azaion.onnx | `_docs/00_problem/input_data/azaion.onnx` | YOLO ONNX detection model | All detection tests |
| image_small.jpg | `_docs/00_problem/input_data/image_small.jpg` | 1280×720 aerial image | Single-frame detection, health, negative, perf tests |
| image_large.JPG | `_docs/00_problem/input_data/image_large.JPG` | 6252×4168 aerial image | Tiling tests |
| image_dense01.jpg | `_docs/00_problem/input_data/image_dense01.jpg` | Dense scene 1280×720 | Dedup, detection cap tests |
| image_dense02.jpg | `_docs/00_problem/input_data/image_dense02.jpg` | Dense scene 1920×1080 | Dedup variant |
| image_different_types.jpg | `_docs/00_problem/input_data/image_different_types.jpg` | Varied classes 900×1600 | Class variant tests |
| image_empty_scene.jpg | `_docs/00_problem/input_data/image_empty_scene.jpg` | Empty scene 1920×1080 | Zero-detection edge case |
| video_short01.mp4 | `_docs/00_problem/input_data/video_short01.mp4` | Standard video | Async, SSE, video, perf tests |
| video_short02.mp4 | `_docs/00_problem/input_data/video_short02.mp4` | Video variant | Resilience, concurrent tests |
| video_long03.mp4 | `_docs/00_problem/input_data/video_long03.mp4` | Long video (288MB) | SSE overflow, queue depth tests |
| classes.json | repo root `classes.json` | 19 detection classes | All tests |
## External Dependency Mocks
@@ -69,10 +69,63 @@ Error mapping: RuntimeError("not available") → 503, RuntimeError → 422, Valu
### Annotations Service Integration
- POST to `{ANNOTATIONS_URL}/annotations` with:
- `mediaId`, `source: 0`, `videoTime` (formatted from ms), `detections` (list of dto dicts)
- Optional base64-encoded `image`
- Bearer token in Authorization header
Detections posts results to the Annotations service (`POST {ANNOTATIONS_URL}/annotations`) server-to-server during async media detection (F3). This happens only when the original client request included an Authorization header.
**Endpoint:** `POST {ANNOTATIONS_URL}/annotations`
**Headers:**
| Header | Value |
|--------|-------|
| Authorization | `Bearer {accessToken}` (forwarded from the original client request) |
| Content-Type | `application/json` |
**Request body — payload sent by Detections:**
| Field | Type | Description |
|-------|------|-------------|
| mediaId | string | ID of the media being processed |
| source | int | `0` (AnnotationSource.AI) |
| videoTime | string | Video playback position formatted from ms as `"HH:MM:SS"` — mapped to `Annotations.Time` |
| detections | list | Detection results for this batch (see below) |
| image | string (base64) | Optional — base64-encoded frame image bytes |
`userId` is not included in the payload. The Annotations service resolves the user identity from the Bearer JWT.
**Detection object (as sent by Detections):**
| Field | Type | Description |
|-------|------|-------------|
| centerX | float | X center, normalized 0.0–1.0 |
| centerY | float | Y center, normalized 0.0–1.0 |
| width | float | Width, normalized 0.0–1.0 |
| height | float | Height, normalized 0.0–1.0 |
| classNum | int | Detection class number |
| label | string | Human-readable class name |
| confidence | float | Model confidence 0.0–1.0 |
The Annotations API contract (`CreateAnnotationRequest`) also accepts `description` (string), `affiliation` (AffiliationEnum), and `combatReadiness` (CombatReadinessEnum) on each Detection, but the Detections service does not populate these — the Annotations service uses defaults.
**Responses from Annotations service:**
| Status | Condition |
|--------|-----------|
| 201 | Annotation created |
| 400 | Neither image nor mediaId provided |
| 404 | MediaId not found in Annotations DB |
**Failure handling:** POST failures are silently caught — detection processing continues regardless. Annotations that fail to post are not retried.
**Downstream pipeline (Annotations service side):**
1. Saves annotation to local PostgreSQL (image → XxHash64 ID, label file in YOLO format)
2. Publishes SSE event to UI via `GET /annotations/events`
3. Enqueues annotation ID to `annotations_queue_records` buffer table (unless SilentDetection mode is enabled in system settings)
4. `FailsafeProducer` (BackgroundService) drains the buffer to RabbitMQ Stream (`azaion-annotations`) using MessagePack + Gzip
**Token refresh for long-running video:**
For video detection that may outlast the JWT lifetime, the `TokenManager` auto-refreshes via `POST {ANNOTATIONS_URL}/auth/refresh` when the token is within 60s of expiry. The refresh token is provided by the client in the `X-Refresh-Token` request header.
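The 60-second pre-expiry window described above can be expressed as a small check. This is a hedged sketch: the `TokenManager` internals, field names, and constructor shape are assumptions; only the 60 s window, the `exp`-based expiry, and the client-supplied refresh token are from the text.

```python
import time

class TokenManager:
    """Tracks JWT expiry and decides when to refresh via /auth/refresh."""

    REFRESH_WINDOW_S = 60  # refresh when within 60s of the JWT exp claim

    def __init__(self, access_token: str, refresh_token: str, expires_at: float):
        self.access_token = access_token
        self.refresh_token = refresh_token  # supplied via the X-Refresh-Token header
        self.expires_at = expires_at        # unix seconds, taken from the exp claim

    def needs_refresh(self, now: float = None) -> bool:
        """True once the token is inside the 60s pre-expiry window."""
        now = time.time() if now is None else now
        return now >= self.expires_at - self.REFRESH_WINDOW_S
```

A refresh failure here matches the error table below: it is swallowed, and subsequent annotation POSTs may start returning 401.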
## Dependencies
@@ -150,7 +150,34 @@ sequenceDiagram
| 4 | Engine | Inference | raw detections | numpy ndarray |
| 5 | Inference | API (callback) | Annotation + percent | Python objects |
| 6 | API | SSE clients | DetectionEvent | SSE JSON stream |
| 7 | API | Annotations Service | detections + base64 image | HTTP POST JSON |
| 7 | API | Annotations Service | CreateAnnotationRequest | HTTP POST JSON |
**Step 7 — Annotations POST detail:**
Fired once per detection batch when an auth token is present. The request to `POST {ANNOTATIONS_URL}/annotations` carries:
```json
{
"mediaId": "string",
"source": 0,
"videoTime": "00:01:23",
"detections": [
{
"centerX": 0.56, "centerY": 0.67,
"width": 0.25, "height": 0.22,
"classNum": 3, "label": "ArmorVehicle",
"confidence": 0.92
}
],
"image": "<base64 encoded frame bytes, optional>"
}
```
`userId` is not included — the Annotations service resolves the user from the JWT. The Annotations API contract also accepts `description`, `affiliation`, and `combatReadiness` on each detection, but Detections does not populate these.
Authorization: `Bearer {accessToken}` forwarded from the original client request. For long-running video, the token is auto-refreshed via `POST {ANNOTATIONS_URL}/auth/refresh`.
The Annotations service responds 201 on success, 400 if neither image nor mediaId is provided, and 404 if the mediaId is unknown. On the Annotations side, the saved annotation triggers an SSE notification to the UI and, unless SilentDetection mode is enabled, an enqueue to the RabbitMQ sync pipeline.
### Error Scenarios
@@ -160,6 +187,8 @@ sequenceDiagram
| Engine unavailable | run_detect | engine is None | Error event pushed to SSE |
| Inference failure | processing | Exception | Error event pushed to SSE, media_id cleared |
| Annotations POST failure | _post_annotation | Exception | Silently caught, detection continues |
| Annotations 404 | _post_annotation | MediaId not found in Annotations DB | Silently caught, detection continues |
| Token refresh failure | TokenManager | Exception on /auth/refresh | Silently caught, subsequent POSTs may fail with 401 |
| SSE queue full | event broadcast | QueueFull | Event silently dropped for that client |
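The per-client drop behavior in the last row can be sketched as follows. The queue size and broadcast signature are illustrative assumptions; the documented behavior is only that a full subscriber queue silently loses that event for that client, without affecting other subscribers.

```python
import asyncio

def broadcast(event: dict, subscribers: list) -> int:
    """Push an event to every subscriber queue; returns how many received it."""
    delivered = 0
    for q in subscribers:
        try:
            q.put_nowait(event)
            delivered += 1
        except asyncio.QueueFull:
            # Slow client: silently drop this event for that queue only.
            pass
    return delivered
```

A full queue therefore never blocks the detection pipeline; backpressure is resolved by discarding events for the lagging subscriber.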
---