# Distributed Architecture Adaptation

**Task**: AZ-172_distributed_architecture_adaptation

**Name**: Adapt detections module for distributed architecture: stream-based input & DB-driven AI config

**Description**: Replace the co-located, file-path-based detection flow with stream-based input and DB-driven configuration, enabling the UI to run on a separate device from the detections API.

**Complexity**: 5 points

**Dependencies**: Annotations service (C# backend) needs endpoints for per-user AI config and Media management

**Component**: Architecture

**Jira**: AZ-172

## Problem

The detections module assumes co-located deployment (same machine as the WPF UI). The UI sends local file paths, and inference reads files directly from disk:

- `inference.pyx` → `_process_video()` opens local video via `cv2.VideoCapture(<str>video_name)`
- `inference.pyx` → `_process_images()` reads local images via `cv2.imread(<str>path)`
- `ai_config.pyx` has a `paths: list[str]` field carrying local filesystem paths
- `AIRecognitionConfig` is passed from the UI as a dict (via the `config_dict` parameter in `run_detect`)

In the new distributed architecture, the UI runs on a separate device (laptop, tablet, phone) and the detections module is a standalone API on another device, so local file paths are meaningless.

## Outcome

- Video detection works with streamed input (no local file paths required); a rough sketch of the target entry points follows this list
- Video is simultaneously saved to disk and processed frame-by-frame
- Image detection works with uploaded bytes (no local file paths required)
- `AIRecognitionConfig` is fetched from the DB by userId, not passed from the UI
- Media table records are created on upload with the correct XxHash64 Id, path, type, and status
- Old path-based code is removed
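
As a rough illustration only, the replacement entry points could take a shape like the stubs below; the function names, signatures, and return shapes are assumptions for this document, not the agreed API:

```python
# Illustrative stubs only: names, signatures, and return shapes are assumptions.
from typing import Iterable, Iterator


def detect_image_bytes(data: bytes, user_id: str) -> list[dict]:
    """Decode uploaded image bytes and run inference; no filesystem path from the caller."""
    ...


def detect_video_stream(chunks: Iterable[bytes], user_id: str, dest_path: str) -> Iterator[list[dict]]:
    """Persist the uploaded stream to dest_path while yielding per-frame detections."""
    ...
```
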
## Subtasks

| Jira | Summary | Points |
|------|---------|--------|
| AZ-173 | Replace path-based `run_detect` with stream-based API in `inference.pyx` | 3 |
| AZ-174 | Fetch AIRecognitionConfig from DB by userId instead of UI-passed config | 2 |
| AZ-175 | Integrate Media table: create record on upload, store file, track status | 2 |
| AZ-176 | Clean up obsolete path-based code and old methods | 1 |

## Acceptance Criteria

**AC-1: Stream-based video detection**

Given a video is uploaded via HTTP to the detection API
When the detections module processes it
Then frames are decoded and run through inference without requiring a local file path from the caller

**AC-2: Concurrent write and detect for video**

Given a video stream is being received
When the detection module processes it
Then the stream is simultaneously written to persistent storage AND processed frame-by-frame for detection
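
A minimal sketch of one way to satisfy AC-2, assuming the uploaded container is streamable (e.g. MPEG-TS or fragmented MP4) and using PyAV as suggested in the technical notes; `run_inference` is a placeholder for the existing per-frame detection call:

```python
# Sketch only: tee each uploaded chunk to disk and to an in-process pipe, and
# decode frames from the pipe as they arrive. Assumes a streamable container
# (a plain MP4 with its moov atom at the end will not decode from a
# non-seekable pipe).
import os
import threading

import av  # PyAV


def write_and_detect(chunks, dest_path, run_inference):
    read_fd, write_fd = os.pipe()

    def feed():
        # One thread persists the stream and feeds the decoder simultaneously.
        with open(dest_path, "wb") as dst, os.fdopen(write_fd, "wb") as pipe_in:
            for chunk in chunks:
                dst.write(chunk)
                pipe_in.write(chunk)
        # Leaving the with-block closes pipe_in, signalling end-of-stream to the decoder.

    threading.Thread(target=feed, daemon=True).start()

    with os.fdopen(read_fd, "rb") as pipe_out:
        container = av.open(pipe_out)
        for frame in container.decode(video=0):
            run_inference(frame.to_ndarray(format="bgr24"))
```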

**AC-3: Stream-based image detection**

Given an image is uploaded via HTTP to the detection API
When the detections module processes it
Then the image bytes are decoded and run through inference without requiring a local file path
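
A minimal sketch of AC-3 using OpenCV's in-memory decoder instead of `cv2.imread`; `run_inference` again stands in for the existing per-image detection call:

```python
# Decode uploaded image bytes directly; no temporary file and no local path.
import cv2
import numpy as np


def detect_image_bytes(data: bytes, run_inference):
    buf = np.frombuffer(data, dtype=np.uint8)
    image = cv2.imdecode(buf, cv2.IMREAD_COLOR)  # replaces cv2.imread(<str>path)
    if image is None:
        raise ValueError("uploaded bytes are not a decodable image")
    return run_inference(image)
```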

**AC-4: DB-driven AI config**

Given a detection request arrives with a userId (from JWT)
When the detection module needs AIRecognitionConfig
Then it fetches AIRecognitionSettings + CameraSettings from the DB via the annotations service, not from the request payload
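
A sketch of the config lookup in AC-4. The endpoint path, response fields, and fallback values are assumptions; the real contract on the annotations service is still to be defined (see Risk 2), and in practice this would live in `loader_http_client.pyx`:

```python
# Hypothetical fetch of per-user AI config from the annotations service.
import requests

FALLBACK_CONFIG = {"confidence_threshold": 0.5}  # illustrative defaults only


def fetch_ai_recognition_config(base_url: str, user_id: str, jwt: str) -> dict:
    try:
        resp = requests.get(
            f"{base_url}/api/users/{user_id}/ai-recognition-config",  # assumed endpoint
            headers={"Authorization": f"Bearer {jwt}"},
            timeout=5,
        )
        resp.raise_for_status()
        return resp.json()  # expected to combine AIRecognitionSettings + CameraSettings
    except requests.RequestException:
        # Risk 2 mitigation: fall back to defaults when the service is unreachable.
        return dict(FALLBACK_CONFIG)
```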

**AC-5: Default config on user creation**

Given a new user is created in the system
When their account is provisioned
Then default AIRecognitionSettings and CameraSettings rows are created for that user

**AC-6: Media record lifecycle**

Given a file is uploaded for detection
When the upload is received
Then a Media record is created (XxHash64 Id, Name, Path, MediaType, UserId) and MediaStatus transitions through New → AIProcessing → AIProcessed (or Error)
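
A sketch of the upload step in AC-6: compute the XxHash64-based Id with the `xxhash` package and assemble the Media record. The exact Id convention (seed, signed vs unsigned) is defined in `_docs/00_database_schema.md` and is only assumed here:

```python
# Build a Media record for an uploaded file; status transitions are handled later.
import xxhash


def build_media_record(data: bytes, name: str, path: str, media_type: str, user_id: str) -> dict:
    media_id = xxhash.xxh64(data).intdigest()  # assumption: default seed, unsigned 64-bit value
    return {
        "Id": media_id,
        "Name": name,
        "Path": path,
        "MediaType": media_type,
        "UserId": user_id,
        "MediaStatus": "New",  # later: AIProcessing -> AIProcessed (or Error)
    }
```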

**AC-7: Old code removed**

Given the refactoring is complete
When the codebase is reviewed
Then no references to `paths` in AIRecognitionConfig, no `cv2.VideoCapture(local_path)`, no `cv2.imread(local_path)`, and no `is_video(filepath)` remain

## File Changes

| File | Action | Description |
|------|--------|-------------|
| `src/inference.pyx` | Modified | Replace `run_detect` with stream-based methods; remove path iteration |
| `src/ai_config.pxd` | Modified | Remove `paths` field |
| `src/ai_config.pyx` | Modified | Remove `paths` field; adapt `from_dict` |
| `src/main.py` | Modified | Fetch config from DB; handle Media records; adapt endpoints |
| `src/loader_http_client.pyx` | Modified | Add method to fetch user AI config from annotations service |

## Technical Notes

- `cv2.VideoCapture` can read from a named pipe or a file being appended to. Alternative: feed frames via a queue from the HTTP upload handler, or use PyAV for direct byte-stream decoding (a minimal PyAV sketch follows this list)
- The annotations service (C# backend) owns the DB. Config retrieval requires API endpoints on that service
- XxHash64 ID generation algorithm is documented in `_docs/00_database_schema.md`
- Token management (JWT refresh) is already implemented in `main.py` via `TokenManager`
- DB tables `AIRecognitionSettings` and `CameraSettings` exist in schema but are not yet linked to `Users`; need FK or join table
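
A minimal sketch of the PyAV option from the first note, for the simpler case where the whole upload is already in memory (the byte stream is then seekable, so any container PyAV supports will decode); the streaming variant is sketched under AC-2:

```python
# Decode video frames straight from received bytes, without touching the filesystem.
import io

import av  # PyAV


def iter_frames(video_bytes: bytes):
    with av.open(io.BytesIO(video_bytes)) as container:
        for frame in container.decode(video=0):
            yield frame.to_ndarray(format="bgr24")  # BGR ndarray, same layout cv2 uses
```
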
## Risks & Mitigation

**Risk 1: Concurrent write + read of video file**

- *Risk*: `cv2.VideoCapture` may fail or stall reading an incomplete file
- *Mitigation*: Use a frame queue pipeline (one thread writes, another reads) or PyAV for byte-stream decoding

**Risk 2: Annotations service API dependency**

- *Risk*: New endpoints needed on the C# backend for config retrieval and Media management
- *Mitigation*: Define API contract upfront; detections module can use fallback defaults if service is unreachable

**Risk 3: Config-to-User linking not yet in DB**

- *Risk*: `AIRecognitionSettings` and `CameraSettings` tables have no FK to `Users`
- *Mitigation*: Add `UserId` FK or create a `UserAIConfig` join table in the backend migration