mirror of
https://github.com/azaion/detections.git
synced 2026-04-22 06:46:32 +00:00
Add AIAvailabilityStatus and AIRecognitionConfig classes for AI model management
- Introduced `AIAvailabilityStatus` class to manage the availability status of AI models, including methods for setting status and logging messages.
- Added `AIRecognitionConfig` class to encapsulate configuration parameters for AI recognition, with a static method for creating instances from dictionaries.
- Implemented enums for AI availability states to enhance clarity and maintainability.
- Updated related Cython files to support the new classes and ensure proper type handling.

These changes aim to improve the structure and functionality of the AI model management system, facilitating better status tracking and configuration handling.
@@ -25,7 +25,6 @@
| ML Runtime (CPU) | ONNX Runtime | 1.22.0 | Portable model format, CPU/CUDA provider fallback |
| ML Runtime (GPU) | TensorRT + PyCUDA | 10.11.0 / 2025.1.1 | Maximum GPU inference performance |
| Image Processing | OpenCV | 4.10.0 | Frame decoding, preprocessing, tiling |
| Serialization | msgpack | 1.1.1 | Compact binary serialization for annotations and configs |
| HTTP Client | requests | 2.32.4 | Synchronous HTTP to Loader and Annotations services |
| Logging | loguru | 0.7.3 | Structured file + console logging |
| GPU Monitoring | pynvml | 12.0.0 | GPU detection, capability checks, memory queries |
@@ -0,0 +1,107 @@
# Distributed Architecture Adaptation

**Task**: AZ-172_distributed_architecture_adaptation

**Name**: Adapt detections module for distributed architecture: stream-based input & DB-driven AI config

**Description**: Replace the co-located file-path-based detection flow with stream-based input and DB-driven configuration, enabling the UI to run on a separate device from the detections API.

**Complexity**: 5 points

**Dependencies**: Annotations service (C# backend) needs endpoints for per-user AI config and Media management

**Component**: Architecture

**Jira**: AZ-172

## Problem

The detections module assumes co-located deployment (same machine as the WPF UI). The UI sends local file paths, and inference reads files directly from disk:

- `inference.pyx` → `_process_video()` opens local video via `cv2.VideoCapture(<str>video_name)`
- `inference.pyx` → `_process_images()` reads local images via `cv2.imread(<str>path)`
- `ai_config.pyx` has a `paths: list[str]` field carrying local filesystem paths
- `AIRecognitionConfig` is passed from the UI as a dict (via the `config_dict` parameter in `run_detect`)

In the new distributed architecture, the UI runs on a separate device (laptop, tablet, phone). The detections module is a standalone API on a different device. Local file paths are meaningless.

## Outcome

- Video detection works with streamed input (no local file paths required)
- Video is simultaneously saved to disk and processed frame-by-frame
- Image detection works with uploaded bytes (no local file paths required)
- `AIRecognitionConfig` is fetched from the DB by userId, not passed from the UI
- Media table records created on upload with correct XxHash64 Id, path, type, status
- Old path-based code removed

## Subtasks

| Jira | Summary | Points |
|------|---------|--------|
| AZ-173 | Replace path-based `run_detect` with stream-based API in `inference.pyx` | 3 |
| AZ-174 | Fetch AIRecognitionConfig from DB by userId instead of UI-passed config | 2 |
| AZ-175 | Integrate Media table: create record on upload, store file, track status | 2 |
| AZ-176 | Clean up obsolete path-based code and old methods | 1 |

## Acceptance Criteria

**AC-1: Stream-based video detection**
Given a video is uploaded via HTTP to the detection API
When the detections module processes it
Then frames are decoded and run through inference without requiring a local file path from the caller

**AC-2: Concurrent write and detect for video**
Given a video stream is being received
When the detection module processes it
Then the stream is simultaneously written to persistent storage AND processed frame-by-frame for detection

**AC-3: Stream-based image detection**
Given an image is uploaded via HTTP to the detection API
When the detections module processes it
Then the image bytes are decoded and run through inference without requiring a local file path

**AC-4: DB-driven AI config**
Given a detection request arrives with a userId (from JWT)
When the detection module needs AIRecognitionConfig
Then it fetches AIRecognitionSettings + CameraSettings from the DB via the annotations service, not from the request payload

**AC-5: Default config on user creation**
Given a new user is created in the system
When their account is provisioned
Then default AIRecognitionSettings and CameraSettings rows are created for that user

**AC-6: Media record lifecycle**
Given a file is uploaded for detection
When the upload is received
Then a Media record is created (XxHash64 Id, Name, Path, MediaType, UserId) and MediaStatus transitions through New → AIProcessing → AIProcessed (or Error)

**AC-7: Old code removed**
Given the refactoring is complete
When the codebase is reviewed
Then no references to `paths` in AIRecognitionConfig, no `cv2.VideoCapture(local_path)`, no `cv2.imread(local_path)`, and no `is_video(filepath)` remain

## File Changes

| File | Action | Description |
|------|--------|-------------|
| `src/inference.pyx` | Modified | Replace `run_detect` with stream-based methods; remove path iteration |
| `src/ai_config.pxd` | Modified | Remove `paths` field |
| `src/ai_config.pyx` | Modified | Remove `paths` field; adapt `from_dict` |
| `src/main.py` | Modified | Fetch config from DB; handle Media records; adapt endpoints |
| `src/loader_http_client.pyx` | Modified | Add method to fetch user AI config from annotations service |

## Technical Notes

- `cv2.VideoCapture` can read from a named pipe or a file being appended to. Alternatives: feed frames via a queue from the HTTP upload handler, or use PyAV for direct byte-stream decoding
- The annotations service (C# backend) owns the DB. Config retrieval requires API endpoints on that service
- The XxHash64 ID generation algorithm is documented in `_docs/00_database_schema.md`
- Token management (JWT refresh) is already implemented in `main.py` via `TokenManager`
- DB tables `AIRecognitionSettings` and `CameraSettings` exist in the schema but are not yet linked to `Users`; need an FK or join table

## Risks & Mitigation

**Risk 1: Concurrent write + read of video file**

- *Risk*: `cv2.VideoCapture` may fail or stall reading an incomplete file
- *Mitigation*: Use a frame queue pipeline (one thread writes, another reads) or PyAV for byte-stream decoding

**Risk 2: Annotations service API dependency**

- *Risk*: New endpoints needed on the C# backend for config retrieval and Media management
- *Mitigation*: Define the API contract upfront; the detections module can use fallback defaults if the service is unreachable

**Risk 3: Config-to-User linking not yet in DB**

- *Risk*: `AIRecognitionSettings` and `CameraSettings` tables have no FK to `Users`
- *Mitigation*: Add a `UserId` FK or create a `UserAIConfig` join table in the backend migration
@@ -0,0 +1,65 @@
# Stream-Based run_detect

**Task**: AZ-173_stream_based_run_detect

**Name**: Replace path-based run_detect with stream-based API in inference.pyx

**Description**: Refactor `run_detect` in `inference.pyx` to accept media bytes/stream instead of a config dict with local file paths. Enable simultaneous disk write and frame-by-frame detection for video.

**Complexity**: 3 points

**Dependencies**: None (core change; other subtasks depend on this)

**Component**: Inference

**Jira**: AZ-173

**Parent**: AZ-172

## Problem

`run_detect` currently takes a `config_dict` containing `paths: list[str]` — local filesystem paths. It iterates over them, guesses the media type via `mimetypes.guess_type`, and opens files with `cv2.VideoCapture` or `cv2.imread`. This doesn't work when the caller is on a different device.

## Current State

```python
cpdef run_detect(self, dict config_dict, object annotation_callback, object status_callback=None):
    ai_config = AIRecognitionConfig.from_dict(config_dict)
    for p in ai_config.paths:
        if self.is_video(p):
            videos.append(p)
        else:
            images.append(p)
    self._process_images(ai_config, images)   # cv2.imread(path)
    for v in videos:
        self._process_video(ai_config, v)     # cv2.VideoCapture(path)
```

## Target State

Split into two dedicated methods:

- `run_detect_video(self, stream, AIRecognitionConfig ai_config, str media_name, str save_path, ...)` — accepts a video stream/bytes, writes to `save_path` while decoding frames for detection
- `run_detect_image(self, bytes image_bytes, AIRecognitionConfig ai_config, str media_name, ...)` — accepts image bytes, decodes in memory

Remove:

- `is_video(self, str filepath)` method
- `paths` iteration loop in `run_detect`
- Direct `cv2.VideoCapture(local_path)` and `cv2.imread(local_path)` calls

## Video Stream Processing Options

**Option A: Write-then-read**
Write the entire upload to a temp file, then open with `cv2.VideoCapture`. Simple but not real-time.

**Option B: Concurrent pipe**
One thread writes incoming bytes to a file; another thread reads frames via `cv2.VideoCapture` on the growing file. Requires careful synchronization.

**Option C: PyAV byte-stream decoding**
Use `av.open(io.BytesIO(data))` or a custom `av.InputContainer` to decode frames directly from bytes without file I/O. Most flexible for streaming.
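The write-and-feed half of Option B can be sketched with stdlib primitives. The decoding side (a `cv2.VideoCapture` on the growing file, or PyAV on the byte stream) is elided, and all names below are illustrative, not the module's API:

```python
import queue
import threading

def tee_stream(chunks, save_path, maxsize=64):
    """Write incoming byte chunks to save_path while forwarding the same
    bytes to a bounded queue for a decoder thread to consume.
    Sketch only: the sentinel convention (None = end of stream) is assumed."""
    q = queue.Queue(maxsize=maxsize)

    def writer():
        with open(save_path, "wb") as f:
            for chunk in chunks:
                f.write(chunk)
                f.flush()      # keep the on-disk copy readable mid-stream
                q.put(chunk)   # hand the same bytes to the decoding side
        q.put(None)            # sentinel: upload finished

    t = threading.Thread(target=writer, daemon=True)
    t.start()
    return q, t
```

The bounded queue also gives back-pressure: a slow decoder cannot let the in-memory buffer grow without limit.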
|
||||
|
||||
## Acceptance Criteria
|
||||
|
||||
- [ ] Video can be processed from bytes/stream without a local file path from the caller
|
||||
- [ ] Video is simultaneously written to disk and processed frame-by-frame
|
||||
- [ ] Image can be processed from bytes without a local file path
|
||||
- [ ] `_process_video_batch` and batch processing logic preserved (only input source changes)
|
||||
- [ ] All existing detection logic (tile splitting, validation, tracking) unaffected
|
||||
|
||||
## File Changes
|
||||
|
||||
| File | Action | Description |
|
||||
|------|--------|-------------|
|
||||
| `src/inference.pyx` | Modified | New stream-based methods, remove path-based `run_detect` |
|
||||
| `src/main.py` | Modified | Adapt callers to new method signatures |
|
||||
@@ -0,0 +1,76 @@
# DB-Driven AI Config

**Task**: AZ-174_db_driven_ai_config

**Name**: Fetch AIRecognitionConfig from DB by userId instead of UI-passed config

**Description**: Replace UI-passed AI configuration with database-driven config fetched by userId from the annotations service.

**Complexity**: 2 points

**Dependencies**: Annotations service needs new endpoint `GET /api/users/{userId}/ai-settings`

**Component**: Configuration

**Jira**: AZ-174

**Parent**: AZ-172

## Problem

`AIRecognitionConfig` is currently built from a dict passed by the caller (UI). In the distributed architecture, the UI should not own or pass detection configuration — it should be stored server-side per user.

## Current State

- `main.py`: `AIConfigDto` Pydantic model with hardcoded defaults, passed as `config_dict`
- `ai_config.pyx`: `AIRecognitionConfig.from_dict(data)` builds from a dict with defaults
- Camera settings (`altitude`, `focal_length`, `sensor_width`) baked into the config DTO
- No DB interaction for config

## Target State

- Extract userId from the JWT (already parsed in `TokenManager._decode_exp`)
- Call the annotations service: `GET /api/users/{userId}/ai-settings`
- Response contains merged `AIRecognitionSettings` + `CameraSettings` fields
- Build `AIRecognitionConfig` from the API response
- Remove `AIConfigDto` from `main.py` (or keep as an optional override for testing)
- Remove the `paths` field from `AIRecognitionConfig` entirely

## DB Tables (from schema)

**AIRecognitionSettings:**

- FramePeriodRecognition (default 4)
- FrameRecognitionSeconds (default 2)
- ProbabilityThreshold (default 0.25)
- TrackingDistanceConfidence
- TrackingProbabilityIncrease
- TrackingIntersectionThreshold
- ModelBatchSize
- BigImageTileOverlapPercent

**CameraSettings:**

- Altitude (default 400 m)
- FocalLength (default 24 mm)
- SensorWidth (default 23.5 mm)

**Linking:** These tables currently have no FK to Users. The backend needs either:

- Add a `UserId` FK to both tables, or
- Create a `UserAIConfig` join table referencing both

## Backend Dependency

The annotations C# service needs:

1. New endpoint: `GET /api/users/{userId}/ai-settings` returning the merged config
2. On user creation: seed default `AIRecognitionSettings` + `CameraSettings` rows
3. Optional: `PUT /api/users/{userId}/ai-settings` for a user to update their config
## Acceptance Criteria

- [ ] Detection endpoint extracts userId from JWT
- [ ] AIRecognitionConfig is fetched from the annotations service by userId
- [ ] Fallback to sensible defaults if the service is unreachable
- [ ] `paths` field removed from `AIRecognitionConfig`
- [ ] Camera settings come from the DB, not the request payload

## File Changes

| File | Action | Description |
|------|--------|-------------|
| `src/main.py` | Modified | Fetch config from annotations service via HTTP |
| `src/ai_config.pxd` | Modified | Remove `paths` field |
| `src/ai_config.pyx` | Modified | Remove `paths` from `__init__` and `from_dict` |
| `src/loader_http_client.pyx` | Modified | Add method to fetch user AI config |
| `src/loader_http_client.pxd` | Modified | Declare new method |
@@ -0,0 +1,73 @@
# Media Table Integration

**Task**: AZ-175_media_table_integration

**Name**: Integrate Media table: create record on upload, store file, track status

**Description**: When a file is uploaded to the detections API, create a Media record in the DB, store the file at the proper path, and update MediaStatus throughout processing.

**Complexity**: 2 points

**Dependencies**: Annotations service needs Media CRUD endpoints

**Component**: Media Management

**Jira**: AZ-175

**Parent**: AZ-172

## Problem

Currently, uploaded files are written to temp files, processed, and deleted. No `Media` record is created in the database. File persistence and status tracking are missing.

## Current State

- `/detect`: writes the upload to a `tempfile.NamedTemporaryFile`, processes it, deletes it via `os.unlink`
- `/detect/{media_id}`: accepts a media_id parameter but doesn't create or manage Media records
- No XxHash64 ID generation in the detections module
- No file storage to persistent paths

## Target State

### On Upload

1. Receive file bytes from the HTTP upload
2. Compute the XxHash64 of the file content using the sampling algorithm
3. Determine MediaType from the file extension (Video or Image)
4. Store the file at the proper path (from DirectorySettings: VideosDir or ImagesDir)
5. Create a Media record via the annotations service: `POST /api/media`
   - Id: XxHash64 hex string
   - Name: original filename
   - Path: storage path
   - MediaType: Video|Image
   - MediaStatus: New (1)
   - UserId: from JWT

### During Processing

6. Update MediaStatus to AIProcessing (2) via `PUT /api/media/{id}/status`
7. Run detection (stream-based per AZ-173)
8. Update MediaStatus to AIProcessed (3) on success, or Error (6) on failure
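The numeric status codes above can be captured in a small enum; the class and member names are illustrative, while the values are taken from this doc:

```python
from enum import IntEnum

class MediaStatus(IntEnum):
    # Numeric values as used in the Media table transitions above.
    NEW = 1
    AI_PROCESSING = 2
    AI_PROCESSED = 3
    ERROR = 6
```

Using `IntEnum` keeps the values wire-compatible with the integer codes the annotations service expects while making the transitions readable in the Python code.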
## XxHash64 Sampling Algorithm

```
For files >= 3072 bytes:
    Input = file_size_as_8_bytes + first_1024_bytes + middle_1024_bytes + last_1024_bytes
    Output = XxHash64(input) as hex string

For files < 3072 bytes:
    Input = file_size_as_8_bytes + entire_file_content
    Output = XxHash64(input) as hex string
```

Virtual hashes (in-memory only) are prefixed with "V".
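The sampling step can be sketched in Python. The byte order of the size prefix and the exact middle-window offset are assumptions here; the authoritative algorithm lives in `_docs/00_database_schema.md`:

```python
import struct

def sample_input(data: bytes) -> bytes:
    """Build the hash input per the sampling scheme above.
    The little-endian size prefix and centered middle window are assumptions."""
    prefix = struct.pack("<Q", len(data))  # file size as 8 bytes
    if len(data) >= 3072:
        mid = (len(data) - 1024) // 2
        return prefix + data[:1024] + data[mid:mid + 1024] + data[-1024:]
    return prefix + data

def media_id(data: bytes) -> str:
    """Hash the sampled input; requires the third-party xxhash package."""
    import xxhash  # pip install xxhash
    return xxhash.xxh64(sample_input(data)).hexdigest()
```

Sampling keeps ID computation O(1) in file size: even multi-gigabyte videos hash at most 3 KiB plus the length prefix.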
## Acceptance Criteria

- [ ] XxHash64 ID computed correctly using the sampling algorithm
- [ ] Media record created in the DB on upload with correct fields
- [ ] File stored at a proper persistent path (not temp)
- [ ] MediaStatus transitions: New → AIProcessing → AIProcessed (or Error)
- [ ] UserId correctly extracted from JWT and associated with the Media record

## File Changes

| File | Action | Description |
|------|--------|-------------|
| `src/main.py` | Modified | Upload handling, Media API calls, status updates |
| `src/media_hash.py` | New | XxHash64 sampling hash utility |
| `requirements.txt` | Modified | Add `xxhash` library if not present |
@@ -0,0 +1,65 @@
# Cleanup Obsolete Path-Based Code

**Task**: AZ-176_cleanup_obsolete_path_code

**Name**: Clean up obsolete path-based code and old methods

**Description**: Remove all code that relies on the old co-located architecture, where the UI sends local file paths to the detection module.

**Complexity**: 1 point

**Dependencies**: AZ-173 (stream-based run_detect), AZ-174 (DB-driven config)

**Component**: Cleanup

**Jira**: AZ-176

**Parent**: AZ-172

## Problem

After implementing stream-based detection and DB-driven config, the old path-based code becomes dead code. It must be removed to avoid confusion and maintenance burden.

## Items to Remove

### `inference.pyx`

| Item | Reason |
|------|--------|
| `is_video(self, str filepath)` | Media type comes from upload metadata, not filesystem guessing |
| `for p in ai_config.paths: ...` loop in `run_detect` | Replaced by stream-based dispatch |
| `cv2.VideoCapture(<str>video_name)` with local path arg | Replaced by stream-based video processing |
| `cv2.imread(<str>path)` with local path arg | Replaced by bytes-based image processing |
| Old `run_detect` signature (if fully replaced) | Replaced by `run_detect_video` / `run_detect_image` |

### `ai_config.pxd`

| Item | Reason |
|------|--------|
| `cdef public list[str] paths` | Paths no longer part of config |

### `ai_config.pyx`

| Item | Reason |
|------|--------|
| `paths` parameter in `__init__` | Paths no longer part of config |
| `self.paths = paths` assignment | Paths no longer part of config |
| `data.get("paths", [])` in `from_dict` | Paths no longer part of config |
| `paths: {self.paths}` in `__str__` | Paths no longer part of config |

### `main.py`

| Item | Reason |
|------|--------|
| `AIConfigDto.paths: list[str]` field | Paths no longer sent by caller |
| `config_dict["paths"] = [tmp.name]` in `/detect` | Temp file path injection no longer needed |

## Acceptance Criteria

- [ ] No references to `paths` in `AIRecognitionConfig` or its Pydantic DTO
- [ ] No `cv2.VideoCapture(local_path)` or `cv2.imread(local_path)` calls remain
- [ ] No `is_video(filepath)` method remains
- [ ] All tests pass after removal
- [ ] No dead imports left behind

## File Changes

| File | Action | Description |
|------|--------|-------------|
| `src/inference.pyx` | Modified | Remove old methods and path-based logic |
| `src/ai_config.pxd` | Modified | Remove `paths` field declaration |
| `src/ai_config.pyx` | Modified | Remove `paths` from `__init__`, `from_dict`, `__str__` |
| `src/main.py` | Modified | Remove `AIConfigDto.paths` and path injection |
@@ -0,0 +1,52 @@
# Baseline Metrics

**Run**: 01-code-cleanup

**Date**: 2026-03-30

## Code Metrics

| Metric | Value |
|--------|-------|
| Source LOC (pyx + pxd + py) | 1,714 |
| Test LOC (e2e + mocks) | 1,238 |
| Source files | 22 (.pyx: 10, .pxd: 9, .py: 3) |
| Test files | 10 |
| Dependencies (requirements.txt) | 11 packages |
| Dead code items identified | 20 |

## Test Suite

| Metric | Value |
|--------|-------|
| Total tests | 23 |
| Passing | 23 |
| Failing | 0 |
| Skipped | 0 |
| Execution time | 11.93s |

## Functionality Inventory

| Endpoint | Method | Coverage | Status |
|----------|--------|----------|--------|
| /health | GET | Covered | Working |
| /detect | POST | Covered | Working |
| /detect/{media_id} | POST | Covered | Working |
| /detect/stream | GET | Covered | Working |

## File Structure (pre-refactoring)

All source code lives in the repository root — no `src/` separation:

- Root: main.py, setup.py, 8x .pyx, 7x .pxd, classes.json
- engines/: 3x .pyx, 4x .pxd, `__init__.py`, `__init__.pxd`
- e2e/: tests, mocks, fixtures, config

## Dead Code Inventory

| Category | Count | Files |
|----------|-------|-------|
| Unused methods | 4 | serialize() x2, from_msgpack(), stop() |
| Unused fields | 3 | file_data, model_batch_size, annotation_name |
| Unused constants | 5 | CONFIG_FILE, QUEUE_CONFIG_FILENAME, CDN_CONFIG, SMALL_SIZE_KB, QUEUE_MAXSIZE |
| Orphaned declarations | 3 | COMMANDS_QUEUE, ANNOTATIONS_QUEUE, weather enum PXD |
| Dead imports | 4 | msgpack x3, typing/numpy in pxd |
| Empty files | 1 | `engines/__init__.pxd` |
@@ -0,0 +1,193 @@
# Logical Flow Analysis

**Run**: 01-code-cleanup

**Date**: 2026-03-30

Each documented business flow (from `_docs/02_document/system-flows.md`) is traced through the actual code. Contradictions are classified as: Logic Bug, Performance Waste, Design Contradiction, or Documentation Drift.

---

## F2: Single Image Detection (`detect_single_image`)

### LF-01: Batch padding wastes compute (Performance Waste)

**Documented**: Client uploads one image → preprocess → engine → postprocess → return detections.

**Actual** (inference.pyx:261-264):

```python
batch_size = self.engine.get_batch_size()
frames = [frame] * batch_size            # duplicate frame N times
input_blob = self.preprocess(frames)     # preprocess N copies
outputs = self.engine.run(input_blob)    # run inference on N copies
list_detections = self.postprocess(outputs, ai_config)
detections = list_detections[0]          # use only first result
```

For TensorRT (batch_size=4): 4x the preprocessing, 4x the inference, 3/4 of the results discarded. For CoreML (batch_size=1): no waste. For ONNX: depends on the model's batch dimension.

**Impact**: Up to 4x unnecessary GPU/CPU compute per single-image request.

**Fix**: The engine should support running with fewer frames than the max batch size. If the engine requires a fixed batch, pad only at the engine boundary, not at the preprocessing level.
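The boundary-padding variant of the fix can be sketched like this; the function and parameter names are illustrative, not the module's actual API:

```python
import numpy as np

def run_padded(engine_run, blob: np.ndarray, batch_size: int) -> np.ndarray:
    """Preprocess only the real frames; zero-pad the blob at the engine
    boundary if a fixed batch is required, then drop the padded outputs."""
    n = blob.shape[0]
    if n < batch_size:
        pad = np.zeros((batch_size - n, *blob.shape[1:]), dtype=blob.dtype)
        blob = np.concatenate([blob, pad], axis=0)
    return engine_run(blob)[:n]  # keep results for real frames only
```

The single image is still preprocessed once, and postprocessing never sees the padded rows, so no duplicate detections can leak out.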
---

## F3: Media Detection — Video Processing (`_process_video`)

### LF-02: Last partial batch silently dropped (Logic Bug / Data Loss)

**Documented** (system-flows.md F3): "loop For each media file → preprocess/batch → engine → postprocess"

**Actual** (inference.pyx:297-340):

```python
while v_input.isOpened() and not self.stop_signal:
    ret, frame = v_input.read()
    if not ret or frame is None:
        break
    frame_count += 1
    if frame_count % ai_config.frame_period_recognition == 0:
        batch_frames.append(frame)
        batch_timestamps.append(...)

    if len(batch_frames) == self.engine.get_batch_size():
        # process batch
        ...
        batch_frames.clear()
        batch_timestamps.clear()

v_input.release()  # loop ends
self.send_detection_status()
# batch_frames may still have 1..(batch_size-1) unprocessed frames — DROPPED
```

When the video ends, any remaining frames in `batch_frames` (fewer than `batch_size`) are silently lost. For batch_size=4 and frame_period=4: up to 3 sampled frames at the end of every video are never processed.

**Impact**: Detections in the final seconds of every video are potentially missed.

### LF-03: `split_list_extend` padding is unnecessary and harmful (Design Contradiction + Performance Waste)

**Design intent**: With dynamic batch sizing (agreed upon during the engine refactoring in Step 3), engines should accept variable-size inputs.

**Actual** (inference.pyx:208-217):

```python
cdef split_list_extend(self, lst, chunk_size):
    chunks = [lst[i:i + chunk_size] for i in range(0, len(lst), chunk_size)]
    last_chunk = chunks[len(chunks) - 1]
    if len(last_chunk) < chunk_size:
        last_elem = last_chunk[len(last_chunk) - 1]
        while len(last_chunk) < chunk_size:
            last_chunk.append(last_elem)
    return chunks
```

This duplicates the last element to pad the final chunk to exactly `chunk_size`. Problems:

1. With dynamic batch sizing, this padding is completely unnecessary — just process the smaller batch
2. The duplicated frames go through full preprocessing and inference, wasting compute
3. The duplicated detections from padded frames are processed by `_process_images_inner` and may emit duplicate annotations (the dedup logic only catches tile overlaps, not frame-level duplicates from padding)

**Impact**: Unnecessary compute + potential duplicate detections from padded frames.

### LF-04: Fixed batch gate `==` should be `>=` or removed entirely (Design Contradiction)

**Actual** (inference.pyx:307):

```python
if len(batch_frames) == self.engine.get_batch_size():
```

This strict equality means: only process when the batch is **exactly** full. Combined with LF-02 (no flush), remaining frames are dropped. With dynamic batch support, this gate is unnecessary — process frames as they accumulate, or at minimum flush the remaining frames after the loop.

---

## F3: Media Detection — Image Processing (`_process_images`)

### LF-05: Non-last small images silently dropped (Logic Bug / Data Loss)

**Actual** (inference.pyx:349-379):

```python
for path in image_paths:
    frame_data = []            # ← RESET each iteration
    frame = cv2.imread(path)
    ...
    frame_data.append(...)     # or .extend(...) for tiled images

    if len(frame_data) > self.engine.get_batch_size():
        for chunk in self.split_list_extend(frame_data, ...):
            self._process_images_inner(...)
        self.send_detection_status()

# Outside the loop: only the LAST image's frame_data survives
for chunk in self.split_list_extend(frame_data, ...):
    self._process_images_inner(...)
self.send_detection_status()
```

Walk through with 3 images [A(small), B(small), C(small)] and batch_size=4:

- Iteration A: `frame_data = [(A, ...)]`. `1 > 4` → False. Not processed.
- Iteration B: `frame_data = [(B, ...)]` (A lost!). `1 > 4` → False. Not processed.
- Iteration C: `frame_data = [(C, ...)]` (B lost!). `1 > 4` → False. Not processed.
- After loop: `frame_data = [(C, ...)]` → processed. Only C was ever detected.

**Impact**: In multi-image media detection, all images except the last are silently dropped when each is smaller than the batch size. This is a critical data loss bug.

### LF-06: Large images double-processed (Logic Bug)

With image D producing 10 tiles and batch_size=4:

- Inside the loop: `10 > 4` → True. All 10 tiles processed (3 chunks: 4+4+4 with the last padded). `send_detection_status()` called.
- After the loop: `frame_data` still contains all 10 tiles. Processed again (3 more chunks). `send_detection_status()` called again.

**Impact**: Large images get inference run twice, producing duplicate detection events.

### LF-07: `frame.shape` before None check (Logic Bug / Crash)

**Actual** (inference.pyx:355-358):

```python
frame = cv2.imread(<str>path)
img_h, img_w, _ = frame.shape  # crashes if frame is None
if frame is None:              # dead code — never reached
    continue
```

**Impact**: A corrupt or missing image file crashes the entire detection pipeline instead of being gracefully skipped.

---

## Cross-Cutting: Batch Size Design Contradiction

### LF-08: Entire pipeline assumes fixed batch size (Design Contradiction)

The engine polymorphism (Step 3 refactoring) established that different engines have different batch sizes: TensorRT=4, CoreML=1, ONNX=variable. But the processing pipeline treats batch size as a fixed gate:

| Location | Pattern | Problem |
|----------|---------|---------|
| `detect_single_image:262` | `[frame] * batch_size` | Pads a single frame to batch size |
| `_process_video:307` | `== batch_size` | Only processes exact-full batches |
| `_process_images:372` | `> batch_size` | Only processes when exceeding batch |
| `split_list_extend` | Pads last chunk | Duplicates frames to fill batch |

All engines already accept the full batch as a numpy blob. The fix is to make the pipeline batch-agnostic: collect frames, process when you have enough OR when the stream ends. Never pad with duplicates.
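The collect-and-flush pattern can be sketched as a small generator; with it in place, the `==` gate (LF-04), the end-of-stream drop (LF-02), and the duplicate padding (LF-03) all disappear:

```python
from typing import Iterable, Iterator, List, TypeVar

T = TypeVar("T")

def batched(frames: Iterable[T], max_batch: int) -> Iterator[List[T]]:
    """Yield batches of up to max_batch frames, flushing the final
    partial batch instead of dropping or padding it."""
    batch: List[T] = []
    for f in frames:
        batch.append(f)
        if len(batch) == max_batch:
            yield batch
            batch = []
    if batch:  # stream ended with a partial batch: process it anyway
        yield batch
```

Recent Python versions ship `itertools.batched` with similar semantics (yielding tuples), which could replace this helper for in-memory iterables.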
---

## Architecture Documentation Drift

### LF-09: Architecture doc lists msgpack as active technology (Documentation Drift)

**Architecture.md** § Technology Stack:

> "Serialization | msgpack | 1.1.1 | Compact binary serialization for annotations and configs"

**Reality**: All `serialize()` and `from_msgpack()` methods are dead code. The system uses Pydantic JSON for API responses and `from_dict()` for config parsing. msgpack is not used by any live code path.

---

## Summary Table

| ID | Flow | Type | Severity | Description |
|----|------|------|----------|-------------|
| LF-01 | F2 | Performance Waste | Medium | Single image duplicated to fill batch — up to 4x wasted compute |
| LF-02 | F3/Video | Data Loss | High | Last partial video batch silently dropped |
| LF-03 | F3/Both | Design Contradiction + Perf | Medium | `split_list_extend` pads with duplicates instead of processing a smaller batch |
| LF-04 | F3/Video | Design Contradiction | High | Fixed `== batch_size` gate prevents partial batch processing |
| LF-05 | F3/Images | Data Loss | Critical | Non-last small images silently dropped in multi-image processing |
| LF-06 | F3/Images | Logic Bug | High | Large images processed twice (inside loop + after loop) |
| LF-07 | F3/Images | Crash | High | `frame.shape` before None check |
| LF-08 | Cross-cutting | Design Contradiction | High | Entire pipeline assumes a fixed batch size vs dynamic engine reality |
| LF-09 | Documentation | Drift | Low | Architecture lists msgpack as active; it's dead code |
@@ -0,0 +1,132 @@
# List of Changes

**Run**: 01-code-cleanup
**Mode**: automatic
**Source**: self-discovered
**Date**: 2026-03-30

## Summary

Two tiers: (1) fix critical logical-flow bugs — batch handling, data loss, and crash prevention — and remove the fixed-batch-size assumption that contradicts the dynamic engine design; (2) clean up dead code, make paths configurable, add HTTP timeouts, and move source to `src/`.
## Changes

### C01: Move source code to `src/` directory

- **File(s)**: main.py, inference.pyx, constants_inf.pyx, constants_inf.pxd, annotation.pyx, annotation.pxd, ai_config.pyx, ai_config.pxd, ai_availability_status.pyx, ai_availability_status.pxd, loader_http_client.pyx, loader_http_client.pxd, engines/, setup.py, run-tests.sh, e2e/run_local.sh, e2e/docker-compose.test.yml
- **Problem**: All source code sits in the repository root, mixed with config, docs, and test infrastructure.
- **Change**: Move all application source files into `src/`. Update setup.py extension paths, run-tests.sh, e2e scripts, and docker-compose volumes. Keep setup.py, requirements, and tests at the root.
- **Rationale**: Project convention requires source under `src/`.
- **Risk**: medium
- **Dependencies**: None (do first — all other changes reference the new paths)

### C02: Fix `_process_images` — accumulate all images, process once (LF-05, LF-06)

- **File(s)**: src/inference.pyx (`_process_images`)
- **Problem**: `frame_data = []` is reset inside the per-image loop, so only the last image's data survives to the outer processing loop; non-last small images are silently dropped. Large images that exceed batch_size inside the loop are also re-processed outside the loop (double-processing).
- **Change**: Accumulate frame_data across ALL images (move the reset before the loop). Process all accumulated data once after the loop. Remove the inner batch-processing + status call. Each image's tiles/frames should carry their own ground_sampling_distance so mixed-GSD images process correctly.
- **Rationale**: Critical data loss — multi-image requests silently drop every image except the last.
- **Risk**: medium
- **Dependencies**: C01, C04
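
The shape of the C02 fix can be sketched in plain Python; `read_tiles` and `run_batch` are hypothetical stand-ins for the module's tiling and engine-inference steps, not its real functions:

```python
def process_images(paths, read_tiles, run_batch):
    """Accumulate tiles across ALL images, then run inference once."""
    frame_data = []                      # reset ONCE, before the loop
    for path in paths:
        tiles = read_tiles(path)
        # each tile travels with its source path (and, in the real code,
        # its own ground_sampling_distance metadata)
        frame_data.extend((path, tile) for tile in tiles)
    if frame_data:                       # single processing pass at the end
        return run_batch(frame_data)
    return []
```

Because the reset happens before the loop and inference happens after it, every image contributes to the output and nothing is processed twice.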

### C03: Fix `_process_video` — flush remaining frames after loop (LF-02, LF-04)

- **File(s)**: src/inference.pyx (`_process_video`)
- **Problem**: The `if len(batch_frames) == self.engine.get_batch_size()` gate means frames are only processed in exact-batch-size groups. When the video ends with a partial batch (1..batch_size-1 frames), those frames are silently dropped. Detections at the end of every video are potentially missed.
- **Change**: After the video read loop, if `batch_frames` is non-empty, process the remaining frames as a partial batch (no padding). Change the `==` gate to `>=` as a safety measure, though with the flush it's not strictly needed.
- **Rationale**: Silent data loss — the last frames of every video are dropped.
- **Risk**: medium
- **Dependencies**: C01, C04
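
A minimal sketch of the flush pattern, assuming a `read_frame` callable that returns `(ok, frame)` in the style of `cv2.VideoCapture.read()`; `run_batch` is again a hypothetical stand-in for engine inference:

```python
def process_video(read_frame, run_batch, max_batch):
    """Process frames in batches; flush the trailing partial batch."""
    batch_frames = []
    while True:
        ok, frame = read_frame()
        if not ok:                            # end of stream
            break
        batch_frames.append(frame)
        if len(batch_frames) >= max_batch:    # ">=" gate, not "=="
            run_batch(batch_frames)
            batch_frames = []
    if batch_frames:                          # flush remaining frames, no padding
        run_batch(batch_frames)
```

The post-loop flush is the essential part: without it, any video whose frame count is not a multiple of the batch size loses its tail.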

### C04: Remove `split_list_extend` — replace with simple chunking without padding (LF-03, LF-08)

- **File(s)**: src/inference.pyx (`split_list_extend`, `_process_images`, `detect_single_image`)
- **Problem**: `split_list_extend` pads the last chunk by duplicating its final element to fill `batch_size`. This wastes compute (duplicate inference), may produce duplicate detections, and contradicts the dynamic batch design established in Step 3 (engine polymorphism). In `detect_single_image`, `[frame] * batch_size` pads a single frame to batch_size copies — same issue.
- **Change**: Replace `split_list_extend` with plain chunking (no padding). The last chunk keeps its natural size. In `detect_single_image`, pass a single-frame list. Engine `run()` and `preprocess()` must handle variable-size input — verify each engine supports this or add a minimal adapter.
- **Rationale**: Unnecessary compute (up to 4x for TensorRT single-image), potential duplicate detections from padding, contradicts the dynamic batch design.
- **Risk**: high
- **Dependencies**: C01
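
A plain chunker of the kind described, assuming downstream engines accept variable batch sizes; `chunk_list` is an illustrative name, not the actual replacement function:

```python
def chunk_list(items, size):
    """Split into chunks of `size`; the last chunk keeps its natural
    length instead of being padded with duplicated elements."""
    return [items[i:i + size] for i in range(0, len(items), size)]
```

The single-image case falls out for free: a one-frame list yields one one-frame chunk, so `detect_single_image` no longer runs inference on duplicated copies.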

### C05: Fix frame-is-None crash in `_process_images` (LF-07)

- **File(s)**: src/inference.pyx (`_process_images`)
- **Problem**: `frame.shape` is accessed before the `frame is None` check. If `cv2.imread` fails, the pipeline crashes instead of skipping the file.
- **Change**: Move the None check before the shape access.
- **Rationale**: Crash prevention for missing/corrupt image files.
- **Risk**: low
- **Dependencies**: C01

### C06: Remove orphaned RabbitMQ declarations from constants_inf.pxd

- **File(s)**: src/constants_inf.pxd
- **Problem**: `QUEUE_MAXSIZE`, `COMMANDS_QUEUE`, and `ANNOTATIONS_QUEUE` are declared but have no implementations — remnants of the previous RabbitMQ architecture.
- **Change**: Remove the three declarations and their comments.
- **Rationale**: Dead declarations mislead about the system architecture.
- **Risk**: low
- **Dependencies**: C01

### C07: Remove unused constants from constants_inf

- **File(s)**: src/constants_inf.pxd, src/constants_inf.pyx
- **Problem**: `CONFIG_FILE` (with a stale "zmq" comment), `QUEUE_CONFIG_FILENAME`, `CDN_CONFIG`, and `SMALL_SIZE_KB` are defined but never referenced.
- **Change**: Remove all four from the .pxd and .pyx files.
- **Rationale**: Dead constants with misleading comments.
- **Risk**: low
- **Dependencies**: C01

### C08: Remove dead serialize/from_msgpack methods and msgpack imports

- **File(s)**: src/annotation.pyx, src/annotation.pxd, src/ai_availability_status.pyx, src/ai_availability_status.pxd, src/ai_config.pyx, src/ai_config.pxd
- **Problem**: `Annotation.serialize()`, `AIAvailabilityStatus.serialize()`, and `AIRecognitionConfig.from_msgpack()` are all dead. The associated `import msgpack` / `from msgpack import unpackb` statements only serve these dead methods.
- **Change**: Remove all three methods from the .pyx and .pxd files. Remove the msgpack imports.
- **Rationale**: Legacy queue-era serialization with no callers.
- **Risk**: low
- **Dependencies**: C01

### C09: Remove unused fields (file_data, model_batch_size, annotation_name)

- **File(s)**: src/ai_config.pyx, src/ai_config.pxd, src/annotation.pyx, src/annotation.pxd, src/main.py
- **Problem**: `AIRecognitionConfig.file_data` is populated but never read. `AIRecognitionConfig.model_batch_size` is parsed but never used (the engine owns batch size). `Detection.annotation_name` is set but never read.
- **Change**: Remove the field declarations from the .pxd files and from the constructors and factory methods in the .pyx files. Remove `file_data` and `model_batch_size` from AIConfigDto in main.py. Remove the annotation_name assignment loop in `Annotation.__init__`.
- **Rationale**: Dead fields that mislead about responsibilities.
- **Risk**: low
- **Dependencies**: C01, C08

### C10: Remove misc dead code (stop no-op, empty pxd, unused pxd imports)

- **File(s)**: src/loader_http_client.pyx, src/loader_http_client.pxd, src/engines/__init__.pxd, src/engines/inference_engine.pxd
- **Problem**: `LoaderHttpClient.stop()` is a no-op. `engines/__init__.pxd` is empty. `inference_engine.pxd` imports `List, Tuple` from typing and `numpy` — both unused.
- **Change**: Remove `stop()` from the .pyx and .pxd files. Delete the empty `__init__.pxd`. Remove the unused imports from inference_engine.pxd.
- **Rationale**: Dead code noise.
- **Risk**: low
- **Dependencies**: C01

### C11: Remove msgpack from requirements.txt

- **File(s)**: requirements.txt
- **Problem**: `msgpack==1.1.1` has no consumers after C08 removes all msgpack usage.
- **Change**: Remove it from requirements.txt.
- **Rationale**: Unused dependency.
- **Risk**: low
- **Dependencies**: C08

### C12: Make classes.json path configurable via env var

- **File(s)**: src/constants_inf.pyx
- **Problem**: `open('classes.json')` is hardcoded and depends on the CWD at import time.
- **Change**: Read the path from `os.environ.get("CLASSES_JSON_PATH", "classes.json")`.
- **Rationale**: Environment-appropriate configuration.
- **Risk**: low
- **Dependencies**: C01

### C13: Make log directory configurable via env var

- **File(s)**: src/constants_inf.pyx
- **Problem**: `sink="Logs/log_inference_..."` is hardcoded.
- **Change**: Read the directory from `os.environ.get("LOG_DIR", "Logs")`.
- **Rationale**: Environment configurability.
- **Risk**: low
- **Dependencies**: C01

### C14: Add timeouts to LoaderHttpClient HTTP calls

- **File(s)**: src/loader_http_client.pyx
- **Problem**: No explicit timeout on the `requests.post()` calls, so a stalled loader hangs the detections service.
- **Change**: Add `timeout=120` to the load and upload calls.
- **Rationale**: Prevent service hangs.
- **Risk**: low
- **Dependencies**: C01

### C15: Update architecture doc — remove msgpack from tech stack (LF-09)

- **File(s)**: _docs/02_document/architecture.md
- **Problem**: The tech stack lists "msgpack | 1.1.1 | Compact binary serialization for annotations and configs", but msgpack is dead code after this refactoring.
- **Change**: Remove the msgpack row from the tech stack table.
- **Rationale**: Documentation accuracy.
- **Risk**: low
- **Dependencies**: C08, C11
@@ -4,8 +4,8 @@
flow: existing-code
step: 7
name: Refactor
status: not_started
sub_step: 0
status: completed
sub_step: done
retry_count: 0

## Completed Steps
@@ -18,6 +18,7 @@ retry_count: 0
| 4 | Decompose Tests | 2026-03-23 | 11 tasks (AZ-138..AZ-148), 35 complexity points, 3 batches. Phase 3 test data gate PASSED: 39/39 scenarios validated, 12 data files provided. |
| 5 | Implement Tests | 2026-03-23 | 11 tasks implemented across 4 batches, 38 tests (2 skipped), all code reviews PASS_WITH_WARNINGS. Commits: 5418bd7, a469579, 861d4f0, f0e3737. |
| 6 | Run Tests | 2026-03-30 | 23 passed, 0 failed, 0 skipped, 0 errors in 11.93s. Fixed: Cython __reduce_cython__ (clean rebuild), missing Pillow dep, relative MEDIA_DIR paths. Removed 14 dead/unreachable tests. Updated test-run skill to treat skips as blocking gate. |
| 7 | Refactor | 2026-03-31 | Engine-centric dynamic batch refactoring. Moved source to src/. Engine pipeline redesign: preprocess/postprocess/process_frames in base InferenceEngine, dynamic batching per engine (CoreML=1, TensorRT=GPU-calculated, ONNX=config). Fixed: video partial batch flush, image accumulation data loss, frame-is-None crash. Removed detect_single_image (POST /detect delegates to run_detect). Dead code: removed msgpack, serialize methods, unused constants/fields. Configurable classes.json + log paths, HTTP timeouts. 28 e2e tests pass. |

## Key Decisions
- User chose to document existing codebase before proceeding
@@ -35,12 +36,19 @@ retry_count: 0
- User confirmed dependency table and test data gate
- Jira MCP auth skipped — tickets not transitioned to In Testing
- Test run: removed 14 dead/unreachable tests (explicit @skip + runtime always-skip), added .c to .gitignore
- User chose to refactor (option A) — clean up legacy dead code
- User requested: move code to src/, thorough re-analysis, exhaustive refactoring list
- Refactoring round: 01-code-cleanup, automatic mode, 15 changes identified
- User feedback: analyze logical flow contradictions, not just static code. Updated refactor skill Phase 1 with logical flow analysis.
- User chose: split scope — engine refactoring as Step 7, architecture shift (streaming, DB config, media storage, Jetson) as Step 8
- User chose: remove detect_single_image, POST /detect delegates to run_detect
- GPU memory fraction: 80% for inference, 20% buffer (Jetson 40% deferred to Step 8)

## Last Session
date: 2026-03-30
ended_at: Step 6 completed, Step 7 (Refactor) next
reason: All 23 tests pass with zero skips
notes: Fixed Cython build (clean rebuild resolved __reduce_cython__ KeyError), installed missing Pillow, used absolute MEDIA_DIR. Service crash root-caused to CoreML thread-safety during concurrent requests (not a test issue). Updated test-run skill: skipped tests now require investigation like failures.
date: 2026-03-31
ended_at: Step 7 complete — all 11 todos done, 28 e2e tests pass
reason: Refactoring complete
notes: Engine-centric dynamic batch refactoring implemented. Source moved to src/. InferenceEngine base class now owns preprocess/postprocess/process_frames with per-engine max_batch_size. CoreML overrides preprocess (direct PIL, no blob reversal) and postprocess. TensorRT calculates max_batch_size from GPU memory (80% fraction) with optimization profiles for dynamic batch. All logical flow bugs fixed (LF-01 through LF-09). Dead code removed (msgpack, serialize, unused constants). POST /detect unified through run_detect. Next: Step 8 (architecture shift — streaming media, DB-backed config, media storage, Jetson support).

## Blockers
- none