Add AIAvailabilityStatus and AIRecognitionConfig classes for AI model management

- Introduced `AIAvailabilityStatus` class to manage the availability status of AI models, including methods for setting status and logging messages.
- Added `AIRecognitionConfig` class to encapsulate configuration parameters for AI recognition, with a static method for creating instances from dictionaries.
- Implemented enums for AI availability states to enhance clarity and maintainability.
- Updated related Cython files to support the new classes and ensure proper type handling.

These changes aim to improve the structure and functionality of the AI model management system, facilitating better status tracking and configuration handling.
Commit 8ce40a9385 (parent fc57d677b4) by Oleksandr Bezdieniezhnykh, 2026-03-31 05:49:51 +03:00. 43 changed files with 1190 additions and 462 deletions.
# Logical Flow Analysis
**Run**: 01-code-cleanup
**Date**: 2026-03-30
Each documented business flow (from `_docs/02_document/system-flows.md`) is traced through the actual code. Contradictions are classified as: Logic Bug, Performance Waste, Design Contradiction, or Documentation Drift.
---
## F2: Single Image Detection (`detect_single_image`)
### LF-01: Batch padding wastes compute (Performance Waste)
**Documented**: Client uploads one image → preprocess → engine → postprocess → return detections.
**Actual** (inference.pyx:261-264):
```python
batch_size = self.engine.get_batch_size()
frames = [frame] * batch_size # duplicate frame N times
input_blob = self.preprocess(frames) # preprocess N copies
outputs = self.engine.run(input_blob)  # run inference on N copies
list_detections = self.postprocess(outputs, ai_config)
detections = list_detections[0] # use only first result
```
For TensorRT (batch_size=4): 4x the preprocessing, 4x the inference, 3/4 of results discarded. For CoreML (batch_size=1): no waste. For ONNX: depends on model's batch dimension.
**Impact**: Up to 4x unnecessary GPU/CPU compute per single-image request.
**Fix**: Engine should support running with fewer frames than max batch size. If the engine requires fixed batch, pad only at the engine boundary, not at the preprocessing level.
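A minimal sketch of the boundary-padding fix. The engine interface (`get_batch_size`, `run`) and the `preprocess`/`postprocess` signatures are assumed from the snippets above; zero-padding the already-preprocessed blob is one illustrative option, not the only one:

```python
import numpy as np

def detect_single_image_sketch(frame, engine, preprocess, postprocess, ai_config):
    """Preprocess ONE frame; pad only at the engine boundary if required."""
    input_blob = preprocess([frame])            # shape (1, ...), a single real frame
    batch_size = engine.get_batch_size()
    if batch_size > 1:
        # Zero-pad the preprocessed blob instead of preprocessing
        # batch_size copies of the same frame.
        pad_shape = (batch_size - 1,) + input_blob.shape[1:]
        input_blob = np.concatenate(
            [input_blob, np.zeros(pad_shape, dtype=input_blob.dtype)], axis=0)
    outputs = engine.run(input_blob)
    return postprocess(outputs, ai_config)[0]   # only the first result is real
```

This keeps preprocessing at 1x regardless of engine batch size; the padded inference cost remains until the engine itself accepts variable batches.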
---
## F3: Media Detection — Video Processing (`_process_video`)
### LF-02: Last partial batch silently dropped (Logic Bug / Data Loss)
**Documented** (system-flows.md F3): "loop For each media file → preprocess/batch → engine → postprocess"
**Actual** (inference.pyx:297-340):
```python
while v_input.isOpened() and not self.stop_signal:
ret, frame = v_input.read()
if not ret or frame is None:
break
frame_count += 1
if frame_count % ai_config.frame_period_recognition == 0:
batch_frames.append(frame)
batch_timestamps.append(...)
if len(batch_frames) == self.engine.get_batch_size():
# process batch
...
batch_frames.clear()
batch_timestamps.clear()
v_input.release() # loop ends
self.send_detection_status()
# batch_frames may still have 1..(batch_size-1) unprocessed frames — DROPPED
```
When the video ends, any remaining frames in `batch_frames` (fewer than `batch_size`) are silently lost. For batch_size=4 and frame_period=4: up to 3 sampled frames at the end of every video are never processed.
**Impact**: Detections in the final seconds of every video are potentially missed.
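The corrected loop shape, as an illustrative sketch: `process_batch` stands in for the preprocess/inference/postprocess step, and the frame source is abstracted to any iterable; these names are not the real pipeline API.

```python
def process_sampled_frames(frames, frame_period, batch_size, process_batch):
    """Sample every `frame_period`-th frame, batch, and flush the remainder."""
    batch = []
    for frame_count, frame in enumerate(frames, start=1):
        if frame_count % frame_period != 0:
            continue
        batch.append(frame)
        if len(batch) >= batch_size:
            process_batch(batch)
            batch = []
    if batch:        # the flush the current code is missing
        process_batch(batch)
```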
### LF-03: `split_list_extend` padding is unnecessary and harmful (Design Contradiction + Performance Waste)
**Design intent**: With dynamic batch sizing (agreed upon during engine refactoring in Step 3), engines should accept variable-size inputs.
**Actual** (inference.pyx:208-217):
```python
cdef split_list_extend(self, lst, chunk_size):
chunks = [lst[i:i + chunk_size] for i in range(0, len(lst), chunk_size)]
last_chunk = chunks[len(chunks) - 1]
if len(last_chunk) < chunk_size:
last_elem = last_chunk[len(last_chunk)-1]
while len(last_chunk) < chunk_size:
last_chunk.append(last_elem)
return chunks
```
This duplicates the last element to pad the final chunk to exactly `chunk_size`. Problems:
1. With dynamic batch sizing, this padding is completely unnecessary — just process the smaller batch
2. The duplicated frames go through full preprocessing and inference, wasting compute
3. The duplicated detections from padded frames are processed by `_process_images_inner` and may emit duplicate annotations (the dedup logic only catches tile overlaps, not frame-level duplicates from padding)
**Impact**: Unnecessary compute + potential duplicate detections from padded frames.
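With dynamic batch support, the whole helper reduces to a plain chunker that lets the last chunk be short. A sketch of the replacement (name hypothetical):

```python
def split_list(lst, chunk_size):
    """Split into chunks of at most `chunk_size`; the last chunk may be short.

    No duplicate padding: an engine that accepts variable-size inputs
    can take the short final chunk as-is.
    """
    return [lst[i:i + chunk_size] for i in range(0, len(lst), chunk_size)]
```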
### LF-04: Fixed batch gate `==` should be `>=` or removed entirely (Design Contradiction)
**Actual** (inference.pyx:307):
```python
if len(batch_frames) == self.engine.get_batch_size():
```
This strict equality means: only process when the batch is **exactly** full. Combined with LF-02 (no flush), remaining frames are dropped. With dynamic batch support, this gate is unnecessary — process frames as they accumulate, or at minimum flush remaining frames after the loop.
---
## F3: Media Detection — Image Processing (`_process_images`)
### LF-05: Non-last small images silently dropped (Logic Bug / Data Loss)
**Actual** (inference.pyx:349-379):
```python
for path in image_paths:
frame_data = [] # ← RESET each iteration
frame = cv2.imread(path)
...
frame_data.append(...) # or .extend(...) for tiled images
if len(frame_data) > self.engine.get_batch_size():
for chunk in self.split_list_extend(frame_data, ...):
self._process_images_inner(...)
self.send_detection_status()
# Outside loop: only the LAST image's frame_data survives
for chunk in self.split_list_extend(frame_data, ...):
self._process_images_inner(...)
self.send_detection_status()
```
Walk through with 3 images [A(small), B(small), C(small)] and batch_size=4:
- Iteration A: `frame_data = [(A, ...)]`. `1 > 4` → False. Not processed.
- Iteration B: `frame_data = [(B, ...)]` (A lost!). `1 > 4` → False. Not processed.
- Iteration C: `frame_data = [(C, ...)]` (B lost!). `1 > 4` → False. Not processed.
- After loop: `frame_data = [(C, ...)]` → processed. Only C was ever detected.
**Impact**: In multi-image media detection, all images except the last are silently dropped when each is smaller than the batch size. This is a critical data loss bug.
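A sketch of the corrected shape. Two deliberate differences from the current code: `frame_data` is not reset per image, and drained tiles are removed from the buffer, which also prevents the double-processing described in LF-06 below. `load_tiles` and `process_chunk` are illustrative stand-ins, not the real pipeline API.

```python
def process_images_sketch(image_paths, load_tiles, batch_size, process_chunk):
    """Accumulate tiles across ALL images; drain full batches as they form."""
    frame_data = []                          # persists across images
    for path in image_paths:
        tiles = load_tiles(path)
        if tiles is None:                    # unreadable image: skip it
            continue
        frame_data.extend(tiles)
        while len(frame_data) >= batch_size:
            process_chunk(frame_data[:batch_size])
            frame_data = frame_data[batch_size:]
    if frame_data:                           # flush the trailing partial batch
        process_chunk(frame_data)
```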
### LF-06: Large images double-processed (Logic Bug)
With image D producing 10 tiles and batch_size=4:
- Inside loop: `10 > 4` → True. All 10 tiles processed (3 chunks of 4: 4+4+2, the last padded from 2). `send_detection_status()` called.
- After loop: `frame_data` still contains all 10 tiles. Processed again (3 more chunks). `send_detection_status()` called again.
**Impact**: Large images get inference run twice, producing duplicate detection events.
### LF-07: `frame.shape` before None check (Logic Bug / Crash)
**Actual** (inference.pyx:355-358):
```python
frame = cv2.imread(<str>path)
img_h, img_w, _ = frame.shape # crashes if frame is None
if frame is None: # dead code — never reached
continue
```
**Impact**: Corrupt or missing image file crashes the entire detection pipeline instead of gracefully skipping.
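The fix is a reorder: guard before the attribute access. Sketched with `read_image` standing in for `cv2.imread`, which returns `None` when the file is missing or corrupt:

```python
def read_image_safely(read_image, path):
    """Guard against a failed read BEFORE touching `.shape`."""
    frame = read_image(path)
    if frame is None:                # check first: None has no .shape
        return None                  # caller can `continue` past this image
    img_h, img_w = frame.shape[:2]
    return frame, img_h, img_w
```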
---
## Cross-Cutting: Batch Size Design Contradiction
### LF-08: Entire pipeline assumes fixed batch size (Design Contradiction)
The engine polymorphism (Step 3 refactoring) established that different engines have different batch sizes: TensorRT=4, CoreML=1, ONNX=variable. But the processing pipeline treats batch size as a fixed gate:
| Location | Pattern | Problem |
|----------|---------|---------|
| `detect_single_image:262` | `[frame] * batch_size` | Pads single frame to batch size |
| `_process_video:307` | `== batch_size` | Only processes exact-full batches |
| `_process_images:372` | `> batch_size` | Only processes when exceeding batch |
| `split_list_extend` | Pads last chunk | Duplicates frames to fill batch |
All engines already accept the full batch as a numpy blob. The fix is to make the pipeline batch-agnostic: collect frames, process when you have enough OR when the stream ends. Never pad with duplicates.
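The "collect, process when full, flush at end of stream" pattern above can be factored into one reusable helper that all three call sites could share. An illustrative sketch of the proposed shape (class and callback names are hypothetical):

```python
class BatchAccumulator:
    """Batch-agnostic collector: emit a batch when enough items accumulate,
    flush whatever remains when the stream ends, never pad with duplicates."""

    def __init__(self, batch_size, process):
        self.batch_size = batch_size
        self.process = process         # callback: preprocess + infer + postprocess
        self.buffer = []

    def add(self, item):
        self.buffer.append(item)
        if len(self.buffer) >= self.batch_size:
            self.process(self.buffer[:self.batch_size])
            self.buffer = self.buffer[self.batch_size:]

    def flush(self):
        if self.buffer:                # partial batch at end of stream
            self.process(self.buffer)
            self.buffer = []
```

Single-image detection becomes `add` + `flush` with one frame; video and multi-image processing call `add` per sampled frame or tile and `flush` once the source is exhausted.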
---
## Architecture Documentation Drift
### LF-09: Architecture doc lists msgpack as active technology (Documentation Drift)
**Architecture.md** § Technology Stack:
> "Serialization | msgpack | 1.1.1 | Compact binary serialization for annotations and configs"
**Reality**: All `serialize()` and `from_msgpack()` methods are dead code. The system uses Pydantic JSON for API responses and `from_dict()` for config parsing. msgpack is not used by any live code path.
---
## Summary Table
| ID | Flow | Type | Severity | Description |
|----|------|------|----------|-------------|
| LF-01 | F2 | Performance Waste | Medium | Single image duplicated to fill batch — up to 4x wasted compute |
| LF-02 | F3/Video | Data Loss | High | Last partial video batch silently dropped |
| LF-03 | F3/Both | Design Contradiction + Perf | Medium | split_list_extend pads with duplicates instead of processing smaller batch |
| LF-04 | F3/Video | Design Contradiction | High | Fixed `== batch_size` gate prevents partial batch processing |
| LF-05 | F3/Images | Data Loss | Critical | Non-last small images silently dropped in multi-image processing |
| LF-06 | F3/Images | Logic Bug | High | Large images processed twice (inside loop + after loop) |
| LF-07 | F3/Images | Crash | High | frame.shape before None check |
| LF-08 | Cross-cutting | Design Contradiction | High | Entire pipeline assumes fixed batch size vs dynamic engine reality |
| LF-09 | Documentation | Drift | Low | Architecture lists msgpack as active; it's dead |