mirror of
https://github.com/azaion/detections.git
synced 2026-04-22 09:36:32 +00:00
Add detailed file index and enhance skill documentation for autopilot, decompose, deploy, plan, and research skills. Introduce tests-only mode in decompose skill, clarify required files for deploy and plan skills, and improve prerequisite checks across skills for better user guidance and workflow efficiency.
# Azaion.Detections — Data Model

## Entity-Relationship Diagram
```mermaid
erDiagram
    AnnotationClass {
        int id PK
        string name
        string color
        int max_object_size_meters
    }

    Detection {
        double x
        double y
        double w
        double h
        int cls FK
        double confidence
        string annotation_name
    }

    Annotation {
        string name PK
        string original_media_name
        long time
        bytes image
    }

    AIRecognitionConfig {
        int frame_period_recognition
        double frame_recognition_seconds
        double probability_threshold
        double tracking_distance_confidence
        double tracking_probability_increase
        double tracking_intersection_threshold
        int big_image_tile_overlap_percent
        int model_batch_size
        double altitude
        double focal_length
        double sensor_width
    }

    AIAvailabilityStatus {
        int status
        string error_message
    }

    DetectionDto {
        double centerX
        double centerY
        double width
        double height
        int classNum
        string label
        double confidence
    }

    DetectionEvent {
        string mediaId
        string mediaStatus
        int mediaPercent
    }

    Annotation ||--o{ Detection : contains
    Detection }o--|| AnnotationClass : "classified as"
    DetectionEvent ||--o{ DetectionDto : annotations
```
## Core Domain Entities

### AnnotationClass

Loaded from `classes.json` at startup. 19 base classes × 3 weather modes = up to 57 entries in `annotations_dict`.

| Field | Type | Description |
|-------|------|-------------|
| id | int | Unique class ID (0-18 base, +20 for winter, +40 for night) |
| name | str | Display name (e.g. "ArmorVehicle", "Truck(Wint)") |
| color | str | Hex color for visualization |
| max_object_size_meters | int | Maximum physical size; detections exceeding this are filtered out |
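The class registry above can be sketched as a loader that expands each base class into its weather variants. The exact layout of `classes.json` is an assumption (a flat list of objects with the table's fields); the id offsets (+20 winter, +40 night) and the `(Wint)`/`(Night)` name suffixes follow the table and the example names above.

```python
import json

# Assumed weather-variant suffixes and id offsets (per the table above).
WEATHER_MODES = [("", 0), ("(Wint)", 20), ("(Night)", 40)]

def load_annotation_classes(path="classes.json"):
    """Build annotations_dict: id -> class entry, expanding weather variants.

    Assumes classes.json is a list of dicts with id/name/color/
    max_object_size_meters; the real file layout may differ.
    """
    with open(path) as f:
        base_classes = json.load(f)
    annotations_dict = {}
    for cls in base_classes:
        for suffix, offset in WEATHER_MODES:
            annotations_dict[cls["id"] + offset] = {
                "id": cls["id"] + offset,
                "name": cls["name"] + suffix,
                "color": cls["color"],
                "max_object_size_meters": cls["max_object_size_meters"],
            }
    return annotations_dict
```

With 19 base classes this yields 57 entries, matching the "up to 57" figure above.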
### Detection

Normalized bounding box (0..1 coordinate space).

| Field | Type | Description |
|-------|------|-------------|
| x, y | double | Center coordinates (normalized) |
| w, h | double | Width and height (normalized) |
| cls | int | Class ID → maps to AnnotationClass |
| confidence | double | Model confidence score (0..1) |
| annotation_name | str | Back-reference to parent Annotation name |
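Since the box is center-based and normalized, consumers need the frame dimensions to recover pixel coordinates. A minimal sketch (the `Detection` dataclass and `to_pixel_box` helper are illustrative, not part of the service):

```python
from dataclasses import dataclass

@dataclass
class Detection:
    x: float            # center x, normalized 0..1
    y: float            # center y, normalized 0..1
    w: float            # width, normalized
    h: float            # height, normalized
    cls: int            # class id -> AnnotationClass
    confidence: float   # model score 0..1
    annotation_name: str = ""

def to_pixel_box(det: Detection, frame_w: int, frame_h: int):
    """Convert a center-based normalized box to (left, top, right, bottom) pixels."""
    left = (det.x - det.w / 2) * frame_w
    top = (det.y - det.h / 2) * frame_h
    right = (det.x + det.w / 2) * frame_w
    bottom = (det.y + det.h / 2) * frame_h
    return (round(left), round(top), round(right), round(bottom))
```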
### Annotation

Groups detections for a single frame or image tile.

| Field | Type | Description |
|-------|------|-------------|
| name | str | Unique name encoding media + tile/time info |
| original_media_name | str | Source media filename (no extension, no spaces) |
| time | long | Timestamp in ms (video) or 0 (image) |
| detections | list[Detection] | Detected objects in this frame |
| image | bytes | JPEG-encoded frame (set after validation) |
### AIRecognitionConfig

Runtime configuration for inference behavior. Created from a dict (API) or from msgpack (internal).
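The `altitude`, `focal_length`, and `sensor_width` fields suggest a ground-sample-distance (GSD) calculation feeding the `max_object_size_meters` filter; how the service actually uses them is an assumption. The standard GSD formula, shown for illustration (`sensor_width` and `focal_length` must share units, e.g. mm):

```python
def ground_sample_distance(altitude_m, sensor_width_mm, focal_length_mm, image_width_px):
    """Meters of ground covered by one image pixel (nadir view)."""
    return (altitude_m * sensor_width_mm) / (focal_length_mm * image_width_px)

def object_width_meters(w_normalized, altitude_m, sensor_width_mm, focal_length_mm):
    """Physical width of a normalized-width detection.

    Since the detection width is normalized, the pixel count cancels:
    w_norm * image_width_px * GSD == w_norm * altitude * sensor_width / focal_length.
    """
    return w_normalized * altitude_m * sensor_width_mm / focal_length_mm
```

A detection whose `object_width_meters` exceeds the class's `max_object_size_meters` would then be filtered out.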
### AIAvailabilityStatus

Thread-safe engine lifecycle state. Values: NONE(0), DOWNLOADING(10), CONVERTING(20), UPLOADING(30), ENABLED(200), WARNING(300), ERROR(500).
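The lifecycle values above expressed as an `IntEnum` for illustration; the actual class in the service may be structured differently:

```python
from enum import IntEnum

class AIAvailability(IntEnum):
    """Engine lifecycle states, numbered per the documented values."""
    NONE = 0
    DOWNLOADING = 10
    CONVERTING = 20
    UPLOADING = 30
    ENABLED = 200
    WARNING = 300
    ERROR = 500
```

The gaps in the numbering leave room for intermediate states, and the HTTP-like 200/300/500 tiers make "healthy vs. degraded vs. failed" readable at a glance.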
## API DTOs (Pydantic)

### DetectionDto

Outward-facing detection result. Maps from internal Detection + AnnotationClass label lookup.
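A sketch of that mapping, using a stdlib dataclass in place of the service's Pydantic model so it runs standalone; the mapping function and its dict-based inputs are hypothetical, while the DTO field names follow the ER diagram:

```python
from dataclasses import dataclass

@dataclass
class DetectionDto:
    centerX: float
    centerY: float
    width: float
    height: float
    classNum: int
    label: str
    confidence: float

def to_dto(det: dict, annotations_dict: dict) -> DetectionDto:
    """Map an internal detection (x/y/w/h/cls/confidence) to the API DTO,
    resolving the human-readable label via the class registry."""
    cls_entry = annotations_dict.get(det["cls"], {"name": "Unknown"})
    return DetectionDto(
        centerX=det["x"], centerY=det["y"],
        width=det["w"], height=det["h"],
        classNum=det["cls"], label=cls_entry["name"],
        confidence=det["confidence"],
    )
```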
### DetectionEvent

SSE event payload. Status values: AIProcessing, AIProcessed, Error.
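On the wire, an SSE payload is a `data:` line followed by a blank line. A minimal sketch of framing a DetectionEvent this way (the helper is illustrative; field names follow the ER diagram, status values the list above):

```python
import json

def format_sse(media_id: str, media_status: str, media_percent: int) -> str:
    """Frame a DetectionEvent as a Server-Sent Event (data: <json>\n\n)."""
    payload = {"mediaId": media_id, "mediaStatus": media_status,
               "mediaPercent": media_percent}
    return f"data: {json.dumps(payload)}\n\n"
```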
### AIConfigDto

API input configuration. Same fields as AIRecognitionConfig with defaults.

### HealthResponse

Health check response with AI availability status string.
## Annotation Naming Convention

Annotation names encode media source and processing context:

- **Image**: `{media_name}_000000`
- **Image tile**: `{media_name}!split!{tile_size}_{x}_{y}!_000000`
- **Video frame**: `{media_name}_{H}{MM}{SS}{f}` (compact time format)
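Builders for these patterns, shown as a sketch. The image and tile patterns are copied verbatim; the video time layout `{H}{MM}{SS}{f}` is interpreted here as unpadded hour, zero-padded minute and second, and a trailing frame digit, which is an assumption:

```python
def image_annotation_name(media_name: str) -> str:
    # Whole-image annotation: fixed zero time suffix.
    return f"{media_name}_000000"

def tile_annotation_name(media_name: str, tile_size: int, x: int, y: int) -> str:
    # Tile of a large image: encodes tile size and grid position.
    return f"{media_name}!split!{tile_size}_{x}_{y}!_000000"

def video_annotation_name(media_name: str, hours: int, minutes: int,
                          seconds: int, frame: int) -> str:
    # Compact {H}{MM}{SS}{f} time format (interpretation assumed).
    return f"{media_name}_{hours}{minutes:02d}{seconds:02d}{frame}"
```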
## Serialization Formats

| Entity | Format | Usage |
|--------|--------|-------|
| Detection/Annotation | msgpack (compact keys) | `annotation.serialize()` |
| AIRecognitionConfig | msgpack (compact keys) | `from_msgpack()` |
| AIAvailabilityStatus | msgpack | `serialize()` |
| DetectionDto/Event | JSON (Pydantic) | HTTP API responses, SSE |
## No Persistent Storage

This service has no database. All data is transient:

- `classes.json` loaded at startup (read-only)
- Model bytes downloaded from Loader on demand
- Detection results returned via HTTP/SSE and posted to Annotations service
- No local caching of results