mirror of
https://github.com/azaion/detections.git
synced 2026-04-22 22:26:33 +00:00
136 lines
4.5 KiB
Markdown
# Codebase Discovery
## Directory Tree
```
detections/
├── main.py                            # FastAPI entry point
├── setup.py                           # Cython build configuration
├── requirements.txt                   # CPU dependencies
├── requirements-gpu.txt               # GPU dependencies (extends requirements.txt)
├── classes.json                       # Object detection class definitions (19 classes)
├── .gitignore
├── inference.pyx / .pxd               # Core inference orchestrator (Cython)
├── inference_engine.pyx / .pxd        # Abstract base engine class (Cython)
├── onnx_engine.pyx                    # ONNX Runtime inference engine (Cython)
├── tensorrt_engine.pyx / .pxd         # TensorRT inference engine (Cython)
├── annotation.pyx / .pxd              # Detection & Annotation data models (Cython)
├── ai_config.pyx / .pxd               # AI recognition config (Cython)
├── ai_availability_status.pyx / .pxd  # AI status enum & state (Cython)
├── constants_inf.pyx / .pxd           # Constants, logging, class registry (Cython)
└── loader_http_client.py              # HTTP client for model loading/uploading
```
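The `.pyx` modules above are compiled ahead of time via `setup.py`. A minimal sketch of such a build configuration, assuming the stock `cythonize` workflow — the module list mirrors the tree above, but the flags and metadata are illustrative, not taken from the repository:

```python
# setup.py — illustrative Cython build configuration (assumed, not verbatim)
from setuptools import setup
from Cython.Build import cythonize

# The .pyx modules listed in the directory tree; each companion .pxd file
# provides the C-level declarations consumed by `cimport`.
PYX_MODULES = [
    "inference.pyx",
    "inference_engine.pyx",
    "onnx_engine.pyx",
    "tensorrt_engine.pyx",
    "annotation.pyx",
    "ai_config.pyx",
    "ai_availability_status.pyx",
    "constants_inf.pyx",
]

setup(
    name="detections",
    ext_modules=cythonize(PYX_MODULES, language_level=3),
)
```

A build of this shape is typically run with `python setup.py build_ext --inplace`, leaving compiled extension modules next to the sources.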
## Tech Stack Summary
| Aspect | Technology |
|--------|-----------|
| Language | Python 3 + Cython |
| Web Framework | FastAPI + Uvicorn |
| ML Inference (CPU) | ONNX Runtime 1.22.0 |
| ML Inference (GPU) | TensorRT 10.11.0 + PyCUDA 2025.1.1 |
| Image Processing | OpenCV 4.10.0 |
| Serialization | msgpack 1.1.1 |
| HTTP Client | requests 2.32.4 |
| Logging | loguru 0.7.3 |
| GPU Monitoring | pynvml 12.0.0 |
| Numeric | NumPy 2.3.0 |
| Build | Cython 3.1.3 + setuptools |
## Dependency Graph
### Internal Module Dependencies
```
constants_inf          ← (leaf) no internal deps
ai_config              ← (leaf) no internal deps
inference_engine       ← (leaf) no internal deps
loader_http_client     ← (leaf) no internal deps

ai_availability_status → constants_inf
annotation             → constants_inf

onnx_engine     → inference_engine, constants_inf
tensorrt_engine → inference_engine, constants_inf

inference → constants_inf, ai_availability_status, annotation, ai_config,
            onnx_engine | tensorrt_engine (conditional on GPU availability)

main → inference, constants_inf, loader_http_client
```
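The conditional edge (`onnx_engine | tensorrt_engine`) implies a runtime choice between backends. A hedged sketch of what that selection might look like — the function names and the probing strategy are assumptions for illustration, not the repository's actual API:

```python
# Illustrative backend selection; names here are hypothetical.
def select_engine_name(gpu_available: bool) -> str:
    """Pick the inference backend the way the dependency graph suggests:
    TensorRT when a GPU is present, ONNX Runtime as the CPU fallback."""
    return "tensorrt_engine" if gpu_available else "onnx_engine"


def probe_gpu() -> bool:
    """Hypothetical probe: the real project would more likely query pynvml
    or attempt a TensorRT import; here we simply try the GPU-only pycuda
    dependency and fall back cleanly when it is absent."""
    try:
        import pycuda.driver  # noqa: F401  (listed only in requirements-gpu.txt)
        return True
    except ImportError:
        return False
```

Keeping the decision in one place means `inference` never imports the unused backend, which matters because `tensorrt_engine` cannot even be imported on a CPU-only host.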
### Mermaid Diagram
```mermaid
graph TD
    main["main.py (FastAPI)"]
    inference["inference"]
    onnx_engine["onnx_engine"]
    tensorrt_engine["tensorrt_engine"]
    inference_engine["inference_engine (abstract)"]
    annotation["annotation"]
    ai_availability_status["ai_availability_status"]
    ai_config["ai_config"]
    constants_inf["constants_inf"]
    loader_http_client["loader_http_client"]

    main --> inference
    main --> constants_inf
    main --> loader_http_client

    inference --> constants_inf
    inference --> ai_availability_status
    inference --> annotation
    inference --> ai_config
    inference -.->|GPU available| tensorrt_engine
    inference -.->|CPU fallback| onnx_engine

    onnx_engine --> inference_engine
    onnx_engine --> constants_inf

    tensorrt_engine --> inference_engine
    tensorrt_engine --> constants_inf

    ai_availability_status --> constants_inf
    annotation --> constants_inf
```
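`inference_engine` is described as an abstract base engine class that both concrete engines extend. A sketch of that shape using Python's `abc` — the method names (`load`, `infer`) and the stand-in subclass are assumptions for illustration, not the repository's actual interface:

```python
from abc import ABC, abstractmethod


class InferenceEngine(ABC):
    """Illustrative abstract base; onnx_engine and tensorrt_engine would
    each subclass this and supply the backend-specific runtime calls."""

    @abstractmethod
    def load(self, model_path: str) -> None:
        """Load a model file into the backend runtime."""

    @abstractmethod
    def infer(self, image) -> list:
        """Run detection on one image; return a list of raw detections."""


class DummyEngine(InferenceEngine):
    """Stand-in subclass showing the contract; not a real backend."""

    def load(self, model_path: str) -> None:
        self.model_path = model_path

    def infer(self, image) -> list:
        return []  # a real engine would return detections here
```

An ABC of this shape lets `inference` hold a single engine reference and call it uniformly, regardless of which backend the GPU probe selected.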
## Topological Processing Order
1. `constants_inf` (leaf)
2. `ai_config` (leaf)
3. `inference_engine` (leaf)
4. `loader_http_client` (leaf)
5. `ai_availability_status` (depends: constants_inf)
6. `annotation` (depends: constants_inf)
7. `onnx_engine` (depends: inference_engine, constants_inf)
8. `tensorrt_engine` (depends: inference_engine, constants_inf)
9. `inference` (depends: constants_inf, ai_availability_status, annotation, ai_config, onnx_engine/tensorrt_engine)
10. `main` (depends: inference, constants_inf, loader_http_client)
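An ordering like the one above can be reproduced mechanically from the dependency graph with the standard library's `graphlib` (here treating `onnx_engine`/`tensorrt_engine` as two separate edges of `inference`):

```python
from graphlib import TopologicalSorter

# Internal dependency graph as listed above: module -> set of dependencies.
DEPS = {
    "constants_inf": set(),
    "ai_config": set(),
    "inference_engine": set(),
    "loader_http_client": set(),
    "ai_availability_status": {"constants_inf"},
    "annotation": {"constants_inf"},
    "onnx_engine": {"inference_engine", "constants_inf"},
    "tensorrt_engine": {"inference_engine", "constants_inf"},
    "inference": {"constants_inf", "ai_availability_status", "annotation",
                  "ai_config", "onnx_engine", "tensorrt_engine"},
    "main": {"inference", "constants_inf", "loader_http_client"},
}

# static_order() raises CycleError if the graph has a cycle, so this also
# verifies the "Cycles: none detected" claim below.
order = list(TopologicalSorter(DEPS).static_order())
```

Any valid order places every module after all of its dependencies; `main` necessarily comes last.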
## Entry Points
- `main.py` — FastAPI application; serves the HTTP API via Uvicorn
## Leaf Modules
- `constants_inf` — constants, logging, class registry
- `ai_config` — recognition configuration data class
- `inference_engine` — abstract base class for engines
- `loader_http_client` — HTTP client for the external loader service
## Cycles
None detected.
## External Services
| Service | URL Source | Purpose |
|---------|-----------|---------|
| Loader | `LOADER_URL` env var (default `http://loader:8080`) | Download/upload AI models |
| Annotations | `ANNOTATIONS_URL` env var (default `http://annotations:8080`) | Post detection results, refresh auth tokens |
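Both endpoints follow the same env-var-with-default pattern. A minimal sketch of how the base URLs might be resolved — the helper name is hypothetical, but the variable names and defaults match the table above:

```python
import os


def service_url(env_var: str, default: str) -> str:
    """Resolve a service base URL from the environment, falling back to the
    compose-style default; a trailing slash is stripped so callers can
    safely append paths."""
    return os.environ.get(env_var, default).rstrip("/")


LOADER_URL = service_url("LOADER_URL", "http://loader:8080")
ANNOTATIONS_URL = service_url("ANNOTATIONS_URL", "http://annotations:8080")
```

The hostname defaults (`loader`, `annotations`) suggest the services are reached by container name on a shared Docker network, with the env vars overriding that for other deployments.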
## Data Files
- `classes.json` — 19 object detection classes with Ukrainian short names, colors, and max physical size in meters (ArmorVehicle, Truck, Vehicle, Artillery, Shadow, Trenches, MilitaryMan, TyreTracks, etc.)
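The record shape sketched below is an assumption inferred from that description (a Ukrainian short name, a color, a max physical size in meters); the real `classes.json` schema and field names may differ. A minimal loader that indexes the definitions by class name:

```python
import json

# Hypothetical sample matching the described fields; the values and field
# names are placeholders, not the repository's actual data.
SAMPLE = """
[
  {"name": "ArmorVehicle", "short_name_uk": "техніка", "color": "#ff0000", "max_size_m": 8.0},
  {"name": "Truck",        "short_name_uk": "вантажівка", "color": "#00ff00", "max_size_m": 12.0}
]
"""


def load_classes(text: str) -> dict:
    """Index class definitions by name for O(1) lookup during inference."""
    return {entry["name"]: entry for entry in json.loads(text)}


classes = load_classes(SAMPLE)
```

The max physical size is the kind of field an inference pipeline would use to reject detections whose ground-projected extent is implausibly large for the predicted class.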