Update autopilot workflow and documentation for project cycle completion

- Modified the existing-code workflow to automatically loop back to New Task after project completion without user confirmation.
- Updated the autopilot state to reflect the current step as `done` and status as `completed`.
- Clarified the deployment status report by specifying non-deployed services and their purposes.

These changes enhance the automation of task management and improve documentation clarity.
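The autopilot state change described above might look like the following in a state file. This is a hedged sketch: the field names (`current_step`, `status`) and the JSON layout are assumptions based on the commit message, not confirmed by the repository.

```json
{
  "current_step": "done",
  "status": "completed"
}
```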
Author: Oleksandr Bezdieniezhnykh
Date: 2026-03-29 05:02:22 +03:00
parent 0bf3894e03
commit aeb7f8ca8c
20 changed files with 1360 additions and 12 deletions
@@ -8,8 +8,10 @@
| Component | Status | Entry Point | Run Mode | Deployment Target |
|-----------|--------|-------------|----------|-------------------|
| Training Pipeline | Implemented & Tested | `train.py` | Long-running (days) | GPU server, RTX 4090 (24GB VRAM) |
| Annotation Queue | Implemented & Tested | `annotation-queue/annotation_queue_handler.py` | Continuous (async) | Any server with network access |
| Inference Engine | Implemented & Tested | `start_inference.py` | On-demand | GPU-equipped machine |
| Data Tools | Implemented | `convert-annotations.py`, `dataset-visualiser.py` | Ad-hoc | Developer machine |

Not deployed as production services:
- **Inference Engine** (`start_inference.py`) — verification/testing tool, runs ad-hoc on GPU machine
- **Data Tools** (`convert-annotations.py`, `dataset-visualiser.py`) — developer utilities
Note: Augmentation is not a separate process — it is YOLO's built-in mosaic/mixup within the training pipeline.
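For context, mosaic and mixup are standard augmentation hyperparameters in YOLO-style training configurations (e.g., Ultralytics). A sketch of the relevant config fragment is below; the exact keys and values are assumptions based on common YOLO defaults, not taken from this repository:

```yaml
# Augmentation runs inside the training pipeline, not as a separate service.
mosaic: 1.0   # probability of mosaic augmentation (stitches 4 images into one)
mixup: 0.1    # probability of mixup augmentation (blends two images and labels)
```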