# ATLAS-GEOFUSE: IMU-Denied UAV Geolocalization
ATLAS-GEOFUSE is a robust, multi-component hybrid visual-geolocalization SLAM architecture. It processes unstabilized, high-resolution UAV images in environments where IMU and GPS telemetry are completely denied.

It uses an "Atlas" multi-map framework, local TensorRT/PyTorch vision matching (SuperPoint + LightGlue), and asynchronous satellite retrieval to deliver scale-aware relative poses in under 5 seconds and highly refined absolute global map anchors accurate to within 20 m.
## 🚀 Quick Start (Docker)
The easiest way to run the system, with all of its complex dependencies (CUDA, OpenCV, FAISS, PyTorch, GTSAM), is via Docker Compose.

**Prerequisites:**

- Docker and the Docker Compose plugin installed.
- NVIDIA GPU with at least 6 GB of VRAM (e.g., RTX 2060).
- NVIDIA Container Toolkit installed.

```bash
# Build and start the background API service
docker-compose up --build -d

# View the live processing logs
docker-compose logs -f
```
## 💻 Local Development Setup
If you want to run the Python server natively for development:

```bash
# 1. Create a Python 3.10 virtual environment
python -m venv venv
source venv/bin/activate

# 2. Install dependencies
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118
pip install fastapi "uvicorn[standard]" pydantic numpy opencv-python faiss-gpu gtsam sse-starlette sqlalchemy requests psutil scipy
pip install git+https://github.com/cvg/LightGlue.git

# 3. Run the FastAPI server
python main.py
```
## 🧪 Running the Test Suite
The project includes a comprehensive suite of PyTest unit and integration tests. So that the tests run quickly on CPU-only machines (and in CI/CD pipelines), the deep-learning models are automatically mocked.

```bash
pip install pytest pytest-cov
python run_e2e_tests.py
```
## 🌐 API Usage Examples
The system acts as a headless REST API with Server-Sent Events (SSE) for low-latency streaming.

### 1. Create a Flight
```bash
curl -X POST "http://localhost:8000/api/v1/flights" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Mission Alpha",
    "start_gps": {"lat": 48.0, "lon": 37.0},
    "altitude": 400.0,
    "camera_params": {
      "focal_length_mm": 25.0,
      "sensor_width_mm": 36.0,
      "resolution": {"width": 6252, "height": 4168}
    }
  }'
```

*(Returns `flight_id` used in subsequent requests.)*

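As a side note, the camera parameters above fix the image scale on the ground. A back-of-envelope sketch (assuming a nadir-pointing camera and flat terrain, using the example values) computes the ground sampling distance:

```python
def ground_sample_distance(sensor_width_mm: float, focal_length_mm: float,
                           altitude_m: float, image_width_px: int) -> float:
    """Nadir-view ground sampling distance in metres per pixel."""
    # Similar triangles: ground footprint width = sensor width * (altitude / focal length),
    # then divide by the number of pixels across that width.
    return (sensor_width_mm * altitude_m) / (focal_length_mm * image_width_px)

# Values from the flight-creation example: 36 mm sensor, 25 mm lens,
# 400 m altitude, 6252 px image width.
gsd = ground_sample_distance(36.0, 25.0, 400.0, 6252)
print(f"{gsd * 100:.1f} cm/px")  # ≈ 9.2 cm/px
```

At roughly 9 cm per pixel, a 20 m absolute anchor error corresponds to a little over 200 pixels in the source imagery.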
### 2. Stream Real-Time Poses (SSE)
Connect to this endpoint in your browser or application to receive live unscaled and refined trajectory data:

```bash
curl -N -H "Accept: text/event-stream" http://localhost:8000/api/v1/flights/{flight_id}/stream
```
### 3. Ingest Images (Simulated or Real)
Images are sent in batches for trajectory processing.

```bash
curl -X POST "http://localhost:8000/api/v1/flights/{flight_id}/images/batch" \
  -F "start_sequence=1" \
  -F "end_sequence=2" \
  -F "batch_number=1" \
  -F "images=@/path/to/AD000001.jpg" \
  -F "images=@/path/to/AD000002.jpg"
```
## ⚙️ Environment Variables
| Variable | Description | Default |
| :--- | :--- | :--- |
| `USE_MOCK_MODELS` | If `1`, bypasses the real PyTorch models and uses random tensors. Critical for fast testing in non-GPU environments. | `0` |
| `TEST_FLIGHT_DIR` | Auto-starts a simulation of the images found in this folder upon boot. | `./test_flight_data` |