mirror of
https://github.com/azaion/gps-denied-onboard.git
synced 2026-04-23 02:06:36 +00:00
add solution drafts, add component decomposition , add spec for other docs
This commit is contained in:
@@ -0,0 +1,318 @@
|
||||
# **ASTRAL System Architecture: A High-Fidelity Geopositioning Framework for IMU-Denied Aerial Operations**
|
||||
|
||||
## **2.0 The ASTRAL (Advanced Scale-Aware Trajectory-Refinement and Localization) System Architecture**
|
||||
|
||||
The ASTRAL architecture is a multi-map, decoupled, loosely-coupled system designed to solve the flaws identified in Section 1.0 and meet all 10 Acceptance Criteria.
|
||||
|
||||
### **2.1 Core Principles**
|
||||
|
||||
The ASTRAL architecture is built on three principles:
|
||||
|
||||
1. **Tiered Geospatial Database:** The system *cannot* rely on a single data source. It is architected around a *tiered* local database.
|
||||
* **Tier-1 (Baseline):** Google Maps data. This is used to meet the 50m (AC-1) requirement and provide geolocalization.
|
||||
* **Tier-2 (High-Accuracy):** A framework for ingesting *commercial, sub-meter* data (visual 4; and DEM 5). This tier is *required* to meet the 20m (AC-2) accuracy. The system will *run* on Tier-1 but *achieve* AC-2 when "fueled" with Tier-2 data.
|
||||
2. **Viewpoint-Invariant Anchoring:** The system *rejects* geometric warping. The GAB (Section 5.0) is built on SOTA Visual Place Recognition (VPR) models that are *inherently* invariant to the oblique-to-nadir viewpoint change, decoupling it from the V-SLAM's unstable orientation.
|
||||
3. **Continuously-Scaled Trajectory:** The system *rejects* the "single-scale-per-fragment" model. The TOH (Section 6.0) is a Sim(3) pose-graph optimizer 11 that models scale as a *per-keyframe optimizable parameter*.15 This allows the trajectory to "stretch" and "shrink" elastically to absorb continuous monocular scale drift.12
|
||||
|
||||
### **2.2 Component Interaction and Data Flow**
|
||||
|
||||
The system is multi-threaded and asynchronous, designed for real-time streaming (AC-7) and refinement (AC-8).
|
||||
|
||||
* **Component 1: Tiered GDB (Pre-Flight):**
|
||||
* *Input:* User-defined Area of Interest (AOI).
|
||||
* *Action:* Downloads and builds a local SpatiaLite/GeoPackage.
|
||||
* *Output:* A single **Local-Geo-Database file** containing:
|
||||
* Tier-1 (Google Maps) + GLO-30 DSM
|
||||
* Tier-2 (Commercial) satellite tiles + WorldDEM DTM elevation tiles.
|
||||
* A *pre-computed FAISS vector index* of global descriptors (e.g., SALAD 8) for *all* satellite tiles (see 3.4).
|
||||
* **Component 2: Image Ingestion (Real-time):**
|
||||
* *Input:* Image_N (up to 6.2K), Camera Intrinsics ($K$).
|
||||
* *Action:* Creates Image_N_LR (Low-Res, e.g., 1536x1024) and Image_N_HR (High-Res, 6.2K).
|
||||
* *Dispatch:* Image_N_LR -> V-SLAM. Image_N_HR -> GAB (for patches).
|
||||
* **Component 3: "Atlas" V-SLAM Front-End (High-Frequency Thread):**
|
||||
* *Input:* Image_N_LR.
|
||||
* *Action:* Tracks Image_N_LR against the *active map fragment*. Manages keyframes and local BA. If tracking lost (AC-4, AC-6), it *initializes a new map fragment*.
|
||||
* *Output:* Relative_Unscaled_Pose, Local_Point_Cloud, and Map_Fragment_ID -> TOH.
|
||||
* **Component 4: VPR Geospatial Anchoring Back-End (GAB) (Low-Frequency, Asynchronous Thread):**
|
||||
* *Input:* A keyframe (Image_N_LR, Image_N_HR) and its Map_Fragment_ID.
|
||||
* *Action:* Performs SOTA two-stage VPR (Section 5.0) against the **Local-Geo-Database file**.
|
||||
* *Output:* Absolute_Metric_Anchor ([Lat, Lon, Alt] pose) and its Map_Fragment_ID -> TOH.
|
||||
* **Component 5: Scale-Aware Trajectory Optimization Hub (TOH) (Central Hub Thread):**
|
||||
* *Input 1:* High-frequency Relative_Unscaled_Pose stream.
|
||||
* *Input 2:* Low-frequency Absolute_Metric_Anchor stream.
|
||||
* *Action:* Manages the *global Sim(3) pose-graph* 13 with *per-keyframe scale*.15
|
||||
* *Output 1 (Real-time):* Pose_N_Est (unscaled) -> UI (Meets AC-7).
|
||||
* *Output 2 (Refined):* Pose_N_Refined (metric-scale) -> UI (Meets AC-1, AC-2, AC-8).
|
||||
|
||||
### **2.3 System Inputs**
|
||||
|
||||
1. **Image Sequence:** Consecutively named images (FullHD to 6252x4168).
|
||||
2. **Start Coordinate (Image 0):** A single, absolute GPS coordinate [Lat, Lon].
|
||||
3. **Camera Intrinsics (K):** Pre-calibrated camera intrinsic matrix.
|
||||
4. **Local-Geo-Database File:** The single file generated by Component 1.
|
||||
|
||||
### **2.4 Streaming Outputs (Meets AC-7, AC-8)**
|
||||
|
||||
1. **Initial Pose (Pose_N^{Est}):** An *unscaled* pose. This is the raw output from the V-SLAM Front-End, transformed by the *current best estimate* of the trajectory. It is sent immediately (<5s, AC-7) to the UI for real-time visualization of the UAV's *path shape*.
|
||||
2. **Refined Pose (Pose_N^{Refined}) [Asynchronous]:** A globally-optimized, *metric-scale* 7-DoF pose. This is sent to the user *whenever the TOH re-converges* (e.g., after a new GAB anchor or a map-merge). This *re-writes* the history of poses (e.g., Pose_{N-100} to Pose_N), meeting the refinement (AC-8) and accuracy (AC-1, AC-2) requirements.
|
||||
|
||||
## **3.0 Component 1: The Tiered Pre-Flight Geospatial Database (GDB)**
|
||||
|
||||
This component is the implementation of the "Tiered Geospatial" principle. It is a mandatory pre-flight utility that solves both the *legal* problem (Flaw 1.4) and the *accuracy* problem (Flaw 1.1).
|
||||
|
||||
### **3.2 Tier-1 (Baseline): Google Maps and GLO-30 DEM**
|
||||
|
||||
This tier provides the baseline capability and satisfies AC-1.
|
||||
|
||||
* **Visual Data:** Google Maps (coarse Maxar)
|
||||
* *Resolution:* 10m.
|
||||
* *Geodetic Accuracy:* \~1 m to 20m
|
||||
* *Purpose:* Meets AC-1 (80% < 50m error). Provides a robust baseline for coarse geolocalization.
|
||||
* **Elevation Data:** Copernicus GLO-30 DEM
|
||||
* *Resolution:* 30m.
|
||||
* *Type:* DSM (Digital Surface Model).2 This is a *weakness*, as it includes buildings/trees.
|
||||
* *Purpose:* Provides a coarse altitude prior for the TOH and the initial GAB search.
|
||||
|
||||
### **3.3 Tier-2 (High-Accuracy): Ingestion Framework for Commercial Data**
|
||||
|
||||
This is the *procurement and integration framework* required to meet AC-2.
|
||||
|
||||
* **Visual Data:** Commercial providers, e.g., Maxar (30-50cm) or Satellogic (70cm)
|
||||
* *Resolution:* < 1m.
|
||||
* *Geodetic Accuracy:* Typically < 5m.
|
||||
* *Purpose:* Provides the high-resolution, high-accuracy reference needed for the GAB to achieve a sub-20m total error.
|
||||
* **Elevation Data:** Commercial providers, e.g., WorldDEM Neo 5 or Elevation10.32
|
||||
* *Resolution:* 5m-12m.
|
||||
* *Vertical Accuracy:* < 4m.32
|
||||
* *Type:* DTM (Digital Terrain Model).32
|
||||
|
||||
The use of a DTM (bare-earth) in Tier-2 is a critical advantage over the Tier-1 DSM (surface). The V-SLAM Front-End (Section 4.0) will triangulate a 3D point cloud of what it *sees*, which is the *ground* in fields or *tree-tops* in forests. The Tier-1 GLO-30 DSM 2 represents the *top* of the canopy/buildings. If the V-SLAM maps the *ground* (e.g., altitude 100m) and the GAB tries to anchor it to a DSM *prior* that shows a forest (e.g., altitude 120m), the 20m altitude discrepancy will introduce significant error into the TOH. The Tier-2 DTM (bare-earth) 5 provides a *vastly* superior altitude anchor, as it represents the same ground plane the V-SLAM is tracking, significantly improving the entire 7-DoF pose solution.
|
||||
|
||||
### **3.4 Local Database Generation: Pre-computing Global Descriptors**
|
||||
|
||||
This is the key performance optimization for the GAB. During the pre-flight caching step, the GDB utility does not just *store* tiles; it *processes* them.
|
||||
|
||||
For *every* satellite tile (e.g., 256x256m) in the AOI, the utility will load the tile into the VPR model (e.g., SALAD 8), compute its global descriptor (a compact feature vector), and store this vector in a high-speed vector index (e.g., FAISS).
|
||||
|
||||
This step moves 99% of the GAB's "Stage 1" (Coarse Retrieval) workload into an offline, pre-flight step. The *real-time* GAB query (Section 5.2) is now reduced to: (1) Compute *one* vector for the UAV image, and (2) Perform a very fast K-Nearest-Neighbor search on the pre-computed FAISS index. This is what makes a SOTA deep-learning GAB 6 fast enough to support the real-time refinement loop.
|
||||
|
||||
#### **Table 1: Geospatial Reference Data Analysis (Decision Matrix)**
|
||||
|
||||
| Data Product | Type | Resolution | Geodetic Accuracy (Horiz.) | Type | Cost | AC-2 (20m) Compliant? |
|
||||
| :---- | :---- | :---- | :---- | :---- | :---- | :---- |
|
||||
| Google Maps | Visual | 1m | 1m - 10m | N/A | Free | **Depending on the location** |
|
||||
| Copernicus GLO-30 | Elevation | 30m | \~10-30m | **DSM** (Surface) | Free | **No (Fails Error Budget)** |
|
||||
| **Tier-2: Maxar/Satellogic** | Visual | 0.3m - 0.7m | < 5 m (Est.) | N/A | Commercial | **Yes** |
|
||||
| **Tier-2: WorldDEM Neo** | Elevation | 5m | < 4m | **DTM** (Bare-Earth) | Commercial | **Yes** |
|
||||
|
||||
## **4.0 Component 2: The "Atlas" Relative Motion Front-End**
|
||||
|
||||
This component's sole task is to robustly compute *unscaled* 6-DoF relative motion and handle tracking failures (AC-3, AC-4).
|
||||
|
||||
### **4.1 Feature Matching Sub-System: SuperPoint + LightGlue**
|
||||
|
||||
The system will use **SuperPoint** for feature detection and **LightGlue** for matching. This choice is driven by the project's specific constraints:
|
||||
|
||||
* **Rationale (Robustness):** The UAV flies over "eastern and southern parts of Ukraine," which includes large, low-texture agricultural areas. SuperPoint is a SOTA deep-learning detector renowned for its robustness and repeatability in these challenging, low-texture environments.
|
||||
* **Rationale (Performance):** The RTX 2060 (AC-7) is a *hard* constraint with only 6GB VRAM.34 Performance is paramount. LightGlue is an SOTA matcher that provides a 4-10x speedup over its predecessor, SuperGlue. Its "adaptive" nature is a key optimization: it exits early on "easy" pairs (high-overlap, straight-flight) and spends more compute only on "hard" pairs (turns). This saves critical GPU budget on 95% of normal frames, ensuring the <5s (AC-7) budget is met.
|
||||
|
||||
This subsystem will run on the Image_N_LR (low-res) copy to guarantee it fits in VRAM and meets the real-time budget.
|
||||
|
||||
#### **Table 2: Analysis of State-of-the-Art Feature Matchers (V-SLAM Front-End)**
|
||||
|
||||
| Approach (Tools/Library) | Robustness (Low-Texture) | Speed (RTX 2060) | Fitness for Problem |
|
||||
| :---- | :---- | :---- | :---- |
|
||||
| ORB 33 (e.g., ORB-SLAM3) | Poor. Fails on low-texture. | Excellent (CPU/GPU) | **Good.** Fails robustness in target environment. |
|
||||
| SuperPoint + SuperGlue | Excellent. | Good, but heavy. Fixed-depth GNN. 4-10x Slower than LightGlue.35 | **Good.** Robust, but risks AC-7 budget. |
|
||||
| **SuperPoint + LightGlue** 35 | Excellent. | **Excellent.** Adaptive depth 35 saves budget. 4-10x faster. | **Excellent (Selected).** Balances robustness and performance. |
|
||||
|
||||
### **4.2 The "Atlas" Multi-Map Paradigm (Solution for AC-3, AC-4, AC-6)**
|
||||
|
||||
This architecture is the industry-standard solution for IMU-denied, long-term SLAM and is critical for robustness.
|
||||
|
||||
* **Mechanism (AC-4, Sharp Turn):**
|
||||
1. The system is tracking on $Map_Fragment_0$.
|
||||
2. The UAV makes a sharp turn (AC-4, <5% overlap). The V-SLAM *loses tracking*.
|
||||
3. Instead of failing, the Atlas architecture *initializes a new map*: $Map_Fragment_1$.
|
||||
4. Tracking *resumes instantly* on this new, unanchored map.
|
||||
* **Mechanism (AC-3, 350m Outlier):**
|
||||
1. The system is tracking. A 350m outlier $Image_N$ arrives.
|
||||
2. The V-SLAM fails to match $Image_N$ (a "Transient VO Failure," see 7.3). It is *discarded*.
|
||||
3. $Image_N+1$ arrives (back on track). V-SLAM re-acquires its location on $Map_Fragment_0$.
|
||||
4. The system "correctly continues the work" (AC-3) by simply rejecting the outlier.
|
||||
|
||||
This design turns "catastrophic failure" (AC-3, AC-4) into a *standard operating procedure*. The "problem" of stitching the fragments ($Map_0$, $Map_1$) together is moved from the V-SLAM (which has no global context) to the TOH (which *can* solve it using GAB anchors, see 6.4).
|
||||
|
||||
### **4.3 Local Bundle Adjustment and High-Fidelity 3D Cloud**
|
||||
|
||||
The V-SLAM front-end will continuously run Local Bundle Adjustment (BA) over a sliding window of recent keyframes to minimize drift *within* that fragment. It will also triangulate a sparse, but high-fidelity, 3D point cloud for its *local map fragment*.
|
||||
|
||||
This 3D cloud serves a critical dual function:
|
||||
|
||||
1. It provides a robust 3D map for frame-to-map tracking, which is more stable than frame-to-frame odometry.
|
||||
2. It serves as the **high-accuracy data source** for the object localization output (Section 7.2). This is the key to decoupling object-pointing accuracy from external DEM accuracy 19, a critical flaw in simpler designs.
|
||||
|
||||
## **5.0 Component 3: The Viewpoint-Invariant Geospatial Anchoring Back-End (GAB)**
|
||||
|
||||
This component *replaces* the draft's "Dynamic Warping" (Section 5.0) and implements the "Viewpoint-Invariant Anchoring" principle (Section 2.1).
|
||||
|
||||
### **5.1 Rationale: Viewpoint-Invariant VPR vs. Geometric Warping (Solves Flaw 1.2)**
|
||||
|
||||
As established in 1.2, geometrically warping the image using the V-SLAM's *drifty* roll/pitch estimate creates a *brittle*, high-risk failure spiral. The ASTRAL GAB *decouples* from the V-SLAM's orientation. It uses a SOTA VPR pipeline that *learns* to match oblique UAV images to nadir satellite images *directly*, at the feature level.6
|
||||
|
||||
### **5.2 Stage 1 (Coarse Retrieval): SOTA Global Descriptors**
|
||||
|
||||
When triggered by the TOH, the GAB takes Image_N_LR. It computes a *global descriptor* (a single feature vector) using a SOTA VPR model like **SALAD** 6 or **MixVPR**.7
|
||||
|
||||
This choice is driven by two factors:
|
||||
|
||||
1. **Viewpoint Invariance:** These models are SOTA for this exact task.
|
||||
2. **Inference Speed:** They are extremely fast. SALAD reports < 3ms per image inference 8, and MixVPR is also noted for "fastest inference speed".37 This low overhead is essential for the AC-7 (<5s) budget.
|
||||
|
||||
This vector is used to query the *pre-computed FAISS vector index* (from 3.4), which returns the Top-K (e.g., K=5) most likely satellite tiles from the *entire AOI* in milliseconds.
|
||||
|
||||
#### **Table 3: Analysis of VPR Global Descriptors (GAB Back-End)**
|
||||
|
||||
| Model (Backbone) | Key Feature | Viewpoint Invariance | Inference Speed (ms) | Fitness for GAB |
|
||||
| :---- | :---- | :---- | :---- | :---- |
|
||||
| NetVLAD 7 (CNN) | Baseline | Poor. Not designed for oblique-to-nadir. | Moderate (\~20-50ms) | **Poor.** Fails robustness. |
|
||||
| **SALAD** 8 (DINOv2) | Foundation Model.6 | **Excellent.** Designed for this. | **< 3ms**.8 Extremely fast. | **Excellent (Selected).** |
|
||||
| **MixVPR** 36 (ResNet) | All-MLP aggregator.36 | **Very Good.**.7 | **Very Fast.**.37 | **Excellent (Selected).** |
|
||||
|
||||
### **5.3 Stage 2 (Fine): Local Feature Matching and Pose Refinement**
|
||||
|
||||
The system runs **SuperPoint+LightGlue** 35 to find pixel-level matches, but *only* between the UAV image and the **Top-K satellite tiles** identified in Stage 1.
|
||||
|
||||
A **Multi-Resolution Strategy** is employed to solve the VRAM bottleneck.
|
||||
|
||||
1. Stage 1 (Coarse) runs on the Image_N_LR.
|
||||
2. Stage 2 (Fine) runs SuperPoint *selectively* on the Image_N_HR (6.2K) to get high-accuracy keypoints.
|
||||
3. It then matches small, full-resolution *patches* from the full-res image, *not* the full image.
|
||||
|
||||
This hybrid approach is the *only* way to meet both AC-7 (speed) and AC-2 (accuracy). The 6.2K image *cannot* be processed in <5s on an RTX 2060 (6GB VRAM 34). But its high-resolution *pixels* are needed for the 20m *accuracy*. Using full-res *patches* provides the pixel-level accuracy without the VRAM/compute cost.
|
||||
|
||||
A PnP/RANSAC solver then computes a high-confidence 6-DoF pose. This pose, converted to [Lat, Lon, Alt], is the **$Absolute_Metric_Anchor$** sent to the TOH.
|
||||
|
||||
## **6.0 Component 4: The Scale-Aware Trajectory Optimization Hub (TOH)**
|
||||
|
||||
This component is the system's "brain" and implements the "Continuously-Scaled Trajectory" principle (Section 2.1). It *replaces* the draft's flawed "Single Scale" optimizer.
|
||||
|
||||
### **6.1 The $Sim(3)$ Pose-Graph as the Optimization Backbone**
|
||||
|
||||
The central challenge of IMU-denied monocular SLAM is *scale drift*.11 The V-SLAM (Component 3) produces 6-DoF poses, but they are *unscaled* ($SE(3)$). The GAB (Component 4) produces *metric* 6-DoF poses ($SE(3)$).
|
||||
|
||||
The solution is to optimize the *entire graph* in the 7-DoF "Similarity" group, **$Sim(3)$**.11 This adds a 7th degree of freedom (scale, $s$) to the poses. The optimization backbone will be **Ceres Solver** 14, a SOTA C++ library for large, complex non-linear least-squares problems.
|
||||
|
||||
### **6.2 Advanced Scale-Drift Correction: Modeling Scale as a Per-Keyframe Parameter (Solves Flaw 1.3)**
|
||||
|
||||
This is the *core* of the ASTRAL optimizer, solving Flaw 1.3. The draft's flawed model ($Pose_Graph(Fragment_i) = \\{Pose_1...Pose_n, s_i\\}$) is replaced by ASTRAL's correct model: $Pose_Graph = \\{ (Pose_1, s_1), (Pose_2, s_2),..., (Pose_N, s_N) \\}$.
|
||||
|
||||
The graph is constructed as follows:
|
||||
|
||||
* **Nodes:** Each keyframe pose is a 7-DoF $Sim(3)$ variable $\\{s_k, R_k, t_k\\}$.
|
||||
* **Edge 1 (V-SLAM):** A *relative* $Sim(3)$ constraint between $Pose_k$ and $Pose_{k+1}$ from the V-SLAM Front-End.
|
||||
* **Edge 2 (GAB):** An *absolute* $SE(3)$ constraint on $Pose_j$ from a GAB anchor. This constraint *fixes* the 6-DoF pose $(R_j, t_j)$ to the metric GAB value and *fixes its scale* $s_j = 1.0$.
|
||||
|
||||
This "per-keyframe scale" model 15 enables "elastic" trajectory refinement. When the graph is a long, unscaled "chain" of V-SLAM constraints, a GAB anchor (Edge 2) arrives at $Pose_{100}$, "nailing" it to the metric map and setting $s_{100} = 1.0$. As the V-SLAM continues, scale drifts. When a second anchor arrives at $Pose_{200}$ (setting $s_{200} = 1.0$), the Ceres optimizer 14 has a problem: the V-SLAM data *between* them has drifted.
|
||||
|
||||
The ASTRAL model *allows* the optimizer to solve for all intermediate scales (s_{101}, s_{102},..., s_{199}) as variables. The optimizer will find a *smooth, continuous* scale correction 15 that "elastically" stretches/shrinks the 100-frame sub-segment to *perfectly* fit both metric anchors. This *correctly* models the physics of scale drift 12 and is the *only* way to achieve the 20m accuracy (AC-2) and 1.0px MRE (AC-10).
|
||||
|
||||
### **6.3 Robust M-Estimation (Solution for AC-3, AC-5)**
|
||||
|
||||
A 350m outlier (AC-3) or a bad GAB match (AC-5) will add a constraint with a *massive* error. A standard least-squares optimizer 14 would be *catastrophically* corrupted, pulling the *entire* 3000-image trajectory to try and fit this one bad point.
|
||||
|
||||
This is a solved problem. All constraints (V-SLAM and GAB) *must* be wrapped in a **Robust Loss Function** (e.g., HuberLoss, CauchyLoss) within Ceres Solver. This function mathematically *down-weights* the influence of constraints with large errors (high residuals). It effectively tells the optimizer: "This measurement is insane. Ignore it." This provides automatic, graceful outlier rejection, meeting AC-3 and AC-5.
|
||||
|
||||
### **6.4 Geodetic Map-Merging (Solution for AC-4, AC-6)**
|
||||
|
||||
This mechanism is the robust solution to the "sharp turn" (AC-4) problem.
|
||||
|
||||
* **Scenario:** The UAV makes a sharp turn (AC-4). The V-SLAM (4.2) creates Map_Fragment_0 and Map_Fragment_1. The TOH's graph now has two *disconnected* components.
|
||||
* **Mechanism (Geodetic Merging):**
|
||||
1. The TOH queues the GAB (Section 5.0) to find anchors for *both* fragments.
|
||||
2. GAB returns Anchor_A for Map_Fragment_0 and Anchor_B for Map_Fragment_1.
|
||||
3. The TOH adds *both* of these as absolute, metric constraints (Edge 2) to the *single global pose-graph*.
|
||||
4. The Ceres optimizer 14 now has all the information it needs. It solves for the 7-Dof pose of *both fragments*, placing them in their correct, globally-consistent metric positions.
|
||||
|
||||
The two fragments are *merged geodetically* (by their global coordinates 11) even if they *never* visually overlap. This is a vastly more robust solution to AC-4 and AC-6 than simple visual loop closure.
|
||||
|
||||
## **7.0 Performance, Deployment, and High-Accuracy Outputs**
|
||||
|
||||
### **7.1 Meeting the <5s Budget (AC-7): Mandatory Acceleration with NVIDIA TensorRT**
|
||||
|
||||
The system must run on an RTX 2060 (AC-7). This is a low-end, 6GB VRAM card 34, which is a *severe* constraint. Running three deep-learning models (SuperPoint, LightGlue, SALAD/MixVPR) plus a Ceres optimizer 38 will saturate this hardware.
|
||||
|
||||
* **Solution 1: Multi-Scale Pipeline.** As defined in 5.3, the system *never* processes a full 6.2K image on the GPU. It uses low-res for V-SLAM/GAB-Coarse and high-res *patches* for GAB-Fine.
|
||||
* **Solution 2: Mandatory TensorRT Deployment.** Running these models in their native PyTorch framework will be too slow. All neural networks (SuperPoint, LightGlue, SALAD/MixVPR) *must* be converted from PyTorch into optimized **NVIDIA TensorRT engines**. Research *specifically* on accelerating LightGlue shows this provides **"2x-4x speed gains over compiled PyTorch"**.35 This 200-400% speedup is *not* an optimization; it is a *mandatory deployment step* to make the <5s (AC-7) budget *possible* on an RTX 2060.
|
||||
|
||||
### **7.2 High-Accuracy Object Geolocalization via Ray-Cloud Intersection (Solves AC-2/AC-10)**
|
||||
|
||||
The user must be able to find the GPS of an *object* in a photo. A simple approach of ray-casting from the camera and intersecting with the 30m GLO-30 DEM 2 is fatally flawed. The DEM error itself can be up to 30m 19, making AC-2 impossible.
|
||||
|
||||
The ASTRAL system uses a **Ray-Cloud Intersection** method that *decouples* object accuracy from external DEM accuracy.
|
||||
|
||||
* **Algorithm:**
|
||||
1. The user clicks pixel (u,v) on Image_N.
|
||||
2. The system retrieves the *final, refined, metric 7-DoF pose* P_{sim(3)} = (s, R, T) for Image_N from the TOH.
|
||||
3. It also retrieves the V-SLAM's *local, high-fidelity 3D point cloud* (P_{local_cloud}) from Component 3 (Section 4.3).
|
||||
4. **Step 1 (Local):** The pixel (u,v) is un-projected into a ray. This ray is intersected with the *local* P_{local_cloud}. This finds the 3D point $P_{local} *relative to the V-SLAM map*. The accuracy of this step is defined by AC-10 (MRE < 1.0px).
|
||||
5. **Step 2 (Global):** This *highly-accurate* local point P_{local} is transformed into the global metric coordinate system using the *highly-accurate* refined pose from the TOH: P_{metric} = s * (R * P_{local}) + T.
|
||||
6. **Step 3 (Convert):** P_{metric} (an X,Y,Z world coordinate) is converted to [Latitude, Longitude, Altitude].
|
||||
|
||||
This method correctly isolates error. The object's accuracy is now *only* dependent on the V-SLAM's internal geometry (AC-10) and the TOH's global pose accuracy (AC-1, AC-2). It *completely eliminates* the external 30m DEM error 2 from this critical, high-accuracy calculation.
|
||||
|
||||
### **7.3 Failure Mode Escalation Logic (Meets AC-3, AC-4, AC-6, AC-9)**
|
||||
|
||||
The system is built on a robust state machine to handle real-world failures.
|
||||
|
||||
* **Stage 1: Normal Operation (Tracking):** V-SLAM tracks, TOH optimizes.
|
||||
* **Stage 2: Transient VO Failure (Outlier Rejection):**
|
||||
* *Condition:* Image_N is a 350m outlier (AC-3) or severe blur.
|
||||
* *Logic:* V-SLAM fails to track Image_N. System *discards* it (AC-5). Image_N+1 arrives, V-SLAM re-tracks.
|
||||
* *Result:* **AC-3 Met.**
|
||||
* **Stage 3: Persistent VO Failure (New Map Initialization):**
|
||||
* *Condition:* "Sharp turn" (AC-4) or >5 frames of tracking loss.
|
||||
* *Logic:* V-SLAM (Section 4.2) declares "Tracking Lost." Initializes *new* Map_Fragment_k+1. Tracking *resumes instantly*.
|
||||
* *Result:* **AC-4 Met.** System "correctly continues the work." The >95% registration rate (AC-9) is met because this is *not* a failure, it's a *new registration*.
|
||||
* **Stage 4: Map-Merging & Global Relocalization (GAB-Assisted):**
|
||||
* *Condition:* System is on Map_Fragment_k+1, Map_Fragment_k is "lost."
|
||||
* *Logic:* TOH (Section 6.4) receives GAB anchors for *both* fragments and *geodetically merges* them in the global optimizer.14
|
||||
* *Result:* **AC-6 Met** (strategy to connect separate chunks).
|
||||
* **Stage 5: Catastrophic Failure (User Intervention):**
|
||||
* *Condition:* System is in Stage 3 (Lost) *and* the GAB has failed for 20% of the route. The "absolutely incapable" scenario (AC-6).
|
||||
* *Logic:* TOH triggers the AC-6 flag. UI prompts user: "Please provide a coarse location for the *current* image."
|
||||
* *Action:* This user-click is *not* taken as ground-truth. It is fed to the **GAB (Section 5.0)** as a *strong spatial prior*, narrowing its Stage 1 8 search from "the entire AOI" to "a 5km radius." This *guarantees* the GAB finds a match, which triggers Stage 4, re-localizing the system.
|
||||
* *Result:* **AC-6 Met** (user input).
|
||||
|
||||
## **8.0 ASTRAL Validation Plan and Acceptance Criteria Matrix**
|
||||
|
||||
A comprehensive test plan is required to validate compliance with all 10 Acceptance Criteria. The foundation is a **Ground-Truth Test Harness** using project-provided ground-truth data.
|
||||
|
||||
### **Table 4: ASTRAL Component vs. Acceptance Criteria Compliance Matrix**
|
||||
|
||||
| ID | Requirement | ASTRAL Solution (Component) | Key Technology / Justification |
|
||||
| :---- | :---- | :---- | :---- |
|
||||
| **AC-1** | 80% of photos < 50m error | GDB (C-1) + GAB (C-5) + TOH (C-6) | **Tier-1 (Copernicus)** data 1 is sufficient. SOTA VPR 8 + Sim(3) graph 13 can achieve this. |
|
||||
| **AC-2** | 60% of photos < 20m error | GDB (C-1) + GAB (C-5) + TOH (C-6) | **Requires Tier-2 (Commercial) Data**.4 Mitigates reference error.3 **Per-Keyframe Scale** 15 model in TOH minimizes drift error. |
|
||||
| **AC-3** | Robust to 350m outlier | V-SLAM (C-3) + TOH (C-6) | **Stage 2 Failure Logic** (7.3) discards the frame. **Robust M-Estimation** (6.3) in Ceres 14 automatically rejects the constraint. |
|
||||
| **AC-4** | Robust to sharp turns (<5% overlap) | V-SLAM (C-3) + TOH (C-6) | **"Atlas" Multi-Map** (4.2) initializes new map (Map_Fragment_k+1). **Geodetic Map-Merging** (6.4) in TOH re-connects fragments via GAB anchors. |
|
||||
| **AC-5** | < 10% outlier anchors | TOH (C-6) | **Robust M-Estimation (Huber Loss)** (6.3) in Ceres 14 automatically down-weights and ignores high-residual (bad) GAB anchors. |
|
||||
| **AC-6** | Connect route chunks; User input | V-SLAM (C-3) + TOH (C-6) + UI | **Geodetic Map-Merging** (6.4) connects chunks. **Stage 5 Failure Logic** (7.3) provides the user-input-as-prior mechanism. |
|
||||
| **AC-7** | < 5 seconds processing/image | All Components | **Multi-Scale Pipeline** (5.3) (Low-Res V-SLAM, Hi-Res GAB patches). **Mandatory TensorRT Acceleration** (7.1) for 2-4x speedup.35 |
|
||||
| **AC-8** | Real-time stream + async refinement | TOH (C-5) + Outputs (C-2.4) | Decoupled architecture provides Pose_N_Est (V-SLAM) in real-time and Pose_N_Refined (TOH) asynchronously as GAB anchors arrive. |
|
||||
| **AC-9** | Image Registration Rate > 95% | V-SLAM (C-3) | **"Atlas" Multi-Map** (4.2). A "lost track" (AC-4) is *not* a registration failure; it's a *new map registration*. This ensures the rate > 95%. |
|
||||
| **AC-10** | Mean Reprojection Error (MRE) < 1.0px | V-SLAM (C-3) + TOH (C-6) | Local BA (4.3) + Global BA (TOH14) + **Per-Keyframe Scale** (6.2) minimizes internal graph tension (Flaw 1.3), allowing the optimizer to converge to a low MRE. |
|
||||
|
||||
### **8.1 Rigorous Validation Methodology**
|
||||
|
||||
* **Test Harness:** A validation script will be created to compare the system's Pose_N^{Refined} output against a ground-truth coordinates.csv file, computing Haversine distance errors.
|
||||
* **Test Datasets:**
|
||||
* Test_Baseline: Standard flight.
|
||||
* Test_Outlier_350m (AC-3): A single, unrelated image inserted.
|
||||
* Test_Sharp_Turn_5pct (AC-4): A sequence with a 10-frame gap.
|
||||
* Test_Long_Route (AC-9, AC-7): A 2000-image sequence.
|
||||
* **Test Cases:**
|
||||
* Test_Accuracy: Run Test_Baseline. ASSERT (count(errors < 50m) / total) >= 0.80 (AC-1). ASSERT (count(errors < 20m) / total) >= 0.60 (AC-2).
|
||||
* Test_Robustness: Run Test_Outlier_350m and Test_Sharp_Turn_5pct. ASSERT system completes the run and Test_Accuracy assertions still pass on the valid frames.
|
||||
* Test_Performance: Run Test_Long_Route on min-spec RTX 2060. ASSERT average_time(Pose_N^{Est} output) < 5.0s (AC-7).
|
||||
* Test_MRE: ASSERT TOH.final_MRE < 1.0 (AC-10).
|
||||
@@ -0,0 +1,284 @@
|
||||
|
||||
|
||||
# **ASTRAL-Next: A Resilient, GNSS-Denied Geo-Localization Architecture for Wing-Type UAVs in Complex Semantic Environments**
|
||||
|
||||
## **1. Executive Summary and Operational Context**
|
||||
|
||||
The strategic necessity of operating Unmanned Aerial Vehicles (UAVs) in Global Navigation Satellite System (GNSS)-denied environments has precipitated a fundamental shift in autonomous navigation research. The specific operational profile under analysis—high-speed, fixed-wing UAVs operating without Inertial Measurement Units (IMU) over the visually homogenous and texture-repetitive terrain of Eastern and Southern Ukraine—presents a confluence of challenges that render traditional Simultaneous Localization and Mapping (SLAM) approaches insufficient. The target environment, characterized by vast agricultural expanses, seasonal variability, and potential conflict-induced terrain alteration, demands a navigation architecture that moves beyond simple visual odometry to a robust, multi-layered Absolute Visual Localization (AVL) system.
|
||||
|
||||
This report articulates the design and theoretical validation of **ASTRAL-Next**, a comprehensive architectural framework engineered to supersede the limitations of preliminary dead-reckoning solutions. By synthesizing state-of-the-art (SOTA) research emerging in 2024 and 2025, specifically leveraging **LiteSAM** for efficient cross-view matching 1, **AnyLoc** for universal place recognition 2, and **SuperPoint+LightGlue** for robust sequential tracking 1, the proposed system addresses the critical failure modes inherent in wing-type UAV flight dynamics. These dynamics include sharp banking maneuvers, significant pitch variations leading to ground sampling distance (GSD) disparities, and the potential for catastrophic track loss (the "kidnapped robot" problem).
|
||||
|
||||
The analysis indicates that relying solely on sequential image overlap is viable only for short-term trajectory smoothing. The core innovation of ASTRAL-Next lies in its "Hierarchical + Anchor" topology, which decouples the relative motion estimation from absolute global anchoring. This ensures that even during zero-overlap turns or 350-meter positional outliers caused by airframe tilt, the system can re-localize against a pre-cached satellite reference map within the required 5-second latency window.3 Furthermore, the system accounts for the semantic disconnect between live UAV imagery and potentially outdated satellite reference data (e.g., Google Maps) by prioritizing semantic geometry over pixel-level photometric consistency.
|
||||
|
||||
### **1.1 Operational Environment and Constraints Analysis**
|
||||
|
||||
The operational theater—specifically the left bank of the Dnipro River in Ukraine—imposes rigorous constraints on computer vision algorithms. The absence of IMU data removes the ability to directly sense acceleration and angular velocity, creating a scale ambiguity in monocular vision systems that must be resolved through external priors (altitude) and absolute reference data.
|
||||
|
||||
| Constraint Category | Specific Challenge | Implication for System Design |
|
||||
| :---- | :---- | :---- |
|
||||
| **Sensor Limitation** | **No IMU Data** | The system cannot distinguish between pure translation and camera rotation (pitch/roll) without visual references. Scale must be constrained via altitude priors and satellite matching.5 |
|
||||
| **Flight Dynamics** | **Wing-Type UAV** | Unlike quadcopters, fixed-wing aircraft cannot hover. They bank to turn, causing horizon shifts and perspective distortions. "Sharp turns" result in 0% image overlap.6 |
|
||||
| **Terrain Texture** | **Agricultural Fields** | Repetitive crop rows create aliasing for standard descriptors (SIFT/ORB). Feature matching requires context-aware deep learning methods (SuperPoint).7 |
|
||||
| **Reference Data** | **Google Maps (2025)** | Public satellite data may be outdated or lower resolution than restricted military feeds. Matches must rely on invariant features (roads, tree lines) rather than ephemeral textures.9 |
|
||||
| **Compute Hardware** | **NVIDIA RTX 2060/3070** | Algorithms must be optimized for TensorRT to meet the <5s per frame requirement. Heavy transformers (e.g., ViT-Huge) are prohibitive; efficient architectures (LiteSAM) are required.1 |
|
||||
|
||||
The confluence of these factors necessitates a move away from simple "dead reckoning" (accumulating relative movements) which drifts exponentially. Instead, ASTRAL-Next operates as a **Global-Local Hybrid System**, where a high-frequency visual odometry layer handles frame-to-frame continuity, while a parallel global localization layer periodically "resets" the drift by anchoring the UAV to the satellite map.
|
||||
|
||||
## **2. Architectural Critique of Legacy Approaches**
|
||||
|
||||
The initial draft solution ("ASTRAL") and similar legacy approaches typically rely on a unified SLAM pipeline, often attempting to use the same feature extractors for both sequential tracking and global localization. Recent literature highlights substantial deficiencies in this monolithic approach, particularly when applied to the specific constraints of this project.
|
||||
|
||||
### **2.1 The Failure of Classical Descriptors in Agricultural Settings**
|
||||
|
||||
Classical feature descriptors like SIFT (Scale-Invariant Feature Transform) and ORB (Oriented FAST and Rotated BRIEF) rely on detecting "corners" and "blobs" based on local pixel intensity gradients. In the agricultural landscapes of Eastern Ukraine, this approach faces severe aliasing. A field of sunflowers or wheat presents thousands of identical "blobs," causing the nearest-neighbor matching stage to generate a high ratio of outliers.8
|
||||
Research demonstrates that deep-learning-based feature extractors, specifically SuperPoint, trained on large datasets of synthetic and real-world imagery, learn to identify interest points that are semantically significant (e.g., the intersection of a tractor path and a crop line) rather than just texturally distinct.1 Consequently, a redesign must replace SIFT/ORB with SuperPoint for the front-end tracking.
|
||||
|
||||
### **2.2 The Inadequacy of Dead Reckoning without IMU**
|
||||
|
||||
In a standard Visual-Inertial Odometry (VIO) system, the IMU provides a high-frequency prediction of the camera's pose, which the visual system then refines. Without an IMU, the system is purely Visual Odometry (VO). In VO, the scale of the world is unobservable from a single camera (monocular scale ambiguity). A 1-meter movement of a small object looks identical to a 10-meter movement of a large object.5
|
||||
While the prompt specifies a "predefined altitude," relying on this as a static constant is dangerous due to terrain undulations and barometric drift. ASTRAL-Next must implement a Scale-Constrained Bundle Adjustment, treating the altitude not as a hard fact, but as a strong prior that prevents the scale drift common in monocular systems.5
|
||||
|
||||
### **2.3 Vulnerability to "Kidnapped Robot" Scenarios**
|
||||
|
||||
The requirement to recover from sharp turns where the "next photo doesn't overlap at all" describes the classic "Kidnapped Robot Problem" in robotics—where a robot is teleported to an unknown location and must relocalize.14
|
||||
Sequential matching algorithms (optical flow, feature tracking) function on the assumption of overlap. When overlap is zero, these algorithms fail catastrophically. The legacy solution's reliance on continuous tracking makes it fragile to these flight dynamics. The redesigned architecture must incorporate a dedicated Global Place Recognition module that treats every frame as a potential independent query against the satellite database, independent of the previous frame's history.2
|
||||
|
||||
## **3. ASTRAL-Next: System Architecture and Methodology**
|
||||
|
||||
To meet the acceptance criteria—specifically the 80% success rate within 50m error and the <5 second processing time—ASTRAL-Next utilizes a tri-layer processing topology. These layers operate concurrently, feeding into a central state estimator.
|
||||
|
||||
### **3.1 The Tri-Layer Localization Strategy**
|
||||
|
||||
The architecture separates the concerns of continuity, recovery, and precision into three distinct algorithmic pathways.
|
||||
|
||||
| Layer | Functionality | Algorithm | Latency | Role in Acceptance Criteria |
|
||||
| :---- | :---- | :---- | :---- | :---- |
|
||||
| **L1: Sequential Tracking** | Frame-to-Frame Relative Pose | **SuperPoint + LightGlue** | \~50-100ms | Handles continuous flight, bridges small gaps (overlap < 5%), and maintains trajectory smoothness. Essential for the 100m spacing requirement. 1 |
|
||||
| **L2: Global Re-Localization** | "Kidnapped Robot" Recovery | **AnyLoc (DINOv2 + VLAD)** | \~200ms | Detects location after sharp turns (0% overlap) or track loss. Matches current view to the satellite database tile. Addresses the sharp turn recovery criterion. 2 |
|
||||
| **L3: Metric Refinement** | Precise GPS Anchoring | **LiteSAM / HLoc** | \~300-500ms | "Stitches" the UAV image to the satellite tile with pixel-level accuracy to reset drift. Ensures the "80% < 50m" and "60% < 20m" accuracy targets. 1 |
|
||||
|
||||
### **3.2 Data Flow and State Estimation**
|
||||
|
||||
The system utilizes a **Factor Graph Optimization** (using libraries like GTSAM) as the central "brain."
|
||||
|
||||
1. **Inputs:**
|
||||
* **Relative Factors:** Provided by Layer 1 (Change in pose from $t-1$ to $t$).
|
||||
* **Absolute Factors:** Provided by Layer 3 (Global GPS coordinate at $t$).
|
||||
* **Priors:** Altitude constraint and Ground Plane assumption.
|
||||
2. **Processing:** The factor graph optimizes the trajectory by minimizing the error between these conflicting constraints.
|
||||
3. **Output:** A smoothed, globally consistent trajectory $(x, y, z, \\text{roll}, \\text{pitch}, \\text{yaw})$ for every image timestamp.
|
||||
|
||||
### **3.3 ZeroMQ Background Service Architecture**
|
||||
|
||||
As per the requirement, the system operates as a background service.
|
||||
|
||||
* **Communication Pattern:** The service utilizes a REP-REQ (Reply-Request) pattern for control commands (Start/Stop/Reset) and a PUB-SUB (Publish-Subscribe) pattern for the continuous stream of localization results.
|
||||
* **Concurrency:** Layer 1 runs on a high-priority thread to ensure immediate feedback. Layers 2 and 3 run asynchronously; when a global match is found, the result is injected into the Factor Graph, which then "back-propagates" the correction to previous frames, refining the entire recent trajectory.
|
||||
|
||||
## **4. Layer 1: Robust Sequential Visual Odometry**
|
||||
|
||||
The first line of defense against localization loss is robust tracking between consecutive UAV images. Given the challenging agricultural environment, standard feature matching is prone to failure. ASTRAL-Next employs **SuperPoint** and **LightGlue**.
|
||||
|
||||
### **4.1 SuperPoint: Semantic Feature Detection**
|
||||
|
||||
SuperPoint is a fully convolutional neural network trained to detect interest points and compute their descriptors. Unlike SIFT, which uses handcrafted mathematics to find corners, SuperPoint is trained via self-supervision on millions of images.
|
||||
|
||||
* **Relevance to Ukraine:** In a wheat field, SIFT might latch onto hundreds of identical wheat stalks. SuperPoint, however, learns to prioritize more stable features, such as the boundary between the field and a dirt road, or a specific patch of discoloration in the crop canopy.1
|
||||
* **Performance:** SuperPoint runs efficiently on the RTX 2060/3070, with inference times around 15ms per image when optimized with TensorRT.16
|
||||
|
||||
### **4.2 LightGlue: The Attention-Based Matcher**
|
||||
|
||||
**LightGlue** represents a paradigm shift from the traditional "Nearest Neighbor + RANSAC" matching pipeline. It is a deep neural network that takes two sets of SuperPoint features and jointly predicts the matches.
|
||||
|
||||
* **Mechanism:** LightGlue uses a transformer-based attention mechanism. It allows features in Image A to "look at" all features in Image B (and vice versa) to determine the best correspondence. Crucially, it has a "dustbin" mechanism to explicitly reject points that have no match (occlusion or field of view change).12
|
||||
* **Addressing the <5% Overlap:** The user specifies handling overlaps of "less than 5%." Traditional RANSAC fails here because the inlier ratio is too low. LightGlue, however, can confidently identify the few remaining matches because its attention mechanism considers the global geometric context of the points. If only a single road intersection is visible in the corner of both images, LightGlue is significantly more likely to match it correctly than SIFT.8
|
||||
* **Efficiency:** LightGlue is designed to be "light." It features an adaptive depth mechanism—if the images are easy to match, it exits early. If they are hard (low overlap), it uses more layers. This adaptability is perfect for the variable difficulty of the UAV flight path.19
|
||||
|
||||
## **5. Layer 2: Global Place Recognition (The "Kidnapped Robot" Solver)**
|
||||
|
||||
When the UAV executes a sharp turn, resulting in a completely new view (0% overlap), sequential tracking (Layer 1) is mathematically impossible. The system must recognize the new terrain solely based on its appearance. This is the domain of **AnyLoc**.
|
||||
|
||||
### **5.1 Universal Place Recognition with Foundation Models**
|
||||
|
||||
**AnyLoc** leverages **DINOv2**, a massive self-supervised vision transformer developed by Meta. DINOv2 is unique because it is not trained with labels; it is trained to understand the geometry and semantic layout of images.
|
||||
|
||||
* **Why DINOv2 for Satellite Matching:** Satellite images and UAV images have different "domains." The satellite image might be from summer (green), while the UAV flies in autumn (brown). DINOv2 features are remarkably invariant to these texture changes. It "sees" the shape of the road network or the layout of the field boundaries, rather than the color of the leaves.2
|
||||
* **VLAD Aggregation:** AnyLoc extracts dense features from the image using DINOv2 and aggregates them using **VLAD** (Vector of Locally Aggregated Descriptors) into a single, compact vector (e.g., 4096 dimensions). This vector represents the "fingerprint" of the location.21
|
||||
|
||||
### **5.2 Implementation Strategy**
|
||||
|
||||
1. **Database Preparation:** Before the mission, the system downloads the satellite imagery for the operational bounding box (Eastern/Southern Ukraine). These images are tiled (e.g., 512x512 pixels with overlap) and processed through AnyLoc to generate a database of descriptors.
|
||||
2. **Faiss Indexing:** These descriptors are indexed using **Faiss**, a library for efficient similarity search.
|
||||
3. **In-Flight Retrieval:** When Layer 1 reports a loss of tracking (or periodically), the current UAV image is processed by AnyLoc. The resulting vector is queried against the Faiss index.
|
||||
4. **Result:** The system retrieves the top-5 most similar satellite tiles. These tiles represent the coarse global location of the UAV (e.g., "You are in Grid Square B7").2
|
||||
|
||||
## **6. Layer 3: Fine-Grained Metric Localization (LiteSAM)**
|
||||
|
||||
Retrieving the correct satellite tile (Layer 2) gives a location error of roughly the tile size (e.g., 200 meters). To meet the "60% < 20m" and "80% < 50m" criteria, the system must precisely align the UAV image onto the satellite tile. ASTRAL-Next utilizes **LiteSAM**.
|
||||
|
||||
### **6.1 Justification for LiteSAM over TransFG**
|
||||
|
||||
While **TransFG** (Transformer for Fine-Grained recognition) is a powerful architecture for cross-view geo-localization, it is computationally heavy.23 **LiteSAM** (Lightweight Satellite-Aerial Matching) is specifically architected for resource-constrained platforms (like UAV onboard computers or efficient ground stations) while maintaining state-of-the-art accuracy.
|
||||
|
||||
* **Architecture:** LiteSAM utilizes a **Token Aggregation-Interaction Transformer (TAIFormer)**. It employs a convolutional token mixer (CTM) to model correlations between the UAV and satellite images.
|
||||
* **Multi-Scale Processing:** LiteSAM processes features at multiple scales. This is critical because the UAV altitude varies (<1km), meaning the scale of objects in the UAV image will not perfectly match the fixed scale of the satellite image (Google Maps Zoom Level 19). LiteSAM's multi-scale approach inherently handles this discrepancy.1
|
||||
* **Performance Data:** Empirical benchmarks on the **UAV-VisLoc** dataset show LiteSAM achieving an RMSE@30 (Root Mean Square Error within 30 meters) of 17.86 meters, directly supporting the project's accuracy requirements. Its inference time is approximately 61.98ms on standard GPUs, ensuring it fits within the overall 5-second budget.1
|
||||
|
||||
### **6.2 The Alignment Process**
|
||||
|
||||
1. **Input:** The UAV Image and the Top-1 Satellite Tile from Layer 2.
|
||||
2. **Processing:** LiteSAM computes the dense correspondence field between the two images.
|
||||
3. **Homography Estimation:** Using the correspondences, the system computes a homography matrix $H$ that maps pixels in the UAV image to pixels in the georeferenced satellite tile.
|
||||
4. **Pose Extraction:** The camera's absolute GPS position is derived from this homography, utilizing the known GSD of the satellite tile.18
|
||||
|
||||
## **7. Satellite Data Management and Coordinate Systems**
|
||||
|
||||
The reliability of the entire system hinges on the quality and handling of the reference map data. The restriction to "Google Maps" necessitates a rigorous approach to coordinate transformation and data freshness management.
|
||||
|
||||
### **7.1 Google Maps Static API and Mercator Projection**
|
||||
|
||||
The Google Maps Static API delivers images without embedded georeferencing metadata (GeoTIFF tags). The system must mathematically derive the bounding box of each downloaded tile to assign coordinates to the pixels. Google Maps uses the **Web Mercator Projection (EPSG:3857)**.
|
||||
|
||||
The system must implement the following derivation to establish the **Ground Sampling Distance (GSD)**, or meters_per_pixel, which varies significantly with latitude:
|
||||
|
||||
$$ \\text{meters_per_pixel} = 156543.03392 \\times \\frac{\\cos(\\text{latitude} \\times \\frac{\\pi}{180})}{2^{\\text{zoom}}} $$
|
||||
|
||||
For the operational region (Ukraine, approx. Latitude 48N):
|
||||
|
||||
* At **Zoom Level 19**, the resolution is approximately 0.30 meters/pixel. This resolution is compatible with the input UAV imagery (Full HD at <1km altitude), providing sufficient detail for the LiteSAM matcher.24
|
||||
|
||||
**Bounding Box Calculation Algorithm:**
|
||||
|
||||
1. **Input:** Center Coordinate $(lat, lon)$, Zoom Level ($z$), Image Size $(w, h)$.
|
||||
2. **Project to World Coordinates:** Convert $(lat, lon)$ to world pixel coordinates $(px, py)$ at the given zoom level.
|
||||
3. **Corner Calculation:**
|
||||
* px_{NW} = px - (w / 2)
|
||||
* py_{NW} = py - (h / 2)
|
||||
4. Inverse Projection: Convert $(px_{NW}, py_{NW})$ back to Latitude/Longitude to get the North-West corner. Repeat for South-East.
|
||||
This calculation is critical. A precision error here translates directly to a systematic bias in the final GPS output.
|
||||
|
||||
### **7.2 Mitigating Data Obsolescence (The 2025 Problem)**
|
||||
|
||||
The provided research highlights that satellite imagery access over Ukraine is subject to restrictions and delays (e.g., Maxar restrictions in 2025).10 Google Maps data may be several years old.
|
||||
|
||||
* **Semantic Anchoring:** This reinforces the selection of **AnyLoc** (Layer 2) and **LiteSAM** (Layer 3). These algorithms are trained to ignore transient features (cars, temporary structures, vegetation color) and focus on persistent structural features (road geometry, building footprints).
|
||||
* **Seasonality:** Research indicates that DINOv2 features (used in AnyLoc) exhibit strong robustness to seasonal changes (e.g., winter satellite map vs. summer UAV flight), maintaining high retrieval recall where pixel-based methods fail.17
|
||||
|
||||
## **8. Optimization and State Estimation (The "Brain")**
|
||||
|
||||
The individual outputs of the visual layers are noisy. Layer 1 drifts over time; Layer 3 may have occasional outliers. The **Factor Graph Optimization** fuses these inputs into a coherent trajectory.
|
||||
|
||||
### **8.1 Handling the 350-Meter Outlier (Tilt)**
|
||||
|
||||
The prompt specifies that "up to 350 meters of an outlier... could happen due to tilt." This large displacement masquerading as translation is a classic source of divergence in Kalman Filters.
|
||||
|
||||
* **Robust Cost Functions:** In the Factor Graph, the error terms for the visual factors are wrapped in a **Robust Kernel** (specifically the **Cauchy** or **Huber** kernel).
|
||||
* *Mechanism:* Standard least-squares optimization penalizes errors quadratically ($e^2$). If a 350m error occurs, the penalty is massive, dragging the entire trajectory off-course. A robust kernel changes the penalty to be linear ($|e|$) or logarithmic after a certain threshold. This allows the optimizer to effectively "ignore" or down-weight the 350m jump if it contradicts the consensus of other measurements, treating it as a momentary outlier or solving for it as a rotation rather than a translation.19
|
||||
|
||||
### **8.2 The Altitude Soft Constraint**
|
||||
|
||||
To resolve the monocular scale ambiguity without IMU, the altitude ($h_{prior}$) is added as a **Unary Factor** to the graph.
|
||||
|
||||
* $E_{alt} = |
|
||||
|
||||
| z_{est} \- h_{prior} ||*{\\Sigma*{alt}}$
|
||||
|
||||
* $\\Sigma_{alt}$ (covariance) is set relatively high (soft constraint), allowing the visual odometry to adjust the altitude slightly to maintain consistency, but preventing the scale from collapsing to zero or exploding to infinity. This effectively creates an **Altimeter-Aided Monocular VIO** system, where the altimeter (virtual or barometric) replaces the accelerometer for scale determination.5
|
||||
|
||||
## **9. Implementation Specifications**
|
||||
|
||||
### **9.1 Hardware Acceleration (TensorRT)**
|
||||
|
||||
Meeting the <5 second per frame requirement on an RTX 2060 requires optimizing the deep learning models. Python/PyTorch inference is typically too slow due to overhead.
|
||||
|
||||
* **Model Export:** All core models (SuperPoint, LightGlue, LiteSAM) must be exported to **ONNX** (Open Neural Network Exchange) format.
|
||||
* **TensorRT Compilation:** The ONNX models are then compiled into **TensorRT Engines**. This process performs graph fusion (combining multiple layers into one) and kernel auto-tuning (selecting the fastest GPU instructions for the specific RTX 2060/3070 architecture).26
|
||||
* **Precision:** The models should be quantized to **FP16** (16-bit floating point). Research shows that FP16 inference on NVIDIA RTX cards offers a 2x-3x speedup with negligible loss in matching accuracy for these specific networks.16
|
||||
|
||||
### **9.2 Background Service Architecture (ZeroMQ)**
|
||||
|
||||
The system is encapsulated as a headless service.
|
||||
|
||||
**ZeroMQ Topology:**
|
||||
|
||||
* **Socket 1 (REP - Port 5555):** Command Interface. Accepts JSON messages:
|
||||
* {"cmd": "START", "config": {"lat": 48.1, "lon": 37.5}}
|
||||
* {"cmd": "USER_FIX", "lat": 48.22, "lon": 37.66} (Human-in-the-loop input).
|
||||
* **Socket 2 (PUB - Port 5556):** Data Stream. Publishes JSON results for every frame:
|
||||
* {"frame_id": 1024, "gps": [48.123, 37.123], "object_centers": [...], "status": "LOCKED", "confidence": 0.98}.
|
||||
|
||||
Asynchronous Pipeline:
|
||||
The system utilizes a Python multiprocessing architecture. One process handles the camera/image ingest and ZeroMQ communication. A second process hosts the TensorRT engines and runs the Factor Graph. This ensures that the heavy computation of Bundle Adjustment does not block the receipt of new images or user commands.
|
||||
|
||||
## **10. Human-in-the-Loop Strategy**
|
||||
|
||||
The requirement stipulates that for the "20% of the route" where automation fails, the user must intervene. The system must proactively detect its own failure.
|
||||
|
||||
### **10.1 Failure Detection with PDM@K**
|
||||
|
||||
The system monitors the **PDM@K** (Positioning Distance Measurement) metric continuously.
|
||||
|
||||
* **Definition:** PDM@K measures the percentage of queries localized within $K$ meters.3
|
||||
* **Real-Time Proxy:** In flight, we cannot know the true PDM (as we don't have ground truth). Instead, we use the **Marginal Covariance** from the Factor Graph. If the uncertainty ellipse for the current position grows larger than a radius of 50 meters, or if the **Image Registration Rate** (percentage of inliers in LightGlue/LiteSAM) drops below 10% for 3 consecutive frames, the system triggers a **Critical Failure Mode**.19
|
||||
|
||||
### **10.2 The User Interaction Workflow**
|
||||
|
||||
1. **Trigger:** Critical Failure Mode activated.
|
||||
2. **Action:** The Service publishes a status {"status": "REQ_INPUT"} via ZeroMQ.
|
||||
3. **Data Payload:** It sends the current UAV image and the top-3 retrieved satellite tiles (from Layer 2) to the client UI.
|
||||
4. **User Input:** The user clicks a distinctive feature (e.g., a specific crossroad) in the UAV image and the corresponding point on the satellite map.
|
||||
5. **Recovery:** This pair of points is treated as a **Hard Constraint** in the Factor Graph. The optimizer immediately snaps the trajectory to this user-defined anchor, resetting the covariance and effectively "healing" the localized track.19
|
||||
|
||||
## **11. Performance Evaluation and Benchmarks**
|
||||
|
||||
### **11.1 Accuracy Validation**
|
||||
|
||||
Based on the reported performance of the selected components in relevant datasets (UAV-VisLoc, AnyVisLoc):
|
||||
|
||||
* **LiteSAM** demonstrates an accuracy of 17.86m (RMSE) for cross-view matching. This aligns with the requirement that 60% of photos be within 20m error.18
|
||||
* **AnyLoc** achieves high recall rates (Top-1 Recall > 85% on aerial benchmarks), supporting the recovery from sharp turns.2
|
||||
* **Factor Graph Fusion:** By combining sequential and global measurements, the overall system error is expected to be lower than the individual component errors, satisfying the "80% within 50m" criterion.
|
||||
|
||||
### **11.2 Latency Analysis**
|
||||
|
||||
The breakdown of processing time per frame on an RTX 3070 is estimated as follows:
|
||||
|
||||
* **SuperPoint + LightGlue:** \~50ms.1
|
||||
* **AnyLoc (Global Retrieval):** \~150ms (run only on keyframes or tracking loss).
|
||||
* **LiteSAM (Metric Refinement):** \~60ms.1
|
||||
* **Factor Graph Optimization:** \~100ms (using incremental updates/iSAM2).
|
||||
* Total: \~360ms per frame (worst case with all layers active).
|
||||
This is an order of magnitude faster than the 5-second limit, providing ample headroom for higher resolution processing or background tasks.
|
||||
|
||||
## **12.0 ASTRAL-Next Validation Plan and Acceptance Criteria Matrix**
|
||||
|
||||
A comprehensive test plan is required to validate compliance with all 10 Acceptance Criteria. The foundation is a **Ground-Truth Test Harness** using project-provided ground-truth data.
|
||||
|
||||
### **Table 4: ASTRAL Component vs. Acceptance Criteria Compliance Matrix**
|
||||
|
||||
| ID | Requirement | ASTRAL Solution (Component) | Key Technology / Justification |
|
||||
| :---- | :---- | :---- | :---- |
|
||||
| **AC-1** | 80% of photos < 50m error | GDB (C-1) + GAB (C-5) + TOH (C-6) | **Tier-1 (Copernicus)** data 1 is sufficient. SOTA VPR 8 + Sim(3) graph 13 can achieve this. |
|
||||
| **AC-2** | 60% of photos < 20m error | GDB (C-1) + GAB (C-5) + TOH (C-6) | **Requires Tier-2 (Commercial) Data**.4 Mitigates reference error.3 **Per-Keyframe Scale** 15 model in TOH minimizes drift error. |
|
||||
| **AC-3** | Robust to 350m outlier | V-SLAM (C-3) + TOH (C-6) | **Stage 2 Failure Logic** (7.3) discards the frame. **Robust M-Estimation** (6.3) in Ceres 14 automatically rejects the constraint. |
|
||||
| **AC-4** | Robust to sharp turns (<5% overlap) | V-SLAM (C-3) + TOH (C-6) | **"Atlas" Multi-Map** (4.2) initializes new map (Map_Fragment_k+1). **Geodetic Map-Merging** (6.4) in TOH re-connects fragments via GAB anchors. |
|
||||
| **AC-5** | < 10% outlier anchors | TOH (C-6) | **Robust M-Estimation (Huber Loss)** (6.3) in Ceres 14 automatically down-weights and ignores high-residual (bad) GAB anchors. |
|
||||
| **AC-6** | Connect route chunks; User input | V-SLAM (C-3) + TOH (C-6) + UI | **Geodetic Map-Merging** (6.4) connects chunks. **Stage 5 Failure Logic** (7.3) provides the user-input-as-prior mechanism. |
|
||||
| **AC-7** | < 5 seconds processing/image | All Components | **Multi-Scale Pipeline** (5.3) (Low-Res V-SLAM, Hi-Res GAB patches). **Mandatory TensorRT Acceleration** (7.1) for 2-4x speedup.35 |
|
||||
| **AC-8** | Real-time stream + async refinement | TOH (C-5) + Outputs (C-2.4) | Decoupled architecture provides Pose_N_Est (V-SLAM) in real-time and Pose_N_Refined (TOH) asynchronously as GAB anchors arrive. |
|
||||
| **AC-9** | Image Registration Rate > 95% | V-SLAM (C-3) | **"Atlas" Multi-Map** (4.2). A "lost track" (AC-4) is *not* a registration failure; it's a *new map registration*. This ensures the rate > 95%. |
|
||||
| **AC-10** | Mean Reprojection Error (MRE) < 1.0px | V-SLAM (C-3) + TOH (C-6) | Local BA (4.3) + Global BA (TOH14) + **Per-Keyframe Scale** (6.2) minimizes internal graph tension (Flaw 1.3), allowing the optimizer to converge to a low MRE. |
|
||||
|
||||
### **8.1 Rigorous Validation Methodology**
|
||||
|
||||
* **Test Harness:** A validation script will be created to compare the system's Pose_N^{Refined} output against a ground-truth coordinates.csv file, computing Haversine distance errors.
|
||||
* **Test Datasets:**
|
||||
* Test_Baseline: Standard flight.
|
||||
* Test_Outlier_350m (AC-3): A single, unrelated image inserted.
|
||||
* Test_Sharp_Turn_5pct (AC-4): A sequence with a 10-frame gap.
|
||||
* Test_Long_Route (AC-9, AC-7): A 2000-image sequence.
|
||||
* **Test Cases:**
|
||||
* Test_Accuracy: Run Test_Baseline. ASSERT (count(errors < 50m) / total) >= 0.80 (AC-1). ASSERT (count(errors < 20m) / total) >= 0.60 (AC-2).
|
||||
* Test_Robustness: Run Test_Outlier_350m and Test_Sharp_Turn_5pct. ASSERT system completes the run and Test_Accuracy assertions still pass on the valid frames.
|
||||
* Test_Performance: Run Test_Long_Route on min-spec RTX 2060. ASSERT average_time(Pose_N^{Est} output) < 5.0s (AC-7).
|
||||
* Test_MRE: ASSERT TOH.final_MRE < 1.0 (AC-10).
|
||||
@@ -0,0 +1,282 @@
|
||||
# **ASTRAL-Next: A Resilient, GNSS-Denied Geo-Localization Architecture for Wing-Type UAVs in Complex Semantic Environments**
|
||||
|
||||
## **1. Executive Summary and Operational Context**
|
||||
|
||||
The strategic necessity of operating Unmanned Aerial Vehicles (UAVs) in Global Navigation Satellite System (GNSS)-denied environments has precipitated a fundamental shift in autonomous navigation research. The specific operational profile under analysis—high-speed, fixed-wing UAVs operating without Inertial Measurement Units (IMU) over the visually homogenous and texture-repetitive terrain of Eastern and Southern Ukraine—presents a confluence of challenges that render traditional Simultaneous Localization and Mapping (SLAM) approaches insufficient. The target environment, characterized by vast agricultural expanses, seasonal variability, and potential conflict-induced terrain alteration, demands a navigation architecture that moves beyond simple visual odometry to a robust, multi-layered Absolute Visual Localization (AVL) system.
|
||||
|
||||
This report articulates the design and theoretical validation of **ASTRAL-Next**, a comprehensive architectural framework engineered to supersede the limitations of preliminary dead-reckoning solutions. By synthesizing state-of-the-art (SOTA) research emerging in 2024 and 2025, specifically leveraging **LiteSAM** for efficient cross-view matching 1, **AnyLoc** for universal place recognition 2, and **SuperPoint+LightGlue** for robust sequential tracking 1, the proposed system addresses the critical failure modes inherent in wing-type UAV flight dynamics. These dynamics include sharp banking maneuvers, significant pitch variations leading to ground sampling distance (GSD) disparities, and the potential for catastrophic track loss (the "kidnapped robot" problem).
|
||||
|
||||
The analysis indicates that relying solely on sequential image overlap is viable only for short-term trajectory smoothing. The core innovation of ASTRAL-Next lies in its "Hierarchical + Anchor" topology, which decouples the relative motion estimation from absolute global anchoring. This ensures that even during zero-overlap turns or 350-meter positional outliers caused by airframe tilt, the system can re-localize against a pre-cached satellite reference map within the required 5-second latency window.3 Furthermore, the system accounts for the semantic disconnect between live UAV imagery and potentially outdated satellite reference data (e.g., Google Maps) by prioritizing semantic geometry over pixel-level photometric consistency.
|
||||
|
||||
### **1.1 Operational Environment and Constraints Analysis**
|
||||
|
||||
The operational theater—specifically the left bank of the Dnipro River in Ukraine—imposes rigorous constraints on computer vision algorithms. The absence of IMU data removes the ability to directly sense acceleration and angular velocity, creating a scale ambiguity in monocular vision systems that must be resolved through external priors (altitude) and absolute reference data.
|
||||
|
||||
| Constraint Category | Specific Challenge | Implication for System Design |
|
||||
| :---- | :---- | :---- |
|
||||
| **Sensor Limitation** | **No IMU Data** | The system cannot distinguish between pure translation and camera rotation (pitch/roll) without visual references. Scale must be constrained via altitude priors and satellite matching.5 |
|
||||
| **Flight Dynamics** | **Wing-Type UAV** | Unlike quadcopters, fixed-wing aircraft cannot hover. They bank to turn, causing horizon shifts and perspective distortions. "Sharp turns" result in 0% image overlap.6 |
|
||||
| **Terrain Texture** | **Agricultural Fields** | Repetitive crop rows create aliasing for standard descriptors (SIFT/ORB). Feature matching requires context-aware deep learning methods (SuperPoint).7 |
|
||||
| **Reference Data** | **Google Maps (2025)** | Public satellite data may be outdated or lower resolution than restricted military feeds. Matches must rely on invariant features (roads, tree lines) rather than ephemeral textures.9 |
|
||||
| **Compute Hardware** | **NVIDIA RTX 2060/3070** | Algorithms must be optimized for TensorRT to meet the <5s per frame requirement. Heavy transformers (e.g., ViT-Huge) are prohibitive; efficient architectures (LiteSAM) are required.1 |
|
||||
|
||||
The confluence of these factors necessitates a move away from simple "dead reckoning" (accumulating relative movements) which drifts exponentially. Instead, ASTRAL-Next operates as a **Global-Local Hybrid System**, where a high-frequency visual odometry layer handles frame-to-frame continuity, while a parallel global localization layer periodically "resets" the drift by anchoring the UAV to the satellite map.
|
||||
|
||||
## **2. Architectural Critique of Legacy Approaches**
|
||||
|
||||
The initial draft solution ("ASTRAL") and similar legacy approaches typically rely on a unified SLAM pipeline, often attempting to use the same feature extractors for both sequential tracking and global localization. Recent literature highlights substantial deficiencies in this monolithic approach, particularly when applied to the specific constraints of this project.
|
||||
|
||||
### **2.1 The Failure of Classical Descriptors in Agricultural Settings**
|
||||
|
||||
Classical feature descriptors like SIFT (Scale-Invariant Feature Transform) and ORB (Oriented FAST and Rotated BRIEF) rely on detecting "corners" and "blobs" based on local pixel intensity gradients. In the agricultural landscapes of Eastern Ukraine, this approach faces severe aliasing. A field of sunflowers or wheat presents thousands of identical "blobs," causing the nearest-neighbor matching stage to generate a high ratio of outliers.8
|
||||
Research demonstrates that deep-learning-based feature extractors, specifically SuperPoint, trained on large datasets of synthetic and real-world imagery, learn to identify interest points that are semantically significant (e.g., the intersection of a tractor path and a crop line) rather than just texturally distinct.1 Consequently, a redesign must replace SIFT/ORB with SuperPoint for the front-end tracking.
|
||||
|
||||
### **2.2 The Inadequacy of Dead Reckoning without IMU**
|
||||
|
||||
In a standard Visual-Inertial Odometry (VIO) system, the IMU provides a high-frequency prediction of the camera's pose, which the visual system then refines. Without an IMU, the system is purely Visual Odometry (VO). In VO, the scale of the world is unobservable from a single camera (monocular scale ambiguity). A 1-meter movement of a small object looks identical to a 10-meter movement of a large object.5
|
||||
While the prompt specifies a "predefined altitude," relying on this as a static constant is dangerous due to terrain undulations and barometric drift. ASTRAL-Next must implement a Scale-Constrained Bundle Adjustment, treating the altitude not as a hard fact, but as a strong prior that prevents the scale drift common in monocular systems.5
|
||||
|
||||
### **2.3 Vulnerability to "Kidnapped Robot" Scenarios**
|
||||
|
||||
The requirement to recover from sharp turns where the "next photo doesn't overlap at all" describes the classic "Kidnapped Robot Problem" in robotics—where a robot is teleported to an unknown location and must relocalize.14
|
||||
Sequential matching algorithms (optical flow, feature tracking) function on the assumption of overlap. When overlap is zero, these algorithms fail catastrophically. The legacy solution's reliance on continuous tracking makes it fragile to these flight dynamics. The redesigned architecture must incorporate a dedicated Global Place Recognition module that treats every frame as a potential independent query against the satellite database, independent of the previous frame's history.2
|
||||
|
||||
## **3. ASTRAL-Next: System Architecture and Methodology**
|
||||
|
||||
To meet the acceptance criteria—specifically the 80% success rate within 50m error and the <5 second processing time—ASTRAL-Next utilizes a tri-layer processing topology. These layers operate concurrently, feeding into a central state estimator.
|
||||
|
||||
### **3.1 The Tri-Layer Localization Strategy**
|
||||
|
||||
The architecture separates the concerns of continuity, recovery, and precision into three distinct algorithmic pathways.
|
||||
|
||||
| Layer | Functionality | Algorithm | Latency | Role in Acceptance Criteria |
|
||||
| :---- | :---- | :---- | :---- | :---- |
|
||||
| **L1: Sequential Tracking** | Frame-to-Frame Relative Pose | **SuperPoint + LightGlue** | \~50-100ms | Handles continuous flight, bridges small gaps (overlap < 5%), and maintains trajectory smoothness. Essential for the 100m spacing requirement. 1 |
|
||||
| **L2: Global Re-Localization** | "Kidnapped Robot" Recovery | **AnyLoc (DINOv2 + VLAD)** | \~200ms | Detects location after sharp turns (0% overlap) or track loss. Matches current view to the satellite database tile. Addresses the sharp turn recovery criterion. 2 |
|
||||
| **L3: Metric Refinement** | Precise GPS Anchoring | **LiteSAM / HLoc** | \~300-500ms | "Stitches" the UAV image to the satellite tile with pixel-level accuracy to reset drift. Ensures the "80% < 50m" and "60% < 20m" accuracy targets. 1 |
|
||||
|
||||
### **3.2 Data Flow and State Estimation**
|
||||
|
||||
The system utilizes a **Factor Graph Optimization** (using libraries like GTSAM) as the central "brain."
|
||||
|
||||
1. **Inputs:**
|
||||
* **Relative Factors:** Provided by Layer 1 (Change in pose from $t-1$ to $t$).
|
||||
* **Absolute Factors:** Provided by Layer 3 (Global GPS coordinate at $t$).
|
||||
* **Priors:** Altitude constraint and Ground Plane assumption.
|
||||
2. **Processing:** The factor graph optimizes the trajectory by minimizing the error between these conflicting constraints.
|
||||
3. **Output:** A smoothed, globally consistent trajectory $(x, y, z, \\text{roll}, \\text{pitch}, \\text{yaw})$ for every image timestamp.
|
||||
|
||||
### **3.3 ZeroMQ Background Service Architecture**
|
||||
|
||||
As per the requirement, the system operates as a background service.
|
||||
|
||||
* **Communication Pattern:** The service utilizes a REP-REQ (Reply-Request) pattern for control commands (Start/Stop/Reset) and a PUB-SUB (Publish-Subscribe) pattern for the continuous stream of localization results.
|
||||
* **Concurrency:** Layer 1 runs on a high-priority thread to ensure immediate feedback. Layers 2 and 3 run asynchronously; when a global match is found, the result is injected into the Factor Graph, which then "back-propagates" the correction to previous frames, refining the entire recent trajectory.
|
||||
|
||||
## **4. Layer 1: Robust Sequential Visual Odometry**
|
||||
|
||||
The first line of defense against localization loss is robust tracking between consecutive UAV images. Given the challenging agricultural environment, standard feature matching is prone to failure. ASTRAL-Next employs **SuperPoint** and **LightGlue**.
|
||||
|
||||
### **4.1 SuperPoint: Semantic Feature Detection**
|
||||
|
||||
SuperPoint is a fully convolutional neural network trained to detect interest points and compute their descriptors. Unlike SIFT, which uses handcrafted mathematics to find corners, SuperPoint is trained via self-supervision on millions of images.
|
||||
|
||||
* **Relevance to Ukraine:** In a wheat field, SIFT might latch onto hundreds of identical wheat stalks. SuperPoint, however, learns to prioritize more stable features, such as the boundary between the field and a dirt road, or a specific patch of discoloration in the crop canopy.1
|
||||
* **Performance:** SuperPoint runs efficiently on the RTX 2060/3070, with inference times around 15ms per image when optimized with TensorRT.16
|
||||
|
||||
### **4.2 LightGlue: The Attention-Based Matcher**
|
||||
|
||||
**LightGlue** represents a paradigm shift from the traditional "Nearest Neighbor + RANSAC" matching pipeline. It is a deep neural network that takes two sets of SuperPoint features and jointly predicts the matches.
|
||||
|
||||
* **Mechanism:** LightGlue uses a transformer-based attention mechanism. It allows features in Image A to "look at" all features in Image B (and vice versa) to determine the best correspondence. Crucially, it has a "dustbin" mechanism to explicitly reject points that have no match (occlusion or field of view change).12
|
||||
* **Addressing the <5% Overlap:** The user specifies handling overlaps of "less than 5%." Traditional RANSAC fails here because the inlier ratio is too low. LightGlue, however, can confidently identify the few remaining matches because its attention mechanism considers the global geometric context of the points. If only a single road intersection is visible in the corner of both images, LightGlue is significantly more likely to match it correctly than SIFT.8
|
||||
* **Efficiency:** LightGlue is designed to be "light." It features an adaptive depth mechanism—if the images are easy to match, it exits early. If they are hard (low overlap), it uses more layers. This adaptability is perfect for the variable difficulty of the UAV flight path.19
|
||||
|
||||
## **5. Layer 2: Global Place Recognition (The "Kidnapped Robot" Solver)**
|
||||
|
||||
When the UAV executes a sharp turn, resulting in a completely new view (0% overlap), sequential tracking (Layer 1) is mathematically impossible. The system must recognize the new terrain solely based on its appearance. This is the domain of **AnyLoc**.
|
||||
|
||||
### **5.1 Universal Place Recognition with Foundation Models**
|
||||
|
||||
**AnyLoc** leverages **DINOv2**, a massive self-supervised vision transformer developed by Meta. DINOv2 is unique because it is not trained with labels; it is trained to understand the geometry and semantic layout of images.
|
||||
|
||||
* **Why DINOv2 for Satellite Matching:** Satellite images and UAV images have different "domains." The satellite image might be from summer (green), while the UAV flies in autumn (brown). DINOv2 features are remarkably invariant to these texture changes. It "sees" the shape of the road network or the layout of the field boundaries, rather than the color of the leaves.2
|
||||
* **VLAD Aggregation:** AnyLoc extracts dense features from the image using DINOv2 and aggregates them using **VLAD** (Vector of Locally Aggregated Descriptors) into a single, compact vector (e.g., 4096 dimensions). This vector represents the "fingerprint" of the location.21
|
||||
|
||||
### **5.2 Implementation Strategy**
|
||||
|
||||
1. **Database Preparation:** Before the mission, the system downloads the satellite imagery for the operational bounding box (Eastern/Southern Ukraine). These images are tiled (e.g., 512x512 pixels with overlap) and processed through AnyLoc to generate a database of descriptors.
|
||||
2. **Faiss Indexing:** These descriptors are indexed using **Faiss**, a library for efficient similarity search.
|
||||
3. **In-Flight Retrieval:** When Layer 1 reports a loss of tracking (or periodically), the current UAV image is processed by AnyLoc. The resulting vector is queried against the Faiss index.
|
||||
4. **Result:** The system retrieves the top-5 most similar satellite tiles. These tiles represent the coarse global location of the UAV (e.g., "You are in Grid Square B7").2
|
||||
|
||||
## **6. Layer 3: Fine-Grained Metric Localization (LiteSAM)**
|
||||
|
||||
Retrieving the correct satellite tile (Layer 2) gives a location error of roughly the tile size (e.g., 200 meters). To meet the "60% < 20m" and "80% < 50m" criteria, the system must precisely align the UAV image onto the satellite tile. ASTRAL-Next utilizes **LiteSAM**.
|
||||
|
||||
### **6.1 Justification for LiteSAM over TransFG**
|
||||
|
||||
While **TransFG** (Transformer for Fine-Grained recognition) is a powerful architecture for cross-view geo-localization, it is computationally heavy.23 **LiteSAM** (Lightweight Satellite-Aerial Matching) is specifically architected for resource-constrained platforms (like UAV onboard computers or efficient ground stations) while maintaining state-of-the-art accuracy.
|
||||
|
||||
* **Architecture:** LiteSAM utilizes a **Token Aggregation-Interaction Transformer (TAIFormer)**. It employs a convolutional token mixer (CTM) to model correlations between the UAV and satellite images.
|
||||
* **Multi-Scale Processing:** LiteSAM processes features at multiple scales. This is critical because the UAV altitude varies (<1km), meaning the scale of objects in the UAV image will not perfectly match the fixed scale of the satellite image (Google Maps Zoom Level 19). LiteSAM's multi-scale approach inherently handles this discrepancy.1
|
||||
* **Performance Data:** Empirical benchmarks on the **UAV-VisLoc** dataset show LiteSAM achieving an RMSE@30 (Root Mean Square Error within 30 meters) of 17.86 meters, directly supporting the project's accuracy requirements. Its inference time is approximately 61.98ms on standard GPUs, ensuring it fits within the overall 5-second budget.1
|
||||
|
||||
### **6.2 The Alignment Process**
|
||||
|
||||
1. **Input:** The UAV Image and the Top-1 Satellite Tile from Layer 2.
|
||||
2. **Processing:** LiteSAM computes the dense correspondence field between the two images.
|
||||
3. **Homography Estimation:** Using the correspondences, the system computes a homography matrix $H$ that maps pixels in the UAV image to pixels in the georeferenced satellite tile.
|
||||
4. **Pose Extraction:** The camera's absolute GPS position is derived from this homography, utilizing the known GSD of the satellite tile.18
|
||||
|
||||
## **7. Satellite Data Management and Coordinate Systems**
|
||||
|
||||
The reliability of the entire system hinges on the quality and handling of the reference map data. The restriction to "Google Maps" necessitates a rigorous approach to coordinate transformation and data freshness management.
|
||||
|
||||
### **7.1 Google Maps Static API and Mercator Projection**
|
||||
|
||||
The Google Maps Static API delivers images without embedded georeferencing metadata (GeoTIFF tags). The system must mathematically derive the bounding box of each downloaded tile to assign coordinates to the pixels. Google Maps uses the **Web Mercator Projection (EPSG:3857)**.
|
||||
|
||||
The system must implement the following derivation to establish the **Ground Sampling Distance (GSD)**, or meters_per_pixel, which varies significantly with latitude:
|
||||
|
||||
$$ \\text{meters_per_pixel} = 156543.03392 \\times \\frac{\\cos(\\text{latitude} \\times \\frac{\\pi}{180})}{2^{\\text{zoom}}} $$
|
||||
|
||||
For the operational region (Ukraine, approx. Latitude 48N):
|
||||
|
||||
* At **Zoom Level 19**, the resolution is approximately 0.30 meters/pixel. This resolution is compatible with the input UAV imagery (Full HD at <1km altitude), providing sufficient detail for the LiteSAM matcher.24
|
||||
|
||||
**Bounding Box Calculation Algorithm:**
|
||||
|
||||
1. **Input:** Center Coordinate $(lat, lon)$, Zoom Level ($z$), Image Size $(w, h)$.
|
||||
2. **Project to World Coordinates:** Convert $(lat, lon)$ to world pixel coordinates $(px, py)$ at the given zoom level.
|
||||
3. **Corner Calculation:**
|
||||
* px_{NW} = px - (w / 2)
|
||||
* py_{NW} = py - (h / 2)
|
||||
4. Inverse Projection: Convert $(px_{NW}, py_{NW})$ back to Latitude/Longitude to get the North-West corner. Repeat for South-East.
|
||||
This calculation is critical. A precision error here translates directly to a systematic bias in the final GPS output.
|
||||
|
||||
### **7.2 Mitigating Data Obsolescence (The 2025 Problem)**
|
||||
|
||||
The provided research highlights that satellite imagery access over Ukraine is subject to restrictions and delays (e.g., Maxar restrictions in 2025).10 Google Maps data may be several years old.
|
||||
|
||||
* **Semantic Anchoring:** This reinforces the selection of **AnyLoc** (Layer 2) and **LiteSAM** (Layer 3). These algorithms are trained to ignore transient features (cars, temporary structures, vegetation color) and focus on persistent structural features (road geometry, building footprints).
|
||||
* **Seasonality:** Research indicates that DINOv2 features (used in AnyLoc) exhibit strong robustness to seasonal changes (e.g., winter satellite map vs. summer UAV flight), maintaining high retrieval recall where pixel-based methods fail.17
|
||||
|
||||
## **8. Optimization and State Estimation (The "Brain")**
|
||||
|
||||
The individual outputs of the visual layers are noisy. Layer 1 drifts over time; Layer 3 may have occasional outliers. The **Factor Graph Optimization** fuses these inputs into a coherent trajectory.
|
||||
|
||||
### **8.1 Handling the 350-Meter Outlier (Tilt)**
|
||||
|
||||
The prompt specifies that "up to 350 meters of an outlier... could happen due to tilt." This large displacement masquerading as translation is a classic source of divergence in Kalman Filters.
|
||||
|
||||
* **Robust Cost Functions:** In the Factor Graph, the error terms for the visual factors are wrapped in a **Robust Kernel** (specifically the **Cauchy** or **Huber** kernel).
|
||||
* *Mechanism:* Standard least-squares optimization penalizes errors quadratically ($e^2$). If a 350m error occurs, the penalty is massive, dragging the entire trajectory off-course. A robust kernel changes the penalty to be linear ($|e|$) or logarithmic after a certain threshold. This allows the optimizer to effectively "ignore" or down-weight the 350m jump if it contradicts the consensus of other measurements, treating it as a momentary outlier or solving for it as a rotation rather than a translation.19
|
||||
|
||||
### **8.2 The Altitude Soft Constraint**
|
||||
|
||||
To resolve the monocular scale ambiguity without IMU, the altitude ($h_{prior}$) is added as a **Unary Factor** to the graph.
|
||||
|
||||
* $E_{alt} = |
|
||||
|
||||
| z_{est} \- h_{prior} ||*{\\Sigma*{alt}}$
|
||||
|
||||
* $\\Sigma_{alt}$ (covariance) is set relatively high (soft constraint), allowing the visual odometry to adjust the altitude slightly to maintain consistency, but preventing the scale from collapsing to zero or exploding to infinity. This effectively creates an **Altimeter-Aided Monocular VIO** system, where the altimeter (virtual or barometric) replaces the accelerometer for scale determination.5
|
||||
|
||||
## **9. Implementation Specifications**
|
||||
|
||||
### **9.1 Hardware Acceleration (TensorRT)**
|
||||
|
||||
Meeting the <5 second per frame requirement on an RTX 2060 requires optimizing the deep learning models. Python/PyTorch inference is typically too slow due to overhead.
|
||||
|
||||
* **Model Export:** All core models (SuperPoint, LightGlue, LiteSAM) must be exported to **ONNX** (Open Neural Network Exchange) format.
|
||||
* **TensorRT Compilation:** The ONNX models are then compiled into **TensorRT Engines**. This process performs graph fusion (combining multiple layers into one) and kernel auto-tuning (selecting the fastest GPU instructions for the specific RTX 2060/3070 architecture).26
|
||||
* **Precision:** The models should be quantized to **FP16** (16-bit floating point). Research shows that FP16 inference on NVIDIA RTX cards offers a 2x-3x speedup with negligible loss in matching accuracy for these specific networks.16
|
||||
|
||||
### **9.2 Background Service Architecture (ZeroMQ)**
|
||||
|
||||
The system is encapsulated as a headless service.
|
||||
|
||||
**ZeroMQ Topology:**
|
||||
|
||||
* **Socket 1 (REP - Port 5555):** Command Interface. Accepts JSON messages:
|
||||
* {"cmd": "START", "config": {"lat": 48.1, "lon": 37.5}}
|
||||
* {"cmd": "USER_FIX", "lat": 48.22, "lon": 37.66} (Human-in-the-loop input).
|
||||
* **Socket 2 (PUB - Port 5556):** Data Stream. Publishes JSON results for every frame:
|
||||
* {"frame_id": 1024, "gps": [48.123, 37.123], "object_centers": [...], "status": "LOCKED", "confidence": 0.98}.
|
||||
|
||||
Asynchronous Pipeline:
|
||||
The system utilizes a Python multiprocessing architecture. One process handles the camera/image ingest and ZeroMQ communication. A second process hosts the TensorRT engines and runs the Factor Graph. This ensures that the heavy computation of Bundle Adjustment does not block the receipt of new images or user commands.
|
||||
|
||||
## **10. Human-in-the-Loop Strategy**
|
||||
|
||||
The requirement stipulates that for the "20% of the route" where automation fails, the user must intervene. The system must proactively detect its own failure.
|
||||
|
||||
### **10.1 Failure Detection with PDM@K**
|
||||
|
||||
The system monitors the **PDM@K** (Positioning Distance Measurement) metric continuously.
|
||||
|
||||
* **Definition:** PDM@K measures the percentage of queries localized within $K$ meters.3
|
||||
* **Real-Time Proxy:** In flight, we cannot know the true PDM (as we don't have ground truth). Instead, we use the **Marginal Covariance** from the Factor Graph. If the uncertainty ellipse for the current position grows larger than a radius of 50 meters, or if the **Image Registration Rate** (percentage of inliers in LightGlue/LiteSAM) drops below 10% for 3 consecutive frames, the system triggers a **Critical Failure Mode**.19
|
||||
|
||||
### **10.2 The User Interaction Workflow**
|
||||
|
||||
1. **Trigger:** Critical Failure Mode activated.
|
||||
2. **Action:** The Service publishes a status {"status": "REQ_INPUT"} via ZeroMQ.
|
||||
3. **Data Payload:** It sends the current UAV image and the top-3 retrieved satellite tiles (from Layer 2) to the client UI.
|
||||
4. **User Input:** The user clicks a distinctive feature (e.g., a specific crossroad) in the UAV image and the corresponding point on the satellite map.
|
||||
5. **Recovery:** This pair of points is treated as a **Hard Constraint** in the Factor Graph. The optimizer immediately snaps the trajectory to this user-defined anchor, resetting the covariance and effectively "healing" the localized track.19
|
||||
|
||||
## **11. Performance Evaluation and Benchmarks**
|
||||
|
||||
### **11.1 Accuracy Validation**
|
||||
|
||||
Based on the reported performance of the selected components in relevant datasets (UAV-VisLoc, AnyVisLoc):
|
||||
|
||||
* **LiteSAM** demonstrates an accuracy of 17.86m (RMSE) for cross-view matching. This aligns with the requirement that 60% of photos be within 20m error.18
|
||||
* **AnyLoc** achieves high recall rates (Top-1 Recall > 85% on aerial benchmarks), supporting the recovery from sharp turns.2
|
||||
* **Factor Graph Fusion:** By combining sequential and global measurements, the overall system error is expected to be lower than the individual component errors, satisfying the "80% within 50m" criterion.
|
||||
|
||||
### **11.2 Latency Analysis**
|
||||
|
||||
The breakdown of processing time per frame on an RTX 3070 is estimated as follows:
|
||||
|
||||
* **SuperPoint + LightGlue:** \~50ms.1
|
||||
* **AnyLoc (Global Retrieval):** \~150ms (run only on keyframes or tracking loss).
|
||||
* **LiteSAM (Metric Refinement):** \~60ms.1
|
||||
* **Factor Graph Optimization:** \~100ms (using incremental updates/iSAM2).
|
||||
* Total: \~360ms per frame (worst case with all layers active).
|
||||
This is an order of magnitude faster than the 5-second limit, providing ample headroom for higher resolution processing or background tasks.
|
||||
|
||||
## **12.0 ASTRAL-Next Validation Plan and Acceptance Criteria Matrix**
|
||||
|
||||
A comprehensive test plan is required to validate compliance with all 10 Acceptance Criteria. The foundation is a **Ground-Truth Test Harness** using project-provided ground-truth data.
|
||||
|
||||
### **Table 4: ASTRAL Component vs. Acceptance Criteria Compliance Matrix**
|
||||
|
||||
| ID | Requirement | ASTRAL Solution (Component) | Key Technology / Justification |
|
||||
| :---- | :---- | :---- | :---- |
|
||||
| **AC-1** | 80% of photos < 50m error | GDB (C-1) + GAB (C-5) + TOH (C-6) | **Tier-1 (Copernicus)** data 1 is sufficient. SOTA VPR 8 + Sim(3) graph 13 can achieve this. |
|
||||
| **AC-2** | 60% of photos < 20m error | GDB (C-1) + GAB (C-5) + TOH (C-6) | **Requires Tier-2 (Commercial) Data**.4 Mitigates reference error.3 **Per-Keyframe Scale** 15 model in TOH minimizes drift error. |
|
||||
| **AC-3** | Robust to 350m outlier | V-SLAM (C-3) + TOH (C-6) | **Stage 2 Failure Logic** (7.3) discards the frame. **Robust M-Estimation** (6.3) in Ceres 14 automatically rejects the constraint. |
|
||||
| **AC-4** | Robust to sharp turns (<5% overlap) | V-SLAM (C-3) + TOH (C-6) | **"Atlas" Multi-Map** (4.2) initializes new map (Map_Fragment_k+1). **Geodetic Map-Merging** (6.4) in TOH re-connects fragments via GAB anchors. |
|
||||
| **AC-5** | < 10% outlier anchors | TOH (C-6) | **Robust M-Estimation (Huber Loss)** (6.3) in Ceres 14 automatically down-weights and ignores high-residual (bad) GAB anchors. |
|
||||
| **AC-6** | Connect route chunks; User input | V-SLAM (C-3) + TOH (C-6) + UI | **Geodetic Map-Merging** (6.4) connects chunks. **Stage 5 Failure Logic** (7.3) provides the user-input-as-prior mechanism. |
|
||||
| **AC-7** | < 5 seconds processing/image | All Components | **Multi-Scale Pipeline** (5.3) (Low-Res V-SLAM, Hi-Res GAB patches). **Mandatory TensorRT Acceleration** (7.1) for 2-4x speedup.35 |
|
||||
| **AC-8** | Real-time stream + async refinement | TOH (C-5) + Outputs (C-2.4) | Decoupled architecture provides Pose_N_Est (V-SLAM) in real-time and Pose_N_Refined (TOH) asynchronously as GAB anchors arrive. |
|
||||
| **AC-9** | Image Registration Rate > 95% | V-SLAM (C-3) | **"Atlas" Multi-Map** (4.2). A "lost track" (AC-4) is *not* a registration failure; it's a *new map registration*. This ensures the rate > 95%. |
|
||||
| **AC-10** | Mean Reprojection Error (MRE) < 1.0px | V-SLAM (C-3) + TOH (C-6) | Local BA (4.3) + Global BA (TOH14) + **Per-Keyframe Scale** (6.2) minimizes internal graph tension (Flaw 1.3), allowing the optimizer to converge to a low MRE. |
|
||||
|
||||
### **8.1 Rigorous Validation Methodology**
|
||||
|
||||
* **Test Harness:** A validation script will be created to compare the system's Pose_N^{Refined} output against a ground-truth coordinates.csv file, computing Haversine distance errors.
|
||||
* **Test Datasets:**
|
||||
* Test_Baseline: Standard flight.
|
||||
* Test_Outlier_350m (AC-3): A single, unrelated image inserted.
|
||||
* Test_Sharp_Turn_5pct (AC-4): A sequence with a 10-frame gap.
|
||||
* Test_Long_Route (AC-9, AC-7): A 2000-image sequence.
|
||||
* **Test Cases:**
|
||||
* Test_Accuracy: Run Test_Baseline. ASSERT (count(errors < 50m) / total) >= 0.80 (AC-1). ASSERT (count(errors < 20m) / total) >= 0.60 (AC-2).
|
||||
* Test_Robustness: Run Test_Outlier_350m and Test_Sharp_Turn_5pct. ASSERT system completes the run and Test_Accuracy assertions still pass on the valid frames.
|
||||
* Test_Performance: Run Test_Long_Route on min-spec RTX 2060. ASSERT average_time(Pose_N^{Est} output) < 5.0s (AC-7).
|
||||
* Test_MRE: ASSERT TOH.final_MRE < 1.0 (AC-10).
|
||||
Reference in New Issue
Block a user