mirror of
https://github.com/azaion/gps-denied-desktop.git
synced 2026-04-22 11:16:36 +00:00
remove the current solution, add skills
@@ -1,64 +0,0 @@
Research this problem:

We have a lot of images taken from a wing-type UAV using a camera with at least Full HD resolution. The resolution of each photo can be up to 6200*4100 for a whole flight, but for other flights it could be Full HD.

Photos are taken and named consecutively, within 100 meters of each other.

We know only the starting GPS coordinates. We need to determine the GPS coordinates of the center of each image, and also the coordinates of the center of any object in these photos. We can use an external satellite provider for ground checks on the existing photos.

The system should process the data samples in the attached files (if any). They are for reference only.

We have the following restrictions:

- Photos are taken only by airplane-type UAVs.
- Photos are taken by a camera pointing downwards and fixed, but it is not auto-stabilized.
- The flying range is restricted to the eastern and southern parts of Ukraine (to the left of the Dnipro River).
- The image resolution can range from Full HD to 6252*4168.
- Altitude is predefined and no more than 1 km.
- There is NO data from an IMU.
- Flights are done mostly in sunny weather.
- We can use satellite providers, but for now we are limited to Google Maps, which may be outdated for some regions.
- The number of photos can be up to 3000, usually in the 500-1500 range.
- During the flight, UAVs can make sharp turns, so the next photo may be completely different from the previous one (no shared objects), but this is the exception rather than the rule.
- Processing is done on a stationary computer or laptop with an NVIDIA GPU, at least an RTX 2060, preferably an RTX 3070. (For the on-UAV solution a Jetson Orin Nano would be used, but that is out of scope.)

The output of our system should meet these acceptance criteria:

- The system should determine the GPS of the centers of 80% of the photos from the flight within an error of no more than 50 meters compared to the real GPS.
- The system should determine the GPS of the centers of 60% of the photos from the flight within an error of no more than 20 meters compared to the real GPS.
- The system should correctly continue working even in the presence of an outlier photo displaced by up to 350 meters between two consecutive pictures en route. This can happen due to tilt of the plane.
- The system should correctly continue working even during sharp turns, where the next photo does not overlap at all, or overlaps by less than 5%. The next photo will be within 150 m of drift and at an angle of less than 50%.
- The number of outliers during the satellite-provider ground check should be less than 10%.
- If the system is absolutely incapable of determining the GPS of the next, second-next, and third-next images by any means (the 20% of the route not otherwise localized), it should ask the user for input for the next image, so that the user can specify the location.
- Less than 5 seconds to process one image.
- Results of image processing should appear to the user immediately, so the user does not have to wait for the whole route to complete before analyzing the first results. The system may also refine previously calculated results and send the refined results to the user again.
- Image Registration Rate > 95%: the system can find enough matching features to confidently calculate the camera's 6-DoF pose (position and orientation) and "stitch" that image into the final trajectory.
- Mean Reprojection Error (MRE) < 1.0 pixels: the distance, in pixels, between an object's original pixel location and its re-projected pixel location.

Find all the state-of-the-art solutions for this problem and produce the resulting solution draft in the following format:

- Short product solution description. Brief component interaction diagram.
- Architecture approach that meets the restrictions and acceptance criteria. For each component, analyze the best possible approaches and form a table comprising all of them. Each approach is a row with the following columns:
  - Tools (library, platform) used to solve the component's tasks
  - Advantages of this approach
  - Limitations of this approach
  - Requirements of this approach
  - How it fits the problem component to be solved, and the whole solution
- Testing strategy. Research the best approaches to cover all the acceptance criteria. Form a list of integration functional tests and non-functional tests.

Be concise in your formulations. The fewer words, the better, but do not omit any important details.
@@ -1,329 +0,0 @@
Read carefully about the problem:

We have a lot of images taken from a wing-type UAV using a camera with at least Full HD resolution. The resolution of each photo can be up to 6200*4100 for a whole flight, but for other flights it could be Full HD.

Photos are taken and named consecutively, within 100 meters of each other.

We know only the starting GPS coordinates. We need to determine the GPS coordinates of the center of each image, and also the coordinates of the center of any object in these photos. We can use an external satellite provider for ground checks on the existing photos.

The system has the following restrictions and conditions:

- Photos are taken only by airplane-type UAVs.
- Photos are taken by a camera pointing downwards and fixed, but it is not auto-stabilized.
- The flying range is restricted to the eastern and southern parts of Ukraine (to the left of the Dnipro River).
- The image resolution can range from Full HD to 6252*4168. Camera parameters are known: focal length, sensor width, resolution, and so on.
- Altitude is predefined and no more than 1 km. The height of the terrain can be neglected.
- There is NO data from an IMU.
- Flights are done mostly in sunny weather.
- We can use satellite providers, but for now we are limited to Google Maps, which may be outdated for some regions.
- The number of photos can be up to 3000, usually in the 500-1500 range.
- During the flight, UAVs can make sharp turns, so the next photo may be completely different from the previous one (no shared objects), but this is the exception rather than the rule.
- Processing is done on a stationary computer or laptop with an NVIDIA GPU, at least an RTX 2060, preferably an RTX 3070. (For the on-UAV solution a Jetson Orin Nano would be used, but that is out of scope.)

The output of the system should address the following acceptance criteria:

- The system should determine the GPS of the centers of 80% of the photos from the flight within an error of no more than 50 meters compared to the real GPS.
- The system should determine the GPS of the centers of 60% of the photos from the flight within an error of no more than 20 meters compared to the real GPS.
- The system should correctly continue working even in the presence of an outlier photo displaced by up to 350 meters between two consecutive pictures en route. This can happen due to tilt of the plane.
- The system should correctly continue working even during sharp turns, where the next photo does not overlap at all, or overlaps by less than 5%. The next photo will be within 150 m of drift and at an angle of less than 50%.
- The number of outliers during the satellite-provider ground check should be less than 10%.
- If the system is absolutely incapable of determining the GPS of the next, second-next, and third-next images by any means (the 20% of the route not otherwise localized), it should ask the user for input for the next image, so that the user can specify the location.
- Less than 5 seconds to process one image.
- Results of image processing should appear to the user immediately, so the user does not have to wait for the whole route to complete before analyzing the first results. The system may also refine previously calculated results and send the refined results to the user again.
- Image Registration Rate > 95%: the system can find enough matching features to confidently calculate the camera's 6-DoF pose (position and orientation) and "stitch" that image into the final trajectory.
- Mean Reprojection Error (MRE) < 1.0 pixels: the distance, in pixels, between an object's original pixel location and its re-projected pixel location.

Here is a solution draft:
# **Geo-Referenced Trajectory and Object Localization System (GEORTOLS): A Hybrid SLAM Architecture**

## **1. Executive Summary**

This report outlines the technical design for a robust, real-time geolocalization system. The objective is to determine the precise GPS coordinates for a sequence of high-resolution images (up to 6252x4168) captured by a fixed-wing, non-stabilized Unmanned Aerial Vehicle (UAV) [User Query]. The system must operate under severe constraints, including the absence of any IMU data, a predefined altitude of no more than 1km, and knowledge of only the starting GPS coordinate [User Query]. The system is required to handle significant in-flight challenges, such as sharp turns with minimal image overlap (<5%), frame-to-frame outliers of up to 350 meters, and operation over low-texture terrain as seen in the provided sample images [User Query, Image 1, Image 7].

The proposed solution is a **Hybrid Visual-Geolocalization SLAM (VG-SLAM)** architecture. This system is designed to meet the demanding acceptance criteria, including a sub-5-second initial processing time per image, streaming output with asynchronous refinement, and high-accuracy GPS localization (60% of photos within 20m error, 80% within 50m error) [User Query].

This hybrid architecture is necessitated by the problem's core constraints. The lack of an IMU makes a purely monocular Visual Odometry (VO) system susceptible to catastrophic scale drift.1 Therefore, the system integrates two cooperative sub-systems:

1. A **Visual Odometry (VO) Front-End:** This component uses state-of-the-art deep-learning feature matchers (SuperPoint + SuperGlue/LightGlue) to provide fast, real-time *relative* pose estimates. This approach is selected for its proven robustness in low-texture environments where traditional features fail.4 This component delivers the initial, sub-5-second pose estimate.
2. A **Cross-View Geolocalization (CVGL) Module:** This component provides *absolute*, drift-free GPS pose estimates by matching UAV images against the available satellite provider (Google Maps).7 It functions as the system's "global loop closure" mechanism, correcting the VO's scale drift and, critically, relocalizing the UAV after tracking is lost during sharp turns or outlier frames [User Query].

These two systems run in parallel. A **Back-End Pose-Graph Optimizer** fuses their respective measurements—high-frequency relative poses from VO and high-confidence absolute poses from CVGL—into a single, globally consistent, and incrementally refined trajectory. This architecture directly satisfies the requirements for immediate, streaming results and subsequent asynchronous refinement [User Query].
## **2. Product Solution Description and Component Interaction**

### **Product Solution Description**

The proposed system, "Geo-Referenced Trajectory and Object Localization System (GEORTOLS)," is a real-time, streaming-capable software solution. It is designed for deployment on a stationary computer or laptop equipped with an NVIDIA GPU (RTX 2060 or better) [User Query].

* **Inputs:**
  1. A sequence of consecutively named monocular images (FullHD to 6252x4168).
  2. The absolute GPS coordinate (Latitude, Longitude) of the *first* image in the sequence.
  3. A pre-calibrated camera intrinsic matrix.
  4. Access to the Google Maps satellite imagery API.
* **Outputs:**
  1. A real-time, streaming feed of estimated GPS coordinates (Latitude, Longitude, Altitude) and 6-DoF poses (including Roll, Pitch, Yaw) for the center of each image.
  2. Asynchronous refinement messages for previously computed poses as the back-end optimizer improves the global trajectory.
  3. A service to provide the absolute GPS coordinate for any user-selected pixel coordinate (u,v) within any geolocated image.
### **Component Interaction Diagram**

The system is architected as four asynchronous, parallel-processing components to meet the stringent real-time and refinement requirements.

1. **Image Ingestion & Pre-processing:** This module acts as the entry point. It receives the new, high-resolution image (Image N). It immediately creates scaled-down, lower-resolution (e.g., 1024x768) copies of the image for real-time processing by the VO and CVGL modules, while retaining the full-resolution original for object-level GPS lookups.
2. **Visual Odometry (VO) Front-End:** This module's sole task is high-speed, frame-to-frame relative pose estimation. It maintains a short-term "sliding window" of features, matching Image N to Image N-1. It uses GPU-accelerated deep-learning models (SuperPoint + SuperGlue) to find feature matches and calculates the 6-DoF relative transform. This result is immediately sent to the Back-End.
3. **Cross-View Geolocalization (CVGL) Module:** This is a heavier, slower, asynchronous module. It takes the pre-processed Image N and queries the Google Maps database to find an *absolute* GPS pose. This involves a two-stage retrieval-and-match process. When a high-confidence match is found, its absolute pose is sent to the Back-End as a "global-pose constraint."
4. **Trajectory Optimization Back-End:** This is the system's central "brain," managing the complete pose graph.10 It receives two types of data:
   * *High-frequency, low-confidence relative poses* from the VO Front-End.
   * *Low-frequency, high-confidence absolute poses* from the CVGL Module.

   It continuously fuses these constraints in a pose-graph optimization framework (e.g., g2o or Ceres Solver). When the VO Front-End provides a new relative pose, it is quickly added to the graph to produce the "Initial Pose" (<5s). When the CVGL Module provides a new absolute pose, it triggers a more comprehensive re-optimization of the entire graph, correcting drift and broadcasting "Refined Poses" to the user.11
## **3. Core Architectural Framework: Hybrid Visual-Geolocalization SLAM (VG-SLAM)**

### **Rationale for the Hybrid Approach**

The core constraints of this problem—monocular, IMU-less flight over potentially long distances (up to 3000 images at ~100 m intervals equates to a 300 km flight) [User Query]—render simple solutions unviable.

A **VO-Only** system is guaranteed to fail. Monocular Visual Odometry (and SLAM) suffers from an inherent, unobservable ambiguity: the *scale* of the world.1 Because there is no IMU to provide an accelerometer-based scale reference or a gravity vector 12, the system has no way to know if it moved 1 meter or 10 meters. This leads to compounding scale drift, where the entire trajectory will grow or shrink over time.3 Over a 300 km flight, the resulting positional error would be measured in kilometers, not the 20-50 meters required [User Query].

A **CVGL-Only** system is also unviable. Cross-View Geolocalization (CVGL) matches the UAV image to a satellite map to find an absolute pose.7 While this is drift-free, it is a large-scale image retrieval problem. Querying the entire map of Ukraine for a match for every single frame is computationally impossible within the <5 second time limit.13 Furthermore, this approach is brittle; if the Google Maps data is outdated (a specific user restriction) [User Query], the CVGL match will fail, and the system would have no pose estimate at all.

Therefore, the **Hybrid VG-SLAM** architecture is the only robust solution.

* The **VO Front-End** provides the fast, high-frequency relative motion. It works even if the satellite map is outdated, as it tracks features in the *real*, current world.
* The **CVGL Module** acts as the *only* mechanism for scale correction and absolute georeferencing. It provides periodic, drift-free "anchors" to the real-world GPS coordinates.
* The **Back-End Optimizer** fuses these two data streams. The CVGL poses function as "global loop closures" in the SLAM pose graph. They correct the scale drift accumulated by the VO and, critically, serve to relocalize the system after a "kidnapping" event, such as the specified sharp turns or 350m outliers [User Query].

### **Data Flow for Streaming and Refinement**

This architecture is explicitly designed to meet the <5s initial output and asynchronous refinement criteria [User Query]. The data flow for a single image (Image N) is as follows:
* **T = 0.0s:** Image N (6200x4100) is received by the **Ingestion Module**.
* **T = 0.2s:** Image N is pre-processed (scaled to 1024px) and passed to the VO and CVGL modules.
* **T = 1.0s:** The **VO Front-End** completes GPU-accelerated matching (SuperPoint+SuperGlue) of Image N -> Image N-1. It computes the Relative_Pose(N-1 -> N).
* **T = 1.1s:** The **Back-End Optimizer** receives this Relative_Pose and appends it to the graph relative to the last known pose of N-1.
* **T = 1.2s:** The Back-End broadcasts the **Initial Pose_N_Est** to the user interface. (**<5s criterion met**).
* **(Parallel Thread) T = 1.5s:** The **CVGL Module** (on a separate thread) begins its two-stage search for Image N against the Google Maps database.
* **(Parallel Thread) T = 6.0s:** The CVGL Module successfully finds a high-confidence Absolute_Pose_N_Abs from the satellite match.
* **T = 6.1s:** The **Back-End Optimizer** receives this new, high-confidence absolute constraint for Image N.
* **T = 6.2s:** The Back-End triggers a graph re-optimization. This new "anchor" corrects any scale or positional drift for Image N and all surrounding poses in the graph.
* **T = 6.3s:** The Back-End broadcasts a **Pose_N_Refined** (and Pose_N-1_Refined, Pose_N-2_Refined, etc.) to the user interface. (**Refinement criterion met**).
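This fast-path/slow-path split can be pictured as a small producer/consumer layout. The following is a schematic sketch with Python threads and queues; `downscale`, `match_vo`, `match_satellite`, `publish`, and the `backend` object are placeholders for the components described above, not real APIs.

```python
import queue
import threading

vo_in = queue.Queue()    # fast path: frame-to-frame VO
cvgl_in = queue.Queue()  # slow path: asynchronous satellite anchoring

def ingest(image_stream):
    """Entry point: fan each frame out to both paths."""
    for n, img in enumerate(image_stream):
        small = downscale(img, 1024)   # placeholder pre-processing
        vo_in.put((n, small))
        cvgl_in.put((n, small))

def vo_worker(backend):
    """High-frequency thread: relative poses, streamed immediately."""
    prev = None
    while True:
        n, img = vo_in.get()
        if prev is not None:
            backend.add_relative_pose(n, match_vo(prev, img))
            publish(backend.initial_pose(n))     # Initial Pose_N_Est (<5s)
        prev = img

def cvgl_worker(backend):
    """Low-frequency thread: absolute anchors trigger re-optimization."""
    while True:
        n, img = cvgl_in.get()
        anchor = match_satellite(img)            # retrieval + fine matching
        if anchor is not None:
            backend.add_absolute_pose(n, anchor)
            publish(backend.refined_poses())     # Pose_N_Refined broadcast

def run(backend, image_stream):
    threading.Thread(target=vo_worker, args=(backend,), daemon=True).start()
    threading.Thread(target=cvgl_worker, args=(backend,), daemon=True).start()
    ingest(image_stream)
```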
## **4. Component Analysis: Front-End (Visual Odometry and Relocalization)**

The task of the VO Front-End is to rapidly and robustly estimate the 6-DoF relative motion between consecutive frames. This component's success is paramount for the high-frequency tracking required to meet the <5s criterion.

The primary challenge is the nature of the imagery. The specified operational area and sample images (e.g., Image 1, Image 7) show vast, low-texture agricultural fields [User Query]. These environments are a known failure case for traditional, gradient-based feature extractors like SIFT or ORB, which rely on high-gradient corners and cannot find stable features in "weak texture areas".5 Furthermore, the non-stabilized camera [User Query] will introduce significant rotational motion and viewpoint change, breaking the assumptions of many simple trackers.16

Deep-learning (DL) based feature extractors and matchers have been developed specifically to overcome these "challenging visual conditions".5 Models like SuperPoint, SuperGlue, and LoFTR are trained to find more robust and repeatable features, even in low-texture scenes.4

### **Table 1: Analysis of State-of-the-Art Feature Extraction and Matching Techniques**
| Approach (Tools/Library) | Advantages | Limitations | Requirements | Fitness for Problem Component |
| :---- | :---- | :---- | :---- | :---- |
| **SIFT + BFMatcher/FLANN** (OpenCV) | - Scale and rotation invariant. - High-quality, robust matches. - Well-studied and mature.15 | - Computationally slow (CPU-based). - Poor performance in low-texture or weakly-textured areas.14 - Patented (though the patent has expired). | - High-contrast, well-defined features. | **Poor.** Too slow for the <5s target and will fail to find features in the low-texture agricultural landscapes shown in the sample images. |
| **ORB + BFMatcher** (OpenCV) | - Extremely fast and lightweight. - Standard for real-time SLAM (e.g., ORB-SLAM).21 - Rotation invariant. | - Only approximately scale invariant (via an image pyramid). - Performs very poorly in low-texture scenes.5 - Unstable in high-blur scenarios. | - CPU, lightweight. - High-gradient corners. | **Very Poor.** While fast, it fails the *robustness* requirement. It is designed for textured indoor/urban scenes, not sparse natural terrain. |
| **SuperPoint + SuperGlue** (PyTorch, C++/TensorRT) | - SOTA robustness in low-texture, high-blur, and challenging conditions.4 - End-to-end learning for detection and matching.24 - Multiple open-source SLAM integrations exist (e.g., SuperSLAM).25 | - Requires a powerful GPU for real-time performance. - Sparse feature-based (not dense). | - NVIDIA GPU (RTX 2060+). - PyTorch (research) or TensorRT (deployment).26 | **Excellent.** This approach is *designed* for the exact "challenging conditions" of this problem, providing SOTA robustness in low-texture scenes.4 The user's hardware (RTX 2060+) meets the requirements. |
| **LoFTR** (PyTorch) | - Detector-free dense matching.14 - Extremely robust to viewpoint and texture challenges.14 - Excellent performance on natural terrain and low-overlap images.19 | - High computational and VRAM cost. - Can cause CUDA out-of-memory (OOM) errors on very high-resolution images.30 - Slower than sparse-feature methods. | - High-end NVIDIA GPU. - PyTorch. | **Good, but Risky.** While its robustness is excellent, its dense, Transformer-based nature makes it vulnerable to OOM errors on the 6252x4168 images.30 The sparse SuperPoint approach is a safer, more scalable choice for the VO front-end. |
### **Selected Approach (VO Front-End): SuperPoint + SuperGlue/LightGlue**

The selected approach is a VO front-end based on **SuperPoint** for feature extraction and **SuperGlue** (or its faster successor, **LightGlue**) for matching.18

* **Robustness:** This combination is proven to provide superior robustness and accuracy in sparse-texture scenes, extracting more and higher-quality matches than ORB.4
* **Performance:** It is designed for GPU acceleration and is used in SOTA real-time SLAM systems, demonstrating its feasibility within the <5s target on an RTX 2060.25
* **Scalability:** As a sparse-feature method, it avoids the memory-scaling issues of dense matchers like LoFTR when faced with the user's maximum 6252x4168 resolution.30 The image can be downscaled for real-time VO, and SuperPoint will still find stable features.
## **5. Component Analysis: Back-End (Trajectory Optimization and Refinement)**

The task of the Back-End is to fuse all incoming measurements (high-frequency/low-accuracy relative VO poses, low-frequency/high-accuracy absolute CVGL poses) into a single, globally consistent trajectory. This component's design is dictated by the user's real-time streaming and refinement requirements [User Query].

A critical architectural choice must be made between a traditional, batch **Structure from Motion (SfM)** pipeline and a real-time **SLAM (Simultaneous Localization and Mapping)** pipeline.

* **Batch SfM:** (e.g., COLMAP).32 This approach is an offline process. It collects all 1500-3000 images, performs feature matching, and then runs a large, non-real-time "Bundle Adjustment" (BA) to solve for all camera poses and 3D points simultaneously.35 While this produces the most accurate possible result, it can take hours to compute. It *cannot* meet the <5s/image or "immediate results" criteria.
* **Real-time SLAM:** (e.g., ORB-SLAM3).28 This approach is *online* and *incremental*. It maintains a "pose graph" of the trajectory.10 It provides an immediate pose estimate based on the VO front-end. When a new, high-quality measurement arrives (like a loop closure 37, or in our case, a CVGL fix), it triggers a fast re-optimization of the graph, publishing a *refined* result.11

The user's requirements for "results...appear immediately" and "system could refine existing calculated results" [User Query] are a textbook description of a real-time SLAM back-end.

### **Table 2: Analysis of Trajectory Optimization Strategies**
| Approach (Tools/Library) | Advantages | Limitations | Requirements | Fitness for Problem Component |
| :---- | :---- | :---- | :---- | :---- |
| **Incremental SLAM (Pose-Graph Optimization)** (g2o, Ceres Solver, GTSAM) | - **Real-time / Online:** Provides immediate pose estimates. - **Supports Refinement:** Explicitly designed to refine past poses when new "loop closure" (CVGL) data arrives.10 - Meets the <5s and streaming criteria. | - Initial estimate is less accurate than a full batch process. - Susceptible to drift *until* a loop closure (CVGL fix) is made. | - A graph optimization library (g2o, Ceres). - A robust cost function to reject outliers. | **Excellent.** This is the *only* architecture that satisfies the user's real-time streaming and asynchronous refinement constraints. |
| **Batch Structure from Motion (Global Bundle Adjustment)** (COLMAP, Agisoft Metashape) | - **Globally Optimal Accuracy:** Produces the most accurate possible 3D reconstruction and trajectory.35 - Can import custom DL matches.38 | - **Offline:** Cannot run in real-time or stream results. - High computational cost (minutes to hours). - Fails all timing and streaming criteria. | - All images must be available before processing starts. - High RAM and CPU. | **Unsuitable (for the *online* system).** This approach is ideal for an *optional, post-flight, high-accuracy* refinement, but it cannot be the primary system. |
### **Selected Approach (Back-End): Incremental Pose-Graph Optimization (g2o/Ceres)**

The system's back-end will be built as an **Incremental Pose-Graph Optimizer** using a library like **g2o** or **Ceres Solver**. This is the only way to meet the real-time streaming and refinement constraints [User Query].

The graph will contain:

* **Nodes:** The 6-DoF pose of each camera frame.
* **Edges (Constraints):**
  1. **Odometry Edges:** Relative 6-DoF transforms from the VO Front-End (SuperPoint+SuperGlue). These are high-frequency but have accumulating drift/scale error.
  2. **Georeferencing Edges:** Absolute 6-DoF poses from the CVGL Module. These are low-frequency but are drift-free and provide the absolute scale.
  3. **Start-Point Edge:** A high-confidence absolute pose for Image 1, fixed to the user-provided start GPS.

This architecture allows the system to provide an immediate estimate (from odometry) and then drastically improve its accuracy (correcting scale and drift) whenever a new georeferencing edge is added.
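To make the graph structure concrete, here is a minimal sketch using GTSAM's Python bindings (a factor-graph library with the same semantics as g2o/Ceres). The two-frame setup, keys, and noise sigmas are illustrative assumptions, not tuned values; note the robust Huber loss on the CVGL anchor so a wrong satellite match cannot corrupt the trajectory.

```python
import numpy as np
import gtsam

graph = gtsam.NonlinearFactorGraph()
initial = gtsam.Values()

# Start-point edge: fix frame 0 to the user-provided start GPS (local metric frame).
# Sigmas: 3 rotation components (rad) followed by 3 translation components (m).
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1, 0.1, 0.1, 1.0, 1.0, 1.0]))
graph.add(gtsam.PriorFactorPose3(0, gtsam.Pose3(), prior_noise))
initial.insert(0, gtsam.Pose3())

# Odometry edge: relative 6-DoF transform from the VO front-end (drifts over time).
vo_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.05, 0.05, 0.05, 2.0, 2.0, 5.0]))
vo_delta = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(100.0, 0.0, 0.0))  # ~100 m forward
graph.add(gtsam.BetweenFactorPose3(0, 1, vo_delta, vo_noise))
initial.insert(1, vo_delta)

# Georeferencing edge: absolute pose from a CVGL match, wrapped in a Huber loss.
cvgl_noise = gtsam.noiseModel.Robust.Create(
    gtsam.noiseModel.mEstimator.Huber.Create(1.345),
    gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1, 0.1, 0.1, 10.0, 10.0, 10.0])))
cvgl_pose = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(98.0, 3.0, 0.0))
graph.add(gtsam.PriorFactorPose3(1, cvgl_pose, cvgl_noise))

# Re-optimizing after each new edge yields the "refined" poses.
result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose3(1).translation())
```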
## **6. Component Analysis: Global-Pose Correction (Georeferencing Module)**

This module is the most critical component for meeting the accuracy requirements. Its task is to provide absolute GPS pose estimates by matching the UAV's nadir-pointing-but-non-stabilized images to the Google Maps satellite provider [User Query]. This is the only component that can correct the monocular scale drift.

This task is known as **Cross-View Geolocalization (CVGL)**.7 It is extremely challenging due to the "domain gap" 44 between the two image sources:

1. **Viewpoint:** The UAV is at low altitude (<1km) and non-nadir (due to fixed-wing tilt) 45, while the satellite is at a very high altitude and is perfectly nadir.
2. **Appearance:** The images come from different sensors, with different lighting (shadows), and at different times. The Google Maps data may be "outdated" [User Query], showing different seasons, vegetation, or man-made structures.47

A simple, brute-force feature match is computationally impossible. The solution is a **hierarchical, two-stage approach** that mimics SOTA research 7:
* **Stage 1: Coarse Retrieval.** We cannot run expensive matching against the entire map. Instead, we treat this as an image retrieval problem. We use a deep-learning model (e.g., a Siamese or dual CNN trained on this task 50) to generate a compact "embedding vector" (a digital signature) for the UAV image. In an offline step, we pre-compute embeddings for *all* satellite map tiles in the operational area. The UAV image's embedding is then used to perform a very fast similarity search (e.g., with the FAISS library) against the satellite database, returning the Top-K most likely-matching satellite tiles (a retrieval sketch follows this list).
* **Stage 2: Fine-Grained Pose.** *Only* for these Top-K candidates do we perform the heavy-duty feature matching. We use our selected **SuperPoint+SuperGlue** matcher 53 to find precise correspondences between the UAV image and the K satellite tiles. If a high-confidence geometric match (e.g., >50 inliers) is found, we can compute the precise 6-DoF pose of the UAV relative to that tile, thus yielding an absolute GPS coordinate.
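A minimal sketch of the Stage 1 lookup, assuming tile embeddings have already been produced by the retrieval CNN and saved to a hypothetical `sat_tiles.npy`; a FAISS inner-product index over L2-normalized vectors implements cosine-similarity search.

```python
import numpy as np
import faiss

D = 512  # embedding dimension (assumption; set by the retrieval model)

# Offline: index the pre-computed embeddings of all satellite tiles.
tile_embeddings = np.load("sat_tiles.npy").astype("float32")  # (num_tiles, D)
faiss.normalize_L2(tile_embeddings)       # unit norm -> inner product == cosine
index = faiss.IndexFlatIP(D)
index.add(tile_embeddings)

def top_k_tiles(uav_embedding: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k satellite tiles most similar to the UAV image."""
    q = uav_embedding.astype("float32").reshape(1, -1)
    faiss.normalize_L2(q)
    _, idx = index.search(q, k)
    return idx[0]
```

Only the returned Top-K tile indices are handed to the Stage 2 matcher, which keeps the expensive geometric verification bounded per frame.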
### **Table 3: Analysis of State-of-the-Art Cross-View Geolocalization (CVGL) Techniques**
| Approach (Tools/Library) | Advantages | Limitations | Requirements | Fitness for Problem Component |
| :---- | :---- | :---- | :---- | :---- |
| **Coarse Retrieval (Siamese/Dual CNNs)** (PyTorch, ResNet18) | - Extremely fast for retrieval (database lookup). - Learns features robust to seasonal and appearance changes.50 - Narrows the search space from millions of tiles to a few. | - Does *not* provide a precise 6-DoF pose, only a "best match" tile. - Requires training on a dataset of matched UAV-satellite pairs. | - Pre-trained model (e.g., on ResNet18).52 - Pre-computed satellite embedding database. | **Essential (as Stage 1).** This is the only computationally feasible way to "find" the UAV on the map. |
| **Fine-Grained Feature Matching** (SuperPoint + SuperGlue) | - Provides a highly accurate 6-DoF pose estimate.53 - Re-uses the same robust matcher from the VO Front-End.54 | - Too slow to run on the entire map. - *Requires* a good initial guess (from Stage 1) to be effective. | - NVIDIA GPU. - Top-K candidate tiles from Stage 1. | **Essential (as Stage 2).** This is the component that actually computes the precise GPS pose from the coarse candidates. |
| **End-to-End DL Models (Transformers)** (PFED, ReCOT, etc.) | - SOTA accuracy in recent benchmarks.13 - Can be highly efficient (e.g., PFED).13 - Can perform retrieval and pose estimation in one model. | - Often research-grade, not robustly open-sourced. - May be complex to train and deploy. - Less modular and harder to debug than the two-stage approach. | - Specific, complex model architectures.13 - Large-scale training datasets. | **Not Recommended (for the initial build).** While powerful, these are less practical for a version 1 build. The two-stage approach is more modular, more debuggable, and uses components already required by the VO system. |
### **Selected Approach (CVGL Module): Hierarchical Retrieval + Matching**

The CVGL module will be implemented as a two-stage hierarchical system:

1. **Stage 1 (Coarse):** A **Siamese CNN** 52 (or similar model) generates an embedding for the UAV image. This embedding is used to retrieve the Top-5 most similar satellite tiles from a pre-computed database.
2. **Stage 2 (Fine):** The **SuperPoint+SuperGlue** matcher 53 is run between the UAV image and these 5 tiles. The match with the highest inlier count and lowest reprojection error is used to calculate the absolute 6-DoF pose, which is then sent to the Back-End optimizer.
## **7. Addressing Critical Acceptance Criteria and Failure Modes**

This hybrid architecture's logic is designed to handle the most difficult acceptance criteria [User Query] through a robust, multi-stage escalation process.

### **Stage 1: Initial State (Normal Operation)**

* **Condition:** VO(N-1 -> N) succeeds.
* **System Logic:** The **VO Front-End** provides the high-frequency relative pose. This is added to the graph, and the **Initial Pose** is sent to the user (<5s).
* **Resolution:** The **CVGL Module** runs asynchronously to provide a Refined Pose later, which corrects for scale drift.

### **Stage 2: Transient Failure / Outlier Handling (AC-3)**

* **Condition:** VO(N-1 -> N) fails (e.g., >350m jump, severe motion blur, low overlap) [User Query]. This triggers an immediate, high-priority CVGL(N) query.
* **System Logic:**
  1. If CVGL(N) *succeeds*, the system has conflicting data: a failed VO link and a successful CVGL pose. The **Back-End Optimizer** uses a robust kernel to reject the high-error VO link as an outlier and accepts the CVGL pose.56 The trajectory "jumps" to the correct location, and VO resumes from Image N+1.
  2. If CVGL(N) *also fails* (e.g., due to cloud cover or an outdated map), the system assumes Image N is a single bad frame (an outlier).
* **Resolution (Frame Skipping):** The system buffers Image N and, upon receiving Image N+1, the **VO Front-End** attempts to "bridge the gap" by matching VO(N-1 -> N+1) (see the control-flow sketch after this list).
  * **If successful,** a pose for N+1 is found. Image N is marked as a rejected outlier, and the system continues.
  * **If VO(N-1 -> N+1) fails,** it repeats for VO(N-1 -> N+2).
  * If this "bridging" fails for 3 consecutive frames, the system concludes it is not a transient outlier but a persistent tracking loss. This escalates to Stage 3.
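The escalation logic can be summarized in a short control-flow sketch. `try_vo`, `try_cvgl`, and the `graph` methods are hypothetical placeholders for the matchers and back-end described in this section; only the 3-frame bridging limit comes from the text above.

```python
MAX_BRIDGE = 3  # consecutive bridging attempts before declaring tracking lost

def handle_frame(graph, frames, last_good, n):
    """Stage 1/2 logic for frame n, given the last successfully tracked frame."""
    pose = try_vo(frames[last_good], frames[n])          # normal operation
    if pose is not None:
        return graph.add_odometry(last_good, n, pose)

    abs_pose = try_cvgl(frames[n])                       # high-priority query
    if abs_pose is not None:
        return graph.add_anchor(n, abs_pose)             # VO link rejected

    for m in range(n + 1, n + 1 + MAX_BRIDGE):           # bridge N+1..N+3
        pose = try_vo(frames[last_good], frames[m])
        if pose is not None:
            graph.mark_outliers(range(n, m))             # skipped frames
            return graph.add_odometry(last_good, m, pose)

    return graph.start_new_chunk(n)                      # Stage 3: persistent loss
```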
### **Stage 3: Persistent Tracking Loss / Sharp Turn Handling (AC-4)**

* **Condition:** VO tracking is lost, and the "frame-skipping" in Stage 2 fails (e.g., a "sharp turn" with no overlap) [User Query].
* **System Logic (Multi-Map "Chunking"):** The **Back-End Optimizer** declares a "Tracking Lost" state and creates a *new, independent map* ("Chunk 2").
* The **VO Front-End** is re-initialized and begins populating this new chunk, tracking VO(N+3 -> N+4), VO(N+4 -> N+5), etc. This new chunk is internally consistent but has no absolute GPS position (it is "floating").
* **Resolution (Asynchronous Relocalization):**
  1. The **CVGL Module** now runs asynchronously on all frames in this new "Chunk 2".
  2. Crucially, it uses the last known GPS coordinate from "Chunk 1" as a *search prior*, narrowing the satellite map search area to the vicinity.
  3. The system continues to build Chunk 2 until the CVGL module successfully finds a high-confidence Absolute_Pose for *any* frame in that chunk (e.g., for Image N+20).
  4. Once this single GPS "anchor" is found, the **Back-End Optimizer** performs a full graph optimization. It calculates the 7-DoF transformation (3D position, 3D rotation, and **scale**) to align all of Chunk 2 and merge it with Chunk 1 (a closed-form sketch of this alignment follows this list).
  5. This "chunking" method robustly handles the "correctly continue the work" criterion by allowing the system to keep tracking locally even while globally lost, confident that it can merge the maps later.
### **Stage 4: Catastrophic Failure / User Intervention (AC-6)**

* **Condition:** The system has entered Stage 3 and is building "Chunk 2," but the **CVGL Module** has *also* failed for a prolonged period (e.g., 20% of the route, or 50+ consecutive frames) [User Query]. This is a "worst-case" scenario where the UAV is in an area with no VO features (e.g., over a lake) *and* no CVGL features (e.g., heavy clouds or outdated maps).
* **System Logic:** The system is "absolutely incapable" of determining its pose.
* **Resolution (User Input):** The system triggers the "ask the user for input" event. A UI prompt will show the last known good image (from Chunk 1) on the map and the new, "lost" image (e.g., N+50). It will ask the user to "Click on the map to provide a coarse location." This user-provided GPS point is then fed to the CVGL module as a *strong prior*, drastically narrowing the search space and enabling it to re-acquire a lock.

## **8. Implementation and Output Generation**

### **Real-time Workflow (<5s Initial, Async Refinement)**

A concrete implementation plan for processing Image N:
1. **T=0.0s:** Image[N] (6200px) received.
2. **T=0.1s:** Image pre-processed: scaled to 1024px for VO/CVGL. Full-res original stored.
3. **T=0.5s:** **VO Front-End** (GPU): SuperPoint features extracted for the 1024px image.
4. **T=1.0s:** **VO Front-End** (GPU): SuperGlue matches 1024px Image[N] -> 1024px Image[N-1]. Relative_Pose (6-DoF) estimated via RANSAC/PnP.
5. **T=1.1s:** **Back-End:** Relative_Pose added to the graph. Optimizer updates the trajectory.
6. **T=1.2s:** **OUTPUT:** Initial Pose_N_Est (GPS) sent to the user. **(<5s criterion met)**.
7. **T=1.3s:** **CVGL Module (Async Task)** (GPU): Siamese/Dual CNN generates an embedding for 1024px Image[N].
8. **T=1.5s:** **CVGL Module (Async Task):** Coarse retrieval (FAISS lookup) returns Top-5 satellite tile candidates.
9. **T=4.0s:** **CVGL Module (Async Task)** (GPU): Fine-grained matching. SuperPoint+SuperGlue runs 5 times (Image[N] vs. 5 satellite tiles).
10. **T=4.5s:** **CVGL Module (Async Task):** A high-confidence match is found. Absolute_Pose_N_Abs (6-DoF) is computed.
11. **T=4.6s:** **Back-End:** High-confidence Absolute_Pose_N_Abs added to the pose graph. Graph re-optimization is triggered.
12. **T=4.8s:** **OUTPUT:** Pose_N_Refined (GPS) sent to the user. **(Refinement criterion met)**.
### **Determining Object-Level GPS (from Pixel Coordinate)**

The requirement to find the "coordinates of the center of any object in these photos" [User Query] is met by projecting a pixel to its 3D world coordinate. This requires the (u,v) pixel, the camera's 6-DoF pose, and the camera's intrinsic matrix (K).

Two methods will be implemented to support the streaming/refinement architecture:
1. **Method 1 (Immediate, <5s): Flat-Earth Projection.**
   * When the user clicks pixel (u,v) on Image[N], the system uses the *Initial Pose_N_Est*.
   * It assumes the ground is a flat plane at the predefined altitude (e.g., a 900m camera-to-ground distance if flying at 1km and the ground is at 100m) [User Query].
   * It computes the 3D ray from the camera center through (u,v) using the intrinsic matrix (K).
   * It calculates the 3D intersection point of this ray with the flat ground plane.
   * This 3D world point is converted to a GPS coordinate and sent to the user. This is very fast but less accurate in non-flat terrain (a minimal sketch of this projection follows this list).
2. **Method 2 (Refined, Post-BA): Structure-from-Motion Projection.**
   * The Back-End's pose-graph optimization, as a byproduct, will create a sparse 3D point cloud of the world (i.e., the "SfM" part of SLAM).35
   * When the user clicks (u,v), the system uses the *Pose_N_Refined*.
   * It raycasts from the camera center through (u,v) and finds the 3D intersection point with the *actual 3D point cloud* generated by the system.
   * This 3D point's coordinate (X,Y,Z) is converted to GPS. This is far more accurate, as it accounts for real-world topography (hills, ditches) captured in the 3D map.
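A minimal sketch of the Method 1 projection, assuming a local ENU frame anchored at `(lat0, lon0)` and a small-area equirectangular approximation for the meters-to-degrees conversion; all names are illustrative, and `R` is the world-from-camera rotation taken from the pose graph.

```python
import numpy as np

WGS84_A = 6_378_137.0  # WGS-84 equatorial radius, meters

def pixel_to_gps(u, v, K, R, C, ground_z, lat0, lon0):
    """K: 3x3 intrinsics; R: world-from-camera rotation; C: camera center (ENU, m);
    ground_z: flat ground plane height (m); (lat0, lon0): ENU origin."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # back-project the pixel
    ray_world = R @ ray_cam                              # rotate into the ENU frame
    lam = (ground_z - C[2]) / ray_world[2]               # ray-plane intersection
    east, north, _ = C + lam * ray_world                 # 3D ground point (ENU)
    lat = lat0 + np.degrees(north / WGS84_A)
    lon = lon0 + np.degrees(east / (WGS84_A * np.cos(np.radians(lat0))))
    return lat, lon
```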
## **9. Testing and Validation Strategy**

A rigorous testing strategy is required to validate all 10 acceptance criteria. The foundation of this strategy is the creation of a **Ground-Truth Test Dataset**. This will involve flying several test routes and manually creating a "checkpoint" (CP) file, similar to the provided coordinates.csv 58, using a high-precision RTK/PPK GPS. This provides the "real GPS" for validation.59

### **Accuracy Validation Methodology (AC-1, AC-2, AC-5, AC-8, AC-9)**

These tests validate the system's accuracy and completion metrics.59

1. A test flight of 1000 images with high-precision ground-truth CPs is prepared.
2. The system is run given only the first GPS coordinate.
3. A test script compares the system's *final refined GPS output* for each image against its *ground-truth CP*. The Haversine distance (error in meters) is calculated for all 1000 images.
4. This yields a list of 1000 error values.
5. **Test_Accuracy_50m (AC-1):** ASSERT (count(errors < 50m) / 1000) >= 0.80
6. **Test_Accuracy_20m (AC-2):** ASSERT (count(errors < 20m) / 1000) >= 0.60
7. **Test_Outlier_Rate (AC-5):** ASSERT (count(un-localized_images) / 1000) < 0.10
8. **Test_Image_Registration_Rate (AC-8):** ASSERT (count(localized_images) / 1000) > 0.95
9. **Test_Mean_Reprojection_Error (AC-9):** ASSERT (Back-End.final_MRE) < 1.0
10. **Test_RMSE:** The overall Root Mean Square Error (RMSE) of the entire trajectory will be calculated as a primary performance benchmark.59
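A sketch of how the accuracy assertions could be scripted, assuming `estimates` holds one (lat, lon) tuple per image (`None` when un-localized) and `ground_truth` holds the RTK/PPK checkpoints; the function names are illustrative.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters."""
    R = 6_371_000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def test_accuracy(estimates, ground_truth):
    errors = [haversine_m(*e, *g) if e is not None else None
              for e, g in zip(estimates, ground_truth)]
    n = len(errors)
    localized = [e for e in errors if e is not None]
    # Fractions are taken over ALL photos in the flight, per AC-1/AC-2 wording.
    assert sum(e < 50.0 for e in localized) / n >= 0.80      # AC-1
    assert sum(e < 20.0 for e in localized) / n >= 0.60      # AC-2
    assert (n - len(localized)) / n < 0.10                   # AC-5
    rmse = (sum(e * e for e in localized) / len(localized)) ** 0.5
    return rmse                                              # Test_RMSE benchmark
```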
### **Integration and Functional Tests (AC-3, AC-4, AC-6)**

These tests validate the system's logic and robustness to failure modes.62

* Test_Low_Overlap_Relocalization (AC-4):
  * **Setup:** Create a test sequence of 50 images. From this, manually delete images 20-24 (simulating 5 lost frames during a sharp turn).63
  * **Test:** Run the system on this "broken" sequence.
  * **Pass/Fail:** The system must report "Tracking Lost" at frame 20, initiate a new "chunk," and then report "Tracking Re-acquired" and "Maps Merged" when the CVGL module successfully localizes frame 25 (or a subsequent frame). The final trajectory error for frame 25 must be < 50m.
* Test_350m_Outlier_Rejection (AC-3):
  * **Setup:** Create a test sequence. At image 30, insert a "rogue" image (Image 30b) known to be 350m away.
  * **Test:** Run the system on this sequence (..., 29, 30, 30b, 31,...).
  * **Pass/Fail:** The system must correctly identify Image 30b as an outlier (RANSAC failure 56), reject it (or jump to its CVGL-verified pose), and "correctly continue the work" by successfully tracking Image 31 from Image 30 (using the frame-skipping logic). The trajectory must not be corrupted.
* Test_User_Intervention_Prompt (AC-6):
  * **Setup:** Create a test sequence with 50 consecutive "bad" frames (e.g., pure sky, lens cap) to ensure both the transient-outlier and chunking recovery paths are exhausted.
  * **Test:** Run the system.
  * **Pass/Fail:** The system must enter a "LOST" state, attempt and fail to relocalize via CVGL for 50 frames, and then correctly trigger the "ask for user input" event.
### **Non-Functional Tests (AC-7, AC-8, Hardware)**

These tests validate performance and resource requirements.66

* Test_Performance_Per_Image (AC-7):
  * **Setup:** Run the 1000-image test set on the minimum-spec RTX 2060.
  * **Test:** Measure the time from "Image In" to "Initial Pose Out" for every frame.
  * **Pass/Fail:** ASSERT average_time < 5.0s.
* Test_Streaming_Refinement (AC-8):
  * **Setup:** Run the 1000-image test set.
  * **Test:** A logger must verify that *two* poses are received for >80% of images: an "Initial" pose (T < 5s) and a "Refined" pose (T > 5s, after CVGL).
  * **Pass/Fail:** Both poses are received, confirming the refinement mechanism is functioning correctly.
* Test_Scalability_Large_Route (Constraints):
  * **Setup:** Run the system on a full 3000-image dataset.
  * **Test:** Monitor system RAM, VRAM, and processing time per frame over the entire run.
  * **Pass/Fail:** The system must complete the run without memory leaks, and the processing time per image must not degrade significantly as the pose graph grows.
Identify all potential weak points and problems. Address them and find ways to solve them. Based on your findings, form a new solution draft in the same format.

If your findings require a complete reorganization of the flow and different components, state it.

Put all findings about what was weak or poor at the beginning of the report, including every new finding and what was updated, replaced, or removed from the previous solution.

Then form a new solution design without referencing the previous system. Remove Poor and Very Poor component choices from the component analysis tables, but leave the Good and Excellent ones.

In the updated report, do not add "new" marks and do not compare to the previous solution draft; just present a new solution as if from scratch.
@@ -1,325 +0,0 @@
Read carefully about the problem:

We have a lot of images taken from a wing-type UAV using a camera with at least Full HD resolution. The resolution of each photo can be up to 6200*4100 for a whole flight, but for other flights it could be Full HD.

Photos are taken and named consecutively, within 100 meters of each other.

We know only the starting GPS coordinates. We need to determine the GPS coordinates of the center of each image, and also the coordinates of the center of any object in these photos. We can use an external satellite provider for ground checks on the existing photos.

The system has the following restrictions and conditions:

- Photos are taken only by airplane-type UAVs.
- Photos are taken by a camera pointing downwards and fixed, but it is not auto-stabilized.
- The flying range is restricted to the eastern and southern parts of Ukraine (to the left of the Dnipro River).
- The image resolution can range from Full HD to 6252*4168. Camera parameters are known: focal length, sensor width, resolution, and so on.
- Altitude is predefined and no more than 1 km. The height of the terrain can be neglected.
- There is NO data from an IMU.
- Flights are done mostly in sunny weather.
- We can use satellite providers, but for now we are limited to Google Maps, which may be outdated for some regions.
- The number of photos can be up to 3000, usually in the 500-1500 range.
- During the flight, UAVs can make sharp turns, so the next photo may be completely different from the previous one (no shared objects), but this is the exception rather than the rule.
- Processing is done on a stationary computer or laptop with an NVIDIA GPU, at least an RTX 2060, preferably an RTX 3070. (For the on-UAV solution a Jetson Orin Nano would be used, but that is out of scope.)

The output of the system should address the following acceptance criteria:

- The system should determine the GPS of the centers of 80% of the photos from the flight within an error of no more than 50 meters compared to the real GPS.
- The system should determine the GPS of the centers of 60% of the photos from the flight within an error of no more than 20 meters compared to the real GPS.
- The system should correctly continue working even in the presence of an outlier photo displaced by up to 350 meters between two consecutive pictures en route. This can happen due to tilt of the plane.
- The system should correctly continue working even during sharp turns, where the next photo does not overlap at all, or overlaps by less than 5%. The next photo will be within 150 m of drift and at an angle of less than 50%.
- The number of outliers during the satellite-provider ground check should be less than 10%.
- If the system is absolutely incapable of determining the GPS of the next, second-next, and third-next images by any means (the 20% of the route not otherwise localized), it should ask the user for input for the next image, so that the user can specify the location.
- Less than 5 seconds to process one image.
- Results of image processing should appear to the user immediately, so the user does not have to wait for the whole route to complete before analyzing the first results. The system may also refine previously calculated results and send the refined results to the user again.
- Image Registration Rate > 95%: the system can find enough matching features to confidently calculate the camera's 6-DoF pose (position and orientation) and "stitch" that image into the final trajectory.
- Mean Reprojection Error (MRE) < 1.0 pixels: the distance, in pixels, between an object's original pixel location and its re-projected pixel location.

Here is a solution draft:
# **GEORTOLS-SA: UAV Image Geolocalization in IMU-Denied Environments**

The GEORTOLS-SA system is an asynchronous, four-component software solution designed for deployment on an NVIDIA RTX 2060+ GPU. It is architected from the ground up to handle the specific challenges of IMU-denied, scale-aware localization and real-time streaming output.

### **Product Solution Description**
* **Inputs:**
  1. A sequence of consecutively named images (FullHD to 6252x4168).
  2. The absolute GPS coordinate (Latitude, Longitude) for the first image (Image 0).
  3. A pre-calibrated camera intrinsic matrix ($K$).
  4. The predefined, absolute metric altitude of the UAV ($H$, e.g., 900 meters).
  5. API access to the Google Maps satellite provider.
* **Outputs (Streaming):**
  1. **Initial Pose (T < 5s):** A high-confidence, *metric-scale* estimate (Pose_N_Est) of the image's 6-DoF pose and GPS coordinate. This is sent to the user immediately upon calculation (AC-7, AC-8).
  2. **Refined Pose (T > 5s):** A globally-optimized pose (Pose_N_Refined) sent asynchronously as the back-end optimizer fuses data from the CVGL module (AC-8).
### **Component Interaction Diagram and Data Flow**

The system is architected as four parallel-processing components to meet the stringent real-time and refinement requirements.

1. **Image Ingestion & Pre-processing:** This module receives the new, high-resolution Image_N. It immediately creates two copies:
   * Image_N_LR (Low-Resolution, e.g., 1536x1024): This copy is immediately dispatched to the SA-VO Front-End for real-time processing.
   * Image_N_HR (High-Resolution, 6.2K): This copy is stored and made available to the CVGL Module for its asynchronous, high-accuracy matching pipeline.
2. **Scale-Aware VO (SA-VO) Front-End (High-Frequency Thread):** This component's sole task is high-speed, *metric-scale* relative pose estimation. It matches Image_N_LR to Image_N-1_LR, computes the 6-DoF relative transform, and, critically, uses the "known altitude" ($H$) constraint to recover the absolute scale (detailed in Section 3.0). It sends this high-confidence Relative_Metric_Pose to the Back-End.
3. **Cross-View Geolocalization (CVGL) Module (Low-Frequency, Asynchronous Thread):** This is a heavier, slower module. It takes Image_N (both LR and HR) and queries the Google Maps database to find an *absolute GPS pose*. When a high-confidence match is found, its Absolute_GPS_Pose is sent to the Back-End as a global "anchor" constraint.
4. **Trajectory Optimization Back-End (Central Hub):** This component manages the complete flight trajectory as a pose graph.10 It continuously fuses two distinct, high-quality data streams:
   * **On receiving Relative_Metric_Pose (T < 5s):** It appends this pose to the graph, calculates Pose_N_Est, and **sends this initial result to the user (AC-7, AC-8 met)**.
   * **On receiving Absolute_GPS_Pose (T > 5s):** It adds this as a high-confidence "global anchor" constraint 12, triggers a full graph re-optimization to correct any minor biases, and **sends Pose_N_Refined to the user (AC-8 refinement met)**.
### **VO "Trust Model" of GEORTOLS-SA**
|
||||
|
||||
In GEORTOLS-SA, the trust model:
|
||||
|
||||
* The **SA-VO Front-End** is now *highly trusted* for its local, frame-to-frame *metric* accuracy.
|
||||
* The **CVGL Module** remains *highly trusted* for its *global* (GPS) accuracy.
|
||||
|
||||
Both components are operating in the same scale-aware, metric space. The Back-End's job is no longer to fix a broken, drifting VO. Instead, it performs a robust fusion of two independent, high-quality metric measurements.12
|
||||
|
||||
This model is self-correcting. If the user's predefined altitude $H$ is slightly incorrect (e.g., entered as 900m but is truly 880m), the SA-VO front-end will be *consistently* off by a small percentage. The periodic, high-confidence CVGL "anchors" will create a consistent, low-level "tension" in the pose graph. The graph optimizer (e.g., Ceres Solver) 3 will resolve this tension by slightly "pulling" the SA-VO poses to fit the global anchors, effectively *learning* and correcting for the altitude bias. This robust fusion is the key to meeting the 20-meter and 50-meter accuracy targets (AC-1, AC-2).
|
||||
|
||||
## **3.0 Core Component: The Scale-Aware Visual Odometry (SA-VO) Front-End**

This component is the critical engine of the system. Its sole task is to compute the *metric-scale* 6-DoF relative motion between consecutive frames, thereby eliminating scale drift at its source.

### **3.1 Rationale and Mechanism for Per-Frame Scale Recovery**

The SA-VO front-end implements a geometric algorithm to recover the absolute scale $s$ for *every* frame-to-frame transition. This algorithm directly leverages the query's "known altitude" ($H$) and "planar ground" constraints.5

The SA-VO algorithm for processing Image_N (relative to Image_N-1) is as follows:
1. **Feature Matching:** Extract and match robust features between Image_N and Image_N-1 using the selected feature matcher (see Section 3.2). This yields a set of corresponding 2D pixel coordinates.
2. **Essential Matrix:** Use RANSAC (Random Sample Consensus) and the camera intrinsic matrix $K$ to compute the Essential Matrix $E$ from the inlier correspondences.2
3. **Pose Decomposition:** Decompose $E$ to find the relative rotation $R$ and the *unscaled* translation vector $t$, where the magnitude $\|t\|$ is fixed to 1.2
4. **Triangulation:** Triangulate the 3D world points $X$ for all inlier features using the unscaled pose $[R \mid t]$.15 These 3D points $X_i$ are now in a local, *unscaled* coordinate system (i.e., we know the *shape* of the point cloud, but not its *size*).
5. **Ground Plane Fitting:** The query states "terrain height can be neglected," meaning we assume a planar ground. A *second* RANSAC pass is performed, this time fitting a 3D plane to the set of triangulated 3D points $X$. The inliers of this RANSAC pass are identified as the ground points $X_g$.5 This method is highly robust because it does not rely on a single point, but on the consensus of all visible ground features.16
6. **Unscaled Height ($h$):** In the fitted plane equation $n^T X + d = 0$ (with $\|n\| = 1$), the value $|d|$ is the perpendicular distance from the camera (at the coordinate system's origin) to the computed ground plane. This is our *unscaled* height $h$.
7. **Scale Computation:** We now have two values: the *real, metric* altitude $H$ (e.g., 900 m) provided by the user, and our *computed, unscaled* height $h$. The absolute scale $s$ for this frame is the ratio of the two: $s = H / h$.
8. **Metric Pose:** The final, metric-scale relative pose is $[R \mid T]$, where the metric translation is $T = s \cdot t$. This high-confidence, scale-aware pose is sent to the Back-End.
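A compact sketch of steps 2-8 using OpenCV, assuming `pts1`/`pts2` are the matched pixel coordinates (float32, shape (N, 2)) from step 1; the plane-RANSAC parameters are illustrative, and the inlier tolerance is expressed in unscaled units.

```python
import cv2
import numpy as np

def metric_relative_pose(pts1, pts2, K, H, n_iters=200, plane_tol=0.01):
    """Steps 2-8: essential matrix, triangulation, ground-plane fit, scale."""
    E, inl = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=inl)   # ||t|| == 1
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    good = mask.ravel() > 0
    X = cv2.triangulatePoints(P1, P2, pts1[good].T, pts2[good].T)
    X = (X[:3] / X[3]).T                       # unscaled 3D points, camera-1 frame

    # Second RANSAC pass: the consensus plane among 3D points is the ground.
    best_count, best_plane = 0, None
    rng = np.random.default_rng(0)
    for _ in range(n_iters):
        sample = X[rng.choice(len(X), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                           # degenerate (collinear) sample
        n, d = n / norm, -(n / norm) @ sample[0]
        count = np.sum(np.abs(X @ n + d) < plane_tol)
        if count > best_count:
            best_count, best_plane = count, (n, d)

    _, d = best_plane
    h = abs(d)        # unscaled camera-to-ground distance (camera at origin)
    s = H / h         # absolute scale from the known metric altitude
    return R, s * t   # metric-scale relative pose [R | T]
```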
### **3.2 Feature Matching Sub-System Analysis**

The success of the SA-VO algorithm depends *entirely* on the quality of the initial feature matches, especially in the low-texture agricultural terrain specified in the query. The system requires a matcher that is both robust (for sparse textures) and extremely fast (for AC-7).

SuperGlue 17 is a strong, proven baseline. However, its successor, LightGlue 18, offers a critical, non-obvious advantage: **adaptivity**.

The UAV flight is specified as *mostly* straight, with high overlap. Sharp turns (AC-4) are "rather an exception." This means ~95% of our image pairs are "easy" to match, while 5% are "hard."

* SuperGlue uses a fixed-depth Graph Neural Network (GNN), spending the *same* (large) amount of compute on an "easy" pair as on a "hard" pair.19 This is inefficient.
* LightGlue is *adaptive*.19 For an easy, high-overlap pair, it can exit early (e.g., at layer 3/9), returning a high-confidence match in a fraction of the time. For a "hard" low-overlap pair, it will use its full depth to get the best possible result.19

By using LightGlue, the system saves *enormous* amounts of computational budget on the 95% of "easy" frames, ensuring it *always* meets the <5s budget (AC-7) and reserving that compute for the harder CVGL tasks. LightGlue is a "plug-and-play replacement" 19 that is faster, more accurate, and easier to train.19
### **Table 1: Analysis of State-of-the-Art Feature Matchers (For SA-VO Front-End)**

| Approach (Tools/Library) | Advantages | Limitations | Requirements | Fitness for Problem Component |
| :---- | :---- | :---- | :---- | :---- |
| **SuperPoint + SuperGlue** 17 | - SOTA robustness in low-texture, high-blur conditions. - GNN reasons about 3D scene context. - Proven in real-time SLAM systems.22 | - Computationally heavy (fixed-depth GNN). - Slower than LightGlue.19 - Training is complex.19 | - NVIDIA GPU (RTX 2060+). - PyTorch or TensorRT.25 | **Good.** A solid, baseline choice. Meets robustness needs but will heavily tax the \<5s time budget (AC-7). |
| **SuperPoint + LightGlue** 18 | - **Adaptive Depth:** Faster on "easy" pairs, more accurate on "hard" pairs.19 - **Faster & Lighter:** Outperforms SuperGlue on speed and accuracy.19 - **Easier to Train:** Simpler architecture and loss.19 - Direct plug-and-play replacement for SuperGlue. | - Newer, less long-term-SLAM-proven than SuperGlue (though rapidly being adopted). | - NVIDIA GPU (RTX 2060+). - PyTorch or TensorRT.28 | **Excellent (Selected).** The adaptive nature is *perfect* for this problem. It saves compute on the 95% of easy (straight) frames, preserving the budget for the 5% of hard (turn) frames, maximizing our ability to meet AC-7. |
### **3.3 Selected Approach (SA-VO): SuperPoint + LightGlue**

The SA-VO front-end will be built using:

* **Detector:** **SuperPoint** 24 to detect sparse, robust features on the Image_N_LR.
* **Matcher:** **LightGlue** 18 to match features from Image_N_LR to Image_N-1_LR.

This combination provides the SOTA robustness required for low-texture fields, while LightGlue's adaptive performance 19 is the key to meeting the \<5s (AC-7) real-time requirement.
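A short usage sketch, assuming the open-source `lightglue` package (github.com/cvg/LightGlue); the file names are placeholders, and exact keyword arguments may differ between versions:

```python
import torch
from lightglue import LightGlue, SuperPoint
from lightglue.utils import load_image, rbd

extractor = SuperPoint(max_num_keypoints=2048).eval().cuda()
# The confidence thresholds below enable LightGlue's adaptive early exit.
matcher = LightGlue(features="superpoint",
                    depth_confidence=0.95, width_confidence=0.99).eval().cuda()

img_prev = load_image("Image_N-1_LR.png").cuda()
img_curr = load_image("Image_N_LR.png").cuda()

feats0 = extractor.extract(img_prev)
feats1 = extractor.extract(img_curr)
out = matcher({"image0": feats0, "image1": feats1})
feats0, feats1, out = rbd(feats0), rbd(feats1), rbd(out)

matches = out["matches"]                   # (M, 2) index pairs
pts1 = feats0["keypoints"][matches[:, 0]]  # pixels in Image_N-1_LR
pts2 = feats1["keypoints"][matches[:, 1]]  # pixels in Image_N_LR, feeds SA-VO step 2
```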
## **4.0 Global Anchoring: The Cross-View Geolocalization (CVGL) Module**

With the SA-VO front-end handling metric scale, the CVGL module's task is refined. Its purpose is no longer to *correct scale*, but to provide *absolute global "anchor" poses*. This corrects for any accumulated bias (e.g., if the $H$ prior is off by 5m) and, critically, *relocalizes* the system after a persistent tracking loss (AC-4).
### **4.1 Hierarchical Retrieval-and-Match Pipeline**

This module runs asynchronously and is computationally heavy. A brute-force search against the entire Google Maps database is computationally infeasible. A two-stage hierarchical pipeline is required:

1. **Stage 1: Coarse Retrieval.** This is treated as an image retrieval problem.29
   * A **Siamese CNN** 30 (or similar Dual-CNN architecture) is used to generate a compact "embedding vector" (a digital signature) for the Image_N_LR.
   * An embedding database will be pre-computed for *all* Google Maps satellite tiles in the specified Eastern Ukraine operational area.
   * The UAV image's embedding is then used to perform a very fast (e.g., FAISS library) similarity search against the satellite database, returning the *Top-K* (e.g., K=5) most likely-matching satellite tiles (see the sketch after this list).
2. **Stage 2: Fine-Grained Pose.**
   * *Only* for these Top-5 candidates, the system performs the heavy-duty **SuperPoint + LightGlue** matching.
   * This match is *not* Image_N -> Image_N-1. It is Image_N -> Satellite_Tile_K.
   * The match with the highest inlier count and lowest reprojection error (MRE \< 1.0, AC-10) is used to compute the precise 6-DoF pose of the UAV relative to that georeferenced satellite tile. This yields the final Absolute_GPS_Pose.
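A minimal sketch of the Stage 1 retrieval, assuming the FAISS library; `embed` (the Siamese-CNN forward pass), the 256-D descriptor size, and the pre-computed embedding file are all assumptions:

```python
import numpy as np
import faiss

D = 256                                            # embedding dimensionality (assumed)
db = np.load("satellite_tile_embeddings.npy").astype("float32")  # (num_tiles, D)
faiss.normalize_L2(db)

index = faiss.IndexFlatIP(D)                       # inner product = cosine on unit vectors
index.add(db)

query = embed(image_n_lr).astype("float32").reshape(1, D)  # Siamese-CNN embedding
faiss.normalize_L2(query)
scores, tile_ids = index.search(query, 5)          # Top-5 candidate satellite tiles
```

For an operational area of this size, an exact `IndexFlatIP` search over a few hundred thousand tiles runs in milliseconds; an approximate index would only become necessary at much larger scales.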
### **4.2 Critical Insight: Solving the Oblique-to-Nadir "Domain Gap"**

A critical, unaddressed failure mode exists. The query states the camera is **"not autostabilized"** [User Query]. On a fixed-wing UAV, this guarantees that during a bank or sharp turn (AC-4), the camera will *not* be nadir (top-down). It will be *oblique*, capturing the ground from an angle. The Google Maps reference, however, is *perfectly nadir*.32

This creates a severe "domain gap".33 A CVGL system trained *only* to match nadir-to-nadir images will *fail* when presented with an oblique UAV image.34 This means the CVGL module will fail *precisely* when it is needed most: during the sharp turns (AC-4) when SA-VO tracking is also lost.

The solution is to *close this domain gap* during training. Since the real-world UAV images will be oblique, the network must be taught to match oblique views to nadir ones.

**Solution: Synthetic Data Generation for Robust Training**

The Stage 1 Siamese CNN 30 must be trained on a custom, synthetically-generated dataset.37 The process is as follows:

1. Acquire nadir satellite imagery and a corresponding Digital Elevation Model (DEM) for the operational area.
2. Use this data to *synthetically render* the nadir satellite imagery from a wide variety of *oblique* viewpoints, simulating the UAV's roll and pitch.38
3. Create thousands of training pairs, each consisting of (Nadir_Satellite_Tile, Synthetically_Oblique_Tile_Angle_30_Deg).
4. Train the Siamese network 29 to learn that these two images—despite their *vastly* different appearances—are a *match*.

This process teaches the retrieval network to be *viewpoint-invariant*.35 It learns to ignore perspective distortion and match the true underlying ground features (road intersections, field boundaries). This is the *only* way to ensure the CVGL module can robustly relocalize the UAV during a sharp turn (AC-4).
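A simplified sketch of step 2, warping a nadir tile through the plane-induced homography of a rotated virtual camera. This flat-ground, pure-rotation approximation is an assumption; a production pipeline would render through the DEM instead:

```python
import numpy as np
import cv2

def render_oblique(nadir_tile, K, roll_deg, pitch_deg):
    """Simulate an oblique view of a nadir tile for a virtual camera with intrinsics K."""
    h, w = nadir_tile.shape[:2]
    rx, ry = np.radians(roll_deg), np.radians(pitch_deg)
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(rx), -np.sin(rx)],
                   [0, np.sin(rx),  np.cos(rx)]])
    Ry = np.array([[ np.cos(ry), 0, np.sin(ry)],
                   [ 0, 1, 0],
                   [-np.sin(ry), 0, np.cos(ry)]])
    # For a pure camera rotation over flat ground, the image-to-image mapping
    # is the infinite homography H = K * R * K^-1.
    H = K @ (Ry @ Rx) @ np.linalg.inv(K)
    return cv2.warpPerspective(nadir_tile, H, (w, h))

# Example training pair for step 3:
# pair = (nadir_tile, render_oblique(nadir_tile, K, roll_deg=30, pitch_deg=10))
```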
## **5.0 Trajectory Fusion: The Robust Optimization Back-End**

This component is the system's central "brain." It runs continuously, fusing all incoming measurements (high-frequency/metric-scale SA-VO poses, low-frequency/globally-absolute CVGL poses) into a single, globally consistent trajectory. This component's design is dictated by the requirements for streaming (AC-8), refinement (AC-8), and outlier-rejection (AC-3).

### **5.1 Selected Strategy: Incremental Pose-Graph Optimization**

The user's requirements for "results...appear immediately" and "system could refine existing calculated results" [User Query] are a textbook description of a real-time SLAM back-end.11 A batch Structure from Motion (SfM) process, which requires all images upfront and can take hours, is unsuitable for the primary system.
### **Table 2: Analysis of Trajectory Optimization Strategies**

| Approach (Tools/Library) | Advantages | Limitations | Requirements | Fitness for Problem Component |
| :---- | :---- | :---- | :---- | :---- |
| **Incremental SLAM (Pose-Graph Optimization)** (g2o 13, Ceres Solver 10, GTSAM) | - **Real-time / Online:** Provides immediate pose estimates (AC-7). - **Supports Refinement:** Explicitly designed to refine past poses when new "loop closure" (CVGL) data arrives (AC-8).11 - **Robust:** Can handle outliers via robust kernels.39 | - Initial estimate is less accurate than a full batch process. - Can drift *if* not anchored (though our SA-VO minimizes this). | - A graph optimization library (g2o, Ceres). - A robust cost function.41 | **Excellent (Selected).** This is the *only* architecture that satisfies all user requirements for real-time streaming and asynchronous refinement. |
| **Batch Structure from Motion (Global Bundle Adjustment)** (COLMAP, Agisoft Metashape) | - **Globally Optimal Accuracy:** Produces the most accurate possible 3D reconstruction and trajectory. | - **Offline:** Cannot run in real-time or stream results. - High computational cost (minutes to hours). - Fails AC-7 and AC-8 completely. | - All images must be available before processing starts. - High RAM and CPU. | **Good (as an *Optional* Post-Processing Step).** Unsuitable as the primary online system, but could be offered as an optional, high-accuracy "Finalize Trajectory" batch process after the flight. |

The system's back-end will be built as an **Incremental Pose-Graph Optimizer** using **Ceres Solver**.10 Ceres is selected due to its large user community, robust documentation, excellent support for robust loss functions 10, and proven scalability for large-scale nonlinear least-squares problems.42
### **5.2 Mechanism for Automatic Outlier Rejection (AC-3, AC-5)**

The system must "correctly continue the work even in the presence of up to 350 meters of an outlier" (AC-3). A standard least-squares optimizer would be catastrophically corrupted by this event, as it would try to *average* this 350m error, pulling the *entire* 300km trajectory out of alignment.

A modern optimizer does not need to use brittle, hand-coded if-then logic to reject outliers. It can *mathematically* and *automatically* down-weight them using **Robust Loss Functions (Kernels)**.41

The mechanism is as follows (a toy demonstration follows this list):

1. The Ceres Back-End 10 maintains a graph of nodes (poses) and edges (constraints, or measurements).
2. A 350m outlier (AC-3) will create an edge with a *massive* error (residual).
3. A standard (quadratic) loss function $\mathrm{cost}(e) = e^2$ would create a *catastrophic* cost, forcing the optimizer to ruin the entire graph to accommodate it.
4. Instead, the system will wrap its cost functions in a **Robust Loss Function**, such as **CauchyLoss** or **HuberLoss**.10
5. A robust loss function behaves quadratically for small errors (which it tries hard to fix) but becomes *sub-linear* for large errors. When it "sees" the 350m error, it mathematically *down-weights its influence*.43
6. The optimizer effectively *acknowledges* the 350m error but *refuses* to pull the entire graph to fix this one "insane" measurement. It automatically, and gracefully, treats the outlier as a "lost cause" and optimizes the 99.9% of "sane" measurements. This is the modern, robust solution to AC-3 and AC-5.
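A 1D toy demonstration of this down-weighting, using SciPy's robust losses as a stand-in for Ceres' HuberLoss/CauchyLoss (the production back-end remains C++/Ceres; all numbers here are synthetic):

```python
import numpy as np
from scipy.optimize import least_squares

N = 100
true_x = np.arange(N) * 100.0                    # ground truth: 100m spacing
rng = np.random.default_rng(0)
odom = np.diff(true_x) + rng.normal(0, 1.0, N - 1)
odom[30] += 350.0                                # the AC-3 outlier edge
anchors = {0: 0.0, 50: 5000.0, 99: 9900.0}       # sparse CVGL-style absolute fixes

def residuals(x):
    rel = np.diff(x) - odom                      # odometry (SA-VO) edges
    anc = np.array([x[i] - v for i, v in anchors.items()])  # anchor (CVGL) edges
    return np.concatenate([rel, anc])

x0 = np.concatenate([[0.0], np.cumsum(odom)])    # dead-reckoning initialization
plain = least_squares(residuals, x0)                               # quadratic loss
robust = least_squares(residuals, x0, loss="cauchy", f_scale=3.0)  # robust kernel

print("max error, quadratic loss:", np.abs(plain.x - true_x).max())
print("max error, Cauchy loss:   ", np.abs(robust.x - true_x).max())
```

With the quadratic loss, the 350m error is smeared across the nodes between the surrounding anchors; with the Cauchy kernel, the single outlier edge is down-weighted and the rest of the chain stays consistent.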
## **6.0 High-Resolution (6.2K) and Performance Optimization**

The system must simultaneously handle massive 6252x4168 (26-Megapixel) images and run on a modest RTX 2060 GPU [User Query] with a \<5s time limit (AC-7). These are opposing constraints.

### **6.1 The Multi-Scale Patch-Based Processing Pipeline**

Running *any* deep learning model (SuperPoint, LightGlue) on a full 6.2K image will be prohibitively slow and will *immediately* cause a CUDA Out-of-Memory (OOM) error on a 6GB RTX 2060.45

The solution is not to process the full 6.2K image in real-time. Instead, a **multi-scale, patch-based pipeline** is required, where different components use the resolution best suited to their task.46

1. **For SA-VO (Real-time, \<5s):** The SA-VO front-end is concerned with *motion*, not fine-grained detail. The 6.2K Image_N_HR is *immediately* downscaled to a manageable 1536x1024 (Image_N_LR). The entire SA-VO (SuperPoint + LightGlue) pipeline runs *only* on this low-resolution, fast-to-process image. This is how the \<5s (AC-7) budget is met.
2. **For CVGL (High-Accuracy, Async):** The CVGL module, which runs asynchronously, is where the 6.2K detail is *selectively* used to meet the 20m (AC-2) accuracy target. It uses a "coarse-to-fine" 48 approach:
   * **Step A (Coarse):** The Siamese CNN 30 runs on the *downscaled* 1536px Image_N_LR to get a coarse [Lat, Lon] guess.
   * **Step B (Fine):** The system uses this coarse guess to fetch the corresponding *high-resolution* satellite tile.
   * **Step C (Patching):** The system runs the SuperPoint detector on the *full 6.2K* Image_N_HR to find the Top 100 *most confident* feature keypoints. It then extracts 100 small (e.g., 256x256) *patches* from the full-resolution image, centered on these keypoints.49
   * **Step D (Matching):** The system then matches *these small, full-resolution patches* against the high-res satellite tile.

This hybrid method provides the best of both worlds: the fine-grained matching accuracy 50 of the 6.2K image, but without the catastrophic OOM errors or performance penalties.45
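A small sketch of Step C's patch extraction, assuming keypoints and confidence scores have already been produced by SuperPoint at full resolution; all parameter values are illustrative:

```python
import numpy as np

def extract_patches(image_hr, keypoints, scores, top_k=100, size=256):
    """image_hr: (H, W, C) array; keypoints: (N, 2) x,y pixels; scores: (N,)."""
    half = size // 2
    h, w = image_hr.shape[:2]
    order = np.argsort(scores)[::-1][:top_k]       # Top-K most confident keypoints
    patches, centers = [], []
    for x, y in keypoints[order].astype(int):
        x = int(np.clip(x, half, w - half))        # clamp so patches fit inside
        y = int(np.clip(y, half, h - half))
        patches.append(image_hr[y - half:y + half, x - half:x + half])
        centers.append((x, y))
    return np.stack(patches), np.array(centers)    # (K, 256, 256, C), (K, 2)
```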
### **6.2 Real-Time Deployment with TensorRT**

PyTorch is a research and training framework. Its default inference speed, even on an RTX 2060, is often insufficient to meet a \<5s production requirement.23

For the final production system, the key neural networks (SuperPoint, LightGlue, Siamese CNN) *must* be converted from their PyTorch-native format into a highly-optimized **NVIDIA TensorRT engine**.

* **Benefits:** TensorRT is an inference optimizer that applies graph optimizations, layer fusion, and precision reduction (e.g., to FP16).52 This can achieve a 2x-4x (or more) speedup over native PyTorch.28
* **Deployment:** The resulting TensorRT engine can be deployed via a C++ API 25, which is far more suitable for a robust, high-performance production system.

This conversion is a *mandatory* deployment step. It is what makes a 2-second inference (well within the 5-second AC-7 budget) *achievable* on the specified RTX 2060 hardware.
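A sketch of the PyTorch-to-ONNX leg of this conversion; the model variable, input shape, and output names are placeholders for the actual SuperPoint module. The ONNX file is then compiled into an engine offline with NVIDIA's `trtexec` tool:

```python
import torch

model = superpoint_model.eval()           # assumed: an already-loaded SuperPoint module
dummy = torch.randn(1, 1, 1024, 1536)     # grayscale 1536x1024 input (Image_N_LR)

torch.onnx.export(
    model, dummy, "superpoint.onnx",
    input_names=["image"],
    output_names=["keypoints", "scores", "descriptors"],  # placeholder names
    dynamic_axes={"image": {2: "height", 3: "width"}},
    opset_version=17,
)

# Offline engine build (shell):
#   trtexec --onnx=superpoint.onnx --fp16 --saveEngine=superpoint_fp16.engine
```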
## **7.0 System Robustness: Failure Mode and Logic Escalation**

The system's logic is designed as a multi-stage escalation process to handle the specific failure modes in the acceptance criteria (AC-3, AC-4, AC-6), ensuring the >95% registration rate (AC-9).
### **Stage 1: Normal Operation (Tracking)**

* **Condition:** SA-VO(N-1 -> N) succeeds. The LightGlue match is high-confidence, and the computed scale $s$ is reasonable.
* **Logic:**
  1. The Relative_Metric_Pose is sent to the Back-End.
  2. The Pose_N_Est is calculated and sent to the user (\<5s).
  3. The CVGL module is queued to run asynchronously to provide a Pose_N_Refined at a later time.
### **Stage 2: Transient SA-VO Failure (AC-3 Outlier Handling)**

* **Condition:** SA-VO(N-1 -> N) fails. This could be a 350m outlier (AC-3), a severely blurred image, or an image with no features (e.g., over a cloud). The LightGlue match fails, or the computed scale $s$ is nonsensical.
* **Logic (Frame Skipping):**
  1. The system *buffers* Image_N and marks it as "tentatively lost."
  2. When Image_N+1 arrives, the SA-VO front-end attempts to "bridge the gap" by matching SA-VO(N-1 -> N+1).
  3. **If successful:** A Relative_Metric_Pose for N+1 is found. Image_N is officially marked as a rejected outlier (AC-5). The system "correctly continues the work" (AC-3 met).
  4. **If it fails:** The system repeats for SA-VO(N-1 -> N+2).
  5. If this "bridging" fails for 3 consecutive frames, the system concludes it is not a transient outlier but a persistent tracking loss, and escalates to Stage 3.
### **Stage 3: Persistent Tracking Loss (AC-4 Sharp Turn Handling)**

* **Condition:** The "frame-skipping" in Stage 2 fails. This is the "sharp turn" scenario (AC-4), where there is \<5% overlap between Image_N-1 and Image_N+k.
* **Logic (Multi-Map "Chunking"):**
  1. The Back-End declares a "Tracking Lost" state at Image_N and creates a *new, independent map chunk* ("Chunk 2").
  2. The SA-VO Front-End is re-initialized at Image_N and begins populating this new chunk, tracking SA-VO(N -> N+1), SA-VO(N+1 -> N+2), etc.
  3. Because the front-end is **Scale-Aware**, this new "Chunk 2" is *already in metric scale*. It is a "floating island" of *known size and shape*; it just is not anchored to the global GPS map.
* **Resolution (Asynchronous Relocalization):**
  1. The **CVGL Module** is now tasked, high-priority, to find a *single* Absolute_GPS_Pose for *any* frame in this new "Chunk 2".
  2. Once the CVGL module (which is robust to oblique views, per Section 4.2) finds one (e.g., for Image_N+20), the Back-End has all the information it needs.
  3. **Merging:** The Back-End calculates the simple 6-DoF transformation (3D translation and rotation, scale=1) to align all of "Chunk 2" and merge it with "Chunk 1". This robustly handles the "correctly continue the work" criterion (AC-4).
### **Stage 4: Catastrophic Failure (AC-6 User Intervention)**

* **Condition:** The system has entered Stage 3 and is building "Chunk 2," but the **CVGL Module** has *also* failed for a prolonged period (e.g., 20% of the route, or 50+ consecutive frames). This is the "worst-case" scenario (e.g., heavy clouds *and* over a large, featureless lake). The system is "absolutely incapable" [User Query].
* **Logic:**
  1. The system has a metric-scale "Chunk 2" but zero idea where it is in the world.
  2. The Back-End triggers the AC-6 flag.
* **Resolution (User Input):**
  1. The UI prompts the user: "Tracking lost. Please provide a coarse location for the *current* image."
  2. The UI displays the last known good image (from Chunk 1) and the new, "lost" image (e.g., Image_N+50).
  3. The user clicks *one point* on the satellite map.
  4. This user-provided [Lat, Lon] is *not* taken as ground truth. It is fed to the CVGL module as a *strong prior*, drastically narrowing its search area from "all of Ukraine" to "a 10km-radius circle."
  5. This allows the CVGL module to re-acquire a lock, which triggers the Stage 3 merge, and the system continues.
## **8.0 Output Generation and Validation Strategy**

This section details how the final user-facing outputs are generated and how the system's compliance with all 10 acceptance criteria will be validated.
### **8.1 Generating Object-Level GPS (from Pixel Coordinate)**

This meets the requirement to find the "coordinates of the center of any object in these photos" [User Query]. The system provides this via a **Ray-Plane Intersection** method (a code sketch follows the steps below).

* **Inputs:**
  1. The user clicks pixel coordinate $(u,v)$ on Image_N.
  2. The system retrieves the refined, global 6-DoF pose $(R, T)$ for Image_N from the Back-End.
  3. The system uses the known camera intrinsic matrix $K$.
  4. The system uses the known *global ground-plane equation* (e.g., $Z = 150\,m$, based on the predefined altitude and start coordinate).
* **Method:**
  1. **Un-project Pixel:** The 2D pixel $(u,v)$ is un-projected into a 3D ray *direction* vector $d_{cam}$ in the camera's local coordinate system: $d_{cam} = K^{-1} \cdot [u, v, 1]^T$.
  2. **Transform Ray:** This ray direction is transformed into the *global* coordinate system using the pose's rotation matrix: $d_{global} = R \cdot d_{cam}$.
  3. **Define Ray:** A 3D ray is now defined, originating at the camera's global position $T$ (from the pose) and traveling in the direction $d_{global}$.
  4. **Intersect:** The system solves the 3D line-plane intersection equation for this ray and the known global ground plane (e.g., find the intersection with $Z = 150\,m$).
  5. **Result:** The 3D intersection point $(X, Y, Z)$ is the *metric* world coordinate of the object on the ground.
  6. **Convert:** This $(X, Y, Z)$ world coordinate is converted to a [Latitude, Longitude, Altitude] GPS coordinate. This process is immediate and can be performed for any pixel on any geolocated image.
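A minimal sketch of steps 1-5, assuming NumPy; the final conversion to [Lat, Lon, Alt] (step 6) would be handled by a geodesy library such as pyproj:

```python
import numpy as np

def pixel_to_ground(u, v, K, R, T, ground_z=150.0):
    """K: intrinsics; R, T: camera-to-world rotation and position; Z-up world."""
    d_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # step 1: un-project pixel
    d_global = R @ d_cam                               # step 2: rotate into world
    # Steps 3-4: ray P(lam) = T + lam * d_global, intersected with Z = ground_z.
    if abs(d_global[2]) < 1e-12:
        raise ValueError("Ray is parallel to the ground plane")
    lam = (ground_z - T[2]) / d_global[2]
    if lam <= 0:
        raise ValueError("Ground plane is behind the camera")
    return T + lam * d_global                          # step 5: metric (X, Y, Z)
```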
### **8.2 Rigorous Validation Methodology**

A comprehensive test plan is required to validate all 10 acceptance criteria. The foundation of this is the creation of a **Ground-Truth Test Harness**.

* **Test Harness:**
  1. **Ground-Truth Data:** Several test flights will be conducted in the operational area using a UAV equipped with a high-precision RTK/PPK GPS. This provides the "real GPS" (ground truth) for every image.
  2. **Test Datasets:** Multiple test datasets will be curated from this ground-truth data:
     * Test_Baseline_1000: A standard 1000-image flight.
     * Test_Outlier_350m (AC-3): Test_Baseline_1000 with a single image from 350m away manually inserted at frame 30.
     * Test_Sharp_Turn_5pct (AC-4): A sequence where frames 20-24 are manually deleted, simulating a \<5% overlap jump.
     * Test_Catastrophic_Fail_20pct (AC-6): A sequence with 200 (20%) consecutive "bad" frames (e.g., pure sky, lens cap) inserted.
     * Test_Full_3000: A full 3000-image sequence to test scalability and memory usage.
* **Test Cases:**
  * **Test_Accuracy (AC-1, AC-2, AC-5, AC-9):**
    * Run Test_Baseline_1000. A test script will compare the system's *final refined GPS output* for each image against its *ground-truth GPS*.
    * ASSERT (count(errors \< 50m) / 1000) >= 0.80 (AC-1)
    * ASSERT (count(errors \< 20m) / 1000) >= 0.60 (AC-2)
    * ASSERT (count(un-localized_images) / 1000) \< 0.10 (AC-5)
    * ASSERT (count(localized_images) / 1000) > 0.95 (AC-9)
  * **Test_MRE (AC-10):**
    * ASSERT (BackEnd.final_MRE) \< 1.0 (AC-10)
  * **Test_Performance (AC-7, AC-8):**
    * Run Test_Full_3000 on the minimum-spec RTX 2060.
    * Log timestamps for "Image In" -> "Initial Pose Out". ASSERT average_time \< 5.0s (AC-7).
    * Log the output stream. ASSERT that >80% of images receive *two* poses: an "Initial" and a "Refined" (AC-8).
  * **Test_Robustness (AC-3, AC-4, AC-6):**
    * Run Test_Outlier_350m. ASSERT the system correctly continues and the final trajectory error for Image_31 is \< 50m (AC-3).
    * Run Test_Sharp_Turn_5pct. ASSERT the system logs "Tracking Lost" and "Maps Merged," and the final trajectory is complete and accurate (AC-4).
    * Run Test_Catastrophic_Fail_20pct. ASSERT the system correctly triggers the "ask for user input" event (AC-6).
Identify all potential weak points and problems. Address them and propose ways to solve them. Based on your findings, form a new solution draft in the same format.

If your finding requires a complete reorganization of the flow and different components, state it.

Put all the findings regarding what was weak and poor at the beginning of the report. Include there all new findings and everything that was updated, replaced, or removed from the previous solution.

Then form a new solution design without referencing the previous system. Remove Poor and Very Poor component choices from the component analysis tables, but leave Good and Excellent ones.

In the updated report, do not put "new" marks and do not compare to the previous solution draft; just present the new solution as if designed from scratch.
@@ -1,301 +0,0 @@
Read carefully about the problem:

We have a lot of images taken from a wing-type UAV using a camera with at least Full HD resolution. Resolution of each photo could be up to 6200*4100 for the whole flight, but for other flights, it could be FullHD.
Photos are taken and named consecutively within 100 meters of each other.
We know only the starting GPS coordinates. We need to determine the GPS of the centers of each image. And also the coordinates of the center of any object in these photos. We can use an external satellite provider for ground checks on the existing photos.

The system has the following restrictions and conditions:
- Photos are taken by only airplane type UAVs.
- Photos are taken by the camera pointing downwards and fixed, but it is not autostabilized.
- The flying range is restricted by the eastern and southern parts of Ukraine (to the left of the Dnipro River)
- The image resolution could be from FullHD to 6252*4168. Camera parameters are known: focal length, sensor width, resolution and so on.
- Altitude is predefined and no more than 1km. The height of the terrain can be neglected.
- There is NO data from IMU
- Flights are done mostly in sunny weather
- We can use satellite providers, but we're limited right now to Google Maps, which could be outdated for some regions
- Number of photos could be up to 3000, usually in the 500-1500 range
- During the flight, UAVs can make sharp turns, so that the next photo may be absolutely different from the previous one (no same objects), but it is rather an exception than the rule
- Processing is done on a stationary computer or laptop with an NVidia GPU, at least RTX2060, better 3070. (For the UAV solution a Jetson Orin Nano would be used, but that is out of scope.)

Output of the system should address the following acceptance criteria:
- The system should find out the GPS of centers of 80% of the photos from the flight within an error of no more than 50 meters in comparison to the real GPS
- The system should find out the GPS of centers of 60% of the photos from the flight within an error of no more than 20 meters in comparison to the real GPS
- The system should correctly continue the work even in the presence of up to 350 meters of an outlier photo between 2 consecutive pictures en route. This could happen due to tilt of the plane.
- The system should correctly continue the work even during sharp turns, where the next photo doesn't overlap at all, or overlaps in less than 5%. The next photo should be in less than 150m drift and at an angle of less than 50%
- The number of outliers during the satellite provider images ground check should be less than 10%
- If the system is absolutely incapable of determining the GPS of the next, second-next, and third-next images by any means (these 20% of the route), then it should ask the user for input for the next image, so that the user can specify the location
- Less than 5 seconds for processing one image
- Results of image processing should appear immediately to the user, so that the user shouldn't wait for the whole route to complete in order to analyze first results. Also, the system could refine existing calculated results and send refined results again to the user
- Image Registration Rate > 95%. The system can find enough matching features to confidently calculate the camera's 6-DoF pose (position and orientation) and "stitch" that image into the final trajectory
- Mean Reprojection Error (MRE) < 1.0 pixels. The distance, in pixels, between the original pixel location of the object and the re-projected pixel location.
Here is a solution draft:

**GEORTEX-R: A Geospatial-Temporal Robust Extraction System for IMU-Denied UAV Geolocalization**

## **1.0 GEORTEX-R: System Architecture and Data Flow**

The GEORTEX-R system is an asynchronous, three-component software solution designed for deployment on an NVIDIA RTX 2060+ GPU. It is architected from the ground up to handle the specific, demonstrated challenges of IMU-denied localization in *non-planar terrain* (as seen in Images 1-9) and *temporally-divergent* (outdated) reference maps (AC-5).

The system's core design principle is the *decoupling of unscaled relative motion from global metric scale*. The front-end estimates high-frequency, robust, but *unscaled* motion. The back-end asynchronously provides sparse, high-confidence *metric* and *geospatial* anchors. The central hub fuses these two data streams into a single, globally-optimized, metric-scale trajectory.
### **1.1 Inputs**

1. **Image Sequence:** Consecutively named images (FullHD to 6252x4168).
2. **Start Coordinate (Image 0):** A single, absolute GPS coordinate (Latitude, Longitude) for the first image.
3. **Camera Intrinsics ($K$):** A pre-calibrated camera intrinsic matrix.
4. **Altitude Prior ($H_{prior}$):** The *approximate* predefined metric altitude (e.g., 900 meters). This is used as a *prior* (a hint) for optimization, *not* a hard constraint.
5. **Geospatial API Access:** Credentials for an on-demand satellite and DEM provider (e.g., Copernicus, EOSDA).

### **1.2 Streaming Outputs**

1. **Initial Pose (Pose_N_Est):** An *unscaled* pose estimate. This is sent immediately to the UI for real-time visualization of the UAV's *path shape* (AC-7, AC-8).
2. **Refined Pose (Pose_N_Refined) [Asynchronous]:** A globally-optimized, *metric-scale* pose (position $X, Y, Z$ plus orientation quaternion $Q_x, Q_y, Q_z, Q_w$) and its corresponding [Lat, Lon, Alt] coordinate. This is sent to the user whenever the Trajectory Optimization Hub re-converges, updating all past poses (AC-1, AC-2, AC-8).
### **1.3 Component Interaction and Data Flow**

The system is architected as the following parallel-processing components:

1. **Image Ingestion & Pre-processing:** This module receives the new Image_N (up to 6.2K). It creates two copies:
   * Image_N_LR (Low-Resolution, e.g., 1536x1024): Dispatched *immediately* to the V-SLAM Front-End for real-time processing.
   * Image_N_HR (High-Resolution, 6.2K): Stored for asynchronous use by the Geospatial Anchoring Back-End (GAB).
2. **V-SLAM Front-End (High-Frequency Thread):** This component's sole task is high-speed, *unscaled* relative pose estimation. It tracks Image_N_LR against a *local map of keyframes*. It performs local bundle adjustment to minimize drift 12 and maintains a co-visibility graph of all keyframes. It sends Relative_Unscaled_Pose estimates to the Trajectory Optimization Hub (TOH).
3. **Geospatial Anchoring Back-End (GAB) (Low-Frequency, Asynchronous Thread):** This is the system's "anchor." When triggered by the TOH, it fetches *on-demand* geospatial data (satellite imagery and DEMs) from an external API.3 It then performs a robust *hybrid semantic-visual* search 5 to find an *absolute, metric, global pose* for a given keyframe, robust to outdated maps (AC-5) 5 and oblique views (AC-4).14 This Absolute_Metric_Anchor is sent to the TOH.
4. **Trajectory Optimization Hub (TOH) (Central Hub):** This component manages the complete flight trajectory as a **Sim(3) pose graph** (7-DoF). It continuously fuses two distinct data streams:
   * **On receiving Relative_Unscaled_Pose (T \< 5s):** It appends this pose to the graph, calculates the Pose_N_Est, and sends this *unscaled* initial result to the user (AC-7, AC-8 met).
   * **On receiving Absolute_Metric_Anchor (T > 5s):** This is the critical event. It adds this as a high-confidence *global metric constraint*. This anchor creates "tension" in the graph, which the optimizer (Ceres Solver 15) resolves by finding the *single global scale factor* that best fits all V-SLAM and CVGL measurements. It then triggers a full graph re-optimization, "stretching" the entire trajectory to the correct metric scale, and sends the new Pose_N_Refined stream to the user for all affected poses (AC-1, AC-2, AC-8 refinement met).
## **2.0 Core Component: The High-Frequency V-SLAM Front-End**

This component's sole task is to robustly and accurately compute the *unscaled* 6-DoF relative motion of the UAV and build a geometrically-consistent map of keyframes. It is explicitly designed to be more robust to drift than simple frame-to-frame odometry.

### **2.1 Rationale: Keyframe-Based Monocular SLAM**

The choice of a keyframe-based V-SLAM front-end over a frame-to-frame VO is deliberate and critical for system robustness.

* **Drift Mitigation:** Frame-to-frame VO is "prone to drift accumulation due to errors introduced by each frame-to-frame motion estimation".13 A single poor match permanently corrupts all future poses.
* **Robustness:** A keyframe-based system tracks new images against a *local map* of *multiple* previous keyframes, not just Image_N-1. This provides resilience to transient failures (e.g., motion blur, occlusion).
* **Optimization:** This architecture enables "local bundle adjustment" 12, a process where a sliding window of recent keyframes is continuously re-optimized, actively minimizing error and drift *before* it can accumulate.
* **Relocalization:** This architecture possesses *innate relocalization capabilities* (see Section 6.3), which is the correct, robust solution to the "sharp turn" (AC-4) requirement.
### **2.2 Feature Matching Sub-System**

The success of the V-SLAM front-end depends entirely on high-quality feature matches, especially in the sparse, low-texture agricultural terrain seen in the provided images (e.g., Image 6, Image 7). The system requires a matcher that is robust (for sparse textures 17) and extremely fast (for AC-7).

The selected approach is **SuperPoint + LightGlue**.

* **SuperPoint:** A SOTA (State-of-the-Art) feature detector proven to find robust, repeatable keypoints in challenging, low-texture conditions.17
* **LightGlue:** A highly optimized GNN-based matcher that is the successor to SuperGlue.19

The key advantage of selecting LightGlue 19 over SuperGlue 20 is its *adaptive nature*. The query states sharp turns (AC-4) are "rather an exception." This implies \~95% of image pairs are "easy" (high-overlap, straight flight) and 5% are "hard" (low-overlap, turns). SuperGlue uses a fixed-depth GNN, spending the *same* large amount of compute on an "easy" pair as a "hard" one. LightGlue is *adaptive*.19 For an "easy" pair, it can exit its GNN early, returning a high-confidence match in a fraction of the time. This saves an *enormous* computational budget on the 95% of "easy" frames, ensuring the system *always* meets the \<5s budget (AC-7) and reserving that compute for the GAB.
#### **Table 1: Analysis of State-of-the-Art Feature Matchers (For V-SLAM Front-End)**

| Approach (Tools/Library) | Advantages | Limitations | Requirements | Fitness for Problem Component |
| :---- | :---- | :---- | :---- | :---- |
| **SuperPoint + SuperGlue** 20 | - SOTA robustness in low-texture, high-blur conditions. - GNN reasons about 3D scene context. - Proven in real-time SLAM systems. | - Computationally heavy (fixed-depth GNN). - Slower than LightGlue.19 | - NVIDIA GPU (RTX 2060+). - PyTorch or TensorRT.21 | **Good.** A solid, baseline choice. Meets robustness needs but will heavily tax the \<5s time budget (AC-7). |
| **SuperPoint + LightGlue** 17 | - **Adaptive Depth:** Faster on "easy" pairs, more accurate on "hard" pairs.19 - **Faster & Lighter:** Outperforms SuperGlue on speed and accuracy. - SOTA "in practice" choice for large-scale matching.17 | - Newer, but rapidly being adopted and proven.21 | - NVIDIA GPU (RTX 2060+). - PyTorch or TensorRT.22 | **Excellent (Selected).** The adaptive nature is *perfect* for this problem. It saves compute on the 95% of easy (straight) frames, maximizing our ability to meet AC-7. |
## **3.0 Core Component: The Geospatial Anchoring Back-End (GAB)**

This component is the system's "anchor to reality." It runs asynchronously to provide the *absolute, metric-scale* constraints needed to solve the trajectory. It is an *on-demand* system that solves three distinct "domain gaps": the hardware/scale gap, the temporal gap, and the viewpoint gap.

### **3.1 On-Demand Geospatial Data Retrieval**

A "pre-computed database" for all of Eastern Ukraine is operationally infeasible on laptop-grade hardware.1 This design is replaced by an on-demand, API-driven workflow.

* **Mechanism:** When the TOH requests a global anchor, the GAB receives a *coarse* [Lat, Lon] estimate. The GAB then performs API calls to a geospatial data provider (e.g., EOSDA 3, Copernicus 8).
* **Dual-Retrieval:** The API query requests *two* distinct products for the specified Area of Interest (AOI):
  1. **Visual Tile:** A high-resolution (e.g., 30-50cm) satellite ortho-image.26
  2. **Terrain Tile:** The corresponding **Digital Elevation Model (DEM)**, such as the Copernicus GLO-30 (30m resolution) or SRTM (30m).7

This "Dual-Retrieval" mechanism is the central, enabling synergy of the new architecture. The **Visual Tile** is used by the CVGL (Section 3.2) to find the *geospatial pose*. The **DEM Tile** is used by the *output module* (Section 7.1) to perform high-accuracy **Ray-DEM Intersection**, solving the final output accuracy problem.
### **3.2 Hybrid Semantic-Visual Localization**

The "temporal gap" (evidenced by burn scars in Images 1-9) and "outdated maps" (AC-5) make a purely visual CVGL system unreliable.5 The GAB solves this using a robust, two-stage *hybrid* matching pipeline.

1. **Stage 1: Coarse Visual Retrieval (Siamese CNN).** A lightweight Siamese CNN 14 is used to find the *approximate* location of the Image_N_LR *within* the large, newly-fetched satellite tile. This acts as a "candidate generator."
2. **Stage 2: Fine-Grained Semantic-Visual Fusion.** For the top candidates, the GAB performs a *dual-channel alignment*.
   * **Visual Channel (Unreliable):** It runs SuperPoint+LightGlue on high-resolution *patches* (from Image_N_HR) against the satellite tile. This match may be *weak* due to temporal gaps.5
   * **Semantic Channel (Reliable):** It extracts *temporally-invariant* semantic features (e.g., road-vectors, field-boundaries, tree-cluster-polygons, lake shorelines) from *both* the UAV image (using a segmentation model) and the satellite/OpenStreetMap data.5
   * **Fusion:** A RANSAC-based optimizer finds the 6-DoF pose that *best aligns* this *hybrid* set of features.

This hybrid approach is robust to the exact failure mode seen in the images. When matching Image 3 (burn scars), the *visual* LightGlue match will be poor. However, the *semantic* features (the dirt road, the tree line) are *unchanged*. The optimizer will find a high-confidence pose by *trusting the semantic alignment* over the poor visual alignment, thereby succeeding despite the "outdated map" (AC-5).
### **3.3 Solution to Viewpoint Gap: Synthetic Oblique View Training**

This component is critical for handling "sharp turns" (AC-4). The camera *will* be oblique, not nadir, during turns.

* **Problem:** The GAB's Stage 1 Siamese CNN 14 will be matching an *oblique* UAV view to a *nadir* satellite tile. This "viewpoint gap" will cause a match failure.14
* **Mechanism (Synthetic Data Generation):** The network must be trained for *viewpoint invariance*.28
  1. Using the on-demand DEMs (fetched in 3.1) and satellite tiles, the system can *synthetically render* the satellite imagery from *any* roll, pitch, and altitude.
  2. The Siamese network is trained on (Nadir_Tile, Synthetic_Oblique_Tile) pairs.14
* **Result:** This process teaches the network to match the *underlying ground features*, not the *perspective distortion*. It ensures the GAB can relocalize the UAV *precisely* when it is needed most: during a sharp, banking turn (AC-4) when VO tracking has been lost.
## **4.0 Core Component: The Trajectory Optimization Hub (TOH)**

This component is the system's central "brain." It runs continuously, fusing all measurements (high-frequency/unscaled V-SLAM, low-frequency/metric-scale GAB anchors) into a single, globally consistent trajectory.

### **4.1 Incremental Sim(3) Pose-Graph Optimization**

The "planar ground" SA-VO (Finding 1) is removed. This component is its replacement. The system must *discover* the global scale, not *assume* it.

* **Selected Strategy:** An incremental pose-graph optimizer using **Ceres Solver**.15
* **The Sim(3) Insight:** The V-SLAM front-end produces *unscaled* 6-DoF ($SE(3)$) relative poses. The GAB produces *metric-scale* 6-DoF ($SE(3)$) *absolute* poses. These cannot be directly combined. The graph must be optimized in **Sim(3) (7-DoF)**, which adds a *single global scale factor $s$* as an optimizable variable.
* **Mechanism (Ceres Solver):**
  1. **Nodes:** Each keyframe pose (7-DoF: translation $X, Y, Z$, rotation $Q_x, Q_y, Q_z, Q_w$, and scale $s$).
  2. **Edge 1 (V-SLAM):** A relative pose constraint between Keyframe_i and Keyframe_j. The error is computed in Sim(3).
  3. **Edge 2 (GAB):** An *absolute* pose constraint on Keyframe_k. This constraint *fixes* Keyframe_k's pose to the *metric* GPS coordinate and *fixes its scale $s$ to 1.0*.
* **Bootstrapping Scale:** The TOH graph "bootstraps" the scale.32 The GAB's $s=1.0$ anchor creates "tension" in the graph. The Ceres optimizer 15 resolves this tension by finding the *one* global scale $s$ for all V-SLAM nodes that minimizes the total error, effectively "stretching" the entire unscaled trajectory to fit the metric anchors. This is robust to *any* terrain.34 (A toy example of this scale recovery follows.)
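A toy 2D illustration of scale bootstrapping, using SciPy in place of Ceres: the front-end supplies unscaled relative displacements, two metric anchors pin the ends, and a single global scale $s$ is recovered as an optimized variable (full Sim(3) on-manifold optimization is left to Ceres/GTSAM; all numbers are synthetic):

```python
import numpy as np
from scipy.optimize import least_squares

rel_unscaled = np.array([[1.0, 0.0], [0.9, 0.1], [1.1, -0.1]])  # V-SLAM edges (unitless)
anchors = {0: np.array([0.0, 0.0]), 3: np.array([310.0, 2.0])}  # metric GAB anchors

def residuals(params):
    s = params[0]                          # the single global scale factor
    xy = params[1:].reshape(-1, 2)         # node positions (metric)
    rel = (np.diff(xy, axis=0) - s * rel_unscaled).ravel()        # odometry edges
    anc = np.concatenate([xy[i] - p for i, p in anchors.items()])  # anchor edges
    return np.concatenate([rel, anc])

# Initialize at scale 1 with dead-reckoned positions.
xy0 = np.vstack([np.zeros(2), np.cumsum(rel_unscaled, axis=0)])
sol = least_squares(residuals, np.concatenate([[1.0], xy0.ravel()]))
print("recovered global scale:", sol.x[0])  # ~103: the trajectory is "stretched"
```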
#### **Table 2: Analysis of Trajectory Optimization Strategies**

| Approach (Tools/Library) | Advantages | Limitations | Requirements | Fitness for Problem Component |
| :---- | :---- | :---- | :---- | :---- |
| **Incremental SLAM (Pose-Graph Optimization)** (Ceres Solver 15, g2o 35, GTSAM) | - **Real-time / Online:** Provides immediate pose estimates (AC-7). - **Supports Refinement:** Explicitly designed to refine past poses when new "loop closure" (GAB) data arrives (AC-8).13 - **Robust:** Can handle outliers via robust kernels.15 | - Initial estimate is *unscaled* until a GAB anchor arrives. - Can drift *if* not anchored (though V-SLAM minimizes this). | - A graph optimization library (Ceres). - A robust cost function. | **Excellent (Selected).** This is the *only* architecture that satisfies all user requirements for real-time streaming and asynchronous refinement. |
| **Batch Structure from Motion (Global Bundle Adjustment)** (COLMAP, Agisoft Metashape) | - **Globally Optimal Accuracy:** Produces the most accurate possible 3D reconstruction and trajectory. | - **Offline:** Cannot run in real-time or stream results. - High computational cost (minutes to hours). - Fails AC-7 and AC-8 completely. | - All images must be available before processing starts. - High RAM and CPU. | **Good (as an *Optional* Post-Processing Step).** Unsuitable as the primary online system, but could be offered as an optional, high-accuracy "Finalize Trajectory" batch process. |
### **4.2 Automatic Outlier Rejection (AC-3, AC-5)**

The system must handle 350m outliers (AC-3) and \<10% bad GAB matches (AC-5).

* **Mechanism (Robust Loss Functions):** A standard least-squares optimizer (like Ceres 15) would be catastrophically corrupted by a 350m error. The solution is to wrap *all* constraints in a **Robust Loss Function (e.g., HuberLoss, CauchyLoss)**.15
* **Result:** A robust loss function mathematically *down-weights* the influence of constraints with large errors. When it "sees" the 350m error (AC-3), it effectively acknowledges the measurement but *refuses* to pull the entire 3000-image trajectory to fit this one "insane" data point. It automatically and gracefully *ignores* the outlier, optimizing the 99.9% of "sane" measurements. This is the modern, robust solution to AC-3 and AC-5.
## **5.0 High-Performance Compute & Deployment**

The system must run on an RTX 2060 (AC-7) and process 6.2K images. These are opposing constraints.

### **5.1 Multi-Scale, Patch-Based Processing Pipeline**

Running deep learning models (SuperPoint, LightGlue) on a full 6.2K (26-Megapixel) image will cause a CUDA Out-of-Memory (OOM) error and be prohibitively slow.

* **Mechanism (Coarse-to-Fine):**
  1. **For V-SLAM (Real-time, \<5s):** The V-SLAM front-end (Section 2.0) runs *only* on the Image_N_LR (e.g., 1536x1024) copy. This is fast enough to meet the AC-7 budget.
  2. **For GAB (High-Accuracy, Async):** The GAB (Section 3.0) uses the full-resolution Image_N_HR *selectively* to meet the 20m accuracy (AC-2).
     * It first runs its coarse Siamese CNN 27 on the Image_N_LR.
     * It then runs the SuperPoint detector on the *full 6.2K* image to find the *most confident* feature keypoints.
     * It then extracts small, 256x256 *patches* from the *full-resolution* image, centered on these keypoints.
     * It matches *these small, full-resolution patches* against the high-res satellite tile.
* **Result:** This hybrid method provides the fine-grained matching accuracy of the 6.2K image (needed for AC-2) without the catastrophic OOM errors or performance penalties.
### **5.2 Mandatory Deployment: NVIDIA TensorRT Acceleration**

PyTorch is a research framework. For production, its inference speed is often insufficient.

* **Requirement:** The key neural networks (SuperPoint, LightGlue, Siamese CNN) *must* be converted from PyTorch into a highly-optimized **NVIDIA TensorRT engine**.
* **Research Validation:** Source 23 demonstrates this process for LightGlue, achieving "2x-4x speed gains over compiled PyTorch"; sources 22 and 21 provide open-source repositories for SuperPoint+LightGlue conversion to ONNX and TensorRT.
* **Result:** This is not an "optional" optimization. It is a *mandatory* deployment step. This conversion (which applies layer fusion, graph optimization, and FP16 precision) is what makes achieving the \<5s (AC-7) performance *possible* on the specified RTX 2060 hardware.36
## **6.0 System Robustness: Failure Mode Escalation Logic**

This logic defines the system's behavior during real-world failures, ensuring it meets criteria AC-3, AC-4, AC-6, and AC-9.

### **6.1 Stage 1: Normal Operation (Tracking)**

* **Condition:** V-SLAM front-end (Section 2.0) is healthy.
* **Logic:**
  1. V-SLAM successfully tracks Image_N_LR against its local keyframe map.
  2. A new Relative_Unscaled_Pose is sent to the TOH.
  3. TOH sends Pose_N_Est (unscaled) to the user (\<5s).
  4. If Image_N is selected as a new keyframe, the GAB (Section 3.0) is *queued* to find an Absolute_Metric_Anchor for it, which will trigger a Pose_N_Refined update later.
### **6.2 Stage 2: Transient VO Failure (Outlier Rejection)**

* **Condition:** Image_N is unusable (e.g., severe blur, sun-glare, 350m outlier per AC-3).
* **Logic (Frame Skipping):**
  1. V-SLAM front-end fails to track Image_N_LR against the local map.
  2. The system *discards* Image_N (marking it as a rejected outlier, AC-5).
  3. When Image_N+1 arrives, the V-SLAM front-end attempts to track it against the *same* local keyframe map (from Image_N-1).
  4. **If successful:** Tracking resumes. Image_N is officially an outlier. The system "correctly continues the work" (AC-3 met).
  5. **If it fails:** The system repeats for Image_N+2, N+3. If this fails for \~5 consecutive frames, it escalates to Stage 3.
### **6.3 Stage 3: Persistent VO Failure (Relocalization)**

* **Condition:** Tracking is lost for multiple frames. This is the "sharp turn" or "low overlap" scenario (AC-4).
* **Logic (Keyframe-Based Relocalization):**
  1. The V-SLAM front-end declares "Tracking Lost."
  2. **Critically:** It does *not* create a "new map chunk."
  3. Instead, it enters **Relocalization Mode**. For every new Image_N+k, it extracts features (SuperPoint) and queries the *entire* existing database of past keyframes for a match.
* **Resolution:** The UAV completes its sharp turn. Image_N+5 now has high overlap with Image_N-10 (from *before* the turn).
  1. The relocalization query finds a strong match.
  2. The V-SLAM front-end computes the 6-DoF pose of Image_N+5 relative to the *existing map*.
  3. Tracking is *resumed* seamlessly. The system "correctly continues the work" (AC-4 met). This is vastly more robust than the previous "map-merging" logic.
### **6.4 Stage 4: Catastrophic Failure (User Intervention)**

* **Condition:** The system is in Stage 3 (Lost), but *also*, the **GAB (Section 3.0) has failed** to find *any* global anchors for a prolonged period (e.g., 20% of the route). This is the "absolutely incapable" scenario (AC-6), e.g., heavy fog *and* flight over a large, featureless body of water.
* **Logic:**
  1. The system has an *unscaled* trajectory, and *zero* idea where it is in the world.
  2. The TOH triggers the AC-6 flag.
* **Resolution (User-Aided Prior):**
  1. The UI prompts the user: "Tracking lost. Please provide a coarse location for the *current* image."
  2. The user clicks *one point* on a map.
  3. This [Lat, Lon] is *not* taken as ground truth. It is fed to the **GAB (Section 3.1)** as a *strong prior* for its on-demand API query.
  4. This narrows the GAB's search area from "all of Ukraine" to "a 5km radius." This *guarantees* the GAB's Dual-Retrieval (Section 3.1) will fetch the *correct* satellite and DEM tiles, allowing the Hybrid Matcher (Section 3.2) to find a high-confidence Absolute_Metric_Anchor, which in turn re-scales (Section 4.1) and relocalizes the entire trajectory.
## **7.0 Output Generation and Validation Strategy**

This section details how the final user-facing outputs are generated, specifically solving the "planar ground" output flaw, and how the system's compliance with all 10 ACs will be validated.

### **7.1 High-Accuracy Object Geolocalization via Ray-DEM Intersection**

The "Ray-Plane Intersection" method is inaccurate for non-planar terrain 37 and is replaced with a high-accuracy ray-tracing method. This is the correct method for geolocating an object on the *non-planar* terrain visible in Images 1-9.

* **Inputs:**
  1. User clicks pixel coordinate $(u,v)$ on Image_N.
  2. System retrieves the *final, refined, metric* 7-DoF pose $P = (R, T, s)$ for Image_N from the TOH.
  3. The system uses the known camera intrinsic matrix $K$.
  4. System retrieves the specific **30m DEM tile** 8 that was fetched by the GAB (Section 3.1) for this region of the map. This DEM is treated as a 3D terrain mesh.
* **Algorithm (Ray-DEM Intersection):**
  1. **Un-project Pixel:** The 2D pixel $(u,v)$ is un-projected into a 3D ray *direction* vector $d_{cam}$ in the camera's local coordinate system: $d_{cam} = K^{-1} \cdot [u, v, 1]^T$.
  2. **Transform Ray:** This ray direction $d_{cam}$ and origin $(0,0,0)$ are transformed into the *global, metric* coordinate system using the pose $P$. This yields a ray originating at $T$ and traveling in direction $R \cdot d_{cam}$.
  3. **Intersect:** The system performs a numerical *ray-mesh intersection* 39 to find the 3D point $(X, Y, Z)$ where this global ray *intersects the 3D terrain mesh* of the DEM.
  4. **Result:** This 3D intersection point $(X, Y, Z)$ is the *metric* world coordinate of the object *on the actual terrain*.
  5. **Convert:** This $(X, Y, Z)$ world coordinate is converted to a [Latitude, Longitude, Altitude] GPS coordinate.

This method correctly accounts for terrain. A pixel aimed at the top of a hill will intersect the DEM at a high Z-value. A pixel aimed at the ravine (Image 1) will intersect at a low Z-value. This is the *only* method that can reliably meet the 20m accuracy (AC-2) for object localization.
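A minimal sketch of step 3's numerical intersection, marching along the ray and bisecting the first above-to-below-terrain crossing. The helper `dem_height(x, y)` (a bilinear lookup into the fetched DEM raster) and the step sizes are assumptions:

```python
import numpy as np

def ray_dem_intersect(origin, direction, dem_height, step=15.0, max_range=5000.0):
    """origin, direction: 3D ray in metric world coordinates (Z up)."""
    d = direction / np.linalg.norm(direction)
    prev_t = 0.0
    t = step
    while t < max_range:
        p = origin + t * d
        if p[2] <= dem_height(p[0], p[1]):     # ray has dipped below the terrain
            lo, hi = prev_t, t                 # refine the crossing by bisection
            for _ in range(25):
                mid = 0.5 * (lo + hi)
                pm = origin + mid * d
                if pm[2] > dem_height(pm[0], pm[1]):
                    lo = mid
                else:
                    hi = mid
            return origin + hi * d             # metric (X, Y, Z) on the terrain
        prev_t = t
        t += step
    return None                                # no intersection within max_range
```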
### **7.2 Rigorous Validation Methodology**

A comprehensive test plan is required. The foundation is a **Ground-Truth Test Harness** using the provided coordinates.csv.42

* **Test Harness:**
  1. **Ground-Truth Data:** The file coordinates.csv 42 provides ground-truth [Lat, Lon] for 60 images (e.g., AD000001.jpg...AD000060.jpg).
  2. **Test Datasets:**
     * Test_Baseline_60 42: The 60 images and their coordinates.
     * Test_Outlier_350m (AC-3): Test_Baseline_60 with a single, unrelated image inserted at frame 30.
     * Test_Sharp_Turn_5pct (AC-4): A sequence where frames 20-24 are manually deleted, simulating a \<5% overlap jump.
* **Test Cases:**
  * **Test_Accuracy (AC-1, AC-2, AC-5, AC-9):**
    * **Run:** Execute GEORTEX-R on Test_Baseline_60, providing AD000001.jpg's coordinate (48.275292, 37.385220) as the Start Coordinate.42
    * **Script:** A validation script will compute the Haversine distance error between the *system's refined GPS output* for each image (2-60) and the *ground-truth GPS* from coordinates.csv (see the sketch after this list).
    * **ASSERT** (count(errors \< 50m) / 60) >= 0.80 **(AC-1 Met)**
    * **ASSERT** (count(errors \< 20m) / 60) >= 0.60 **(AC-2 Met)**
    * **ASSERT** (count(un-localized_images) / 60) \< 0.10 **(AC-5 Met)**
    * **ASSERT** (count(localized_images) / 60) > 0.95 **(AC-9 Met)**
  * **Test_MRE (AC-10):**
    * **Run:** After Test_Baseline_60 completes.
    * **ASSERT** TOH.final_Mean_Reprojection_Error \< 1.0 **(AC-10 Met)**
  * **Test_Performance (AC-7, AC-8):**
    * **Run:** Execute on a 1500-image sequence on the minimum-spec RTX 2060.
    * **Log:** Log timestamps for "Image In" -> "Initial Pose Out".
    * **ASSERT** average_time \< 5.0s **(AC-7 Met)**
    * **Log:** Log the output stream.
    * **ASSERT** >80% of images receive *two* poses: an "Initial" and a "Refined" **(AC-8 Met)**
  * **Test_Robustness (AC-3, AC-4):**
    * **Run:** Execute Test_Outlier_350m.
    * **ASSERT** System logs "Stage 2: Discarding Outlier" and the final trajectory error for Image_31 is \< 50m **(AC-3 Met)**.
    * **Run:** Execute Test_Sharp_Turn_5pct.
    * **ASSERT** System logs "Stage 3: Tracking Lost" and "Relocalization Succeeded," and the final trajectory is complete and accurate **(AC-4 Met)**.
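A sketch of the validation script's error metric, with `system_output` and `ground_truth` as assumed lists of (lat, lon) pairs:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2, R=6371000.0):
    """Great-circle distance in meters between two WGS84 (lat, lon) points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

errors = [haversine_m(la, lo, gla, glo)
          for (la, lo), (gla, glo) in zip(system_output, ground_truth)]
assert sum(e < 50.0 for e in errors) / len(errors) >= 0.80   # AC-1
assert sum(e < 20.0 for e in errors) / len(errors) >= 0.60   # AC-2
```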
Identify all potential weak points and problems. Address them and propose ways to solve them. Based on your findings, form a new solution draft in the same format.

If your finding requires a complete reorganization of the flow and different components, state it.

Put all the findings regarding what was weak and poor at the beginning of the report. Include there all new findings and everything that was updated, replaced, or removed from the previous solution.

Then form a new solution design without referencing the previous system. Remove Poor and Very Poor component choices from the component analysis tables, but leave Good and Excellent ones.

In the updated report, do not put "new" marks and do not compare to the previous solution draft; just present the new solution as if designed from scratch.
@@ -1,370 +0,0 @@
Read carefully about the problem:

We have a lot of images taken from a wing-type UAV using a camera with at least Full HD resolution. Resolution of each photo could be up to 6200*4100 for the whole flight, but for other flights, it could be FullHD.
Photos are taken and named consecutively within 100 meters of each other.
We know only the starting GPS coordinates. We need to determine the GPS of the centers of each image. And also the coordinates of the center of any object in these photos. We can use an external satellite provider for ground checks on the existing photos.

The system has the following restrictions and conditions:
- Photos are taken by only airplane type UAVs.
- Photos are taken by the camera pointing downwards and fixed, but it is not autostabilized.
- The flying range is restricted by the eastern and southern parts of Ukraine (to the left of the Dnipro River)
- The image resolution could be from FullHD to 6252*4168. Camera parameters are known: focal length, sensor width, resolution and so on.
- Altitude is predefined and no more than 1km. The height of the terrain can be neglected.
- There is NO data from IMU
- Flights are done mostly in sunny weather
- We can use satellite providers, but we're limited right now to Google Maps, which could be outdated for some regions
- Number of photos could be up to 3000, usually in the 500-1500 range
- During the flight, UAVs can make sharp turns, so that the next photo may be absolutely different from the previous one (no same objects), but it is rather an exception than the rule
- Processing is done on a stationary computer or laptop with an NVidia GPU, at least RTX2060, better 3070. (For the UAV solution a Jetson Orin Nano would be used, but that is out of scope.)

Output of the system should address the following acceptance criteria:
- The system should find out the GPS of centers of 80% of the photos from the flight within an error of no more than 50 meters in comparison to the real GPS
- The system should find out the GPS of centers of 60% of the photos from the flight within an error of no more than 20 meters in comparison to the real GPS
- The system should correctly continue the work even in the presence of up to 350 meters of an outlier photo between 2 consecutive pictures en route. This could happen due to tilt of the plane.
- The system should correctly continue the work even during sharp turns, where the next photo doesn't overlap at all, or overlaps in less than 5%. The next photo should be in less than 150m drift and at an angle of less than 50%
- The number of outliers during the satellite provider images ground check should be less than 10%
- The system should try to operate when the UAV has made a sharp turn and all subsequent photos have no common points with the previous route. In that situation, the system should try to figure out the location of the new piece of the route and connect it to the previous route. There could be more than 2 such separate chunks, so this strategy should be at the core of the system
- If the system is absolutely incapable of determining the GPS of the next, second-next, and third-next images by any means (these 20% of the route), then it should ask the user for input for the next image, so that the user can specify the location
- Less than 5 seconds for processing one image
- Results of image processing should appear immediately to the user, so that the user shouldn't wait for the whole route to complete in order to analyze first results. Also, the system could refine existing calculated results and send refined results again to the user
- Image Registration Rate > 95%. The system can find enough matching features to confidently calculate the camera's 6-DoF pose (position and orientation) and "stitch" that image into the final trajectory
- Mean Reprojection Error (MRE) < 1.0 pixels. The distance, in pixels, between the original pixel location of the object and the re-projected pixel location.
Here is a solution draft:
|
||||
|
||||
## **The ATLAS-GEOFUSE System Architecture**

ATLAS-GEOFUSE is a multi-component architecture designed for high-performance, real-time geolocalization in IMU-denied, high-drift environments. Its architecture is explicitly designed around **pre-flight data caching** and **multi-map robustness**.

### **2.1 Core Design Principles**

1. **Pre-Flight Caching:** To meet the <5s (AC-7) real-time requirement, all network latency must be eliminated. The system mandates a "Pre-Flight" step (Section 3.0) where all geospatial data (satellite tiles, DEMs, vector data) for the Area of Interest (AOI) is downloaded from a viable open-source provider (e.g., Copernicus 6) and stored in a local database on the processing laptop. All real-time queries are made against this local cache.

2. **Decoupled Multi-Map SLAM:** The system separates *relative* motion from *absolute* scale. A Visual SLAM (V-SLAM) "Atlas" Front-End (Section 4.0) computes high-frequency, robust, but *unscaled* relative motion. A Local Geospatial Anchoring Back-End (GAB) (Section 5.0) provides sparse, high-confidence, *absolute metric* anchors by querying the local cache. A Trajectory Optimization Hub (TOH) (Section 6.0) fuses these two streams in a Sim(3) pose-graph to solve for the global 7-DoF trajectory (pose + scale).

3. **Multi-Map Robustness (Atlas):** To solve the "sharp turn" (AC-4) and "tracking loss" (AC-6) requirements, the V-SLAM front-end is based on an "Atlas" architecture.14 Tracking loss initiates a *new, independent map fragment*.13 The TOH is responsible for anchoring and merging *all* fragments geodetically 19 into a single, globally-consistent trajectory.

### **2.2 Component Interaction and Data Flow**

* **Component 1: Pre-Flight Caching Module (PCM) (Offline)**

  * *Input:* User-defined Area of Interest (AOI) (e.g., a KML polygon).

  * *Action:* Queries the Copernicus 6 and OpenStreetMap APIs. Downloads and builds a local geospatial database (GeoPackage/SpatiaLite) containing satellite tiles, DEM tiles, and road/river vectors for the AOI.

  * *Output:* A single, self-contained **Local Geo-Database file**.

* **Component 2: Image Ingestion & Pre-processing (Real-time)**

  * *Input:* Image_N (up to 6.2K), Camera Intrinsics ($K$).

  * *Action:* Creates two copies:

    * **Image_N_LR** (Low-Resolution, e.g., 1536x1024): Dispatched *immediately* to the V-SLAM Front-End.

    * **Image_N_HR** (High-Resolution, 6.2K): Stored for asynchronous use by the GAB.

* **Component 3: V-SLAM "Atlas" Front-End (High-Frequency Thread)**

  * *Input:* Image_N_LR.

  * *Action:* Tracks Image_N_LR against its *active map fragment*. Manages keyframes, local bundle adjustment 38, and the co-visibility graph. If tracking is lost (e.g., an AC-4 sharp turn), it initializes a *new map fragment* 14 and continues tracking.

  * *Output:* **Relative_Unscaled_Pose** and **Local_Point_Cloud** data, sent to the TOH.

* **Component 4: Local Geospatial Anchoring Back-End (GAB) (Low-Frequency, Asynchronous Thread)**

  * *Input:* A keyframe (Image_N_HR) and its *unscaled* pose, triggered by the TOH.

  * *Action:* Performs a visual-only, coarse-to-fine search 34 against the *Local Geo-Database*.

  * *Output:* An **Absolute_Metric_Anchor** (a high-confidence [Lat, Lon, Alt] pose) for that keyframe, sent to the TOH.

* **Component 5: Trajectory Optimization Hub (TOH) (Central Hub Thread)**

  * *Input:* (1) The high-frequency Relative_Unscaled_Pose stream. (2) The low-frequency Absolute_Metric_Anchor stream.

  * *Action:* Manages the complete flight trajectory as a **Sim(3) pose graph** 39 using Ceres Solver.19 Continuously fuses all data.

  * *Output 1 (Real-time):* **Pose_N_Est** (unscaled), sent to the UI (meets AC-7, AC-8).

  * *Output 2 (Refined):* **Pose_N_Refined** (metric-scale, globally-optimized), sent to the UI (meets AC-1, AC-2, AC-8).

### **2.3 System Inputs**

1. **Image Sequence:** Consecutively named images (FullHD to 6252x4168).

2. **Start Coordinate (Image 0):** A single, absolute GPS coordinate (Latitude, Longitude).

3. **Camera Intrinsics ($K$):** Pre-calibrated camera intrinsic matrix.

4. **Local Geo-Database File:** The single file generated by the Pre-Flight Caching Module (Section 3.0).

### **2.4 Streaming Outputs (Meets AC-7, AC-8)**

1. **Initial Pose ($Pose_N^{Est}$):** An *unscaled* pose estimate. This is sent immediately (<5s, AC-7) to the UI for real-time visualization of the UAV's *path shape*.

2. **Refined Pose ($Pose_N^{Refined}$) [Asynchronous]:** A globally-optimized, *metric-scale* 7-DoF pose (X, Y, Z, Qx, Qy, Qz, Qw) and its corresponding [Lat, Lon, Alt] coordinate. This is sent to the user whenever the TOH re-converges (e.g., after a new GAB anchor or map merge), updating all past poses (AC-1, AC-2, AC-8 refinement met).

## **3.0 Pre-Flight Component: The Geospatial Caching Module (PCM)**

This component is a new, mandatory, pre-flight utility that solves the fatal flaws (Sections 1.1, 1.2) of the GEORTEX-R design. It eliminates all real-time network latency (AC-7) and all ToS violations (AC-5), ensuring the project is both performant and legally viable.

### **3.1 Defining the Area of Interest (AOI)**

The system is designed for long-range flights. Given 3000 photos at 100m intervals, the maximum linear track is 300km. The user must provide a coarse bounding box or polygon (e.g., KML/GeoJSON format) of the intended flight area. The PCM will automatically add a generous buffer (e.g., 20km) to this AOI to account for navigational drift and ensure all necessary reference data is captured.

### **3.2 Legal & Viable Data Sources (Copernicus & OpenStreetMap)**

As established in Section 1.1, the system *must* use open-data providers. The PCM is architected to use the following:

1. **Visual/Terrain Data (Primary):** The **Copernicus Data Space Ecosystem** 6 is the primary source. The PCM will use the Copernicus Processing and Catalogue APIs 6 to query, process, and download two key products for the buffered AOI:

   * **Sentinel-2 Satellite Imagery:** High-resolution (10m) visual tiles.

   * **Copernicus GLO-30 DEM:** A 30m-resolution Digital Elevation Model.7 This DEM is *not* used for high-accuracy object localization (see Section 1.4), but as a coarse altitude *prior* for the TOH and for the critical dynamic-warping step (Section 5.3).

2. **Semantic Data (Secondary):** OpenStreetMap (OSM) data 40 for the AOI will be downloaded. This provides temporally-invariant vector data (roads, rivers, building footprints) which can be used as a secondary, optional verification layer for the GAB, especially in cases of extreme temporal divergence (e.g., new construction).42

### **3.3 Building the Local Geo-Database**

The PCM utility will process all downloaded data into a single, efficient, compressed file. A modern GeoPackage or SpatiaLite database is the ideal format. This database will contain the satellite tiles, DEM tiles, and vector features, all indexed by a common spatial grid (e.g., UTM).

This single file is then loaded by the main ATLAS-GEOFUSE application at runtime. The GAB's (Section 5.0) "API calls" are thus transformed from high-latency, unreliable HTTP requests 9 into high-speed, zero-latency local SQL queries (see the sketch below), guaranteeing that data I/O is never the bottleneck for meeting the AC-7 performance requirement.

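A minimal sketch of such a local lookup, assuming the PCM stores imagery as a standard GeoPackage tile pyramid (the `zoom_level`/`tile_column`/`tile_row`/`tile_data` columns are the GeoPackage tile schema; the table and file names are illustrative):

```python
import sqlite3

db = sqlite3.connect("aoi_cache.gpkg")  # a GeoPackage is a plain SQLite file

def fetch_tile(zoom: int, col: int, row: int):
    """Zero-latency replacement for a satellite-provider HTTP request."""
    cur = db.execute(
        "SELECT tile_data FROM sat_tiles "
        "WHERE zoom_level = ? AND tile_column = ? AND tile_row = ?",
        (zoom, col, row),
    )
    hit = cur.fetchone()
    return hit[0] if hit else None  # raw PNG/JPEG bytes, or None if outside the AOI

tile_bytes = fetch_tile(zoom=15, col=19137, row=11234)
```
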
## **4.0 Core Component: The Multi-Map V-SLAM "Atlas" Front-End**

This component's sole task is to robustly and accurately compute the *unscaled* 6-DoF relative motion of the UAV and build a geometrically-consistent map of keyframes. It is explicitly designed to be more robust than simple frame-to-frame odometry and to handle catastrophic tracking loss (AC-4) gracefully.

### **4.1 Rationale: ORB-SLAM3 "Atlas" Architecture**

The system will implement a V-SLAM front-end based on the "Atlas" multi-map paradigm, as seen in SOTA systems like ORB-SLAM3.14 This is the industry-standard solution for robust, long-term navigation in environments where tracking loss is possible.13

The mechanism is as follows:

1. The system initializes and begins tracking on **Map_Fragment_0**, using the known start GPS as a metadata tag.

2. It tracks all new frames (Image_N_LR) against this active map.

3. **If tracking is lost** (e.g., a sharp turn (AC-4) or a persistent 350m outlier (AC-3)):

   * The "Atlas" architecture does not fail. It declares Map_Fragment_0 "inactive," stores it, and *immediately initializes* **Map_Fragment_1** from the current frame.14

   * Tracking *resumes instantly* on this new map fragment, ensuring the system "correctly continues the work" (AC-4).

This architecture converts the "sharp turn" failure case into a *standard operating procedure*. The system never "fails"; it simply fragments. The burden of stitching these fragments together is correctly moved from the V-SLAM front-end (which has no global context) to the TOH (Section 6.0), which *can* solve it using global metric anchors.

### **4.2 Feature Matching Sub-System: SuperPoint + LightGlue**

The V-SLAM front-end's success depends entirely on high-quality feature matches, especially in the sparse, low-texture agricultural terrain seen in the user's images. The selected approach is **SuperPoint + LightGlue**.

* **SuperPoint:** A SOTA feature detector proven to find robust, repeatable keypoints in challenging, low-texture conditions.43

* **LightGlue:** A highly optimized GNN-based matcher that is the successor to SuperGlue.44

The choice of LightGlue over SuperGlue is a deliberate performance optimization. LightGlue is *adaptive*.46 The user query states that sharp turns (AC-4) are "rather an exception." This implies ~95% of image pairs are "easy" (high-overlap, straight flight) and 5% are "hard" (low-overlap, turns). LightGlue's adaptive-depth GNN exits early on "easy" pairs, returning a high-confidence match in a fraction of the time. This saves an *enormous* computational budget on the 95% of normal frames, ensuring the system *always* meets the <5s budget (AC-7) and reserving that compute for the GAB and TOH. This component will run on **Image_N_LR** (low-res) to guarantee performance, and will be accelerated via TensorRT (Section 7.0).

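A minimal sketch of this matching step, following the public cvg/LightGlue PyTorch API (paths are illustrative; in production the networks would run as TensorRT engines per Section 7.2):

```python
import torch
from lightglue import LightGlue, SuperPoint
from lightglue.utils import load_image, rbd

device = torch.device("cuda")
extractor = SuperPoint(max_num_keypoints=2048).eval().to(device)
matcher = LightGlue(features="superpoint").eval().to(device)  # adaptive depth enabled by default

img0 = load_image("frames/AD000010_lr.jpg").to(device)  # low-res copies (Image_N_LR)
img1 = load_image("frames/AD000011_lr.jpg").to(device)

feats0 = extractor.extract(img0)
feats1 = extractor.extract(img1)
matches01 = matcher({"image0": feats0, "image1": feats1})
feats0, feats1, matches01 = [rbd(x) for x in (feats0, feats1, matches01)]  # drop batch dim

m = matches01["matches"]             # (K, 2) index pairs into the two keypoint sets
pts0 = feats0["keypoints"][m[:, 0]]  # matched 2D points in frame N
pts1 = feats1["keypoints"][m[:, 1]]  # matched 2D points in frame N+1
```

On easy, high-overlap pairs the matcher exits after only a few GNN layers, which is exactly the adaptive behavior the <5s budget relies on.
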
### **4.3 Keyframe Management and Local 3D Cloud**

The front-end will maintain a co-visibility graph of keyframes for its *active map fragment*. It will perform local Bundle Adjustment 38 continuously over a sliding window of recent keyframes to minimize drift *within* that fragment.

Crucially, it will triangulate features to create a **local, high-density 3D point cloud** for its map fragment.28 This point cloud is essential for two reasons:

1. It provides robust tracking (tracking against a 3D map, not just a 2D frame).

2. It serves as the **high-accuracy source** for the object localization output (Section 9.1), as established in Section 1.4, allowing the system to bypass the high-error external DEM.

#### **Table 1: Analysis of State-of-the-Art Feature Matchers (For V-SLAM Front-End)**

| Approach (Tools/Library) | Advantages | Limitations | Requirements | Fitness for Problem Component |
| :---- | :---- | :---- | :---- | :---- |
| **SuperPoint + SuperGlue** | - SOTA robustness in low-texture, high-blur conditions. - GNN reasons about 3D scene context. - Proven in real-time SLAM systems. | - Computationally heavy (fixed-depth GNN). - Slower than LightGlue. | - NVIDIA GPU (RTX 2060+). - PyTorch or TensorRT. | **Good.** A solid baseline choice. Meets robustness needs but will heavily tax the <5s time budget (AC-7). |
| **SuperPoint + LightGlue** 44 | - **Adaptive Depth:** Faster on "easy" pairs, more accurate on "hard" pairs.46 - **Faster & Lighter:** Outperforms SuperGlue on speed and accuracy. - SOTA "in practice" choice for large-scale matching. | - Newer, but rapidly being adopted and proven.48 | - NVIDIA GPU (RTX 2060+). - PyTorch or TensorRT. | **Excellent (Selected).** The adaptive nature is a *perfect* fit for this problem: it saves compute on the ~95% of easy (straight-flight) frames, maximizing the ability to meet AC-7. |

## **5.0 Core Component: The Local Geospatial Anchoring Back-End (GAB)**

This asynchronous component is the system's "anchor to reality." Its sole purpose is to find a high-confidence, *absolute-metric* pose for a given V-SLAM keyframe by matching it against the **local, pre-cached geo-database** (from Section 3.0). This component is a full replacement for the high-risk, high-latency GAB from the GEORTEX-R draft (see Sections 1.2, 1.5).

### **5.1 Rationale: Local-First Query vs. On-Demand API**

As established in Section 1.2, all queries are made to the local SSD. This guarantees zero-latency I/O, which is a hard requirement for a real-time system, as external network latency is unacceptably high and variable.9 The GAB itself runs asynchronously and can take longer than 5s (e.g., 10-15s), but it must not be *blocked* by network I/O, which would stall the entire processing pipeline.

### **5.2 SOTA Visual-Only Coarse-to-Fine Localization**

This component implements a state-of-the-art, two-stage, *visual-only* pipeline, which is lower-risk and more performant (see Section 1.5) than GEORTEX-R's semantic-hybrid model. This approach is well supported by SOTA research in aerial localization.34

1. **Stage 1 (Coarse): Global Descriptor Retrieval.**

   * *Action:* When the TOH requests an anchor for Keyframe_k, the GAB first computes a *global descriptor* (a compact vector representation) for the *nadir-warped* (see Section 5.3) low-resolution Image_k_LR.

   * *Technology:* A SOTA Visual Place Recognition (VPR) model like **SALAD** 49, **TransVLAD** 50, or **NetVLAD** 33 will be used. These are designed for exactly this "image retrieval" task.45

   * *Result:* This descriptor is used to perform a fast FAISS vector search against the descriptors of the *local satellite tiles* (which were pre-computed and stored in the Geo-Database). This returns the Top-K (e.g., K=5) most likely satellite tiles in milliseconds.

2. **Stage 2 (Fine): Local Feature Matching.**

   * *Action:* The system runs **SuperPoint+LightGlue** 43 to find pixel-level correspondences.

   * *Performance:* This is *not* run on the *full* UAV image against the *full* satellite map. It is run *only* between high-resolution patches (from **Image_k_HR**) and the **Top-K satellite tiles** identified in Stage 1.

   * *Result:* This produces a set of 2D-2D (image-to-map) feature matches. A PnP/RANSAC solver then computes a high-confidence 6-DoF pose (see the sketch after this list). This pose is the **Absolute_Metric_Anchor** that is sent to the TOH.

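A minimal sketch of the Stage 2 pose solve using OpenCV's solvePnPRansac (the correspondences are synthesized here so the sketch runs stand-alone; in the real pipeline they come from the patch-to-tile matches):

```python
import cv2
import numpy as np

K = np.array([[4000.0, 0.0, 3126.0],   # illustrative 6.2K intrinsics
              [0.0, 4000.0, 2084.0],
              [0.0, 0.0, 1.0]])

# Synthetic stand-ins for the Stage 2 matches: 3D points on the geo-referenced
# tile's ground plane (local ENU meters) and their pixels in the UAV keyframe.
rng = np.random.default_rng(0)
obj_pts = np.c_[rng.uniform(-300, 300, (100, 2)), np.zeros(100)]
rvec_true = np.array([0.05, -0.03, 0.40])          # mild roll/pitch, some yaw
tvec_true = np.array([10.0, -20.0, 800.0])         # ~800 m altitude
proj, _ = cv2.projectPoints(obj_pts, rvec_true, tvec_true, K, None)
img_pts = proj.reshape(-1, 2) + rng.normal(0.0, 0.5, (100, 2))  # 0.5 px noise

ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    obj_pts, img_pts, K, None,
    reprojectionError=3.0,        # px; tolerant of slight ortho-tile misalignment
    iterationsCount=1000,
)
R, _ = cv2.Rodrigues(rvec)
cam_center_enu = (-R.T @ tvec).ravel()   # camera position in the tile's ENU frame
# Converting cam_center_enu to [Lat, Lon, Alt] yields the Absolute_Metric_Anchor.
```
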
### **5.3 Solving the Viewpoint Gap: Dynamic Feature Warping**

The GAB must solve the "viewpoint gap" 33: the UAV image is oblique (due to roll/pitch), while the satellite tiles are nadir (top-down).

The GEORTEX-R draft proposed a complex, high-risk deep learning solution. The ATLAS-GEOFUSE solution is far simpler and requires zero R&D:

1. The V-SLAM Front-End (Section 4.0) already *knows* the camera's *relative* 6-DoF pose, including its **roll and pitch** orientation relative to the *local map's ground plane*.

2. The *Local Geo-Database* (Section 3.0) contains a 30m-resolution DEM for the AOI.

3. When the GAB processes Keyframe_k, it *first* performs a **dynamic homography warp**. It projects the V-SLAM ground plane onto the coarse DEM, and then uses the known camera roll/pitch to calculate the perspective transform (homography) needed to *un-distort* the oblique UAV image into a synthetic *nadir view*.

This *nadir-warped* UAV image is then used in the coarse-to-fine pipeline (Section 5.2). It will now match the *nadir* satellite tiles with high fidelity. This method *eliminates* the viewpoint gap *without* training any new neural networks, leveraging the inherent synergy between the V-SLAM component and the GAB's pre-cached DEM.

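A minimal sketch of the warp itself, under the pure-rotation approximation (reasonable here because the ground is far relative to the camera's displacement during one frame; axis conventions and angle sources are illustrative):

```python
import cv2
import numpy as np

def nadir_warp(img, K, roll, pitch):
    """Remove the camera's estimated roll/pitch to synthesize a nadir view.

    For a rotation R about the camera center, pixels map as
    x' = K @ R @ inv(K) @ x, so applying the inverse of the estimated
    attitude rotation 'levels' the oblique image.
    """
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])   # pitch
    Ry = np.array([[cr, 0, sr], [0, 1, 0], [-sr, 0, cr]])   # roll
    R = Ry @ Rx                                             # V-SLAM attitude estimate
    H = K @ R.T @ np.linalg.inv(K)                          # inverse-rotation homography
    return cv2.warpPerspective(img, H, (img.shape[1], img.shape[0]))

# nadir = nadir_warp(image_k_lr, K_lr, roll_est, pitch_est)  # attitude from V-SLAM
```
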
## **6.0 Core Component: The Multi-Map Trajectory Optimization Hub (TOH)**

This component is the system's central "brain." It runs continuously, fusing all measurements (high-frequency/unscaled V-SLAM poses, low-frequency/metric-scale GAB anchors) from *all map fragments* into a single, globally consistent trajectory.

### **6.1 Incremental Sim(3) Pose-Graph Optimization**

The central challenge of monocular, IMU-denied SLAM is scale drift. The V-SLAM front-end produces *unscaled* 6-DoF ($SE(3)$) relative poses.37 The GAB produces *metric-scale* 6-DoF ($SE(3)$) *absolute* poses. These cannot be directly combined.

The solution is that the graph *must* be optimized in **Sim(3) (7-DoF)**.39 This adds a *single global scale factor $s$* as an optimizable variable to each V-SLAM map fragment. The TOH will maintain a pose-graph using **Ceres Solver** 19, a SOTA optimization library.

The graph is constructed as follows:

1. **Nodes:** Each keyframe pose (7-DoF: position $X, Y, Z$, an orientation quaternion, and scale $s$).

2. **Edge 1 (V-SLAM):** A relative pose constraint between Keyframe_i and Keyframe_j *within the same map fragment*. The error is computed in Sim(3).29

3. **Edge 2 (GAB):** An *absolute* pose constraint on Keyframe_k. This constraint *fixes* Keyframe_k's pose to the *metric* GPS coordinate from the GAB anchor and *fixes its scale $s$ to 1.0*.

The GAB's $s=1.0$ anchor creates "tension" in the graph. The Ceres optimizer 20 resolves this tension by finding the *one* global scale $s$ for all *other* V-SLAM nodes in that fragment that minimizes the total error. This effectively "stretches" or "shrinks" the entire unscaled V-SLAM fragment to fit the metric anchors, which is the core of monocular SLAM scale-drift correction (see the toy sketch below).29

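A toy numpy illustration of that scale-recovery intuition: two GAB anchors pin keyframes 10 and 50 of an unscaled fragment to metric ENU coordinates, and aligning the fragment to them recovers the global scale. (The real TOH solves this jointly with rotation and all other edges inside Ceres; the numbers below are fabricated for the sketch.)

```python
import numpy as np

# Unscaled V-SLAM keyframe positions: a straight track in arbitrary map units.
kf_pos = np.cumsum(np.tile([1.0, 0.2, 0.0], (60, 1)), axis=0)

# Two metric anchors from the GAB (local ENU, meters).
anchors = {10: np.array([875.0, 175.0, 0.0]),
           50: np.array([4375.0, 875.0, 0.0])}

i, j = 10, 50
s = np.linalg.norm(anchors[j] - anchors[i]) / np.linalg.norm(kf_pos[j] - kf_pos[i])
t = anchors[i] - s * kf_pos[i]        # rotation omitted: this toy track is already aligned
metric_pos = s * kf_pos + t           # the whole fragment "stretched" onto the anchors

print(round(s, 1))  # 87.5 -> one V-SLAM map unit corresponds to ~87.5 meters
```
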
### **6.2 Geodetic Map-Merging via Absolute Anchors**

This is the robust solution to the "sharp turn" (AC-4) problem, replacing the flawed "relocalization" model from the original draft.

* **Scenario:** The UAV makes a sharp turn (AC-4). The V-SLAM front-end *loses tracking* on Map_Fragment_0 and *creates* Map_Fragment_1 (per Section 4.1). The TOH's pose graph now contains *two disconnected components*.

* **Mechanism (Geodetic Merging):**

  1. The GAB (Section 5.0) is *queued* to find anchors for keyframes in *both* fragments.

  2. The GAB returns Anchor_A for Keyframe_10 (in Map_Fragment_0) with GPS [Lat_A, Lon_A].

  3. The GAB returns Anchor_B for Keyframe_50 (in Map_Fragment_1) with GPS [Lat_B, Lon_B].

  4. The TOH adds *both* of these as absolute, metric constraints (Edge 2) to the global pose-graph.

* The graph optimizer 20 now has all the information it needs. It solves for the 7-DoF pose of *both fragments*, placing them in their correct, globally-consistent metric positions. The two fragments are *merged geodetically* (i.e., by their global coordinates) even if they *never* visually overlap. This is a vastly more robust and modern solution than simple visual loop closure.19

### **6.3 Automatic Outlier Rejection (AC-3, AC-5)**

The system must be robust to 350m outliers (AC-3) and <10% bad GAB matches (AC-5). A standard least-squares optimizer (like Ceres 20) would be catastrophically corrupted by a 350m error.

This is a solved problem in modern graph optimization.19 The solution is to wrap *all* constraints (V-SLAM and GAB) in a **robust loss function (e.g., HuberLoss, CauchyLoss)** within Ceres Solver.

A robust loss function mathematically *down-weights* the influence of constraints with large errors (high residuals). When the TOH "sees" the 350m error from a V-SLAM relative pose (AC-3) or a bad GAB anchor (AC-5), the robust loss function acknowledges the measurement but *refuses* to pull the entire 3000-image trajectory to fit this one "insane" data point. It automatically and gracefully *ignores* the outlier, optimizing the 99.9% of "sane" measurements, thus meeting AC-3 and AC-5. The toy fit below illustrates the effect.

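A toy illustration of robust down-weighting (scipy's Huber loss standing in for Ceres's HuberLoss): a straight-line trajectory is fit to 1-D positions containing one 350 m outlier. The plain least-squares fit is dragged off course; the robust fit is not.

```python
import numpy as np
from scipy.optimize import least_squares

t = np.arange(30, dtype=float)
pos = 25.0 * t + np.random.default_rng(1).normal(0.0, 2.0, 30)  # ~25 m/frame track
pos[15] += 350.0                                                # the AC-3 outlier

resid = lambda p: p[0] * t + p[1] - pos
plain = least_squares(resid, x0=[1.0, 0.0])                      # corrupted by outlier
robust = least_squares(resid, x0=[1.0, 0.0], loss="huber", f_scale=5.0)

print(f"plain slope:  {plain.x[0]:.1f} m/frame")   # biased by the 350 m spike
print(f"robust slope: {robust.x[0]:.1f} m/frame")  # stays near the true 25 m/frame
```
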
### **Table 2: Analysis of Trajectory Optimization Strategies**

| Approach (Tools/Library) | Advantages | Limitations | Requirements | Fitness for Problem Component |
| :---- | :---- | :---- | :---- | :---- |
| **Incremental SLAM (Pose-Graph Optimization)** (Ceres Solver 19, g2o, GTSAM) | - **Real-time / Online:** Provides immediate pose estimates (AC-7). - **Supports Refinement:** Explicitly designed to refine past poses when new "loop closure" (GAB) data arrives (AC-8). - **Robust:** Can handle outliers via robust kernels.19 | - Initial estimate is *unscaled* until a GAB anchor arrives. - Can drift *if* not anchored. | - A graph optimization library (Ceres). - A robust cost function (Huber). | **Excellent (Selected).** This is the *only* architecture that satisfies all user requirements for real-time streaming (AC-7) and asynchronous refinement (AC-8). |
| **Batch Structure from Motion (Global Bundle Adjustment)** (COLMAP, Agisoft Metashape) | - **Globally Optimal Accuracy:** Produces the most accurate possible 3D reconstruction. | - **Offline:** Cannot run in real-time or stream results. - High computational cost (minutes to hours). - Fails AC-7 and AC-8 completely. | - All images must be available before processing starts. - High RAM and CPU. | **Good (as an *Optional* Post-Processing Step).** Unsuitable as the primary online system, but could be offered as an optional, high-accuracy "Finalize Trajectory" batch process. |

## **7.0 High-Performance Compute & Deployment**

The system must run on an RTX 2060 (AC-7) while processing 6.2K images. These are opposing constraints that require a deliberate compute strategy to balance speed and accuracy.

### **7.1 Multi-Scale, Coarse-to-Fine Processing Pipeline**

The system must balance the conflicting demands of real-time speed (AC-7) and high accuracy (AC-2). This is achieved by running different components at different resolutions.

* **V-SLAM Front-End (Real-time, <5s):** This component (Section 4.0) runs *only* on the **Image_N_LR** (e.g., 1536x1024) copy. This is fast enough to meet the AC-7 budget.46

* **GAB (Asynchronous, High-Accuracy):** This component (Section 5.0) uses the full-resolution **Image_N_HR** *selectively* to meet the 20m accuracy target (AC-2):

  1. Stage 1 (Coarse) runs on the low-res, nadir-warped image.

  2. Stage 2 (Fine) runs SuperPoint on the *full 6.2K* image to find the *most confident* keypoints. It then extracts small 256x256 *patches* from the *full-resolution* image, centered on these keypoints.

  3. It matches *these small, full-resolution patches* against the high-res satellite tile.

This hybrid, multi-scale method provides the fine-grained matching accuracy of the 6.2K image (needed for AC-2) without the catastrophic CUDA out-of-memory errors (an RTX 2060 has only 6GB of VRAM 30) or performance penalties that full-resolution processing would entail.

### **7.2 Mandatory Deployment: NVIDIA TensorRT Acceleration**

The deep learning models (SuperPoint, LightGlue, NetVLAD) would be too slow in their native PyTorch framework to meet AC-7 on an RTX 2060.

This is not an "optional" optimization; it is a *mandatory* deployment step. The key neural networks *must* be converted from PyTorch into highly-optimized **NVIDIA TensorRT engines**.

Research specifically on accelerating LightGlue with TensorRT shows **"2x-4x speed gains over compiled PyTorch"**.48 Other benchmarks confirm TensorRT provides 30-70% speedups for deep learning inference.52 This conversion (which applies layer fusion, graph optimization, and FP16/INT8 precision) is what makes achieving the <5s (AC-7) performance *possible* on the specified RTX 2060 hardware.

## **8.0 System Robustness: Failure Mode Escalation Logic**

This logic defines the system's behavior during real-world failures, ensuring it meets criteria AC-3, AC-4, AC-6, and AC-9, and is built upon the new "Atlas" multi-map architecture.

### **8.1 Stage 1: Normal Operation (Tracking)**

* **Condition:** The V-SLAM front-end (Section 4.0) is healthy.

* **Logic:**

  1. V-SLAM successfully tracks Image_N_LR against its *active map fragment*.

  2. A new **Relative_Unscaled_Pose** is sent to the TOH (Section 6.0).

  3. The TOH sends **Pose_N_Est** (unscaled) to the user (AC-7, AC-8 met).

  4. If Image_N is selected as a keyframe, the GAB (Section 5.0) is *queued* to find an anchor for it, which will trigger a **Pose_N_Refined** update later.

### **8.2 Stage 2: Transient VO Failure (Outlier Rejection)**

* **Condition:** Image_N is unusable (e.g., severe blur, sun glare, or the 350m outlier from AC-3).

* **Logic (Frame Skipping):**

  1. The V-SLAM front-end fails to track Image_N_LR against the active map.

  2. The system *discards* Image_N (marking it as a rejected outlier, AC-5).

  3. When Image_N+1 arrives, the V-SLAM front-end attempts to track it against the *same* local keyframe map (from Image_N-1).

  4. **If successful:** Tracking resumes. Image_N is officially an outlier. The system "correctly continues the work" (AC-3 met).

  5. **If it fails:** The system repeats for Image_N+2 and Image_N+3. If this fails for ~5 consecutive frames, it escalates to Stage 3.

### **8.3 Stage 3: Persistent VO Failure (New Map Initialization)**

* **Condition:** Tracking is lost for multiple frames. This is the **"sharp turn"** or "low overlap" scenario (AC-4).

* **Logic (Atlas Multi-Map):**

  1. The V-SLAM front-end (Section 4.0) declares "Tracking Lost."

  2. It marks the current Map_Fragment_k as "inactive".13

  3. It *immediately* initializes a **new** Map_Fragment_k+1 from the current frame (Image_N+5).

  4. **Tracking resumes instantly** on this new, unscaled, un-anchored map fragment.

  5. This "registering" of a new map ensures the system "correctly continues the work" (AC-4 met) and maintains the >95% registration rate (AC-9) by not counting this as a failure.

### **8.4 Stage 4: Map-Merging & Global Relocalization (GAB-Assisted)**

* **Condition:** The system is now tracking on Map_Fragment_k+1, while Map_Fragment_k is inactive. The TOH pose-graph (Section 6.0) is disconnected.

* **Logic (Geodetic Merging):**

  1. The TOH queues the GAB (Section 5.0) to find anchors for *both* map fragments.

  2. The GAB finds anchors for keyframes in *both* fragments.

  3. The TOH (Section 6.2) receives these metric anchors, adds them to the graph, and the Ceres optimizer 20 *finds the global 7-DoF pose for both fragments*, merging them into a single, metrically-consistent trajectory.

### **8.5 Stage 5: Catastrophic Failure (User Intervention)**

* **Condition:** The system is in Stage 3 (Lost), *and* the GAB (Section 5.0) has *also* failed to find *any* global anchors for the new Map_Fragment_k+1 for a prolonged period (e.g., 20% of the route). This is the "absolutely incapable" scenario (AC-6), e.g., flying over a large, featureless body of water or dense, uniform fog.

* **Logic:**

  1. The system has an *unscaled, un-anchored* map fragment (Map_Fragment_k+1) and *zero* idea where it is in the world.

  2. The TOH triggers the AC-6 flag.

* **Resolution (User-Aided Prior):**

  1. The UI prompts the user: "Tracking lost. Please provide a coarse location for the *current* image."

  2. The user clicks *one point* on a map.

  3. This [Lat, Lon] is *not* taken as ground truth. It is fed to the **GAB (Section 5.0)** as a *strong spatial prior* for its *local database query* (Section 5.2).

  4. This narrows the GAB's Stage 1 search area from "the entire AOI" to "a 5km radius around the user's click." This effectively *guarantees* that the GAB will find the correct satellite tile, find a high-confidence **Absolute_Metric_Anchor**, and allow the TOH (Stage 4) to re-scale 29 and geodetically merge 20 the lost fragment, re-localizing the entire trajectory.

## **9.0 High-Accuracy Output Generation and Validation Strategy**

This section details how the final user-facing outputs are generated, specifically replacing the flawed "Ray-DEM" method (see Section 1.4) with a high-accuracy "Ray-Cloud" method to meet the 20m accuracy target (AC-2).

### **9.1 High-Accuracy Object Geolocalization via Ray-Cloud Intersection**

As established in Section 1.4, using an external 30m DEM 21 for object localization introduces uncontrollable errors (up to 4m+ 22) that make meeting the 20m (AC-2) accuracy goal impossible. The system *must* use its *own*, internally-generated 3D map, which is locally far more accurate.25

* **Inputs:**

  1. The user clicks pixel coordinate $(u,v)$ on Image_N.

  2. The system retrieves the **final, refined, metric 7-DoF Sim(3) pose** $P_{Sim(3)} = (s, R, T)$ for the *map fragment* that Image_N belongs to. This transform maps the *local V-SLAM coordinate system* to the *global metric coordinate system*.

  3. The system retrieves the *local, unscaled* **V-SLAM 3D point cloud** ($P_{\text{cloud}}$) generated by the Front-End (Section 4.3).

  4. The known camera intrinsic matrix $K$.

* **Algorithm (Ray-Cloud Intersection):**

  1. **Un-project Pixel:** The 2D pixel $(u,v)$ is un-projected into a 3D ray *direction* vector $d_{cam}$ in the camera's local coordinate system: $d_{cam} = K^{-1} \cdot [u, v, 1]^T$.

  2. **Transform Ray (Local):** This ray is transformed using the *local V-SLAM pose* of Image_N to get a ray in the *local map fragment's* coordinate system.

  3. **Intersect (Local):** The system performs a numerical *ray-mesh intersection* (or nearest-neighbor search) to find the 3D point $P_{local}$ where this local ray *intersects the local V-SLAM point cloud* ($P_{\text{cloud}}$).25 This $P_{local}$ is *highly accurate* relative to the V-SLAM map.26

  4. **Transform (Global):** This local 3D point $P_{local}$ is now transformed to the global, metric coordinate system using the 7-DoF Sim(3) transform from the TOH: $P_{metric} = s \cdot (R \cdot P_{local}) + T$.

  5. **Result:** This 3D intersection point $P_{metric}$ is the *metric* world coordinate of the object.

  6. **Convert:** This $(X, Y, Z)$ world coordinate is converted to a [Latitude, Longitude, Altitude] GPS coordinate.55

This method correctly isolates the error. The object's accuracy is now *only* dependent on the V-SLAM's geometric fidelity (AC-10, MRE < 1.0px) and the GAB's global anchoring (AC-1, AC-2). It *completely eliminates* the external 30m DEM error 22 from this critical, high-accuracy calculation.

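A compact numpy sketch of this algorithm, with a nearest-point-to-ray search standing in for the full ray-mesh intersection (all names are illustrative):

```python
import numpy as np

def pixel_to_metric(u, v, K, T_cam_local, cloud_local, sim3):
    """Ray-Cloud sketch: un-project pixel (u, v), hit the local V-SLAM cloud,
    then map the hit into metric coordinates via the fragment's Sim(3).

    T_cam_local: 4x4 camera-to-local-map pose of Image_N (from V-SLAM)
    cloud_local: (M, 3) unscaled V-SLAM point cloud of the fragment
    sim3:        (s, R, T) mapping local fragment coords -> global metric coords
    """
    d_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])     # ray direction, camera frame
    R_wc, origin = T_cam_local[:3, :3], T_cam_local[:3, 3]
    d = R_wc @ (d_cam / np.linalg.norm(d_cam))           # ray in the local map frame

    # Nearest cloud point to the ray (in front of the camera) approximates
    # the ray-cloud intersection for a dense cloud.
    rel = cloud_local - origin
    along = rel @ d
    dist2 = (rel * rel).sum(axis=1) - along ** 2         # squared distance to the ray
    P_local = cloud_local[np.argmin(np.where(along > 0, dist2, np.inf))]

    s, R, T = sim3
    P_metric = s * (R @ P_local) + T                     # fragment -> global metric
    return P_metric                                      # then convert to Lat/Lon/Alt
```
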
### **9.2 Rigorous Validation Methodology**

A comprehensive test plan is required to validate compliance with all 10 Acceptance Criteria. The foundation is a **Ground-Truth Test Harness** (e.g., using the provided coordinates.csv data).

* **Test Harness:**

  1. **Ground-Truth Data:** coordinates.csv provides ground-truth [Lat, Lon] for a set of images.

  2. **Test Datasets:**

     * Test_Baseline: The ground-truth images and coordinates.

     * Test_Outlier_350m (AC-3): Test_Baseline with a single, unrelated image inserted.

     * Test_Sharp_Turn_5pct (AC-4): A sequence where several frames are manually deleted to simulate <5% overlap.

     * Test_Long_Route (AC-9): A 1500-image sequence.

* **Test Cases:**

  * **Test_Accuracy (AC-1, AC-2, AC-5, AC-9):**

    * **Run:** Execute ATLAS-GEOFUSE on Test_Baseline, providing the first image's coordinate as the Start Coordinate.

    * **Script:** A validation script will compute the Haversine distance error between the *system's refined GPS output* ($Pose_N^{Refined}$) for each image and the *ground-truth GPS*.

    * **ASSERT** (count(errors < 50m) / total_images) >= 0.80 **(AC-1 Met)**

    * **ASSERT** (count(errors < 20m) / total_images) >= 0.60 **(AC-2 Met)**

    * **ASSERT** (count(un-localized_images) / total_images) < 0.10 **(AC-5 Met)**

    * **ASSERT** (count(localized_images) / total_images) > 0.95 **(AC-9 Met)**

  * **Test_MRE (AC-10):**

    * **Run:** After Test_Baseline completes.

    * **ASSERT** TOH.final_Mean_Reprojection_Error < 1.0 **(AC-10 Met)**

  * **Test_Performance (AC-7, AC-8):**

    * **Run:** Execute on Test_Long_Route on the minimum-spec RTX 2060.

    * **Log:** Log timestamps for "Image In" -> "Initial Pose Out" ($Pose_N^{Est}$).

    * **ASSERT** average_time < 5.0s **(AC-7 Met)**

    * **Log:** Log the output stream.

    * **ASSERT** >80% of images receive *two* poses: an "Initial" and a "Refined" **(AC-8 Met)**

  * **Test_Robustness (AC-3, AC-4, AC-6):**

    * **Run:** Execute Test_Outlier_350m.

    * **ASSERT** System logs "Stage 2: Discarding Outlier" or "Stage 3: New Map" *and* the final trajectory error for the *next* frame is < 50m **(AC-3 Met)**.

    * **Run:** Execute Test_Sharp_Turn_5pct.

    * **ASSERT** System logs "Stage 3: New Map Initialization" and "Stage 4: Geodetic Map-Merge," and the final trajectory is complete and accurate **(AC-4 Met)**.

    * **Run:** Execute on a sequence with no GAB anchors possible for 20% of the route.

    * **ASSERT** System logs "Stage 5: User Intervention Requested" **(AC-6 Met)**.

Identify all potential weak points and problems. Address them and find ways to solve them. Based on your findings, form a new solution draft in the same format.

If your findings require a complete reorganization of the flow and different components, state it.

Put all findings about what was weak or poor at the beginning of the report.

At the very beginning of the report, list the most profound changes you've made to the previous solution.

Then form a new solution design without referencing the previous system. Remove Poor and Very Poor component choices from the component analysis tables, but leave the Good and Excellent ones.

In the updated report, do not add "new" marks and do not compare to the previous solution draft; present the new solution as if designed from scratch.

@@ -1,379 +0,0 @@

Read carefully about the problem:

We have a lot of images taken from a wing-type UAV using a camera with at least Full HD resolution. Resolution of each photo could be up to 6200*4100 for the whole flight, but for other flights, it could be FullHD.

Photos are taken and named consecutively, within 100 meters of each other.

We know only the starting GPS coordinates. We need to determine the GPS of the centers of each image, and also the coordinates of the center of any object in these photos. We can use an external satellite provider for ground checks on the existing photos.

The system has the following restrictions and conditions:

- Photos are taken only by airplane-type UAVs.

- Photos are taken by the camera pointing downwards and fixed, but it is not autostabilized.

- The flying range is restricted to the eastern and southern parts of Ukraine (to the left of the Dnipro River).

- The image resolution could be from FullHD to 6252*4168. Camera parameters are known: focal length, sensor width, resolution, and so on.

- Altitude is predefined and no more than 1 km. The height of the terrain can be neglected.

- There is NO data from an IMU.

- Flights are done mostly in sunny weather.

- We can use satellite providers, but we're limited right now to Google Maps, which could be outdated for some regions.

- The number of photos could be up to 3000, usually in the 500-1500 range.

- During the flight, UAVs can make sharp turns, so that the next photo may be absolutely different from the previous one (no common objects), but that is the exception rather than the rule.

- Processing is done on a stationary computer or laptop with an NVidia GPU, at least RTX 2060, better RTX 3070. (For the on-UAV solution a Jetson Orin Nano would be used, but that is out of scope.)

The output of the system should address the following acceptance criteria:

- The system should find the GPS of the centers of 80% of the photos from the flight within an error of no more than 50 meters compared to the real GPS.

- The system should find the GPS of the centers of 60% of the photos from the flight within an error of no more than 20 meters compared to the real GPS.

- The system should correctly continue working even in the presence of up to 350 meters of an outlier photo between 2 consecutive pictures en route. This could happen due to tilt of the plane.

- The system should correctly continue working even during sharp turns, where the next photo doesn't overlap at all, or overlaps by less than 5%. The next photo will be within 200 m of drift and at an angle of less than 70%.

- The system should try to operate when the UAV has made a sharp turn and all subsequent photos have no common points with the previous route. In that situation, the system should try to figure out the location of the new piece of the route and connect it to the previous route. There may also be more than 2 such separate chunks, so this strategy should be at the core of the system.

- If the system is absolutely incapable of determining the GPS of the next, second-next, and third-next images by any means (these 20% of the route), it should ask the user for input for the next image, so that the user can specify the location.

- Less than 5 seconds for processing one image.

- Results of image processing should appear to the user immediately, so that the user needn't wait for the whole route to complete in order to analyze the first results. The system may also refine already-calculated results and send the refined results to the user again.

- Image Registration Rate > 95%. The system can find enough matching features to confidently calculate the camera's 6-DoF pose (position and orientation) and "stitch" that image into the final trajectory.

- Mean Reprojection Error (MRE) < 1.0 pixels. The distance, in pixels, between the original pixel location of the object and the re-projected pixel location.

- The whole system should work as a background service. The interaction should be done via ZeroMQ. The service should be up and running, awaiting the initial input message. On the input message, processing should start, and immediately after the first results, the system should provide them to the client.

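A minimal sketch of that required service interaction (socket addresses, message fields, and the `process_flight` pipeline wrapper are all illustrative assumptions): a REP socket accepts the initial job message, and a PUB socket streams each result as soon as it is available.

```python
import json
import zmq

ctx = zmq.Context()
control = ctx.socket(zmq.REP)      # awaits the initial input message
control.bind("tcp://*:5555")
results = ctx.socket(zmq.PUB)      # streams per-image results immediately
results.bind("tcp://*:5556")

while True:
    job = json.loads(control.recv())              # {"images": "<dir>", "start": [lat, lon]}
    control.send_json({"status": "accepted"})
    for name, lat, lon, refined in process_flight(job):  # hypothetical pipeline generator
        results.send_json({"image": name, "lat": lat, "lon": lon, "refined": refined})
```
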
Here is a solution draft:

# **ASTRAL System Architecture: A High-Fidelity Geopositioning Framework for IMU-Denied Aerial Operations**

## **2.0 The ASTRAL (Advanced Scale-Aware Trajectory-Refinement and Localization) System Architecture**

The ASTRAL architecture is a multi-map, decoupled, loosely-coupled system designed to solve the flaws identified in Section 1.0 and meet all 10 Acceptance Criteria.

### **2.1 Core Principles**

The ASTRAL architecture is built on three principles:

1. **Tiered Geospatial Database:** The system *cannot* rely on a single data source. It is architected around a *tiered* local database.

   * **Tier-1 (Baseline):** Google Maps data. This is used to meet the 50m (AC-1) requirement and provide baseline geolocalization.

   * **Tier-2 (High-Accuracy):** A framework for ingesting *commercial, sub-meter* data (visual 4 and DEM 5). This tier is *required* to meet the 20m (AC-2) accuracy. The system will *run* on Tier-1 but *achieve* AC-2 when "fueled" with Tier-2 data.

2. **Viewpoint-Invariant Anchoring:** The system *rejects* geometric warping. The GAB (Section 5.0) is built on SOTA Visual Place Recognition (VPR) models that are *inherently* invariant to the oblique-to-nadir viewpoint change, decoupling it from the V-SLAM's unstable orientation.

3. **Continuously-Scaled Trajectory:** The system *rejects* the "single-scale-per-fragment" model. The TOH (Section 6.0) is a Sim(3) pose-graph optimizer 11 that models scale as a *per-keyframe optimizable parameter*.15 This allows the trajectory to "stretch" and "shrink" elastically to absorb continuous monocular scale drift.12

### **2.2 Component Interaction and Data Flow**

The system is multi-threaded and asynchronous, designed for real-time streaming (AC-7) and refinement (AC-8).

* **Component 1: Tiered GDB (Pre-Flight):**

  * *Input:* User-defined Area of Interest (AOI).

  * *Action:* Downloads and builds a local SpatiaLite/GeoPackage.

  * *Output:* A single **Local-Geo-Database file** containing:

    * Tier-1 (Google Maps) tiles + the GLO-30 DSM.

    * Tier-2 (commercial) satellite tiles + WorldDEM DTM elevation tiles.

    * A *pre-computed FAISS vector index* of global descriptors (e.g., SALAD 8) for *all* satellite tiles (see Section 3.4).

* **Component 2: Image Ingestion (Real-time):**

  * *Input:* Image_N (up to 6.2K), Camera Intrinsics ($K$).

  * *Action:* Creates Image_N_LR (Low-Res, e.g., 1536x1024) and Image_N_HR (High-Res, 6.2K).

  * *Dispatch:* Image_N_LR -> V-SLAM. Image_N_HR -> GAB (for patches).

* **Component 3: "Atlas" V-SLAM Front-End (High-Frequency Thread):**

  * *Input:* Image_N_LR.

  * *Action:* Tracks Image_N_LR against the *active map fragment*. Manages keyframes and local BA. If tracking is lost (AC-4, AC-6), it *initializes a new map fragment*.

  * *Output:* Relative_Unscaled_Pose, Local_Point_Cloud, and Map_Fragment_ID -> TOH.

* **Component 4: VPR Geospatial Anchoring Back-End (GAB) (Low-Frequency, Asynchronous Thread):**

  * *Input:* A keyframe (Image_N_LR, Image_N_HR) and its Map_Fragment_ID.

  * *Action:* Performs SOTA two-stage VPR (Section 5.0) against the **Local-Geo-Database file**.

  * *Output:* Absolute_Metric_Anchor ([Lat, Lon, Alt] pose) and its Map_Fragment_ID -> TOH.

* **Component 5: Scale-Aware Trajectory Optimization Hub (TOH) (Central Hub Thread):**

  * *Input 1:* High-frequency Relative_Unscaled_Pose stream.

  * *Input 2:* Low-frequency Absolute_Metric_Anchor stream.

  * *Action:* Manages the *global Sim(3) pose-graph* 13 with *per-keyframe scale*.15

  * *Output 1 (Real-time):* Pose_N_Est (unscaled) -> UI (meets AC-7).

  * *Output 2 (Refined):* Pose_N_Refined (metric-scale) -> UI (meets AC-1, AC-2, AC-8).

### **2.3 System Inputs**

1. **Image Sequence:** Consecutively named images (FullHD to 6252x4168).

2. **Start Coordinate (Image 0):** A single, absolute GPS coordinate [Lat, Lon].

3. **Camera Intrinsics (K):** Pre-calibrated camera intrinsic matrix.

4. **Local-Geo-Database File:** The single file generated by Component 1.

### **2.4 Streaming Outputs (Meets AC-7, AC-8)**

1. **Initial Pose (Pose_N^{Est}):** An *unscaled* pose. This is the raw output from the V-SLAM Front-End, transformed by the *current best estimate* of the trajectory. It is sent immediately (<5s, AC-7) to the UI for real-time visualization of the UAV's *path shape*.

2. **Refined Pose (Pose_N^{Refined}) [Asynchronous]:** A globally-optimized, *metric-scale* 7-DoF pose. This is sent to the user *whenever the TOH re-converges* (e.g., after a new GAB anchor or a map merge). This *re-writes* the history of poses (e.g., Pose_{N-100} through Pose_N), meeting the refinement (AC-8) and accuracy (AC-1, AC-2) requirements.

## **3.0 Component 1: The Tiered Pre-Flight Geospatial Database (GDB)**

This component is the implementation of the "Tiered Geospatial" principle. It is a mandatory pre-flight utility that solves both the *legal* problem (Flaw 1.4) and the *accuracy* problem (Flaw 1.1).

### **3.2 Tier-1 (Baseline): Google Maps and GLO-30 DEM**

This tier provides the baseline capability and satisfies AC-1.

* **Visual Data:** Google Maps (coarse Maxar).

  * *Resolution:* 10m.

  * *Geodetic Accuracy:* ~1m to 20m.

  * *Purpose:* Meets AC-1 (80% < 50m error). Provides a robust baseline for coarse geolocalization.

* **Elevation Data:** Copernicus GLO-30 DEM.

  * *Resolution:* 30m.

  * *Type:* DSM (Digital Surface Model).2 This is a *weakness*, as it includes buildings/trees.

  * *Purpose:* Provides a coarse altitude prior for the TOH and the initial GAB search.

### **3.3 Tier-2 (High-Accuracy): Ingestion Framework for Commercial Data**

This is the *procurement and integration framework* required to meet AC-2.

* **Visual Data:** Commercial providers, e.g., Maxar (30-50cm) or Satellogic (70cm).

  * *Resolution:* < 1m.

  * *Geodetic Accuracy:* Typically < 5m.

  * *Purpose:* Provides the high-resolution, high-accuracy reference needed for the GAB to achieve a sub-20m total error.

* **Elevation Data:** Commercial providers, e.g., WorldDEM Neo 5 or Elevation10.32

  * *Resolution:* 5m-12m.

  * *Vertical Accuracy:* < 4m.32

  * *Type:* DTM (Digital Terrain Model).32

The use of a DTM (bare-earth) in Tier-2 is a critical advantage over the Tier-1 DSM (surface). The V-SLAM Front-End (Section 4.0) will triangulate a 3D point cloud of what it *sees*, which is the *ground* in fields or the *tree-tops* in forests. The Tier-1 GLO-30 DSM 2 represents the *top* of the canopy/buildings. If the V-SLAM maps the *ground* (e.g., altitude 100m) and the GAB tries to anchor it to a DSM *prior* that shows a forest (e.g., altitude 120m), the 20m altitude discrepancy will introduce significant error into the TOH. The Tier-2 DTM (bare-earth) 5 provides a *vastly* superior altitude anchor, as it represents the same ground plane the V-SLAM is tracking, significantly improving the entire 7-DoF pose solution.

### **3.4 Local Database Generation: Pre-computing Global Descriptors**

This is the key performance optimization for the GAB. During the pre-flight caching step, the GDB utility does not just *store* tiles; it *processes* them.

For *every* satellite tile (e.g., 256x256m) in the AOI, the utility will load the tile into the VPR model (e.g., SALAD 8), compute its global descriptor (a compact feature vector), and store this vector in a high-speed vector index (e.g., FAISS).

This step moves 99% of the GAB's "Stage 1" (Coarse Retrieval) workload into an offline, pre-flight step. The *real-time* GAB query (Section 5.2) is now reduced to: (1) compute *one* vector for the UAV image, and (2) perform a very fast K-Nearest-Neighbor search on the pre-computed FAISS index (see the sketch below). This is what makes a SOTA deep-learning GAB 6 fast enough to support the real-time refinement loop.

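A minimal sketch of the offline index build and the runtime Top-K query (the descriptor dimensionality, filenames, and the `embed_vpr` wrapper around the VPR model are illustrative; descriptors are assumed L2-normalized so inner product equals cosine similarity):

```python
import faiss
import numpy as np

D = 8448  # descriptor size is model-dependent; this value is illustrative

# Offline (pre-flight): index every satellite tile's descriptor.
tile_desc = np.load("tile_descriptors.npy").astype("float32")  # (num_tiles, D)
index = faiss.IndexFlatIP(D)            # exact inner-product search
index.add(tile_desc)
faiss.write_index(index, "aoi_tiles.faiss")  # shipped inside the Geo-Database

# Online (per keyframe): one embedding, then a millisecond K-NN lookup.
q = embed_vpr(image_n_lr)               # hypothetical VPR-model wrapper -> (D,) vector
scores, tile_ids = index.search(q.reshape(1, -1).astype("float32"), k=5)
# tile_ids[0] holds the Top-5 candidate tiles for Stage 2 fine matching.
```
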
#### **Table 1: Geospatial Reference Data Analysis (Decision Matrix)**

| Data Product | Data Type | Resolution | Geodetic Accuracy (Horiz.) | Model Type | Cost | AC-2 (20m) Compliant? |
| :---- | :---- | :---- | :---- | :---- | :---- | :---- |
| Google Maps | Visual | 1m | 1m - 10m | N/A | Free | **Depends on the location** |
| Copernicus GLO-30 | Elevation | 30m | ~10-30m | **DSM** (Surface) | Free | **No (fails error budget)** |
| **Tier-2: Maxar/Satellogic** | Visual | 0.3m - 0.7m | < 5m (est.) | N/A | Commercial | **Yes** |
| **Tier-2: WorldDEM Neo** | Elevation | 5m | < 4m | **DTM** (Bare-Earth) | Commercial | **Yes** |

## **4.0 Component 2: The "Atlas" Relative Motion Front-End**

This component's sole task is to robustly compute *unscaled* 6-DoF relative motion and handle tracking failures (AC-3, AC-4).

### **4.1 Feature Matching Sub-System: SuperPoint + LightGlue**

The system will use **SuperPoint** for feature detection and **LightGlue** for matching. This choice is driven by the project's specific constraints:

* **Rationale (Robustness):** The UAV flies over the eastern and southern parts of Ukraine, which include large, low-texture agricultural areas. SuperPoint is a SOTA deep-learning detector renowned for its robustness and repeatability in these challenging, low-texture environments.

* **Rationale (Performance):** The RTX 2060 (AC-7) is a *hard* constraint with only 6GB of VRAM.34 Performance is paramount. LightGlue is a SOTA matcher that provides a 4-10x speedup over its predecessor, SuperGlue. Its *adaptive* nature is a key optimization: it exits early on "easy" pairs (high-overlap, straight flight) and spends more compute only on "hard" pairs (turns). This saves critical GPU budget on the ~95% of normal frames, ensuring the <5s (AC-7) budget is met.

This subsystem will run on the Image_N_LR (low-res) copy to guarantee it fits in VRAM and meets the real-time budget.

#### **Table 2: Analysis of State-of-the-Art Feature Matchers (V-SLAM Front-End)**

| Approach (Tools/Library) | Robustness (Low-Texture) | Speed (RTX 2060) | Fitness for Problem |
| :---- | :---- | :---- | :---- |
| ORB 33 (e.g., ORB-SLAM3) | Poor. Fails on low texture. | Excellent (CPU/GPU) | **Good** on speed alone, but fails robustness in the target environment. |
| SuperPoint + SuperGlue | Excellent. | Good, but heavy. Fixed-depth GNN, 4-10x slower than LightGlue.35 | **Good.** Robust, but risks the AC-7 budget. |
| **SuperPoint + LightGlue** 35 | Excellent. | **Excellent.** Adaptive depth 35 saves budget; 4-10x faster. | **Excellent (Selected).** Balances robustness and performance. |

### **4.2 The "Atlas" Multi-Map Paradigm (Solution for AC-3, AC-4, AC-6)**

This architecture is the industry-standard solution for IMU-denied, long-term SLAM and is critical for robustness.

* **Mechanism (AC-4, Sharp Turn):**

  1. The system is tracking on Map_Fragment_0.

  2. The UAV makes a sharp turn (AC-4, <5% overlap). The V-SLAM *loses tracking*.

  3. Instead of failing, the Atlas architecture *initializes a new map*: Map_Fragment_1.

  4. Tracking *resumes instantly* on this new, unanchored map.

* **Mechanism (AC-3, 350m Outlier):**

  1. The system is tracking. A 350m outlier, Image_N, arrives.

  2. The V-SLAM fails to match Image_N (a "Transient VO Failure," see Section 7.3). It is *discarded*.

  3. Image_N+1 arrives (back on track). The V-SLAM re-acquires its location on Map_Fragment_0.

  4. The system "correctly continues the work" (AC-3) by simply rejecting the outlier.

This design turns "catastrophic failure" (AC-3, AC-4) into a *standard operating procedure*. The "problem" of stitching the fragments (Map_0, Map_1) together is moved from the V-SLAM (which has no global context) to the TOH (which *can* solve it using GAB anchors, see Section 6.4).

### **4.3 Local Bundle Adjustment and High-Fidelity 3D Cloud**

The V-SLAM front-end will continuously run Local Bundle Adjustment (BA) over a sliding window of recent keyframes to minimize drift *within* that fragment. It will also triangulate a sparse but high-fidelity 3D point cloud for its *local map fragment*.

This 3D cloud serves a critical dual function:

1. It provides a robust 3D map for frame-to-map tracking, which is more stable than frame-to-frame odometry.

2. It serves as the **high-accuracy data source** for the object localization output (Section 7.2). This is the key to decoupling object-pointing accuracy from external DEM accuracy 19, a critical flaw in simpler designs.

## **5.0 Component 3: The Viewpoint-Invariant Geospatial Anchoring Back-End (GAB)**

This component *replaces* the draft's "Dynamic Warping" (its Section 5.0) and implements the "Viewpoint-Invariant Anchoring" principle (Section 2.1).

### **5.1 Rationale: Viewpoint-Invariant VPR vs. Geometric Warping (Solves Flaw 1.2)**

As established in Section 1.2, geometrically warping the image using the V-SLAM's *drifty* roll/pitch estimate creates a *brittle*, high-risk failure spiral. The ASTRAL GAB *decouples* from the V-SLAM's orientation. It uses a SOTA VPR pipeline that *learns* to match oblique UAV images to nadir satellite images *directly*, at the feature level.6

### **5.2 Stage 1 (Coarse Retrieval): SOTA Global Descriptors**

When triggered by the TOH, the GAB takes Image_N_LR. It computes a *global descriptor* (a single feature vector) using a SOTA VPR model like **SALAD** 6 or **MixVPR**.7

This choice is driven by two factors:

1. **Viewpoint Invariance:** These models are SOTA for this exact task.
2. **Inference Speed:** They are extremely fast. SALAD reports < 3ms per image inference 8, and MixVPR is also noted for "fastest inference speed".37 This low overhead is essential for the AC-7 (<5s) budget.

This vector is used to query the *pre-computed FAISS vector index* (from 3.4), which returns the Top-K (e.g., K=5) most likely satellite tiles from the *entire AOI* in milliseconds. A minimal query sketch is shown below.

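As a sketch only: the following assumes the pre-computed FAISS index and per-tile metadata from Section 3.4, and treats the VPR forward pass as already done (the `descriptor` argument stands in for the SALAD/MixVPR output).

```python
# Minimal sketch of the Stage 1 coarse-retrieval query against the pre-built index.
import numpy as np
import faiss

def coarse_retrieval(index: faiss.Index, tile_centers: np.ndarray,
                     descriptor: np.ndarray, k: int = 5):
    """Return the (lat, lon) centers and scores of the Top-K candidate tiles."""
    q = descriptor.astype(np.float32).reshape(1, -1)
    faiss.normalize_L2(q)              # cosine similarity via inner product
    scores, ids = index.search(q, k)   # milliseconds even for a large AOI
    return tile_centers[ids[0]], scores[0]
```
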
#### **Table 3: Analysis of VPR Global Descriptors (GAB Back-End)**

| Model (Backbone) | Key Feature | Viewpoint Invariance | Inference Speed (ms) | Fitness for GAB |
| :---- | :---- | :---- | :---- | :---- |
| NetVLAD 7 (CNN) | Baseline | Poor. Not designed for oblique-to-nadir. | Moderate (~20-50ms) | **Poor.** Fails robustness. |
| **SALAD** 8 (DINOv2) | Foundation model.6 | **Excellent.** Designed for this. | **< 3ms**.8 Extremely fast. | **Excellent (Selected).** |
| **MixVPR** 36 (ResNet) | All-MLP aggregator.36 | **Very Good.** 7 | **Very Fast.** 37 | **Excellent (Selected).** |

### **5.3 Stage 2 (Fine): Local Feature Matching and Pose Refinement**

The system runs **SuperPoint+LightGlue** 35 to find pixel-level matches, but *only* between the UAV image and the **Top-K satellite tiles** identified in Stage 1.

A **Multi-Resolution Strategy** is employed to solve the VRAM bottleneck:

1. Stage 1 (Coarse) runs on the Image_N_LR.
2. Stage 2 (Fine) runs SuperPoint *selectively* on the Image_N_HR (6.2K) to get high-accuracy keypoints.
3. It then matches small, full-resolution *patches* from the full-res image, *not* the full image.

This hybrid approach is the *only* way to meet both AC-7 (speed) and AC-2 (accuracy). The 6.2K image *cannot* be processed in <5s on an RTX 2060 (6GB VRAM 34), but its high-resolution *pixels* are needed for the 20m *accuracy*. Using full-res *patches* provides the pixel-level accuracy without the VRAM/compute cost.

A PnP/RANSAC solver then computes a high-confidence 6-DoF pose. This pose, converted to [Lat, Lon, Alt], is the **Absolute_Metric_Anchor** sent to the TOH. A sketch of this solve follows.

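A hedged sketch of the anchor solve: 2D keypoints in the UAV image are matched to satellite-tile pixels whose metric ground coordinates are known from the geo-referenced tile, then a PnP/RANSAC solve yields the camera pose. All inputs are assumed to come from the SuperPoint+LightGlue stage; the thresholds are illustrative, not tuned values.

```python
# Sketch of the Stage 2 pose refinement via OpenCV's PnP/RANSAC solver.
import numpy as np
import cv2

def solve_anchor(uav_px: np.ndarray,      # (N,2) matched pixels in Image_N_HR
                 ground_xyz: np.ndarray,  # (N,3) metric world points from the tile
                 K: np.ndarray):          # 3x3 camera intrinsics
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        ground_xyz.astype(np.float64),
        uav_px.astype(np.float64),
        K, None,
        reprojectionError=3.0, confidence=0.999)
    if not ok or inliers is None or len(inliers) < 12:
        return None                        # reject low-confidence anchors (AC-5)
    R, _ = cv2.Rodrigues(rvec)
    cam_center = (-R.T @ tvec).ravel()     # camera position in world coordinates
    return cam_center, R
```
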
## **6.0 Component 5: The Scale-Aware Trajectory Optimization Hub (TOH)**

This component is the system's "brain" and implements the "Continuously-Scaled Trajectory" principle (Section 2.1). It *replaces* the draft's flawed "Single Scale" optimizer.

### **6.1 The $Sim(3)$ Pose-Graph as the Optimization Backbone**

The central challenge of IMU-denied monocular SLAM is *scale drift*.11 The V-SLAM (Component 3) produces 6-DoF poses, but they are *unscaled* ($SE(3)$ up to an unknown scale). The GAB (Component 4) produces *metric* 6-DoF poses ($SE(3)$).

The solution is to optimize the *entire graph* in the 7-DoF "Similarity" group, **$Sim(3)$**.11 This adds a 7th degree of freedom (scale, $s$) to each pose. The optimization backbone will be **Ceres Solver** 14, a SOTA C++ library for large, complex non-linear least-squares problems.

### **6.2 Advanced Scale-Drift Correction: Modeling Scale as a Per-Keyframe Parameter (Solves Flaw 1.3)**

This is the *core* of the ASTRAL optimizer, solving Flaw 1.3. The draft's flawed model, $Pose\_Graph(Fragment_i) = \{Pose_1, \dots, Pose_n, s_i\}$, is replaced by ASTRAL's correct model: $Pose\_Graph = \{(Pose_1, s_1), (Pose_2, s_2), \dots, (Pose_N, s_N)\}$.

The graph is constructed as follows:

* **Nodes:** Each keyframe pose is a 7-DoF $Sim(3)$ variable $\{s_k, R_k, t_k\}$.
* **Edge 1 (V-SLAM):** A *relative* $Sim(3)$ constraint between $Pose_k$ and $Pose_{k+1}$ from the V-SLAM Front-End.
* **Edge 2 (GAB):** An *absolute* $SE(3)$ constraint on $Pose_j$ from a GAB anchor. This constraint *fixes* the 6-DoF pose $(R_j, t_j)$ to the metric GAB value and *fixes its scale* $s_j = 1.0$.

This "per-keyframe scale" model 15 enables "elastic" trajectory refinement. When the graph is a long, unscaled "chain" of V-SLAM constraints, a GAB anchor (Edge 2) arrives at $Pose_{100}$, "nailing" it to the metric map and setting $s_{100} = 1.0$. As the V-SLAM continues, scale drifts. When a second anchor arrives at $Pose_{200}$ (setting $s_{200} = 1.0$), the Ceres optimizer 14 has a problem: the V-SLAM data *between* them has drifted.

The ASTRAL model *allows* the optimizer to solve for all intermediate scales $(s_{101}, s_{102}, \dots, s_{199})$ as variables. The optimizer will find a *smooth, continuous* scale correction 15 that "elastically" stretches/shrinks the 100-frame sub-segment to *perfectly* fit both metric anchors. This *correctly* models the physics of scale drift 12 and is the *only* way to achieve the 20m accuracy (AC-2) and 1.0px MRE (AC-10). The sketch below illustrates the idea on a 1-D toy version of the problem.

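This toy 1-D sketch is an assumption for exposition, not the Ceres implementation: per-keyframe log-scales between two metric anchors are solved so that the scaled V-SLAM step lengths satisfy the anchor distance while varying smoothly. The smoothness weight (10.0) is an illustrative choice.

```python
# Toy per-keyframe scale solve between two metric anchors.
import numpy as np
from scipy.optimize import least_squares

vo_steps = np.full(99, 1.0)        # unscaled V-SLAM step lengths, keyframes 100..199
anchor_gap = 110.0                 # metric distance between the two GAB anchors

def residuals(log_s):
    s = np.exp(log_s)                              # one scale per keyframe step
    fit = np.sum(s * vo_steps) - anchor_gap        # the anchors must be satisfied
    smooth = np.diff(log_s)                        # neighbouring scales stay close
    return np.concatenate(([fit], 10.0 * smooth))

sol = least_squares(residuals, x0=np.zeros(99), loss="huber")
print(np.exp(sol.x[:5]))           # smooth per-keyframe scale corrections (~1.11 here)
```
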
### **6.3 Robust M-Estimation (Solution for AC-3, AC-5)**

A 350m outlier (AC-3) or a bad GAB match (AC-5) will add a constraint with a *massive* error. A standard least-squares optimizer 14 would be *catastrophically* corrupted, pulling the *entire* 3000-image trajectory to try to fit this one bad point.

This is a solved problem. All constraints (V-SLAM and GAB) *must* be wrapped in a **Robust Loss Function** (e.g., HuberLoss, CauchyLoss) within Ceres Solver. This function mathematically *down-weights* the influence of constraints with large errors (high residuals), effectively telling the optimizer to ignore implausible measurements. This provides automatic, graceful outlier rejection, meeting AC-3 and AC-5. The weighting behaviour is sketched below.

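A short sketch of how a Huber loss caps an outlier's influence; `delta` is an assumed tuning constant, analogous to the scale parameter of Ceres' `HuberLoss`.

```python
# IRLS-style Huber weight: 1 inside the inlier band, decaying as delta/|r| outside.
import numpy as np

def huber_weight(residual: np.ndarray, delta: float = 5.0) -> np.ndarray:
    r = np.abs(residual)
    return np.where(r <= delta, 1.0, delta / r)

print(huber_weight(np.array([0.5, 3.0, 350.0])))  # -> [1.0, 1.0, ~0.014]
```
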
### **6.4 Geodetic Map-Merging (Solution for AC-4, AC-6)**

This mechanism is the robust solution to the "sharp turn" (AC-4) problem.

* **Scenario:** The UAV makes a sharp turn (AC-4). The V-SLAM (4.2) creates Map_Fragment_0 and Map_Fragment_1. The TOH's graph now has two *disconnected* components.
* **Mechanism (Geodetic Merging):**
  1. The TOH queues GAB requests (Section 5.0) to find anchors for *both* fragments.
  2. The GAB returns Anchor_A for Map_Fragment_0 and Anchor_B for Map_Fragment_1.
  3. The TOH adds *both* of these as absolute, metric constraints (Edge 2) to the *single global pose-graph*.
  4. The Ceres optimizer 14 now has all the information it needs. It solves for the 7-DoF poses of *both fragments*, placing them in their correct, globally-consistent metric positions.

The two fragments are *merged geodetically* (by their global coordinates 11) even if they *never* visually overlap. This is a vastly more robust solution to AC-4 and AC-6 than simple visual loop closure.

## **7.0 Performance, Deployment, and High-Accuracy Outputs**

### **7.1 Meeting the <5s Budget (AC-7): Mandatory Acceleration with NVIDIA TensorRT**

The system must run on an RTX 2060 (AC-7). This is a low-end, 6GB-VRAM card 34, which is a *severe* constraint. Running three deep-learning models (SuperPoint, LightGlue, SALAD/MixVPR) plus a Ceres optimizer 38 will saturate this hardware.

* **Solution 1: Multi-Scale Pipeline.** As defined in 5.3, the system *never* processes a full 6.2K image on the GPU. It uses low-res for V-SLAM/GAB-Coarse and high-res *patches* for GAB-Fine.
* **Solution 2: Mandatory TensorRT Deployment.** Running these models in their native PyTorch framework will be too slow. All neural networks (SuperPoint, LightGlue, SALAD/MixVPR) *must* be converted from PyTorch into optimized **NVIDIA TensorRT engines**. Research *specifically* on accelerating LightGlue shows this provides **"2x-4x speed gains over compiled PyTorch"**.35 This 2-4x speedup is *not* an optimization; it is a *mandatory deployment step* that makes the <5s (AC-7) budget *possible* on an RTX 2060. A typical conversion path is sketched after this list.

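The sketch below shows one common PyTorch-to-TensorRT path (export to ONNX, then build an engine with `trtexec`). The model and tensor names are placeholders; the exact export settings for SuperPoint/LightGlue would need to follow each model's repository.

```python
# Placeholder export; a stand-in Conv2d plays the role of the real network.
import torch

model = torch.nn.Conv2d(1, 64, 3)
dummy = torch.randn(1, 1, 1024, 1536)    # Image_N_LR-sized input
torch.onnx.export(model, dummy, "net.onnx", opset_version=17,
                  input_names=["image"], output_names=["features"])

# Then, on the deployment machine (FP16 typically halves VRAM and latency):
#   trtexec --onnx=net.onnx --saveEngine=net.engine --fp16
```
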
### **7.2 High-Accuracy Object Geolocalization via Ray-Cloud Intersection (Solves AC-2/AC-10)**

The user must be able to find the GPS of an *object* in a photo. A simple approach of ray-casting from the camera and intersecting with the 30m GLO-30 DEM 2 is fatally flawed: the DEM error itself can be up to 30m 19, making AC-2 impossible.

The ASTRAL system uses a **Ray-Cloud Intersection** method that *decouples* object accuracy from external DEM accuracy.

* **Algorithm:**
  1. The user clicks pixel $(u,v)$ on Image_N.
  2. The system retrieves the *final, refined, metric 7-DoF pose* $P_{Sim(3)} = (s, R, T)$ for Image_N from the TOH.
  3. It also retrieves the V-SLAM's *local, high-fidelity 3D point cloud* ($P_{local\_cloud}$) from Component 3 (Section 4.3).
  4. **Step 1 (Local):** The pixel $(u,v)$ is un-projected into a ray. This ray is intersected with the *local* $P_{local\_cloud}$, yielding the 3D point $P_{local}$ *relative to the V-SLAM map*. The accuracy of this step is governed by AC-10 (MRE < 1.0px).
  5. **Step 2 (Global):** This *highly-accurate* local point $P_{local}$ is transformed into the global metric coordinate system using the *highly-accurate* refined pose from the TOH: $P_{metric} = s \cdot (R \cdot P_{local}) + T$.
  6. **Step 3 (Convert):** $P_{metric}$ (an X,Y,Z world coordinate) is converted to [Latitude, Longitude, Altitude].

This method correctly isolates error. The object's accuracy is now *only* dependent on the V-SLAM's internal geometry (AC-10) and the TOH's global pose accuracy (AC-1, AC-2). It *completely eliminates* the external 30m DEM error 2 from this critical, high-accuracy calculation. A sketch of the two transforms follows.

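A minimal NumPy sketch of the algorithm above, under stated assumptions: the camera pose `(cam_R, cam_t)` uses the world-to-camera convention, and the nearest-point-to-ray test stands in for a proper ray/cloud intersection. All names are illustrative.

```python
# Pixel (u,v) -> metric world point via the local V-SLAM cloud and the Sim(3) pose.
import numpy as np

def locate_object(u, v, K, cam_R, cam_t, cloud_local, s, R, T):
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])       # un-project pixel
    ray_world = cam_R.T @ (ray_cam / np.linalg.norm(ray_cam))
    origin = -cam_R.T @ cam_t                                # camera center, map frame
    # Step 1 (Local): pick the cloud point closest to the forward ray.
    rel = cloud_local - origin
    along = rel @ ray_world
    perp2 = np.sum(rel**2, axis=1) - along**2                # squared ray distance
    p_local = cloud_local[np.argmin(np.where(along > 0, perp2, np.inf))]
    # Step 2 (Global): apply the refined Sim(3) pose from the TOH.
    return s * (R @ p_local) + T
```
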
### **7.3 Failure Mode Escalation Logic (Meets AC-3, AC-4, AC-6, AC-9)**

The system is built on a robust state machine to handle real-world failures.

* **Stage 1: Normal Operation (Tracking):** V-SLAM tracks, TOH optimizes.
* **Stage 2: Transient VO Failure (Outlier Rejection):**
  * *Condition:* Image_N is a 350m outlier (AC-3) or severely blurred.
  * *Logic:* V-SLAM fails to track Image_N. The system *discards* it (AC-5). Image_N+1 arrives; V-SLAM re-tracks.
  * *Result:* **AC-3 Met.**
* **Stage 3: Persistent VO Failure (New Map Initialization):**
  * *Condition:* A "sharp turn" (AC-4) or >5 frames of tracking loss.
  * *Logic:* V-SLAM (Section 4.2) declares "Tracking Lost" and initializes a *new* Map_Fragment_k+1. Tracking *resumes instantly*.
  * *Result:* **AC-4 Met.** The system "correctly continues the work." The >95% registration rate (AC-9) is preserved because this is *not* a failure; it is a *new registration*.
* **Stage 4: Map-Merging & Global Relocalization (GAB-Assisted):**
  * *Condition:* The system is on Map_Fragment_k+1 while Map_Fragment_k is "lost."
  * *Logic:* The TOH (Section 6.4) receives GAB anchors for *both* fragments and *geodetically merges* them in the global optimizer.14
  * *Result:* **AC-6 Met** (strategy to connect separate chunks).
* **Stage 5: Catastrophic Failure (User Intervention):**
  * *Condition:* The system is in Stage 3 (Lost) *and* the GAB has failed for 20% of the route. This is the "absolutely incapable" scenario (AC-6).
  * *Logic:* The TOH triggers the AC-6 flag. The UI prompts the user: "Please provide a coarse location for the *current* image."
  * *Action:* This user click is *not* taken as ground truth. It is fed to the **GAB (Section 5.0)** as a *strong spatial prior*, narrowing its Stage 1 8 search from the entire AOI to a ~5km radius. This all but *guarantees* the GAB finds a match, which triggers Stage 4 and re-localizes the system.
  * *Result:* **AC-6 Met** (user input).

## **8.0 ASTRAL Validation Plan and Acceptance Criteria Matrix**

A comprehensive test plan is required to validate compliance with all 10 Acceptance Criteria. Its foundation is a **Ground-Truth Test Harness** built on project-provided ground-truth data.

### **Table 4: ASTRAL Component vs. Acceptance Criteria Compliance Matrix**

| ID | Requirement | ASTRAL Solution (Component) | Key Technology / Justification |
| :---- | :---- | :---- | :---- |
| **AC-1** | 80% of photos < 50m error | GDB (C-1) + GAB (C-4) + TOH (C-5) | **Tier-1 (Copernicus)** data 1 is sufficient. SOTA VPR 8 + Sim(3) graph 13 can achieve this. |
| **AC-2** | 60% of photos < 20m error | GDB (C-1) + GAB (C-4) + TOH (C-5) | **Requires Tier-2 (Commercial) Data**.4 Mitigates reference error.3 The **Per-Keyframe Scale** 15 model in the TOH minimizes drift error. |
| **AC-3** | Robust to 350m outlier | V-SLAM (C-3) + TOH (C-5) | **Stage 2 Failure Logic** (7.3) discards the frame. **Robust M-Estimation** (6.3) in Ceres 14 automatically rejects the constraint. |
| **AC-4** | Robust to sharp turns (<5% overlap) | V-SLAM (C-3) + TOH (C-5) | **"Atlas" Multi-Map** (4.2) initializes a new map (Map_Fragment_k+1). **Geodetic Map-Merging** (6.4) in the TOH re-connects fragments via GAB anchors. |
| **AC-5** | < 10% outlier anchors | TOH (C-5) | **Robust M-Estimation (Huber Loss)** (6.3) in Ceres 14 automatically down-weights and ignores high-residual (bad) GAB anchors. |
| **AC-6** | Connect route chunks; user input | V-SLAM (C-3) + TOH (C-5) + UI | **Geodetic Map-Merging** (6.4) connects chunks. **Stage 5 Failure Logic** (7.3) provides the user-input-as-prior mechanism. |
| **AC-7** | < 5 seconds processing/image | All components | **Multi-Scale Pipeline** (5.3) (low-res V-SLAM, hi-res GAB patches). **Mandatory TensorRT Acceleration** (7.1) for a 2-4x speedup.35 |
| **AC-8** | Real-time stream + async refinement | TOH (C-5) + Outputs (2.4) | The decoupled architecture provides Pose_N_Est (V-SLAM) in real time and Pose_N_Refined (TOH) asynchronously as GAB anchors arrive. |
| **AC-9** | Image Registration Rate > 95% | V-SLAM (C-3) | **"Atlas" Multi-Map** (4.2). A "lost track" (AC-4) is *not* a registration failure; it is a *new map registration*. This keeps the rate > 95%. |
| **AC-10** | Mean Reprojection Error (MRE) < 1.0px | V-SLAM (C-3) + TOH (C-5) | Local BA (4.3) + Global BA (TOH 14) + **Per-Keyframe Scale** (6.2) minimize internal graph tension (Flaw 1.3), allowing the optimizer to converge to a low MRE. |

### **8.1 Rigorous Validation Methodology**

* **Test Harness:** A validation script will compare the system's Pose_N^{Refined} output against a ground-truth coordinates.csv file, computing Haversine distance errors (see the sketch after this list).
* **Test Datasets:**
  * Test_Baseline: a standard flight.
  * Test_Outlier_350m (AC-3): a single, unrelated image inserted.
  * Test_Sharp_Turn_5pct (AC-4): a sequence with a 10-frame gap.
  * Test_Long_Route (AC-9, AC-7): a 2000-image sequence.
* **Test Cases:**
  * Test_Accuracy: Run Test_Baseline. ASSERT (count(errors < 50m) / total) >= 0.80 (AC-1). ASSERT (count(errors < 20m) / total) >= 0.60 (AC-2).
  * Test_Robustness: Run Test_Outlier_350m and Test_Sharp_Turn_5pct. ASSERT the system completes the run and the Test_Accuracy assertions still pass on the valid frames.
  * Test_Performance: Run Test_Long_Route on a min-spec RTX 2060. ASSERT average_time(Pose_N^{Est} output) < 5.0s (AC-7).
  * Test_MRE: ASSERT TOH.final_MRE < 1.0 (AC-10).

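A sketch of the harness's error metric, assuming a coordinates.csv with columns `image,lat,lon` (an assumed layout) and a dict of refined poses keyed by image name.

```python
# Haversine-based accuracy check for AC-1/AC-2.
import csv
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters."""
    R = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp/2)**2 + math.cos(p1) * math.cos(p2) * math.sin(dl/2)**2
    return 2 * R * math.asin(math.sqrt(a))

def check_accuracy(gt_csv, refined):  # refined: {image_name: (lat, lon)}
    errors = []
    with open(gt_csv) as f:
        for row in csv.DictReader(f):
            lat, lon = refined[row["image"]]
            errors.append(haversine_m(float(row["lat"]), float(row["lon"]), lat, lon))
    n = len(errors)
    assert sum(e < 50 for e in errors) / n >= 0.80   # AC-1
    assert sum(e < 20 for e in errors) / n >= 0.60   # AC-2
```
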
Identify all potential weak points and problems. Address them and find ways to solve them. Based on your findings, form a new solution draft in the same format.

If your findings require a complete reorganization of the flow and different components, state it.
Put all the findings regarding what was weak and poor at the beginning of the report.
At the very beginning of the report, list the most profound changes you've made to the previous solution.
Then form a new solution design without referencing the previous system. Remove Poor and Very Poor component choices from the component analysis tables, but leave Good and Excellent ones.
In the updated report, do not put "new" marks; do not compare to the previous solution draft; just make a new solution as if from scratch.

Also, look into more ideas, such as:

- A Cross-View Geo-Localization Algorithm Using UAV Image
  https://www.mdpi.com/1424-8220/24/12/3719

- Exploring the best way for UAV visual localization under Low-altitude Multi-view Observation condition
  https://arxiv.org/pdf/2503.10692

Assess them and try to either integrate them into the current solution draft or use them to replace some of its components.

@@ -1,373 +0,0 @@
Read carefully about the problem:

We have a lot of images taken from a wing-type UAV using a camera with at least Full HD resolution. The resolution of each photo could be up to 6200*4100 for a whole flight, but for other flights it could be Full HD.
Photos are taken and named consecutively, within 100 meters of each other.
We know only the starting GPS coordinates. We need to determine the GPS of the centers of each image, and also the coordinates of the center of any object in these photos. We can use an external satellite provider for ground checks on the existing photos.

The system has the following restrictions and conditions:
- Photos are taken only by airplane-type UAVs.
- Photos are taken by a camera pointing downwards and fixed, but it is not autostabilized.
- The flying range is restricted to the eastern and southern parts of Ukraine (to the left of the Dnipro River).
- The image resolution could be from Full HD to 6252*4168. Camera parameters are known: focal length, sensor width, resolution, and so on.
- Altitude is predefined and no more than 1km. The height of the terrain can be neglected.
- There is NO data from an IMU.
- Flights are done mostly in sunny weather.
- We can use satellite providers, but we're limited right now to Google Maps, which could be outdated for some regions.
- The number of photos could be up to 3000, usually in the 500-1500 range.
- During the flight, UAVs can make sharp turns, so that the next photo may be absolutely different from the previous one (no shared objects), but this is the exception rather than the rule.
- Processing is done on a stationary computer or laptop with an NVidia GPU, at least an RTX 2060, preferably a 3070. (For the on-UAV solution a Jetson Orin Nano would be used, but that is out of scope.)

The output of the system should address the following acceptance criteria:
- The system should find the GPS of the centers of 80% of the photos from the flight within an error of no more than 50 meters in comparison to the real GPS.

- The system should find the GPS of the centers of 60% of the photos from the flight within an error of no more than 20 meters in comparison to the real GPS.

- The system should correctly continue working even in the presence of an outlier photo of up to 350 meters between 2 consecutive pictures en route. This could happen due to tilt of the plane.

- The system should correctly continue working even during sharp turns, where the next photo doesn't overlap at all, or overlaps by less than 5%. The next photo should be within less than 200m drift and at an angle of less than 70%.

- The system should try to operate when the UAV has made a sharp turn and all the following photos have no common points with the previous route. In that situation the system should try to figure out the location of the new piece of the route and connect it to the previous route. There could also be more than 2 such separate chunks, so this strategy should be at the core of the system.

- If the system is absolutely incapable of determining the GPS of the next, second-next, and third-next images by any means (these 20% of the route), then it should ask the user for input for the next image, so that the user can specify the location.

- Less than 5 seconds for processing one image.

- Results of image processing should appear immediately to the user, so that the user shouldn't have to wait for the whole route to complete in order to analyze the first results. Also, the system could refine existing calculated results and send the refined results to the user again.

- Image Registration Rate > 95%. The system can find enough matching features to confidently calculate the camera's 6-DoF pose (position and orientation) and "stitch" that image into the final trajectory.

- Mean Reprojection Error (MRE) < 1.0 pixels. The distance, in pixels, between the original pixel location of the object and the re-projected pixel location.

- The whole system should work as a background service. Interaction should be done via ZeroMQ. The service should be up and running, awaiting the initial input message. On the input message, processing should start, and immediately after the first results the system should provide them to the client.

Here is a solution draft:

# **ASTRAL System Architecture: A High-Fidelity Geopositioning Framework for IMU-Denied Aerial Operations**

## **2.0 The ASTRAL (Advanced Scale-Aware Trajectory-Refinement and Localization) System Architecture**

The ASTRAL architecture is a multi-map, loosely-coupled, decoupled system designed to solve the flaws identified in Section 1.0 and meet all 10 Acceptance Criteria.

### **2.1 Core Principles**

The ASTRAL architecture is built on three principles:

1. **Tiered Geospatial Database:** The system *cannot* rely on a single data source. It is architected around a *tiered* local database.
   * **Tier-1 (Baseline):** Google Maps data. This is used to meet the 50m (AC-1) requirement and provide geolocalization.
   * **Tier-2 (High-Accuracy):** A framework for ingesting *commercial, sub-meter* data (visual 4 and DEM 5). This tier is *required* to meet the 20m (AC-2) accuracy. The system will *run* on Tier-1 but *achieve* AC-2 when "fueled" with Tier-2 data.
2. **Viewpoint-Invariant Anchoring:** The system *rejects* geometric warping. The GAB (Section 5.0) is built on SOTA Visual Place Recognition (VPR) models that are *inherently* invariant to the oblique-to-nadir viewpoint change, decoupling it from the V-SLAM's unstable orientation.
3. **Continuously-Scaled Trajectory:** The system *rejects* the "single-scale-per-fragment" model. The TOH (Section 6.0) is a Sim(3) pose-graph optimizer 11 that models scale as a *per-keyframe optimizable parameter*.15 This allows the trajectory to "stretch" and "shrink" elastically to absorb continuous monocular scale drift.12

### **2.2 Component Interaction and Data Flow**

The system is multi-threaded and asynchronous, designed for real-time streaming (AC-7) and refinement (AC-8).

* **Component 1: Tiered GDB (Pre-Flight):**
  * *Input:* User-defined Area of Interest (AOI).
  * *Action:* Downloads data and builds a local SpatiaLite/GeoPackage.
  * *Output:* A single **Local-Geo-Database file** containing:
    * Tier-1 (Google Maps) tiles + the GLO-30 DSM;
    * Tier-2 (commercial) satellite tiles + WorldDEM DTM elevation tiles;
    * a *pre-computed FAISS vector index* of global descriptors (e.g., SALAD 8) for *all* satellite tiles (see 3.4).
* **Component 2: Image Ingestion (Real-Time):**
  * *Input:* Image_N (up to 6.2K), Camera Intrinsics ($K$).
  * *Action:* Creates Image_N_LR (low-res, e.g., 1536x1024) and Image_N_HR (high-res, 6.2K).
  * *Dispatch:* Image_N_LR -> V-SLAM. Image_N_HR -> GAB (for patches).
* **Component 3: "Atlas" V-SLAM Front-End (High-Frequency Thread):**
  * *Input:* Image_N_LR.
  * *Action:* Tracks Image_N_LR against the *active map fragment*. Manages keyframes and local BA. If tracking is lost (AC-4, AC-6), it *initializes a new map fragment*.
  * *Output:* Relative_Unscaled_Pose, Local_Point_Cloud, and Map_Fragment_ID -> TOH.
* **Component 4: VPR Geospatial Anchoring Back-End (GAB) (Low-Frequency, Asynchronous Thread):**
  * *Input:* A keyframe (Image_N_LR, Image_N_HR) and its Map_Fragment_ID.
  * *Action:* Performs SOTA two-stage VPR (Section 5.0) against the **Local-Geo-Database file**.
  * *Output:* Absolute_Metric_Anchor ([Lat, Lon, Alt] pose) and its Map_Fragment_ID -> TOH.
* **Component 5: Scale-Aware Trajectory Optimization Hub (TOH) (Central Hub Thread):**
  * *Input 1:* High-frequency Relative_Unscaled_Pose stream.
  * *Input 2:* Low-frequency Absolute_Metric_Anchor stream.
  * *Action:* Manages the *global Sim(3) pose-graph* 13 with *per-keyframe scale*.15
  * *Output 1 (Real-Time):* Pose_N_Est (unscaled) -> UI (meets AC-7).
  * *Output 2 (Refined):* Pose_N_Refined (metric-scale) -> UI (meets AC-1, AC-2, AC-8).

### **2.3 System Inputs**

1. **Image Sequence:** Consecutively named images (Full HD to 6252x4168).
2. **Start Coordinate (Image 0):** A single, absolute GPS coordinate [Lat, Lon].
3. **Camera Intrinsics (K):** Pre-calibrated camera intrinsic matrix.
4. **Local-Geo-Database File:** The single file generated by Component 1.

### **2.4 Streaming Outputs (Meets AC-7, AC-8)**

1. **Initial Pose (Pose_N^{Est}):** An *unscaled* pose. This is the raw output from the V-SLAM Front-End, transformed by the *current best estimate* of the trajectory. It is sent immediately (<5s, AC-7) to the UI for real-time visualization of the UAV's *path shape*.
2. **Refined Pose (Pose_N^{Refined}) [Asynchronous]:** A globally-optimized, *metric-scale* 7-DoF pose. This is sent to the user *whenever the TOH re-converges* (e.g., after a new GAB anchor or a map merge). This *re-writes* the history of poses (e.g., Pose_{N-100} to Pose_N), meeting the refinement (AC-8) and accuracy (AC-1, AC-2) requirements.

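Since the requirements call for the whole system to run as a background service driven over ZeroMQ, a minimal service skeleton might look like the sketch below. The port numbers, socket pattern (REP for control, PUB for streaming results), and message fields are all assumptions for illustration; `run_pipeline` is a hypothetical stand-in for the ASTRAL core.

```python
# Sketch of a ZeroMQ background service streaming both output types of Section 2.4.
import json
import zmq

def run_pipeline(req):
    # Hypothetical placeholder: yields pose dicts as frames are processed.
    yield {"image": "IMG_0001", "kind": "est",
           "lat": req["start"][0], "lon": req["start"][1]}

ctx = zmq.Context()
control = ctx.socket(zmq.REP)
control.bind("tcp://*:5555")           # client sends the initial flight request here
results = ctx.socket(zmq.PUB)
results.bind("tcp://*:5556")           # client subscribes for streaming poses

request = json.loads(control.recv())   # e.g., {"images": "...", "start": [lat, lon]}
control.send_json({"status": "processing"})

for pose in run_pipeline(request):
    # pose = {"image": ..., "kind": "est" | "refined", "lat": ..., "lon": ...}
    results.send_json(pose)
```
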
## **3.0 Component 1: The Tiered Pre-Flight Geospatial Database (GDB)**

This component is the implementation of the "Tiered Geospatial" principle. It is a mandatory pre-flight utility that solves both the *legal* problem (Flaw 1.4) and the *accuracy* problem (Flaw 1.1).

### **3.2 Tier-1 (Baseline): Google Maps and GLO-30 DEM**

This tier provides the baseline capability and satisfies AC-1.

* **Visual Data:** Google Maps (coarse Maxar)
  * *Resolution:* 10m.
  * *Geodetic Accuracy:* ~1m to 20m.
  * *Purpose:* Meets AC-1 (80% < 50m error). Provides a robust baseline for coarse geolocalization.
* **Elevation Data:** Copernicus GLO-30 DEM
  * *Resolution:* 30m.
  * *Type:* DSM (Digital Surface Model).2 This is a *weakness*, as it includes buildings/trees.
  * *Purpose:* Provides a coarse altitude prior for the TOH and the initial GAB search.

### **3.3 Tier-2 (High-Accuracy): Ingestion Framework for Commercial Data**

This is the *procurement and integration framework* required to meet AC-2.

* **Visual Data:** Commercial providers, e.g., Maxar (30-50cm) or Satellogic (70cm)
  * *Resolution:* < 1m.
  * *Geodetic Accuracy:* Typically < 5m.
  * *Purpose:* Provides the high-resolution, high-accuracy reference needed for the GAB to achieve a sub-20m total error.
* **Elevation Data:** Commercial providers, e.g., WorldDEM Neo 5 or Elevation10.32
  * *Resolution:* 5m-12m.
  * *Vertical Accuracy:* < 4m.32
  * *Type:* DTM (Digital Terrain Model).32

The use of a DTM (bare earth) in Tier-2 is a critical advantage over the Tier-1 DSM (surface). The V-SLAM Front-End (Section 4.0) will triangulate a 3D point cloud of what it *sees*: the *ground* in fields or the *tree-tops* in forests. The Tier-1 GLO-30 DSM 2 represents the *top* of the canopy/buildings. If the V-SLAM maps the *ground* (e.g., altitude 100m) and the GAB tries to anchor it to a DSM *prior* that shows a forest (e.g., altitude 120m), the 20m altitude discrepancy will introduce significant error into the TOH. The Tier-2 DTM (bare earth) 5 provides a *vastly* superior altitude anchor, as it represents the same ground plane the V-SLAM is tracking, significantly improving the entire 7-DoF pose solution.

### **3.4 Local Database Generation: Pre-Computing Global Descriptors**

This is the key performance optimization for the GAB. During the pre-flight caching step, the GDB utility does not just *store* tiles; it *processes* them.

For *every* satellite tile (e.g., 256x256m) in the AOI, the utility loads the tile into the VPR model (e.g., SALAD 8), computes its global descriptor (a compact feature vector), and stores this vector in a high-speed vector index (e.g., FAISS).

This step moves 99% of the GAB's "Stage 1" (Coarse Retrieval) workload into an offline, pre-flight step. The *real-time* GAB query (Section 5.2) is thereby reduced to: (1) compute *one* vector for the UAV image, and (2) perform a very fast K-Nearest-Neighbor search on the pre-computed FAISS index. This is what makes a SOTA deep-learning GAB 6 fast enough to support the real-time refinement loop. A minimal index-building sketch follows.

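A sketch of the offline index build, assuming the descriptors for all AOI tiles have already been computed by the VPR model as a `(num_tiles, dim)` float32 array.

```python
# Offline build of the tile-descriptor index used by the GAB's Stage 1.
import numpy as np
import faiss

def build_tile_index(descriptors: np.ndarray) -> faiss.Index:
    """Cosine-similarity index over L2-normalized tile descriptors."""
    d = descriptors.shape[1]
    faiss.normalize_L2(descriptors)       # in-place; inner product == cosine
    index = faiss.IndexFlatIP(d)
    index.add(descriptors)
    return index

# Persist alongside the GeoPackage so the GAB can load it at startup:
# faiss.write_index(index, "aoi_tiles.faiss")
```
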
#### **Table 1: Geospatial Reference Data Analysis (Decision Matrix)**

| Data Product | Data Type | Resolution | Geodetic Accuracy (Horiz.) | Model Type | Cost | AC-2 (20m) Compliant? |
| :---- | :---- | :---- | :---- | :---- | :---- | :---- |
| Google Maps | Visual | 1m | 1m - 10m | N/A | Free | **Depends on the location** |
| Copernicus GLO-30 | Elevation | 30m | ~10-30m | **DSM** (Surface) | Free | **No (fails error budget)** |
| **Tier-2: Maxar/Satellogic** | Visual | 0.3m - 0.7m | < 5m (est.) | N/A | Commercial | **Yes** |
| **Tier-2: WorldDEM Neo** | Elevation | 5m | < 4m | **DTM** (Bare Earth) | Commercial | **Yes** |

## **4.0 Component 3: The "Atlas" Relative Motion Front-End**

This component's sole task is to robustly compute *unscaled* 6-DoF relative motion and to handle tracking failures (AC-3, AC-4).

### **4.1 Feature Matching Sub-System: SuperPoint + LightGlue**

The system will use **SuperPoint** for feature detection and **LightGlue** for matching. This choice is driven by the project's specific constraints:

* **Rationale (Robustness):** The UAV flies over the "eastern and southern parts of Ukraine," which include large, low-texture agricultural areas. SuperPoint is a SOTA deep-learning detector renowned for its robustness and repeatability in these challenging, low-texture environments.
* **Rationale (Performance):** The RTX 2060 (AC-7) is a *hard* constraint with only 6GB VRAM.34 Performance is paramount. LightGlue is a SOTA matcher that provides a 4-10x speedup over its predecessor, SuperGlue. Its "adaptive" nature is a key optimization: it exits early on "easy" pairs (high overlap, straight flight) and spends more compute only on "hard" pairs (turns). This saves critical GPU budget on the ~95% of normal frames, ensuring the <5s (AC-7) budget is met. A usage sketch follows this list.

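The sketch below assumes the reference LightGlue package (github.com/cvg/LightGlue); the keypoint budget, confidence value, and file names are illustrative rather than tuned settings.

```python
# Sketch of SuperPoint + LightGlue matching on two consecutive Image_N_LR frames.
import torch
from lightglue import LightGlue, SuperPoint
from lightglue.utils import load_image, rbd

extractor = SuperPoint(max_num_keypoints=2048).eval().cuda()
matcher = LightGlue(features="superpoint",
                    depth_confidence=0.95).eval().cuda()  # adaptive early exit

img0 = load_image("frame_0100.jpg").cuda()
img1 = load_image("frame_0101.jpg").cuda()

with torch.no_grad():
    feats0 = extractor.extract(img0)
    feats1 = extractor.extract(img1)
    out = matcher({"image0": feats0, "image1": feats1})

feats0, feats1, out = [rbd(x) for x in (feats0, feats1, out)]
matches = out["matches"]                 # (M, 2) indices into the two keypoint sets
pts0 = feats0["keypoints"][matches[:, 0]]
pts1 = feats1["keypoints"][matches[:, 1]]
```
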
Identify all potential weak points and problems. Address them and find ways to solve them. Based on your findings, form a new solution draft in the same format.

If your findings require a complete reorganization of the flow and different components, state it.
Put all the findings regarding what was weak and poor at the beginning of the report.
At the very beginning of the report, list the most profound changes you've made to the previous solution.

Then form a new solution design without referencing the previous system. Remove Poor and Very Poor component choices from the component analysis tables, but leave Good and Excellent ones.
In the updated report, do not put "new" marks; do not compare to the previous solution draft; just make a new solution as if from scratch.

@@ -1,375 +0,0 @@
Here is a solution draft:

# **ASTRAL-Next: A Resilient, GNSS-Denied Geo-Localization Architecture for Wing-Type UAVs in Complex Semantic Environments**

## **1. Executive Summary and Operational Context**

The strategic necessity of operating Unmanned Aerial Vehicles (UAVs) in Global Navigation Satellite System (GNSS)-denied environments has precipitated a fundamental shift in autonomous navigation research. The specific operational profile under analysis—high-speed, fixed-wing UAVs operating without Inertial Measurement Units (IMUs) over the visually homogenous and texture-repetitive terrain of Eastern and Southern Ukraine—presents a confluence of challenges that renders traditional Simultaneous Localization and Mapping (SLAM) approaches insufficient. The target environment, characterized by vast agricultural expanses, seasonal variability, and potential conflict-induced terrain alteration, demands a navigation architecture that moves beyond simple visual odometry to a robust, multi-layered Absolute Visual Localization (AVL) system.

This report articulates the design and theoretical validation of **ASTRAL-Next**, a comprehensive architectural framework engineered to supersede the limitations of preliminary dead-reckoning solutions. By synthesizing state-of-the-art (SOTA) research emerging in 2024 and 2025, specifically leveraging **LiteSAM** for efficient cross-view matching 1, **AnyLoc** for universal place recognition 2, and **SuperPoint+LightGlue** for robust sequential tracking 1, the proposed system addresses the critical failure modes inherent in wing-type UAV flight dynamics. These dynamics include sharp banking maneuvers, significant pitch variations leading to ground sampling distance (GSD) disparities, and the potential for catastrophic track loss (the "kidnapped robot" problem).

The analysis indicates that relying solely on sequential image overlap is viable only for short-term trajectory smoothing. The core innovation of ASTRAL-Next lies in its "Hierarchical + Anchor" topology, which decouples relative motion estimation from absolute global anchoring. This ensures that even during zero-overlap turns or 350-meter positional outliers caused by airframe tilt, the system can re-localize against a pre-cached satellite reference map within the required 5-second latency window.3 Furthermore, the system accounts for the semantic disconnect between live UAV imagery and potentially outdated satellite reference data (e.g., Google Maps) by prioritizing semantic geometry over pixel-level photometric consistency.

### **1.1 Operational Environment and Constraints Analysis**
The operational theater—specifically the left bank of the Dnipro River in Ukraine—imposes rigorous constraints on computer vision algorithms. The absence of IMU data removes the ability to directly sense acceleration and angular velocity, creating a scale ambiguity in monocular vision systems that must be resolved through external priors (altitude) and absolute reference data.
| Constraint Category | Specific Challenge | Implication for System Design |
| :---- | :---- | :---- |
| **Sensor Limitation** | **No IMU Data** | The system cannot distinguish between pure translation and camera rotation (pitch/roll) without visual references. Scale must be constrained via altitude priors and satellite matching.5 |
| **Flight Dynamics** | **Wing-Type UAV** | Unlike quadcopters, fixed-wing aircraft cannot hover. They bank to turn, causing horizon shifts and perspective distortions. "Sharp turns" result in 0% image overlap.6 |
| **Terrain Texture** | **Agricultural Fields** | Repetitive crop rows create aliasing for standard descriptors (SIFT/ORB). Feature matching requires context-aware deep learning methods (SuperPoint).7 |
| **Reference Data** | **Google Maps (2025)** | Public satellite data may be outdated or lower resolution than restricted military feeds. Matches must rely on invariant features (roads, tree lines) rather than ephemeral textures.9 |
| **Compute Hardware** | **NVIDIA RTX 2060/3070** | Algorithms must be optimized for TensorRT to meet the <5s per frame requirement. Heavy transformers (e.g., ViT-Huge) are prohibitive; efficient architectures (LiteSAM) are required.1 |
The confluence of these factors necessitates a move away from simple "dead reckoning" (accumulating relative movements), whose error grows without bound. Instead, ASTRAL-Next operates as a **Global-Local Hybrid System**, where a high-frequency visual odometry layer handles frame-to-frame continuity, while a parallel global localization layer periodically "resets" the drift by anchoring the UAV to the satellite map.
## **2. Architectural Critique of Legacy Approaches**
The initial draft solution ("ASTRAL") and similar legacy approaches typically rely on a unified SLAM pipeline, often attempting to use the same feature extractors for both sequential tracking and global localization. Recent literature highlights substantial deficiencies in this monolithic approach, particularly when applied to the specific constraints of this project.
### **2.1 The Failure of Classical Descriptors in Agricultural Settings**
Classical feature descriptors like SIFT (Scale-Invariant Feature Transform) and ORB (Oriented FAST and Rotated BRIEF) rely on detecting "corners" and "blobs" based on local pixel intensity gradients. In the agricultural landscapes of Eastern Ukraine, this approach faces severe aliasing. A field of sunflowers or wheat presents thousands of identical "blobs," causing the nearest-neighbor matching stage to generate a high ratio of outliers.8

Research demonstrates that deep-learning-based feature extractors, specifically SuperPoint, trained on large datasets of synthetic and real-world imagery, learn to identify interest points that are semantically significant (e.g., the intersection of a tractor path and a crop line) rather than just texturally distinct.1 Consequently, a redesign must replace SIFT/ORB with SuperPoint for the front-end tracking.
### **2.2 The Inadequacy of Dead Reckoning without IMU**
In a standard Visual-Inertial Odometry (VIO) system, the IMU provides a high-frequency prediction of the camera's pose, which the visual system then refines. Without an IMU, the system is purely Visual Odometry (VO). In VO, the scale of the world is unobservable from a single camera (monocular scale ambiguity). A 1-meter movement of a small object looks identical to a 10-meter movement of a large object.5

While the prompt specifies a "predefined altitude," relying on this as a static constant is dangerous due to terrain undulations and barometric drift. ASTRAL-Next must implement a Scale-Constrained Bundle Adjustment, treating the altitude not as a hard fact, but as a strong prior that prevents the scale drift common in monocular systems.5
### **2.3 Vulnerability to "Kidnapped Robot" Scenarios**
The requirement to recover from sharp turns where the "next photo doesn't overlap at all" describes the classic "Kidnapped Robot Problem" in robotics—where a robot is teleported to an unknown location and must relocalize.14

Sequential matching algorithms (optical flow, feature tracking) function on the assumption of overlap. When overlap is zero, these algorithms fail catastrophically. The legacy solution's reliance on continuous tracking makes it fragile to these flight dynamics. The redesigned architecture must incorporate a dedicated Global Place Recognition module that treats every frame as a potential independent query against the satellite database, independent of the previous frame's history.2
## **3. ASTRAL-Next: System Architecture and Methodology**
To meet the acceptance criteria—specifically the 80% success rate within 50m error and the <5 second processing time—ASTRAL-Next utilizes a tri-layer processing topology. These layers operate concurrently, feeding into a central state estimator.
### **3.1 The Tri-Layer Localization Strategy**
The architecture separates the concerns of continuity, recovery, and precision into three distinct algorithmic pathways.
| Layer | Functionality | Algorithm | Latency | Role in Acceptance Criteria |
| :---- | :---- | :---- | :---- | :---- |
| **L1: Sequential Tracking** | Frame-to-Frame Relative Pose | **SuperPoint + LightGlue** | ~50-100ms | Handles continuous flight, bridges small gaps (overlap < 5%), and maintains trajectory smoothness. Essential for the 100m spacing requirement. 1 |
| **L2: Global Re-Localization** | "Kidnapped Robot" Recovery | **AnyLoc (DINOv2 + VLAD)** | ~200ms | Detects location after sharp turns (0% overlap) or track loss. Matches current view to the satellite database tile. Addresses the sharp turn recovery criterion. 2 |
| **L3: Metric Refinement** | Precise GPS Anchoring | **LiteSAM / HLoc** | ~300-500ms | "Stitches" the UAV image to the satellite tile with pixel-level accuracy to reset drift. Ensures the "80% < 50m" and "60% < 20m" accuracy targets. 1 |
### **3.2 Data Flow and State Estimation**
The system utilizes a **Factor Graph Optimization** (using libraries like GTSAM) as the central "brain."
1. **Inputs:**
* **Relative Factors:** Provided by Layer 1 (Change in pose from $t-1$ to $t$).
* **Absolute Factors:** Provided by Layer 3 (Global GPS coordinate at $t$).
* **Priors:** Altitude constraint and Ground Plane assumption.
2. **Processing:** The factor graph optimizes the trajectory by minimizing the error between these conflicting constraints.
3. **Output:** A smoothed, globally consistent trajectory $(x, y, z, \text{roll}, \text{pitch}, \text{yaw})$ for every image timestamp.
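To make the fusion step concrete, below is a minimal sketch of this graph using GTSAM's Python bindings. It is illustrative only: the keys, poses, and noise sigmas are placeholders rather than tuned values, and a production graph would add one node per image and update incrementally with iSAM2.

```python
# Minimal factor-graph sketch (GTSAM Python bindings). All values are
# placeholders; sigma vectors are ordered (rot x3, trans x3) for Pose3.
import numpy as np
import gtsam

graph = gtsam.NonlinearFactorGraph()
initial = gtsam.Values()

# Prior: the known starting GPS, expressed in a local metric frame.
start = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(0.0, 0.0, 500.0))
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1, 0.1, 0.1, 5.0, 5.0, 10.0]))
graph.add(gtsam.PriorFactorPose3(0, start, prior_noise))
initial.insert(0, start)

# Relative factor from Layer 1 (frame-to-frame visual odometry, ~100 m spacing).
delta = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(95.0, 3.0, 0.0))
vo_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.05, 0.05, 0.05, 3.0, 3.0, 5.0]))
graph.add(gtsam.BetweenFactorPose3(0, 1, delta, vo_noise))
initial.insert(1, start.compose(delta))

# Absolute factor from Layer 3 (satellite anchor), deliberately looser.
anchor = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(96.0, 5.0, 500.0))
anchor_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.2, 0.2, 0.2, 15.0, 15.0, 20.0]))
graph.add(gtsam.PriorFactorPose3(1, anchor, anchor_noise))

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose3(1).translation())  # fused position for frame 1
```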
### **3.3 ZeroMQ Background Service Architecture**
As per the requirement, the system operates as a background service.
* **Communication Pattern:** The service utilizes a REQ-REP (request-reply) pattern for control commands (Start/Stop/Reset) and a PUB-SUB (publish-subscribe) pattern for the continuous stream of localization results.
* **Concurrency:** Layer 1 runs on a high-priority thread to ensure immediate feedback. Layers 2 and 3 run asynchronously; when a global match is found, the result is injected into the Factor Graph, which then "back-propagates" the correction to previous frames, refining the entire recent trajectory.
## **4. Layer 1: Robust Sequential Visual Odometry**
The first line of defense against localization loss is robust tracking between consecutive UAV images. Given the challenging agricultural environment, standard feature matching is prone to failure. ASTRAL-Next employs **SuperPoint** and **LightGlue**.
### **4.1 SuperPoint: Semantic Feature Detection**
SuperPoint is a fully convolutional neural network trained to detect interest points and compute their descriptors. Unlike SIFT, which uses handcrafted mathematics to find corners, SuperPoint is trained via self-supervision on millions of images.
* **Relevance to Ukraine:** In a wheat field, SIFT might latch onto hundreds of identical wheat stalks. SuperPoint, however, learns to prioritize more stable features, such as the boundary between the field and a dirt road, or a specific patch of discoloration in the crop canopy.1
* **Performance:** SuperPoint runs efficiently on the RTX 2060/3070, with inference times around 15ms per image when optimized with TensorRT.16
### **4.2 LightGlue: The Attention-Based Matcher**
**LightGlue** represents a paradigm shift from the traditional "Nearest Neighbor + RANSAC" matching pipeline. It is a deep neural network that takes two sets of SuperPoint features and jointly predicts the matches.
* **Mechanism:** LightGlue uses a transformer-based attention mechanism. It allows features in Image A to "look at" all features in Image B (and vice versa) to determine the best correspondence. Crucially, it predicts a matchability score for each point, explicitly rejecting points that have no match (occlusion or field-of-view change).12
* **Addressing the <5% Overlap:** The user specifies handling overlaps of "less than 5%." Traditional RANSAC fails here because the inlier ratio is too low. LightGlue, however, can confidently identify the few remaining matches because its attention mechanism considers the global geometric context of the points. If only a single road intersection is visible in the corner of both images, LightGlue is significantly more likely to match it correctly than SIFT.8
* **Efficiency:** LightGlue is designed to be "light." It features an adaptive depth mechanism—if the images are easy to match, it exits early. If they are hard (low overlap), it uses more layers. This adaptability is perfect for the variable difficulty of the UAV flight path.19
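A pairwise-matching sketch following the reference LightGlue package (https://github.com/cvg/LightGlue) is shown below; the image paths are placeholders, and the keypoint budget is an assumed setting rather than a tuned one.

```python
# Pairwise matching sketch with the SuperPoint + LightGlue reference package.
import torch
from lightglue import LightGlue, SuperPoint
from lightglue.utils import load_image, rbd

device = "cuda" if torch.cuda.is_available() else "cpu"
extractor = SuperPoint(max_num_keypoints=2048).eval().to(device)
matcher = LightGlue(features="superpoint").eval().to(device)

img0 = load_image("frame_0419.jpg").to(device)  # placeholder paths
img1 = load_image("frame_0420.jpg").to(device)

feats0 = extractor.extract(img0)
feats1 = extractor.extract(img1)
out = matcher({"image0": feats0, "image1": feats1})
feats0, feats1, out = rbd(feats0), rbd(feats1), rbd(out)  # drop batch dimension

matches = out["matches"]                    # (K, 2) indices into each keypoint set
pts0 = feats0["keypoints"][matches[:, 0]]
pts1 = feats1["keypoints"][matches[:, 1]]
# pts0/pts1 feed the relative-pose estimation (essential matrix or homography).
```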
## **5. Layer 2: Global Place Recognition (The "Kidnapped Robot" Solver)**
When the UAV executes a sharp turn, resulting in a completely new view (0% overlap), sequential tracking (Layer 1) is mathematically impossible. The system must recognize the new terrain solely based on its appearance. This is the domain of **AnyLoc**.
### **5.1 Universal Place Recognition with Foundation Models**
**AnyLoc** leverages **DINOv2**, a massive self-supervised vision transformer developed by Meta. DINOv2 is unique because it is not trained with labels; it is trained to understand the geometry and semantic layout of images.
* **Why DINOv2 for Satellite Matching:** Satellite images and UAV images have different "domains." The satellite image might be from summer (green), while the UAV flies in autumn (brown). DINOv2 features are remarkably invariant to these texture changes. It "sees" the shape of the road network or the layout of the field boundaries, rather than the color of the leaves.2
* **VLAD Aggregation:** AnyLoc extracts dense features from the image using DINOv2 and aggregates them using **VLAD** (Vector of Locally Aggregated Descriptors) into a single, compact vector (e.g., 4096 dimensions). This vector represents the "fingerprint" of the location.21
### **5.2 Implementation Strategy**
1. **Database Preparation:** Before the mission, the system downloads the satellite imagery for the operational bounding box (Eastern/Southern Ukraine). These images are tiled (e.g., 512x512 pixels with overlap) and processed through AnyLoc to generate a database of descriptors.
2. **Faiss Indexing:** These descriptors are indexed using **Faiss**, a library for efficient similarity search.
3. **In-Flight Retrieval:** When Layer 1 reports a loss of tracking (or periodically), the current UAV image is processed by AnyLoc. The resulting vector is queried against the Faiss index.
4. **Result:** The system retrieves the top-5 most similar satellite tiles. These tiles represent the coarse global location of the UAV (e.g., "You are in Grid Square B7").2
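A sketch of the offline indexing and in-flight retrieval with Faiss follows. The descriptor arrays are assumed to come from the AnyLoc pipeline upstream, and the 4096 dimension is the assumed descriptor size from the discussion above.

```python
# Offline indexing and in-flight top-k retrieval sketch with Faiss.
import numpy as np
import faiss

DIM = 4096  # assumed AnyLoc descriptor dimension

def build_index(tile_descriptors: np.ndarray) -> faiss.IndexFlatIP:
    """tile_descriptors: (N_tiles, DIM) float32, one row per satellite tile."""
    faiss.normalize_L2(tile_descriptors)      # cosine similarity via inner product
    index = faiss.IndexFlatIP(DIM)
    index.add(tile_descriptors)
    return index

def retrieve_top_k(index: faiss.IndexFlatIP, query_desc: np.ndarray, k: int = 5):
    q = query_desc.reshape(1, -1).astype(np.float32)
    faiss.normalize_L2(q)
    scores, tile_ids = index.search(q, k)
    return list(zip(tile_ids[0], scores[0]))  # [(tile_id, similarity), ...]
```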
## **6. Layer 3: Fine-Grained Metric Localization (LiteSAM)**
Retrieving the correct satellite tile (Layer 2) gives a location error of roughly the tile size (e.g., 200 meters). To meet the "60% < 20m" and "80% < 50m" criteria, the system must precisely align the UAV image onto the satellite tile. ASTRAL-Next utilizes **LiteSAM**.
### **6.1 Justification for LiteSAM over TransFG**
While **TransFG** (Transformer for Fine-Grained recognition) is a powerful architecture for cross-view geo-localization, it is computationally heavy.23 **LiteSAM** (Lightweight Satellite-Aerial Matching) is specifically architected for resource-constrained platforms (like UAV onboard computers or efficient ground stations) while maintaining state-of-the-art accuracy.
* **Architecture:** LiteSAM utilizes a **Token Aggregation-Interaction Transformer (TAIFormer)**. It employs a convolutional token mixer (CTM) to model correlations between the UAV and satellite images.
* **Multi-Scale Processing:** LiteSAM processes features at multiple scales. This is critical because the UAV altitude varies (<1km), meaning the scale of objects in the UAV image will not perfectly match the fixed scale of the satellite image (Google Maps Zoom Level 19). LiteSAM's multi-scale approach inherently handles this discrepancy.1
* **Performance Data:** Empirical benchmarks on the **UAV-VisLoc** dataset show LiteSAM achieving an RMSE@30 (Root Mean Square Error within 30 meters) of 17.86 meters, directly supporting the project's accuracy requirements. Its inference time is approximately 61.98ms on standard GPUs, ensuring it fits within the overall 5-second budget.1
### **6.2 The Alignment Process**
1. **Input:** The UAV Image and the Top-1 Satellite Tile from Layer 2.
2. **Processing:** LiteSAM computes the dense correspondence field between the two images.
3. **Homography Estimation:** Using the correspondences, the system computes a homography matrix $H$ that maps pixels in the UAV image to pixels in the georeferenced satellite tile.
4. **Pose Extraction:** The camera's absolute GPS position is derived from this homography, utilizing the known GSD of the satellite tile.18
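A sketch of steps 3-4 with OpenCV follows. Here `pixel_to_latlon` is a hypothetical callback implementing the tile's inverse georeferencing from Section 7.1, and the minimum-inlier threshold is illustrative.

```python
# Homography from LiteSAM correspondences, then the UAV-image center
# projected into the georeferenced satellite tile.
import cv2
import numpy as np

def image_center_to_gps(pts_uav, pts_sat, uav_shape, pixel_to_latlon):
    """pts_uav/pts_sat: (K, 2) matched pixel coordinates (UAV -> satellite tile)."""
    H, inliers = cv2.findHomography(pts_uav, pts_sat, cv2.RANSAC, 3.0)
    if H is None or inliers.sum() < 8:        # too few geometrically consistent matches
        return None
    h, w = uav_shape[:2]
    center = np.array([[[w / 2.0, h / 2.0]]], dtype=np.float32)
    sat_xy = cv2.perspectiveTransform(center, H)[0, 0]   # pixel in the tile
    return pixel_to_latlon(sat_xy[0], sat_xy[1])         # hypothetical Section 7.1 mapping
```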
## **7. Satellite Data Management and Coordinate Systems**
The reliability of the entire system hinges on the quality and handling of the reference map data. The restriction to "Google Maps" necessitates a rigorous approach to coordinate transformation and data freshness management.
### **7.1 Google Maps Static API and Mercator Projection**
The Google Maps Static API delivers images without embedded georeferencing metadata (GeoTIFF tags). The system must mathematically derive the bounding box of each downloaded tile to assign coordinates to the pixels. Google Maps uses the **Web Mercator Projection (EPSG:3857)**.

The system must implement the following derivation to establish the **Ground Sampling Distance (GSD)**, or meters_per_pixel, which varies significantly with latitude:
$$\text{meters\_per\_pixel} = 156543.03392 \times \frac{\cos(\text{latitude} \times \pi / 180)}{2^{\text{zoom}}}$$
For the operational region (Ukraine, approx. Latitude 48N):
* At **Zoom Level 19**, the resolution is approximately 0.20 meters/pixel (156543.03392 × cos 48° / 2^19; the often-quoted 0.30 m/pixel is the equatorial value before the latitude correction). This resolution is compatible with the input UAV imagery (Full HD at <1km altitude), providing sufficient detail for the LiteSAM matcher.24
**Bounding Box Calculation Algorithm:**
1. **Input:** Center Coordinate $(lat, lon)$, Zoom Level ($z$), Image Size $(w, h)$.
2. **Project to World Coordinates:** Convert $(lat, lon)$ to world pixel coordinates $(px, py)$ at the given zoom level.
3. **Corner Calculation:**
* px_{NW} = px - (w / 2)
* py_{NW} = py - (h / 2)
4. **Inverse Projection:** Convert $(px_{NW}, py_{NW})$ back to Latitude/Longitude to get the North-West corner. Repeat for the South-East corner.

This calculation is critical: a precision error here translates directly into a systematic bias in the final GPS output.
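A self-contained sketch of this math follows; it implements the GSD formula above and the forward/inverse Web Mercator projection used by the bounding-box algorithm. Constants follow the standard Google tiling scheme.

```python
# Section 7.1 math: GSD at a given latitude/zoom, plus the tile bounding box
# via forward/inverse Web Mercator (EPSG:3857) projection.
import math

TILE_WORLD = 256.0  # world size in pixels at zoom 0 (Google tiling scheme)

def meters_per_pixel(lat_deg: float, zoom: int) -> float:
    return 156543.03392 * math.cos(math.radians(lat_deg)) / (2 ** zoom)

def latlon_to_world_px(lat, lon, zoom):
    scale = TILE_WORLD * (2 ** zoom)
    x = (lon + 180.0) / 360.0 * scale
    siny = math.sin(math.radians(lat))
    y = (0.5 - math.log((1 + siny) / (1 - siny)) / (4 * math.pi)) * scale
    return x, y

def world_px_to_latlon(x, y, zoom):
    scale = TILE_WORLD * (2 ** zoom)
    lon = x / scale * 360.0 - 180.0
    n = math.pi * (1 - 2.0 * y / scale)
    lat = math.degrees(math.atan(math.sinh(n)))
    return lat, lon

def tile_bounding_box(center_lat, center_lon, zoom, w, h):
    px, py = latlon_to_world_px(center_lat, center_lon, zoom)
    nw = world_px_to_latlon(px - w / 2, py - h / 2, zoom)
    se = world_px_to_latlon(px + w / 2, py + h / 2, zoom)
    return nw, se

print(meters_per_pixel(48.0, 19))  # ~0.20 m/px over Ukraine at zoom 19
```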
### **7.2 Mitigating Data Obsolescence (The 2025 Problem)**
The provided research highlights that satellite imagery access over Ukraine is subject to restrictions and delays (e.g., Maxar restrictions in 2025).10 Google Maps data may be several years old.
* **Semantic Anchoring:** This reinforces the selection of **AnyLoc** (Layer 2) and **LiteSAM** (Layer 3). These algorithms are trained to ignore transient features (cars, temporary structures, vegetation color) and focus on persistent structural features (road geometry, building footprints).
* **Seasonality:** Research indicates that DINOv2 features (used in AnyLoc) exhibit strong robustness to seasonal changes (e.g., winter satellite map vs. summer UAV flight), maintaining high retrieval recall where pixel-based methods fail.17
## **8. Optimization and State Estimation (The "Brain")**
The individual outputs of the visual layers are noisy. Layer 1 drifts over time; Layer 3 may have occasional outliers. The **Factor Graph Optimization** fuses these inputs into a coherent trajectory.
### **8.1 Handling the 350-Meter Outlier (Tilt)**
The prompt specifies that "up to 350 meters of an outlier... could happen due to tilt." This large displacement masquerading as translation is a classic source of divergence in Kalman Filters.
* **Robust Cost Functions:** In the Factor Graph, the error terms for the visual factors are wrapped in a **Robust Kernel** (specifically the **Cauchy** or **Huber** kernel).
* *Mechanism:* Standard least-squares optimization penalizes errors quadratically ($e^2$). If a 350m error occurs, the penalty is massive, dragging the entire trajectory off-course. A robust kernel changes the penalty to be linear ($|e|$) or logarithmic after a certain threshold. This allows the optimizer to effectively "ignore" or down-weight the 350m jump if it contradicts the consensus of other measurements, treating it as a momentary outlier or solving for it as a rotation rather than a translation.19
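The effect is easy to see numerically. The toy computation below compares the quadratic and Huber costs on a 350 m residual; the 20 m threshold is illustrative, not a tuned value.

```python
# Why a robust kernel tames the 350 m outlier: beyond the threshold delta the
# Huber cost grows linearly, so the optimizer's effective weight collapses.
import numpy as np

def huber_cost(r, delta=20.0):
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r**2, delta * (a - 0.5 * delta))

def huber_weight(r, delta=20.0):
    a = np.abs(r)
    return np.where(a <= delta, 1.0, delta / a)   # IRLS down-weighting factor

residuals = np.array([2.0, 5.0, 350.0])           # meters; last one is the tilt outlier
print(0.5 * residuals**2)      # quadratic cost: [2, 12.5, 61250] -> outlier dominates
print(huber_cost(residuals))   # Huber cost:     [2, 12.5, 6800]  -> bounded growth
print(huber_weight(residuals)) # weights:        [1, 1, ~0.057]   -> outlier nearly ignored
```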
### **8.2 The Altitude Soft Constraint**
To resolve the monocular scale ambiguity without IMU, the altitude ($h_{prior}$) is added as a **Unary Factor** to the graph.
* $E_{alt} = \| z_{est} - h_{prior} \|_{\Sigma_{alt}}$
* $\Sigma_{alt}$ (covariance) is set relatively high (a soft constraint), allowing the visual odometry to adjust the altitude slightly to maintain consistency, but preventing the scale from collapsing to zero or exploding to infinity. This effectively creates an **altimeter-aided monocular VO** system, where the altimeter (virtual or barometric) replaces the accelerometer for scale determination.5
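One way to express this unary factor, continuing the GTSAM sketch from Section 3.2, is a Pose3 prior whose sigmas leave every dimension except altitude effectively unconstrained. The sigma values are illustrative assumptions.

```python
# Altitude soft constraint as a unary factor (sketch). Sigma ordering for
# Pose3 is (rot x3, x, y, z); huge sigmas make a dimension effectively free.
import numpy as np
import gtsam

def add_altitude_prior(graph: gtsam.NonlinearFactorGraph, key: int,
                       h_prior: float, sigma_z: float = 25.0) -> None:
    free = 1e6  # effectively unconstrained
    noise = gtsam.noiseModel.Diagonal.Sigmas(
        np.array([free, free, free, free, free, sigma_z]))
    target = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(0.0, 0.0, h_prior))
    graph.add(gtsam.PriorFactorPose3(key, target, noise))
```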
## **9. Implementation Specifications**
### **9.1 Hardware Acceleration (TensorRT)**
Meeting the <5 second per frame requirement on an RTX 2060 requires optimizing the deep learning models. Python/PyTorch inference is typically too slow due to overhead.
* **Model Export:** All core models (SuperPoint, LightGlue, LiteSAM) must be exported to **ONNX** (Open Neural Network Exchange) format.
* **TensorRT Compilation:** The ONNX models are then compiled into **TensorRT Engines**. This process performs graph fusion (combining multiple layers into one) and kernel auto-tuning (selecting the fastest GPU instructions for the specific RTX 2060/3070 architecture).26
* **Precision:** The models should be quantized to **FP16** (16-bit floating point). Research shows that FP16 inference on NVIDIA RTX cards offers a 2x-3x speedup with negligible loss in matching accuracy for these specific networks.16
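A sketch of the export step follows. The model file, input shape, and output names are placeholders, and the trtexec flags shown are the commonly used ones for FP16 engine builds; exact settings would need adjustment per model.

```python
# ONNX export of an extractor, then TensorRT engine compilation (sketch).
import torch

model = torch.jit.load("superpoint_scripted.pt").eval()   # placeholder model file
dummy = torch.randn(1, 1, 1024, 1024)                     # assumed grayscale input shape
torch.onnx.export(model, dummy, "superpoint.onnx",
                  input_names=["image"],
                  output_names=["keypoints", "descriptors"],
                  opset_version=17)

# Then compile the ONNX graph into an FP16 TensorRT engine (shell command):
#   trtexec --onnx=superpoint.onnx --fp16 --saveEngine=superpoint_fp16.engine
```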
### **9.2 Background Service Architecture (ZeroMQ)**
The system is encapsulated as a headless service.
**ZeroMQ Topology:**
* **Socket 1 (REP - Port 5555):** Command Interface. Accepts JSON messages:
* {"cmd": "START", "config": {"lat": 48.1, "lon": 37.5}}
* {"cmd": "USER_FIX", "lat": 48.22, "lon": 37.66} (Human-in-the-loop input).
* **Socket 2 (PUB - Port 5556):** Data Stream. Publishes JSON results for every frame:
* {"frame_id": 1024, "gps": [48.123, 37.123], "object_centers": [...], "status": "LOCKED", "confidence": 0.98}.
**Asynchronous Pipeline:** The system utilizes a Python multiprocessing architecture. One process handles the camera/image ingest and ZeroMQ communication; a second process hosts the TensorRT engines and runs the Factor Graph. This ensures that the heavy computation of Bundle Adjustment does not block the receipt of new images or user commands. A minimal skeleton of this topology follows below.
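The pyzmq skeleton below wires up the two sockets described above; `process_next_frame()` is a hypothetical hook into the localization pipeline, stubbed here so the sketch is self-contained.

```python
# ZeroMQ front-end skeleton: REP commands on 5555, PUB results on 5556.
import zmq

def process_next_frame():
    """Hypothetical pipeline hook; would return a result dict such as
    {"frame_id": 1024, "gps": [48.123, 37.123], "status": "LOCKED"} or None."""
    return None

ctx = zmq.Context()
cmd_sock = ctx.socket(zmq.REP)
cmd_sock.bind("tcp://*:5555")
pub_sock = ctx.socket(zmq.PUB)
pub_sock.bind("tcp://*:5556")

running = False
poller = zmq.Poller()
poller.register(cmd_sock, zmq.POLLIN)

while True:
    if dict(poller.poll(timeout=10)).get(cmd_sock):
        msg = cmd_sock.recv_json()
        if msg.get("cmd") == "START":
            running = True
        cmd_sock.send_json({"ack": msg.get("cmd")})
    if running:
        result = process_next_frame()
        if result is not None:
            pub_sock.send_json(result)   # streamed immediately, refined later
```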
## **10. Human-in-the-Loop Strategy**
The requirement stipulates that for the "20% of the route" where automation fails, the user must intervene. The system must proactively detect its own failure.
### **10.1 Failure Detection with PDM@K**
The system monitors the **PDM@K** (Positioning Distance Measurement) metric continuously.
* **Definition:** PDM@K measures the percentage of queries localized within $K$ meters.3
* **Real-Time Proxy:** In flight, we cannot know the true PDM (as we don't have ground truth). Instead, we use the **Marginal Covariance** from the Factor Graph. If the uncertainty ellipse for the current position grows larger than a radius of 50 meters, or if the **Image Registration Rate** (percentage of inliers in LightGlue/LiteSAM) drops below 10% for 3 consecutive frames, the system triggers a **Critical Failure Mode**.19
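A sketch of the trigger logic follows. The thresholds mirror the prose above, and the x/y marginal covariance is assumed to be supplied by the factor graph upstream.

```python
# Critical Failure Mode trigger: uncertainty radius + persistent low inliers.
from collections import deque
import numpy as np

RADIUS_LIMIT_M = 50.0
INLIER_FLOOR = 0.10
BAD_FRAMES = 3

_recent = deque(maxlen=BAD_FRAMES)

def critical_failure(position_cov_xy: np.ndarray, inlier_ratio: float) -> bool:
    """position_cov_xy: 2x2 marginal covariance of the x/y position, meters^2."""
    _recent.append(inlier_ratio)
    # 1-sigma radius of the position uncertainty ellipse (largest eigenvalue).
    radius = float(np.sqrt(np.linalg.eigvalsh(position_cov_xy)[-1]))
    low_inliers = len(_recent) == BAD_FRAMES and max(_recent) < INLIER_FLOOR
    return radius > RADIUS_LIMIT_M or low_inliers
```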
### **10.2 The User Interaction Workflow**
1. **Trigger:** Critical Failure Mode activated.
2. **Action:** The Service publishes a status {"status": "REQ_INPUT"} via ZeroMQ.
3. **Data Payload:** It sends the current UAV image and the top-3 retrieved satellite tiles (from Layer 2) to the client UI.
4. **User Input:** The user clicks a distinctive feature (e.g., a specific crossroad) in the UAV image and the corresponding point on the satellite map.
5. **Recovery:** This pair of points is treated as a **Hard Constraint** in the Factor Graph. The optimizer immediately snaps the trajectory to this user-defined anchor, resetting the covariance and effectively "healing" the localized track.19
## **11. Performance Evaluation and Benchmarks**
### **11.1 Accuracy Validation**
Based on the reported performance of the selected components in relevant datasets (UAV-VisLoc, AnyVisLoc):
* **LiteSAM** demonstrates an accuracy of 17.86m (RMSE) for cross-view matching. This aligns with the requirement that 60% of photos be within 20m error.18
* **AnyLoc** achieves high recall rates (Top-1 Recall > 85% on aerial benchmarks), supporting the recovery from sharp turns.2
* **Factor Graph Fusion:** By combining sequential and global measurements, the overall system error is expected to be lower than the individual component errors, satisfying the "80% within 50m" criterion.
### **11.2 Latency Analysis**
The breakdown of processing time per frame on an RTX 3070 is estimated as follows:
* **SuperPoint + LightGlue:** ~50ms.1
* **AnyLoc (Global Retrieval):** ~150ms (run only on keyframes or tracking loss).
* **LiteSAM (Metric Refinement):** ~60ms.1
* **Factor Graph Optimization:** ~100ms (using incremental updates/iSAM2).
* **Total:** ~360ms per frame (worst case with all layers active).

This is an order of magnitude faster than the 5-second limit, providing ample headroom for higher resolution processing or background tasks.
## **12. ASTRAL-Next Validation Plan and Acceptance Criteria Matrix**
A comprehensive test plan is required to validate compliance with all 10 Acceptance Criteria. The foundation is a **Ground-Truth Test Harness** using project-provided ground-truth data.
### **Table 4: ASTRAL Component vs. Acceptance Criteria Compliance Matrix**
| ID | Requirement | ASTRAL Solution (Component) | Key Technology / Justification |
| :---- | :---- | :---- | :---- |
| **AC-1** | 80% of photos < 50m error | GDB (C-1) + GAB (C-5) + TOH (C-6) | **Tier-1 (Copernicus)** data 1 is sufficient. SOTA VPR 8 + Sim(3) graph 13 can achieve this. |
| **AC-2** | 60% of photos < 20m error | GDB (C-1) + GAB (C-5) + TOH (C-6) | **Requires Tier-2 (Commercial) Data**.4 Mitigates reference error.3 **Per-Keyframe Scale** 15 model in TOH minimizes drift error. |
| **AC-3** | Robust to 350m outlier | V-SLAM (C-3) + TOH (C-6) | **Stage 2 Failure Logic** (7.3) discards the frame. **Robust M-Estimation** (6.3) in Ceres 14 automatically rejects the constraint. |
| **AC-4** | Robust to sharp turns (<5% overlap) | V-SLAM (C-3) + TOH (C-6) | **"Atlas" Multi-Map** (4.2) initializes a new map (Map_Fragment_k+1). **Geodetic Map-Merging** (6.4) in TOH re-connects fragments via GAB anchors. |
| **AC-5** | < 10% outlier anchors | TOH (C-6) | **Robust M-Estimation (Huber Loss)** (6.3) in Ceres 14 automatically down-weights and ignores high-residual (bad) GAB anchors. |
| **AC-6** | Connect route chunks; User input | V-SLAM (C-3) + TOH (C-6) + UI | **Geodetic Map-Merging** (6.4) connects chunks. **Stage 5 Failure Logic** (7.3) provides the user-input-as-prior mechanism. |
| **AC-7** | < 5 seconds processing/image | All Components | **Multi-Scale Pipeline** (5.3) (Low-Res V-SLAM, Hi-Res GAB patches). **Mandatory TensorRT Acceleration** (7.1) for 2-4x speedup.35 |
| **AC-8** | Real-time stream + async refinement | TOH (C-5) + Outputs (C-2.4) | Decoupled architecture provides Pose_N_Est (V-SLAM) in real-time and Pose_N_Refined (TOH) asynchronously as GAB anchors arrive. |
| **AC-9** | Image Registration Rate > 95% | V-SLAM (C-3) | **"Atlas" Multi-Map** (4.2). A "lost track" (AC-4) is *not* a registration failure; it's a *new map registration*. This ensures the rate > 95%. |
| **AC-10** | Mean Reprojection Error (MRE) < 1.0px | V-SLAM (C-3) + TOH (C-6) | Local BA (4.3) + Global BA (TOH14) + **Per-Keyframe Scale** (6.2) minimizes internal graph tension (Flaw 1.3), allowing the optimizer to converge to a low MRE. |
### **12.1 Rigorous Validation Methodology**
* **Test Harness:** A validation script will be created to compare the system's Pose_N^{Refined} output against a ground-truth coordinates.csv file, computing Haversine distance errors.
* **Test Datasets:**
* Test_Baseline: Standard flight.
* Test_Outlier_350m (AC-3): A single, unrelated image inserted.
* Test_Sharp_Turn_5pct (AC-4): A sequence with a 10-frame gap.
* Test_Long_Route (AC-9, AC-7): A 2000-image sequence.
* **Test Cases:**
* Test_Accuracy: Run Test_Baseline. ASSERT (count(errors < 50m) / total) >= 0.80 (AC-1). ASSERT (count(errors < 20m) / total) >= 0.60 (AC-2).
* Test_Robustness: Run Test_Outlier_350m and Test_Sharp_Turn_5pct. ASSERT system completes the run and Test_Accuracy assertions still pass on the valid frames.
* Test_Performance: Run Test_Long_Route on min-spec RTX 2060. ASSERT average_time(Pose_N^{Est} output) < 5.0s (AC-7).
* Test_MRE: ASSERT TOH.final_MRE < 1.0 (AC-10).
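A minimal sketch of the harness core behind the test cases above follows; the per-frame dictionaries and the mapping from the coordinates.csv columns are assumptions about the ground-truth data.

```python
# Test-harness core: Haversine errors against ground truth, then AC-1 / AC-2.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    r = 6371000.0  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def check_accuracy(estimates, ground_truth):
    """Both arguments: {frame_id: (lat, lon)} dicts (assumed schema)."""
    errors = [haversine_m(*estimates[f], *ground_truth[f]) for f in ground_truth]
    within = lambda m: sum(e < m for e in errors) / len(errors)
    assert within(50.0) >= 0.80, f"AC-1 failed: {within(50.0):.2%} < 80%"
    assert within(20.0) >= 0.60, f"AC-2 failed: {within(20.0):.2%} < 60%"
```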
Put all the findings about what was weak and poor at the beginning of the report. Put there all the new findings: what was updated, replaced, or removed from the previous solution.
Then form a new solution design without referencing the previous system. Remove Poor and Very Poor component choices from the component analysis tables, but leave Good and Excellent ones.
In the updated report, do not put "new" marks and do not compare to the previous solution draft; just present the new solution as if designed from scratch.
Also, investigate these ideas:
- A Cross-View Geo-Localization Algorithm Using UAV Image
https://www.mdpi.com/1424-8220/24/12/3719
- Exploring the best way for UAV visual localization under Low-altitude Multi-view Observation condition
https://arxiv.org/pdf/2503.10692
and find out more like this.
Assess them and try to either integrate them or replace some of the components in the current solution draft.
|
||||
@@ -1,362 +0,0 @@
|
||||
## The problem description
|
||||
We have a lot of images taken from a wing-type UAV using a camera with at least Full HD resolution. Resolution of each photo could be up to 6200*4100 for the whole flight, but for other flights, it could be FullHD. Photos are taken and named consecutively within 100 meters of each other.
|
||||
We know only the starting GPS coordinates. We need to determine the GPS of the centers of each image. And also the coordinates of the center of any object in these photos. We can use an external satellite provider for ground checks on the existing photos
|
||||
|
||||
## Data samples
|
||||
Are in attachments: images and csv
|
||||
|
||||
## Restrictions for the input data
|
||||
- Photos are taken by only airplane type UAVs.
|
||||
- Photos are taken by the camera pointing downwards and fixed, but it is not autostabilized.
|
||||
- The flying range is restricted by the eastern and southern parts of Ukraine (To the left of the Dnipro River)
|
||||
- The image resolution could be from FullHD to 6252*4168. Camera parameters are known: focal length, sensor width, resolution and so on.
|
||||
- Altitude is predefined and no more than 1km. The height of the terrain can be neglected.
|
||||
- There is NO data from IMU
|
||||
- Flights are done mostly in sunny weather
|
||||
- We can use satellite providers, but we're limited right now to Google Maps, which could be outdated for some regions
|
||||
- Number of photos could be up to 3000, usually in the 500-1500 range
|
||||
- During the flight, UAVs can make sharp turns, so that the next photo may be absolutely different from the previous one (no same objects), but it is rather an exception than the rule
|
||||
- Processing is done on a stationary computer or laptop with NVidia GPU at least RTX2060, better 3070. (For the UAV solution Jetson Orin Nano would be used, but that is out of scope.)
|
||||
|
||||
## Acceptance criteria for the output of the system:
|
||||
- The system should find out the GPS of centers of 80% of the photos from the flight within an error of no more than 50 meters in comparison to the real GPS
|
||||
|
||||
- The system should find out the GPS of centers of 60% of the photos from the flight within an error of no more than 20 meters in comparison to the real GPS
|
||||
|
||||
- The system should correctly continue the work even in the presence of up to 350 meters of an outlier photo between 2 consecutive pictures en route. This could happen due to tilt of the plane.
|
||||
|
||||
- System should correctly continue the work even during sharp turns, where the next photo doesn't overlap at all, or overlaps in less than 5%. The next photo should be in less than 200m drift and at an angle of less than 70%
|
||||
|
||||
- System should try to operate when UAV made a sharp turn, and all the next photos has no common points with previous route. In that situation system should try to figure out location of the new piece of the route and connect it to the previous route. Also this separate chunks could be more than 2, so this strategy should be in the core of the system
|
||||
|
||||
- In case of being absolutely incapable of determining the system to determine next, second next, and third next images GPS, by any means (these 20% of the route), then it should ask the user for input for the next image, so that the user can specify the location
|
||||
|
||||
- Less than 5 seconds for processing one image
|
||||
|
||||
- Results of image processing should appear immediately to user, so that user shouldn't wait for the whole route to complete in order to analyze first results. Also, system could refine existing calculated results and send refined results again to user
|
||||
|
||||
- Image Registration Rate > 95%. The system can find enough matching features to confidently calculate the camera's 6-DoF pose (position and orientation) and "stitch" that image into the final trajectory
|
||||
|
||||
- Mean Reprojection Error (MRE) < 1.0 pixels. The distance, in pixels, between the original pixel location of the object and the re-projected pixel location.
|
||||
|
||||
- The whole system should work as a background service. The interaction should be done by zeromq. Sevice should be up and running and awaiting for the initial input message. On the input message processing should started, and immediately after the first results system should provide them to the client
|
||||
|
||||
## Existing solution draft:
|
||||
|
||||
# **ASTRAL-Next: A Resilient, GNSS-Denied Geo-Localization Architecture for Wing-Type UAVs in Complex Semantic Environments**
|
||||
|
||||
## **1. Executive Summary and Operational Context**
|
||||
|
||||
The strategic necessity of operating Unmanned Aerial Vehicles (UAVs) in Global Navigation Satellite System (GNSS)-denied environments has precipitated a fundamental shift in autonomous navigation research. The specific operational profile under analysis—high-speed, fixed-wing UAVs operating without Inertial Measurement Units (IMU) over the visually homogenous and texture-repetitive terrain of Eastern and Southern Ukraine—presents a confluence of challenges that render traditional Simultaneous Localization and Mapping (SLAM) approaches insufficient. The target environment, characterized by vast agricultural expanses, seasonal variability, and potential conflict-induced terrain alteration, demands a navigation architecture that moves beyond simple visual odometry to a robust, multi-layered Absolute Visual Localization (AVL) system.
|
||||
|
||||
This report articulates the design and theoretical validation of **ASTRAL-Next**, a comprehensive architectural framework engineered to supersede the limitations of preliminary dead-reckoning solutions. By synthesizing state-of-the-art (SOTA) research emerging in 2024 and 2025, specifically leveraging **LiteSAM** for efficient cross-view matching 1, **AnyLoc** for universal place recognition 2, and **SuperPoint+LightGlue** for robust sequential tracking 1, the proposed system addresses the critical failure modes inherent in wing-type UAV flight dynamics. These dynamics include sharp banking maneuvers, significant pitch variations leading to ground sampling distance (GSD) disparities, and the potential for catastrophic track loss (the "kidnapped robot" problem).
|
||||
|
||||
The analysis indicates that relying solely on sequential image overlap is viable only for short-term trajectory smoothing. The core innovation of ASTRAL-Next lies in its "Hierarchical + Anchor" topology, which decouples the relative motion estimation from absolute global anchoring. This ensures that even during zero-overlap turns or 350-meter positional outliers caused by airframe tilt, the system can re-localize against a pre-cached satellite reference map within the required 5-second latency window.3 Furthermore, the system accounts for the semantic disconnect between live UAV imagery and potentially outdated satellite reference data (e.g., Google Maps) by prioritizing semantic geometry over pixel-level photometric consistency.
|
||||
|
||||
### **1.1 Operational Environment and Constraints Analysis**
|
||||
|
||||
The operational theater—specifically the left bank of the Dnipro River in Ukraine—imposes rigorous constraints on computer vision algorithms. The absence of IMU data removes the ability to directly sense acceleration and angular velocity, creating a scale ambiguity in monocular vision systems that must be resolved through external priors (altitude) and absolute reference data.
|
||||
|
||||
| Constraint Category | Specific Challenge | Implication for System Design |
|
||||
| :---- | :---- | :---- |
|
||||
| **Sensor Limitation** | **No IMU Data** | The system cannot distinguish between pure translation and camera rotation (pitch/roll) without visual references. Scale must be constrained via altitude priors and satellite matching.5 |
|
||||
| **Flight Dynamics** | **Wing-Type UAV** | Unlike quadcopters, fixed-wing aircraft cannot hover. They bank to turn, causing horizon shifts and perspective distortions. "Sharp turns" result in 0% image overlap.6 |
|
||||
| **Terrain Texture** | **Agricultural Fields** | Repetitive crop rows create aliasing for standard descriptors (SIFT/ORB). Feature matching requires context-aware deep learning methods (SuperPoint).7 |
|
||||
| **Reference Data** | **Google Maps (2025)** | Public satellite data may be outdated or lower resolution than restricted military feeds. Matches must rely on invariant features (roads, tree lines) rather than ephemeral textures.9 |
|
||||
| **Compute Hardware** | **NVIDIA RTX 2060/3070** | Algorithms must be optimized for TensorRT to meet the <5s per frame requirement. Heavy transformers (e.g., ViT-Huge) are prohibitive; efficient architectures (LiteSAM) are required.1 |
|
||||
|
||||
The confluence of these factors necessitates a move away from simple "dead reckoning" (accumulating relative movements) which drifts exponentially. Instead, ASTRAL-Next operates as a **Global-Local Hybrid System**, where a high-frequency visual odometry layer handles frame-to-frame continuity, while a parallel global localization layer periodically "resets" the drift by anchoring the UAV to the satellite map.
|
||||
|
||||
## **2. Architectural Critique of Legacy Approaches**
|
||||
|
||||
The initial draft solution ("ASTRAL") and similar legacy approaches typically rely on a unified SLAM pipeline, often attempting to use the same feature extractors for both sequential tracking and global localization. Recent literature highlights substantial deficiencies in this monolithic approach, particularly when applied to the specific constraints of this project.
|
||||
|
||||
### **2.1 The Failure of Classical Descriptors in Agricultural Settings**
|
||||
|
||||
Classical feature descriptors like SIFT (Scale-Invariant Feature Transform) and ORB (Oriented FAST and Rotated BRIEF) rely on detecting "corners" and "blobs" based on local pixel intensity gradients. In the agricultural landscapes of Eastern Ukraine, this approach faces severe aliasing. A field of sunflowers or wheat presents thousands of identical "blobs," causing the nearest-neighbor matching stage to generate a high ratio of outliers.8
|
||||
Research demonstrates that deep-learning-based feature extractors, specifically SuperPoint, trained on large datasets of synthetic and real-world imagery, learn to identify interest points that are semantically significant (e.g., the intersection of a tractor path and a crop line) rather than just texturally distinct.1 Consequently, a redesign must replace SIFT/ORB with SuperPoint for the front-end tracking.
|
||||
|
||||
### **2.2 The Inadequacy of Dead Reckoning without IMU**
|
||||
|
||||
In a standard Visual-Inertial Odometry (VIO) system, the IMU provides a high-frequency prediction of the camera's pose, which the visual system then refines. Without an IMU, the system is purely Visual Odometry (VO). In VO, the scale of the world is unobservable from a single camera (monocular scale ambiguity). A 1-meter movement of a small object looks identical to a 10-meter movement of a large object.5
|
||||
While the prompt specifies a "predefined altitude," relying on this as a static constant is dangerous due to terrain undulations and barometric drift. ASTRAL-Next must implement a Scale-Constrained Bundle Adjustment, treating the altitude not as a hard fact, but as a strong prior that prevents the scale drift common in monocular systems.5
|
||||
|
||||
### **2.3 Vulnerability to "Kidnapped Robot" Scenarios**
|
||||
|
||||
The requirement to recover from sharp turns where the "next photo doesn't overlap at all" describes the classic "Kidnapped Robot Problem" in robotics—where a robot is teleported to an unknown location and must relocalize.14
|
||||
Sequential matching algorithms (optical flow, feature tracking) function on the assumption of overlap. When overlap is zero, these algorithms fail catastrophically. The legacy solution's reliance on continuous tracking makes it fragile to these flight dynamics. The redesigned architecture must incorporate a dedicated Global Place Recognition module that treats every frame as a potential independent query against the satellite database, independent of the previous frame's history.2
|
||||
|
||||
## **3. ASTRAL-Next: System Architecture and Methodology**
|
||||
|
||||
To meet the acceptance criteria—specifically the 80% success rate within 50m error and the <5 second processing time—ASTRAL-Next utilizes a tri-layer processing topology. These layers operate concurrently, feeding into a central state estimator.
|
||||
|
||||
### **3.1 The Tri-Layer Localization Strategy**
|
||||
|
||||
The architecture separates the concerns of continuity, recovery, and precision into three distinct algorithmic pathways.
|
||||
|
||||
| Layer | Functionality | Algorithm | Latency | Role in Acceptance Criteria |
|
||||
| :---- | :---- | :---- | :---- | :---- |
|
||||
| **L1: Sequential Tracking** | Frame-to-Frame Relative Pose | **SuperPoint + LightGlue** | \~50-100ms | Handles continuous flight, bridges small gaps (overlap < 5%), and maintains trajectory smoothness. Essential for the 100m spacing requirement. 1 |
|
||||
| **L2: Global Re-Localization** | "Kidnapped Robot" Recovery | **AnyLoc (DINOv2 + VLAD)** | \~200ms | Detects location after sharp turns (0% overlap) or track loss. Matches current view to the satellite database tile. Addresses the sharp turn recovery criterion. 2 |
|
||||
| **L3: Metric Refinement** | Precise GPS Anchoring | **LiteSAM / HLoc** | \~300-500ms | "Stitches" the UAV image to the satellite tile with pixel-level accuracy to reset drift. Ensures the "80% < 50m" and "60% < 20m" accuracy targets. 1 |
|
||||
|
||||
### **3.2 Data Flow and State Estimation**
|
||||
|
||||
The system utilizes a **Factor Graph Optimization** (using libraries like GTSAM) as the central "brain."
|
||||
|
||||
1. **Inputs:**
|
||||
* **Relative Factors:** Provided by Layer 1 (Change in pose from $t-1$ to $t$).
|
||||
* **Absolute Factors:** Provided by Layer 3 (Global GPS coordinate at $t$).
|
||||
* **Priors:** Altitude constraint and Ground Plane assumption.
|
||||
2. **Processing:** The factor graph optimizes the trajectory by minimizing the error between these conflicting constraints.
|
||||
3. **Output:** A smoothed, globally consistent trajectory $(x, y, z, \\text{roll}, \\text{pitch}, \\text{yaw})$ for every image timestamp.
|
||||
|
||||
### **3.3 ZeroMQ Background Service Architecture**
|
||||
|
||||
As per the requirement, the system operates as a background service.
|
||||
|
||||
* **Communication Pattern:** The service utilizes a REP-REQ (Reply-Request) pattern for control commands (Start/Stop/Reset) and a PUB-SUB (Publish-Subscribe) pattern for the continuous stream of localization results.
|
||||
* **Concurrency:** Layer 1 runs on a high-priority thread to ensure immediate feedback. Layers 2 and 3 run asynchronously; when a global match is found, the result is injected into the Factor Graph, which then "back-propagates" the correction to previous frames, refining the entire recent trajectory.
|
||||
|
||||
## **4. Layer 1: Robust Sequential Visual Odometry**
|
||||
|
||||
The first line of defense against localization loss is robust tracking between consecutive UAV images. Given the challenging agricultural environment, standard feature matching is prone to failure. ASTRAL-Next employs **SuperPoint** and **LightGlue**.
|
||||
|
||||
### **4.1 SuperPoint: Semantic Feature Detection**
|
||||
|
||||
SuperPoint is a fully convolutional neural network trained to detect interest points and compute their descriptors. Unlike SIFT, which uses handcrafted mathematics to find corners, SuperPoint is trained via self-supervision on millions of images.
|
||||
|
||||
* **Relevance to Ukraine:** In a wheat field, SIFT might latch onto hundreds of identical wheat stalks. SuperPoint, however, learns to prioritize more stable features, such as the boundary between the field and a dirt road, or a specific patch of discoloration in the crop canopy.1
|
||||
* **Performance:** SuperPoint runs efficiently on the RTX 2060/3070, with inference times around 15ms per image when optimized with TensorRT.16
|
||||
|
||||
### **4.2 LightGlue: The Attention-Based Matcher**
|
||||
|
||||
**LightGlue** represents a paradigm shift from the traditional "Nearest Neighbor + RANSAC" matching pipeline. It is a deep neural network that takes two sets of SuperPoint features and jointly predicts the matches.
|
||||
|
||||
* **Mechanism:** LightGlue uses a transformer-based attention mechanism. It allows features in Image A to "look at" all features in Image B (and vice versa) to determine the best correspondence. Crucially, it has a "dustbin" mechanism to explicitly reject points that have no match (occlusion or field of view change).12
|
||||
* **Addressing the <5% Overlap:** The user specifies handling overlaps of "less than 5%." Traditional RANSAC fails here because the inlier ratio is too low. LightGlue, however, can confidently identify the few remaining matches because its attention mechanism considers the global geometric context of the points. If only a single road intersection is visible in the corner of both images, LightGlue is significantly more likely to match it correctly than SIFT.8
|
||||
* **Efficiency:** LightGlue is designed to be "light." It features an adaptive depth mechanism—if the images are easy to match, it exits early. If they are hard (low overlap), it uses more layers. This adaptability is perfect for the variable difficulty of the UAV flight path.19
|
||||
|
||||
## **5. Layer 2: Global Place Recognition (The "Kidnapped Robot" Solver)**
|
||||
|
||||
When the UAV executes a sharp turn, resulting in a completely new view (0% overlap), sequential tracking (Layer 1) is mathematically impossible. The system must recognize the new terrain solely based on its appearance. This is the domain of **AnyLoc**.
|
||||
|
||||
### **5.1 Universal Place Recognition with Foundation Models**
|
||||
|
||||
**AnyLoc** leverages **DINOv2**, a massive self-supervised vision transformer developed by Meta. DINOv2 is unique because it is not trained with labels; it is trained to understand the geometry and semantic layout of images.
|
||||
|
||||
* **Why DINOv2 for Satellite Matching:** Satellite images and UAV images have different "domains." The satellite image might be from summer (green), while the UAV flies in autumn (brown). DINOv2 features are remarkably invariant to these texture changes. It "sees" the shape of the road network or the layout of the field boundaries, rather than the color of the leaves.2
|
||||
* **VLAD Aggregation:** AnyLoc extracts dense features from the image using DINOv2 and aggregates them using **VLAD** (Vector of Locally Aggregated Descriptors) into a single, compact vector (e.g., 4096 dimensions). This vector represents the "fingerprint" of the location.21
|
||||
|
||||
### **5.2 Implementation Strategy**
|
||||
|
||||
1. **Database Preparation:** Before the mission, the system downloads the satellite imagery for the operational bounding box (Eastern/Southern Ukraine). These images are tiled (e.g., 512x512 pixels with overlap) and processed through AnyLoc to generate a database of descriptors.
|
||||
2. **Faiss Indexing:** These descriptors are indexed using **Faiss**, a library for efficient similarity search.
|
||||
3. **In-Flight Retrieval:** When Layer 1 reports a loss of tracking (or periodically), the current UAV image is processed by AnyLoc. The resulting vector is queried against the Faiss index.
|
||||
4. **Result:** The system retrieves the top-5 most similar satellite tiles. These tiles represent the coarse global location of the UAV (e.g., "You are in Grid Square B7").2
|
||||
|
||||
## **6. Layer 3: Fine-Grained Metric Localization (LiteSAM)**
|
||||
|
||||
Retrieving the correct satellite tile (Layer 2) gives a location error of roughly the tile size (e.g., 200 meters). To meet the "60% < 20m" and "80% < 50m" criteria, the system must precisely align the UAV image onto the satellite tile. ASTRAL-Next utilizes **LiteSAM**.
|
||||
|
||||
### **6.1 Justification for LiteSAM over TransFG**
|
||||
|
||||
While **TransFG** (Transformer for Fine-Grained recognition) is a powerful architecture for cross-view geo-localization, it is computationally heavy.23 **LiteSAM** (Lightweight Satellite-Aerial Matching) is specifically architected for resource-constrained platforms (like UAV onboard computers or efficient ground stations) while maintaining state-of-the-art accuracy.
|
||||
|
||||
* **Architecture:** LiteSAM utilizes a **Token Aggregation-Interaction Transformer (TAIFormer)**. It employs a convolutional token mixer (CTM) to model correlations between the UAV and satellite images.
|
||||
* **Multi-Scale Processing:** LiteSAM processes features at multiple scales. This is critical because the UAV altitude varies (<1km), meaning the scale of objects in the UAV image will not perfectly match the fixed scale of the satellite image (Google Maps Zoom Level 19). LiteSAM's multi-scale approach inherently handles this discrepancy.1
|
||||
* **Performance Data:** Empirical benchmarks on the **UAV-VisLoc** dataset show LiteSAM achieving an RMSE@30 (Root Mean Square Error within 30 meters) of 17.86 meters, directly supporting the project's accuracy requirements. Its inference time is approximately 61.98ms on standard GPUs, ensuring it fits within the overall 5-second budget.1
|
||||
|
||||
### **6.2 The Alignment Process**

1. **Input:** The UAV Image and the Top-1 Satellite Tile from Layer 2.

2. **Processing:** LiteSAM computes the dense correspondence field between the two images.

3. **Homography Estimation:** Using the correspondences, the system computes a homography matrix $H$ that maps pixels in the UAV image to pixels in the georeferenced satellite tile.

4. **Pose Extraction:** The camera's absolute GPS position is derived from this homography, utilizing the known GSD of the satellite tile.18
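As a minimal sketch of steps 2-4, assuming LiteSAM has already produced matched point arrays (loaded here from hypothetical files) and that the tile's NW corner and GSD come from the Section 7.1 metadata; the final flat-earth degree conversion is an approximation, and a production system would invert the Web Mercator projection instead:

```python
import cv2
import numpy as np

# Assumed inputs: (N, 2) float32 arrays of matched pixels, N >= 4.
uav_pts = np.load("uav_matches.npy")
sat_pts = np.load("sat_matches.npy")

# Robust homography from UAV pixels to georeferenced tile pixels.
H, inliers = cv2.findHomography(uav_pts, sat_pts, cv2.RANSAC, 3.0)

# Transfer the UAV image centre into tile pixel coordinates.
w_img, h_img = 6252, 4168                        # example UAV frame size
center = np.float32([[[w_img / 2, h_img / 2]]])  # shape (1, 1, 2) for OpenCV
px, py = cv2.perspectiveTransform(center, H)[0, 0]

# Tile pixels -> WGS84 (nw_lat, nw_lon, gsd_m assumed from tile metadata).
nw_lat, nw_lon, gsd_m = 48.20, 37.50, 0.20       # placeholder values
lat = nw_lat - (py * gsd_m) / 111320.0
lon = nw_lon + (px * gsd_m) / (111320.0 * np.cos(np.radians(nw_lat)))
```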
## **7. Satellite Data Management and Coordinate Systems**

The reliability of the entire system hinges on the quality and handling of the reference map data. The restriction to "Google Maps" necessitates a rigorous approach to coordinate transformation and data freshness management.

### **7.1 Google Maps Static API and Mercator Projection**

The Google Maps Static API delivers images without embedded georeferencing metadata (GeoTIFF tags). The system must mathematically derive the bounding box of each downloaded tile to assign coordinates to the pixels. Google Maps uses the **Web Mercator Projection (EPSG:3857)**.

The system must implement the following derivation to establish the **Ground Sampling Distance (GSD)**, or meters_per_pixel, which varies significantly with latitude:
$$ \text{meters\_per\_pixel} = 156543.03392 \times \frac{\cos(\text{latitude} \times \frac{\pi}{180})}{2^{\text{zoom}}} $$
For the operational region (Ukraine, approx. Latitude 48N):

* At **Zoom Level 19**, the resolution is approximately 0.20 meters/pixel (the equatorial value of ~0.30 m/px scaled by cos 48° ≈ 0.67). This resolution is compatible with the input UAV imagery (Full HD at <1km altitude), providing sufficient detail for the LiteSAM matcher.24
**Bounding Box Calculation Algorithm:**

1. **Input:** Center Coordinate $(lat, lon)$, Zoom Level ($z$), Image Size $(w, h)$.

2. **Project to World Coordinates:** Convert $(lat, lon)$ to world pixel coordinates $(px, py)$ at the given zoom level.

3. **Corner Calculation:**

   * $px_{NW} = px - w/2$

   * $py_{NW} = py - h/2$

4. **Inverse Projection:** Convert $(px_{NW}, py_{NW})$ back to Latitude/Longitude to get the North-West corner. Repeat for South-East.
This calculation is critical. A precision error here translates directly to a systematic bias in the final GPS output.
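A compact implementation of this derivation, using the standard Web Mercator world-pixel math for 256-pixel tiles (function names are illustrative):

```python
import math

TILE_SIZE = 256

def latlon_to_world_px(lat, lon, zoom):
    """Project WGS84 to Web Mercator (EPSG:3857) world pixel coordinates."""
    scale = TILE_SIZE * (2 ** zoom)
    x = (lon + 180.0) / 360.0 * scale
    siny = math.sin(math.radians(lat))
    y = (0.5 - math.log((1 + siny) / (1 - siny)) / (4 * math.pi)) * scale
    return x, y

def world_px_to_latlon(x, y, zoom):
    """Inverse projection: world pixels back to WGS84."""
    scale = TILE_SIZE * (2 ** zoom)
    lon = x / scale * 360.0 - 180.0
    lat = math.degrees(math.atan(math.sinh(math.pi * (1 - 2 * y / scale))))
    return lat, lon

def tile_bounding_box(lat, lon, zoom, w, h):
    """NW and SE corners of a w x h image centred on (lat, lon)."""
    px, py = latlon_to_world_px(lat, lon, zoom)
    nw = world_px_to_latlon(px - w / 2, py - h / 2, zoom)
    se = world_px_to_latlon(px + w / 2, py + h / 2, zoom)
    return nw, se
```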
### **7.2 Mitigating Data Obsolescence (The 2025 Problem)**

The provided research highlights that satellite imagery access over Ukraine is subject to restrictions and delays (e.g., Maxar restrictions in 2025).10 Google Maps data may be several years old.

* **Semantic Anchoring:** This reinforces the selection of **AnyLoc** (Layer 2) and **LiteSAM** (Layer 3). These algorithms are trained to ignore transient features (cars, temporary structures, vegetation color) and focus on persistent structural features (road geometry, building footprints).

* **Seasonality:** Research indicates that DINOv2 features (used in AnyLoc) exhibit strong robustness to seasonal changes (e.g., winter satellite map vs. summer UAV flight), maintaining high retrieval recall where pixel-based methods fail.17
## **8. Optimization and State Estimation (The "Brain")**

The individual outputs of the visual layers are noisy. Layer 1 drifts over time; Layer 3 may have occasional outliers. The **Factor Graph Optimization** fuses these inputs into a coherent trajectory.

### **8.1 Handling the 350-Meter Outlier (Tilt)**

The prompt specifies that "up to 350 meters of an outlier... could happen due to tilt." This large displacement masquerading as translation is a classic source of divergence in Kalman Filters.
* **Robust Cost Functions:** In the Factor Graph, the error terms for the visual factors are wrapped in a **Robust Kernel** (specifically the **Cauchy** or **Huber** kernel).

* *Mechanism:* Standard least-squares optimization penalizes errors quadratically ($e^2$). If a 350m error occurs, the penalty is massive, dragging the entire trajectory off-course. A robust kernel changes the penalty to be linear ($|e|$) or logarithmic after a certain threshold. This allows the optimizer to effectively "ignore" or down-weight the 350m jump if it contradicts the consensus of other measurements, treating it as a momentary outlier or solving for it as a rotation rather than a translation.19
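As an illustration only, a Huber-robustified odometry factor in GTSAM's Python bindings could look like the sketch below; the sigmas and keys are placeholders, and Ceres users would reach the same effect with ceres::HuberLoss:

```python
import numpy as np
import gtsam

# Base (Gaussian) noise for a visual odometry constraint.
# Pose3 order: 3 rotation sigmas (rad), then 3 translation sigmas (m).
base = gtsam.noiseModel.Diagonal.Sigmas(
    np.array([0.1, 0.1, 0.1, 5.0, 5.0, 10.0]))

# Wrap it in a Huber kernel: quadratic below the threshold, linear above.
huber = gtsam.noiseModel.mEstimator.Huber.Create(1.345)
robust = gtsam.noiseModel.Robust.Create(huber, base)

# A 350 m jump in this factor now contributes a bounded, down-weighted
# residual instead of a quadratic penalty that drags the trajectory.
odom = gtsam.BetweenFactorPose3(gtsam.symbol('x', 10), gtsam.symbol('x', 11),
                                gtsam.Pose3(), robust)
```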
### **8.2 The Altitude Soft Constraint**

To resolve the monocular scale ambiguity without IMU, the altitude ($h_{prior}$) is added as a **Unary Factor** to the graph.
* $E_{alt} = \| z_{est} - h_{prior} \|_{\Sigma_{alt}}$
* $\Sigma_{alt}$ (covariance) is set relatively high (soft constraint), allowing the visual odometry to adjust the altitude slightly to maintain consistency, but preventing the scale from collapsing to zero or exploding to infinity. This effectively creates an **Altimeter-Aided Monocular VIO** system, where the altimeter (virtual or barometric) replaces the accelerometer for scale determination.5
## **9. Implementation Specifications**

### **9.1 Hardware Acceleration (TensorRT)**

Meeting the <5 second per frame requirement on an RTX 2060 requires optimizing the deep learning models. Eager PyTorch inference carries significant framework and kernel-launch overhead, leaving too little headroom.
* **Model Export:** All core models (SuperPoint, LightGlue, LiteSAM) must be exported to **ONNX** (Open Neural Network Exchange) format.

* **TensorRT Compilation:** The ONNX models are then compiled into **TensorRT Engines**. This process performs graph fusion (combining multiple layers into one) and kernel auto-tuning (selecting the fastest GPU instructions for the specific RTX 2060/3070 architecture).26

* **Precision:** The models should be quantized to **FP16** (16-bit floating point). Research shows that FP16 inference on NVIDIA RTX cards offers a 2x-3x speedup with negligible loss in matching accuracy for these specific networks.16
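A sketch of the export step, assuming a hypothetical load_superpoint() loader; the TensorRT compilation itself is usually done offline with the bundled trtexec tool (shown in the trailing comment):

```python
import torch

# Hypothetical loader; each of the pipeline's models follows the same pattern.
model = load_superpoint().eval().cuda()
dummy = torch.randn(1, 1, 1024, 1024, device="cuda")  # grayscale input

torch.onnx.export(
    model, dummy, "superpoint.onnx",
    input_names=["image"],
    output_names=["keypoints", "descriptors"],
    dynamic_axes={"image": {2: "height", 3: "width"}},  # variable resolution
    opset_version=17,
)

# Then compile offline into an FP16 engine with TensorRT's CLI:
#   trtexec --onnx=superpoint.onnx --saveEngine=superpoint_fp16.engine --fp16
```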
### **9.2 Background Service Architecture (ZeroMQ)**

The system is encapsulated as a headless service.

**ZeroMQ Topology:**
* **Socket 1 (REP - Port 5555):** Command Interface. Accepts JSON messages:

  * {"cmd": "START", "config": {"lat": 48.1, "lon": 37.5}}

  * {"cmd": "USER_FIX", "lat": 48.22, "lon": 37.66} (Human-in-the-loop input).

* **Socket 2 (PUB - Port 5556):** Data Stream. Publishes JSON results for every frame:

  * {"frame_id": 1024, "gps": [48.123, 37.123], "object_centers": [...], "status": "LOCKED", "confidence": 0.98}.
**Asynchronous Pipeline:**

The system utilizes a Python multiprocessing architecture. One process handles the camera/image ingest and ZeroMQ communication. A second process hosts the TensorRT engines and runs the Factor Graph. This ensures that the heavy computation of Bundle Adjustment does not block the receipt of new images or user commands.
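A minimal pyzmq skeleton of this topology; the pipeline hand-off is stubbed out, since in the real service a worker process feeds results through a queue:

```python
import json
import zmq

ctx = zmq.Context()

rep = ctx.socket(zmq.REP)    # Socket 1: command interface
rep.bind("tcp://*:5555")

pub = ctx.socket(zmq.PUB)    # Socket 2: result stream
pub.bind("tcp://*:5556")

while True:
    cmd = json.loads(rep.recv())        # e.g. {"cmd": "START", "config": {...}}
    rep.send_json({"ack": cmd["cmd"]})  # REP sockets must answer every request

    if cmd["cmd"] == "START":
        # Placeholder: the real pipeline runs in a worker process and pushes
        # results through a multiprocessing.Queue that this loop drains.
        pub.send_json({"frame_id": 0,
                       "gps": [cmd["config"]["lat"], cmd["config"]["lon"]],
                       "object_centers": [],
                       "status": "LOCKED",
                       "confidence": 1.0})
```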
## **10. Human-in-the-Loop Strategy**

The requirement stipulates that for the "20% of the route" where automation fails, the user must intervene. The system must proactively detect its own failure.

### **10.1 Failure Detection with PDM@K**

The system monitors the **PDM@K** (Positioning Distance Measurement) metric continuously.

* **Definition:** PDM@K measures the percentage of queries localized within $K$ meters.3

* **Real-Time Proxy:** In flight, we cannot know the true PDM (as we don't have ground truth). Instead, we use the **Marginal Covariance** from the Factor Graph. If the uncertainty ellipse for the current position grows larger than a radius of 50 meters, or if the **Image Registration Rate** (percentage of inliers in LightGlue/LiteSAM) drops below 10% for 3 consecutive frames, the system triggers a **Critical Failure Mode**.19
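A sketch of this trigger logic, assuming a GTSAM factor graph and a rolling list of per-frame inlier rates maintained by the matcher:

```python
import numpy as np
import gtsam

def critical_failure(graph, values, pose_key, inlier_rates,
                     radius_m=50.0, min_rate=0.10):
    """Proxy for PDM@K in flight: covariance radius + registration rate."""
    cov = gtsam.Marginals(graph, values).marginalCovariance(pose_key)
    # Pose3 covariance orders rotation (rows 0-2) before translation (3-5).
    sigma_xy = float(np.sqrt(max(cov[3, 3], cov[4, 4])))
    starved = (len(inlier_rates) >= 3
               and all(r < min_rate for r in inlier_rates[-3:]))
    return sigma_xy > radius_m or starved
```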
### **10.2 The User Interaction Workflow**

1. **Trigger:** Critical Failure Mode activated.

2. **Action:** The Service publishes a status {"status": "REQ_INPUT"} via ZeroMQ.

3. **Data Payload:** It sends the current UAV image and the top-3 retrieved satellite tiles (from Layer 2) to the client UI.

4. **User Input:** The user clicks a distinctive feature (e.g., a specific crossroad) in the UAV image and the corresponding point on the satellite map.

5. **Recovery:** This pair of points is treated as a **Hard Constraint** in the Factor Graph. The optimizer immediately snaps the trajectory to this user-defined anchor, resetting the covariance and effectively "healing" the localized track.19
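A sketch of the recovery step under the same GTSAM assumption; converting the clicked pixel pair into a metric user_pose happens upstream and is taken as given here:

```python
import numpy as np
import gtsam

def apply_user_fix(graph, pose_key, user_pose):
    """Inject the user's clicked correspondence as a near-rigid unary prior.

    user_pose is a gtsam.Pose3 built from the clicked map point (hypothetical
    upstream conversion); tight sigmas make it behave as a hard constraint.
    """
    tight = gtsam.noiseModel.Diagonal.Sigmas(
        np.array([0.1, 0.1, 0.1, 0.5, 0.5, 2.0]))  # rot (rad), trans (m)
    graph.add(gtsam.PriorFactorPose3(pose_key, user_pose, tight))
    # The next optimization pass snaps the trajectory to this anchor and
    # collapses the marginal covariance, "healing" the track.
```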
## **11. Performance Evaluation and Benchmarks**

### **11.1 Accuracy Validation**

Based on the reported performance of the selected components in relevant datasets (UAV-VisLoc, AnyVisLoc):

* **LiteSAM** demonstrates an accuracy of 17.86m (RMSE) for cross-view matching. This aligns with the requirement that 60% of photos be within 20m error.18

* **AnyLoc** achieves high recall rates (Top-1 Recall > 85% on aerial benchmarks), supporting the recovery from sharp turns.2

* **Factor Graph Fusion:** By combining sequential and global measurements, the overall system error is expected to be lower than the individual component errors, satisfying the "80% within 50m" criterion.
### **11.2 Latency Analysis**

The breakdown of processing time per frame on an RTX 3070 is estimated as follows:

* **SuperPoint + LightGlue:** ~50ms.1

* **AnyLoc (Global Retrieval):** ~150ms (run only on keyframes or tracking loss).

* **LiteSAM (Metric Refinement):** ~60ms.1

* **Factor Graph Optimization:** ~100ms (using incremental updates/iSAM2).

* **Total:** ~360ms per frame (worst case with all layers active).

This is an order of magnitude faster than the 5-second limit, providing ample headroom for higher resolution processing or background tasks.
## **12. ASTRAL-Next Validation Plan and Acceptance Criteria Matrix**

A comprehensive test plan is required to validate compliance with all 10 Acceptance Criteria. The foundation is a **Ground-Truth Test Harness** using project-provided ground-truth data.
### **Table 4: ASTRAL Component vs. Acceptance Criteria Compliance Matrix**

| ID | Requirement | ASTRAL Solution (Component) | Key Technology / Justification |
| :---- | :---- | :---- | :---- |
| **AC-1** | 80% of photos < 50m error | GDB (C-1) + GAB (C-5) + TOH (C-6) | **Tier-1 (Copernicus)** data 1 is sufficient. SOTA VPR 8 + Sim(3) graph 13 can achieve this. |
| **AC-2** | 60% of photos < 20m error | GDB (C-1) + GAB (C-5) + TOH (C-6) | **Requires Tier-2 (Commercial) Data**.4 Mitigates reference error.3 **Per-Keyframe Scale** 15 model in TOH minimizes drift error. |
| **AC-3** | Robust to 350m outlier | V-SLAM (C-3) + TOH (C-6) | **Stage 2 Failure Logic** (7.3) discards the frame. **Robust M-Estimation** (6.3) in Ceres 14 automatically rejects the constraint. |
| **AC-4** | Robust to sharp turns (<5% overlap) | V-SLAM (C-3) + TOH (C-6) | **"Atlas" Multi-Map** (4.2) initializes a new map (Map_Fragment_k+1). **Geodetic Map-Merging** (6.4) in TOH re-connects fragments via GAB anchors. |
| **AC-5** | < 10% outlier anchors | TOH (C-6) | **Robust M-Estimation (Huber Loss)** (6.3) in Ceres 14 automatically down-weights and ignores high-residual (bad) GAB anchors. |
| **AC-6** | Connect route chunks; User input | V-SLAM (C-3) + TOH (C-6) + UI | **Geodetic Map-Merging** (6.4) connects chunks. **Stage 5 Failure Logic** (7.3) provides the user-input-as-prior mechanism. |
| **AC-7** | < 5 seconds processing/image | All Components | **Multi-Scale Pipeline** (5.3) (Low-Res V-SLAM, Hi-Res GAB patches). **Mandatory TensorRT Acceleration** (7.1) for 2-4x speedup.35 |
| **AC-8** | Real-time stream + async refinement | TOH (C-5) + Outputs (C-2.4) | Decoupled architecture provides Pose_N_Est (V-SLAM) in real-time and Pose_N_Refined (TOH) asynchronously as GAB anchors arrive. |
| **AC-9** | Image Registration Rate > 95% | V-SLAM (C-3) | **"Atlas" Multi-Map** (4.2). A "lost track" (AC-4) is *not* a registration failure; it's a *new map registration*. This ensures the rate > 95%. |
| **AC-10** | Mean Reprojection Error (MRE) < 1.0px | V-SLAM (C-3) + TOH (C-6) | Local BA (4.3) + Global BA (TOH14) + **Per-Keyframe Scale** (6.2) minimizes internal graph tension (Flaw 1.3), allowing the optimizer to converge to a low MRE. |
### **12.1 Rigorous Validation Methodology**

* **Test Harness:** A validation script will be created to compare the system's Pose_N^{Refined} output against a ground-truth coordinates.csv file, computing Haversine distance errors (a sketch follows the test cases below).
* **Test Datasets:**

  * Test_Baseline: Standard flight.

  * Test_Outlier_350m (AC-3): A single, unrelated image inserted.

  * Test_Sharp_Turn_5pct (AC-4): A sequence with a 10-frame gap.

  * Test_Long_Route (AC-9, AC-7): A 2000-image sequence.

* **Test Cases:**

  * Test_Accuracy: Run Test_Baseline. ASSERT (count(errors < 50m) / total) >= 0.80 (AC-1). ASSERT (count(errors < 20m) / total) >= 0.60 (AC-2).

  * Test_Robustness: Run Test_Outlier_350m and Test_Sharp_Turn_5pct. ASSERT system completes the run and Test_Accuracy assertions still pass on the valid frames.

  * Test_Performance: Run Test_Long_Route on min-spec RTX 2060. ASSERT average_time(Pose_N^{Est} output) < 5.0s (AC-7).

  * Test_MRE: ASSERT TOH.final_MRE < 1.0 (AC-10).
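A sketch of the harness core, assuming the ground-truth file has frame, lat, and lon columns and that the system's estimates are collected into a dict:

```python
import csv
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres (WGS84 mean radius)."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    a = (math.sin((p2 - p1) / 2) ** 2
         + math.cos(p1) * math.cos(p2)
         * math.sin(math.radians(lon2 - lon1) / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def test_accuracy(estimates, truth_csv="coordinates.csv"):
    """AC-1/AC-2 over a finished Test_Baseline run.

    estimates: {frame_id: (lat, lon)} from the Pose_N^{Refined} stream.
    """
    with open(truth_csv) as f:
        truth = {r["frame"]: (float(r["lat"]), float(r["lon"]))
                 for r in csv.DictReader(f)}
    errors = [haversine_m(*estimates[k], *truth[k]) for k in truth]
    assert sum(e < 50.0 for e in errors) / len(errors) >= 0.80  # AC-1
    assert sum(e < 20.0 for e in errors) / len(errors) >= 0.60  # AC-2
```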
## Role

You are a professional software architect.

## Task
- Thoroughly research the problem on the internet and identify all potential weak points and problems.

- Address these problems and find ways to solve them.

- Based on your findings, form a new solution draft in the same format.
## Output format

- First, list all new findings (what was updated, replaced, or removed from the previous solution) in a table with the following columns:

  - Old component solution

  - Weak point

  - Solution (the component's new solution)
- Then form the new solution draft. In the updated report, do not put "new" marks and do not compare to the previous solution draft; write the new solution as if from scratch. Use the following format:

  - Short product solution description. Brief component interaction diagram.

  - Architecture solution that meets the restrictions and acceptance criteria.

    For each component, analyze the best possible solutions and form a comparison table.

    Each possible component solution is a row with the following columns:

    - Tools (library, platform) used to solve the component's tasks

    - Advantages of this solution. For example, the LiteSAM AI feature matcher is picked for UAV-satellite matching and does its job in a milliseconds timeframe.

    - Limitations of this solution. For example, the LiteSAM AI feature matcher requires an RTX GPU to run efficiently, and since it is sparse, its quality is somewhat lower than a dense feature matcher's.

    - Requirements for this solution. For example, the LiteSAM AI feature matcher requires that the photos it compares be rotationally aligned within 45 degrees; this adds a preparation step that pre-rotates either the UAV or the satellite images.

    - How it fits the problem component being solved, and the whole solution.
- Testing strategy. Research how to cover the system with tests in order to meet all the acceptance criteria. Form a list of integration functional tests and non-functional tests.
## Additional sources

- A Cross-View Geo-Localization Algorithm Using UAV Image

  https://www.mdpi.com/1424-8220/24/12/3719

- Exploring the best way for UAV visual localization under Low-altitude Multi-view Observation condition

  https://arxiv.org/pdf/2503.10692

- Find more sources like this.

Assess them, and either integrate them into the current solution draft or use them to replace some of its components.
@@ -1,3 +1,4 @@

We have a lot of images taken from a wing-type UAV using a camera with at least Full HD resolution. Resolution of each photo could be up to 6200*4100 for the whole flight, but for other flights, it could be FullHD

Photos are taken and named consecutively within 100 meters of each other.

We know only the starting GPS coordinates. We need to determine the GPS of the centers of each image. And also the coordinates of the center of any object in these photos. We can use an external satellite provider for ground checks on the existing photos

We know only the starting GPS coordinates. We need to determine the GPS of the centers of each next image. And also the coordinates of the center of any object in these photos. We can use an external satellite provider for ground checks on the existing photos.

The real world examples are in input_data folder