Read the problem description carefully:
We have a large number of images taken from a fixed-wing UAV using a camera with at least Full HD resolution. Within a single flight, the resolution of each photo can be up to 6200×4100; for other flights it can be Full HD. Photos are taken and named consecutively, within 100 meters of each other. We know only the starting GPS coordinates. We need to determine the GPS coordinates of the center of each image, and also the coordinates of the center of any object in these photos. We can use an external satellite provider for ground checks on the existing photos.
The system has the following restrictions and conditions:
- Photos are taken only by airplane-type (fixed-wing) UAVs.
- Photos are taken by a camera pointing downwards and fixed in place, but it is not auto-stabilized.
- The flying range is restricted to the eastern and southern parts of Ukraine (to the left of the Dnipro River).
- The image resolution can range from Full HD to 6252×4168. Camera parameters are known: focal length, sensor width, resolution, and so on.
- Altitude is predefined and no more than 1 km. The height of the terrain can be neglected.
- There is NO data from an IMU.
- Flights are done mostly in sunny weather
- We can use satellite providers, but we're limited right now to Google Maps, which could be outdated for some regions
- The number of photos can be up to 3000, usually in the 500–1500 range.
- During the flight, UAVs can make sharp turns, so the next photo may be completely different from the previous one (no shared objects), but this is the exception rather than the rule.
- Processing is done on a stationary computer or laptop with an NVIDIA GPU, at least an RTX 2060, preferably an RTX 3070. (For an on-UAV solution a Jetson Orin Nano would be used, but that is out of scope.)
The output of the system should satisfy the following acceptance criteria:
- The system should determine the GPS of the centers of 80% of the photos from the flight with an error of no more than 50 meters compared to the real GPS
- The system should determine the GPS of the centers of 60% of the photos from the flight with an error of no more than 20 meters compared to the real GPS
- The system should correctly continue working even in the presence of an outlier photo displaced by up to 350 meters between two consecutive pictures en route. This can happen due to tilt of the plane.
- The system should correctly continue working even during sharp turns, where the next photo does not overlap at all, or overlaps by less than 5%. The next photo will be within 150 m of drift and at an angle of less than 50°.
- The number of outliers during the satellite-provider image ground check should be less than 10%
- If the system is absolutely incapable of determining the GPS of the next, second-next, and third-next images by any means (up to 20% of the route), it should ask the user for input for the next image, so that the user can specify the location
- Less than 5 seconds for processing one image
- Results of image processing should appear to the user immediately, so that the user does not have to wait for the whole route to complete before analyzing the first results. The system may also refine already-calculated results and send the refined results to the user again.
- Image Registration Rate > 95%: the system can find enough matching features to confidently calculate the camera's 6-DoF pose (position and orientation) and "stitch" that image into the final trajectory.
- Mean Reprojection Error (MRE) < 1.0 pixels: the distance, in pixels, between the original pixel location of an object and its re-projected pixel location.
Here is a solution draft:
GEo-Referenced Trajectory and Object Localization System (GEORTOLS): A Hybrid SLAM Architecture
1. Executive Summary
This report outlines the technical design for a robust, real-time geolocalization system. The objective is to determine the precise GPS coordinates for a sequence of high-resolution images (up to 6252x4168) captured by a fixed-wing, non-stabilized Unmanned Aerial Vehicle (UAV) [User Query]. The system must operate under severe constraints, including the absence of any IMU data, a predefined altitude of no more than 1km, and knowledge of only the starting GPS coordinate [User Query]. The system is required to handle significant in-flight challenges, such as sharp turns with minimal image overlap (<5%), frame-to-frame outliers of up to 350 meters, and operation over low-texture terrain as seen in the provided sample images [User Query, Image 1, Image 7].
The proposed solution is a Hybrid Visual-Geolocalization SLAM (VG-SLAM) architecture. This system is designed to meet the demanding acceptance criteria, including a sub-5-second initial processing time per image, streaming output with asynchronous refinement, and high-accuracy GPS localization (60% of photos within 20m error, 80% within 50m error) [User Query].
This hybrid architecture is necessitated by the problem's core constraints. The lack of an IMU makes a purely monocular Visual Odometry (VO) system susceptible to catastrophic scale drift.1 Therefore, the system integrates two cooperative sub-systems:
- A Visual Odometry (VO) Front-End: This component uses state-of-the-art deep-learning feature matchers (SuperPoint + SuperGlue/LightGlue) to provide fast, real-time relative pose estimates. This approach is selected for its proven robustness in low-texture environments where traditional features fail.4 This component delivers the initial, sub-5-second pose estimate.
- A Cross-View Geolocalization (CVGL) Module: This component provides absolute, drift-free GPS pose estimates by matching UAV images against the available satellite provider (Google Maps).7 It functions as the system's "global loop closure" mechanism, correcting the VO's scale drift and, critically, relocalizing the UAV after tracking is lost during sharp turns or outlier frames [User Query].
These two systems run in parallel. A Back-End Pose-Graph Optimizer fuses their respective measurements—high-frequency relative poses from VO and high-confidence absolute poses from CVGL—into a single, globally consistent, and incrementally refined trajectory. This architecture directly satisfies the requirements for immediate, streaming results and subsequent asynchronous refinement [User Query].
2. Product Solution Description and Component Interaction
Product Solution Description
The proposed system, "GEo-Referenced Trajectory and Object Localization System (GEORTOLS)," is a real-time, streaming-capable software solution. It is designed for deployment on a stationary computer or laptop equipped with an NVIDIA GPU (RTX 2060 or better) [User Query].
- Inputs:
- A sequence of consecutively named monocular images (FullHD to 6252x4168).
- The absolute GPS coordinate (Latitude, Longitude) of the first image in the sequence.
- A pre-calibrated camera intrinsic matrix.
- Access to the Google Maps satellite imagery API.
- Outputs:
- A real-time, streaming feed of estimated GPS coordinates (Latitude, Longitude, Altitude) and 6-DoF poses (including Roll, Pitch, Yaw) for the center of each image.
- Asynchronous refinement messages for previously computed poses as the back-end optimizer improves the global trajectory.
- A service to provide the absolute GPS coordinate for any user-selected pixel coordinate (u,v) within any geolocated image.
Component Interaction Diagram
The system is architected as four asynchronous, parallel-processing components to meet the stringent real-time and refinement requirements.
- Image Ingestion & Pre-processing: This module acts as the entry point. It receives the new, high-resolution image (Image N). It immediately creates scaled-down, lower-resolution (e.g., 1024x768) copies of the image for real-time processing by the VO and CVGL modules, while retaining the full-resolution original for object-level GPS lookups (a downscaling sketch follows this list).
- Visual Odometry (VO) Front-End: This module's sole task is high-speed, frame-to-frame relative pose estimation. It maintains a short-term "sliding window" of features, matching Image N to Image N-1. It uses GPU-accelerated deep-learning models (SuperPoint + SuperGlue) to find feature matches and calculates the 6-DoF relative transform. This result is immediately sent to the Back-End.
- Cross-View Geolocalization (CVGL) Module: This is a heavier, slower, asynchronous module. It takes the pre-processed Image N and queries the Google Maps database to find an absolute GPS pose. This involves a two-stage retrieval-and-match process. When a high-confidence match is found, its absolute pose is sent to the Back-End as a "global-pose constraint."
- Trajectory Optimization Back-End: This is the system's central "brain," managing the complete pose graph.10 It receives two types of data:
- High-frequency, low-confidence relative poses from the VO Front-End.
- Low-frequency, high-confidence absolute poses from the CVGL Module.
It continuously fuses these constraints in a pose-graph optimization framework (e.g., g2o or Ceres Solver). When the VO Front-End provides a new relative pose, it is quickly added to the graph to produce the "Initial Pose" (<5s). When the CVGL Module provides a new absolute pose, it triggers a more comprehensive re-optimization of the entire graph, correcting drift and broadcasting "Refined Poses" to the user.11
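As a concrete illustration of the ingestion step, a minimal downscaling sketch in OpenCV; the function name and the 1024-px target width are illustrative, not a fixed part of the design:

```python
import cv2

def make_processing_copy(image_bgr, target_width=1024):
    """Ingestion sketch: produce the scaled-down working copy used by the VO
    and CVGL modules, keeping the full-resolution original for object-level
    lookups. INTER_AREA is the OpenCV interpolation recommended for downscaling."""
    h, w = image_bgr.shape[:2]
    scale = target_width / float(w)
    small = cv2.resize(image_bgr, (target_width, int(round(h * scale))),
                       interpolation=cv2.INTER_AREA)
    return small, scale  # `scale` maps small-image pixels back to full resolution
```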
3. Core Architectural Framework: Hybrid Visual-Geolocalization SLAM (VG-SLAM)
Rationale for the Hybrid Approach
The core constraints of this problem—monocular, IMU-less flight over potentially long distances (up to 3000 images at ~100m intervals equates to a 300km flight) [User Query]—render simple solutions unviable.
A VO-Only system is guaranteed to fail. Monocular Visual Odometry (and SLAM) suffers from an inherent, unobservable ambiguity: the scale of the world.1 Because there is no IMU to provide an accelerometer-based scale reference or a gravity vector 12, the system has no way to know if it moved 1 meter or 10 meters. This leads to compounding scale drift, where the entire trajectory will grow or shrink over time.3 Over a 300km flight, the resulting positional error would be measured in kilometers, not the 20-50 meters required [User Query].
A CVGL-Only system is also unviable. Cross-View Geolocalization (CVGL) matches the UAV image to a satellite map to find an absolute pose.7 While this is drift-free, it is a large-scale image retrieval problem. Querying the entire map of Ukraine for a match for every single frame is computationally impossible within the <5 second time limit.13 Furthermore, this approach is brittle; if the Google Maps data is outdated (a specific user restriction) [User Query], the CVGL match will fail, and the system would have no pose estimate at all.
Therefore, the Hybrid VG-SLAM architecture is the only robust solution.
- The VO Front-End provides the fast, high-frequency relative motion. It works even if the satellite map is outdated, as it tracks features in the real, current world.
- The CVGL Module acts as the only mechanism for scale correction and absolute georeferencing. It provides periodic, drift-free "anchors" to the real-world GPS coordinates.
- The Back-End Optimizer fuses these two data streams. The CVGL poses function as "global loop closures" in the SLAM pose graph. They correct the scale drift accumulated by the VO and, critically, serve to relocalize the system after a "kidnapping" event, such as the specified sharp turns or 350m outliers [User Query].
Data Flow for Streaming and Refinement
This architecture is explicitly designed to meet the <5s initial output and asynchronous refinement criteria [User Query]. The data flow for a single image (Image N) is as follows:
- T = 0.0s: Image N (6200x4100) is received by the Ingestion Module.
- T = 0.2s: Image N is pre-processed (scaled to 1024px) and passed to the VO and CVGL modules.
- T = 1.0s: The VO Front-End completes GPU-accelerated matching (SuperPoint+SuperGlue) of Image N -> Image N-1. It computes the Relative_Pose(N-1 -> N).
- T = 1.1s: The Back-End Optimizer receives this Relative_Pose. It appends this pose to the graph relative to the last known pose of N-1.
- T = 1.2s: The Back-End broadcasts the Initial Pose_N_Est to the user interface. (<5s criterion met).
- (Parallel Thread) T = 1.5s: The CVGL Module (on a separate thread) begins its two-stage search for Image N against the Google Maps database.
- (Parallel Thread) T = 6.0s: The CVGL Module successfully finds a high-confidence Absolute_Pose_N_Abs from the satellite match.
- T = 6.1s: The Back-End Optimizer receives this new, high-confidence absolute constraint for Image N.
- T = 6.2s: The Back-End triggers a graph re-optimization. This new "anchor" corrects any scale or positional drift for Image N and all surrounding poses in the graph.
- T = 6.3s: The Back-End broadcasts a Pose_N_Refined (and Pose_N-1_Refined, Pose_N-2_Refined, etc.) to the user interface. (Refinement criterion met).
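A minimal sketch of this asynchronous layout using Python queues and threads. All worker internals are stubs standing in for the real VO, CVGL, and optimizer components; only the queue/thread structure is the point:

```python
import queue
import threading

vo_in: queue.Queue = queue.Queue()
cvgl_in: queue.Queue = queue.Queue()
backend_in: queue.Queue = queue.Queue()

def vo_worker():
    prev = None
    while True:
        frame = vo_in.get()
        if prev is not None:
            rel_pose = ("relative", prev["idx"], frame["idx"])  # stub VO result
            backend_in.put(("odometry", frame["idx"], rel_pose))
        prev = frame

def cvgl_worker():
    while True:
        frame = cvgl_in.get()
        abs_pose = ("absolute", frame["idx"])                   # stub CVGL result
        backend_in.put(("georeference", frame["idx"], abs_pose))

def backend_worker():
    while True:
        kind, idx, measurement = backend_in.get()
        if kind == "odometry":   # fast path: append to graph, publish initial pose
            print(f"Initial Pose_{idx}_Est published (<5 s path)")
        else:                    # slow path: re-optimize graph, publish refinements
            print(f"Pose_{idx}_Refined broadcast after graph re-optimization")

for target in (vo_worker, cvgl_worker, backend_worker):
    threading.Thread(target=target, daemon=True).start()

for n in range(3):               # ingestion: feed both pipelines in parallel
    frame = {"idx": n}
    vo_in.put(frame)
    cvgl_in.put(frame)
```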
4. Component Analysis: Front-End (Visual Odometry and Relocalization)
The task of the VO Front-End is to rapidly and robustly estimate the 6-DoF relative motion between consecutive frames. This component's success is paramount for the high-frequency tracking required to meet the <5s criterion.
The primary challenge is the nature of the imagery. The specified operational area and sample images (e.g., Image 1, Image 7) show vast, low-texture agricultural fields [User Query]. These environments are a known failure case for traditional, gradient-based feature extractors like SIFT or ORB, which rely on high-gradient corners and cannot find stable features in "weak texture areas".5 Furthermore, the non-stabilized camera [User Query] will introduce significant rotational motion and viewpoint change, breaking the assumptions of many simple trackers.16
Deep-learning (DL) based feature extractors and matchers have been developed specifically to overcome these "challenging visual conditions".5 Models like SuperPoint, SuperGlue, and LoFTR are trained to find more robust and repeatable features, even in low-texture scenes.4
Table 1: Analysis of State-of-the-Art Feature Extraction and Matching Techniques
| Approach (Tools/Library) | Advantages | Limitations | Requirements | Fitness for Problem Component |
|---|---|---|---|---|
| SIFT + BFMatcher/FLANN (OpenCV) | - Scale and rotation invariant. - High-quality, robust matches. - Well-studied and mature.15 | - Computationally slow (CPU-based). - Poor performance in low-texture or weakly-textured areas.14 - Patented (though expired). | - High-contrast, well-defined features. | Poor. Too slow for the <5s target and will fail to find features in the low-texture agricultural landscapes shown in sample images. |
| ORB + BFMatcher (OpenCV) | - Extremely fast and lightweight. - Standard for real-time SLAM (e.g., ORB-SLAM).21 - Rotation invariant. | - Not scale invariant (uses a pyramid). - Performs very poorly in low-texture scenes.5 - Unstable in high-blur scenarios. | - CPU, lightweight. - High-gradient corners. | Very Poor. While fast, it fails on the robustness requirement. It is designed for textured, indoor/urban scenes, not sparse, natural terrain. |
| SuperPoint + SuperGlue (PyTorch, C++/TensorRT) | - SOTA robustness in low-texture, high-blur, and challenging conditions.4 - End-to-end learning for detection and matching.24 - Multiple open-source SLAM integrations exist (e.g., SuperSLAM).25 | - Requires a powerful GPU for real-time performance. - Sparse feature-based (not dense). | - NVIDIA GPU (RTX 2060+). - PyTorch (research) or TensorRT (deployment).26 | Excellent. This approach is designed for the exact "challenging conditions" of this problem. It provides SOTA robustness in low-texture scenes.4 The user's hardware (RTX 2060+) meets the requirements. |
| LoFTR (PyTorch) | - Detector-free dense matching.14 - Extremely robust to viewpoint and texture challenges.14 - Excellent performance on natural terrain and low-overlap images.19 | - High computational and VRAM cost. - Can cause CUDA Out-of-Memory (OOM) errors on very high-resolution images.30 - Slower than sparse-feature methods. | - High-end NVIDIA GPU. - PyTorch. | Good, but Risky. While its robustness is excellent, its dense, Transformer-based nature makes it vulnerable to OOM errors on the 6252x4168 images.30 The sparse SuperPoint approach is a safer, more-scalable choice for the VO front-end. |
Selected Approach (VO Front-End): SuperPoint + SuperGlue/LightGlue
The selected approach is a VO front-end based on SuperPoint for feature extraction and SuperGlue (or its faster successor, LightGlue) for matching.18
- Robustness: This combination is proven to provide superior robustness and accuracy in sparse-texture scenes, extracting more and higher-quality matches than ORB.4
- Performance: It is designed for GPU acceleration and is used in SOTA real-time SLAM systems, demonstrating its feasibility within the <5s target on an RTX 2060.25
- Scalability: As a sparse-feature method, it avoids the memory-scaling issues of dense matchers like LoFTR when faced with the user's maximum 6252x4168 resolution.30 The image can be downscaled for real-time VO, and SuperPoint will still find stable features.
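A minimal matching-and-relative-pose sketch under these choices. It assumes the open-source LightGlue package (cvg/LightGlue) for SuperPoint extraction and matching; the exact package API may differ between versions, and the intrinsic matrix K and image paths are illustrative:

```python
import cv2
import numpy as np
import torch
from lightglue import LightGlue, SuperPoint   # pip install lightglue (cvg/LightGlue)
from lightglue.utils import load_image, rbd

device = "cuda" if torch.cuda.is_available() else "cpu"
extractor = SuperPoint(max_num_keypoints=2048).eval().to(device)
matcher = LightGlue(features="superpoint").eval().to(device)

# Downscaled working copies of two consecutive frames (paths are illustrative).
image0 = load_image("frame_0001.jpg").to(device)
image1 = load_image("frame_0002.jpg").to(device)

feats0 = extractor.extract(image0)
feats1 = extractor.extract(image1)
matches01 = matcher({"image0": feats0, "image1": feats1})
feats0, feats1, matches01 = [rbd(x) for x in (feats0, feats1, matches01)]
matches = matches01["matches"]                 # (K, 2) index pairs
pts0 = feats0["keypoints"][matches[..., 0]].cpu().numpy()
pts1 = feats1["keypoints"][matches[..., 1]].cpu().numpy()

# Relative pose from 2D-2D correspondences; K is the (downscaled) intrinsic
# matrix. Monocular translation is recovered only up to scale -- the CVGL
# module and pose graph are responsible for fixing the metric scale.
K = np.array([[1000.0, 0.0, 512.0], [0.0, 1000.0, 384.0], [0.0, 0.0, 1.0]])
E, inlier_mask = cv2.findEssentialMat(pts0, pts1, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts0, pts1, K, mask=inlier_mask)
```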
5. Component Analysis: Back-End (Trajectory Optimization and Refinement)
The task of the Back-End is to fuse all incoming measurements (high-frequency/low-accuracy relative VO poses, low-frequency/high-accuracy absolute CVGL poses) into a single, globally consistent trajectory. This component's design is dictated by the user's real-time streaming and refinement requirements [User Query].
A critical architectural choice must be made between a traditional, batch Structure from Motion (SfM) pipeline and a real-time SLAM (Simultaneous Localization and Mapping) pipeline.
- Batch SfM: (e.g., COLMAP).32 This approach is an offline process. It collects all 1500-3000 images, performs feature matching, and then runs a large, non-real-time "Bundle Adjustment" (BA) to solve for all camera poses and 3D points simultaneously.35 While this produces the most accurate possible result, it can take hours to compute. It cannot meet the <5s/image or "immediate results" criteria.
- Real-time SLAM: (e.g., ORB-SLAM3).28 This approach is online and incremental. It maintains a "pose graph" of the trajectory.10 It provides an immediate pose estimate based on the VO front-end. When a new, high-quality measurement arrives (like a loop closure 37, or in our case, a CVGL fix), it triggers a fast re-optimization of the graph, publishing a refined result.11
The user's requirements for "results...appear immediately" and "system could refine existing calculated results" [User Query] are a textbook description of a real-time SLAM back-end.
Table 2: Analysis of Trajectory Optimization Strategies
| Approach (Tools/Library) | Advantages | Limitations | Requirements | Fitness for Problem Component |
|---|---|---|---|---|
| Incremental SLAM (Pose-Graph Optimization) (g2o, Ceres Solver, GTSAM) | - Real-time / Online: Provides immediate pose estimates. - Supports Refinement: Explicitly designed to refine past poses when new "loop closure" (CVGL) data arrives.10 - Meets the <5s and streaming criteria. | - Initial estimate is less accurate than a full batch process. - Susceptible to drift until a loop closure (CVGL fix) is made. | - A graph optimization library (g2o, Ceres). - A robust cost function to reject outliers. | Excellent. This is the only architecture that satisfies the user's real-time streaming and asynchronous refinement constraints. |
| Batch Structure from Motion (Global Bundle Adjustment) (COLMAP, Agisoft Metashape) | - Globally Optimal Accuracy: Produces the most accurate possible 3D reconstruction and trajectory.35 - Can import custom DL matches.38 | - Offline: Cannot run in real-time or stream results. - High computational cost (minutes to hours). - Fails all timing and streaming criteria. | - All images must be available before processing starts. - High RAM and CPU. | Unsuitable (for the online system). This approach is ideal for an optional, post-flight, high-accuracy refinement, but it cannot be the primary system. |
Selected Approach (Back-End): Incremental Pose-Graph Optimization (g2o/Ceres)
The system's back-end will be built as an Incremental Pose-Graph Optimizer using a library like g2o or Ceres Solver. This is the only way to meet the real-time streaming and refinement constraints [User Query].
The graph will contain:
- Nodes: The 6-DoF pose of each camera frame.
- Edges (Constraints):
- Odometry Edges: Relative 6-DoF transforms from the VO Front-End (SuperPoint+SuperGlue). These are high-frequency but have accumulating drift/scale error.
- Georeferencing Edges: Absolute 6-DoF poses from the CVGL Module. These are low-frequency but are drift-free and provide the absolute scale.
- Start-Point Edge: A high-confidence absolute pose for Image 1, fixed to the user-provided start GPS.
This architecture allows the system to provide an immediate estimate (from odometry) and then drastically improve its accuracy (correcting scale and drift) whenever a new georeferencing edge is added.
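A minimal pose-graph sketch of this node/edge structure using GTSAM (one of the candidate libraries alongside g2o and Ceres). The noise sigmas and example poses are illustrative placeholders, not tuned values:

```python
import numpy as np
import gtsam

# Placeholder poses; in the real system these come from the start GPS
# (converted to a local metric frame such as ENU), the VO front-end,
# and the CVGL module respectively.
start_pose = gtsam.Pose3()
vo_relative_pose = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(100.0, 0.0, 0.0))
cvgl_absolute_pose = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(98.0, 3.0, 0.0))

graph = gtsam.NonlinearFactorGraph()
initial = gtsam.Values()

# Start-point edge: anchor frame 0 to the user-provided start GPS.
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(
    np.array([0.1, 0.1, 0.1, 1.0, 1.0, 1.0]))   # (rot, trans) sigmas
graph.add(gtsam.PriorFactorPose3(0, start_pose, prior_noise))
initial.insert(0, start_pose)

# Odometry edge: VO relative transform; translation sigmas are loose because
# monocular scale is unreliable.
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(
    np.array([0.05, 0.05, 0.05, 5.0, 5.0, 5.0]))
graph.add(gtsam.BetweenFactorPose3(0, 1, vo_relative_pose, odom_noise))
initial.insert(1, start_pose.compose(vo_relative_pose))

# Georeferencing edge: CVGL absolute pose wrapped in a robust Huber kernel so
# a single bad satellite match cannot corrupt the trajectory.
cvgl_noise = gtsam.noiseModel.Robust.Create(
    gtsam.noiseModel.mEstimator.Huber.Create(1.345),
    gtsam.noiseModel.Diagonal.Sigmas(np.array([0.2, 0.2, 0.2, 10.0, 10.0, 10.0])))
graph.add(gtsam.PriorFactorPose3(1, cvgl_absolute_pose, cvgl_noise))

# Re-optimization; in the running system this is triggered each time a new
# georeferencing edge arrives, producing the "Refined Poses".
result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose3(1).translation())
```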
6. Component Analysis: Global-Pose Correction (Georeferencing Module)
This module is the most critical component for meeting the accuracy requirements. Its task is to provide absolute GPS pose estimates by matching the UAV's nadir-pointing-but-non-stabilized images to the Google Maps satellite provider [User Query]. This is the only component that can correct the monocular scale drift.
This task is known as Cross-View Geolocalization (CVGL).7 It is extremely challenging due to the "domain gap" 44 between the two image sources:
- Viewpoint: The UAV is at low altitude (<1km) and non-nadir (due to fixed-wing tilt) 45, while the satellite is at a very high altitude and is perfectly nadir.
- Appearance: The images come from different sensors, with different lighting (shadows), and at different times. The Google Maps data may be "outdated" [User Query], showing different seasons, vegetation, or man-made structures.47
A simple, brute-force feature match is computationally impossible. The solution is a hierarchical, two-stage approach that mimics SOTA research 7:
- Stage 1: Coarse Retrieval. We cannot run expensive matching against the entire map. Instead, we treat this as an image retrieval problem. We use a Deep Learning model (e.g., a Siamese or Dual CNN trained on this task 50) to generate a compact "embedding vector" (a digital signature) for the UAV image. In an offline step, we pre-compute embeddings for all satellite map tiles in the operational area. The UAV image's embedding is then used to perform a very fast (e.g., FAISS library) similarity search against the satellite database, returning the Top-K most likely-matching satellite tiles.
- Stage 2: Fine-Grained Pose. Only for these Top-K candidates do we perform the heavy-duty feature matching. We use our selected SuperPoint+SuperGlue matcher 53 to find precise correspondences between the UAV image and the K satellite tiles. If a high-confidence geometric match (e.g., >50 inliers) is found, we can compute the precise 6-DoF pose of the UAV relative to that tile, thus yielding an absolute GPS coordinate.
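A minimal Stage-1 retrieval sketch with FAISS; the embedding dimension, database size, and arrays are illustrative placeholders for the CNN's real outputs:

```python
import faiss
import numpy as np

# Placeholder data: M pre-computed satellite-tile embeddings and one UAV-image
# embedding from the Siamese/Dual CNN (dimension d is illustrative).
d, M = 256, 100_000
tile_embeddings = np.random.rand(M, d).astype(np.float32)
uav_embedding = np.random.rand(1, d).astype(np.float32)

# Offline: L2-normalize and index the tiles; inner product on unit vectors
# equals cosine similarity.
faiss.normalize_L2(tile_embeddings)
index = faiss.IndexFlatIP(d)
index.add(tile_embeddings)

# Online: a single Top-5 lookup per frame, fast enough for the real-time path.
faiss.normalize_L2(uav_embedding)
scores, tile_ids = index.search(uav_embedding, 5)
candidates = tile_ids[0]    # indices of the Top-5 candidate satellite tiles
```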
Table 3: Analysis of State-of-the-Art Cross-View Geolocalization (CVGL) Techniques
| Approach (Tools/Library) | Advantages | Limitations | Requirements | Fitness for Problem Component |
|---|---|---|---|---|
| Coarse Retrieval (Siamese/Dual CNNs) (PyTorch, ResNet18) | - Extremely fast for retrieval (database lookup). - Learns features robust to seasonal and appearance changes.50 - Narrows search space from millions to a few. | - Does not provide a precise 6-DoF pose, only a "best match" tile. - Requires training on a dataset of matched UAV-satellite pairs. | - Pre-trained model (e.g., on ResNet18).52 - Pre-computed satellite embedding database. | Essential (as Stage 1). This is the only computationally feasible way to "find" the UAV on the map. |
| Fine-Grained Feature Matching (SuperPoint + SuperGlue) | - Provides a highly-accurate 6-DoF pose estimate.53 - Re-uses the same robust matcher from the VO Front-End.54 | - Too slow to run on the entire map. - Requires a good initial guess (from Stage 1) to be effective. | - NVIDIA GPU. - Top-K candidate tiles from Stage 1. | Essential (as Stage 2). This is the component that actually computes the precise GPS pose from the coarse candidates. |
| End-to-End DL Models (Transformers) (PFED, ReCOT, etc.) | - SOTA accuracy in recent benchmarks.13 - Can be highly efficient (e.g., PFED).13 - Can perform retrieval and pose estimation in one model. | - Often research-grade, not robustly open-sourced. - May be complex to train and deploy. - Less modular and harder to debug than the two-stage approach. | - Specific, complex model architectures.13 - Large-scale training datasets. | Not Recommended (for initial build). While powerful, these are less practical for a version 1 build. The two-stage approach is more modular, debuggable, and uses components already required by the VO system. |
Selected Approach (CVGL Module): Hierarchical Retrieval + Matching
The CVGL module will be implemented as a two-stage hierarchical system:
- Stage 1 (Coarse): A Siamese CNN 52 (or similar model) generates an embedding for the UAV image. This embedding is used to retrieve the Top-5 most similar satellite tiles from a pre-computed database.
- Stage 2 (Fine): The SuperPoint+SuperGlue matcher 53 is run between the UAV image and these 5 tiles. The match with the highest inlier count and lowest reprojection error is used to calculate the absolute 6-DoF pose, which is then sent to the Back-End optimizer.
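A minimal Stage-2 sketch of turning an accepted tile match into a GPS coordinate: estimate a homography over the inlier correspondences, project the UAV image center into the tile, and convert the tile pixel to latitude/longitude with standard Web-Mercator (slippy-map) math. All inputs below are illustrative placeholders:

```python
import math
import cv2
import numpy as np

def tile_pixel_to_latlon(px, py, zoom, tile_size=256):
    """Global Web-Mercator pixel coordinates at `zoom` -> (lat, lon).
    Standard slippy-map math, assuming the provider uses 256-px XYZ tiles."""
    world = tile_size * (2 ** zoom)
    lon = px / world * 360.0 - 180.0
    lat = math.degrees(math.atan(math.sinh(math.pi * (1.0 - 2.0 * py / world))))
    return lat, lon

# Illustrative inputs: inlier correspondences from Stage 2 (UAV pixel -> tile
# pixel), the tile's top-left corner in global pixel coordinates, zoom level,
# and the UAV image size.
uav_pts = np.random.rand(60, 2).astype(np.float32) * 1024
tile_pts = np.random.rand(60, 2).astype(np.float32) * 256
tile_origin_px, zoom, w, h = (158_000_000.0, 90_000_000.0), 19, 1024, 768

H, inlier_mask = cv2.findHomography(uav_pts, tile_pts, cv2.RANSAC, 3.0)
if H is not None and inlier_mask.sum() > 50:      # acceptance gate: >50 inliers
    center = np.array([[[w / 2.0, h / 2.0]]], dtype=np.float32)
    cx, cy = cv2.perspectiveTransform(center, H)[0, 0]
    lat, lon = tile_pixel_to_latlon(tile_origin_px[0] + cx,
                                    tile_origin_px[1] + cy, zoom)
```

The homography localizes the image center directly; the full 6-DoF pose can additionally be recovered by decomposing the homography against the known intrinsics (e.g., cv2.decomposeHomographyMat) or by solving PnP on the matched points lifted to ground-plane 3D coordinates.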
7. Addressing Critical Acceptance Criteria and Failure Modes
This hybrid architecture's logic is designed to handle the most difficult acceptance criteria [User Query] through a robust, multi-stage escalation process.
Stage 1: Initial State (Normal Operation)
- Condition: VO(N-1 -> N) succeeds.
- System Logic: The VO Front-End provides the high-frequency relative pose. This is added to the graph, and the Initial Pose is sent to the user (<5s).
- Resolution: The CVGL Module runs asynchronously to provide a Refined Pose later, which corrects for scale drift.
Stage 2: Transient Failure / Outlier Handling (AC-3)
- Condition: VO(N-1 -> N) fails (e.g., >350m jump, severe motion blur, low overlap) [User Query]. This triggers an immediate, high-priority CVGL(N) query.
- System Logic:
- If CVGL(N) succeeds, the system has conflicting data: a failed VO link and a successful CVGL pose. The Back-End Optimizer uses a robust kernel to reject the high-error VO link as an outlier and accepts the CVGL pose.56 The trajectory "jumps" to the correct location, and VO resumes from Image N+1.
- If CVGL(N) also fails (e.g., due to cloud cover or outdated map), the system assumes Image N is a single bad frame (an outlier).
- Resolution (Frame Skipping): The system buffers Image N and, upon receiving Image N+1, the VO Front-End attempts to "bridge the gap" by matching VO(N-1 -> N+1).
- If successful, a pose for N+1 is found. Image N is marked as a rejected outlier, and the system continues.
- If VO(N-1 -> N+1) fails, it repeats for VO(N-1 -> N+2).
- If this "bridging" fails for 3 consecutive frames, the system concludes it is not a transient outlier but a persistent tracking loss. This escalates to Stage 3.
Stage 3: Persistent Tracking Loss / Sharp Turn Handling (AC-4)
- Condition: VO tracking is lost, and the "frame-skipping" in Stage 2 fails (e.g., a "sharp turn" with no overlap) [User Query].
- System Logic (Multi-Map "Chunking"): The Back-End Optimizer declares a "Tracking Lost" state and creates a new, independent map ("Chunk 2").
- The VO Front-End is re-initialized and begins populating this new chunk, tracking VO(N+3 -> N+4), VO(N+4 -> N+5), etc. This new chunk is internally consistent but has no absolute GPS position (it is "floating").
- Resolution (Asynchronous Relocalization):
- The CVGL Module now runs asynchronously on all frames in this new "Chunk 2".
- Crucially, it uses the last known GPS coordinate from "Chunk 1" as a search prior, narrowing the satellite map search area to the vicinity.
- The system continues to build Chunk 2 until the CVGL module successfully finds a high-confidence Absolute_Pose for any frame in that chunk (e.g., for Image N+20).
- Once this single GPS "anchor" is found, the Back-End Optimizer performs a full graph optimization. It calculates the 7-DoF transformation (3D position, 3D rotation, and scale) to align all of Chunk 2 and merge it with Chunk 1.
- This "chunking" method robustly handles the "correctly continue the work" criterion by allowing the system to keep tracking locally even while globally lost, confident it can merge the maps later.
Stage 4: Catastrophic Failure / User Intervention (AC-6)
- Condition: The system has entered Stage 3 and is building "Chunk 2," but the CVGL Module has also failed for a prolonged period (e.g., 20% of the route, or 50+ consecutive frames) [User Query]. This is a "worst-case" scenario where the UAV is in an area with no VO features (e.g., over a lake) and no CVGL features (e.g., heavy clouds or outdated maps).
- System Logic: The system is "absolutely incapable" of determining its pose.
- Resolution (User Input): The system triggers the "ask the user for input" event. A UI prompt will show the last known good image (from Chunk 1) on the map and the new, "lost" image (e.g., N+50). It will ask the user to "Click on the map to provide a coarse location." This user-provided GPS point is then fed to the CVGL module as a strong prior, drastically narrowing the search space and enabling it to re-acquire a lock.
8. Implementation and Output Generation
Real-time Workflow (<5s Initial, Async Refinement)
A concrete implementation plan for processing Image N:
- T=0.0s: Image[N] (6200px) received.
- T=0.1s: Image pre-processed: Scaled to 1024px for VO/CVGL. Full-res original stored.
- T=0.5s: VO Front-End (GPU): SuperPoint features extracted for 1024px image.
- T=1.0s: VO Front-End (GPU): SuperGlue matches 1024px Image[N] -> 1024px Image[N-1]. Relative_Pose (6-DoF) estimated via RANSAC/PnP.
- T=1.1s: Back-End: Relative_Pose added to graph. Optimizer updates trajectory.
- T=1.2s: OUTPUT: Initial Pose_N_Est (GPS) sent to user. (<5s criterion met).
- T=1.3s: CVGL Module (Async Task) (GPU): Siamese/Dual CNN generates embedding for 1024px Image[N].
- T=1.5s: CVGL Module (Async Task): Coarse retrieval (FAISS lookup) returns Top-5 satellite tile candidates.
- T=4.0s: CVGL Module (Async Task) (GPU): Fine-grained matching. SuperPoint+SuperGlue runs 5 times (Image[N] vs. 5 satellite tiles).
- T=4.5s: CVGL Module (Async Task): A high-confidence match is found. Absolute_Pose_N_Abs (6-DoF) is computed.
- T=4.6s: Back-End: High-confidence Absolute_Pose_N_Abs added to pose graph. Graph re-optimization is triggered.
- T=4.8s: OUTPUT: Pose_N_Refined (GPS) sent to user. (Refinement criterion met).
Determining Object-Level GPS (from Pixel Coordinate)
The requirement to find the "coordinates of the center of any object in these photos" [User Query] is met by projecting a pixel to its 3D world coordinate. This requires the (u,v) pixel, the camera's 6-DoF pose, and the camera's intrinsic matrix (K).
Two methods will be implemented to support the streaming/refinement architecture:
- Method 1 (Immediate, <5s): Flat-Earth Projection.
- When the user clicks pixel (u,v) on Image[N], the system uses the Initial Pose_N_Est.
- It assumes the ground is a flat plane at a known height below the camera (e.g., 900 m below the camera when flying at 1 km altitude over terrain at 100 m) [User Query].
- It computes the 3D ray from the camera center through (u,v) using the intrinsic matrix (K).
- It calculates the 3D intersection point of this ray with the flat ground plane.
- This 3D world point is converted to a GPS coordinate and sent to the user. This is very fast but less accurate in non-flat terrain.
- Method 2 (Refined, Post-BA): Structure-from-Motion Projection.
- The Back-End's pose-graph optimization, as a byproduct, will create a sparse 3D point cloud of the world (i.e., the "SfM" part of SLAM).35
- When the user clicks (u,v), the system uses the Pose_N_Refined.
- It raycasts from the camera center through (u,v) and finds the 3D intersection point with the actual 3D point cloud generated by the system.
- This 3D point's coordinate (X,Y,Z) is converted to GPS. This is far more accurate as it accounts for real-world topography (hills, ditches) captured in the 3D map.
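A minimal sketch of the Method 1 (flat-earth) projection described above; all inputs come from the pose graph and camera calibration, expressed in a local metric frame:

```python
import numpy as np

def pixel_to_ground(u, v, K, R_wc, cam_center_w, ground_z=0.0):
    """Method 1 sketch: intersect the viewing ray of pixel (u, v) with the
    flat ground plane z = ground_z. K is the 3x3 intrinsic matrix, R_wc the
    camera-to-world rotation, cam_center_w the camera position, all in a
    local metric world frame (e.g., ENU)."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # back-project the pixel
    ray_w = R_wc @ ray_cam                              # rotate ray into world
    if abs(ray_w[2]) < 1e-9:
        return None                 # ray (nearly) parallel to the ground plane
    s = (ground_z - cam_center_w[2]) / ray_w[2]         # ray parameter
    if s <= 0:
        return None                 # intersection behind the camera
    return cam_center_w + s * ray_w                     # 3D ground point
```

The returned local-frame point is then converted back to geodetic coordinates, e.g., with pymap3d's enu2geodetic.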
9. Testing and Validation Strategy
A rigorous testing strategy is required to validate all 10 acceptance criteria. The foundation of this strategy is the creation of a Ground-Truth Test Dataset. This will involve flying several test routes and manually creating a "checkpoint" (CP) file, similar to the provided coordinates.csv 58, using a high-precision RTK/PPK GPS. This provides the "real GPS" for validation.59
Accuracy Validation Methodology (AC-1, AC-2, AC-5, AC-9, AC-10)
These tests validate the system's accuracy and completion metrics.59
- A test flight of 1000 images with high-precision ground-truth CPs is prepared.
- The system is run given only the first GPS coordinate.
- A test script compares the system's final refined GPS output for each image against its ground-truth CP. The Haversine distance (error in meters) is calculated for all 1000 images.
- This yields a list of 1000 error values.
- Test_Accuracy_50m (AC-1): ASSERT (count(errors < 50m) / 1000) >= 0.80
- Test_Accuracy_20m (AC-2): ASSERT (count(errors < 20m) / 1000) >= 0.60
- Test_Outlier_Rate (AC-5): ASSERT (count(un-localized_images) / 1000) < 0.10
- Test_Image_Registration_Rate (AC-9): ASSERT (count(localized_images) / 1000) > 0.95
- Test_Mean_Reprojection_Error (AC-10): ASSERT (Back-End.final_MRE) < 1.0
- Test_RMSE: The overall Root Mean Square Error (RMSE) of the entire trajectory will be calculated as a primary performance benchmark.59
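A minimal sketch of the error computation and the AC-1/AC-2 assertions; `estimates` and `ground_truth` are index-aligned placeholder lists of (lat, lon) tuples standing in for the system's refined outputs and the RTK/PPK checkpoints:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS-84 points."""
    R = 6371000.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

estimates = [(47.95001, 35.10002)]      # placeholder refined outputs
ground_truth = [(47.95010, 35.10010)]   # placeholder RTK/PPK checkpoints

errors = [haversine_m(e[0], e[1], g[0], g[1])
          for e, g in zip(estimates, ground_truth)]
assert sum(err < 50.0 for err in errors) / len(errors) >= 0.80   # AC-1
assert sum(err < 20.0 for err in errors) / len(errors) >= 0.60   # AC-2
rmse = math.sqrt(sum(err * err for err in errors) / len(errors)) # Test_RMSE
```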
Integration and Functional Tests (AC-3, AC-4, AC-6)
These tests validate the system's logic and robustness to failure modes.62
- Test_Low_Overlap_Relocalization (AC-4):
- Setup: Create a test sequence of 50 images. From this, manually delete images 20-24 (simulating 5 lost frames during a sharp turn).63
- Test: Run the system on this "broken" sequence.
- Pass/Fail: The system must report "Tracking Lost" at frame 20, initiate a new "chunk," and then "Tracking Re-acquired" and "Maps Merged" when the CVGL module successfully localizes frame 25 (or a subsequent frame). The final trajectory error for frame 25 must be < 50m.
- Test_350m_Outlier_Rejection (AC-3):
- Setup: Create a test sequence. At image 30, insert a "rogue" image (Image 30b) known to be 350m away.
- Test: Run the system on this sequence (..., 29, 30, 30b, 31,...).
- Pass/Fail: The system must correctly identify Image 30b as an outlier (RANSAC failure 56), reject it (or jump to its CVGL-verified pose), and "correctly continue the work" by successfully tracking Image 31 from Image 30 (using the frame-skipping logic). The trajectory must not be corrupted.
- Test_User_Intervention_Prompt (AC-6):
- Setup: Create a test sequence with 50 consecutive "bad" frames (e.g., pure sky, lens cap) to ensure the transient and chunking logics are bypassed.
- Test: Run the system.
- Pass/Fail: The system must enter a "LOST" state, attempt and fail to relocalize via CVGL for 50 frames, and then correctly trigger the "ask for user input" event.
Non-Functional Tests (AC-7, AC-8, Hardware)
These tests validate performance and resource requirements.66
- Test_Performance_Per_Image (AC-7):
- Setup: Run the 1000-image test set on the minimum-spec RTX 2060.
- Test: Measure the time from "Image In" to "Initial Pose Out" for every frame.
- Pass/Fail: ASSERT average_time < 5.0s.
- Test_Streaming_Refinement (AC-8):
- Setup: Run the 1000-image test set.
- Test: A logger must verify that two poses are received for >80% of images: an "Initial" pose (T < 5s) and a "Refined" pose (T > 5s, after CVGL).
- Pass/Fail: The refinement mechanism is functioning correctly.
- Test_Scalability_Large_Route (Constraints):
- Setup: Run the system on a full 3000-image dataset.
- Test: Monitor system RAM, VRAM, and processing time per frame over the entire run.
- Pass/Fail: The system must complete the run without memory leaks, and the processing time per image must not degrade significantly as the pose graph grows.
Identify all potential weak points and problems. Address them and propose ways to solve them. Based on your findings, form a new solution draft in the same format.
If your findings require a complete reorganization of the flow and different components, state it. Put all the findings regarding what was weak or poor at the beginning of the report, together with everything that was updated, replaced, or removed from the previous solution.
Then form a new solution design without referencing the previous system. Remove Poor and Very Poor component choices from the component analysis tables, but keep the Good and Excellent ones. In the updated report, do not add "new" marks and do not compare to the previous solution draft; just present the new solution as if designed from scratch.