mirror of
https://github.com/azaion/gps-denied-onboard.git
synced 2026-04-23 00:36:38 +00:00
Refactor acceptance criteria, problem description, and restrictions for UAV GPS-Denied system. Enhance clarity and detail in performance metrics, image processing requirements, and operational constraints. Introduce new sections for UAV specifications, camera details, satellite imagery, and onboard hardware.
# Position Accuracy
- The system should determine GPS coordinates of frame centers for 80% of photos within 50m error compared to real GPS
- The system should determine GPS coordinates of frame centers for 60% of photos within 20m error compared to real GPS
- Maximum cumulative VO drift between satellite correction anchors should be less than 100 meters
- System should report a confidence score per position estimate (high = satellite-anchored, low = VO-extrapolated with drift)
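The two accuracy tiers above can be verified offline against recorded ground truth. A minimal sketch, assuming per-frame estimated and true WGS84 coordinates are available (function names are illustrative, not part of the system):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS84 points."""
    R = 6371000.0  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def accuracy_ok(estimates, truths):
    """Check both acceptance tiers: 80% of frames within 50 m, 60% within 20 m."""
    errors = [haversine_m(e[0], e[1], t[0], t[1]) for e, t in zip(estimates, truths)]
    within_50 = sum(err <= 50 for err in errors) / len(errors)
    within_20 = sum(err <= 20 for err in errors) / len(errors)
    return within_50 >= 0.80 and within_20 >= 0.60
```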
# Image Processing Quality
- Image Registration Rate > 95% for normal flight segments: the system finds enough matching features to confidently calculate the camera's 6-DoF pose and stitch that image into the trajectory
- Mean Reprojection Error (MRE) < 1.0 pixels (the distance, in pixels, between a feature's observed pixel location and its re-projected pixel location)
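For a planar-homography registration model, the MRE metric above can be computed per frame from the matched keypoints. A minimal sketch, assuming matches are available as NumPy arrays (the actual registration pipeline is not specified here):

```python
import numpy as np

def mean_reprojection_error(H, src_pts, dst_pts):
    """Mean pixel distance between observed points and points re-projected
    through a 3x3 homography H. src_pts, dst_pts: (N, 2) pixel arrays."""
    src_h = np.hstack([src_pts, np.ones((len(src_pts), 1))])  # to homogeneous
    proj = (H @ src_h.T).T
    proj = proj[:, :2] / proj[:, 2:3]  # back to pixel coordinates
    return float(np.linalg.norm(proj - dst_pts, axis=1).mean())
```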
# Resilience & Edge Cases
- The system should correctly continue work even in the presence of up to a 350m outlier between 2 consecutive photos (e.g. due to tilt of the plane)
- System should correctly continue work during sharp turns, where the next photo doesn't overlap at all or overlaps less than 5%. The next photo should be within 200m drift and at an angle of less than 70 degrees. Sharp-turn frames are expected to fail VO and should be handled by satellite-based re-localization
- The system should operate when the UAV makes a sharp turn and subsequent photos share no common points with the previous route. It should determine the location of the new route segment and connect it to the previous route. There may be more than 2 such disconnected segments, so this strategy must be core to the system
- In case the system cannot determine the position of 3 consecutive frames by any means, it should send a re-localization request to the ground station operator via telemetry link. While waiting for operator input, the system continues attempting VO/IMU dead reckoning and the flight controller uses last known position + IMU extrapolation
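The escalation ladder implied by the bullets above (satellite anchor, then VO/IMU dead reckoning, then operator request after 3 consecutive failures) can be sketched as a per-frame decision. Names and threshold handling are illustrative, not the system's actual API:

```python
def next_action(consecutive_failures: int, satellite_match: bool) -> str:
    """Decide the localization strategy for the current frame.
    After 3 consecutive frames with no position fix, escalate to the
    operator while VO/IMU dead reckoning continues in the background."""
    if satellite_match:
        return "satellite_anchor"         # high-confidence, drift reset
    if consecutive_failures < 3:
        return "vo_dead_reckoning"        # low-confidence extrapolation
    return "operator_relocalization"      # telemetry request to ground station
```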
# Real-Time Onboard Performance
- Less than 400ms end-to-end per frame: from camera capture to GPS coordinate output to the flight controller (camera shoots at ~3fps)
- Memory usage should stay below 8GB shared memory (Jetson Orin Nano Super — CPU and GPU share the same 8GB LPDDR5 pool)
- The system must output calculated GPS coordinates directly to the flight controller via MAVLink GPS_INPUT messages (using MAVSDK)
- Position estimates are streamed to the flight controller frame-by-frame; the system does not batch or delay output
- The system may refine previously calculated positions and send corrections to the flight controller as updated estimates
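MAVLink GPS_INPUT carries latitude/longitude as degrees scaled by 1e7 and altitude in meters. A minimal sketch of packing one position estimate into those field units; the actual transmission would go through MAVSDK or an equivalent MAVLink library, so this dict only shows the field mapping (flag constants are from the MAVLink common dialect):

```python
import time

# GPS_INPUT_IGNORE_FLAGS values from the MAVLink common dialect
GPS_INPUT_IGNORE_FLAG_VEL_HORIZ = 8
GPS_INPUT_IGNORE_FLAG_VEL_VERT = 16

def pack_gps_input(lat_deg, lon_deg, alt_m, hdop, satellites_visible=10):
    """Pack a position estimate into GPS_INPUT field units:
    lat/lon in degrees * 1e7 (int), altitude in meters (float)."""
    return {
        "time_usec": int(time.time() * 1e6),
        "fix_type": 3,  # 3 = 3D fix
        "lat": int(round(lat_deg * 1e7)),
        "lon": int(round(lon_deg * 1e7)),
        "alt": float(alt_m),
        "hdop": float(hdop),
        "ignore_flags": GPS_INPUT_IGNORE_FLAG_VEL_HORIZ
                        | GPS_INPUT_IGNORE_FLAG_VEL_VERT,
        "satellites_visible": satellites_visible,
    }
```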
# Startup & Failsafe
- The system initializes using the last known valid GPS position from the flight controller before GPS denial begins
- If the system completely fails to produce any position estimate for more than N seconds (TBD), the flight controller should fall back to IMU-only dead reckoning and the system should log the failure
- On companion computer reboot mid-flight, the system should attempt to re-initialize from the flight controller's current IMU-extrapolated position
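The failsafe timeout above can be implemented as a simple watchdog around the estimation loop. N is still TBD in the requirements, so the default here is only a placeholder:

```python
import time

class EstimateWatchdog:
    """Track the time since the last successful position estimate."""

    def __init__(self, timeout_s: float = 10.0):  # placeholder for TBD N
        self.timeout_s = timeout_s
        self.last_estimate = time.monotonic()

    def estimate_produced(self) -> None:
        """Call whenever the system emits a position estimate."""
        self.last_estimate = time.monotonic()

    def should_fall_back(self) -> bool:
        """True once no estimate has arrived for timeout_s seconds; the
        caller then logs the failure and lets the flight controller drop
        to IMU-only dead reckoning."""
        return time.monotonic() - self.last_estimate > self.timeout_s
```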
# Ground Station & Telemetry
- Position estimates and confidence scores should be streamed to the ground station via telemetry link for operator situational awareness
- The ground station can send commands to the onboard system (e.g., operator-assisted re-localization hint with approximate coordinates)
- Output coordinates in WGS84 format
# Object Localization
- Other onboard AI systems can request GPS coordinates of objects detected by the AI camera
- The GPS-Denied system calculates object coordinates trigonometrically using: current UAV GPS position (from GPS-Denied), known AI camera angle, zoom, and current flight altitude. Flat terrain is assumed
- Accuracy is consistent with the frame-center position accuracy of the GPS-Denied system
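The flat-terrain trigonometry above reduces to projecting the camera's line of sight to the ground and offsetting the UAV position along the camera bearing. A minimal sketch; the angle conventions are assumptions (tilt measured from nadir, bearing from true north), not confirmed by the source:

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius

def object_gps(uav_lat, uav_lon, alt_m, cam_tilt_deg, cam_bearing_deg):
    """Flat-terrain object localization: ground distance = altitude *
    tan(tilt from nadir); offset the UAV position along the bearing."""
    ground_dist = alt_m * math.tan(math.radians(cam_tilt_deg))
    b = math.radians(cam_bearing_deg)
    dn = ground_dist * math.cos(b)  # meters north
    de = ground_dist * math.sin(b)  # meters east
    dlat = math.degrees(dn / EARTH_RADIUS_M)
    dlon = math.degrees(de / (EARTH_RADIUS_M * math.cos(math.radians(uav_lat))))
    return uav_lat + dlat, uav_lon + dlon
```

With the camera tilted 45 degrees from nadir at 100 m altitude, the object sits 100 m along the bearing; at 0 degrees tilt it is directly below the UAV.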
# Satellite Reference Imagery
- Satellite reference imagery resolution must be at least 0.5 m/pixel, ideally 0.3 m/pixel
- Satellite imagery for the operational area should be less than 2 years old where possible
- Satellite imagery must be pre-processed and loaded onto the companion computer before flight. Offline preprocessing time is not time-critical (can take minutes/hours)
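The chosen ground sample distance drives onboard storage, which matters on the shared 8 GB Jetson pool. A rough uncompressed-size estimator (illustrative only; real reference tiles would normally be compressed):

```python
def mosaic_size_mb(area_km2: float, gsd_m_per_px: float, bytes_per_px: int = 3) -> float:
    """Rough uncompressed size of a reference mosaic in MB:
    pixel count = area / gsd^2, times bytes per pixel (3 = 8-bit RGB)."""
    pixels = (area_km2 * 1e6) / (gsd_m_per_px ** 2)
    return pixels * bytes_per_px / 1e6
```

For example, 1 km2 at 0.5 m/pixel is about 12 MB uncompressed, while the ideal 0.3 m/pixel nearly triples that, so the coverage area must be budgeted against storage before flight.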