# Integration Test: Sequential Visual Odometry (Layer 1)

## Summary

Test the SuperPoint + LightGlue sequential tracking pipeline for frame-to-frame relative pose estimation in continuous UAV flight scenarios.

## Component Under Test

**Component**: Sequential Visual Odometry (Layer 1)
**Technologies**: SuperPoint (feature detection), LightGlue (attention-based matching)
**Location**: `gps_denied_07_sequential_visual_odometry`

## Dependencies

- Model Manager (TensorRT models for SuperPoint and LightGlue)
- Image Input Pipeline (preprocessed images)
- Configuration Manager (algorithm parameters)

## Test Scenarios

### Scenario 1: Normal Sequential Tracking

**Input Data**:
- Images: AD000001.jpg through AD000010.jpg (10 consecutive images)
- Ground truth: coordinates.csv
- Camera parameters: data_parameters.md (400 m altitude, 25 mm focal length)

**Expected Output**:
- Relative pose transformations between consecutive frames
- Feature match count >100 matches per frame pair
- Inlier ratio >70% after geometric verification
- Translation vectors consistent with ~120 m frame spacing

**Maximum Execution Time**: 100 ms per frame pair

**Success Criteria**:
- All 9 frame pairs successfully matched
- Estimated relative translations within 20% of ground-truth distances
- Rotation estimates within 5 degrees of expected values

### Scenario 2: Low Overlap (<5%)

**Input Data**:
- Images: AD000042, AD000044, AD000045 (sharp turn with a gap)
- The sharp turn causes minimal overlap between AD000042 and AD000044

**Expected Output**:
- LightGlue's adaptive-depth mechanism activates (more attention layers are run)
- Lower match count (10-50 matches) but high confidence
- System reports a low-confidence flag for downstream fusion

**Maximum Execution Time**: 200 ms per difficult frame pair

**Success Criteria**:
- At least 10 high-quality matches found
- Inlier ratio >50% despite low overlap
- Confidence metric accurately reflects matching difficulty

### Scenario 3: Repetitive Agricultural Texture

**Input Data**:
- Images AD000015-AD000025 (likely agricultural fields)
- High texture repetition challenge

**Expected Output**:
- SuperPoint detects semantically meaningful features (field boundaries, roads)
- LightGlue rejects ambiguous matches via its per-point matchability scores
- Stable tracking despite texture repetition

**Maximum Execution Time**: 100 ms per frame pair

**Success Criteria**:
- Match count >80 per frame pair
- No catastrophic matching failures (>50% outliers)
- Tracking continuity maintained across the sequence

## Performance Requirements

- SuperPoint inference: <20 ms per image (RTX 2060/3070)
- LightGlue matching: <80 ms per frame pair
- Combined pipeline: <100 ms per frame (normal overlap)
- TensorRT FP16 optimization mandatory

## Quality Metrics

- Match count: mean >100, min >50 (normal overlap)
- Inlier ratio: mean >70%, min >50%
- Feature distribution: >30% of image area covered
- Geometric consistency: epipolar error <1.0 pixels
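The quality gates above (match count, inlier ratio, feature-area coverage) can be sketched as a small post-matching check. This is a minimal NumPy sketch, not the project's actual API: `evaluate_frame_pair` and `coverage_fraction` are hypothetical names, and it assumes matched keypoint coordinates and a RANSAC inlier mask are already produced upstream by the SuperPoint + LightGlue pipeline.

```python
import numpy as np


def coverage_fraction(keypoints: np.ndarray, image_shape: tuple, grid: int = 10) -> float:
    """Approximate the fraction of image area covered by features as the
    share of grid x grid cells containing at least one keypoint (x, y)."""
    h, w = image_shape
    rows = np.clip((keypoints[:, 1] / h * grid).astype(int), 0, grid - 1)
    cols = np.clip((keypoints[:, 0] / w * grid).astype(int), 0, grid - 1)
    occupied = np.zeros((grid, grid), dtype=bool)
    occupied[rows, cols] = True
    return float(occupied.mean())


def evaluate_frame_pair(keypoints: np.ndarray, inlier_mask: np.ndarray,
                        image_shape: tuple) -> dict:
    """Apply the normal-overlap quality gates from this spec:
    match count >100, inlier ratio >70%, coverage >30% of the image."""
    n_matches = len(inlier_mask)
    inlier_ratio = float(inlier_mask.mean()) if n_matches else 0.0
    coverage = coverage_fraction(keypoints[inlier_mask], image_shape)
    return {
        "match_count": n_matches,
        "inlier_ratio": inlier_ratio,
        "coverage": coverage,
        "pass": bool(n_matches > 100 and inlier_ratio > 0.70 and coverage > 0.30),
    }


# Synthetic stand-in data: 200 matches spread over a 640x480 frame,
# with roughly 85% surviving geometric verification.
rng = np.random.default_rng(0)
kpts = rng.uniform([0, 0], [640, 480], size=(200, 2))
mask = rng.random(200) < 0.85
report = evaluate_frame_pair(kpts, mask, (480, 640))
```

The grid-occupancy approximation is deliberately coarse; it only needs to flag degenerate cases where all matches cluster in one corner of the frame, which would make the relative-pose estimate poorly conditioned.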