# Source Registry

## Source #1

- **Title**: Ultralytics YOLO26 Documentation
- **Link**: https://docs.ultralytics.com/models/yolo26/
- **Tier**: L1
- **Publication Date**: 2026-01-14
- **Timeliness Status**: Currently valid
- **Version Info**: YOLO26, Ultralytics 8.4.x
- **Summary**: Official YOLO26 docs — NMS-free, edge-first, MuSGD optimizer, improved small object detection, instance segmentation with semantic loss.

## Source #2

- **Title**: YOLOE: Real-Time Seeing Anything — Ultralytics Docs
- **Link**: https://docs.ultralytics.com/models/yoloe/
- **Tier**: L1
- **Publication Date**: 2025-2026
- **Timeliness Status**: Currently valid
- **Version Info**: YOLOE, YOLOE-26 (yoloe-26n-seg.pt through yoloe-26x-seg.pt)
- **Summary**: Official YOLOE docs — open-vocabulary detection/segmentation, text/visual/prompt-free modes, RepRTA, SAVPE, LRPC, zero inference overhead when re-parameterized.

## Source #3

- **Title**: YOLOE-26 Paper
- **Link**: https://arxiv.org/abs/2602.00168
- **Tier**: L1
- **Publication Date**: 2026-02
- **Timeliness Status**: Currently valid
- **Summary**: Integration of YOLO26 with YOLOE for real-time open-vocabulary instance segmentation. NMS-free, end-to-end.

## Source #4

- **Title**: Ultralytics YOLO26 Jetson Benchmarks
- **Link**: https://docs.ultralytics.com/guides/nvidia-jetson
- **Tier**: L1
- **Publication Date**: 2026
- **Timeliness Status**: Currently valid
- **Version Info**: YOLO11 benchmarks on Jetson Orin Nano Super, TensorRT FP16
- **Summary**: YOLO11n TensorRT FP16 on Jetson Orin Nano Super: 6.93ms at 640px. YOLO11s: 13.50ms. YOLO11m: 17.48ms.

## Source #5

- **Title**: Cosmos-Reason2-2B on Jetson Orin Nano Super
- **Link**: https://www.thenextgentechinsider.com/pulse/cosmos-reason2-runs-on-jetson-orin-nano-super-with-w4a16-quantization
- **Tier**: L2
- **Publication Date**: 2026-02
- **Timeliness Status**: Currently valid
- **Summary**: 4.7 tok/s on Jetson Orin Nano Super with W4A16 quantization.
## Source #6

- **Title**: UAV-VL-R1 Paper
- **Link**: https://arxiv.org/pdf/2508.11196
- **Tier**: L1
- **Publication Date**: 2025
- **Timeliness Status**: Currently valid
- **Summary**: Lightweight VLM for aerial reasoning. 48% better zero-shot than Qwen2-VL-2B. 2.5GB INT8, 3.9GB FP16. Open source.

## Source #7

- **Title**: SmolVLM 256M & 500M Blog
- **Link**: https://huggingface.co/blog/smolervlm
- **Tier**: L1
- **Publication Date**: 2025-01
- **Timeliness Status**: Currently valid
- **Summary**: SmolVLM-500M: 1.8GB GPU RAM, ONNX/WebGPU support, 93M SigLIP vision encoder.

## Source #8

- **Title**: Moondream 0.5B Blog
- **Link**: https://moondream.ai/blog/introducing-moondream-0-5b
- **Tier**: L1
- **Publication Date**: 2024-12
- **Timeliness Status**: Currently valid
- **Summary**: 500M params, 816 MiB INT4, detect()/point() APIs, Raspberry Pi compatible.

## Source #9

- **Title**: ViewPro ViewLink Serial Protocol V3.3.3
- **Link**: https://www.viewprotech.com/index.php?ac=article&at=read&did=510
- **Tier**: L1
- **Publication Date**: 2024
- **Timeliness Status**: Currently valid
- **Summary**: Serial command protocol for ViewPro gimbal cameras. UART 115200.

## Source #10

- **Title**: ArduPilot ViewPro Gimbal Integration
- **Link**: https://ardupilot.org/copter/docs/common-viewpro-gimbal.html
- **Tier**: L1
- **Publication Date**: 2025
- **Version Info**: ArduPilot 4.5+
- **Summary**: MNT1_TYPE=11 (Viewpro), SERIAL2_PROTOCOL=8, TTL serial, MAVLink 10Hz.

## Source #11

- **Title**: UAV-YOLO12 Road Segmentation
- **Link**: https://www.mdpi.com/2072-4292/17/9/1539
- **Tier**: L1
- **Publication Date**: 2025
- **Summary**: F1=0.825 for paths from UAV imagery. 11.1ms inference. SKNet + PConv modules.

## Source #12

- **Title**: FootpathSeg GitHub
- **Link**: https://github.com/WennyXY/FootpathSeg
- **Tier**: L3
- **Publication Date**: 2025
- **Summary**: DINO-MC pre-training + UNet fine-tuning for footpath segmentation. GIS layer generation.
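Taken together, Sources #9 and #10 imply a concrete autopilot setup. The fragment below is an illustrative sketch assembled from the two summaries, not a verified configuration; `SERIAL2_BAUD 115` is an assumption derived from Source #9's 115200 baud figure (ArduPilot encodes 115200 as the value 115) and is not stated in either source.

```
# Hypothetical ArduPilot parameter fragment: ViewPro gimbal on SERIAL2
MNT1_TYPE,11          # Mount driver: Viewpro (Source #10)
SERIAL2_PROTOCOL,8    # Gimbal serial protocol on SERIAL2 (Source #10)
SERIAL2_BAUD,115      # Assumed: 115200 baud per Source #9 (ViewLink UART)
```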
## Source #13

- **Title**: Herbivore Trail Segmentation (UNet+MambaOut)
- **Link**: https://arxiv.org/pdf/2504.12121
- **Tier**: L1
- **Publication Date**: 2025-04
- **Summary**: UNet+MambaOut achieves best accuracy for trail detection from aerial photographs.

## Source #14

- **Title**: Open-Vocabulary Camouflaged Object Segmentation
- **Link**: https://arxiv.org/html/2506.19300v1
- **Tier**: L1
- **Publication Date**: 2025
- **Summary**: VLM + SAM cascaded approach for camouflage detection. VLM-derived features as prompts to SAM.

## Source #15

- **Title**: YOLO Training Best Practices
- **Link**: https://docs.ultralytics.com/yolov5/tutorials/tips_for_best_training_results
- **Tier**: L1
- **Publication Date**: 2025
- **Summary**: ≥1500 images/class, ≥10,000 instances/class. 0-10% background images. Pretrained weights recommended.

## Source #16

- **Title**: Jetson AI Lab LLM/VLM Benchmarks
- **Link**: https://www.jetson-ai-lab.com/tutorials/genai-benchmarking/
- **Tier**: L1
- **Publication Date**: 2025-2026
- **Summary**: Llama-3.1-8B W4A16 on Jetson Orin Nano Super: 44.19 tok/s output, 32ms TTFT. vLLM as inference engine.

## Source #17

- **Title**: servopilot Python Library
- **Link**: https://pypi.org/project/servopilot/
- **Tier**: L3
- **Publication Date**: 2025
- **Summary**: Anti-windup PID controller for gimbal control. Dual-axis support. Zero dependencies.

## Source #18

- **Title**: Multi-Model AI Resource Allocation for Humanoid Robots: A Survey on Jetson Orin Nano Super
- **Link**: https://dev.to/ankk98/multi-model-ai-resource-allocation-for-humanoid-robots-a-survey-on-jetson-orin-nano-super-310i
- **Tier**: L3
- **Publication Date**: 2025
- **Summary**: Running VLA + YOLO concurrently on Orin Nano Super is "mostly theoretical". GPU sharing causes 10-40% latency jitter. Needs lighter edge-optimized models.
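Source #17's headline feature, anti-windup, is worth pinning down, since gimbal pointing loops saturate easily at travel stops. The sketch below is a generic illustration of integrator clamping under stated assumptions; it is not servopilot's actual API, and the class name, parameters, and limits are hypothetical.

```python
class AntiWindupPID:
    """Minimal PID controller with integrator clamping (anti-windup).

    Illustrative only; output limits default to +/-90 deg/s slew.
    """

    def __init__(self, kp, ki, kd, out_min=-90.0, out_max=90.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        # Anti-windup: clamp the integral so ki * integral alone
        # can never exceed the output limits.
        if self.ki:
            bound = self.out_max / self.ki
            self.integral = max(-bound, min(bound, self.integral))
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * deriv
        return max(self.out_min, min(self.out_max, out))
```

Clamping the integrator at the output limits keeps the loop from accumulating error while the gimbal is pinned against a stop, so the command responds immediately once the error reverses sign.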
## Source #19

- **Title**: TensorRT Multiple Engines on Single GPU
- **Link**: https://github.com/NVIDIA/TensorRT/issues/4358
- **Tier**: L2
- **Publication Date**: 2025
- **Summary**: NVIDIA recommends single engine with async CUDA streams over multiple separate engines. CUDA context push/pop needed for multiple engines.

## Source #20

- **Title**: TensorRT High Memory Usage on Jetson Orin Nano (Ultralytics)
- **Link**: https://github.com/ultralytics/ultralytics/issues/21562
- **Tier**: L2
- **Publication Date**: 2025
- **Summary**: YOLOv8-OBB TRT engine consumes ~2.6GB on Jetson Orin Nano. cuDNN/CUDA binary loading adds ~940MB-1.1GB overhead per engine.

## Source #21

- **Title**: NVIDIA Forum: Jetson Orin Nano Super Insufficient GPU Memory
- **Link**: https://forums.developer.nvidia.com/t/jetson-orin-nano-super-insufficient-gpu-memory/330777
- **Tier**: L2
- **Publication Date**: 2025-04
- **Summary**: Orin Nano Super shows 3.7GB/7.6GB free GPU memory after OS. Even 1.5B Q4 model fails to load due to KV cache buffer requirements (model weight 876MB + temp buffer 10.7GB needed).

## Source #22

- **Title**: YOLO26 TensorRT Confidence Misalignment on Jetson
- **Link**: https://www.hackster.io/qwe018931/pushing-limits-yolov8-vs-v26-on-jetson-orin-nano-b89267
- **Tier**: L2
- **Publication Date**: 2026
- **Summary**: YOLO26 exhibits bounding box drift and inaccurate confidence scores when converted to TRT for C++ deployment on Jetson. YOLOv8 works fine. Architecture-specific export issue.

## Source #23

- **Title**: YOLO26 INT8 TensorRT Export Fails on Jetson Orin (Ultralytics Issue #23841)
- **Link**: https://github.com/ultralytics/ultralytics/issues/23841
- **Tier**: L2
- **Publication Date**: 2026
- **Summary**: YOLO26n INT8 TRT export fails with checkLinks error during calibration on Jetson Orin with TensorRT 10.3.0 / JetPack 6.
## Source #24

- **Title**: PatchBlock: Lightweight Defense Against Adversarial Patches for Edge AI
- **Link**: https://arxiv.org/abs/2601.00367
- **Tier**: L1
- **Publication Date**: 2026-01
- **Summary**: CPU-based preprocessing module recovers up to 77% model accuracy under adversarial patch attacks. Minimal clean accuracy loss. Suitable for edge deployment.

## Source #25

- **Title**: Qrypt Quantum-Secure Encryption for NVIDIA Jetson Edge AI
- **Link**: https://thequantuminsider.com/2026/03/12/qrypt-quantum-secure-encryption-nvidia-jetson-edge-ai/
- **Tier**: L2
- **Publication Date**: 2026-03
- **Summary**: BLAST encryption protocol for Jetson Orin Nano and Thor. Quantum-secure end-to-end encryption, independent key generation.

## Source #26

- **Title**: Adversarial Patch Attacks on YOLO Edge Deployment (Springer)
- **Link**: https://link.springer.com/article/10.1007/s10207-025-01067-3
- **Tier**: L1
- **Publication Date**: 2025
- **Summary**: Smaller YOLO models on edge devices are more vulnerable to adversarial attacks. Trade-off between latency and security.

## Source #27

- **Title**: Synthetic Data for Military Camouflaged Object Detection (IEEE)
- **Link**: https://ieeexplore.ieee.org/document/10660900/
- **Tier**: L1
- **Publication Date**: 2024
- **Summary**: Synthetic data generation approach for military camouflage detection training.

## Source #28

- **Title**: GenCAMO: Environment-Aware Camouflage Image Generation
- **Link**: https://arxiv.org/abs/2601.01181
- **Tier**: L1
- **Publication Date**: 2026-01
- **Summary**: Scene graph + generative models for synthetic camouflage data with multi-modal annotations. Improves complex scene detection.

## Source #29

- **Title**: Camouflage Anything (CVPR 2025)
- **Link**: https://openaccess.thecvf.com/content/CVPR2025/html/Das_Camouflage_Anything_...
- **Tier**: L1
- **Publication Date**: 2025
- **Summary**: Controlled out-painting for realistic camouflage dataset generation. CamOT metric. Improves detection baselines when used for fine-tuning.

## Source #30

- **Title**: YOLOE Visual+Text Multimodal Fusion PR (Ultralytics)
- **Link**: https://github.com/ultralytics/ultralytics/pull/21966
- **Tier**: L2
- **Publication Date**: 2025
- **Summary**: Multimodal fusion of text + visual prompts for YOLOE. Concat mode (zero overhead) and weighted-sum mode (fuse_alpha). Merged into Ultralytics.

## Source #31

- **Title**: Learnable Morphological Skeleton for Remote Sensing (IEEE TGRS 2025)
- **Link**: https://ui.adsabs.harvard.edu/abs/2025ITGRS..63S1458X
- **Tier**: L1
- **Publication Date**: 2025
- **Summary**: Learnable morphological skeleton priors integrated into SAM for slender object segmentation. Addresses downsampling information loss.

## Source #32

- **Title**: GraphMorph: Topologically Accurate Tubular Structure Extraction
- **Link**: https://arxiv.org/pdf/2502.11731
- **Tier**: L1
- **Publication Date**: 2025
- **Summary**: Branch-level graph decoder + SkeletonDijkstra for centerline extraction. Reduces false positives vs pixel-level segmentation.

## Source #33

- **Title**: UAV Gimbal PID Control for Camera Stabilization (IEEE 2024)
- **Link**: https://ieeexplore.ieee.org/document/10569310/
- **Tier**: L1
- **Publication Date**: 2024
- **Summary**: PID controllers applied in gimbal construction for stabilization and tracking.

## Source #34

- **Title**: Kalman Filter Steady Aiming for UAV Gimbal (IEEE)
- **Link**: https://ieeexplore.ieee.org/ielx7/6287639/10005208/10160027.pdf
- **Tier**: L1
- **Publication Date**: 2023
- **Summary**: Kalman filter + coordinate transformation eliminates attitude and mounting errors in UAV gimbal. Better accuracy than PID alone during flight.

## Source #35

- **Title**: vLLM on Jetson Orin Nano Deployment Guide
- **Link**: https://learnopencv.com/deployment-on-edge-vllm-on-jetson/
- **Tier**: L2
- **Publication Date**: 2026
- **Summary**: vLLM can run 2B models on Orin Nano 8GB. Shared memory must be increased to 8GB. Memory management critical.

## Source #36

- **Title**: Jetson Orin Nano LLM Bottleneck Analysis
- **Link**: https://ericxliu.me/posts/benchmarking-llms-on-jetson-orin-nano/
- **Tier**: L2
- **Publication Date**: 2025
- **Summary**: Bottleneck is memory bandwidth (68 GB/s), not compute. Only 5.2GB usable VRAM after OS overhead. 40 TOPS largely underutilized for LLM inference.

## Source #37

- **Title**: TRT-LLM: No Edge Device Support Statement
- **Link**: https://github.com/NVIDIA/TensorRT-LLM/issues/7978
- **Tier**: L1
- **Publication Date**: 2025
- **Summary**: TensorRT-LLM developers explicitly state they do not aim to support edge devices/platforms.

## Source #38

- **Title**: Qwen3-VL-2B on Orin Nano Super (NVIDIA Forum)
- **Link**: https://forums.developer.nvidia.com/t/performance-inquiry-optimizing-qwen3-vl-2b-inference-for-2-qps-target-on-orin-nano-super/359639
- **Tier**: L2
- **Publication Date**: 2026
- **Summary**: Performance inquiry for Qwen3-VL-2B targeting 2 QPS on Orin Nano Super. Indicates active community attempts to deploy 2B VLMs on this hardware.
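Source #34 pairs Kalman filtering with coordinate transformation. As a reference point for the filtering half only, here is a textbook scalar Kalman update, a generic sketch rather than the paper's method; the class name and tuning values (`q` process noise, `r` measurement noise) are illustrative assumptions.

```python
class Kalman1D:
    """Scalar Kalman filter: smooths a noisy measurement stream.

    Illustrative only; q is process-noise variance, r measurement-noise
    variance, x the state estimate, p its posterior variance.
    """

    def __init__(self, x0=0.0, p0=1.0, q=0.01, r=0.5):
        self.x, self.p, self.q, self.r = x0, p0, q, r

    def update(self, z):
        # Predict: uncertainty grows by the process noise.
        self.p += self.q
        # Update: blend prediction and measurement by the Kalman gain.
        k = self.p / (self.p + self.r)
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
        return self.x
```

Feeding noisy angle measurements through `update()` pulls the estimate toward the underlying value while the posterior variance `p` shrinks, which is what makes the filter steadier than raw measurements driving a PID directly.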