- Introduced a matrix for building on both linux/arm64 and linux/amd64 platforms.
- Updated image tags to include platform-specific identifiers for better versioning.
- Enhanced the labels section to dynamically set the platform label based on the build matrix.
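A minimal illustrative sketch of such a build matrix, assuming GitHub Actions with docker/build-push-action; the job name, image name, and action version are assumptions, not the project's actual workflow:

```yaml
jobs:
  build:
    strategy:
      matrix:
        include:
          - platform: linux/amd64
            tag-suffix: amd64
          - platform: linux/arm64
            tag-suffix: arm64
    runs-on: ubuntu-latest
    steps:
      - uses: docker/build-push-action@v5
        with:
          platforms: ${{ matrix.platform }}
          # platform-specific identifier appended to the tag
          tags: myapp:latest-${{ matrix.tag-suffix }}
          # label set dynamically from the build matrix
          labels: |
            platform=${{ matrix.platform }}
```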
- Added a guideline to place all source code under the `src/` directory in `coderule.mdc`.
- Removed the outdated guideline regarding the `src/` layout in `python.mdc` to streamline project structure.
These updates improve project organization and maintainability by clarifying the structure for source code and project files.
Added functionality to command the camera to be brought down or up.
The camera is made available to the AI after the bringCameraDown command is given via UDPSocket.
The camera is made unavailable to the AI after the bringCameraUp command is given via UDPSocket.
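The gating logic can be sketched as below; the command names come from this changelog, while the class and member names are illustrative, not the project's actual code:

```cpp
#include <string>

// Hypothetical sketch: camera availability for the AI is toggled by
// commands received as UDP datagram payloads.
class CameraGate {
public:
    // Returns true if the payload was a recognised command.
    bool handleCommand(const std::string& cmd) {
        if (cmd == "bringCameraDown") {   // camera lowered -> usable by AI
            cameraAvailableForAi_ = true;
            return true;
        }
        if (cmd == "bringCameraUp") {     // camera raised -> not usable
            cameraAvailableForAi_ = false;
            return true;
        }
        return false;                     // unknown command, state unchanged
    }

    bool cameraAvailableForAi() const { return cameraAvailableForAi_; }

private:
    bool cameraAvailableForAi_ = false;   // unavailable until brought down
};
```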
The tool has three inputs: map_tile_folder, current_latitude and current_longitude.
With these, the tool goes through the map tiles in the given folder and lists them in order of their distance from the given location.
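The ordering step can be sketched as follows; the real tool reads tiles from map_tile_folder, whereas this hedged example takes them as a list, and the struct and function names are assumptions:

```cpp
#include <algorithm>
#include <cmath>
#include <string>
#include <vector>

constexpr double kPi = 3.14159265358979323846;

struct MapTile {
    std::string name;
    double lat;
    double lon;
};

// Great-circle distance in kilometres (haversine formula).
double haversineKm(double lat1, double lon1, double lat2, double lon2) {
    constexpr double kEarthRadiusKm = 6371.0;
    auto rad = [](double deg) { return deg * kPi / 180.0; };
    const double dLat = rad(lat2 - lat1);
    const double dLon = rad(lon2 - lon1);
    const double a = std::sin(dLat / 2) * std::sin(dLat / 2) +
                     std::cos(rad(lat1)) * std::cos(rad(lat2)) *
                     std::sin(dLon / 2) * std::sin(dLon / 2);
    return 2.0 * kEarthRadiusKm * std::asin(std::sqrt(a));
}

// Sort tiles by distance from (currentLat, currentLon), nearest first.
std::vector<MapTile> orderByDistance(std::vector<MapTile> tiles,
                                     double currentLat, double currentLon) {
    std::sort(tiles.begin(), tiles.end(),
              [&](const MapTile& a, const MapTile& b) {
                  return haversineKm(currentLat, currentLon, a.lat, a.lon) <
                         haversineKm(currentLat, currentLon, b.lat, b.lon);
              });
    return tiles;
}
```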
- uses MAVSDK::MissionRaw objects for missions
- added new state AZ_DRONE_STATE_MISSION_UPLOADED
- the new state is used in AzDroneControllerPlane before waiting for the AUTO switch
TODO!!
- move to AzMissionController
- use a JSON file instead of hard-coded mission items
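The shape of a raw mission item can be sketched with a self-contained stand-in; the field names follow the MAVLink MISSION_ITEM_INT message that MAVSDK's MissionRaw wraps, while the struct name and helper below are illustrative, not the project's code:

```cpp
#include <cstdint>
#include <utility>
#include <vector>

// Stand-in mirroring the fields of a raw mission item
// (MAVLink MISSION_ITEM_INT).
struct RawMissionItem {
    uint32_t seq;           // position of the item in the mission
    uint32_t frame;         // MAV_FRAME, e.g. 3 = GLOBAL_RELATIVE_ALT
    uint32_t command;       // MAV_CMD, e.g. 16 = NAV_WAYPOINT
    uint32_t current;       // 1 on the first item, else 0
    uint32_t autocontinue;  // advance to the next item automatically
    float param1, param2, param3, param4;
    int32_t x;              // latitude  * 1e7
    int32_t y;              // longitude * 1e7
    float z;                // altitude in metres
};

// Build a simple waypoint list from (lat, lon) pairs at a fixed altitude.
std::vector<RawMissionItem> makeWaypoints(
        const std::vector<std::pair<double, double>>& latlon, float altM) {
    std::vector<RawMissionItem> items;
    for (uint32_t i = 0; i < latlon.size(); ++i) {
        items.push_back({i, 3u, 16u, i == 0 ? 1u : 0u, 1u,
                         0.f, 0.f, 0.f, 0.f,
                         static_cast<int32_t>(latlon[i].first * 1e7),
                         static_cast<int32_t>(latlon[i].second * 1e7),
                         altM});
    }
    return items;
}
```

Loading the pairs from a JSON file, as the TODO suggests, would only replace the source of the `latlon` argument.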
- removed QtSerialPort from the Qt CONFIG parameters
- removed compiler warnings
- reduced logging
- fixed the FPS display to show what the AI really analyzed
- RTSP reader tries to connect to the stream once per second until it succeeds
- removed unnecessary logging
- print start date and time when application starts
- use std::cout instead of qDebug()
- better logging in DroneController classes
- renamed Controller states for better readability
When the application is started with the command parameter "plane", the
application uses the AzDroneControllerPlane class to control
initialisation. It doesn't arm or take off the drone. Instead, it waits
for the user to switch the mode to AUTO (in ArduPilot; Mission in
MAVSDK) with the RC controller. When AUTO mode has been detected, the
application starts normal mission handling.
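The gating described above can be reduced to a small predicate; this is a hedged sketch with a simplified flight-mode enum, not the controller's actual interface:

```cpp
// In "plane" mode the controller neither arms nor takes off; it starts
// mission handling only once the pilot has switched to AUTO via the RC
// controller. Outside plane mode the controller proceeds on its own.
enum class FlightMode { Manual, Stabilized, Auto };

bool shouldStartMission(bool planeMode, FlightMode mode) {
    if (!planeMode) {
        return true;                  // non-plane path: controller drives itself
    }
    return mode == FlightMode::Auto;  // plane path: wait for the RC mode switch
}
```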
Save BMP images of inference results to /tmp. BMP was chosen to reduce
encoding time. Saving is fully threaded. It can be enabled with the
qmake CONFIG+=save_images option.
Also:
- use antialiased fonts in RKNN inference
- moved class strings to inference base class
- fixed silly segfault in ONNX inference
- prevent writing results if the class id exceeds valid values
Issue: https://denyspopov.atlassian.net/browse/AZ-38
Type: Improvement
- added qmake option yolo_onnx to use normal YOLOv8 ONNX models. This makes it
  possible to test the gimbal camera indoors without the real model.
- reduced confidence threshold requirement in AiEngineInferencevOnnxRuntime from 0.5 to 0.2
- make printing prettier with ONNX Runtime
- removed unnecessary cv::Mat::clone()
Type: Improvement
Issue: https://denyspopov.atlassian.net/browse/AZ-39
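The threshold change amounts to a filtering step like the one below; the Detection struct and function name are assumptions for illustration:

```cpp
#include <algorithm>
#include <vector>

struct Detection {
    int classId;
    float confidence;
};

// Drop detections below the confidence cut-off before reporting results.
// The default was lowered from 0.5f to 0.2f.
std::vector<Detection> filterByConfidence(std::vector<Detection> dets,
                                          float threshold = 0.2f) {
    dets.erase(std::remove_if(dets.begin(), dets.end(),
                              [&](const Detection& d) {
                                  return d.confidence < threshold;
                              }),
               dets.end());
    return dets;
}
```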
- fixed probability showing 0.50 all the time
- removed commented out code
- fixed a bug which prevented use of OPI5 inference in the case of early failure
Type: Bugfix
Issue: https://denyspopov.atlassian.net/browse/AZ-37
The functionality has been implemented in rtsp_ai_player.
TODO!!
- move the functionality of misc/rtsp_ai_player/aienginegimbalserver.cpp to the camera module
- implement all signals in AiEngineGimbalClient
- get drone position from autopilot and send it to AiEngineGimbalServer
MAVSDK/ArduPilot never reported reaching the take-off altitude. Added
simple logic to start the mission when 90% of the set take-off altitude
has been reached.
Type: Improvement
Issue: https://denyspopov.atlassian.net/browse/AZ-22
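The work-around can be sketched as a single predicate; the function and parameter names are illustrative, not the project's code:

```cpp
// Start the mission once 90% of the set take-off altitude has been
// reached, instead of waiting for the exact altitude (which was never
// quite reported).
bool takeoffAltitudeReached(float currentAltM, float targetAltM,
                            float fraction = 0.9f) {
    return currentAltM >= targetAltM * fraction;
}
```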