OPTIMIZING INFORMATION-EXTREMAL MACHINE LEARNING PARAMETERS FOR GROUND OBJECT RECOGNITION BY UNMANNED AERIAL VEHICLES

Authors

DOI:

https://doi.org/10.35546/kntu2078-4481.2025.2.2.31

Keywords:

UAV, machine learning, information-extreme approach, autonomous navigation

Abstract

The article substantiates an innovative approach to autonomous navigation of unmanned aerial vehicles (UAVs) based on machine learning by the information-extreme method. The main focus is on optimizing the machine learning parameters to increase the accuracy of ground object recognition in a geospatial scene. The study determined the optimal values of the vector of machine learning parameters, which made it possible to construct geometric containers of the recognition classes and to form effective decision rules on their basis. Functional testing of the algorithm confirmed the correctness of the system's operation on the training matrix, and the examination stage demonstrated the high accuracy of machine learning of the autonomous UAV. The key theoretical result is the formulation of the information-extreme synthesis problem for the onboard UAV system, which consists in finding the global maximum of the information criterion for optimizing the training parameters in different zones of the geospatial scene. In addition, a new functional categorical machine learning model of the second level of depth is proposed, which significantly improves the quality of recognition of complex objects in a dynamic environment; its effectiveness stems from combining the information-extreme method with the categorical machine learning model. The practical significance of the results lies in their applicability to the development of intelligent UAV control systems and geospatial data processing in the military, cartographic and monitoring domains. The work opens up new prospects for creating highly efficient autonomous navigation systems capable of operating in real time in a dynamic, changing environment.
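For orientation, the optimization described in the abstract can be summarized as a search over container radii (and other training parameters) for the global maximum of an information criterion. The following Python sketch illustrates this idea for two classes with hyperspherical containers in binary Hamming space; the criterion form, the control-tolerance handling, the function names and the synthetic data are illustrative assumptions for demonstration, not the authors' exact algorithm.

import numpy as np

# Minimal, illustrative sketch of hypersphere-container information-extreme
# learning for two recognition classes. All specifics below (criterion form,
# tolerance handling, synthetic data) are assumptions, not the paper's method.

def binarize(X, center, delta):
    # Map real-valued features to binary: 1 if the feature falls inside
    # the control tolerance field [center - delta, center + delta].
    return ((X >= center - delta) & (X <= center + delta)).astype(int)

def kullback_criterion(alpha, beta, eps=1e-6):
    # Simplified normalized Kullback-style information measure that grows
    # as the total error (alpha + beta) shrinks.
    s = np.clip(alpha + beta, eps, 2.0 - eps)
    return 0.5 * np.log2((2.0 - s) / s) * (1.0 - s)

def train_class(own, other, delta):
    # Build binary learning matrices relative to the own-class tolerance field.
    center = own.mean(axis=0)
    b_own = binarize(own, center, delta)
    b_other = binarize(other, center, delta)
    etalon = (b_own.mean(axis=0) > 0.5).astype(int)   # reference binary vector
    d_own = np.abs(b_own - etalon).sum(axis=1)        # Hamming distances
    d_other = np.abs(b_other - etalon).sum(axis=1)
    best_e, best_r = -np.inf, 1
    for r in range(1, own.shape[1]):                  # candidate container radii
        alpha = float(np.mean(d_own > r))             # own vectors rejected
        beta = float(np.mean(d_other <= r))           # foreign vectors accepted
        e = kullback_criterion(alpha, beta)
        if e > best_e:                                # global maximum over radii
            best_e, best_r = e, r
    return etalon, best_r, best_e

def membership(x_bin, etalon, r):
    # Decision rule: positive membership means the vector lies inside
    # the class container of radius r around the reference vector.
    return 1.0 - np.abs(x_bin - etalon).sum() / r

# Hypothetical training data: 100-dimensional feature vectors extracted
# from two zones of a geospatial scene (one recognition class per zone).
rng = np.random.default_rng(0)
class_a = rng.normal(0.0, 1.0, (50, 100))
class_b = rng.normal(0.8, 1.0, (50, 100))

etalon_a, r_a, e_a = train_class(class_a, class_b, delta=1.0)
print(f"optimal container radius = {r_a}, information criterion = {e_a:.3f}")

x = binarize(class_b[0], class_a.mean(axis=0), 1.0)
print(f"membership of a foreign vector in class A: {membership(x, etalon_a, r_a):.2f}")

In the full method, an analogous search would also run over the control tolerance delta and the other training parameters, and the combination delivering the global maximum of the criterion in each zone of the geospatial scene would define that class's container and decision rule.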

References

Afghah F., Razi A., Chakareski J., Ashdown J. Wildfire Monitoring in Remote Areas using Autonomous Unmanned Aerial Vehicles. arXiv preprint arXiv:1905.00492. 2019. URL: https://arxiv.org/abs/1905.00492

An Onboard Vision-Based System for Autonomous Landing of a Low-Cost Quadrotor on a Novel Landing Pad / Liu et al. Sensors. 2019. Vol. 19, no. 21. P. 4703. URL: https://doi.org/10.3390/s19214703

Badrinarayanan V., Kendall A., Cipolla R. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2017. Vol. 39, no. 12. P. 2481–2495. URL: https://doi.org/10.1109/tpami.2016.2644615

Current status and perspective of remote sensing application in crop management / M. Jurišić et al. Journal of Central European Agriculture. 2021. Vol. 22, no. 1. P. 156–166. URL: https://doi.org/10.5513/JCEA01/22.1.3042

DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs / L.-C. Chen et al. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2018. Vol. 40, no. 4. P. 834–848. URL: https://doi.org/10.1109/tpami.2017.2699184

Engel J., Koltun V., Cremers D. Direct Sparse Odometry. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2018. Vol. 40, no. 3. P. 611–625. URL: https://doi.org/10.1109/tpami.2017.2658577

Forster C., Pizzoli M., Scaramuzza D. SVO: Fast semi-direct monocular visual odometry. 2014 IEEE International Conference on Robotics and Automation (ICRA). 2014. P. 15–22. URL: https://ieeexplore.ieee.org/document/6906584

Huang G. Visual-Inertial Navigation: A Concise Review. 2019 International Conference on Robotics and Automation (ICRA). 2019. P. 9572–9582. URL: https://doi.org/10.1109/ICRA.2019.8793604

Information-Extreme Machine Learning of On-Board Vehicle Recognition System / A. S. Dovbysh et al. Cybernetics and Systems Analysis. 2020. Vol. 56, no. 4. P. 534–543. URL: https://doi.org/10.1007/s10559-020-00269-y

Liu Y., Zhang L. Machine Vision and Deep Learning for Autonomous UAV Navigation. Journal of Field Robotics. 2018. Vol. 35, no. 4. P. 665–684. URL: https://doi.org/10.1002/rob.21855

Otroshchenko M., Myronenko M. Determining the Location of the Autonomous UAV. Information Technology and Implementation (Satellite): Conference Proceedings, Kyiv, 24 November 2024. Kyiv, 2024. P. 56–57. URL: http://iti.fit.univ.kiev.ua/wp-content/uploads/Збірка-19_12_2024_ITI_2024-е.pdf

Park J., Lee H. GPS/IMU and Vision Integration for Autonomous UAV Navigation. International Journal of Aerospace Engineering. 2017. P. 1–10. URL: https://doi.org/10.1155/2017/4792174

Radar/electro-optical data fusion for non-cooperative UAS sense and avoid / G. Fasano et al. Aerospace Science and Technology. 2015. Vol. 46. P. 436–450. URL: https://doi.org/10.1016/j.ast.2015.08.010

Ronneberger O., Fischer P., Brox T. U-Net: Convolutional Networks for Biomedical Image Segmentation. Lecture Notes in Computer Science. Cham, 2015. P. 234–241. URL: https://doi.org/10.1007/978-3-319-24574-4_28

Scaramuzza D., Fraundorfer F. Visual Odometry [Tutorial]. IEEE Robotics & Automation Magazine. 2011. Vol. 18, no. 4. P. 80–92. URL: https://doi.org/10.1109/mra.2011.943233

Singh I. P., Patel A. Visual odometry for autonomous vehicles. International Journal of Advanced Research. 2019. Vol. 7. P. 1136–1144. URL: https://dx.doi.org/10.21474/IJAR01/9765

Published

2025-06-05