INFORMATION TECHNOLOGY FOR INTELLIGENT TRAFFIC VIDEO STREAM ANALYSIS BASED ON EDGE-CLOUD ARCHITECTURE

Authors

DOI:

https://doi.org/10.35546/kntu2078-4481.2026.1.41

Keywords:

information technology, Edge-Cloud architecture, computer vision, object detection, multi-object tracking, anomaly detection, intelligent transportation systems, YOLOv8, ByteTrack, Jetson Orin Nano, TensorRT

Abstract

The paper presents the development of an information technology for intelligent analysis of road traffic video streams based on a hybrid Edge-Cloud architecture. The proposed solution is designed to enable real-time data processing, significantly reduce network load, and ensure autonomous operation of edge nodes under conditions of limited bandwidth. The system is implemented as a modular structure comprising four main functional blocks: user interface, API layer, Edge layer, and Cloud layer. Each block performs clearly defined tasks and interacts via standardized protocols (REST, gRPC, MQTT, WebSocket), providing high scalability, flexibility, and independent evolution of components. The Edge layer executes the complete video processing pipeline directly on the NVIDIA Jetson Orin Nano device. The pipeline includes object detection using a modified YOLOv8-nano model enhanced with C3Ghost-CBAM modules, multi-object tracking with an improved ByteTrack+ method, and anomaly detection in road traffic. Only compact metadata (bounding boxes, track IDs, anomaly types, statistics) are transmitted to the Cloud layer, eliminating the need to send large volumes of video data. UML sequence diagrams were developed for the system initialization phase and continuous video stream processing, as well as an activity diagram for the object detection model training process. The software implementation is based on a modern technology stack: Python 3.10+, PyTorch 2.1+, Ultralytics YOLOv8 with export to TensorRT (FP16/INT8 quantization), PostgreSQL with the TimescaleDB extension for time-series data, Redis for caching and message queuing, FastAPI for REST/gRPC interfaces, and Docker for containerization.
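To make the metadata-only transmission concrete, the sketch below builds a hypothetical per-frame payload of the kind the Edge layer would publish to the Cloud layer. All field names and values here are illustrative assumptions; the paper does not publish the actual message schema.

```python
import json

# Hypothetical per-frame metadata record produced on the Edge device.
# Field names are illustrative assumptions, not the system's real schema.
frame_metadata = {
    "camera_id": "cam-07",
    "frame_ts": "2026-02-09T12:00:00.033Z",
    "detections": [
        # bounding box [x, y, w, h], class label, confidence, track ID
        {"bbox": [412, 230, 88, 54], "cls": "car", "conf": 0.91, "track_id": 17},
        {"bbox": [103, 310, 42, 96], "cls": "pedestrian", "conf": 0.84, "track_id": 23},
    ],
    "anomalies": [
        {"type": "wrong_way_driving", "track_id": 17, "score": 0.77},
    ],
    "stats": {"vehicle_count": 2, "mean_speed_kmh": 38.5},
}

# Compact JSON encoding, as it would be sent over MQTT/gRPC.
payload = json.dumps(frame_metadata, separators=(",", ":")).encode("utf-8")
print(len(payload))  # a few hundred bytes per frame of JSON metadata
```

At the abstract's stated 4–6 Mbps video bitrate, a single H.264 frame at 30 fps averages roughly 17–25 KB, so a few hundred bytes of such metadata per frame is consistent with the two orders of magnitude of traffic reduction reported below.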
Experimental evaluation confirmed the high efficiency of the technology: a network traffic reduction of 98–99.8% (from a typical 4–6 Mbps, i.e. ≈0.5–0.75 MB/s, per camera for a full H.264 1080p@30fps video stream down to 8–15 KB/s of JSON metadata), a processing speed of 55–75 FPS on the NVIDIA Jetson Orin Nano using TensorRT, and an average Matthews Correlation Coefficient (MCC) of 0.742 for anomaly detection. The proposed architecture demonstrates strong modularity, scalability, and fault tolerance: the edge device can operate autonomously during temporary loss of Cloud connectivity, storing detected events in a local buffer and automatically synchronizing them once the connection is restored. The developed technology shows strong potential for application in intelligent transportation systems (ITS), road safety monitoring in smart cities, and scenarios with constrained network bandwidth (4G/5G, remote areas).
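As a reminder of how the reported quality metric is computed, the snippet below evaluates the Matthews Correlation Coefficient from a binary confusion matrix. The counts used are hypothetical illustrations, not the paper's experimental data.

```python
from math import sqrt

def mcc(tp: int, tn: int, fp: int, fn: int) -> float:
    """Matthews Correlation Coefficient for a binary confusion matrix.

    Returns a value in [-1, 1]; 0.0 is returned for a degenerate
    denominator (any row or column of the matrix summing to zero).
    """
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Hypothetical counts for anomalous vs. normal traffic segments.
print(round(mcc(tp=86, tn=903, fp=41, fn=22), 3))
```

Unlike accuracy, MCC stays informative under the heavy class imbalance typical of anomaly detection (normal traffic vastly outnumbers anomalies), which is presumably why it is the headline metric here.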

References

World Health Organization. Global status report on road safety 2023. Geneva : WHO, 2023. 81 p. ISBN 978-92-4-008651-7. URL: https://www.who.int/publications/i/item/9789240086517 (accessed: 09.02.2026).

Duong H.-T., Le V.-T., Hoang V. T. Deep learning-based anomaly detection in video surveillance: A survey // Sensors. 2023. Vol. 23, No. 11. Art. 5024. DOI: 10.3390/s23115024.

Human-Centric Anomaly Detection in Surveillance Videos Using YOLO-World and Spatio-Temporal Deep Learning [Online] / arXiv. 2025. URL: https://arxiv.org/abs/2510.22056 (accessed: 09.02.2026).

What are the challenges of AI image processing on edge devices? [Online] / Tencent Cloud. 2025. URL: https://www.tencentcloud.com/techpedia/125406 (accessed: 09.02.2026).

Advancing multi-object tracking through occlusion-awareness and trajectory optimization // Knowledge-Based Systems. 2025. Vol. 310. Art. 112930. DOI: 10.1016/j.knosys.2024.112930.

Romanets V., Maslii R. V. Anomaly detection in a transport traffic video stream using computer vision and deep learning // Visnyk of Vinnytsia Politechnical Institute. 2025. Issue 5. P. 146–155.

Pradeep Kumar P., Kant K. TU-DAT: A computer vision dataset on road traffic anomalies // Sensors. 2025. Vol. 25, No. 11. Art. 3259. DOI: 10.3390/s25113259. URL: https://www.mdpi.com/1424-8220/25/11/3259 (accessed: 09.02.2026).

Redmon J., Divvala S., Girshick R., Farhadi A. You only look once: Unified, real-time object detection // Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2016. P. 779–788.

Jocher G., Chaurasia A., Qiu J. Ultralytics YOLOv8 [Online]. 2023. URL: https://github.com/ultralytics/ultralytics (accessed: 09.02.2026).

Romanets V., Bisikalo O. Detection of road traffic objects from video surveillance cameras // Herald of Khmelnytskyi National University. Technical Sciences. 2025. Vol. 355, No. 4. P. 491–497. DOI: 10.31891/2307-5732-2025-355-70.

Zhang Y., Sun P., Jiang Y. et al. ByteTrack: Multi-object tracking by associating every detection box // Proceedings of the European Conference on Computer Vision (ECCV). 2022. P. 1–21. DOI: 10.1007/978-3-031-20047-2_1.

Cao J., Weng X., Khirodkar R. et al. Observation-centric SORT: Rethinking SORT for robust multi-object tracking // Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 2023. P. 9686–9696.

Romanets V., Bisikalo O. A multi-object tracking method with occlusion handling for video surveillance systems // Measuring and Computing Devices in Technological Processes. 2025. Vol. 84, No. 4. P. 440–445. DOI: 10.31891/2219-9365-2025-84-53.

Chen J., Ran X. Deep learning with edge computing: A review // Proceedings of the IEEE. 2019. Vol. 107, No. 8. P. 1655–1674. DOI: 10.1109/JPROC.2019.2921977.

Wang X., Han Y., Leung V. C. M., Niyato D., Yan X., Chen X. Convergence of edge computing and deep learning: A comprehensive survey // IEEE Communications Surveys & Tutorials. 2020. Vol. 22, No. 2. P. 869–904. DOI: 10.1109/COMST.2020.2970550.

Lin Y., Lockyer S., Stanek F., Zarbock M., Evans A., Li W., Zhang N. SAE-MCVT: A Real-Time and Scalable Multi-Camera Vehicle Tracking Framework Powered by Edge Computing [Online] / arXiv. 2025. arXiv:2511.13904v1. URL: https://arxiv.org/pdf/2511.13904 (accessed: 09.02.2026).

Onsu M. A., Lohan P., Kantarci B., Syed A., Andrews M., Kennedy S. Semantic Edge–Cloud Communication for Real-Time Urban Traffic Surveillance with ViT and LLMs over Mobile Networks [Online] / arXiv. 2025. arXiv:2509.21259v1. URL: https://arxiv.org/pdf/2509.21259 (accessed: 09.02.2026).

Published

2026-04-30