INTELLIGENT APPROACH TO ENHANCING THE STABILITY OF AUTOMATED TESTING SYSTEMS BASED ON REINFORCEMENT LEARNING
DOI: https://doi.org/10.35546/kntu2078-4481.2025.4.3.13

Keywords: artificial intelligence, test stability, QA automation, adaptive algorithms

Abstract
This research develops and substantiates an intelligent reinforcement-learning-based architecture aimed at enhancing the stability of automated testing systems, using Selenium as a practical foundation. The study identifies the root causes of instability in UI testing – DOM volatility, asynchronous rendering, network latency, and rigid synchronization – and formulates a multilayered adaptive model that mitigates these factors through reinforcement learning mechanisms. The proposed Intelligent Reinforcement Layer (IRL) restructures Selenium’s classical hierarchy, namely IDE, Client–Server, and Grid, into a unified self-learning framework capable of autonomous adjustment to environmental drift. Within the IRL, the IDE Reinforcement Agent performs local adaptation by regulating playback timing and locator selection in response to structural changes in the DOM. The Client–Server Adaptive Layer, comprising the Execution Stability Agent and the Hierarchical Locator Adaptation Agent, ensures execution stability by adjusting synchronization intervals, wait policies, and locator recovery strategies based on real-time telemetry. At the distributed level, the Grid Learning Environment applies cooperative multi-agent learning to balance test workloads across nodes, optimizing system-wide stability and throughput. The research formalizes unified reinforcement-learning logic, defines state–action structures for each subsystem, and introduces a collective experience buffer for cross-layer training. Comparative analysis demonstrates that while traditional frameworks such as Cypress and Playwright implement fixed stabilization procedures, the proposed IRL architecture achieves adaptive resilience through continuous learning and feedback.
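The locator-adaptation idea described above can be illustrated with a minimal sketch. This is not the paper's algorithm: it models locator-strategy selection as an epsilon-greedy bandit, where each strategy (e.g. id, CSS, XPath) earns reward when the element is found. The `LocatorAdaptationAgent` class and the simulated success probabilities are illustrative assumptions standing in for real Selenium `find_element` outcomes.

```python
import random

class LocatorAdaptationAgent:
    """Epsilon-greedy bandit sketch: learns which locator strategy
    succeeds most often as the DOM drifts (illustrative, not the paper's agent)."""

    def __init__(self, strategies, epsilon=0.1, seed=None):
        self.strategies = list(strategies)
        self.successes = {s: 0 for s in self.strategies}
        self.attempts = {s: 0 for s in self.strategies}
        self.epsilon = epsilon
        self.rng = random.Random(seed)

    def choose(self):
        # Explore occasionally; otherwise exploit the best-known strategy.
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.strategies)
        return max(
            self.strategies,
            key=lambda s: self.successes[s] / self.attempts[s] if self.attempts[s] else 0.0,
        )

    def update(self, strategy, succeeded):
        # Reward signal: 1 if the element was located, 0 otherwise.
        self.attempts[strategy] += 1
        if succeeded:
            self.successes[strategy] += 1

# Simulated environment: hidden per-strategy success probabilities stand in
# for how often each locator would actually resolve against a drifting DOM.
probs = {"id": 0.9, "css": 0.6, "xpath": 0.3}
agent = LocatorAdaptationAgent(probs, epsilon=0.1, seed=42)
env = random.Random(0)
for _ in range(500):
    s = agent.choose()
    agent.update(s, env.random() < probs[s])

best = max(probs, key=lambda s: agent.successes[s] / max(agent.attempts[s], 1))
print(best)
```

Under this simulation the agent concentrates its attempts on the most reliable locator while still probing the alternatives, which is the feedback loop the IRL ascribes to the Hierarchical Locator Adaptation Agent.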
The study concludes that integrating reinforcement learning into Selenium’s architecture yields a self-optimizing testing environment capable of maintaining long-term stability, reducing flakiness, and improving the robustness of automated QA processes under evolving application conditions.