MODELING PLAYER BEHAVIOR IN WEB GAMES THROUGH ARTIFICIAL INTELLIGENCE: TESTING AND RESULTS

Authors

DOI:

https://doi.org/10.35546/kntu2078-4481.2025.1.2.17

Keywords:

automation, artificial intelligence, computer vision, decision-making algorithms, “Fireboy and Watergirl” game, path planning

Abstract

The article is dedicated to the development of an artificial intelligence (AI) system for modeling player behavior in web games, using the example of automatically completing levels in the popular game “Fireboy and Watergirl”, where two characters must overcome obstacles and solve puzzles. The main goal is to create an efficient AI capable of detecting objects on the screen, analyzing situations, and making informed decisions for planning movement, interacting with objects, and overcoming obstacles. An analysis of existing automation methods revealed that they need improvement to ensure accuracy and speed of decision-making in real time. The article proposes using computer vision technologies to detect objects by color and contours, as well as path-planning and decision-making algorithms.
By implementing modern reinforcement learning algorithms such as Proximal Policy Optimization (PPO), the developed AI can adapt its behavior based on experience gained during numerous game episodes. This allows the agents, Fireboy and Watergirl, to interact effectively with game elements, avoid dangers, and optimize their strategies to achieve goals. Tests have shown that the AI can independently make strategic decisions, such as choosing optimal routes and avoiding undesirable areas, significantly enhancing its autonomy and efficiency.
Integrating AI into the gaming process enables the creation of a more dynamic and interactive environment that can adapt to player actions in real time. This opens new opportunities for developers to create games that provide a unique experience for each user. Furthermore, such technologies can be applied not only in the entertainment sector but also in other fields where process automation and decision-making under uncertainty are required.
The results obtained can be used for further improvement of the gaming process, increasing the level of interactivity and adaptability of games, as well as for developing new approaches to automation in various industries.
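As a rough illustration of the PPO mechanism named in the abstract, the clipped surrogate objective from Schulman et al. (2017) can be sketched in a few lines of Python. This is a minimal sketch of the published formula only, not the article's implementation; the function name and example values are illustrative assumptions.

```python
def ppo_clip_objective(ratio, advantage, eps=0.2):
    """Per-sample clipped surrogate objective from PPO (Schulman et al., 2017).

    ratio     -- probability ratio pi_new(a|s) / pi_old(a|s)
    advantage -- estimated advantage A(s, a) of the taken action
    eps       -- clipping range epsilon (the paper's default is 0.2)
    """
    # Clip the ratio to [1 - eps, 1 + eps] so a single update
    # cannot move the policy too far from the old one.
    clipped = max(1.0 - eps, min(ratio, 1.0 + eps))
    # Take the pessimistic (lower) of the clipped and unclipped terms.
    return min(ratio * advantage, clipped * advantage)


# Illustrative values: a large ratio with a positive advantage is clipped,
# limiting how much credit one favorable sample can contribute.
print(ppo_clip_objective(1.5, 1.0))   # clipped to 1.2 * 1.0
print(ppo_clip_objective(0.5, -1.0))  # pessimistic bound -0.8
```

In training, this objective is averaged over a batch and maximized; the clipping is what lets the agents refine behavior over many game episodes without destabilizing the policy.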
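The path-planning component mentioned in the abstract can be illustrated with a minimal A* search (Hart et al., 1968, cited below) over a grid model of a level. The grid encoding (0 = free, 1 = obstacle), coordinates, and function name are assumptions for demonstration only, not the article's actual level representation.

```python
import heapq


def astar(grid, start, goal):
    """A* on a 4-connected grid using the Manhattan-distance heuristic.

    grid  -- list of rows; 0 = walkable cell, 1 = obstacle
    Returns the list of (row, col) cells from start to goal, or None.
    """
    def h(a, b):
        # Manhattan distance: admissible for 4-directional movement.
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    rows, cols = len(grid), len(grid[0])
    # Heap entries: (f = g + h, g, cell, path taken so far).
    open_heap = [(h(start, goal), 0, start, [start])]
    best_g = {start: 0}

    while open_heap:
        _, g, cell, path = heapq.heappop(open_heap)
        if cell == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = cell[0] + dr, cell[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                ng = g + 1
                if ng < best_g.get((r, c), float("inf")):
                    best_g[(r, c)] = ng
                    heapq.heappush(
                        open_heap,
                        (ng + h((r, c), goal), ng, (r, c), path + [(r, c)]),
                    )
    return None  # goal unreachable


# Toy level: a wall forces a detour around the right side.
level = [[0, 0, 0],
         [1, 1, 0],
         [0, 0, 0]]
print(astar(level, (0, 0), (2, 0)))
```

A character agent would re-run such a search whenever the detected obstacle layout changes, which is how heuristic planning complements the screen-analysis step described above.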

References

Fireboy and Watergirl game. URL: https://ua.sgames.org/188503/ (accessed: 06.01.2025).

Proximal Policy Optimization Algorithms / Schulman J. et al. arXiv:1707.06347v2. 2017, Aug 28. URL: https://arxiv.org/pdf/1707.06347v2 (accessed: 06.01.2025).

DhanushKumar. PPO Algorithm. 2024, Feb 21. URL: https://medium.com/@danushidk507/ppo-algorithm-3b33195de14a (accessed: 06.01.2025).

A Study on Overfitting in Deep Reinforcement Learning / Zhang C., Vinyals O., Munos R., Bengio S. arXiv:1804.06893v2. 2018, Apr 18. URL: https://arxiv.org/pdf/1804.06893 (accessed: 06.01.2025).

Hart P. E., Nilsson N. J., Raphael B. A Formal Basis for the Heuristic Determination of Minimum Cost Paths. IEEE Transactions on Systems Science and Cybernetics. 1968. № 4(2). P. 100–107. URL: https://ieeexplore.ieee.org/document/4082128 (accessed: 06.01.2025).

Mastering the Game of Go with Deep Neural Networks and Tree Search / Silver D. et al. Nature. 2016, Jan 27. № 529. P. 484–489. URL: https://www.nature.com/articles/nature16961 (accessed: 06.01.2025).

Human-level control through deep reinforcement learning / Mnih V. et al. Nature. 2015, Feb 25. № 518. P. 529–533. URL: https://www.nature.com/articles/nature14236 (accessed: 06.01.2025).

Continuous control with deep reinforcement learning / Lillicrap T. P., Hunt J. J., Pritzel A., Heess N., Erez T., Tassa Y., Silver D., Wierstra D. arXiv:1509.02971v6. 2015, Sep 9. URL: https://arxiv.org/abs/1509.02971 (accessed: 06.01.2025).

Published

2025-02-25