NEURAL NETWORK MODELING TO DETERMINE THE STAGE OF ALZHEIMER'S DISEASE

Authors

DOI:

https://doi.org/10.35546/kntu2078-4481.2024.1.39

Keywords:

recognition, classification, convolutional neural network, cooking, YOLO, iOS

Abstract

Object recognition plays a significant role in improving living conditions, especially in the context of recognizing culinary ingredients. Systems that automatically identify foods from images using neural networks have the potential to greatly simplify cooking and make it more accessible and convenient for many people. Accurate identification is the central task in this context. Modern trends in information technology underline the urgency of developing intelligent systems capable of analyzing and visualizing data in real time. In the culinary domain, creating an intelligent system for identifying ingredients and suggesting recipes on the iOS platform opens new horizons for user experience and innovative approaches to cooking. With object recognition in cooking, users can simply point the device's camera at their products or dishes, and the system will provide detailed information about their composition and nutritional properties. This is especially valuable for those who follow a diet, manage allergies, or simply want to eat more consciously. Such technology can also serve as an educational tool, helping people learn about new ingredients and experiment with different recipes. All of this contributes to the variety and quality of home cooking. The choice of the iOS platform for developing an intelligent ingredient recognition system is justified not only by the wide popularity of Apple devices, but also by the outstanding set of tools the company provides. Apple created the Swift programming language, which is complemented by convenient, powerful frameworks and, importantly, keeps applications available on most of its devices, including older models. The aim of the work is to improve the recognition efficiency of food objects for the subsequent recommendation of recipes using the ARKit stack on the iOS platform.
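The post-recognition step described above, mapping a recognized ingredient label to composition and allergen information, can be sketched in plain Swift. The `NutritionInfo` type and the lookup values below are hypothetical stand-ins for a real nutrition database, not part of the paper:

```swift
import Foundation

// Hypothetical nutrition record; fields and values are illustrative only.
struct NutritionInfo {
    let calories: Int       // kcal per 100 g
    let allergens: [String]
}

// Toy lookup table standing in for a real nutrition database.
let nutritionTable: [String: NutritionInfo] = [
    "tomato": NutritionInfo(calories: 18, allergens: []),
    "peanut": NutritionInfo(calories: 567, allergens: ["peanut"]),
    "egg":    NutritionInfo(calories: 155, allergens: ["egg"]),
]

// Map a recognized label to its nutrition facts, flagging allergens:
// the kind of post-recognition step the abstract describes for users
// with dietary restrictions.
func describe(label: String) -> String {
    guard let info = nutritionTable[label] else {
        return "\(label): unknown ingredient"
    }
    let warning = info.allergens.isEmpty
        ? ""
        : " (allergen: \(info.allergens.joined(separator: ", ")))"
    return "\(label): \(info.calories) kcal per 100 g\(warning)"
}
```

In a full application, the label would come from the recognition model and the table would be backed by a proper database or web service.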
To achieve this, it was necessary to develop methods and a mathematical model specifically adapted to the ecosystem of Apple devices, ensuring maximum efficiency and the further growth of an ecosystem of similar applications. Both speed and optimization in the context of mobile devices are considered. The work examines the subject area and the relevance of the research. The technology stack and its suitability are analyzed, along with the state of the art and similar studies. An important point is the theory and justification of the methods used. The methods and approaches applied to improve ingredient identification are considered, and the technical and mathematical aspects of the selected solutions are developed. A basic mathematical model has been developed that underpins the chosen methods: Apple's implementation of the YOLO algorithm via the Darknet network, together with transfer learning, since understanding the models is crucial for effectively adapting and fine-tuning the algorithms to the specific research task of improving ingredient identification. The selected frameworks, programming language, and methods are described and demonstrated. The experiments section presents the results of model training and the collected metrics. The conclusions describe which model proved to be the best.
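The post-processing that a YOLO-style detector relies on can be illustrated with a short, self-contained Swift sketch of intersection-over-union (IoU) and greedy non-maximum suppression. The `Detection` type and the threshold value are illustrative assumptions, not the paper's implementation:

```swift
import Foundation

// Axis-aligned bounding box as produced by a YOLO-style detector.
struct Detection {
    let x: Double       // top-left x
    let y: Double       // top-left y
    let width: Double
    let height: Double
    let score: Double   // confidence
}

// Intersection-over-Union: the overlap measure used to decide
// whether two detections cover the same object.
func iou(_ a: Detection, _ b: Detection) -> Double {
    let ix = max(0, min(a.x + a.width,  b.x + b.width)  - max(a.x, b.x))
    let iy = max(0, min(a.y + a.height, b.y + b.height) - max(a.y, b.y))
    let inter = ix * iy
    let union = a.width * a.height + b.width * b.height - inter
    return union > 0 ? inter / union : 0
}

// Greedy non-maximum suppression: keep the highest-scoring box,
// drop every box overlapping it above `threshold`, and repeat.
func nonMaximumSuppression(_ boxes: [Detection], threshold: Double) -> [Detection] {
    var remaining = boxes.sorted { $0.score > $1.score }
    var kept: [Detection] = []
    while let best = remaining.first {
        kept.append(best)
        remaining = remaining.dropFirst().filter { iou(best, $0) < threshold }
    }
    return kept
}
```

On iOS, this suppression step is typically handled inside the Vision/Core ML pipeline; the sketch only makes the underlying computation explicit.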

References

Zhengxia Zou, Keyan Chen, Zhenwei Shi, Yuhong Guo, Jieping Ye (2023), "Object Detection in 20 Years: A Survey", pp. 1–15.

Annotating objects in augmented reality. Available at: https://heartbeat.comet.ml/core-ml-arkit-annotatingobjects-in-augmented-reality-493952a94a5f

T Diwan, G Anirudh, JV Tembhurne (2023), "Object detection using YOLO: Challenges, architectural successors, datasets and applications", Stages of object detection, pp. 10–11.

H Le, M Nguyen, WQ Yan, H Nguyen (2021), Object Detection, pp. 3–4.

Q Wang, Z Wang, B Li, D Wei (2021), "An Improved YOLOv3 Object Detection Network for Mobile Augmented Reality", Introduction, pp. 1–3.

NHH Cuong, TH Trinh, P Meesad (2022), "Improved YOLO object detection algorithm to detect ripe pineapple phase", Introduction, pp. 1–3.

AB Wahyutama, M Hwang (2022), "YOLO-based object detection for separate collection of recyclables and capacity monitoring of trash bins".

R Silitonga, J Arif, R Analia, ER Jamzuri, DS Pamungkas (2023), "Tiny-YOLO distance measurement and object detection coordination system for the BarelangFC robot".

Create ML Overview. Available at: https://developer.apple.com/documentation/createml#overview

Tech Talks, WWDC 2019. Available at: https://developer.apple.com/videos/play/tech-talks/10155/

Core ML: Integrate machine learning models into your app. Available at: https://developer.apple.com/documentation/coreml

Ray Wenderlich (2019), "Machine Learning by Tutorials".

Published

2024-05-01