RESEARCH OF IMAGE SEGMENTATION METHODS FOR APPLIED TASKS
DOI: https://doi.org/10.32782/2618-0340-2019-3-2

Keywords: image segmentation, computer vision, pattern recognition, future information analytical systems

Abstract
The need for a mathematical model arises as soon as a computer is used for image processing. When judging "by eye" whether a pixel belongs to a particular segment, we do not think about how this is done; for a computer, however, an algorithm must be written, and if the task requires adaptation, all possible conditions must be specified in advance. To instruct the computer, we must teach it to perform similar actions, that is, supply it with the corresponding data and algorithms. The paper investigates transformation methods that are applied primarily to reduce the information redundancy of an image under specific time constraints, leaving only the information needed to solve a particular task at a particular moment. In the binary image, the parts of interest (for example, the outlines of the displayed objects) must be preserved, while insignificant features (the background) are excluded. The aim of the study is to improve computer perception by developing an approach that adapts to the environment. The main idea is to integrate the intellectual properties of future systems into the perception pipeline; in particular, the computer should sense and understand the dynamics of the real world. The author therefore investigates models and means of synthesizing methods for perceiving visual-spectrum data arriving in real time. Continuing the research on the machine-machine interface, the work focuses on the possibility of dynamic adaptation for improving the perception of the visual spectrum of the environment by developing a methodology and/or methods for adapting computer vision to it. Various threshold methods of image segmentation for applied tasks are investigated and compared with one another. The methods were applied to tasks of segmenting an image into two and three classes, and the results (quality estimates) for different parameters are presented.
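As an illustration of the kind of threshold segmentation the paper compares, Otsu's criterion (one of the evaluated methods) can be sketched in a few lines. This is a minimal pure-Python sketch of the standard algorithm, not code from the paper; the synthetic pixel values are invented for the example.

```python
# Minimal sketch of Otsu's method: choose the threshold that maximizes
# the between-class variance of the grayscale histogram.
def otsu_threshold(pixels):
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_total = sum(i * h for i, h in enumerate(hist))
    w_b = 0      # weight (pixel count) of the background class
    sum_b = 0    # intensity sum of the background class
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_b += hist[t]
        if w_b == 0:
            continue
        w_f = total - w_b
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b                 # background mean
        m_f = (sum_total - sum_b) / w_f   # foreground mean
        var_between = w_b * w_f * (m_b - m_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Synthetic two-cluster "image": dark background, bright objects.
img = [10] * 50 + [200] * 50
t = otsu_threshold(img)
binary = [1 if p > t else 0 for p in img]  # two-class segmentation
```

Segmentation into three classes, as in the paper's second task, would require two thresholds; the same variance criterion generalizes to the multi-level case.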
An assessment of the adaptation concept for practical tasks is presented. The Triangle, Otsu, bottom-threshold, Yen, Rosenfeld, SIS, k-means, Sezan, and Ramesh methods are evaluated in the paper. The mean squared error was taken as the procedure for estimating segmentation quality, with the result of the bottom-threshold method serving as the baseline for the comparison. The results of the study are presented in one table and five figures: the reader can see them both visually, in the pictures, and numerically, in the table and in the figures of the program visualization.
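The quality-estimation procedure described above can be sketched as follows: binarize with a candidate method, binarize with the baseline bottom threshold, and compare the two masks by mean squared error. The pixel values, the fixed baseline threshold of 100, and the mean-based candidate threshold are all assumptions for illustration only.

```python
# Sketch of the paper's quality-estimation idea: the MSE between a
# candidate segmentation and the bottom-threshold baseline segmentation.
def binarize(pixels, t):
    return [1 if p > t else 0 for p in pixels]

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

# Invented sample values for illustration only.
img = [12, 15, 10, 200, 210, 190, 14, 205]

baseline = binarize(img, 100)     # bottom threshold (baseline value assumed)
mean_t = sum(img) / len(img)      # mean-based stand-in for a compared method
candidate = binarize(img, mean_t)

score = mse(candidate, baseline)  # 0.0 when the two segmentations agree
```

A lower score means closer agreement with the baseline; in the paper the same procedure is applied to the listed threshold methods across different parameter settings.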
References
Hrytsyk V.V. Quality Assessment of Transmission and Computer Processing of Image Data. Reports of the NAS of Ukraine. 2008. № 9: Informatics and Cybernetics. P. 43–48. (in Ukrainian)
Audio-Visual Answer to Modern Computing. Research*eu Results Supplement. 2010. № 26. P. 31−32.
Kaku M. Physics of the Future / translated from English by Anzhela Kamianets. Lviv: Litopys, 2013. 432 p. (in Ukrainian)
Software: Running Commentary for Smarter Surveillance? Research*eu Results Supplement. 2010. № 24. P. 29.
Hrytsyk V., Grondzal A., Bilenkyj A. Augmented Reality for People with Disabilities. Proceedings of the International Conference on Computer Sciences and Information Technologies, CSIT’2015 (Lviv, 2015, September 14–17). Lviv: Lviv Polytechnic National University, 2015. P. 188–191.
Korzynska A., Roszkowiak L., Lopez C., Bosch R., Witkowski L., Lejeune M. Validation of Various Adaptive Threshold Methods of Segmentation Applied to Follicular Lymphoma Digital Images Stained with 3,3’-Diaminobenzidine&Haematoxylin. Diagnostic Pathology. 2013. Vol. 8. Issue 48. https://doi.org/10.1186/1746-1596-8-48
Sauvola J., Pietikainen M. Adaptive Document Image Binarization. Pattern Recognition. 2000. № 33. P. 225–236. https://doi.org/10.1016/S0031-3203(99)00055-2
Hrytsyk V.V., Dunas A.Ya. Research of Pattern Recognition Methods for Computer Vision Systems of Future Robots. Visnyk of KhNTU. 2017. № 3, Vol. 1. P. 297–301. (in Ukrainian)