METHOD FOR INTELLIGENT MEASUREMENT OF PSYCHOLINGUISTIC INDICATORS FOR DYNAMIC SYSTEMS OF CONTINUOUS PSYCHOLOGICAL STATE MONITORING

Authors

DOI:

https://doi.org/10.35546/kntu2078-4481.2025.4.3.35

Keywords:

intelligent measurement, psychophysiological state, fusion model, multimodal systems, continuous monitoring systems, psycholinguistics automation

Abstract

This article addresses the relevant scientific and practical problem of psycholinguistic refinement of psychological functional states identified from video data. The refinement is achieved through intelligent measurement of psycholinguistic indicators in multimodal dynamic systems for continuous monitoring of employees' psychological states. A two-phase method for the intelligent measurement of psycholinguistic indicators is developed and formalized. The first phase, primary psycholinguistic analysis, forms an individual psychological profile. The second phase, deep psycholinguistic analysis, refines the primary psychological state features obtained from video monitoring using regression analysis, correlating them with the profile determined in the first phase. Measurement weights in the linguistic (text) and non-verbal (video) modalities are harmonized via an adaptive fusion model, enabling personalization of psycholinguistic indicators. A structural-logical scheme of the method is provided to visualize the transfer of parameters across its functional stages. Preliminary validation of the method was conducted through a simulation experiment. The results indicate that the proposed method improves the integral accuracy of psycholinguistic indicator measurement by 4–7% compared with the most effective contemporary approaches. The experimental findings confirm the effectiveness of adaptive weight harmonization across the text and video modalities with consideration of psycholinguistic personalization. The proposed method requires further validation on empirical data. Future research is suggested in the direction of integrating additional modalities and enhancing the adaptive updating of weight coefficients. The practical significance lies in the method's applicability to multimodal systems for continuous intelligent monitoring of employees' psychological states in both governmental and commercial organizations.
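The adaptive harmonization of modality weights described above can be illustrated with a minimal sketch. The article does not specify the exact fusion scheme, so everything below is an assumption for illustration: confidence-driven softmax weighting of the text and video scores, and a simple linear shrinkage of a primary video-derived feature toward the individual profile as a stand-in for the regression-based refinement of the second phase.

```python
import math

def adaptive_fusion(text_score, video_score, text_conf, video_conf, temperature=1.0):
    """Harmonize per-modality scores via confidence-driven softmax weights.

    Scores and confidences are assumed to lie in [0, 1]; softmax over
    confidences is an illustrative choice, not the article's exact model.
    """
    z_t = math.exp(text_conf / temperature)
    z_v = math.exp(video_conf / temperature)
    w_t = z_t / (z_t + z_v)
    w_v = z_v / (z_t + z_v)
    # Weighted combination of the two modality scores.
    return w_t * text_score + w_v * video_score, (w_t, w_v)

def refine_with_profile(primary_feature, profile_mean, alpha=0.3):
    """Shrink a primary video-derived feature toward the individual
    psycholinguistic profile (a linear, regression-style correction;
    alpha controls how strongly the profile personalizes the estimate)."""
    return (1 - alpha) * primary_feature + alpha * profile_mean
```

For example, when the text modality is more reliable (confidence 0.9 vs 0.3), the fused score is pulled toward the text score, which mirrors the intended effect of adaptive weight updating across modalities.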

References

Future Market Insights. Insider threat protection market analysis size and share forecast outlook 2025–2035 / S. Saha. Future Market Insights, 2025. 520 p. URL: https://www.futuremarketinsights.com/reports/insider-threat-protection-market (accessed: 01.11.2025).

Saddica M., Ruohonen Ju. SoK: the psychology of insider threats. EAI Endorsed Transactions on Security and Safety. 2025. Vol. 9, № 1. URL: https://doi.org/10.4108/eetss.v9i1.9298

Moving toward the digitalization of neuropsychological tests: an exploratory study on usability and operator perception / M.G. Maggio et al. Digital Health. 2025. Vol. 11. URL: https://doi.org/10.1177/20552076251334449 (accessed: 03.11.2025).

Lee P., Son M., Jia Z. AI-powered automatic item generation for psychological tests: a conceptual framework for an LLM-based multiagent AIG system. Journal of Business and Psychology. 2025. URL: https://doi.org/10.1007/s10869-025-10067-y (accessed: 03.11.2025).

Psychometric evaluation of large language model embeddings for personality trait prediction / J. Maharjan et al. Journal of Medical Internet Research. 2025. Vol. 27. URL: https://doi.org/10.2196/75347 (accessed: 03.11.2025).

Towards dynamic theory of mind: evaluating LLM adaptation to temporal evolution of human states / Y. Xiao et al. Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Vienna, Austria. Stroudsburg, PA, USA, 2025. P. 24036–24057. DOI: https://doi.org/10.18653/v1/2025.acl-long.1171

Sert B., Ulker S.V. A review of LIWC and machine learning approaches on mental health diagnosis. Social Review of Technology and Change. 2023. Vol. 1, № 2. P. 71–92.

When LLMs meets acoustic landmarks: an efficient approach to integrate speech into large language models for depression detection / X. Zhang et al. Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, Miami, Florida, USA. Stroudsburg, PA, USA, 2024. P. 146–158. DOI: https://doi.org/10.18653/v1/2024.emnlp-main.8

Shapoval V.P., Tarasenko Ya.V. Method of intelligent video monitoring of primary psychological state features. Scientific Notes of V.I. Vernadsky Taurida National University. Series: Technical Sciences. 2025. Vol. 36 (75), № 2. P. 222–227. DOI: https://doi.org/10.32782/2663-5941/2025.2.2/30

Machine learning for multimodal mental health detection: a systematic review of passive sensing approaches / L.S. Khoo et al. Sensors. 2024. Vol. 24, № 2. P. 348. DOI: https://doi.org/10.3390/s24020348

Uncertainty-aware multi-modal random network prediction / H. Wang et al. Lecture Notes in Computer Science. Cham, 2022. P. 200–217. DOI: https://doi.org/10.1007/978-3-031-19836-6_12

Knowledge-guided dynamic modality attention fusion framework for multimodal sentiment analysis / X. Feng et al. Findings of the Association for Computational Linguistics: EMNLP 2024, Miami, Florida, USA. Stroudsburg, PA, USA, 2024. P. 14755–14766. DOI: https://doi.org/10.18653/v1/2024.findings-emnlp.865

Harnessing multimodal approaches for depression detection using large language models and facial expressions / M. Sadeghi et al. npj Mental Health Research. 2024. Vol. 3, № 1. DOI: https://doi.org/10.1038/s44184-024-00112-8

Published

2025-12-31