INTERPRETABLE CLASSIFICATION OF HEMODYNAMIC STATE BASED ON THE SELECTION OF KEY NON-INVASIVE PARAMETERS AND FEATURE IMPORTANCE ANALYSIS
DOI: https://doi.org/10.35546/kntu2078-4481.2025.3.2.47

Keywords: hemodynamic state, machine learning, interpretable models, non-invasive parameters, SHAP analysis, explainable artificial intelligence, classification

Abstract
The article addresses the problem of interpretable classification of the hemodynamic state based on the selection of key non-invasive parameters and feature importance analysis. Modern medicine increasingly relies on automated systems for monitoring vital functions of the human body built on machine learning and artificial intelligence methods. However, the adoption of such systems faces a major obstacle: trust on the part of physicians and patients, since most machine learning models act as “black boxes” that provide no understandable reasoning behind their predictions. This significantly limits the clinical value of even highly accurate algorithms, because medical specialists require not only the classification result but also a justified explanation of why a particular condition was identified.

The aim of this study is to develop an interpretable classification model for assessing the hemodynamic state, applying feature selection techniques to identify the most informative non-invasive physiological parameters and using model explanation methods. The investigated parameters include heart rate, blood pressure, heart rate variability, blood oxygen saturation (SpO₂), and respiratory rate. Experiments were conducted with Random Forest, XGBoost, Logistic Regression, and interpretable decision tree models to compare classification performance and interpretability of the results. SHAP analysis was employed to quantify the contribution of each feature to the model’s decision-making process.

The results demonstrate that blood oxygen saturation (SpO₂) and heart rate variability (HRV) are the most significant features for evaluating the hemodynamic state; their combination provides the highest diagnostic informativeness in distinguishing between compensated and decompensated conditions. Heart rate and blood pressure act as additional predictors, while respiratory rate plays a supplementary role. The use of interpretable algorithms made it possible to formulate classification rules expressed as clinically understandable dependencies, which substantially improves the trustworthiness of the system and lays the foundation for its practical implementation.

Thus, the proposed approach combines high diagnostic accuracy with transparency and explainability, making it a promising tool for integration into clinical decision support systems, mobile sensor devices, and telemedicine platforms. Future research should focus on testing the proposed model on extended clinical datasets and integrating it into intelligent information systems for personalized medicine.
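The sketch below illustrates the kind of workflow the abstract describes, not the authors' published code: comparing Random Forest, XGBoost, Logistic Regression, and a decision tree on the five non-invasive parameters, then ranking feature contributions by mean absolute SHAP value. The feature names, units, the binary "decompensated" label, and the synthetic data generator are illustrative assumptions introduced here for demonstration only.

```python
# Minimal sketch of an interpretable hemodynamic-state classification pipeline.
# Data, feature names, and the label definition are synthetic placeholders.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from xgboost import XGBClassifier
import shap

rng = np.random.default_rng(42)
n = 500

# Synthetic stand-in for a clinical dataset: five non-invasive parameters.
X = pd.DataFrame({
    "heart_rate": rng.normal(80, 15, n),        # beats per minute
    "systolic_bp": rng.normal(120, 20, n),      # mmHg
    "hrv_sdnn": rng.normal(50, 20, n),          # ms
    "spo2": rng.normal(96, 3, n),               # %
    "respiratory_rate": rng.normal(16, 4, n),   # breaths per minute
})
# Illustrative label: decompensation made more likely by low SpO2 and low HRV,
# mirroring the qualitative finding reported in the abstract.
logit = (-0.5 * (X["spo2"] - 96)
         - 0.03 * (X["hrv_sdnn"] - 50)
         + 0.02 * (X["heart_rate"] - 80))
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# Compare the candidate models by cross-validated ROC-AUC.
models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(max_depth=3, random_state=0),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "xgboost": XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss"),
}
for name, model in models.items():
    scores = cross_val_score(model, X_train, y_train, cv=5, scoring="roc_auc")
    print(f"{name:20s} mean CV ROC-AUC = {scores.mean():.3f}")

# SHAP analysis on the gradient-boosted model: the mean absolute SHAP value per
# feature gives a global ranking of each parameter's contribution.
xgb = models["xgboost"].fit(X_train, y_train)
explainer = shap.TreeExplainer(xgb)
shap_values = explainer.shap_values(X_test)   # shape: (n_samples, n_features)
importance = pd.Series(np.abs(shap_values).mean(axis=0),
                       index=X.columns).sort_values(ascending=False)
print(importance)
```

Mean absolute SHAP value is used here as a simple global importance measure; on data like the synthetic example above it ranks SpO₂ and HRV highest, which is the pattern the study reports, while a shallow decision tree fitted to the same features can additionally be read off as explicit, clinically interpretable threshold rules.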