COMPUTATION EFFICIENCY ENHANCEMENT IN DISTRIBUTED COMPUTER SYSTEMS WITH THE MPI LIBRARY ASSISTANCE
DOI: https://doi.org/10.32782/mathematical-modelling/2023-6-1-7

Keywords: “marching cubes”, distributed memory, MPI, R‑function

Abstract
The development of computing technology, characterized by growing data volumes and massive, diverse parallelism, poses new challenges to developers. Traditionally, scalable applications have been developed around the compute core, with little attention to the performance of simulation output. For applications whose output is modest relative to available storage, scientists can archive simulation results for later interpretation. At extreme scale, however, the output often contains too much data to fit in main memory or is limited by I/O bandwidth. Hence, there is a current need to develop scalable applications that integrate modeling, simulation, analysis, and visualization. The objective of the study is to develop a parallel, distributed method for modeling geometric objects using a functional approach together with the MPI library. This article focuses on describing the operation of the “marching cubes” algorithm in a distributed system, analyzing its properties, and applying it in practice to construct objects using parallel programming with the MPI and OpenMP libraries. The development of an effective parallel software component is analyzed, which, in addition to direct rendering, enables efficient storage and construction of geometric models and can use several devices simultaneously. Examples of building objects in the Qt Creator environment are also presented. The results will be beneficial for theoretical and practical research on the visual representation of models with distributed memory. Models built with the improved “marching cubes” algorithm make it possible to solve some modeling problems in less time and to make appropriate decisions regarding object construction. Therefore, building 3D objects based on a functional approach can be made more efficient by using the distributed approach of the MPI library.
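The abstract describes combining a functional (R‑function) definition of a geometric object with a distributed “marching cubes” pass. As a minimal illustration of how that workload decomposes, the sketch below evaluates a hypothetical R‑conjunction of two spheres on a regular grid, splits the grid into z‑slabs (one per rank, as an MPI program would), and counts the sign‑change cells, i.e. the cells that would emit triangles in marching cubes. The field, grid size, and slab count are illustrative assumptions, not the article’s implementation; the loop over ranks is a serial stand‑in for what each MPI process would compute before combining totals with `MPI_Reduce`.

```python
# Serial sketch of a slab decomposition for a marching-cubes workload
# over an implicit R-function field. All names and parameters here are
# illustrative assumptions, not the article's actual code.
import math

def r_conjunction(f1, f2):
    # R-function intersection (AND): f1 + f2 - sqrt(f1^2 + f2^2)
    return f1 + f2 - math.sqrt(f1 * f1 + f2 * f2)

def field(x, y, z):
    # Two overlapping unit-ish spheres, positive inside each
    s1 = 1.0 - ((x - 0.3) ** 2 + y ** 2 + z ** 2)
    s2 = 1.0 - ((x + 0.3) ** 2 + y ** 2 + z ** 2)
    return r_conjunction(s1, s2)

N = 16  # grid cells per axis over the cube [-1, 1]^3

def sample(i, j, k):
    h = 2.0 / N
    return field(-1 + i * h, -1 + j * h, -1 + k * h)

def active_cells(k_lo, k_hi):
    """Count cells (in the z-slab [k_lo, k_hi)) whose 8 corner samples
    change sign: exactly the cells that produce triangles in marching
    cubes."""
    count = 0
    for k in range(k_lo, k_hi):
        for j in range(N):
            for i in range(N):
                vals = [sample(i + di, j + dj, k + dk)
                        for di in (0, 1) for dj in (0, 1) for dk in (0, 1)]
                if min(vals) < 0.0 <= max(vals):
                    count += 1
    return count

# Slab decomposition along z across `size` hypothetical MPI ranks.
# In the real program each rank would run active_cells (and triangle
# extraction) on its own slab, then the totals would be combined with
# MPI_Reduce; here the ranks are simulated by a plain loop.
size = 4
slab = N // size
partial = [active_cells(r * slab, (r + 1) * slab) for r in range(size)]
print(sum(partial) == active_cells(0, N))  # decomposition preserves the total
```

Because every cell belongs to exactly one z‑slab, the per‑rank counts sum to the serial count, which is the property that makes this decomposition safe for the distributed algorithm; only the vertices shared on slab boundaries need coordination when actual geometry is stitched together.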