INFORMATION ENTROPY OF IEEE 754 COMPUTING STANDARDS: THEORETICAL ANALYSIS AND PRACTICAL IMPLICATIONS

Authors

DOI:

https://doi.org/10.35546/kntu2078-4481.2026.1.45

Keywords:

information entropy; IEEE 754; floating-point numbers; specific entropy efficiency; Shannon entropy; NaN redundancy; denormalized numbers; Benford’s law; numerical data compression; neural network formats

Abstract

This paper proposes an original approach to the study of IEEE 754 floating-point number formats through the lens of Shannon’s information theory. Unlike traditional research that examines IEEE 754 formats exclusively from the perspective of computational accuracy or hardware efficiency, this work systematically investigates the information-entropy characteristics of each functional field – the sign bit, the exponent field, and the mantissa field. An original metric called “specific entropy efficiency” (SEE) is proposed to quantitatively compare the information density of different floating-point formats. A detailed entropy analysis of four standard formats – binary16, binary32, binary64, and binary128 – is conducted, accounting for the specifics of denormalized numbers, the dual encoding of zero, and the semantic redundancy of NaN values. For the first time, four sources of entropy redundancy in IEEE 754 are systematically classified, and an analytical dependence between exponent field width and the level of NaN redundancy is established. A binary analogue of Benford’s law in IEEE 754 formats is identified and quantified: the non-uniform distribution of the upper mantissa bits creates a compression potential of approximately 2 % per mantissa bit. The results provide a theoretical foundation for designing new numerical formats with optimal information characteristics, particularly for neural network accelerators and large-scale numerical data processing systems.
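The two quantitative claims in the abstract can be illustrated numerically. The sketch below is a reconstruction from the abstract, not the authors' published derivation: it assumes that "NaN redundancy" refers to the fraction of all bit patterns consumed by NaN encodings (sign × all-ones exponent × nonzero mantissa, which works out to roughly 2⁻ᵉ for an e-bit exponent field), and it models the Benford-like effect with a log-uniformly distributed significand in [1, 2), under which the top mantissa bit is 1 with probability log₂(4/3) and therefore carries slightly less than one bit of Shannon entropy.

```python
import math

# (exponent bits e, mantissa/fraction bits p) per IEEE 754-2019
FORMATS = {
    "binary16":  (5, 10),
    "binary32":  (8, 23),
    "binary64":  (11, 52),
    "binary128": (15, 112),
}

def nan_fraction(e: int, p: int) -> float:
    """Fraction of all bit patterns that encode NaN.

    NaN patterns: 2 signs x 1 all-ones exponent x (2^p - 1) nonzero
    mantissas, out of 2^(1+e+p) total patterns. This is ~2^-e, i.e.
    it depends (almost) only on the exponent field width.
    """
    return 2 * (2**p - 1) / 2**(1 + e + p)

for name, (e, p) in FORMATS.items():
    print(f"{name:10s}: NaN fraction = {nan_fraction(e, p):.8f}"
          f"  (~2^-{e} = {2**-e:.8f})")

# Binary analogue of Benford's law: for a log-uniform significand
# 1.f in [1, 2), the top fraction bit is 1 iff 1.f >= 1.5, with
# probability log2(2/1.5) = log2(4/3). Its entropy is < 1 bit,
# so the bit is slightly compressible.
p1 = math.log2(4 / 3)
h = -p1 * math.log2(p1) - (1 - p1) * math.log2(1 - p1)
print(f"P(top mantissa bit = 1) = {p1:.4f}, entropy = {h:.4f} bits")
```

Under these assumptions the top mantissa bit has entropy of about 0.98 bits, i.e. roughly a 2 % saving, consistent with the compression potential quoted in the abstract.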

References

Shannon C. E. A Mathematical Theory of Communication. Bell System Technical Journal. 1948. Vol. 27, No. 3. Pp. 379–423.

Goldberg D. What every computer scientist should know about floating-point arithmetic. ACM Computing Surveys. 1991. Vol. 23, No. 1. Pp. 5–48.

Lehr J., Hintz J., Bertone A. Statistical distribution of floating-point numbers in scientific computing. Journal of Computational Science. 2019. Vol. 35. Pp. 28–37.

Lindstrom P. Fixed-Rate Compressed Floating-Point Arrays. IEEE Transactions on Visualization and Computer Graphics. 2014. Vol. 20, No. 12. Pp. 2674–2683.

Benford F. The law of anomalous numbers. Proceedings of the American Philosophical Society. 1938. Vol. 78, No. 4. Pp. 551–572.

IEEE Standard for Floating-Point Arithmetic. IEEE Std 754-2019. New York : IEEE, 2019. 84 p.

Muller J.-M. et al. Handbook of Floating-Point Arithmetic. 2nd ed. Birkhäuser, 2018. 627 p.

Higham N. J. Accuracy and Stability of Numerical Algorithms. 2nd ed. Philadelphia : SIAM, 2002. 680 p.

Kalamkar D. et al. A Study of BFLOAT16 for Deep Learning Training. arXiv:1905.12322. 2019. 8 p.

Published

2026-04-30