HIGHLY PRODUCTIVE NEURAL SHADING OF THREE-DIMENSIONAL FIGURES USING PIX2PIX MODEL

Authors

  • Ye.K. ZAVALNIUK
  • O.N. ROMANIUK
  • T.I. KOROBEINIKOVA

DOI:

https://doi.org/10.32782/mathematical-modelling/2023-6-1-6

Keywords:

rendering, neural rendering, Pix2Pix, generative neural networks, convolutional neural networks

Abstract

The article presents a two-stage system for shading three-dimensional figures based on generative neural networks. The advantages and disadvantages of standard approaches to rendering three-dimensional images are analyzed, and the features of neural rendering are reviewed. Research directions in neural rendering are examined, including the generation of shaded images from figures' geometric data and from two-dimensional sketches, as well as the recovery of geometric data from images. The architecture and applications of generative adversarial networks are described. The need for new neural rendering methods that increase the performance of shading the surfaces of three-dimensional figures is justified. The proposed neural shading system, which contains Pix2Pix models for image formation and for improving image quality, is described. The construction of a training dataset based on the ShapeNet figure collection is described. The proposed volumetric representation of figure vertex information, used as the input of the neural system, is examined. The generator and discriminator architectures of the Pix2Pix model for figure shading are described, along with the training duration and the loss functions used. Plots of the changes in the discriminator's and generator's loss functions during the training of the Pix2Pix model for figure shading are presented. The quality of image generation is evaluated on a test set of figures using the SSIM metric. The generator and discriminator architectures of the Pix2Pix model for improving the quality of the generated images and upscaling them are described, and plots of the changes in its generator's and discriminator's error metrics during training are presented. Examples of shaded figure images generated by the two-stage neural system are provided, and the quality of the images produced by the second stage is evaluated using the SSIM metric. The shading speed of the proposed system is compared with that of the Blender Eevee renderer. The developed neural system generates realistic images and improves the performance of shading figure surfaces.
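The abstract does not reproduce the exact layer configurations of the two Pix2Pix stages, so the following is only a minimal, hedged sketch of what a Pix2Pix-style shading stage of this kind could look like in PyTorch. The library choice, channel counts, loss weight, and toy input sizes are illustrative assumptions, not the architectures reported in the paper.

```python
# A minimal, illustrative sketch of a Pix2Pix-style shading stage in PyTorch.
# Layer sizes, channel counts, and the loss weight are assumptions for
# demonstration only; the paper describes its own generator/discriminator setup.
import torch
import torch.nn as nn

class UNetGenerator(nn.Module):
    """Encoder-decoder generator mapping a geometry/volumetric input map
    to a shaded image (channel counts chosen for illustration)."""
    def __init__(self, in_ch=1, out_ch=3, feat=64):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, feat, 4, 2, 1), nn.LeakyReLU(0.2))
        self.enc2 = nn.Sequential(nn.Conv2d(feat, feat * 2, 4, 2, 1),
                                  nn.BatchNorm2d(feat * 2), nn.LeakyReLU(0.2))
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(feat * 2, feat, 4, 2, 1),
                                  nn.BatchNorm2d(feat), nn.ReLU())
        self.dec2 = nn.Sequential(nn.ConvTranspose2d(feat * 2, out_ch, 4, 2, 1), nn.Tanh())

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        d1 = self.dec1(e2)
        return self.dec2(torch.cat([d1, e1], dim=1))  # U-Net skip connection

class PatchDiscriminator(nn.Module):
    """PatchGAN discriminator judging (condition, image) pairs patch by patch."""
    def __init__(self, in_ch=1 + 3, feat=64):
        super().__init__()
        self.model = nn.Sequential(
            nn.Conv2d(in_ch, feat, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(feat, feat * 2, 4, 2, 1), nn.BatchNorm2d(feat * 2), nn.LeakyReLU(0.2),
            nn.Conv2d(feat * 2, 1, 4, 1, 1))  # patch-level real/fake logits

    def forward(self, cond, img):
        return self.model(torch.cat([cond, img], dim=1))

# Standard Pix2Pix objective: adversarial loss plus a weighted L1 reconstruction term.
adv_loss, l1_loss, lambda_l1 = nn.BCEWithLogitsLoss(), nn.L1Loss(), 100.0

G, D = UNetGenerator(), PatchDiscriminator()
cond = torch.randn(1, 1, 64, 64)     # stand-in for the volumetric vertex-data input
target = torch.randn(1, 3, 64, 64)   # stand-in for a reference shaded image
fake = G(cond)
pred = D(cond, fake)
g_loss = adv_loss(pred, torch.ones_like(pred)) + lambda_l1 * l1_loss(fake, target)
```

For the SSIM evaluation mentioned in the abstract, an off-the-shelf implementation such as skimage.metrics.structural_similarity from scikit-image could be applied to pairs of generated and reference renders.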

References

Romaniuk O.N. Computer Graphics: a textbook. Vinnytsia: VDTU, 1999. 130 p. [in Ukrainian]

Romaniuk O.N., Romaniuk S.O., Romaniuk O.V. Basic procedures of the graphics pipeline. Information Technologies in Culture, Art, Education, Science, Economics and Business: proceedings of the VII International Scientific and Practical Conference. Kyiv, 2022. P. 44–47. [in Ukrainian]

Saleem U. Ray Tracing vs Rasterized Rendering – Explained. Appuals. URL: https://appuals.com/ray-tracing-vs-rasterized-rendering-explained/ (accessed 25.08.2023).

The Overview of Neural Rendering / E.K. Zavalniuk et al. Modern Engineering and Innovative Technologies. 2023. Issue № 27. Part 1. P. 129–134.

RenderNet: A deep convolutional network for differentiable rendering from 3D shapes / T. Nguyen-Phuoc et al. NeurIPS 2018. Montreal, 2018. P. 7891–7901.

Generative adversarial networks / I. Goodfellow et al. Communications of the ACM. 2020. Issue 11. P. 139–144.

Harris-Dewey J., Klein R. Generative Adversarial Networks for Non-Raytraced Global Illumination on Older GPU Hardware. International Journal of Electronics and Electrical Engineering. 2022. № 1. P. 1–7.

ShapeNet: official web site. URL: https://shapenet.org (accessed 25.08.2023).

Brownlee J. How to Develop a Pix2Pix GAN for Image-To-Image Translation. MachineLearningMastery. URL: https://machinelearningmastery.com/how-to-develop-a-pix2pix-gan-for-image-to-image-translation/ (accessed 25.08.2023).

Image-to-Image Translation with Conditional Adversarial Networks / P. Isola et al. 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, 2017. P. 5967–5976.

An approach to correlation of QoE metrics applied to VoD service on IPTV using a Diffserv Network / D. Botia et al. IEEE LATINCOM 2012. Cuenca, 2012. P. 140–144.

Zavalniuk Ye.K., Romaniuk O.N. An overview of image quality comparison metrics. Youth in Science: Research, Problems, Prospects (MN-2023): proceedings of the All-Ukrainian Scientific and Practical Internet Conference. Vinnytsia, 2023. P. 571–574. [in Ukrainian]

The Analysis of Subjective Metrics and Expert Methods for Image Quality Assessment / O.N. Romanyuk et al. Intellektuelles Kapital – die Grundlage für innovative Entwicklung: Technik, Informatik, Landwirtschaft. Monografische Reihe «Europäische Wissenschaft». Karlsruhe: ScientificWorld-NetAkhatAV, 2023. 174 p.

Streijl R.C., Winkler S., Hands D. Mean opinion score (MOS) revisited: methods and applications, limitations and alternatives. Multimedia Systems. 2016. № 2. P. 213–227.

Published

2023-11-17