Taming High-Resolution Auxiliary G-Buffers for Deep Supersampling of Rendered Content
Authors: Wang, P., Yuan, C., Guo, J., Yang, X., Li, H., Stephenson, I., Chang, J. and Cao, Y.
Journal: IEEE Transactions on Visualization and Computer Graphics
Volume: 31
Issue: 12
Pages: 10609-10623
eISSN: 1941-0506
ISSN: 1077-2626
DOI: 10.1109/TVCG.2025.3609456
Abstract: High-resolution images come with rich color information and texture details. Due to the rapid upgrading of display devices and rendering technologies, high-resolution real-time rendering faces the challenge of computational overhead. To address this, the current mainstream solution is to render at a lower resolution and then upsample to the target resolution with supersampling techniques. However, while many prior supersampling approaches have attempted to exploit rich rendered data such as color, depth, and motion vectors at low resolution, there is little discussion of how to harness the high-frequency information that is readily available in the high-resolution (HR) G-buffers of modern renderers. In this article, we investigate how to fully leverage information from HR G-buffers to maximize the visual quality of supersampling results. We propose a neural network for real-time supersampling of rendered content built on several core designs: a gated G-buffer encoder, a G-buffer-attended encoder, and a reflection-aware loss. These designs are tailored to the effective use of HR G-buffers, enabling faithful recovery of a variety of high-frequency scene details from low-resolution, highly aliased inputs. Furthermore, a simple occlusion-aware blender is proposed to efficiently rectify dis-occluded features in the warped previous frame, allowing us to better exploit history information and improve temporal stability. Experiments show that our method, with its strong ability to harness HR G-buffer information, significantly improves the visual fidelity of high-resolution reconstructions over previous state-of-the-art methods, even for challenging 4×4 upsampling, while remaining compute-efficient.
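The abstract describes the occlusion-aware blender only at a high level: it rectifies dis-occluded features in the warped previous frame before history is reused. Below is a minimal, hypothetical PyTorch sketch of one plausible form of such a step, assuming depth-reprojection disagreement is used to detect dis-occluded pixels; the function names, depth tolerance, and blend weight are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F

def backward_warp(prev: torch.Tensor, motion: torch.Tensor) -> torch.Tensor:
    # prev: (B, C, H, W); motion: (B, 2, H, W), per-pixel offsets (dx, dy)
    # in pixels, pointing from the current frame into the previous frame.
    b, _, h, w = prev.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=prev.device, dtype=prev.dtype),
        torch.arange(w, device=prev.device, dtype=prev.dtype),
        indexing="ij",
    )
    x = xs.unsqueeze(0) + motion[:, 0]   # sample positions in previous frame
    y = ys.unsqueeze(0) + motion[:, 1]
    # Normalize to [-1, 1] as required by grid_sample.
    grid = torch.stack([2 * x / (w - 1) - 1, 2 * y / (h - 1) - 1], dim=-1)
    return F.grid_sample(prev, grid, mode="bilinear",
                         padding_mode="border", align_corners=True)

def occlusion_aware_blend(curr, prev, motion, curr_depth, prev_depth,
                          alpha: float = 0.9, depth_tol: float = 0.01):
    # Warp history color and history depth into the current frame.
    warped = backward_warp(prev, motion)
    warped_depth = backward_warp(prev_depth, motion)
    # Pixels whose reprojected depth disagrees with the current depth are
    # treated as dis-occluded: history there is invalid. (Hand-set mask;
    # the paper's blender is learned.)
    valid = (torch.abs(warped_depth - curr_depth) < depth_tol).float()
    # Blend only where history is valid; fall back to the current frame
    # in dis-occluded regions.
    w_hist = alpha * valid
    return w_hist * warped + (1 - w_hist) * curr

In this sketch, history contributes at most alpha of the output where reprojected depth agrees with the current depth, and the current frame is used as-is in dis-occluded regions; the learned blender the paper proposes would replace the hand-set mask and weight with network predictions.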
Source: Scopus
Taming High-Resolution Auxiliary G-Buffers for Deep Supersampling of Rendered Content
Authors: Wang, P., Yuan, C., Guo, J., Yang, X., Li, H., Stephenson, I., Chang, J. and Cao, Y.
Journal: IEEE Transactions on Visualization and Computer Graphics
Volume: 31
Issue: 12
Pages: 10609-10623
eISSN: 1941-0506
DOI: 10.1109/TVCG.2025.3609456
Source: PubMed
Taming High-Resolution Auxiliary G-Buffers for Deep Supersampling of Rendered Content
Authors: Wang, P., Yuan, C., Guo, J., Yang, X., Li, H., Stephenson, I., Chang, J. and Cao, Y.
Journal: IEEE Transactions on Visualization and Computer Graphics
Volume: 31
Issue: 12
Pages: 10609-10623
eISSN: 1941-0506
ISSN: 1077-2626
DOI: 10.1109/TVCG.2025.3609456
Source: Web of Science (Lite)
Taming High-Resolution Auxiliary G-Buffers for Deep Supersampling of Rendered Content
Authors: Wang, P., Yuan, C., Guo, J., Yang, X., Li, H., Stephenson, I., Chang, J. and Cao, Y.
Journal: IEEE Transactions on Visualization and Computer Graphics
Volume: 31
Issue: 12
Pages: 10609-10623
eISSN: 1941-0506
ISSN: 1077-2626
DOI: 10.1109/TVCG.2025.3609456
Source: Europe PubMed Central