Comparisons
Resolution Enhancement
To assess resolution enhancement performance, we compared our approach against SOTA non-deep-learning real-time rendering super-resolution methods. To enable fair comparisons, only videos from a single side (one eye of the stereo pair) were used. The table presents quantitative comparisons of the HR image quality reconstructed by each method
at an upscaling factor of ×2. The results demonstrate that our method surpasses previous methods across the evaluated scenes in both PSNR and SSIM. Notably, compared with our baseline method Mob-FGSR, our approach shows clear improvements, with average gains of 0.48 dB in PSNR and 0.004 in SSIM.
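For reference, the two metrics can be sketched as follows. The function names `psnr` and `ssim_global` are illustrative, and the SSIM shown here is a simplified single-window variant of the locally windowed metric used in practice:

```python
import numpy as np

def psnr(ref, img, peak=1.0):
    """Peak signal-to-noise ratio (dB) between images with values in [0, peak]."""
    mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def ssim_global(ref, img, peak=1.0):
    """Simplified SSIM computed over the whole image; the standard metric
    instead averages this statistic over local (e.g. 11x11 Gaussian) windows."""
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mu_x, mu_y = ref.mean(), img.mean()
    var_x, var_y = ref.var(), img.var()
    cov = ((ref - mu_x) * (img - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```

Higher is better for both: PSNR is unbounded above (infinite for identical images), while SSIM peaks at 1.0.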
Table: Comparison of resolution enhancement methods.
The figure provides a qualitative visual comparison: our method retains sharper textures without suffering from ghosting, whereas methods such as FSR 2 and TAAU occasionally produce ghosting artifacts due to insufficient cleaning of disoccluded pixels.
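The disocclusion cleaning mentioned above can be illustrated with a minimal depth-mismatch test, a common way to invalidate stale history samples before temporal reuse. The function `disocclusion_mask` and its threshold are hypothetical sketches, not taken from any of the compared methods:

```python
import numpy as np

def disocclusion_mask(depth_curr, depth_prev, motion, threshold=0.01):
    """Flag pixels whose motion-reprojected history depth disagrees with the
    current frame's depth, indicating a newly revealed (disoccluded) surface.
    depth_*: (H, W) linear depth; motion: (H, W, 2) pixel offsets from the
    current frame to the previous frame."""
    H, W = depth_curr.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # Reproject each current pixel into the previous frame (nearest sample).
    px = np.clip((xs + motion[..., 0]).round().astype(int), 0, W - 1)
    py = np.clip((ys + motion[..., 1]).round().astype(int), 0, H - 1)
    reproj_depth = depth_prev[py, px]
    # A large relative depth mismatch means the history pixel saw a
    # different surface, so its color must not be blended in.
    return np.abs(reproj_depth - depth_curr) / np.maximum(depth_curr, 1e-6) > threshold
```

Pixels flagged by such a mask are the ones that cause ghosting when a temporal upscaler blends their invalid history anyway.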
Unshaded View Synthesis
To evaluate image synthesis performance in isolation, we disabled the resolution enhancement component of our method and compared against SOTA image synthesis methods for stereo rendering. All methods take a single-viewpoint 1080P image as input and output an unshaded novel viewpoint. Quantitatively, our method even outperforms the re-shading approach HRR in some cases. This improvement is attributed to our pixel selection strategy, which reconstructs non-Lambertian surface reflections more accurately, closely matching the ground truth.
Table: Comparison of unshaded view synthesis methods.
To provide a clearer comparison of disocclusion filling across methods, we present visual results in the figure, with key disoccluded areas magnified. The HRR method, which re-shades disoccluded regions, theoretically delivers the best filling results. Our approach also achieves satisfactory results by filling from history information, outperforming the EHIW method, which relies solely on a depth-based low-pass filter.
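History-based hole filling of the kind described above can be sketched as fetching color from the previous frame's buffer at motion-reprojected positions. The helper `fill_from_history` and its inputs are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

def fill_from_history(color, hole_mask, history, motion):
    """Fill disoccluded (hole) pixels of the current frame with color
    fetched from the previous frame's history buffer at motion-reprojected
    positions; all other pixels are left untouched.
    color/history: (H, W, 3); hole_mask: (H, W) bool; motion: (H, W, 2)."""
    H, W, _ = color.shape
    ys, xs = np.mgrid[0:H, 0:W]
    px = np.clip((xs + motion[..., 0]).round().astype(int), 0, W - 1)
    py = np.clip((ys + motion[..., 1]).round().astype(int), 0, H - 1)
    filled = color.copy()
    filled[hole_mask] = history[py[hole_mask], px[hole_mask]]
    return filled
```

A depth-based low-pass filter would instead blur valid neighbors into the hole, which is why it tends to lose high-frequency detail in the magnified regions.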