Texture Defragmentation for Photo-Reconstructed 3D Models
We propose a method to improve an existing parametrization (UV-map layout) of a textured 3D model, explicitly targeted at alleviating typical defects afflicting models generated by automatic photo-reconstruction tools from real-world objects. This class of 3D data is becoming increasingly important thanks to the growing popularity of reliable, ready-to-use photogrammetry software packages. The resulting textured models are richly detailed, but their underlying parametrization typically falls short of many practical requirements, in particular exhibiting excessive fragmentation and the problems that follow from it. Producing a completely new UV-map with standard parametrization techniques, and then resampling a new texture image, is often neither practical nor desirable, for at least two reasons: first, these models have characteristics (such as inconsistencies and high resolution) that make them unfit for automatic or manual parametrization; second, the resampling step degrades the signal unnecessarily, because it is unaware of the original texel densities. In contrast, our method improves the existing UV-map instead of replacing it, balancing the reduction of map fragmentation against the signal degradation caused by resampling, while also avoiding oversampling of the original signal. The proposed approach is fully automatic and extensively tested on a large benchmark of photo-reconstructed models; quantitative evaluation shows a drastic and consistent improvement of the mappings.
https://doi.org/10.1111/cgf.142615
Reference implementation available at https://github.com/maggio-a/texture-defrag
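As a loose illustration of the fragmentation the abstract refers to (and not of the paper's own metric or algorithm), the following C++17 sketch computes two simple proxies on an indexed triangle mesh with per-corner UVs: the number of UV charts in the atlas and the fraction of edge length that lies on texture seams. The Mesh layout, all identifiers, and the seam test below are assumptions made for this example; see the reference implementation above for the actual method.

// Illustrative sketch, not the paper's algorithm: computes two simple proxies for
// UV-atlas fragmentation -- the number of UV charts (connected components after
// cutting along seams) and the fraction of edge length lying on texture seams.
// The Mesh layout and all identifiers are assumptions made for this example.
#include <algorithm>
#include <array>
#include <cmath>
#include <map>
#include <numeric>
#include <utility>
#include <vector>

struct Vec3 { double x, y, z; };

struct Mesh {
    std::vector<Vec3> positions;              // 3D vertex positions
    std::vector<std::array<int, 3>> posIdx;   // per-triangle position indices
    std::vector<std::array<int, 3>> uvIdx;    // per-triangle (per-corner) UV indices
};

// Minimal union-find used to group triangles into UV charts.
struct DSU {
    std::vector<int> parent;
    explicit DSU(int n) : parent(n) { std::iota(parent.begin(), parent.end(), 0); }
    int find(int x) { return parent[x] == x ? x : parent[x] = find(parent[x]); }
    void unite(int a, int b) { parent[find(a)] = find(b); }
};

static double length(const Vec3& a, const Vec3& b) {
    double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

struct FragmentationStats {
    int chartCount;             // number of UV charts in the atlas
    double seamLengthFraction;  // seam length / total edge length
};

FragmentationStats atlasFragmentation(const Mesh& m) {
    // Collect, for every undirected 3D edge, the triangles (and corner slots) using it.
    std::map<std::pair<int, int>, std::vector<std::pair<int, int>>> edgeUses;
    int nTris = static_cast<int>(m.posIdx.size());
    for (int t = 0; t < nTris; ++t)
        for (int c = 0; c < 3; ++c) {
            int a = m.posIdx[t][c], b = m.posIdx[t][(c + 1) % 3];
            edgeUses[{std::min(a, b), std::max(a, b)}].push_back({t, c});
        }

    DSU dsu(nTris);
    double seamLen = 0.0, totalLen = 0.0;
    for (const auto& [edge, uses] : edgeUses) {
        double len = length(m.positions[edge.first], m.positions[edge.second]);
        totalLen += len;
        if (uses.size() != 2) continue;  // mesh boundary or non-manifold edge
        auto [t0, c0] = uses[0];
        auto [t1, c1] = uses[1];
        int a0 = m.posIdx[t0][c0], ua0 = m.uvIdx[t0][c0];
        int ub0 = m.uvIdx[t0][(c0 + 1) % 3];
        int a1 = m.posIdx[t1][c1], ua1 = m.uvIdx[t1][c1];
        int ub1 = m.uvIdx[t1][(c1 + 1) % 3];
        // The edge is seamless if both triangles reference the same UV index at each
        // endpoint; endpoints are matched by position index, since the two triangles
        // may traverse the shared edge in opposite directions.
        bool seamless = (a0 == a1) ? (ua0 == ua1 && ub0 == ub1)
                                   : (ua0 == ub1 && ub0 == ua1);
        if (seamless) dsu.unite(t0, t1);  // same chart across this edge
        else          seamLen += len;     // UV discontinuity: a texture seam
    }

    int charts = 0;
    for (int t = 0; t < nTris; ++t)
        if (dsu.find(t) == t) ++charts;
    return {charts, totalLen > 0.0 ? seamLen / totalLen : 0.0};
}

A defragmentation pass, as proposed in the paper, aims to drive both quantities down while keeping the original texel content in place rather than resampling it.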
BibTeX reference
@Article{MCT21,
  author  = "Maggiordomo, Andrea and Cignoni, Paolo and Tarini, Marco",
  title   = "Texture Defragmentation for Photo-Reconstructed 3D Models",
  journal = "Computer Graphics Forum",
  number  = "2",
  volume  = "40",
  pages   = "65--78",
  year    = "2021",
  url     = "http://vcg-legacy.isti.cnr.it/Publications/2021/MCT21"
}