三维重建场景的纹理优化算法综述
Survey of texture optimization algorithms for 3D reconstructed scenes
2024, Vol. 29, No. 8: 2303-2318
Print publication date: 2024-08-16
DOI: 10.11834/jig.230478
Yu Liu, Wu Xiaoqun. 2024. Survey of texture optimization algorithms for 3D reconstructed scenes. Journal of Image and Graphics, 29(08):2303-2318
Texture optimization for 3D reconstructed scenes is one of the fundamental tasks in computer graphics and computer vision. Its purpose is to optimize texture mapping, reduce the alignment error between the reconstructed geometry and the texture, and improve the detail of the reconstructed scene. To give a comprehensive picture of the state of the art, this paper surveys existing texture optimization algorithms for 3D reconstructed scenes from two perspectives: traditional optimization algorithms and deep learning-based optimization algorithms. Traditional texture optimization algorithms generally reach their goal through steps such as optimizing camera poses, correcting image colors, and improving the accuracy of the reconstructed geometry; according to the optimization strategy, they mainly comprise image fusion-based algorithms, image stitching-based algorithms, and joint texture-and-geometry optimization algorithms, whereas deep learning-based algorithms use neural networks to optimize the textures of 3D scenes. This paper also summarizes the datasets and evaluation metrics commonly used for texture optimization of 3D reconstructed scenes and focuses on their characteristics and usage. In addition, the paper gives a qualitative analysis and quantitative comparison of the existing texture optimization algorithms, emphasizing their principles, advantages, and drawbacks, and finally discusses the challenges and future directions of texture optimization for 3D reconstructed scenes.
Texture optimization is an important task in computer graphics and computer vision. It plays an essential role in creating realistic 3D reconstructed scenes, and its goal is to enhance the quality of texture mapping for such scenes. It has applications in digital entertainment, heritage restoration, smart cities, virtual/augmented reality, and other fields. A single image is usually insufficient for complete texture mapping; color images captured from multiple viewpoints are required. By combining these images and projecting them onto the 3D scene, high-quality texture mapping can be achieved, ideally with consistent luminosity across all images. However, errors in camera pose estimation and in the reconstructed geometry cause the projected images to be misaligned, which restricts the use of reconstructed 3D scenes in many applications and makes texture optimization a crucial task. Texture mapping projects 2D images onto a 3D surface to create a realistic representation of the scene, but the process is challenging because of the complexity of 3D data and the inherent inaccuracies of the reconstruction pipeline. Texture optimization algorithms attempt to reduce the errors in texture mapping and improve the visual quality of the resulting scene. They fall into two main categories: traditional optimization algorithms and deep learning-based optimization algorithms. Traditional optimization algorithms typically aim to reduce the accumulated camera pose error and the effect of geometric inaccuracy on texture mapping quality, and they mainly comprise image fusion, image stitching, and joint texture-and-geometry optimization. Image fusion-based algorithms reduce blurring and ghosting in the texture map by optimizing the camera poses and applying deformation functions to the input images, and then blending the color samples from different views with a weighting function (a schematic sketch of this weighting step follows this paragraph). Image stitching-based algorithms select an optimal texture image for each triangular face of the model and then handle the seams that arise where images from different views meet. Joint texture-and-geometry optimization algorithms further improve quality by jointly optimizing the camera poses, the positions of the geometric vertices, and the colors of the texture images, which minimizes the impact of reconstruction errors and yields more realistic texture mapping. However, traditional optimization algorithms can be computationally expensive because of the complexity of 3D data. To address this challenge, researchers have developed deep learning-based optimization algorithms, which use neural networks to optimize 3D scene textures. These algorithms can be further classified into convolutional neural network (CNN)-based algorithms, generative adversarial network (GAN)-based algorithms, neural textures, and text-driven diffusion model-based algorithms.
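As a concrete illustration of the fusion-based weighting step referenced above, the following Python sketch blends per-vertex colors sampled from several calibrated views, weighting each sample by how frontally the view observes the surface. The function name, the dictionary layout of views, and the cosine weighting heuristic are illustrative assumptions rather than the formulation of any specific published algorithm, and the sketch omits the pose refinement and image deformation that fusion methods apply before blending.

```python
import numpy as np

def blend_vertex_colors(vertices, normals, views):
    """Hypothetical fusion-style blending of multi-view color samples.

    vertices : (N, 3) surface points
    normals  : (N, 3) unit surface normals
    views    : list of dicts, each with
               'camera_center' -> (3,) camera position in world coordinates
               'sample_color'  -> callable mapping (N, 3) points to (N, 3) RGB
                                  (i.e., project into the image and sample it)
    """
    accum = np.zeros((len(vertices), 3))
    weight_sum = np.zeros((len(vertices), 1))

    for view in views:
        # Unit direction from each vertex toward the camera of this view.
        to_cam = view['camera_center'] - vertices
        to_cam /= np.linalg.norm(to_cam, axis=1, keepdims=True)

        # View-dependent weight: front-facing observations count more,
        # grazing or back-facing observations contribute little or nothing.
        w = np.clip(np.sum(normals * to_cam, axis=1, keepdims=True), 0.0, None)

        accum += w * view['sample_color'](vertices)
        weight_sum += w

    # Vertices seen by no view keep a zero color instead of dividing by zero.
    return accum / np.maximum(weight_sum, 1e-8)
```

In published fusion methods, the weights usually also account for image resolution, blur, and occlusion, and the blending is often performed in texture space rather than per vertex; the sketch only conveys the basic idea of view-dependent weighted averaging.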
CNN-based algorithms use deep networks to learn texture features from the input images and generate high-quality texture maps for 3D reconstructed scenes. GAN-based algorithms use a generative adversarial network to produce texture maps that are visually indistinguishable from real images. Neural textures use a neural network to synthesize new textures that can be applied to reconstructed scenes. Text-driven diffusion model-based algorithms use a diffusion model to synthesize or complete scene textures, including large missing regions, from a given text description. Deep learning-based optimization algorithms have shown promising results in improving the visual quality of texture mapping while reducing computational cost: they combine prior knowledge with neural networks to infer the texture of large missing regions, which not only reduces computation but also preserves texture detail and greatly enhances visual realism. In addition to the algorithms themselves, various datasets and evaluation metrics are used to assess their performance. Commonly used datasets include the BundleFusion dataset and the ScanNet dataset, and commonly used metrics include the peak signal-to-noise ratio (PSNR), the structural similarity index (SSIM), and the mean squared error (MSE) (a minimal computation of these metrics is sketched after this paragraph). In conclusion, texture optimization of 3D reconstructed scenes remains a challenging research topic in computer graphics and computer vision. Both traditional and deep learning-based optimization algorithms have been developed to address it, and the trend is toward deep learning-based methods, which can effectively reduce computational cost and enhance visual realism. Nevertheless, many challenges and opportunities remain, and new approaches are still needed to improve the quality of texture mapping for 3D reconstructed scenes.
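For the evaluation metrics named above, the snippet below shows one way to compute MSE, PSNR, and SSIM between a view rendered from the textured model and the corresponding ground-truth photograph. It is a minimal sketch that assumes 8-bit RGB images of identical size and uses scikit-image for the SSIM computation; the function name is illustrative.

```python
import numpy as np
from skimage.metrics import structural_similarity

def texture_quality_metrics(rendered, reference):
    """MSE, PSNR (dB), and SSIM between a rendered view and a reference photo.

    Both inputs are assumed to be uint8 RGB arrays of the same shape.
    """
    rendered = rendered.astype(np.float64)
    reference = reference.astype(np.float64)

    mse = np.mean((rendered - reference) ** 2)
    # PSNR for an 8-bit dynamic range (peak value 255); infinite if identical.
    psnr = float('inf') if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)
    # SSIM averaged over the three color channels (H x W x 3 layout).
    ssim = structural_similarity(rendered, reference,
                                 data_range=255.0, channel_axis=2)
    return mse, psnr, ssim
```

Higher PSNR and SSIM and lower MSE indicate that the optimized texture reproduces the reference photographs more faithfully, which is how these metrics are typically read in quantitative comparisons.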
scene reconstruction; texture optimization; image fusion; image stitching; joint optimization
Allene C, Pons J P and Keriven R. 2008. Seamless image-based texture atlases using multi-band blending//Proceedings of the 19th International Conference on Pattern Recognition. Tampa, USA: IEEE: 1-4 [DOI: 10.1109/ICPR.2008.4761913]
Baatz H, Granskog J, Papas M, Rousselle F and Novák J. 2022. NeRF-Tex: neural reflectance field textures. Computer Graphics Forum, 41(6): 287-301 [DOI: 10.1111/cgf.14449]
Bi S, Kalantari N K and Ramamoorthi R. 2017. Patch-based optimization for image-based texture mapping. ACM Transactions on Graphics, 36(4): #106 [DOI: 10.1145/3072959.3073610]
Božič A, Palafox P, Thies J, Dai A and Nießner M. 2021. TransformerFusion: monocular RGB scene reconstruction using Transformers [EB/OL]. [2023-07-05]. http://arxiv.org/pdf/2107.02191.pdf
Chabra R, Straub J, Sweeney C, Newcombe R and Fuchs H. 2019. StereoDRNet: dilated residual StereoNet//Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach, USA: IEEE: 11778-11787 [DOI: 10.1109/CVPR.2019.01206]
Chang A, Dai A, Funkhouser T, Halber M, Nießner M, Savva M, Song S R, Zeng A and Zhang Y D. 2017. Matterport3D: learning from RGB-D data in indoor environments//Proceedings of 2017 International Conference on 3D Vision. Qingdao, China: IEEE: 667-676 [DOI: 10.1109/3DV.2017.00081]
Chang A X, Funkhouser T, Guibas L, Hanrahan P, Huang Q X, Li Z M, Savarese S, Savva M, Song S R, Su H, Xiao J X, Yi L and Yu F. 2015. ShapeNet: an information-rich 3D model repository [EB/OL]. [2023-07-05]. https://arxiv.org/pdf/1512.03012.pdf
Curless B and Levoy M. 1996. A volumetric method for building complex models from range images//Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques. New Orleans, USA: ACM: 303-312 [DOI: 10.1145/237170.237269]
Dai A, Chang A X, Savva M, Halber M, Funkhouser T and Nießner M. 2017a. ScanNet: richly-annotated 3D reconstructions of indoor scenes//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA: IEEE: 2432-2443 [DOI: 10.1109/CVPR.2017.261]
Dai A, Nießner M, Zollhöfer M, Izadi S and Theobalt C. 2017b. BundleFusion: real-time globally consistent 3D reconstruction using on-the-fly surface reintegration. ACM Transactions on Graphics, 36(4): #76a [DOI: 10.1145/3072959.3054739]
Dai A, Siddiqui Y, Thies J, Valentin J and Nießner M. 2021. SPSG: self-supervised photometric scene generation from RGB-D scans//Proceedings of 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Nashville, USA: IEEE: 1747-1756 [DOI: 10.1109/CVPR46437.2021.00179]
Du S L, Dang H, Zhao M H and Shi Z H. 2023. Low-light image enhancement and denoising with internal and external priors. Journal of Image and Graphics, 28(9): 2844-2855 [DOI: 10.11834/jig.220707]
Fu Y P, Yan Q G, Liao J and Xiao C X. 2020. Joint texture and geometry optimization for RGB-D reconstruction//Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE: 5949-5958 [DOI: 10.1109/CVPR42600.2020.00599]
Fu Y P, Yan Q G, Liao J, Zhou H J, Tang J and Xiao C X. 2023. Seamless texture optimization for RGB-D reconstruction. IEEE Transactions on Visualization and Computer Graphics, 29(3): 1845-1859 [DOI: 10.1109/TVCG.2021.3134105]
Fu Y P, Yan Q G, Yang L, Liao J and Xiao C X. 2018. Texture mapping for 3D reconstruction with RGB-D sensor//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE: 4645-4653 [DOI: 10.1109/CVPR.2018.00488]
Gal R, Wexler Y, Ofek E, Hoppe H and Cohen-Or D. 2010. Seamless montage for texturing models. Computer Graphics Forum, 29(2): 479-486 [DOI: 10.1111/j.1467-8659.2009.01617.x]
Gao M H, Zhang Y S, Wang Y J and Li Y J. 2019. Texture synthesis optimization method based on convolutional neural network. Computer Engineering and Design, 40(12): 3551-3556 [DOI: 10.16208/j.issn1000-7024.2019.12.031]
Handa A, Pătrăucean V, Stent S and Cipolla R. 2016. SceneNet: an annotated model generator for indoor scene understanding//Proceedings of 2016 IEEE International Conference on Robotics and Automation. Stockholm, Sweden: IEEE: 5737-5743 [DOI: 10.1109/ICRA.2016.7487797]
Handa A, Whelan T, McDonald J and Davison A J. 2014. A benchmark for RGB-D visual odometry, 3D reconstruction and SLAM//Proceedings of 2014 IEEE International Conference on Robotics and Automation. Hong Kong, China: IEEE: 1524-1531 [DOI: 10.1109/ICRA.2014.6907054]
Huang J W, Dai A, Guibas L and Nießner M. 2017. 3DLite: towards commodity 3D scanning for content creation. ACM Transactions on Graphics, 36(6): #203 [DOI: 10.1145/3130800.3130824]
Huang J W, Thies J, Dai A, Kundu A, Jiang C Y, Guibas L J, Nießner M and Funkhouser T. 2020. Adversarial texture optimization from RGB-D scans//Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE: 1556-1565 [DOI: 10.1109/CVPR42600.2020.00163]
Izadi S, Kim D, Hilliges O, Molyneaux D, Newcombe R, Kohli P, Shotton J, Hodges S, Freeman D, Davison A and Fitzgibbon A. 2011. KinectFusion: real-time 3D reconstruction and interaction using a moving depth camera//Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology. Santa Barbara, USA: ACM: 559-568 [DOI: 10.1145/2047196.2047270]
Jeon J, Jung Y, Kim H and Lee S. 2016. Texture map generation for 3D reconstructed scenes. The Visual Computer, 32(6/8): 955-965 [DOI: 10.1007/s00371-016-1249-5]
Jiang H Q, Wang B S, Zhang G F and Bao H J. 2015. High-quality texture mapping for complex 3D scenes. Chinese Journal of Computers, 38(12): 2349-2360 [DOI: 10.11897/SP.J.1016.2015.02349]
Karras T, Laine S, Aittala M, Hellsten J, Lehtinen J and Aila T. 2020. Analyzing and improving the image quality of StyleGAN//Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE: 8107-8116 [DOI: 10.1109/CVPR42600.2020.00813]
Kim J, Kim H, Nam H, Park J and Lee S. 2022. TextureMe: high-quality textured scene reconstruction in real time. ACM Transactions on Graphics, 41(3): #24 [DOI: 10.1145/3503926]
Lee J H, Ha H, Dong Y, Tong X and Kim M H. 2020. TextureFusion: high-quality texture acquisition for real-time RGB-D scanning//Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE: 1269-1277 [DOI: 10.1109/CVPR42600.2020.00135]
Lempitsky V and Ivanov D. 2007. Seamless mosaicing of image-based texture maps//Proceedings of 2007 IEEE Conference on Computer Vision and Pattern Recognition. Minneapolis, USA: IEEE: 1-6 [DOI: 10.1109/CVPR.2007.383078]
Li S H, Xiao X W, Guo B X and Zhang L. 2020. A novel OpenMVS-based texture reconstruction method based on the fully automatic plane segmentation for 3D mesh models. Remote Sensing, 12(23): #3908 [DOI: 10.3390/rs12233908]
Li W, Gong H J and Yang R G. 2019a. Fast texture mapping adjustment via local/global optimization. IEEE Transactions on Visualization and Computer Graphics, 25(6): 2296-2303 [DOI: 10.1109/TVCG.2018.2831220]
Li W, Xiao X and Hahn J. 2019b. 3D reconstruction and texture optimization using a sparse set of RGB-D cameras//Proceedings of 2019 IEEE Winter Conference on Applications of Computer Vision. Waikoloa, USA: IEEE: 1413-1422 [DOI: 10.1109/WACV.2019.00155]
Liu B, Chen X N and Xue J S. 2015. Seamless texture mapping algorithm based on multi parameter weighted. Journal of Image and Graphics, 20(7): 929-936 [DOI: 10.11834/jig.20150709]
Liu Z N, Cao Y P, Kuang Z F, Kobbelt L and Hu S M. 2021. High-quality textured 3D shape reconstruction with cascaded fully convolutional networks. IEEE Transactions on Visualization and Computer Graphics, 27(1): 83-97 [DOI: 10.1109/TVCG.2019.2937300]
Maier R, Kim K, Cremers D, Kautz J and Nießner M. 2017. Intrinsic3D: high-quality 3D reconstruction by joint appearance and geometry optimization with spatially-varying lighting//Proceedings of 2017 IEEE International Conference on Computer Vision. Venice, Italy: IEEE: 3133-3141 [DOI: 10.1109/ICCV.2017.338]
Murez Z, Van As T, Bartolozzi J, Sinha A, Badrinarayanan V and Rabinovich A. 2020. Atlas: end-to-end 3D scene reconstruction from posed images//Proceedings of the 16th European Conference on Computer Vision. Glasgow, UK: Springer: 414-431 [DOI: 10.1007/978-3-030-58571-6_25]
Nießner M, Zollhöfer M, Izadi S and Stamminger M. 2013. Real-time 3D reconstruction at scale using voxel hashing. ACM Transactions on Graphics, 32(6): #169 [DOI: 10.1145/2508363.2508374]
Richardson E, Metzer G, Alaluf Y, Giryes R and Cohen-Or D. 2023. TEXTure: text-guided texturing of 3D shapes//Proceedings of 2023 ACM SIGGRAPH Conference. Los Angeles, USA: ACM: 1-11 [DOI: 10.1145/3588432.3591503]
Sheng X, Yuan J, Tao W B, Tao B and Liu L M. 2021. Efficient convex optimization-based texture mapping for large-scale 3D scene reconstruction. Information Sciences, 556: 143-159 [DOI: 10.1016/j.ins.2020.12.052]
Siddiqui Y, Thies J, Ma F C, Shan Q, Nießner M and Dai A. 2022. Texturify: generating textures on 3D shape surfaces//Proceedings of the 17th European Conference on Computer Vision. Tel Aviv, Israel: Springer: 72-88 [DOI: 10.1007/978-3-031-20062-5_5]
Simakov D, Caspi Y, Shechtman E and Irani M. 2008. Summarizing visual data using bidirectional similarity//Proceedings of 2008 IEEE Conference on Computer Vision and Pattern Recognition. Anchorage, USA: IEEE: 1-8 [DOI: 10.1109/CVPR.2008.4587842]
Song L C, Cao L L, Xu H Y, Kang K, Tang F, Yuan J S and Zhao Y. 2023. RoomDreamer: text-driven 3D indoor scene synthesis with coherent geometry and texture [EB/OL]. [2023-07-05]. https://arxiv.org/pdf/2305.11337.pdf
Straub J, Whelan T, Ma L N, Chen Y F, Wijmans E, Green S, Engel J J, Mur-Artal R, Ren C, Verma S, Clarkson A, Yan M F, Budge B, Yan Y J, Pan X Q, Yon J, Zou Y Y, Leon K, Carter N, Briales J, Gillingham T, Mueggler E, Pesqueira L, Savva M, Batra D, Strasdat H M, De Nardi R, Goesele M, Lovegrove S and Newcombe R. 2019. The Replica dataset: a digital replica of indoor spaces [EB/OL]. [2019-06-13]. https://arxiv.org/pdf/1906.05797.pdf
Sturm J, Engelhard N, Endres F, Burgard W and Cremers D. 2012. A benchmark for the evaluation of RGB-D SLAM systems//Proceedings of 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems. Vilamoura-Algarve, Portugal: IEEE: 573-580 [DOI: 10.1109/IROS.2012.6385773]
Sun J X, Wang X, Wang L Z, Li X Y, Zhang Y, Zhang H W and Liu Y B. 2023. Next3D: generative neural texture rasterization for 3D-aware head avatars//Proceedings of 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Vancouver, Canada: IEEE: 20991-21002 [DOI: 10.48550/arXiv.2211.11208]
Thies J, Zollhöfer M and Nießner M. 2019. Deferred neural rendering: image synthesis using neural textures. ACM Transactions on Graphics, 38(4): #66 [DOI: 10.1145/3306346.3323035]
Waechter M, Moehrle N and Goesele M. 2014. Let there be color! Large-scale texturing of 3D reconstructions//Proceedings of the 13th European Conference on Computer Vision. Zurich, Switzerland: Springer: 836-850 [DOI: 10.1007/978-3-319-10602-1_54]
Wang C and Guo X H. 2018. Plane-based optimization of geometry and texture for RGB-D reconstruction of indoor scenes//Proceedings of 2018 International Conference on 3D Vision. Verona, Italy: IEEE: 533-541 [DOI: 10.1109/3DV.2018.00067]
Wang C and Guo X H. 2019. Efficient plane-based optimization of geometry and texture for indoor RGB-D reconstruction [EB/OL]. [2023-07-05]. http://arxiv.org/pdf/1905.08853.pdf
Wei X K, Chen Z Q, Fu Y W, Cui Z P and Zhang Y D. 2021. Deep hybrid self-prior for full 3D mesh generation//Proceedings of 2021 IEEE/CVF International Conference on Computer Vision. Montreal, Canada: IEEE: 5785-5794 [DOI: 10.1109/ICCV48922.2021.00575]
Wimbauer F, Yang N, Von Stumberg L, Zeller N and Cremers D. 2021. MonoRec: semi-supervised dense reconstruction in dynamic environments from a single moving camera//Proceedings of 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Nashville, USA: IEEE: 6108-6118 [DOI: 10.1109/CVPR46437.2021.00605]
Xiao J X, Owens A and Torralba A. 2013. SUN3D: a database of big spaces reconstructed using SfM and object labels//Proceedings of 2013 IEEE International Conference on Computer Vision. Sydney, Australia: IEEE: 1625-1632 [DOI: 10.1109/ICCV.2013.458]
Zhai C J and Qin X Y. 2017. Intrinsic texture reconstruction for 3D handheld scanner. Journal of Image and Graphics, 22(3): 395-404 [DOI: 10.11834/jig.20170314]
Zhang J B, Wan Z Y and Liao J. 2023. Adaptive joint optimization for 3D reconstruction with differentiable rendering. IEEE Transactions on Visualization and Computer Graphics, 29(6): 3039-3051 [DOI: 10.1109/TVCG.2022.3148245]
Zhao M H, Cheng D N, Du S L, Hu J, Shi C and Shi Z H. 2022. An improved fusion strategy based on transparency-guided backlit image enhancement. Journal of Image and Graphics, 27(5): 1554-1564 [DOI: 10.11834/jig.210739]
Zhi T C, Lassner C, Tung T, Stoll C, Narasimhan S G and Vo M. 2020. TexMesh: reconstructing detailed human texture and geometry from RGB-D video//Proceedings of the 16th European Conference on Computer Vision. Glasgow, UK: Springer: 492-509 [DOI: 10.1007/978-3-030-58607-2_29]
Zhong D W, Han L and Fang L. 2019. IDFusion: globally consistent dense 3D reconstruction from RGB-D and inertial measurements//Proceedings of the 27th ACM International Conference on Multimedia. Nice, France: ACM: 962-970 [DOI: 10.1145/3343031.3351085]
Zhou Q Y and Koltun V. 2014. Color map optimization for 3D reconstruction with consumer depth cameras. ACM Transactions on Graphics, 33(4): #155 [DOI: 10.1145/2601097.2601134]