Survey of digital face rendering and appearance recovery methods
2024, Vol. 29, No. 9, pp. 2513-2540
Print publication date: 2024-09-16
DOI: 10.11834/jig.230683
Hao Conghui, Du Youyang, Wang Lu, Wang Beibei. 2024. Survey of digital face rendering and appearance recovery methods. Journal of Image and Graphics, 29(09): 2513-2540
Digital human technology has drawn wide attention in fields such as digital twins and the metaverse. As a key component of the digital human, the face has become a focus of digital generation and presentation, and the related techniques have found broad application in film, games, and other fields. The demand for photorealistic face rendering and accurate face recovery keeps growing, but because of the face's multilayered material structure, the complex translucency of skin, and the combined effect of microscopic features such as pores and wrinkles, high-fidelity yet efficient face rendering remains a long-standing challenge. In addition, recovering facial geometry and appearance with capture devices is an important way to build face data, yet high-quality recovery is likewise limited by costly capture equipment and the scarcity of relevant datasets. This paper surveys methods for rendering and recovering digital faces. We first introduce methods for photorealistic face rendering and, according to their rendering principles, divide them into diffusion-approximation methods and Monte Carlo sampling methods, with an emphasis on the current state and open problems of diffusion-approximation rendering. We then divide face recovery work into high-precision recovery based on professional capture equipment and lower-precision recovery based on deep learning. For high-precision recovery, we summarize work along two branches: active illumination and passive capture. For deep-learning-based lower-precision recovery, we organize the discussion into recovery of geometric detail, recovery of texture maps, and recovery of facial material information. We systematically describe the core ideas of each class of methods and compare them side by side. Finally, we discuss future trends in face rendering and recovery. We hope this survey offers newcomers to face rendering and appearance recovery useful background knowledge and inspiration.
Digital human technology has attracted widespread attention in the digital twin and metaverse fields. Because the face is an integral part of the digital human, its digitization and presentation have become a focal point, and the associated techniques find extensive application in film, gaming, and virtual reality. Demand is growing for both realistic facial rendering and high-quality inverse recovery of faces. However, the complex, multilayered material structure of the face makes realistic rendering a challenge. The composition of chemicals inside the skin, such as melanin and hemoglobin, strongly influences skin rendering, and factors such as temperature and blood flow rate can alter the skin's appearance. The semitransparency of skin complicates the simulation of subsurface scattering, compounded by the ubiquity of microscopic geometric features such as pores and wrinkles on the face. All of these issues make facial rendering difficult and raise the bar for facial recovery quality. In addition, because people see real human faces every day, they are highly sensitive to the texture and details of digital faces, which places greater demands on realism and accuracy. Meanwhile, recovery of facial geometry and appearance is a crucial route to constructing facial datasets. However, high-quality facial recovery is often constrained by the high cost of acquisition equipment, and most studies are limited by the acquisition speed for facial data, which makes capturing dynamic facial appearance challenging. Lightweight recovery methods, in turn, face a shortage of facial material datasets. This paper presents an overview of recent advances in the rendering and recovery of digital human faces.
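To make the core difficulty concrete, the toy sketch below runs a Monte Carlo random walk through a homogeneous translucent slab, a drastic simplification of multilayered skin. The function name and all parameter choices are illustrative only and are not taken from any surveyed method.

```python
import math
import random

def random_walk_transmittance(sigma_t, albedo, slab_depth,
                              n_paths=20000, max_bounces=256, seed=7):
    """Toy Monte Carlo random walk through a homogeneous translucent slab
    (a crude stand-in for skin). Free-flight distances are sampled from
    the extinction coefficient sigma_t, scattering is isotropic, and a
    path is absorbed with probability (1 - albedo) at each event. Returns
    the fraction of paths that exit through the far side of the slab."""
    rng = random.Random(seed)
    escaped = 0
    for _ in range(n_paths):
        z, dz = 0.0, 1.0                      # depth and z-direction cosine
        for _ in range(max_bounces):
            t = -math.log(1.0 - rng.random()) / sigma_t   # free-flight distance
            z += dz * t
            if z >= slab_depth:               # transmitted through the slab
                escaped += 1
                break
            if z < 0.0:                       # scattered back out the front
                break
            if rng.random() > albedo:         # absorbed inside the medium
                break
            dz = 2.0 * rng.random() - 1.0     # isotropic scatter: new z-cosine
    return escaped / n_paths
```

Even this toy version needs many paths before its estimate settles down, which hints at why full Monte Carlo skin rendering is mostly reserved for offline film production.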
First, we introduce methods for realistic facial rendering and categorize them into diffusion-approximation and Monte Carlo approaches. Methods based on diffusion approximation target efficient reproduction of the skin's semitransparency; they rest on strict assumptions and are therefore limited in precision, but their simplified subsurface scattering models can render satisfactory images quickly, which makes them common in dynamic and interactive applications such as games. Methods based on Monte Carlo sampling, by contrast, achieve high precision and robustness through meticulous, comprehensive simulation of the complex interactions between light and skin, but they need long computation times to converge, so they are the preferred choice in applications such as film, where highly realistic visual effects are required. We emphasize the development and challenges of diffusion-approximation methods and divide them into improvements to the diffusion profile, real-time implementations of subsurface scattering, and hybrid methods that incorporate Monte Carlo techniques. Recent Monte Carlo research relevant to facial rendering has aimed at improving convergence, including zero-variance random walks, next-event estimation, and path guiding. Second, we divide facial recovery work into two categories: high-precision recovery based on specialized acquisition equipment and low-precision recovery based on deep learning. We further split the former by whether specialized lighting equipment is used, distinguishing active illumination from passive capture, and explain each in detail. Active illumination relies on professional lighting equipment to improve recovery quality, for example applying gradient lighting to recover high-precision normal maps.
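As a minimal illustration of the gradient-lighting idea just mentioned: under spherical gradient illumination, each of three gradient-lit images, ratioed against a fully lit image, encodes one component of the per-pixel normal for an idealized Lambertian surface. The function below is a hypothetical sketch of that ratio computation, ignoring specularity, noise, and calibration.

```python
import math

def normal_from_gradients(Lx, Ly, Lz, Lfull):
    """Per-pixel normal from spherical-gradient ratio images, assuming an
    ideal Lambertian surface: a gradient pattern that ramps light intensity
    as (1 + w_axis) / 2 over all incoming directions makes the ratio to the
    fully lit image encode one normal component,
        L_axis / L_full = (1 + n_axis) / 2."""
    nx = 2.0 * Lx / Lfull - 1.0
    ny = 2.0 * Ly / Lfull - 1.0
    nz = 2.0 * Lz / Lfull - 1.0
    norm = math.sqrt(nx * nx + ny * ny + nz * nz)   # renormalize against noise
    return (nx / norm, ny / norm, nz / norm)
```

In practice, polarization is typically used on top of this to separate the diffuse and specular components before taking the ratios.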
Conversely, passive capture methods do not depend on professional lighting equipment; any artificially provided lighting is limited to uniform illumination that reduces the interference of scene lighting on recovery and serves similar auxiliary roles. We then examine low-precision facial recovery methods that incorporate deep learning and classify them into three categories, namely, recovery of geometric detail, of texture maps, and of facial material information, providing in-depth insights into each. We discuss strategies for overcoming the limitations of geometric recovery based on parametric models, introduce refined parametric model representations, and cover the prediction of a range of maps, including displacement maps, that encode the geometric details of the model surface. For texture recovery, we explore how deep generative networks predict high-fidelity, personalized facial skin textures. We also comprehensively review attempts to mitigate the ill-posed problem of separating reflectance information, and we introduce facial recovery work that uses multiview images and video sequences. These low-precision facial recovery methods enjoy a wide application space thanks to their flexibility and achieve steadily better results as deep learning technology advances. Finally, future trends in facial realism rendering and recovery methods are outlined based on the current state of research. In facial realism rendering, existing work often represents the material properties of faces with texture maps and neglects the distinctive coloration principles of skin as a biological material. Furthermore, the rapid development of deep learning makes it increasingly important to explore its integration with current rendering techniques. For inverse recovery, the lack of high-quality open-source datasets often limits data-driven facial recovery methods.
In addition, substantial improvement is still needed in modeling and recovering detail at the level of skin pores. Combining inverse recovery with text-driven generative work also holds enormous potential across many application scenarios. We hope this paper provides novice researchers in facial rendering and appearance recovery with valuable background knowledge and inspiration.
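Many of the geometric-detail recovery methods surveyed above output a displacement map over a coarse base mesh. Applying such a map is conceptually simple, as this hypothetical sketch shows: each vertex is offset along its unit normal by the scalar sampled from the map.

```python
def apply_displacement(vertices, normals, disp_values, scale=1.0):
    """Offset each vertex of a coarse base mesh along its (unit) normal by
    the scalar sampled from a displacement map, adding back mesoscopic
    detail such as pores and wrinkles. disp_values holds one sampled scalar
    per vertex; scale converts displacement-map units into model units."""
    displaced = []
    for (px, py, pz), (nx, ny, nz), d in zip(vertices, normals, disp_values):
        t = d * scale
        displaced.append((px + nx * t, py + ny * t, pz + nz * t))
    return displaced
```

Real pipelines perform this per-fragment or at tessellation time on the GPU, but the per-vertex form above captures the same idea.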
facial realism rendering; subsurface scattering; facial inverse recovery; active illumination; passive capture; deep learning
Abdal R, Zhu P H, Mitra N J and Wonka P. 2021. StyleFlow: attribute-conditioned exploration of styleGAN-generated images using conditional continuous normalizing flows. ACM Transactions on Graphics, 40(3): #21 [DOI: 10.1145/3447648http://dx.doi.org/10.1145/3447648]
Aldrian O and Smith W A P. 2013. Inverse rendering of faces with a 3D morphable model. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(5): 1080-1093 [DOI: 10.1109/TPAMI.2012.206http://dx.doi.org/10.1109/TPAMI.2012.206]
Aliaga C, Xia M Q, Xie X, Jarabo A, Braun G and Hery C. 2023. A hyperspectral space of skin tones for inverse rendering of biophysical skin properties. Computer Graphics Forum, 42(4): #14887 [DOI: 10.1111/cgf.14887http://dx.doi.org/10.1111/cgf.14887]
Aneja S, Thies J, Dai A and Niessner M. 2023. ClipFace: text-guided editing of textured 3D morphable models//Proceedings of ACM SIGGRAPH 2023 Conference. Los Angeles, USA: Association for Computing Machinery: #70 [DOI: 10.1145/3588432.3591566http://dx.doi.org/10.1145/3588432.3591566]
Azinović D, Maury O, Hery C, Nießner M and Thies J. 2023. High-res facial appearance capture from polarized smartphone images//Proceedings of 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Vancouver, Canada: IEEE: 16836-16846[DOI: 10.1109/CVPR52729.2023.01615http://dx.doi.org/10.1109/CVPR52729.2023.01615]
Bai H R, Kang D, Zhang H X, Pan J S and Bao L C. 2023. FFHQ-UV: normalized facial UV-texture dataset for 3D face reconstruction//Proceedings of 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Vancouver, Canada: IEEE: 362-371[DOI: 10.1109/CVPR52729.2023.00043http://dx.doi.org/10.1109/CVPR52729.2023.00043]
Bao L C, Lin X K, Chen Y J, Zhang H X, Wang S, Zhe X, Kang D, Huang H Z, Jiang X W, Wang J, Yu D and Zhang Z Y. 2022. High-fidelity 3D digital human head creation from RGB-D selfies. ACM Transactions on Graphics, 41(1): #3[DOI: 10.1145/3472954http://dx.doi.org/10.1145/3472954]
Beeler T, Bickel B, Beardsley P, Sumner B and Gross M. 2010. High-quality single-shot capture of facial geometry//Proceedings of ACM SIGGRAPH 2010 Papers. Los Angeles, USA: Association for Computing Machinery: #40 [DOI: 10.1145/1833349.1778777http://dx.doi.org/10.1145/1833349.1778777]
Beeler T, Hahn F, Bradley D, Bickel B, Beardsley P, Gotsman C, Sumner R W and Gross M. 2011. High-quality passive facial performance capture using anchor frames. ACM Transactions on Graphics, 30(4): #75 [DOI: 10.1145/2010324.1964970http://dx.doi.org/10.1145/2010324.1964970]
Blanz V and Vetter T. 1999. A morphable model for the synthesis of 3D faces//Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques. Los Angeles, USA: ACM Press/Addison-Wesley Publishing Co.: 187-194[DOI: 10.1145/311535.311556http://dx.doi.org/10.1145/311535.311556]
Bolkart T and Wuhrer S. 2015. A groupwise multilinear correspondence optimization for 3D faces//Proceedings of the 2015 IEEE International Conference on Computer Vision. Santiago, Chile: IEEE: 3604-3612 [DOI: 10.1109/ICCV.2015.411http://dx.doi.org/10.1109/ICCV.2015.411]
Booth J, Roussos A, Zafeiriou S, Ponniah A and Dunaway D. 2016. A 3D morphable model learnt from 10,000 faces//Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, USA: IEEE: 5543-5552[DOI: 10.1109/CVPR.2016.598http://dx.doi.org/10.1109/CVPR.2016.598]
Borshukov G and Lewis J P. 2005. Realistic human face rendering for "The Matrix Reloaded"//Proceedings of ACM SIGGRAPH 2005 Courses. Los Angeles, USA: Association for Computing Machinery: #13 [DOI: 10.1145/1198555.1198593http://dx.doi.org/10.1145/1198555.1198593]
Bradley D, Heidrich W, Popa T and Sheffer A. 2010. High resolution passive facial performance capture//Proceedings of ACM SIGGRAPH 2010 Papers. Los Angeles, USA: Association for Computing Machinery: #41 [DOI: 10.1145/1833349.1778778http://dx.doi.org/10.1145/1833349.1778778]
Brunton A, Bolkart T and Wuhrer S. 2014a. Multilinear wavelets: a statistical shape space for human faces//Proceedings of the 13th European Conference on Computer Vision—ECCV. Zurich, Switzerland: Springer: 297-312 [DOI: 10.1007/978-3-319-10590-1_20http://dx.doi.org/10.1007/978-3-319-10590-1_20]
Brunton A, Salazar A, Bolkart T and Wuhrer S. 2014b. Review of statistical shape spaces for 3D data with comparative analysis for human faces. Computer Vision and Image Understanding, 128: 1-17[DOI: 10.1016/j.cviu.2014.05.005http://dx.doi.org/10.1016/j.cviu.2014.05.005]
Cao C, Bradley D, Zhou K and Beeler T. 2015. Real-time high-fidelity facial performance capture. ACM Transactions on Graphics, 34(4): #46 [DOI: 10.1145/2766943http://dx.doi.org/10.1145/2766943]
Cao C, Weng Y L, Zhou S, Tong Y Y and Zhou K. 2014. FaceWarehouse: a 3D facial expression database for visual computing. IEEE Transactions on Visualization and Computer Graphics, 20(3): 413-425 [DOI: 10.1109/TVCG.2013.249http://dx.doi.org/10.1109/TVCG.2013.249]
Chai Z H, Zhang T K, He T Y, Tan X, Baltrusaitis T, Wu H T, Li R N, Zhao S, Yuan C and Bian J. 2023. HiFace: high-fidelity 3D face reconstruction by learning static and dynamic details//Proceedings of 2023 IEEE/CVF International Conference on Computer Vision. Paris, France: IEEE: #834 [DOI: 10.1109/ICCV51070.2023.00834http://dx.doi.org/10.1109/ICCV51070.2023.00834]
Chen A P, Chen Z, Zhang G L, Mitchell K and Yu J Y. 2019. Photo-Realistic facial details synthesis from single image//Proceedings of 2019 IEEE/CVF International Conference on Computer Vision. Seoul, Korea (South): IEEE: 9428-9438[DOI: 10.1109/ICCV.2019.00952http://dx.doi.org/10.1109/ICCV.2019.00952]
Chiang M J Y, Kutz P and Burley B. 2016. Practical and controllable subsurface scattering for production path tracing//Proceedings of ACM SIGGRAPH 2016 Talks. Anaheim, USA: Association for Computing Machinery: #49 [DOI: 10.1145/2897839.2927433http://dx.doi.org/10.1145/2897839.2927433]
Christensen P H and Burley B. 2015. Approximate reflectance profiles for efficient subsurface scattering [EB/OL]. [2023-09-25]. https://graphics.pixar.com/library/ApproxBSSRDF/paper.pdfhttps://graphics.pixar.com/library/ApproxBSSRDF/paper.pdf
Dai H, Pears N, Smith W and Duncan C. 2017. A 3D morphable model of craniofacial shape and texture variation//Proceedings of 2017 IEEE International Conference on Computer Vision. Venice, Italy: IEEE: 3104-3112 [DOI: 10.1109/ICCV.2017.335http://dx.doi.org/10.1109/ICCV.2017.335]
Debevec P, Hawkins T, Tchou C, Duiker H P, Sarokin W and Sagar M. 2000. Acquiring the reflectance field of a human face//Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques. New Orleans, USA: ACM Press/Addison-Wesley Publishing Co.: 145-156 [DOI: 10.1145/344779.344855http://dx.doi.org/10.1145/344779.344855]
Deng H, Wang B B, Wang R and Holzschuch N. 2020. A practical path guiding method for participating media. Computational Visual Media, 6(1): 37-51 [DOI: 10.1007/s41095-020-0160-1http://dx.doi.org/10.1007/s41095-020-0160-1]
Deng J K, Cheng S Y, Xue N N, Zhou Y X and Zafeiriou S. 2018. UV-GAN: adversarial facial UV map completion for pose-invariant face recognition//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE: 7093-7102 [DOI: 10.1109/CVPR.2018.00741http://dx.doi.org/10.1109/CVPR.2018.00741]
Deng Y, Yang J L, Xu S C, Chen D, Jia Y D and Tong X. 2019. Accurate 3D face reconstruction with weakly-supervised learning: From single image to image set//Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. Long Beach, USA: IEEE: 285-295 [DOI: 10.1109/CVPRW.2019.00038http://dx.doi.org/10.1109/CVPRW.2019.00038]
d’Eon E. 2012. A better dipole [EB/OL]. [2023-09-25]. http://www.eugenedeon.com/pdfs/betterdipole.pdfhttp://www.eugenedeon.com/pdfs/betterdipole.pdf
d’Eon E and Irving G. 2011. A quantized-diffusion model for rendering translucent materials. ACM Transactions on Graphics, 30(4): 56 [DOI: 10.1145/2010324.1964951http://dx.doi.org/10.1145/2010324.1964951]
d’Eon E, Luebke D and Enderton E. 2007. Efficient rendering of human skin//Proceedings of the 18th Eurographics Conference on Rendering Techniques. Grenoble, France: Eurographics Association: 147-157 [DOI: 10.2312/EGWR/EGSR07/147-157http://dx.doi.org/10.2312/EGWR/EGSR07/147-157]
Dib A, Ahn J, Thébault C, Gosselin P H and Chevallier L. 2023. S2F2: self-supervised high fidelity face reconstruction from monocular image//Proceedings of the 17th IEEE International Conference on Automatic Face and Gesture Recognition. Waikoloa Beach, USA: IEEE Press: 1-8[DOI: 10.1109/FG57933.2023.10042713http://dx.doi.org/10.1109/FG57933.2023.10042713]
Dib A, Bharaj G, Ahn J, Thébault C, Gosselin P, Romeo M and Chevallier L. 2021a. Practical face reconstruction via differentiable ray tracing. Computer Graphics Forum, 40(2): 153-164 [DOI: 10.1111/cgf.142622http://dx.doi.org/10.1111/cgf.142622]
Dib A, Thébault C, Ahn J, Gosselin P H, Theobalt C and Chevallier L. 2021b. Towards high fidelity monocular face reconstruction with rich reflectance using self-supervised learning and ray tracing//Proceedings of 2021 IEEE/CVF International Conference on Computer Vision. Montreal, Canada: IEEE: 12799-12809 [DOI: 10.1109/ICCV48922.2021.01258http://dx.doi.org/10.1109/ICCV48922.2021.01258]
Donner C and Jensen H W. 2005. Light diffusion in multi-layered translucent materials. ACM Transactions on Graphics, 24(3): 1032-1039 [DOI: 10.1145/1073204.1073308http://dx.doi.org/10.1145/1073204.1073308]
Donner C and Jensen H W. 2006. A spectral BSSRDF for shading human skin//Proceedings of the 17th Eurographics Conference on Rendering Techniques. Nicosia, Cyprus: Eurographics Association: #2383946
Donner C and Jensen H W. 2008. Rendering translucent materials using photon diffusion//Proceedings of ACM SIGGRAPH 2008 Classes. Los Angeles, USA: Association for Computing Machinery: #4[DOI: 10.1145/1401132.1401138http://dx.doi.org/10.1145/1401132.1401138]
Donner C, Lawrence J, Ramamoorthi R, Hachisuka T, Jensen H W and Nayar S. 2009. An empirical BSSRDF model//Proceedings of ACM SIGGRAPH 2009 Papers. New Orleans, USA: Association for Computing Machinery: #30 [DOI: 10.1145/1576246.1531336http://dx.doi.org/10.1145/1576246.1531336]
Donner C, Weyrich T, d'Eon E, Ramamoorthi R and Rusinkiewicz S. 2008. A layered, heterogeneous reflectance model for acquiring and rendering human skin. ACM Transactions on Graphics, 27(5): #140 [DOI: 10.1145/1409060.1409093http://dx.doi.org/10.1145/1409060.1409093]
Egger B, Schönborn S, Schneider A, Kortylewski A, Morel-Forster A, Blumer C and Vetter T. 2018. Occlusion-aware 3D morphable models and an illumination prior for face image analysis. International Journal of Computer Vision, 126(12): 1269-1287 [DOI: 10.1007/s11263-018-1064-8http://dx.doi.org/10.1007/s11263-018-1064-8]
Fan J H, Wang B B, Hasan M, Yang J and Yan L Q. 2022. Neural layered BRDFs//Proceedings of ACM SIGGRAPH 2022 Conference. Vancouver, USA: Association for Computing Machinery: #4[DOI: 10.1145/3528233.3530732http://dx.doi.org/10.1145/3528233.3530732]
Fan J H, Wang B B, Hasan M, Yang J and Yan L Q. 2023. Neural biplane representation for BTF rendering and acquisition//Proceedings of ACM SIGGRAPH 2023 Conference. Los Angeles, USA: Association for Computing Machinery: #81[DOI: 10.1145/3588432.3591505http://dx.doi.org/10.1145/3588432.3591505]
Feng H W, Bolkart T, Tesch J, Black M J and Abrevaya V. 2022. Towards racially unbiased skin tone estimation via scene disambiguation//Proceedings of the 17th European Conference on Computer Vision (ECCV 2022). Tel Aviv, Israel: Springer: 72-90 [DOI: 10.1007/978-3-031-19778-9_5http://dx.doi.org/10.1007/978-3-031-19778-9_5]
Feng Y, Feng H W, Black M J and Bolkart T. 2021. Learning an animatable detailed 3D face model from in-the-wild images. ACM Transactions on Graphics, 40(4): #88 [DOI: 10.1145/3450626.3459936http://dx.doi.org/10.1145/3450626.3459936]
Frisvad J R, Hachisuka T and Kjeldsen T K. 2014. Directional dipole model for subsurface scattering. ACM Transactions on Graphics, 34(1): #5 [DOI: 10.1145/2682629http://dx.doi.org/10.1145/2682629]
Fyffe G. 2010. Single-shot photometric stereo by spectral multiplexing//Proceedings of ACM SIGGRAPH ASIA 2010 Sketches. Seoul, Republic of Korea: Association for Computing Machinery: #20[DOI: 10.1145/1899950.1899970http://dx.doi.org/10.1145/1899950.1899970]
Fyffe G and Debevec P. 2015. Single-shot reflectance measurement from polarized color gradient illumination//Proceedings of 2015 IEEE International Conference on Computational Photography. Houston, USA: IEEE: 1-10 [DOI: 10.1109/ICCPHOT.2015.7168375http://dx.doi.org/10.1109/ICCPHOT.2015.7168375]
Fyffe G, Graham P, Tunwattanapong B, Ghosh A and Debevec P. 2016. Near-instant capture of high-resolution facial geometry and reflectance. Computer Graphics Forum, 35(2): 353-363 [DOI: 10.1111/cgf.12837http://dx.doi.org/10.1111/cgf.12837]
Garrido P, Valgaert L, Wu C L and Theobalt C. 2013. Reconstructing detailed dynamic face geometry from monocular video. ACM Transactions on Graphics, 32(6): #158[DOI: 10.1145/2508363.2508380http://dx.doi.org/10.1145/2508363.2508380]
Gecer B, Deng J K and Zafeiriou S. 2021. OSTeC: one-shot texture completion//Proceedings of 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Nashville, USA: IEEE: 7624-7634 [DOI: 10.1109/CVPR46437.2021.00754http://dx.doi.org/10.1109/CVPR46437.2021.00754]
Gecer B, Ploumpis S, Kotsia I and Zafeiriou S. 2019. GANFIT: generative adversarial network fitting for high fidelity 3D face reconstruction//Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach, USA: IEEE: 1155-1164 [DOI: 10.1109/CVPR.2019.00125http://dx.doi.org/10.1109/CVPR.2019.00125]
Gecer B, Ploumpis S, Kotsia I and Zafeiriou S. 2022. Fast-GANFIT: generative adversarial network for high fidelity 3D face reconstruction. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(9): 4879-4893 [DOI: 10.1109/TPAMI.2021.3084524http://dx.doi.org/10.1109/TPAMI.2021.3084524]
Genova K, Cole F, Maschinot A, Sarna A, Vlasic D and Freeman W T. 2018. Unsupervised training for 3D morphable model regression//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE: 8377-8386 [DOI: 10.1109/CVPR.2018.00874http://dx.doi.org/10.1109/CVPR.2018.00874]
Gerig T, Morel-Forster A, Blumer C, Egger B, Luthi M, Schönborn S and Vetter T. 2018. Morphable face models-an open framework. Proceedings of the 13th IEEE International Conference on Automatic Face and Gesture Recognition. Xi’an, China: IEEE: 75-82[DOI: 10.1109/FG.2018.00021http://dx.doi.org/10.1109/FG.2018.00021]
Ghosh A, Fyffe G, Tunwattanapong B, Busch J, Yu X M and Debevec P. 2011. Multiview face capture using polarized spherical gradient illumination. ACM Transactions on Graphics, 30(6): 1-10 [DOI: 10.1145/2070781.2024163http://dx.doi.org/10.1145/2070781.2024163]
Ghosh A, Hawkins T, Peers P, Frederiksen S and Debevec P. 2008. Practical modeling and acquisition of layered facial reflectance//Proceedings of ACM SIGGRAPH Asia 2008 Papers. Singapore: Association for Computing Machinery: #139[DOI: 10.1145/1457515.1409092http://dx.doi.org/10.1145/1457515.1409092]
Goodfellow I J, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, Courville A and Bengio Y. 2014. Generative adversarial nets//Proceedings of the 27th International Conference on Neural Information Processing Systems—Volume 2. Montreal, Canada: MIT Press: 2672-2680
Gosselin D. 2004. Real time skin rendering. Game Developer Conference, D3D Tutorial (Vol. 9).
Gotardo P, Riviere J, Bradley D, Ghosh A and Beeler T. 2018. Practical dynamic facial appearance modeling and acquisition. ACM Transactions on Graphics, 37(6): #232 [DOI: 10.1145/3272127.3275073http://dx.doi.org/10.1145/3272127.3275073]
Graham P, Tunwattanapong B, Busch J, Yu X M, Jones A, Debevec P and Ghosh A. 2012. Measurement-based synthesis of facial microgeometry//Proceedings of ACM SIGGRAPH 2012 Talks. Los Angeles: Association for Computing Machinery: #9 [DOI: 10.1145/2343045.2343057http://dx.doi.org/10.1145/2343045.2343057]
Guo Y D, Zhang J Y, Cai J F, Jiang B Y and Zheng J M. 2019. CNN-based real-time dense face reconstruction with inverse-rendered photo-realistic face images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(6): 1294-1307[DOI: 10.1109/TPAMI.2018.2837742http://dx.doi.org/10.1109/TPAMI.2018.2837742]
Habel R, Christensen P H and Jarosz W. 2013. Photon beam diffusion: a hybrid monte carlo method for subsurface scattering. Computer Graphics Forum, 32(4): 27-37 [DOI: 10.1111/cgf.12148http://dx.doi.org/10.1111/cgf.12148]
Hanika J, Droske M and Fascione L. 2015. Manifold next event estimation. Computer Graphics Forum, 34(4): 87-97 [DOI: 10.1111/cgf.12681http://dx.doi.org/10.1111/cgf.12681]
Hanrahan P and Krueger W. 1993. Reflection from layered surfaces due to subsurface scattering//Proceedings of the 20th Annual Conference on Computer Graphics and Interactive Techniques. Anaheim, USA: Association for Computing Machinery: 165-174 [DOI: 10.1145/166117.166139http://dx.doi.org/10.1145/166117.166139]
Herholz S, Zhao Y Y, Elek O, Nowrouzezahrai D, Lensch H P A and Křivnek J. 2019. Volume path guiding based on zero-variance random walk theory. ACM Transactions on Graphics, 38(3): #25[DOI: 10.1145/3230635http://dx.doi.org/10.1145/3230635]
Hernndez C and Vogiatzis G. 2010. Self-calibrating a real-time monocular 3d facial capture system//Proceedings International Symposium on 3D Data Processing, Visualization and Transmission (3DPVT). [s.l.]: [s.n.]
Hertzmann A, Jacobs C E, Oliver N, Curless B and Salesin D H. 2001. Image analogies//Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques. Los Angeles, USA: Association for Computing Machinery: 327-340 [DOI: 10.1145/383259.383295http://dx.doi.org/10.1145/383259.383295]
Hu G S, Mortazavian P, Kittler J and Christmas W. 2013. A facial symmetry prior for improved illumination fitting of 3D morphable model//Proceedings of 2013 International Conference on Biometrics. Madrid, Spain: IEEE: 1-6 [DOI: 10.1109/ICB.2013.6613000http://dx.doi.org/10.1109/ICB.2013.6613000]
Hu L W, Saito S, Wei L Y, Nagano K, Seo J, Fursund J, Sadeghi I, Sun C, Chen Y C and Li H. 2017. Avatar digitization from a single image for real-time rendering. ACM Transactions on Graphics, 36(6): #195 [DOI: 10.1145/3130800.31310887http://dx.doi.org/10.1145/3130800.31310887]
Huber P, Hu G S, Tena R, Mortazavian P, Koppen W P, Christmas W J, Rätsch M and Kittler J. 2016. A multiresolution 3D morphable face model and fitting framework//Proceedings of the 11th Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Application. Rome, Italy: SciTePress: 79-86 [DOI: 10.5220/0005669500790086http://dx.doi.org/10.5220/0005669500790086]
Huynh L, Chen W K, Saito S, Xing J, Nagano K, Jones A, Debevec P and Li H. 2018. Mesoscopic facial geometry inference using deep neural networks//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE: 8407-8416 [DOI: 10.1109/CVPR.2018.00877http://dx.doi.org/10.1109/CVPR.2018.00877]
Ichim A E, Bouaziz S and Pauly M. 2015. Dynamic 3D avatar creation from hand-held video input. ACM Transactions on Graphics, 34(4): #45 [DOI: 10.1145/2766974http://dx.doi.org/10.1145/2766974]
Jensen H W and Buhler J. 2002. A rapid hierarchical rendering technique for translucent materials. ACM Transactions on Graphics, 21(3): 576-581 [DOI: 10.1145/566654.566619http://dx.doi.org/10.1145/566654.566619]
Jensen H W and Christensen P H. 1998. Efficient simulation of light transport in scenes with participating media using photon maps//Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques. Orlando, USA: Association for Computing Machinery: 311-320 [DOI: 10.1145/280814.280925http://dx.doi.org/10.1145/280814.280925]
Jensen H W, Marschner S R, Levoy M and Hanrahan P. 2001. A practical model for subsurface light transport//Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques. Los Angeles, USA: Association for Computing Machinery: 511-518 [DOI: 10.1145/383259.383319http://dx.doi.org/10.1145/383259.383319]
Jimenez J, Scully T, Barbosa N, Donner C, Alvarez X, Vieira T, Matts P, Orvalho V, Gutierrez D and Weyrich T. 2010. A practical appearance model for dynamic facial color. ACM Transactions on Graphics, 29(6): #141 [DOI: 10.1145/1882261.1866167http://dx.doi.org/10.1145/1882261.1866167]
Jimenez J, Sundstedt V and Gutierrez D. 2009. Screen-space perceptual rendering of human skin. ACM Transactions on Applied Perception, 6(4): #23 [DOI: 10.1145/1609967.1609970http://dx.doi.org/10.1145/1609967.1609970]
Jimenez J, Zsolnai K, Jarabo A, Freude C, Auzinger T, Wu X C, von der Pahlen J, Wimmer M and Gutierrez D. 2015. Separable subsurface scattering. Computer Graphics Forum, 34(6): 188-197 [DOI: 10.1111/cgf.12529http://dx.doi.org/10.1111/cgf.12529]
Johnson J, Ravi N, Reizenstein J, Novotny D, Tulsiani S, Lassner C and Branson S. 2020. Accelerating 3D deep learning with PyTorch3D//SIGGRAPH Asia 2020 Courses. [s.l.]: Association for Computing Machinery: #10 [DOI: 10.1145/3415263.3419160http://dx.doi.org/10.1145/3415263.3419160]
Joseph J H, Wiscombe W J and Weinman J A. 1976. The Delta-Eddington approximation for radiative flux transfer. Journal of the Atmospheric Sciences, 33(12): 2452-2459 [DOI: 10.1175/1520-0469(1976)0332452:TDEAFR2.0.CO;2http://dx.doi.org/10.1175/1520-0469(1976)0332452:TDEAFR2.0.CO;2]
Kampouris C, Zafeiriou S and Ghosh A. 2018. Diffuse-specular separation using binary spherical gradient illumination//Proceedings of the Eurographics Symposium on Rendering: Experimental Ideas and Implementations. Karlsruhe, Germany: Eurographics Association: 1-10 [DOI: 10.2312/sre.20181167http://dx.doi.org/10.2312/sre.20181167]
Karras T, Laine S and Aila T. 2019. A style-based generator architecture for generative adversarial networks//Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach, USA: IEEE: 4401-4410 [DOI: 10.1109/CVPR.2019.00453http://dx.doi.org/10.1109/CVPR.2019.00453]
Karras T, Laine S, Aittala M, Hellsten J, Lehtinen J and Aila T. 2020. Analyzing and improving the image quality of styleGAN//Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, USA: IEEE: 8107-8116 [DOI: 10.1109/CVPR42600.2020.00813http://dx.doi.org/10.1109/CVPR42600.2020.00813]
Kim H, Zollhöfer M, Tewari A, Thies J, Richardt C and Theobalt C. 2018. InverseFaceNet: deep monocular inverse face rendering//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE: 4625-4634 [DOI: 10.1109/CVPR.2018.00486http://dx.doi.org/10.1109/CVPR.2018.00486]
Kim J, Yang J L and Tong X. 2021. Learning high-fidelity face texture completion without complete face texture//Proceedings of 2021 IEEE/CVF International Conference on Computer Vision (ICCV). Montreal, Canada: IEEE: 13970-13979 [DOI: 10.1109/ICCV48922.2021.01373http://dx.doi.org/10.1109/ICCV48922.2021.01373]
Klaudiny M, Hilton A and Edge J. 2010. High-detail 3d capture of facial performance//Proceedings of International Symposium on 3D Data Processing, Visualization and Transmission. [s.l.]: [s.n.]
Koerner D, Novk J, Kutz P, Habel R and Jarosz W. 2016. Subdivision next-event estimation for path-traced subsurface scattering//The 27th Eurographics Symposium on Rendering—Experimental Ideas & Implementations. Dublin, Ireland: Eurographics Association: 91-96 [DOI: 10.5555/3056507.3056525http://dx.doi.org/10.5555/3056507.3056525]
Křivnek J and d’Eon E. 2014. A zero-variance-based sampling scheme for Monte Carlo subsurface scattering//ACM SIGGRAPH 2014 Talks (SIGGRAPH’14). Vancouver, Canada: Association for Computing Machinery: #66 [DOI: 10.1145/2614106.2614138http://dx.doi.org/10.1145/2614106.2614138]
Kubelka P and Munk F. 1931. An article on optics of paint layers. Z. Tech. Phys, 12(593-601): 259-274.
Lafortune E P and Willems Y D. 1996. Rendering participating media with bidirectional path tracing//Proceedings of the Eurographics Workshop on Rendering Techniques’96. Porto, Portugal: Springer-Verlag: 91-100 [DOI: 10.1007/978-3-7091-7484-5_10]
Lattas A, Lin Y M, Kannan J, Ozturk E, Filipi L, Guarnera G C, Chawla G and Ghosh A. 2022. Practical and scalable desktop-based high-quality facial capture//Proceedings of the 17th European Conference on Computer Vision. Tel Aviv, Israel: Springer: 522-537 [DOI: 10.1007/978-3-031-20068-7_30]
Lattas A, Moschoglou S, Gecer B, Ploumpis S, Triantafyllou V, Ghosh A and Zafeiriou S. 2020. AvatarMe: realistically renderable 3D facial reconstruction “in-the-wild”//Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE: 757-766 [DOI: 10.1109/CVPR42600.2020.00084]
Lattas A, Moschoglou S, Ploumpis S, Gecer B, Deng J K and Zafeiriou S. 2023. FitMe: deep photorealistic 3D morphable model avatars//Proceedings of 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Vancouver, Canada: IEEE: 8629-8640 [DOI: 10.1109/CVPR52729.2023.00834]
Lattas A, Moschoglou S, Ploumpis S, Gecer B, Ghosh A and Zafeiriou S. 2021. AvatarMe++: facial shape and BRDF inference with photorealistic rendering-aware GANs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(12): 9269-9284 [DOI: 10.1109/TPAMI.2021.3125598]
Lei B W, Ren J Q, Feng M Y, Cui M M and Xie X S. 2023. A hierarchical representation network for accurate and detailed face reconstruction from in-the-wild images//Proceedings of 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Vancouver, Canada: IEEE: 394-403 [DOI: 10.1109/CVPR52729.2023.00046]
Lensch H P A, Kautz J, Goesele M, Heidrich W and Seidel H P. 2001. Image-based reconstruction of spatially varying materials//Proceedings of the Eurographics Workshop on Rendering Techniques 2001. London, United Kingdom: Springer: 103-114 [DOI: 10.1007/978-3-7091-6242-2_10]
Li H S, Pellacini F and Torrance K E. 2005. A hybrid Monte Carlo method for accurate and efficient subsurface scattering//Proceedings of the 16th Eurographics Conference on Rendering Techniques. Konstanz, Germany: Eurographics Association: 283-290 [DOI: 10.2312/EGWR/EGSR05/283-290]
Li R L, Bladin K, Zhao Y J, Chinara C, Ingraham O, Xiang P D, Ren X L, Prasad P, Kishore B, Xing J and Li H. 2020. Learning formation of physically-based face attributes//Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE: 3407-3416 [DOI: 10.1109/CVPR42600.2020.00347]
Li T Y, Bolkart T, Black M J, Li H and Romero J. 2017. Learning a model of facial shape and expression from 4D scans. ACM Transactions on Graphics, 36(6): #194 [DOI: 10.1145/3130800.3130813]
Li T Y, Liu S C, Bolkart T, Liu J Y, Li H and Zhao Y J. 2021. Topologically consistent multi-view face inference using volumetric sampling//Proceedings of 2021 IEEE/CVF International Conference on Computer Vision. Montreal, Canada: IEEE: 3804-3814 [DOI: 10.1109/ICCV48922.2021.00380]
Lin J K, Yuan Y, Shao T J and Zhou K. 2020. Towards high-fidelity 3D face reconstruction from in-the-wild images using graph convolutional networks//Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE: 5890-5899 [DOI: 10.1109/CVPR42600.2020.00593]
Liu S C, Cai Y X, Chen H W, Zhou Y C and Zhao Y J. 2022. Rapid face asset acquisition with recurrent feature alignment. ACM Transactions on Graphics, 41(6): #214 [DOI: 10.1145/3550454.3555509]
Ma W C, Hawkins T, Peers P, Chabert C F, Weiss M and Debevec P E. 2007. Rapid acquisition of specular and diffuse normal maps from polarized spherical gradient illumination//Proceedings of the 18th Eurographics Conference on Rendering Techniques. Grenoble, France: Eurographics Association: 183-194 [DOI: 10.2312/EGWR/EGSR07/183-194]
Meng J, Hanika J and Dachsbacher C. 2016. Improving the Dwivedi sampling scheme. Computer Graphics Forum, 35(4): 37-44 [DOI: 10.1111/cgf.12947]
Mildenhall B, Srinivasan P P, Tancik M, Barron J T, Ramamoorthi R and Ng R. 2022. NeRF: representing scenes as neural radiance fields for view synthesis. Communications of the ACM, 65(1): 99-106 [DOI: 10.1145/3503250]
Nagano K, Fyffe G, Alexander O, Barbič J, Li H, Ghosh A and Debevec P. 2015. Skin microstructure deformation with displacement map convolution. ACM Transactions on Graphics, 34(4): #109 [DOI: 10.1145/2766894]
Nicodemus F E, Richmond J C, Hsia J J, Ginsberg I W and Limperis T. 1977. Geometrical Considerations and Nomenclature for Reflectance. U.S. Department of Commerce [DOI: 10.6028/nbs.mono.160]
Papaioannou A, Gecer B, Cheng S Y, Chrysos G, Deng J K, Fotiadou E, Kampouris C, Kollias D, Moschoglou S, Songsri-In K, Ploumpis S, Trigeorgis G, Tzirakis P, Ververas E, Zhou Y X, Ponniah A, Roussos A and Zafeiriou S. 2022. MimicME: a large scale diverse 4D database for facial expression analysis//Proceedings of the 17th European Conference on Computer Vision (ECCV 2022). Tel Aviv, Israel: Springer-Verlag: 467-484 [DOI: 10.1007/978-3-031-20074-8_27]
Pauly M, Kollig T and Keller A. 2000. Metropolis light transport for participating media//Proceedings of the Eurographics Workshop on Rendering Techniques 2000. Brno, Czech Republic: Springer-Verlag: 11-22 [DOI: 10.1007/978-3-7091-6303-0_2]
Paysan P, Lüthi M, Albrecht T, Lerch A, Amberg B, Santini F and Vetter T. 2009. Face reconstruction from skull shapes and physical attributes//Proceedings of the 31st DAGM Symposium on Pattern Recognition. Jena, Germany: Springer: 232-241 [DOI: 10.1007/978-3-642-03798-6_24]
Pérez P, Gangnet M and Blake A. 2003. Poisson image editing. ACM Transactions on Graphics, 22(3): 313-318 [DOI: 10.1145/882262.882269]
Piotraschke M and Blanz V. 2016. Automated 3D face reconstruction from multiple images using quality measures//Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, USA: IEEE: 3418-3427 [DOI: 10.1109/CVPR.2016.372]
Radford A, Kim J W, Hallacy C, Ramesh A, Goh G, Agarwal S, Sastry G, Askell A, Mishkin P, Clark J, Krueger G and Sutskever I. 2021. Learning transferable visual models from natural language supervision//Proceedings of the 38th International Conference on Machine Learning. [s.l.]: PMLR: #20
Rainer G, Ghosh A, Jakob W and Weyrich T. 2020. Unified neural encoding of BTFs. Computer Graphics Forum, 39(2): 167-178 [DOI: 10.1111/cgf.13921]
Rainer G, Jakob W, Ghosh A and Weyrich T. 2019. Neural BTF compression and interpolation. Computer Graphics Forum, 38(2): 235-244 [DOI: 10.1111/cgf.13633]
Raman C, Hewitt C, Wood E and Baltrušaitis T. 2023. Mesh-tension driven expression-based wrinkles for synthetic faces//Proceedings of 2023 IEEE/CVF Winter Conference on Applications of Computer Vision. Waikoloa, USA: IEEE: 3504-3514 [DOI: 10.1109/WACV56688.2023.00351]
Riviere J, Gotardo P, Bradley D, Ghosh A and Beeler T. 2020. Single-shot high-quality facial geometry and skin appearance capture. ACM Transactions on Graphics, 39(4): #81 [DOI: 10.1145/3386569.3392464]
Rombach R, Blattmann A, Lorenz D, Esser P and Ommer B. 2022. High-resolution image synthesis with latent diffusion models//Proceedings of 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New Orleans, USA: IEEE: 10674-10685 [DOI: 10.1109/CVPR52688.2022.01042]
Saito S, Wei L Y, Hu L W, Nagano K and Li H. 2017. Photorealistic facial texture inference using deep neural networks//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA: IEEE: 2326-2335 [DOI: 10.1109/CVPR.2017.250]
Schlick C. 1994. An inexpensive BRDF model for physically-based rendering. Computer Graphics Forum, 13(3): 233-246 [DOI: 10.1111/1467-8659.1330233]
Shen Y J, Gu J J, Tang X O and Zhou B L. 2020. Interpreting the latent space of GANs for semantic face editing//Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE: 9240-9249 [DOI: 10.1109/CVPR42600.2020.00926]
Shi F H, Wu H T, Tong X and Chai J X. 2014. Automatic acquisition of high-fidelity facial performances using monocular videos. ACM Transactions on Graphics, 33(6): #222 [DOI: 10.1145/2661229.2661290]
Smith W A P, Seck A, Dee H, Tiddeman B, Tenenbaum J B and Egger B. 2020. A morphable face albedo model//Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE: 5010-5019 [DOI: 10.1109/CVPR42600.2020.00506]
Stam J. 1995. Multiple scattering as a diffusion process//Proceedings of the Eurographics Workshop on Rendering Techniques’95. Dublin, Ireland: Springer: 41-50 [DOI: 10.1007/978-3-7091-9430-0_5]
Stratou G, Ghosh A, Debevec P and Morency L P. 2011. Effect of illumination on automatic expression recognition: a novel 3D relightable facial database//Proceedings of 2011 IEEE International Conference on Automatic Face and Gesture Recognition. Santa Barbara, USA: IEEE: 611-618 [DOI: 10.1109/FG.2011.5771467]
Sztrajman A, Rainer G, Ritschel T and Weyrich T. 2021. Neural BRDF representation and importance sampling. Computer Graphics Forum, 40(6): 332-346 [DOI: 10.1111/cgf.14335]
Tewari A, Zollhöfer M, Garrido P, Bernard F, Kim H, Pérez P and Theobalt C. 2018. Self-supervised multi-level face model learning for monocular reconstruction at over 250 Hz//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE: 2549-2559 [DOI: 10.1109/CVPR.2018.00270]
Tewari A, Zollhöfer M, Kim H, Garrido P, Bernard F, Pérez P and Theobalt C. 2017. MoFA: model-based deep convolutional face autoencoder for unsupervised monocular reconstruction//Proceedings of 2017 IEEE International Conference on Computer Vision. Venice, Italy: IEEE: 3735-3744 [DOI: 10.1109/ICCV.2017.401]
Torrance K E and Sparrow E M. 1967. Theory for off-specular reflection from roughened surfaces. Journal of the Optical Society of America, 57(9): 1105-1114 [DOI: 10.1364/JOSA.57.001105]
Tuan Tran A, Hassner T, Masi I and Medioni G. 2017. Regressing robust and discriminative 3D morphable models with a very deep neural network//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA: IEEE: 1493-1502 [DOI: 10.1109/CVPR.2017.163]
Valgaerts L, Wu C L, Bruhn A, Seidel H P and Theobalt C. 2012. Lightweight binocular facial performance capture under uncontrolled lighting. ACM Transactions on Graphics, 31(6): #187 [DOI: 10.1145/2366145.2366206]
Wang L H, Jacques S L and Zheng L Q. 1995. MCML—Monte Carlo modeling of light transport in multi-layered tissues. Computer Methods and Programs in Biomedicine, 47(2): 131-146 [DOI: 10.1016/0169-2607(95)01640-F]
Wang L Z, Zhao X C, Sun J X, Zhang Y X, Zhang H W, Yu T and Liu Y B. 2023a. StyleAvatar: real-time photo-realistic portrait avatar from a single video//ACM SIGGRAPH 2023 Conference Proceedings. Los Angeles, USA: Association for Computing Machinery: #67 [DOI: 10.1145/3588432.3591517]
Wang Q, Luan F J, Dai Y X, Huo Y C, Bao H J and Wang R. 2023b. A biophysically-based skin model for heterogeneous volume rendering [EB/OL]. [2023-09-25]. https://luanfujun.com/files/cvm2023/skin_paper.pdf
Wang T C, Liu M Y, Zhu J Y, Tao A, Kautz J and Catanzaro B. 2018. High-resolution image synthesis and semantic manipulation with conditional GANs//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE: 8798-8807 [DOI: 10.1109/CVPR.2018.00917]
Wang T F, Zhang B, Zhang T, Gu S Y, Bao J M, Baltrusaitis T, Shen J J, Chen D, Wen F, Chen Q F and Guo B N. 2023c. RODIN: a generative model for sculpting 3D digital avatars using diffusion//Proceedings of 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Vancouver, Canada: IEEE: 4563-4573 [DOI: 10.1109/CVPR52729.2023.00443]
Wang Y F, Holynski A, Zhang X M and Zhang X F. 2023d. SunStage: portrait reconstruction and relighting using the sun as a light stage//Proceedings of 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Vancouver, Canada: IEEE: 20792-20802 [DOI: 10.1109/CVPR52729.2023.01992]
Weber P, Hanika J and Dachsbacher C. 2017. Multiple vertex next event estimation for lighting in dense, forward-scattering media. Computer Graphics Forum, 36(2): 21-30 [DOI: 10.1111/cgf.13103]
Weiss S, Moulin J, Chandran P, Zoss G, Gotardo P and Bradley D. 2023. Graph-based synthesis for skin micro wrinkles. Computer Graphics Forum, 42(5): #14904 [DOI: 10.1111/cgf.14904]
Weyrich T, Matusik W, Pfister H, Bickel B, Donner C, Tu C E, McAndless J, Lee J, Ngan A, Jensen H W and Gross M. 2006. Analysis of human faces using a measurement-based skin reflectance model. ACM Transactions on Graphics, 25(3): 1013-1024 [DOI: 10.1145/1141911.1141987]
Wood E, Baltrušaitis T, Hewitt C, Dziadzio S, Cashman T J and Shotton J. 2021. Fake it till you make it: face analysis in the wild using synthetic data alone//Proceedings of 2021 IEEE/CVF International Conference on Computer Vision. Montreal, Canada: IEEE: 3661-3671 [DOI: 10.1109/ICCV48922.2021.00366]
Woodham R J. 1980. Photometric method for determining surface orientation from multiple images. Optical Engineering, 19(1): #191139 [DOI: 10.1117/12.7972479]
Wrenninge M, Villemin R and Hery C. 2017. Path traced subsurface scattering using anisotropic phase functions and non-exponential free flights [EB/OL]. [2023-09-25]. https://graphics.pixar.com/library/PathTracedSubsurface/paper.pdf
Wu F Z, Bao L C, Chen Y J, Ling Y G, Song Y B, Li S M, Ngan K N and Liu W. 2019. MVF-Net: multi-view 3D face morphable model regression//Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach, USA: IEEE: 959-968 [DOI: 10.1109/CVPR.2019.00105]
Wu M H, Zhu H, Huang L J, Zhuang Y Y, Lu Y X and Cao X. 2023. High-fidelity 3D face generation from natural language descriptions//Proceedings of 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Vancouver, Canada: IEEE: 4521-4530 [DOI: 10.1109/CVPR52729.2023.00439]
Wu W S, Wang B B and Yan L Q. 2022. A survey on rendering homogeneous participating media. Computational Visual Media, 8(2): 177-198 [DOI: 10.1007/s41095-021-0249-1]
Yamaguchi S, Saito S, Nagano K, Zhao Y J, Chen W K, Olszewski K, Morishima S and Li H. 2018. High-fidelity facial reflectance and geometry inference from an unconstrained image. ACM Transactions on Graphics, 37(4): #162 [DOI: 10.1145/3197517.3201364]
Yang H T, Zhu H, Wang Y R, Huang M K, Shen Q, Yang R G and Cao X. 2020. FaceScape: a large-scale high quality 3D face dataset and detailed riggable 3D face prediction//Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE: 598-607 [DOI: 10.1109/CVPR42600.2020.00068]
Yu C C, Lu G S, Zeng Y H, Sun J, Liang X D, Li H B, Xu Z B, Xu S C, Zhang W and Xu H. 2023. Towards high-fidelity text-guided 3D face generation and manipulation using only images//Proceedings of 2023 IEEE/CVF International Conference on Computer Vision. Paris, France: IEEE: 15280-15291 [DOI: 10.1109/ICCV51070.2023.01406]
Zeltner T, Rousselle F, Weidlich A, Clarberg P, Novák J, Bitterli B, Evans A, Davidovič T, Kallweit S and Lefohn A. 2023. Real-time neural appearance models [EB/OL]. [2023-09-25]. https://arxiv.org/pdf/2305.02678.pdf
Zhang L W, Qiu Q W, Lin H Y, Zhang Q X, Shi C, Yang W, Shi Y, Yang S B, Xu L and Yu J Y. 2023a. DreamFace: progressive generation of animatable 3D faces under text guidance. ACM Transactions on Graphics, 42(4): #138 [DOI: 10.1145/3592094]
Zhang L W, Zhao Z J, Cong X Z, Zhang Q X, Gu S Q, Gao Y C, Zheng R, Yang W, Xu L and Yu J Y. 2023b. HACK: learning a parametric head and neck model for high-fidelity animation. ACM Transactions on Graphics, 42(4): #41 [DOI: 10.1145/3592093]
Zhao X C, Wang L Z, Sun J X, Zhang H W, Suo J L and Liu Y B. 2024. HAvatar: high-fidelity head avatar via facial model conditioned neural radiance field. ACM Transactions on Graphics, 43(1): #6 [DOI: 10.1145/3626316]
Zheng M W, Zhang H Y, Yang H Y and Huang D. 2023. NeuFace: realistic 3D neural face rendering from multi-view images//Proceedings of 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Vancouver, Canada: IEEE: 16868-16877 [DOI: 10.1109/CVPR52729.2023.01618]
Zickler T, Mallick S P, Kriegman D J and Belhumeur P N. 2008. Color subspaces as photometric invariants//Proceedings of 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. New York, USA: IEEE: 2000-2010 [DOI: 10.1109/CVPR.2006.77]