Survey of visualization methods for multiscene visual cue information in immersive environments
Vol. 29, Issue 1, Pages 1-21 (2024)
Published: 16 January 2024
DOI: 10.11834/jig.221147
Ren Yangfu, Li Zhiqiang, Zhang Songhai. 2024. Survey of visualization methods for multiscene visual cue information in immersive environments. Journal of Image and Graphics, 29(01):0001-0021
An immersive environment presents users with a near-real experience through technologies such as virtual reality (VR). Virtual reality uses computers to generate simulated environments of the real world, providing users with rich immersion, interactivity, and imagination. In a virtual reality scene, users can quickly familiarize themselves with the environment through vision, obtain information inside and outside the scene, and interact with the scene visually, enhancing their perception. Augmented reality (AR) places virtual information in real scenes, and users can interact with that virtual information. To fully understand research on visual cue information in virtual reality and other immersive scenarios, and to trace the origins of visual cue methods, this paper categorizes work by cue position, function, and application. It first reviews recent methods in ordinary 2D scenes, then, through technical comparison and refinement, discusses in depth research on visualization methods for visual cue information in 3D virtual reality or augmented reality environments. It analyzes the similarities and differences between display in VR/AR environments and in ordinary 2D scenes, introduces research on functional uses of visual cues across scenes, such as guiding user attention, and describes applied research in practical settings such as panoramic video viewing. Through this discussion of out-of-view cues, in-scene label layout, and attention guidance in 2D and 3D scenes, as well as practical applications such as panoramic video viewing, the paper presents in detail the research prospects and development directions of visual cue information in immersive environments and multiple scenes.
An immersive environment presents a near-real experience to users through technologies such as virtual reality (VR). Virtual reality is a computer-generated simulation of the real world that provides users with rich immersion, interactivity, and imagination. In virtual reality, the eyes are the window onto the scene and an important means of interacting with the virtual world: users can quickly familiarize themselves with the environment through vision, obtain information inside and outside the scene, and interact with the scene visually, which enhances their perception. Augmented reality (AR) places virtual information in real scenes, and users can interact with that virtual information in place. To understand research on visual cue information across immersive scenarios such as virtual reality, together with the relationships among visual cue methods, this paper organizes the literature by cue display position, functional use, and application field, and surveys recent research on visualization methods for visual cue information in immersive multiscene environments. First, cue information outside the field of view is analyzed. Because of hardware limitations, the range of scene content that users can see varies across devices. Methods for visualizing cues to out-of-view content can be divided into overview + detail and focus + context approaches, which are discussed separately for 2D and 3D scenarios. Within the field of view, the focus shifts to label placement around models, a problem slightly different from both in-scene label layout and out-of-view cueing: when information is displayed within view, more attention is paid to the displayed information itself.
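The out-of-view techniques surveyed here (arrows, Halo, Wedge, and their variants) share a geometric core: project the direction from the viewport center toward the off-screen target onto the viewport border, and draw the cue there. A minimal sketch of that shared idea, assuming a rectangular viewport in screen coordinates (this illustrates the general principle, not any one cited paper's method):

```python
import math

def edge_cue_position(target, viewport_w, viewport_h):
    """Project an off-screen target's direction onto the viewport border.

    `target` is the target's (x, y) in screen coordinates; the viewport
    spans [0, viewport_w] x [0, viewport_h]. Returns the border point where
    an arrow- or halo-style cue could be drawn, plus the pointing angle,
    or None when the target is already on-screen.
    """
    cx, cy = viewport_w / 2.0, viewport_h / 2.0
    dx, dy = target[0] - cx, target[1] - cy
    if abs(dx) <= cx and abs(dy) <= cy:
        return None  # target is visible: no cue needed
    # Scale the center-to-target ray so it touches the nearer border.
    scale = min(cx / abs(dx) if dx else math.inf,
                cy / abs(dy) if dy else math.inf)
    return (cx + dx * scale, cy + dy * scale, math.atan2(dy, dx))
```

Halo-style methods additionally encode distance (e.g., by the arc's curvature), while Wedge uses an isosceles triangle whose legs converge on the target; both start from the same border projection.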
Whether content is occluded or overlapping determines how comfortably users can view it, and different applications impose different requirements on the layout and distribution of label information. The primary goal of in-scene label layout research is to reduce label overlap and clutter, lowering users' cognitive load during observation. From an information-management perspective, placing in-scene information sensibly while avoiding occlusion improves both users' scene awareness and their overall experience. Regarding the functional uses of visual cues, cues added in VR help users learn their position and other scene information as quickly as possible while roaming, much as a person entering an unfamiliar real-world place orients themselves with map navigation or eye-catching signs. Enhancing user perception through visual cues therefore improves the interactive experience, helping users grasp scene information quickly and find points of interest sooner. On the application side, this paper discusses problems encountered in panoramic video and their solutions, as well as problems in industrial and everyday scenarios, such as bringing the currently popular video barrage (bullet comments) into panoramic video. Panoramic video is viewed in immersive settings and applied in entertainment and industrial production.
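The in-scene label-layout methods discussed above commonly search candidate positions around each anchor and reject placements that overlap labels already laid out. A minimal greedy sketch of that pattern (the candidate offsets and the drop-on-failure policy are illustrative assumptions, not taken from any cited system):

```python
def place_labels(anchors, w, h):
    """Greedy label layout: for each anchor point, try candidate offsets
    (right, left, above, below) and keep the first w-by-h label rectangle
    that does not overlap any label placed so far."""
    def overlaps(a, b):
        ax, ay, bx, by = a
        cx, cy, dx, dy = b
        return ax < dx and cx < bx and ay < dy and cy < by

    placed = []
    for x, y in anchors:
        for ox, oy in ((2, 0), (-2 - w, 0), (0, -2 - h), (0, 2)):
            rect = (x + ox, y + oy, x + ox + w, y + oy + h)
            if not any(overlaps(rect, p) for p in placed if p):
                placed.append(rect)
                break
        else:
            placed.append(None)  # no free slot: drop or lead-line the label
    return placed
```

Real systems extend this core with priorities, leader lines, aesthetic penalty terms, and temporal-coherence constraints so that labels do not jump as the view changes.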
However, given the current state of technology, visualization methods for multiscene visual cue information in virtual reality environments still face challenges in the following aspects. First, in 2D scenes, methods need to improve the user experience on display devices of different sizes, raise the efficiency with which users acquire and understand information, reduce cognitive load during use, and further explore cue methods that adapt to screens of multiple sizes. Second, out-of-view cue methods in different 3D scenes should preserve immersion and offer more convenient interaction while providing users with comprehensive cues. Third, research on label placement and layout in the scene, from a user-experience perspective, mainly covers three aspects: how users view and interact with labels, the location and display time of labels, and how labels move with the scene; users care most about whether interaction is convenient, whether viewing is comfortable, and whether the effect on the scene's content is small enough. Fourth, map navigation and attention-guidance methods are critical for users to obtain information; improving the interactive experience of map navigation, improving the accuracy with which attention is guided to important information, and reducing the effect of cue information on immersion are therefore of great research value. Fifth, watching videos in virtual reality can give users a good viewing experience, so how to further enhance immersion, better guide the storyline, and reduce dizziness during viewing are all issues worth discussing.
Sixth, in scenarios such as industrial production, education, and entertainment, how visual cue information can support good interaction with users and help them complete a series of tasks is an important question. By discussing out-of-view cues, in-scene label layout, and attention guidance in 2D and 3D scenes, together with practical applications such as panoramic video viewing, this paper shows in detail the research prospects and development directions of visual cue information in immersive environments and multiple scenes.
immersive environment; virtual reality (VR); augmented reality (AR); multiscene; visual cue information; panoramic video; attention guidance
Alghofaili R, Sawahata Y, Huang H K, Wang H C, Shiratori T and Yu L F. 2019. Lost in style: gaze-driven adaptive aid for VR navigation//Proceedings of 2019 CHI Conference on Human Factors in Computing Systems. Glasgow, UK: ACM: #348 [DOI: 10.1145/3290605.3300578http://dx.doi.org/10.1145/3290605.3300578]
Azuma R and Furmanski C. 2003. Evaluating label placement for augmented reality view management//The 2nd IEEE and ACM International Symposium on Mixed and Augmented Reality. Tokyo, Japan: IEEE: 66-75 [DOI: 10.1109/ISMAR.2003.1240689http://dx.doi.org/10.1109/ISMAR.2003.1240689]
Barbotin N, Baumeister J, Cunningham A, Duval T, Grisvard O and Thomas B H. 2022. Evaluating visual cues for future airborne surveillance using simulated augmented reality displays//Proceedings of 2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). Christchurch, New Zealand: IEEE: 213-221 [DOI: 10.1109/VR51125.2022.00040http://dx.doi.org/10.1109/VR51125.2022.00040]
Baudisch P and Rosenholtz R. 2003. Halo: a technique for visualizing off-screen objects//Proceedings of 2003 SIGCHI Conference on Human Factors in Computing Systems. Ft. Lauderdale, USA: ACM: 481-488 [DOI: 10.1145/642611.642695http://dx.doi.org/10.1145/642611.642695]
Bell B, Feiner S and Höllerer T. 2001. View management for virtual and augmented reality//The 14th Annual ACM Symposium on User Interface Software and Technology. Orlando, Florida, USA: ACM: 101-110 [DOI: 10.1145/502348.502363http://dx.doi.org/10.1145/502348.502363]
Binetti N, Wu L Y, Chen S P, Kruijff E, Julier S and Brumby D P. 2021. Using visual and auditory cues to locate out-of-view objects in head-mounted augmented reality. Displays, 69: #102032 [DOI: 10.1016/j.displa.2021.102032http://dx.doi.org/10.1016/j.displa.2021.102032]
Biocca F, Tang A, Owen C and Xiao F. 2006. Attention funnel: omnidirectional 3D cursor for mobile augmented reality platforms//Proceedings of 2006 SIGCHI Conference on Human Factors in Computing Systems. Montréal, Canada: ACM: 1115-1122 [DOI: 10.1145/1124772.1124939http://dx.doi.org/10.1145/1124772.1124939]
Bork F, Schnelzer C, Eck U and Navab N. 2018. Towards efficient visual guidance in limited field-of-view head-mounted displays. IEEE Transactions on Visualization and Computer Graphics, 24(11): 2983-2992 [DOI: 10.1109/TVCG.2018.2868584http://dx.doi.org/10.1109/TVCG.2018.2868584]
Boustila S, Milgram P and Jamieson G A. 2020. Map displays and landmark effects on wayfinding in unfamiliar environments//Proceedings of 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW). Atlanta, USA: IEEE: 628-629 [DOI: 10.1109/VRW50115.2020.00165http://dx.doi.org/10.1109/VRW50115.2020.00165]
Burigat S and Chittaro L. 2011. Visualizing references to off-screen content on mobile devices: a comparison of Arrows, Wedge and Overview + Detail. Interacting with Computers, 23(2): 156-166 [DOI: 10.1016/j.intcom.2011.02.005http://dx.doi.org/10.1016/j.intcom.2011.02.005]
Burigat S, Chittaro L and Gabrielli S. 2006. Visualizing locations of off-screen objects on mobile devices: a comparative evaluation of three approaches//Proceedings of the 8th Conference on Human-Computer Interaction with Mobile Devices and Services. Helsinki, Finland: ACM: 239-246 [DOI: 10.1145/1152215.1152266http://dx.doi.org/10.1145/1152215.1152266]
Burigat S, Chittaro L and Vianello A. 2012. Dynamic visualization of large numbers of off-screen objects on mobile devices: an experimental comparison of wedge and overview + detail//Proceedings of the 14th International Conference on Human-Computer Interaction with Mobile Devices and Services. San Francisco, USA: ACM: 93-102 [DOI: 10.1145/2371574.2371590http://dx.doi.org/10.1145/2371574.2371590]
Burova A, Mäkelä J, Hakulinen J, Keskinen T, Heinonen H, Siltanen S and Turunen M. 2020. Utilizing VR and gaze tracking to develop AR solutions for industrial maintenance//Proceedings of 2020 CHI Conference on Human Factors in Computing Systems. Honolulu, USA: ACM: 1-13 [DOI: 10.1145/3313831.3376405http://dx.doi.org/10.1145/3313831.3376405]
Chittaro L and Burigat S. 2004. 3D location-pointing as a navigation aid in Virtual Environments//Proceedings of the Working Conference on Advanced Visual Interfaces. Gallipoli, Italy: ACM: 267-274 [DOI: 10.1145/989863.989910http://dx.doi.org/10.1145/989863.989910]
Chittaro L and Venkataraman S. 2006. Navigation aids for multi-floor virtual buildings: a comparative evaluation of two approaches//2006 ACM Symposium on Virtual Reality Software and Technology. Limassol, Cyprus: ACM: 227-235 [DOI: 10.1145/1180495.1180542http://dx.doi.org/10.1145/1180495.1180542]
Cosgrove S and LaViola J J. 2020. Visual guidance methods in immersive and interactive VR environments with connected 360° videos//Proceedings of 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW). Atlanta, USA: IEEE: 652-653 [DOI: 10.1109/VRW50115.2020.00177http://dx.doi.org/10.1109/VRW50115.2020.00177]
Evangelista A, Manghisi V M, Laera F, Gattullo M, Uva A E and Fiorentino M. 2022. CompassbAR: a technique for visualizing out-of-view objects in a mixed reality environment//Proceedings of the 2nd International Conference on Design Tools and Methods in Industrial Engineering. Rome, Italy: Springer: 141-148 [DOI: 10.1007/978-3-030-91234-5_14http://dx.doi.org/10.1007/978-3-030-91234-5_14]
Ghosh S, Winston L, Panchal N, Kimura-Thollander P, Hotnog J, Cheong D, Reyes G and Abowd G D. 2018. NotifiVR: exploring interruptions and notifications in virtual reality. IEEE Transactions on Visualization and Computer Graphics, 24(4): 1447-1456 [DOI: 10.1109/TVCG.2018.2793698http://dx.doi.org/10.1109/TVCG.2018.2793698]
Grasset R, Langlotz T, Kalkofen D, Tatzgern M and Schmalstieg D. 2012. Image-driven view management for augmented reality browsers//2012 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). Atlanta, USA: IEEE: 177-186 [DOI: 10.1109/ISMAR.2012.6402555http://dx.doi.org/10.1109/ISMAR.2012.6402555]
Gruenefeld U, El Ali A, Boll S and Heuten W. 2018b. Beyond halo and wedge: visualizing out-of-view objects on head-mounted virtual and augmented reality devices//Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services. Barcelona, Spain: ACM: #40 [DOI: 10.1145/3229434.3229438http://dx.doi.org/10.1145/3229434.3229438]
Gruenefeld U, El Ali A, Heuten W and Boll S. 2017b. Visualizing out-of-view objects in head-mounted augmented reality//Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services. Vienna, Austria: ACM: #81 [DOI: 10.1145/3098279.3122124http://dx.doi.org/10.1145/3098279.3122124]
Gruenefeld U, Ennenga D, El Ali A, Heuten W and Boll S. 2017a. EyeSee360: designing a visualization technique for out-of-view objects in head-mounted augmented reality//The 5th Symposium on Spatial User Interaction. Brighton, United Kingdom: ACM: 109-118 [DOI: 10.1145/3131277.3132175http://dx.doi.org/10.1145/3131277.3132175]
Gruenefeld U, Hsiao D and Heuten W. 2018a. EyeSeeX: visualization of out-of-view objects on small field-of-view augmented and virtual reality devices//The 7th ACM International Symposium on Pervasive Displays. Munich, Germany: ACM: #26 [DOI: 10.1145/3205873.3210706http://dx.doi.org/10.1145/3205873.3210706]
Gruenefeld U, Koethe I, Lange D, Weiß S and Heuten W. 2019b. Comparing techniques for visualizing moving out-of-view objects in head-mounted virtual reality//Proceedings of 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). Osaka, Japan: IEEE: 742-746 [DOI: 10.1109/VR.2019.8797725http://dx.doi.org/10.1109/VR.2019.8797725]
Gruenefeld U, Lange D, Hammer L, Boll S and Heuten W. 2018d. FlyingARrow: pointing towards out-of-view objects on augmented reality devices//The 7th ACM International Symposium on Pervasive Displays. Munich, Germany: ACM: #20 [DOI: 10.1145/3205873.3205881http://dx.doi.org/10.1145/3205873.3205881]
Gruenefeld U, Löcken A, Brueck Y, Boll S and Heuten W. 2018c. Where to look: exploring peripheral cues for shifting attention to spatially distributed out-of-view objects//Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications. Toronto, Canada: ACM: 221-228 [DOI: 10.1145/3239060.3239080http://dx.doi.org/10.1145/3239060.3239080]
Gruenefeld U, Prädel L and Heuten W. 2019a. Improving search time performance for locating out-of-view objects in augmented reality//Proceedings of Mensch Und Computer 2019. Hamburg, Germany: ACM: 481-485 [DOI: 10.1145/3340764.3344443http://dx.doi.org/10.1145/3340764.3344443]
Gruenefeld U, Stratmann T C, El Ali A, Boll S and Heuten W. 2018e. RadialLight: exploring radial peripheral LEDs for directional cues in head-mounted displays//Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services. Barcelona, Spain: ACM: #39 [DOI: 10.1145/3229434.3229437http://dx.doi.org/10.1145/3229434.3229437]
Gruenefeld U, Stratmann T C, Prädel L and Heuten W. 2018f. MonoculAR: a radial light display to point towards out-of-view objects on augmented reality devices//Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct. Barcelona, Spain: ACM: 16-22 [DOI: 10.1145/3236112.3236115http://dx.doi.org/10.1145/3236112.3236115]
Gustafson S, Baudisch P, Gutwin C and Irani P. 2008. Wedge: clutter-free visualization of off-screen locations//Proceedings of 2008 SIGCHI Conference on Human Factors in Computing Systems. Florence, Italy: ACM: 787-796 [DOI: 10.1145/1357054.1357179http://dx.doi.org/10.1145/1357054.1357179]
Gustafson S G and Irani P P. 2007. Comparing visualizations for tracking off-screen moving targets//Proceedings of CHI’07 Extended Abstracts on Human Factors in Computing Systems. San Jose, USA: ACM: 2399-2404 [DOI: 10.1145/1240866.1241014http://dx.doi.org/10.1145/1240866.1241014]
Gutkowski N, Quigley P, Ogle T, Hicks D, Taylor J, Tucker T and Bowman D A. 2021. Designing historical tours for head-worn AR//2021 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct). Bari, Italy: IEEE: 26-33 [DOI: 10.1109/ISMAR-Adjunct54149.2021.00016http://dx.doi.org/10.1109/ISMAR-Adjunct54149.2021.00016]
Harada Y and Ohyama J. 2022. Quantitative evaluation of visual guidance effects for 360-degree directions. Virtual Reality, 26(2): 759-770 [DOI: 10.1007/s10055-021-00574-7http://dx.doi.org/10.1007/s10055-021-00574-7]
Hartmann K, Götzelmann T, Ali K and Strothotte T. 2005. Metrics for functional and aesthetic label layouts//The 5th International Symposium on Smart Graphics. Frauenwörth Cloister, Germany: Springer: 115-126 [DOI: 10.1007/11536482_10http://dx.doi.org/10.1007/11536482_10]
Hossain Z, Hasan K, Liang H N and Irani P. 2012. EdgeSplit: facilitating the selection of off-screen objects//Proceedings of the 14th International Conference on Human-Computer Interaction with Mobile Devices and Services. San Francisco, USA: ACM: 79-82 [DOI: 10.1145/2371574.2371588http://dx.doi.org/10.1145/2371574.2371588]
Hu S, Malloch J and Reilly D. 2020. A comparative evaluation of techniques for locating out of view targets in virtual reality [EB/OL]. [2022-12-22]. https://openreview.net/pdf?id=1S3TXjkEVmHhttps://openreview.net/pdf?id=1S3TXjkEVmH
Huang Z H, Gao S B, Cai C X, Chen H L and Yan Y Y. 2020. Visual fusion analysis of multiple-processing traffic congestion detection. Journal of Image and Graphics, 25(2): 409-418
黄子赫, 高尚兵, 蔡创新, 陈浩霖, 严云洋. 2020. 多重处理的道路拥堵识别可视化融合分析. 中国图象图形学报, 25(2): 409-418 [DOI: 10.11834/jig.190272http://dx.doi.org/10.11834/jig.190272]
Ion A, Chang Y L B, Haller M, Hancock M and Scott S D. 2013. Canyon: providing location awareness of multiple moving objects in a detail view on large displays//Proceedings of 2013 SIGCHI Conference on Human Factors in Computing Systems. Paris, France: ACM: 3149-3158 [DOI: 10.1145/2470654.2466431http://dx.doi.org/10.1145/2470654.2466431]
Jo H, Hwang S, Park H and Ryu J H. 2011. Aroundplot: focus + context interface for off-screen objects in 3D environments. Computers and Graphics, 35(4): 841-853 [DOI: 10.1016/j.cag.2011.04.005http://dx.doi.org/10.1016/j.cag.2011.04.005]
Jung J, Lee H, Choi J, Nanda A, Gruenefeld U, Stratmann T and Heuten W. 2018. Ensuring safety in augmented reality from trade-off between immersion and situation awareness//2018 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). Munich, Germany: IEEE: 70-79 [DOI: 10.1109/ISMAR.2018.00032http://dx.doi.org/10.1109/ISMAR.2018.00032]
Köppel T, Gröller M E and Wu H Y. 2021. Context-responsive labeling in augmented reality//The 14th IEEE Pacific Visualization Symposium (PacificVis). Tianjin, China: IEEE: 91-100 [DOI: 10.1109/PacificVis52677.2021.00020http://dx.doi.org/10.1109/PacificVis52677.2021.00020]
Lang F and Machulla T. 2021. Pressing a button you cannot see: evaluating visual designs to assist persons with low vision through augmented reality//The 27th ACM Symposium on Virtual Reality Software and Technology. Osaka, Japan: ACM: #39 [DOI: 10.1145/3489849.3489873http://dx.doi.org/10.1145/3489849.3489873]
Lange D, Stratmann T C, Gruenefeld U and Boll S. 2020. HiveFive: immersion preserving attention guidance in virtual reality//Proceedings of 2020 CHI Conference on Human Factors in Computing Systems. Honolulu, USA: ACM: 1-13 [DOI: 10.1145/3313831.3376803http://dx.doi.org/10.1145/3313831.3376803]
Lauber F and Butz A. 2013. View management for driver assistance in an HMD//2013 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). Adelaide, Australia: IEEE: 1-6 [DOI: 10.1109/ISMAR.2013.6671828http://dx.doi.org/10.1109/ISMAR.2013.6671828]
Li G, Liu Y and Wang Y T. 2017. Evaluation of labelling layout methods in augmented reality//Proceedings of 2017 IEEE Virtual Reality (VR). Los Angeles, USA: IEEE: 351-352 [DOI: 10.1109/VR.2017.7892321http://dx.doi.org/10.1109/VR.2017.7892321]
Li G, Liu Y and Wang Y T. 2018. An empirical evaluation of labelling method in augmented reality//Proceedings of the 16th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry. Tokyo, Japan: ACM: #7 [DOI: 10.1145/3284398.3284422http://dx.doi.org/10.1145/3284398.3284422]
Li Y J, Shi J C, Zhang F L and Wang M. 2022. Bullet comments for 360° video//Proceedings of 2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). Christchurch, New Zealand: IEEE: 1-10 [DOI: 10.1109/VR51125.2022.00017http://dx.doi.org/10.1109/VR51125.2022.00017]
Lin T C, Singh R, Yang Y L, Nobre C, Beyer J, Smith M A and Pfister H. 2021. Towards an understanding of situated AR visualization for basketball free-throw training//Proceedings of 2021 CHI Conference on Human Factors in Computing Systems. Yokohama, Japan: ACM: #461 [DOI: 10.1145/3411764.3445649http://dx.doi.org/10.1145/3411764.3445649]
Lin T C, Yang Y L, Beyer J and Pfister H. 2023. Labeling out-of-view objects in immersive analytics to support situated visual searching. IEEE Transactions on Visualization and Computer Graphics, 29(3): 1831-1844 [DOI: 10.1109/TVCG.2021.3133511http://dx.doi.org/10.1109/TVCG.2021.3133511]
Lin Y C, Chang Y J, Hu H N, Cheng H T, Huang C W and Sun M. 2017b. Tell me where to look: investigating ways for assisting focus in 360° video//Proceedings of 2017 CHI Conference on Human Factors in Computing Systems. Denver, USA: ACM: 2535-2545 [DOI: 10.1145/3025453.3025757http://dx.doi.org/10.1145/3025453.3025757]
Lin Y T, Liao Y C, Teng S Y, Chung Y J, Chan L W and Chen B Y. 2017a. Outside-in: visualizing out-of-sight regions-of-interest in a 360° video using spatial picture-in-picture previews//Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology. Québec City, Canada: ACM: 255-265 [DOI: 10.1145/3126594.3126656http://dx.doi.org/10.1145/3126594.3126656]
Lindlbauer D, Feit A M and Hilliges O. 2019. Context-aware online adaptation of mixed reality interfaces//Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology. New Orleans, USA: ACM: 147-160 [DOI: 10.1145/3332165.3347945http://dx.doi.org/10.1145/3332165.3347945]
Lu W Q, Duh B L H and Feiner S. 2012. Subtle cueing for visual search in augmented reality//2012 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). Atlanta, USA: IEEE: 161-166 [DOI: 10.1109/ISMAR.2012.6402553http://dx.doi.org/10.1109/ISMAR.2012.6402553]
Lu W Q, Feng D, Feiner S, Zhao Q and Duh H B L. 2013. Subtle cueing for visual search in head-tracked head worn displays//2013 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). Adelaide, Australia: IEEE: 271-272 [DOI: 10.1109/ISMAR.2013.6671800http://dx.doi.org/10.1109/ISMAR.2013.6671800]
Luo W, Lehmann A, Yang Y S and Dachselt R. 2021. Investigating document layout and placement strategies for collaborative sensemaking in augmented reality//Extended Abstracts of 2021 CHI Conference on Human Factors in Computing Systems. Yokohama, Japan: ACM: #456 [DOI: 10.1145/3411763.3451588http://dx.doi.org/10.1145/3411763.3451588]
Madsen J B, Tatzgern M, Madsen C B, Schmalstieg D and Kalkofen D. 2016. Temporal coherence strategies for augmented reality labeling. IEEE Transactions on Visualization and Computer Graphics, 22(4): 1415-1423 [DOI: 10.1109/TVCG.2016.2518318http://dx.doi.org/10.1109/TVCG.2016.2518318]
Makita K, Kanbara M and Yokoya N. 2009. View management of annotations for wearable augmented reality//Proceedings of 2009 IEEE International Conference on Multimedia and Expo. New York, USA: IEEE: 982-985 [DOI: 10.1109/ICME.2009.5202661http://dx.doi.org/10.1109/ICME.2009.5202661]
Marquardt A, Kruijff E, Trepkowski C, Maiero J, Schwandt A, Hinkenjann A, Stuerzlinger W and Schöning J. 2018. Audio-tactile proximity feedback for enhancing 3D manipulation//The 24th ACM Symposium on Virtual Reality Software and Technology. Tokyo, Japan: ACM: #2 [DOI: 10.1145/3281505.3281525http://dx.doi.org/10.1145/3281505.3281525]
Marquardt A, Trepkowski C, Eibich T D, Maiero J and Kruijff E. 2019. Non-visual cues for view management in narrow field of view augmented reality displays//2019 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). Beijing, China: IEEE: 190-201 [DOI: 10.1109/ISMAR.2019.000-3http://dx.doi.org/10.1109/ISMAR.2019.000-3]
Marquardt A, Trepkowski C, Eibich T D, Maiero J, Kruijff E and Schöning J. 2020. Comparing non-visual and visual guidance methods for narrow field of view augmented reality displays. IEEE Transactions on Visualization and Computer Graphics, 26(12): 3389-3401 [DOI: 10.1109/TVCG.2020.3023605http://dx.doi.org/10.1109/TVCG.2020.3023605]
Mathis F, Zhang X S, McGill M, Simeone A L and Khamis M. 2020. Assessing social text placement in mixed reality TV//Proceedings of 2020 ACM International Conference on Interactive Media Experiences. Cornella, Spain: ACM: 205-211 [DOI: 10.1145/3391614.3399402http://dx.doi.org/10.1145/3391614.3399402]
McNamara A, Boyd K, George J, Jones W, Oh S and Suther A. 2019. Information placement in virtual reality//Proceedings of 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). Osaka, Japan: IEEE: 1765-1769 [DOI: 10.1109/VR.2019.8797891http://dx.doi.org/10.1109/VR.2019.8797891]
Miau D and Feiner S. 2016. Personalized compass: a compact visualization for direction and location//Proceedings of 2016 CHI Conference on Human Factors in Computing Systems. San Jose, USA: ACM: 5114-5125 [DOI: 10.1145/2858036.2858068http://dx.doi.org/10.1145/2858036.2858068]
Miyagawa S. 2022. OptWedge: cognitive optimized guidance toward off-screen POIs [EB/OL]. [2022-06-09]. https://arxiv.org/pdf/2206.04293.pdfhttps://arxiv.org/pdf/2206.04293.pdf
Miyashita T, Meier P, Tachikawa T, Orlic S, Eble T, Scholz V, Gapel A, Gerl O, Arnaudov S and Lieberknecht S. 2008. An augmented reality museum guide//The 7th IEEE/ACM International Symposium on Mixed and Augmented Reality. Cambridge, UK: IEEE: 103-106 [DOI: 10.1109/ISMAR.2008.4637334http://dx.doi.org/10.1109/ISMAR.2008.4637334]
Orlosky J, Kiyokawa K, Toyama T and Sonntag D. 2015. Halo content: context-aware viewspace management for non-invasive augmented reality//Proceedings of the 20th International Conference on Intelligent User Interfaces. Atlanta, USA: ACM: 369-373 [DOI: 10.1145/2678025.2701375http://dx.doi.org/10.1145/2678025.2701375]
Peterson S, Axholt M and Ellis S R. 2008b. Managing visual clutter: a generalized technique for label segregation using stereoscopic disparity//Proceedings of 2008 IEEE Virtual Reality Conference. Reno, USA: IEEE: 169-176 [DOI: 10.1109/VR.2008.4480769http://dx.doi.org/10.1109/VR.2008.4480769]
Peterson S D, Axholt M, Cooper M and Ellis S R. 2009. Visual clutter management in augmented reality: effects of three label separation methods on spatial judgments//2009 IEEE Symposium on 3D User Interfaces. Lafayette, USA: IEEE: 111-118 [DOI: 10.1109/3DUI.2009.4811215http://dx.doi.org/10.1109/3DUI.2009.4811215]
Peterson S D, Axholt M, Cooper M and Ellis S R. 2010. Detection thresholds for label motion in visually cluttered displays//Proceedings of 2010 IEEE Virtual Reality Conference (VR). Boston, USA: IEEE: 203-206 [DOI: 10.1109/VR.2010.5444788http://dx.doi.org/10.1109/VR.2010.5444788]
Peterson S D, Axholt M and Ellis S R. 2008a. Comparing disparity based label segregation in augmented and virtual reality//2008 ACM Symposium on Virtual Reality Software and Technology. Bordeaux, France: ACM: 285-286 [DOI: 10.1145/1450579.1450655http://dx.doi.org/10.1145/1450579.1450655]
Petford J, Carson I, Nacenta M A and Gutwin C. 2019. A comparison of notification techniques for out-of-view objects in full-coverage displays//Proceedings of 2019 CHI Conference on Human Factors in Computing Systems. Glasgow, UK: ACM: #58 [DOI: 10.1145/3290605.3300288http://dx.doi.org/10.1145/3290605.3300288]
Ping J M, Liu Y and Weng D D. 2021. Review of depth perception in virtual and real fusion environment. Journal of Image and Graphics, 26(6): 1503-1520
平佳敏, 刘越, 翁冬冬. 2021. 虚实融合场景中的深度感知研究综述. 中国图象图形学报, 26(6): 1503-1520 [DOI: 10.11834/jig.210027http://dx.doi.org/10.11834/jig.210027]
Polys N F, Kim S and Bowman D A. 2005. Effects of information layout, screen size, and field of view on user performance in information-rich virtual environments//2005 ACM Symposium on Virtual Reality Software and Technology. Monterey, USA: ACM: 46-55 [DOI: 10.1145/1101616.1101626http://dx.doi.org/10.1145/1101616.1101626]
Renner P and Pfeiffer T. 2017. Attention guiding techniques using peripheral vision and eye tracking for feedback in augmented-reality-based assistance systems//2017 IEEE Symposium on 3D User Interfaces (3DUI). Angeles, USA: IEEE: 186-194 [DOI: 10.1109/3DUI.2017.7893338http://dx.doi.org/10.1109/3DUI.2017.7893338]
Rothe S, Buschek D and Hußmann H. 2019. Guidance in cinematic virtual reality-taxonomy, research status and challenges. Multimodal Technologies and Interaction, 3(1): #19 [DOI: 10.3390/mti3010019]
Rzayev R, Korbely S, Maul M, Schark A, Schwind V and Henze N. 2020. Effects of position and alignment of notifications on AR glasses during social interaction//Proceedings of the 11th Nordic Conference on Human-Computer Interaction: Shaping Experiences, Shaping Society. Tallinn, Estonia: ACM: #30 [DOI: 10.1145/3419249.3420095]
Rzayev R, Mayer S, Krauter C and Henze N. 2019. Notification in VR: the effect of notification placement, task and environment//Proceedings of the Annual Symposium on Computer-Human Interaction in Play. Barcelona, Spain: ACM: 199-211 [DOI: 10.1145/3311350.3347190]
Satriadi K A, Ens B, Cordeil M, Czauderna T and Jenny B. 2020. Maps around me: 3D multiview layouts in immersive spaces. Proceedings of the ACM on Human-Computer Interaction, 4(ISS): #201 [DOI: 10.1145/3427329]
Schinke T, Henze N and Boll S. 2010. Visualization of off-screen objects in mobile augmented reality//Proceedings of the 12th International Conference on Human Computer Interaction with Mobile Devices and Services. Lisbon, Portugal: ACM: 313-316 [DOI: 10.1145/1851600.1851655]
Schmitz A, MacQuarrie A, Julier S, Binetti N and Steed A. 2020. Directing versus attracting attention: exploring the effectiveness of central and peripheral cues in panoramic videos//Proceedings of 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). Atlanta, USA: IEEE: 63-72 [DOI: 10.1109/VR46266.2020.00024]
Seeliger A, Merz G, Holz C and Feuerriegel S. 2021. Exploring the effect of visual cues on eye gaze during AR-guided picking and assembly tasks//Proceedings of 2021 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct). Bari, Italy: IEEE: 159-164 [DOI: 10.1109/ISMAR-Adjunct54149.2021.00041]
Seeliger A, Weibel R P and Feuerriegel S. 2022. Context-adaptive visual cues for safe navigation in augmented reality using machine learning. International Journal of Human-Computer Interaction [EB/OL]. [2022-09-22]. https://www.tandfonline.com/doi/epdf/10.1080/10447318.2022.2122114?needAccess=true
Sidenmark L, Kiefer N and Gellersen H. 2019. Subtitles in interactive virtual reality: using gaze to address depth conflicts [EB/OL]. [2022-12-22]. https://eprints.lancs.ac.uk/id/eprint/132411/4/Subtitles_in_Interactive_Virtual_Reality_Using_Gaze_to_Address_Depth_Conflicts.pdf
Siu T and Herskovic V. 2013. SidebARs: improving awareness of off-screen elements in mobile augmented reality//Proceedings of 2013 Chilean Conference on Human-Computer Interaction. Temuco, Chile: ACM: 36-41 [DOI: 10.1145/2535597.2535608]
Tai Y H and Shi J S. 2021. Application of immersive 3D imaging technology in the clinic medical field. Journal of Image and Graphics, 26(6): 1536-1544
邰永航, 石俊生. 2021. 沉浸式立体显示技术在临床医学领域中的应用. 中国图象图形学报, 26(6): 1536-1544 [DOI: 10.11834/jig.200851]
Tatzgern M, Kalkofen D, Grasset R and Schmalstieg D. 2014. Hedgehog labeling: view management techniques for external labels in 3D space//Proceedings of 2014 IEEE Virtual Reality (VR). Minneapolis, USA: IEEE: 27-32 [DOI: 10.1109/VR.2014.6802046]
Tong L W, Jung S, Li R C, Lindeman R W and Regenbrecht H. 2020. Action units: exploring the use of directorial cues for effective storytelling with swivel-chair virtual reality//Proceedings of the 32nd Australian Conference on Human-Computer Interaction. Sydney, Australia: ACM: 45-54 [DOI: 10.1145/3441000.3441063]
Tonnis M and Klinker G. 2006. Effective control of a car driver’s attention for visual and acoustic guidance towards the direction of imminent dangers//Proceedings of 2006 IEEE/ACM International Symposium on Mixed and Augmented Reality. Santa Barbara, USA: IEEE: 13-22 [DOI: 10.1109/ISMAR.2006.297789]
Trepkowski C, Marquardt A, Eibich T D, Shikanai Y, Maiero J, Kiyokawa K, Kruijff E, Schoning J and König P. 2022. Multisensory proximity and transition cues for improving target awareness in narrow field of view augmented reality displays. IEEE Transactions on Visualization and Computer Graphics, 28(2): 1342-1362 [DOI: 10.1109/TVCG.2021.3116673]
Wallgrün J O, Bagher M M, Sajjadi P and Klippel A. 2020. A comparison of visual attention guiding approaches for 360° image-based VR tours//Proceedings of 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). Atlanta, USA: IEEE: 83-91 [DOI: 10.1109/VR46266.2020.00026]
Wang M, Li Y J, Zhang W X, Richardt C and Hu S M. 2020. Transitioning360: content-aware NFoV virtual camera paths for 360° video playback//Proceedings of 2020 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). Porto de Galinhas, Brazil: IEEE: 185-194 [DOI: 10.1109/ISMAR50242.2020.00040]
Weiß M, Angerbauer K, Voit A, Schwarzl M, Sedlmair M and Mayer S. 2021. Revisited: comparison of empirical methods to evaluate visualizations supporting crafting and assembly purposes. IEEE Transactions on Visualization and Computer Graphics, 27(2): 1204-1213 [DOI: 10.1109/TVCG.2020.3030400]
Yamaguchi S, Ogawa N and Narumi T. 2021. Now I’m not afraid: reducing fear of missing out in 360° videos on a head-mounted display using a panoramic thumbnail//Proceedings of 2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). Bari, Italy: IEEE: 176-183 [DOI: 10.1109/ISMAR52148.2021.00032]
Yu D F, Liang H N, Fan K X, Zhang H, Fleming C and Papangelis K. 2020. Design and evaluation of visualization techniques of off-screen and occluded targets in virtual reality environments. IEEE Transactions on Visualization and Computer Graphics, 26(9): 2762-2774 [DOI: 10.1109/TVCG.2019.2905580]
Zellweger P T, Mackinlay J D, Good L, Stefik M and Baudisch P. 2003. City lights: contextual views in minimal space//CHI’03 Extended Abstracts on Human Factors in Computing Systems. Ft. Lauderdale, USA: ACM: 838-839 [DOI: 10.1145/765891.766022]
Zhang F and Sun H Q. 2005. Dynamic labeling management in virtual and augmented environments//Proceedings of the 9th International Conference on Computer Aided Design and Computer Graphics (CAD-CG’05). Hong Kong, China: IEEE: 376-382 [DOI: 10.1109/CAD-CG.2005.36]
Zhou Z J, Wang L L and Popescu V. 2021. A partially-sorted concentric layout for efficient label localization in augmented reality. IEEE Transactions on Visualization and Computer Graphics, 27(11): 4087-4096 [DOI: 10.1109/TVCG.2021.3106492]