面向虚实融合的人机交互
Human-computer interaction for virtual-real fusion
2023, Vol. 28, No. 6, Pages: 1513-1542
Print publication date: 2023-06-16
DOI: 10.11834/jig.230020
陶建华, 龚江涛, 高楠, 傅四维, 梁山, 喻纯. 2023. 面向虚实融合的人机交互. 中国图象图形学报, 28(06):1513-1542
Tao Jianhua, Gong Jiangtao, Gao Nan, Fu Siwei, Liang Shan, Yu Chun. 2023. Human-computer interaction for virtual-real fusion. Journal of Image and Graphics, 28(06):1513-1542
面向虚实融合的人机交互涉及计算机科学、认知心理学、人机工程学、多媒体技术和虚拟现实等领域,旨在提高人机交互的效率,同时响应人类认知与情感的需求,在办公教育、机器人和虚拟/增强现实设备中都有广泛应用。本文从人机交互涉及感知计算、人与机器人交互及协同、个性化人机对话和数据可视化等4个维度系统阐述面向虚实融合人机交互的发展现状。对国内外研究现状进行对比,展望未来的发展趋势。本文认为兼具可迁移与个性化的感知计算、具备用户行为深度理解的人机协同、用户自适应的对话系统等是本领域的重要研究方向。
Virtual-real human-computer interaction (VR-HCI) is an interdisciplinary field that aims to improve the efficiency of human-computer interaction while responding to human cognitive and emotional needs. It integrates domains such as computer science, cognitive psychology, ergonomics, multimedia technology, and virtual reality. With the advancement of big data and artificial intelligence, VR-HCI benefits industries such as education, healthcare, robotics, and entertainment, and is increasingly recognized as a key supporting technology for metaverse-related development. In recent years, machine learning-based analysis of human cognition and emotion has matured, particularly in applications such as robotics and wearable interaction devices. As a result, VR-HCI research has focused on the challenging problem of building "intelligent" and "anthropomorphic" interaction systems. This review examines the development of VR-HCI from four aspects: perceptual computing, human-machine interaction and coordination, human-computer dialogue interaction, and data visualization.
Perceptual computing aims to model human daily behavior, cognitive processes, and emotional contexts for personalized and efficient human-computer interaction. We discuss perception in virtual-real fusion scenarios along three dimensions: pathways, objects, and scenes. Perceptual pathways fall into three primary types: vision-based, sensor-based, and wireless non-contact. Object-based perception is subdivided into individual and group contexts, while scene-based perception is subdivided into physical behavior contexts and cognitive contexts.
Human-machine interaction and coordination draws on technical disciplines such as mechanical and electrical engineering, computer and control science, and artificial intelligence, as well as humanistic disciplines such as psychology and design. By functional mechanism, human-robot interaction can be categorized into 1) collaborative operation robots, 2) service and assistance robots, and 3) social, entertainment, and educational robots.
A human-computer dialogue interaction system consists of key modules including speech recognition, speaker recognition, the dialogue system, and speech synthesis. A microphone picks up the speech signal, which the speech recognition module converts into text. The dialogue system processes the text, infers the user's intention, and generates a reply. Finally, the speech synthesis module converts the reply into speech, completing the interaction loop. In recent years, the intelligence of such systems has been further improved by incorporating users' inherent characteristics, such as pronunciation, preferences, and emotions, to optimize the individual modules.
Data transformation and visualization centers on data cleaning tasks over tabular data, which various tools in R and Python can perform. Many software systems, such as Microsoft Excel, Tableau Prep Builder, and OpenRefine, provide graphical user interfaces that help users complete data transformation tasks, and recommendation-based interactive systems further allow users to transform data easily. Researchers have also developed tools for transforming network structures. We analyze this topic from four aspects: 1) interactive data transformation, 2) data transformation visualization, 3) visual comparison of data tables, and 4) code visualization in human-computer interaction systems.
We identify several future research directions for VR-HCI: 1) designing generalized and personalized perceptual computing, 2) building human-machine cooperation with a deep understanding of user behavior, and 3) developing user-adaptive dialogue systems. Perceptual computing still lacks joint perception across multiple devices and attention to individual differences in human behavior. Moreover, most perceptual research relies on generalized models that neglect individual differences, which lowers perceptual accuracy and hinders deployment in real settings. Future perceptual computing research therefore needs to be multimodal, transferable, personalized, and scalable.
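The dialogue pipeline summarized above (microphone signal → speech recognition → dialogue system → speech synthesis) can be sketched as a minimal stub. All function names and the toy reply logic below are illustrative assumptions, not APIs from any system surveyed in this paper:

```python
# Minimal sketch of the dialogue-interaction loop: speech signal -> ASR text ->
# dialogue management -> synthesized speech. Every component is an illustrative
# stub; a real system would plug in trained ASR, NLU, and TTS models here.

def recognize_speech(audio: bytes) -> str:
    """Stub ASR module: converts a speech signal to text."""
    # Stand-in: pretend the audio bytes already encode their transcript.
    return audio.decode("utf-8")

def dialogue_system(text: str, user_profile: dict) -> str:
    """Stub dialogue manager: infers intent and generates a reply,
    optionally personalized with user attributes (name, preferences, emotion)."""
    if "weather" in text.lower():
        return f"Hello {user_profile.get('name', 'user')}, it is sunny today."
    return "Sorry, could you rephrase that?"

def synthesize_speech(reply: str) -> bytes:
    """Stub TTS module: converts reply text back to a speech signal."""
    return reply.encode("utf-8")

def interaction_loop(audio: bytes, user_profile: dict) -> bytes:
    text = recognize_speech(audio)               # microphone signal -> text
    reply = dialogue_system(text, user_profile)  # intent understanding -> reply
    return synthesize_speech(reply)              # reply text -> speech

# One example turn:
out = interaction_loop(b"What is the weather?", {"name": "Alice"})
```

Personalization, as described above, enters through `user_profile`: the same loop structure holds when the dialogue module conditions on user pronunciation, preferences, or emotional state.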
For human-machine interaction and coordination, a systematic approach is needed to construct design frameworks for human-machine collaboration, which requires in-depth research on user understanding, the construction of interaction datasets, and long-term user experience. For human-computer dialogue interaction, current research mostly focuses on open-domain systems that use pre-trained models to improve the modeling of emotions, intentions, and knowledge; future research should aim at more intelligent human-machine conversation that caters to individual user needs. For data transformation and visualization in HCI, future directions comprise two parts: 1) improving the intelligence of data transformation through interaction for individual data workers, e.g., appropriate algorithms for multiple data types, recommendations consistent with user behavior, and real-time analysis that supports massive data; and 2) integrating data transformation and visualization among multiple users, including designing collaboration mechanisms, resolving conflicts in data operations, visualizing complex data transformation code, evaluating the effectiveness of various visualization methods, and recording and displaying the behavior of multiple users. In summary, the development of VR-HCI presents new opportunities and challenges for human-computer interaction toward the metaverse, with the potential to seamlessly integrate virtual and real worlds.
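As a concrete illustration of the tabular transformation tasks discussed above, the sketch below implements a wide-to-long "unpivot", a common wrangling step that GUI tools such as OpenRefine or Tableau Prep Builder support interactively. This is a pure-Python sketch; the function name, table, and column names are hypothetical:

```python
# Wide-to-long "unpivot" of a small table: one classic tabular data
# transformation. Column names and data here are purely illustrative.

def unpivot(rows, id_col, value_cols):
    """Reshape a wide table (one column per measurement) into a long table
    of (id, variable, value) records."""
    long_rows = []
    for row in rows:
        for col in value_cols:
            long_rows.append({id_col: row[id_col],
                              "variable": col,
                              "value": row[col]})
    return long_rows

# Hypothetical wide table: one row per city, one column per year.
wide = [
    {"city": "Beijing", "2021": 8, "2022": 9},
    {"city": "Hangzhou", "2021": 5, "2022": 6},
]
long_table = unpivot(wide, "city", ["2021", "2022"])
# Each wide row yields one long record per year column.
```

The long form produced here is the shape most visualization grammars expect, which is why interactive transformation tools and visual comparison of data tables are so tightly coupled in practice.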
人机交互(HCI); 感知计算; 人机协同; 对话系统; 数据可视化
human-computer interaction (HCI); perceptual computing; human-machine cooperation; dialogue system; data visualization
Abbou C C, Hoznek A, Salomon L, Olsson L E, Lobontiu A, Saint F, Cicco A, Antiphon P and Chopin D. 2017. Laparoscopic radical prostatectomy with a remote controlled robot. The Journal of Urology, 197(2S): S210-S212 [DOI: 10.1016/j.juro.2016.10.107http://dx.doi.org/10.1016/j.juro.2016.10.107]
Abedjan Z, Morcos J, Ilyas I F, Ouzzani M, Papotti P and Stonebraker M. 2016. DataXFormer: a robust transformation discovery system//Proceedings of the 32nd IEEE International Conference on Data Engineering (ICDE). Helsinki, Finland: IEEE: 1134-1145 [DOI: 10.1109/ICDE.2016.7498319http://dx.doi.org/10.1109/ICDE.2016.7498319]
Adib F and Katabi D. 2013. See through walls with WiFi//Proceedings of the ACM SIGCOMM 2013 Conference on SIGCOMM. Hong Kong, China: ACM: 75-86 [DOI: 10.1145/2486001.2486039http://dx.doi.org/10.1145/2486001.2486039]
Ajoudani A, Fang C, Tsagarakis N and Bicchi A. 2018. Reduced-complexity representation of the human arm active endpoint stiffness for supervisory control of remote manipulation. The International Journal of Robotics Research, 37(1): 155-167 [DOI: 10.1177/0278364917744035http://dx.doi.org/10.1177/0278364917744035]
An S M, Ling Z H and Dai L R. 2017. Emotional statistical parametric speech synthesis using LSTM-RNNS//Proceedings of 2017 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference. Kuala Lumpur, Malaysia: IEEE: 1613-1616 [DOI: 10.1109/APSIPA.2017.8282282http://dx.doi.org/10.1109/APSIPA.2017.8282282]
Arief-Ang I B, Hamilton M and Salim F D. 2018. A scalable room occupancy prediction with transferable time series decomposition of CO2 sensor data. ACM Transactions on Sensor Networks (TOSN), 14(3/4): #21 [DOI: 10.1145/3217214http://dx.doi.org/10.1145/3217214]
Arshad S, Feng C H, Liu Y H, Hu Y P, Yu R Y, Zhou S W and Li H. 2017. Wi-chase: a WiFi based human activity recognition system for sensorless environments//Proceedings of the 18th IEEE International Symposium on a World of Wireless, Mobile and Multimedia Networks (WoWMoM). Macau, China: IEEE: #7974315 [DOI: 10.1109/WoWMoM.2017.7974315http://dx.doi.org/10.1109/WoWMoM.2017.7974315]
Atal B S. 1974. Effectiveness of linear prediction characteristics of the speech wave for automatic speaker identification and verification. Journal of the Acoustical Society of America, 55(6): 1304-1312 [DOI: 10.1121/1.1914702http://dx.doi.org/10.1121/1.1914702]
Baevski A, Hsu W N, Conneau A and Auli M. 2021. Unsupervised speech recognition//Proceedings of the 35th Conference on Neural Information Processing Systems. Montreal, Canada: Curran Associates, Inc.: 27826-27839
Bainbridge W A, Hart J W, Kim E S and Scassellati B. 2011. The benefits of interactions with physically present robots over video-displayed agents. International Journal of Social Robotics, 3(1): 41-52 [DOI: 10.1007/s12369-010-0082-7http://dx.doi.org/10.1007/s12369-010-0082-7]
Balogh G and Beszédes Á. 2013. CodeMetrpolis — A minecraft based collaboration tool for developers//Proceedings of the 1st IEEE Working Conference on Software Visualization (VISSOFT). Eindhoven, Netherlands: IEEE: #6650528 [DOI: 10.1109/VISSOFT.2013.6650528http://dx.doi.org/10.1109/VISSOFT.2013.6650528]
Bazzano F and Lamberti F. 2018. Human-robot interfaces for interactive receptionist systems and wayfinding applications. Robotics, 7(3): #56 [DOI: 10.3390/robotics7030056http://dx.doi.org/10.3390/robotics7030056]
Belpaeme T, Kennedy J, Ramachandran A, Scassellati B and Tanaka F. 2018. Social robots for education: a review. Science Robotics, 3(21): #eaat5954 [DOI: 10.1126/scirobotics.aat5954http://dx.doi.org/10.1126/scirobotics.aat5954]
Bhattacharjee T, Gordon E K, Scalise R, Cabrera M E, Caspi A, Cakmak M and Srinivasa S S. 2020. Is more autonomy always better? exploring preferences of users with mobility impairments in robot-assisted feeding//Proceedings of the 15th ACM/IEEE International Conference on Human-Robot Interaction. Cambridge, UK: ACM: 181-190 [DOI: 10.1145/3319502.3374818http://dx.doi.org/10.1145/3319502.3374818]
Bigelow A, Nobre C, Meyer M and Lex A. 2019. Origraph: interactive network wrangling//Proceedings of 2019 IEEE Conference on Visual Analytics Science and Technology (VAST). Vancouver, Canada: 81-92 [DOI: 10.1109/VAST47406.2019.8986909http://dx.doi.org/10.1109/VAST47406.2019.8986909]
Bogomolov A, Lepri B, Staiano J, Oliver N, Pianesi F and Pentland A. 2014. Once upon a crime: towards crime prediction from demographics and mobile data//Proceedings of the 16th International Conference on Multimodal Interaction. Istanbul, Turkey: ACM: 427-434 [DOI: 10.1145/2663204.2663254http://dx.doi.org/10.1145/2663204.2663254]
Breazeal C L. 2000. Sociable Machines: Expressive Social Exchange between Humans and Robots. Cambridge, USA: Massachusetts Institute of Technology
Casner S M, Hutchins E L and Norman D. 2016. The challenges of partially automated driving. Communications of the ACM, 59(5): 70-77 [DOI: 10.1145/2830565http://dx.doi.org/10.1145/2830565]
Chen J. 2021. Study on Cyber Physical Modeling and Control Method of Intelligent Vehicle in Human Vehicle Co-piloting. Chongqing: Chongqing University
陈进. 2021. 智能汽车人机共驾信息物理建模及控制方法研究. 重庆: 重庆大学 [DOI: 10.27670/d.cnki.gcqdu.2021.003802http://dx.doi.org/10.27670/d.cnki.gcqdu.2021.003802]
Chen L, Chang C, Chen Z, Tan B W, Gasic M and Yu K. 2018. Policy adaptation for deep reinforcement learning-based dialogue management//Proceedings of 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Calgary, Canada: IEEE: 6074-6078 [DOI: 10.1109/ICASSP.2018.8462272http://dx.doi.org/10.1109/ICASSP.2018.8462272]
Chen N X, Watanabe S, Villalba J, Żelasko P and Dehak N. 2020a. Non-autoregressive transformer for speech recognition. IEEE Signal Processing Letters, 28: 121-125 [DOI: 10.1109/LSP.2020.3044547http://dx.doi.org/10.1109/LSP.2020.3044547]
Chen R, Weng D, Huang Y W, Shu X H, Zhou J Y, Sun G D and Wu Y C. 2023. Rigel: transforming tabular data by declarative mapping. IEEE Transactions on Visualization and Computer Graphics, 29(1): 128-138 [DOI: 10.1109/TVCG.2022.3209385http://dx.doi.org/10.1109/TVCG.2022.3209385]
Chen X L. 2022. Study on Teleoperation Control Method of Flexible Needle Puncture Assisted by Pneumatic Surgical Robot. Xuzhou: China University of Mining and Technology
陈肖利. 2022. 气动手术机器人辅助柔性针穿刺的遥操作控制方法研究. 徐州: 中国矿业大学 [DOI: 10.27623/d.cnki.gzkyu.2022.000485http://dx.doi.org/10.27623/d.cnki.gzkyu.2022.000485]
Chen Z Y, Wang S and Qian Y M. 2020b. Adversarial domain adaptation for speaker verification using partially shared network//Proceedings of the 20th Annual Conference of the International Speech Communication Association. Shanghai, China: [s.n.]: 3017-3021 [DOI: 10.21437/Interspeech.2020-2226http://dx.doi.org/10.21437/Interspeech.2020-2226]
Cheon J, Kang D and Woo G. 2015. VizMe: an annotation-based program visualization system generating a compact visualization//Proceedings of the International Conference on Data Engineering 2015 (DaEng-2015). Singapore,Singapore: Springer: 433-441 [DOI: 10.1007/978-981-13-1799-6_45http://dx.doi.org/10.1007/978-981-13-1799-6_45]
Chi E A, Salazar J and Kirchhoff K. 2020. Align-Refine: non-autoregressive speech recognition via iterative realignment [EB/OL]. [2023-01-13]. https://arxiv.org/pdf/2010.14233.pdfhttps://arxiv.org/pdf/2010.14233.pdf
Chu M D, Zong K Y, Shu X, Gong J T, Lu Z C, Guo K M. Dai X Y and Zhou G Y. 2023. Work with AI and work for AI: autonomous vehicle safety drivers’ lived experiences//Proceedings of 2023 CHI Conference on Human Factors in Computing Systems. Hamburg, Germany. ACM: 1-16 [DOI: 10.1145/3544548.35815http://dx.doi.org/10.1145/3544548.35815]
Chung Y A, Wang Y X, Hsu W N, Zhang Y and Skerry-Ryan R J. 2019. Semi-supervised training for improving data efficiency in end-to-end speech synthesis//Proceedings of 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Brighton, UK: IEEE: 6940-6944 [DOI: 10.1109/ICASSP.2019.8683862http://dx.doi.org/10.1109/ICASSP.2019.8683862]
Claessen J H T and van Wijk J J. 2011. Flexible linked axes for multivariate data visualization. IEEE Transactions on Visualization and Computer Graphics, 17(12): 2310-2316 [DOI: 10.1109/TVCG.2011.201http://dx.doi.org/10.1109/TVCG.2011.201]
Coghlan S, Waycott J, Lazar A and Neves B B. 2021. Dignity, autonomy, and style of company: dimensions older adults consider for robot companions. Proceedings of the ACM on Human-Computer Interaction, 5: #104 [DOI: 10.1145/3449178http://dx.doi.org/10.1145/3449178]
Coradeschi S, Loutfi A, Kristoffersson A, Cortellessa G and Eklundh K S. 2011. Social robotic telepresence//Proceedings of the 6th International Conference on Human-Robot Interaction. Lausanne, Switzerland: Association for Computing Machinery: #1957660 [DOI: 10.1145/1957656.1957660http://dx.doi.org/10.1145/1957656.1957660]
Cross E S, Hortensius R and Wykowska A. 2019. From social brains to social robots: applying neurocognitive insights to human-robot interaction. Philosophical Transactions of the Royal Society B: Biological Sciences, 374(1771): #20180024 [DOI: 10.1098/rstb.2018.0024http://dx.doi.org/10.1098/rstb.2018.0024]
Dai Y H, Chen T Y, Zhi T and Li Q Y. 2022. Design and implementation of service robot and existing system facilities fusion application in high speed railway station. Railway Transport and Economy, 44(9): 77-82
戴彦华, 陈天煜, 支涛, 李全印. 2022. 服务机器人与铁路客运站既有系统设施融合应用的设计与实现. 铁道运输与经济, 44(9): 77-82 [DOI: 10.16668/j.cnki.issn.1003-1421.2022.09.11http://dx.doi.org/10.16668/j.cnki.issn.1003-1421.2022.09.11]
Daruwalla Z J, Collins D R and Moore D P. 2010. “Orthobot, to your station!” The application of the remote presence robotic system in orthopaedic surgery in Ireland: a pilot study on patient and nursing staff satisfaction. Journal of Robotic Surgery, 4(3): 177-182 [DOI: 10.1007/s11701-010-0207-xhttp://dx.doi.org/10.1007/s11701-010-0207-x]
Davis S and Mermelstein P. 1980. Comparison of parametric representations for monosyllabic word recognition in continuously spoken sentences. IEEE Transactions on Acoustics, Speech, and Signal Processing, 28(4): 357-366 [DOI: 10.1109/tassp.1980.1163420http://dx.doi.org/10.1109/tassp.1980.1163420]
Dehak N, Kenny P J, Dehak R, Dumouchel P and Ouellet P. 2011. Front-end factor analysis for speaker verification. IEEE Transactions on Audio, Speech, and Language Processing, 19(4): 788-798 [DOI: 10.1109/TASL.2010.2064307http://dx.doi.org/10.1109/TASL.2010.2064307]
Delcroix M, Watanabe S, Ogawa A, Karita S and Nakatani T. 2018. Auxiliary feature based adaptation of end-to-end ASR systems//Proceedings of 2018 Annual Conference of the International Speech Communication Association. Hyderabad, India: [s.n.]: 2444-2448 [DOI: 10.21437/Interspeech.2018-1438http://dx.doi.org/10.21437/Interspeech.2018-1438]
Demetrescu C, Finocchi I and Stasko J T. 2002. Specifying algorithm visualizations: interesting events or state mapping?//Software Visualization. Castle, Germany: Springer: 16-30 [DOI: 10.1007/3-540-45875-1_2http://dx.doi.org/10.1007/3-540-45875-1_2]
Dey A K, Salber D, Abowd G D and Futakawa M. 1999. The conference assistant: combining context-awareness with wearable computing//Proceedings of Digest of Papers. The 3rd International Symposium on Wearable Computers. San Francisco, USA: IEEE: 21-28 [DOI: 10.1109/ISWC.1999.806639http://dx.doi.org/10.1109/ISWC.1999.806639]
Di Lascio E, Gashi S and Santini S. 2018. Unobtrusive assessment of students’ emotional engagement during lectures using electrodermal activity sensors. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2(3): #103 [DOI: 10.1145/3264913http://dx.doi.org/10.1145/3264913]
Drosos I, Barik T, Guo P J, DeLine R and Gulwani S. 2020. Wrex: a unified programming-by-example interaction for synthesizing readable code for data scientists//Proceedings of 2020 CHI Conference on Human Factors in Computing Systems. Honolulu, USA: ACM: 1-12 [DOI: 10.1145/3313831.3376442http://dx.doi.org/10.1145/3313831.3376442]
Fan Z Y, Li J, Zhou S Y and Xu B. 2019. Speaker-aware speech-transformer//Proceedings of 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU). Singapore, Singapore: IEEE: 222-229 [DOI: 10.1109/ASRU46091.2019.9003844http://dx.doi.org/10.1109/ASRU46091.2019.9003844]
Fan Z Y, Li M, Zhou S Y and Xu B. 2021b. Exploring wav2vec 2.0 on speaker verification and language identification//Proceedings of 2012 Annual Conference of the International Speech Communication Association. Brno, Czechia: [s.n.]: 1509-1513
Fan Z Y, Liang Z L, Dong L H, Liu Y, Zhou S Y, Cai M, Zhang J, Ma Z J and Xu B. 2022. Token-level speaker change detection using speaker difference and speech content via continuous integrate-and-fire//Proceedings of 2022 Annual Conference of the International Speech Communication Association. Incheon, Korea (South): [s.n.]: 3749-3753
Fan Z Y, Zhou S Y and Xu B. 2021a. Two-stage pre-training for sequence to sequence speech recognition//Proceedings of 2021 International Joint Conference on Neural Networks (IJCNN). Shenzhen, China: IEEE: #9534170 [DOI: 10.1109/IJCNN52387.2021.9534170http://dx.doi.org/10.1109/IJCNN52387.2021.9534170]
Fang B, Guo D, Sun F C, Liu H P and Wu Y P. 2015. A robotic hand-arm teleoperation system using human arm/hand with a novel data glove//Proceedings of 2015 IEEE International Conference on Robotics and Biomimetics (ROBIO). Zhuhai, China: IEEE: 2483-2488 [DOI: 10.1109/ROBIO.2015.7419712http://dx.doi.org/10.1109/ROBIO.2015.7419712]
Figueredo L F C, De Castro Aguiar R, Chen L P, Richards T C, Chakrabarty S and Dogar M. 2021. Planning to minimize the human muscular effort during forceful human-robot collaboration. ACM Transactions on Human-Robot Interaction, 11(1): #10 [DOI: 10.1145/3481587http://dx.doi.org/10.1145/3481587]
Finn C, Abbeel P and Levine S. 2017. Model-agnostic meta-learning for fast adaptation of deep networks//Proceedings of the 34th International Conference on Machine Learning. Sydney, Australia: JMLR.org: 1126-1135
Foster M E. 2019. Natural language generation for social robotics: Opportunities and challenges. Philosophical Transactions of the Royal Society B: Biological Sciences, 374(1771): #20180027 [DOI: 10.1098/rstb.2018.0027http://dx.doi.org/10.1098/rstb.2018.0027]
Franzluebbers A and Johnson K. 2019. Remote robotic arm teleoperation through virtual reality//Proceedings of 2019 Symposium on Spatial User Interaction. New Orleans, USA: Association for Computing Machinery: #27 [DOI: 10.1145/3357251.3359444http://dx.doi.org/10.1145/3357251.3359444]
Fridman L. 2018. Human-centered autonomous vehicle systems: principles of effective shared autonomy [EB/OL]. [2023-01-13]. https://arxiv.org/pdf/1810.01835.pdfhttps://arxiv.org/pdf/1810.01835.pdf
Fua Y H, Ward M O and Rundensteiner E A. 1999. Hierarchical parallel coordinates for exploration of large datasets//Proceedings of the Visualization’ 99. San Francisco, USA: IEEE: 43-50 [DOI: 10.1109/VISUAL.1999.809866http://dx.doi.org/10.1109/VISUAL.1999.809866]
Furmanova K, Gratzl S, Stitz H, Zichner T, Jaresova M, Lex A and Streit M. 2020. Taggle: combining overview and details in tabular data visualizations. Information Visualization, 19(2): 114-136 [DOI: 10.1177/1473871619878085http://dx.doi.org/10.1177/1473871619878085]
Gales M and Young S. 2008. The application of hidden Markov models in speech recognition. Foundations and Trends® in Signal Processing, 1(3): 195-304 [DOI: 10.1561/2000000004http://dx.doi.org/10.1561/2000000004]
Galin R and Meshcheryakov R. 2021. Collaborative robots: development of robotic perception system, safety issues, and integration of AI to imitate human behavior//Proceedings of 15th International Conference on Electromechanics and Robotics “Zavalishin’s Readings”. Ufa, Russia: Springer: 175-185 [DOI: 10.1007/978-981-15-5580-0_14http://dx.doi.org/10.1007/978-981-15-5580-0_14]
Gangadharaiah R and Narayanaswamy B. 2019. Joint multiple intent detection and slot labeling for goal-oriented dialog//Proceedings of 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Minnesota,USA: Association for Computational Linguistics: 564-569 [DOI: 10.18653/v1/N19-1055http://dx.doi.org/10.18653/v1/N19-1055]
Gao B J, Xu J J, Du J and Huang R H. 2020. Functional analysis framework and case study of educational robot products. Modern Educational Technology, 30(1): 18-24
高博俊, 徐晶晶, 杜静, 黄荣怀. 2020. 教育机器人产品的功能分析框架及其案例研究. 现代教育技术, 30(1): 18-24 [DOI: 10.3969/j.issn.1009-8097.2020.01.003http://dx.doi.org/10.3969/j.issn.1009-8097.2020.01.003]
Gao N, Rahaman M S, Shao W, Ji K X and Salim F D. 2022a. Individual and group-wise classroom seating experience: effects on student engagement in different courses. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 6(3): #115 [DOI: 10.1145/3550335http://dx.doi.org/10.1145/3550335]
Gao N, Shao W, Rahaman M S and Salim F D. 2020b. n-Gage: predicting in-class emotional, behavioural and cognitive engagement in the wild. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 4(3): #79 [DOI: 10.1145/3411813http://dx.doi.org/10.1145/3411813]
Gao N, Shao W, Rahaman M S, Zhai J, David K and Salim F D. 2021. Transfer learning for thermal comfort prediction in multiple cities. Building and Environment, 195: #107725 [DOI: 10.1016/j.buildenv.2021.107725http://dx.doi.org/10.1016/j.buildenv.2021.107725]
Gao N, Shao W and Salim F D. 2019. Predicting personality traits from physical activity intensity. Computer, 52(7): 47-56 [DOI: 10.1109/MC.2019.2913751http://dx.doi.org/10.1109/MC.2019.2913751]
Gao S L, Takanobu R, Bosselut A and Huang M L. 2022b. End-to-end task-oriented dialog modeling with semi-structured knowledge management. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 30: 2173-2187 [DOI: 10.1109/TASLP.2022.3153255http://dx.doi.org/10.1109/TASLP.2022.3153255]
Gao J S, Gong J T, Zhou G Y, Guo H L and Qi T. 2022c. Learning with yourself: a tangible twin robot system to promote STEM education. IEEE/RSJ International Conference on Intelligent Robots and Systems, 4981-4988 [DOI: 10.1109/IROS47612.2022.9981423http://dx.doi.org/10.1109/IROS47612.2022.9981423]
Gauvain J L and Lee C H. 1992. Bayesian learning for hidden Markov model with Gaussian mixture state observation densities. Speech Communication, 11(2/3): 205-213 [DOI: 10.1016/0167-6393(92)90015-Yhttp://dx.doi.org/10.1016/0167-6393(92)90015-Y]
Gauvain J L and Lee C H. 1994. Maximum a posteriori estimation for multivariate Gaussian mixture observations of Markov chains. IEEE Transactions on Speech and Audio Processing, 2(2): 291-298 [DOI: 10.1109/89.279278http://dx.doi.org/10.1109/89.279278]
Gorham J. 1988. The relationship between verbal teacher immediacy behaviors and student learning. Communication Education, 37(1): 40-53 [DOI: 10.1080/03634528809378702http://dx.doi.org/10.1080/03634528809378702]
Gouaillier D, Hugel V, Blazevic P, Kilner C, Monceaux J, Lafourcade P, Marnier B, Serre J and Maisonnier B. 2009. Mechatronic design of NAO humanoid//Proceedings of 2009 IEEE International Conference on Robotics and Automation. Kobe, Japan: IEEE: 769-774 [DOI: 10.1109/ROBOT.2009.5152516http://dx.doi.org/10.1109/ROBOT.2009.5152516]
Gratzl S, Lex A, Gehlenborg N, Pfister H and Streit M. 2013. LineUp: visual analysis of multi-attribute rankings. IEEE Transactions on Visualization and Computer Graphics, 19(12): 2277-2286 [DOI: 10.1109/TVCG.2013.173http://dx.doi.org/10.1109/TVCG.2013.173]
Graves A, Mohamed A R and Hinton G. 2013. Speech recognition with deep recurrent neural networks//Proceedings of 2013 IEEE International Conference on Acoustics, Speech and Signal Processing. Vancouver, Canada: IEEE: 6645-6649 [DOI: 10.1109/ICASSP.2013.6638947http://dx.doi.org/10.1109/ICASSP.2013.6638947]
Guerreiro J, Sato D, Asakawa S, Dong H X, Kitani K M and Asakawa C. 2019. CaBot: designing and evaluating an autonomous navigation robot for blind people//Proceedings of the 21st International ACM SIGACCESS Conference on Computers and Accessibility. Pittsburgh, USA: Association for Computing Machinery: 68-82 [DOI: 10.1145/3308561.3353771http://dx.doi.org/10.1145/3308561.3353771]
Gunes H, Celiktutan O and Sariyanidi E. 2019. Live human-robot interactive public demonstrations with automatic emotion and personality prediction. Philosophical Transactions of the Royal Society B: Biological Sciences, 374(1771): #20180026 [DOI: 10.1098/rstb.2018.0026http://dx.doi.org/10.1098/rstb.2018.0026]
Guo B, Yu Z W, Chen L M, Zhou X S and Ma X J. 2016. MobiGroup: enabling lifecycle support to social activity organization and suggestion with mobile crowd sensing. IEEE Transactions on Human-Machine Systems, 46(3): 390-402 [DOI: 10.1109/THMS.2015.2503290http://dx.doi.org/10.1109/THMS.2015.2503290]
Guo P C, Boyer F, Chang X K, Hayashi T, Higuchi Y, Inaguma H, Kamo N, Li C D, Garcia-Romero D, Shi J T, Shi J, Watanabe S, Wei K, Zhang W Y and Zhang Y K. 2021. Recent developments on Espnet toolkit boosted by conformer//Proceedings of 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Toronto, Canada: IEEE: 5874-5878
Guo P J. 2013. Online python tutor: embeddable web-based program visualization for CS education//Proceedings of the 44th ACM Technical Symposium on Computer Science Education. Denver, USA: ACM: 579-584 [DOI: 10.1145/2445196.2445368http://dx.doi.org/10.1145/2445196.2445368]
Guo P J, Kandel S, Hellerstein J M and Heer J. 2011. Proactive wrangling: mixed-initiative end-user programming of data transformation scripts//Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology. Santa Barbara, USA: ACM: 65-74 [DOI: 10.1145/2047196.2047205http://dx.doi.org/10.1145/2047196.2047205]
Han J Y. 2022. Research on Human-Machine Haptic Interactive Shared Steering Control Based on Driver Behavior Understanding for Intelligent Vehicle. Changchun: Jilin University
韩嘉懿. 2022. 基于驾驶人行为理解的智能汽车人机触觉交互协同转向控制研究. 长春: 吉林大学 [DOI: 10.27162/d.cnki.gjlin.2022.001199http://dx.doi.org/10.27162/d.cnki.gjlin.2022.001199]
Handa A, Van Wyk K, Yang W, Liang J, Chao Y W, Wan Q, Birchfield S, Ratliff N and Fox D. 2020. DexPilot: vision-based teleoperation of dexterous robotic hand-arm system//Proceedings of 2020 IEEE International Conference on Robotics and Automation (ICRA). Paris, France: IEEE: 9164-9170 [DOI: 10.1109/ICRA40945.2020.9197124http://dx.doi.org/10.1109/ICRA40945.2020.9197124]
Hansen S, Narayanan N H and Hegarty M. 2002. Designing educationally effective algorithm visualizations. Journal of Visual Languages and Computing, 13(3): 291-317 [DOI: 10.1006/jvlc.2002.0236http://dx.doi.org/10.1006/jvlc.2002.0236]
Hashimoto S, Ishida A, Inami M and Igarashi T. 2011. TouchMe: an augmented reality based remote robot manipulation//Proceedings of the 21st International Conference on Artificial Reality and Telexistence. Osaka, Japan: The Virtual Reality Society of Japan: 61-66
Heigold G, Moreno I, Bengio S and Shazeer N. 2015. End-to-end text-dependent speaker verification//Proceedings of 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Shanghai, China: IEEE: 5115-5119 [DOI: 10.1109/ICASSP.2016.7472652]
Hentout A, Aouache M, Maoudj A and Akli I. 2019. Human-robot interaction in industrial collaborative robotics: a literature review of the decade 2008-2017. Advanced Robotics, 33(15/16): 764-799 [DOI: 10.1080/01691864.2019.1636714]
Higuchi Y, Inaguma H, Watanabe S, Ogawa T and Kobayashi T. 2021. Improved mask-CTC for non-autoregressive end-to-end ASR//Proceedings of 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Toronto, Canada: IEEE: 8363-8367 [DOI: 10.1109/ICASSP39728.2021.9414198]
Hood D, Lemaignan S and Dillenbourg P. 2015. When children teach a robot to write: an autonomous teachable humanoid which uses simulated handwriting//Proceedings of the 10th Annual ACM/IEEE International Conference on Human-Robot Interaction. Portland, USA: Association for Computing Machinery: 83-90 [DOI: 10.1145/2696454.2696479]
Hori A, Kawakami M and Ichii M. 2019. CodeHouse: VR code visualization tool//Proceedings of 2019 Working Conference on Software Visualization (VISSOFT). Cleveland, USA: IEEE: 83-87 [DOI: 10.1109/VISSOFT.2019.00018]
Hu H R, Song Y, Dai L R, McLoughlin I and Liu L. 2022. Class-aware distribution alignment based unsupervised domain adaptation for speaker verification//Proceedings of Interspeech 2022. Incheon, Korea (South): [s.n.]: 3689-3693 [DOI: 10.21437/Interspeech.2022-591]
Hu S. 2021. The use of program visualization in the teaching of programming design courses. Computer Knowledge and Technology, 17(7): 104-105
胡珊. 2021. 程序可视化方法在程序设计课程教学中的应用. 电脑知识与技术, 17(7): 104-105 [DOI: 10.14004/j.cnki.ckt.2021.0747]
Hu Y. 2020. On Collision Detection Method of Human-Machine Cooperative Robot. Qingdao: Shandong University of Science and Technology
胡钰. 2020. 人机协作机器人的碰撞检测方法研究. 青岛: 山东科技大学 [DOI: 10.27275/d.cnki.gsdku.2020.001003]
Huang J H, Floyd M F, Tateosian L G and Hipp J A. 2022. Exploring public values through Twitter data associated with urban parks pre- and post- COVID-19. Landscape and Urban Planning, 227: #104517 [DOI: 10.1016/j.landurbplan.2022.104517]
Huang W Y, Hu W C, Yeung Y T and Chen X. 2020. Conv-transformer transducer: low latency, low frame rate, streamable end-to-end speech recognition [EB/OL]. [2023-01-13]. https://arxiv.org/pdf/2008.05750.pdf
Huang Z, Zhuang X D, Liu D B, Xiao X Q, Zhang Y C and Siniscalchi S M. 2019. Exploring retraining-free speech recognition for intra-sentential code-switching//Proceedings of 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Brighton, UK: IEEE: 6066-6070 [DOI: 10.1109/ICASSP.2019.8682478]
Huynh S, Kim S, Ko J G, Balan R K and Lee Y. 2018. EngageMon: multi-modal engagement sensing for mobile games. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2(1): #13 [DOI: 10.1145/3191745]
Inaguma H, Mimura M and Kawahara T. 2020. Enhancing monotonic multihead attention for streaming ASR [EB/OL]. [2023-01-13]. https://arxiv.org/pdf/2005.09394.pdf
Inala J P and Singh R. 2017. WebRelate: integrating web data with spreadsheets using examples. Proceedings of the ACM on Programming Languages, 2: #2 [DOI: 10.1145/3158090]
Itakura F. 1975. Minimum prediction residual principle applied to speech recognition. IEEE Transactions on Acoustics, Speech, and Signal Processing, 23(1): 67-72 [DOI: 10.1109/TASSP.1975.1162641]
Ivanov S H, Webster C and Berezina K. 2017. Adoption of robots and service automation by tourism and hospitality companies. Revista Turismo and Desenvolvimento, (27/28): 1501-1517
Jain S and Argall B. 2020. Probabilistic human intent recognition for shared autonomy in assistive robotics. ACM Transactions on Human-Robot Interaction, 9(1): #2 [DOI: 10.1145/3359614]
Jalal A, Kim Y H, Kim Y J, Kamal S and Kim D. 2017. Robust human activity recognition from depth video using spatiotemporal multi-fused features. Pattern Recognition, 61: 295-308 [DOI: 10.1016/j.patcog.2016.08.003]
Jbara A, Agbaria M, Adoni A, Jabareen M and Yasin A. 2019. ICSD: interactive visual support for understanding code control structure//Proceedings of the 26th IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER). Hangzhou, China: IEEE: 644-648 [DOI: 10.1109/SANER.2019.8667981]
Jiao Z Q, Meng Q L, Shao H C and Yu H L. 2022. Design and research of the upper limb rehabilitation training and assisted living robot. Chinese Journal of Rehabilitation Medicine, 37(9): 1219-1222
焦宗琪, 孟巧玲, 邵海存, 喻洪流. 2022. 上肢康复训练与生活辅助机器人的设计与研究. 中国康复医学杂志, 37(9): 1219-1222 [DOI: 10.3969/j.issn.1001-1242.2022.09.011]
Jin Z J, Anderson M R, Cafarella M and Jagadish H V. 2017. Foofah: transforming data by example//Proceedings of 2017 ACM International Conference on Management of Data. Chicago, USA: ACM: 683-698 [DOI: 10.1145/3035918.3064034]
Jobanputra C, Bavishi J and Doshi N. 2019. Human activity recognition: a survey. Procedia Computer Science, 155: 698-703 [DOI: 10.1016/j.procs.2019.08.100]
Johnsson S L and Krawitz R L. 1992. Cooley-Tukey FFT on the Connection Machine. Parallel Computing, 18(11): 1201-1221 [DOI: 10.1016/0167-8191(92)90066-G]
Kachouie R, Sedighadeli S M, Khosla R and Chu M T. 2014. Socially assistive robots in elderly care: a mixed-method systematic literature review. International Journal of Human-Computer Interaction, 30(5): 369-393 [DOI: 10.1080/10447318.2013.873278]
Kahn J, Lee A and Hannun A. 2020. Self-training for end-to-end speech recognition//Proceedings of 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Barcelona, Spain: IEEE: 7084-7088 [DOI: 10.1109/ICASSP40776.2020.9054295]
Kandel S, Paepcke A, Hellerstein J and Heer J. 2011. Wrangler: interactive visual specification of data transformation scripts//Proceedings of 2011 SIGCHI Conference on Human Factors in Computing Systems. Vancouver, Canada: ACM: 3363-3372 [DOI: 10.1145/1978942.1979444]
Karita S, Watanabe S, Iwata T, Ogawa A and Delcroix M. 2018. Semi-supervised end-to-end speech recognition//Proceedings of 2018 Annual Conference of the International Speech Communication Association (INTERSPEECH). Hyderabad, India: ISCA: 2-6 [DOI: 10.21437/Interspeech.2018-1746]
Kennedy J, Baxter P and Belpaeme T. 2015. Comparing robot embodiments in a guided discovery learning interaction with children. International Journal of Social Robotics, 7(2): 293-308 [DOI: 10.1007/s12369-014-0277-4]
Kenny P, Boulianne G, Ouellet P and Dumouchel P. 2007. Joint factor analysis versus eigenchannels in speaker recognition. IEEE Transactions on Audio, Speech, and Language Processing, 15(4): 1435-1447 [DOI: 10.1109/TASL.2006.881693]
Khadri H O. 2021. University academics’ perceptions regarding the future use of telepresence robots to enhance virtual transnational education: an exploratory investigation in a developing country. Smart Learning Environments, 8(1): #28 [DOI: 10.1186/s40561-021-00173-8]
Khaloo P, Maghoumi M, Taranta E, Bettner D and Laviola J. 2017. Code park: a new 3D code visualization tool//Proceedings of 2017 IEEE Working Conference on Software Visualization (VISSOFT). Shanghai, China: IEEE: 43-53 [DOI: 10.1109/VISSOFT.2017.10]
Khan M, Xu L, Nandi A and Hellerstein J M. 2017. Data tweening: incremental visualization of data transforms. Proceedings of the VLDB Endowment, 10(6): 661-672 [DOI: 10.14778/3055330.3055333]
Kidd C D and Breazeal C. 2004. Effect of a robot on user perceptions//Proceedings of 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Sendai, Japan: IEEE: 3559-3564 [DOI: 10.1109/IROS.2004.1389967]
Kim M, Kim G, Lee S W and Ha J W. 2021a. ST-BERT: cross-modal language model pre-training for end-to-end spoken language understanding//Proceedings of 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Toronto, Canada: IEEE: 7478-7482 [DOI: 10.1109/ICASSP39728.2021.9414558]
Kim S, Kim G, Shin S and Lee S. 2021b. Two-stage textual knowledge distillation for end-to-end spoken language understanding [EB/OL]. [2021-06-10]. https://arxiv.org/pdf/2010.13105.pdf
Köse H, Uluer P, Akalın N, Yorgancı R, Özkul A and Ince G. 2015. The effect of embodiment in sign language tutoring with assistive humanoid robots. International Journal of Social Robotics, 7(4): 537-548 [DOI: 10.1007/s12369-015-0311-1]
Kosower D A, Lopez-Villarejo J J and Roubtsov S. 2014. Flowgen: flowchart-based documentation framework for C++//Proceedings of the 14th IEEE International Working Conference on Source Code Analysis and Manipulation. Victoria, Canada: IEEE: 59-64 [DOI: 10.1109/SCAM.2014.35]
Kristoffersson A, Coradeschi S and Loutfi A. 2013. A review of mobile robotic telepresence. Advances in Human-Computer Interaction, 2013: #902316 [DOI: 10.1155/2013/902316]
Kumar N S, Revanth Babu P N, Sai Eashwar K S, Srinath M P and Bothra S. 2021. Code-Viz: data structure specific visualization and animation tool for user-provided code//Proceedings of 2021 International Conference on Smart Generation Computing, Communication and Networking (SMART GENCON). Pune, India: IEEE: #9645747 [DOI: 10.1109/SMARTGENCON51891.2021.9645747]
Kuo C M, Chen L C and Tseng C Y. 2017. Investigating an innovative service with hospitality robots. International Journal of Contemporary Hospitality Management, 29(5): 1305-1321
Lai C I, Chuang Y S, Lee H Y, Li S W and Glass J. 2021. Semi-supervised spoken language understanding via self-supervised speech and language model pretraining//Proceedings of 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Toronto, Canada: IEEE: 7468-7472 [DOI: 10.1109/ICASSP39728.2021.9414922]
Lan O Y, Zhu S and Yu K. 2018. Semi-supervised training using adversarial multi-task learning for spoken language understanding//Proceedings of 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Calgary, Canada: IEEE: 6049-6053 [DOI: 10.1109/ICASSP.2018.8462669]
Lei Y, Yang S, Cong J, Xie L and Su D. 2022. Glow-WaveGAN 2: high-quality zero-shot text-to-speech synthesis and any-to-any voice conversion [EB/OL]. [2022-07-05]. https://arxiv.org/pdf/2207.01832.pdf
Leite I, Pereira A, Castellano G, Mascarenhas S, Martinho C and Paiva A. 2011. Social robots in learning environments: a case study of an empathic chess companion//Proceedings of 2011 International Workshop on Personalization Approaches in Learning Environments. Girona, Spain: CEUR-WS: 8-12
Lex A, Schulz H J, Streit M, Partl C and Schmalstieg D. 2011. VisBricks: multiform visualization of large, inhomogeneous data. IEEE Transactions on Visualization and Computer Graphics, 17(12): 2291-2300 [DOI: 10.1109/TVCG.2011.250]
Lex A, Streit M, Schulz H J, Partl C, Schmalstieg D, Park P J and Gehlenborg N. 2012. StratomeX: visual analysis of large-scale heterogeneous genomics data for cancer subtype characterization. Computer Graphics Forum, 31: 1175-1184 [DOI: 10.1111/j.1467-8659.2012.03110.x]
Leyzberg D, Spaulding S, Toneva M and Scassellati B. 2012. The physical presence of a robot tutor increases cognitive learning gains//Proceedings of the 34th Annual Meeting of the Cognitive Science Society. Sapporo, Japan: the Cognitive Science Society: 1882-1887
Li G Z, Li R F, Wang Z C, Liu C H, Lu M and Wang G R. 2023. HiTailor: interactive transformation and visualization for hierarchical tabular data. IEEE Transactions on Visualization and Computer Graphics, 29(1): 139-148 [DOI: 10.1109/TVCG.2022.3209354]
Li J. 2015. The benefit of being physically present: a survey of experimental works comparing copresent robots, telepresent robots and virtual agents. International Journal of Human-Computer Studies, 77: 23-37 [DOI: 10.1016/j.ijhcs.2015.01.001]
Li J, Fang X, Chu F, Gao T, Song Y and Dai R L. 2022a. Acoustic feature shuffling network for text-independent speaker verification//Proceedings of Interspeech 2022. Incheon, Korea (South): [s.n.]: 4790-4794 [DOI: 10.21437/Interspeech.2022-10278]
Li J B, Meng Y, Wu X X, Wu Z Y, Jia J, Meng H L, Tian Q, Wang Y P and Wang Y X. 2022b. Inferring speaking styles from multi-modal conversational context by multi-scale relational graph convolutional networks//Proceedings of the 30th ACM International Conference on Multimedia. Lisbon, Portugal: ACM: 5811-5820 [DOI: 10.1145/3503161.3547831]
Li N. 2018. Research and Implementation on Path Planning of Industrial Robot towards Safety Assurance of Human-Robot Collaboration. Wuhan: Wuhan University of Technology
李娜. 2018. 面向人机协作安全保障的工业机器人路径规划研究与实现. 武汉: 武汉理工大学
Li P L. 2018. An Interactive Method for Telepresence System Based on Stereo Vision and Gesture Recognition. Beijing: Beijing Institute of Technology
李佩霖. 2018. 一种基于立体视觉与手势识别的远程呈现交互方法. 北京: 北京理工大学 [DOI: 10.26948/d.cnki.gbjlu.2018.000433]
Li Z X. 2022. Research on Intelligent Assistance Method of Nursing Robot in the Stand-to-sit Interaction. Shenyang: Shenyang University of Technology
李泽新. 2022. 站坐交互过程中护理机器人智能辅助方法研究. 沈阳: 沈阳工业大学 [DOI: 10.27322/d.cnki.gsgyu.2022.000214]
Liang C, Yu C, Qin Y, Wang Y T and Shi Y C. 2021. DualRing: enabling subtle and expressive hand interaction with dual IMU rings. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 5(3): #115 [DOI: 10.1145/3478114]
Liao H. 2013. Speaker adaptation of context dependent deep neural networks//Proceedings of 2013 IEEE International Conference on Acoustics, Speech and Signal Processing. Vancouver, Canada: IEEE: 7947-7951 [DOI: 10.1109/ICASSP.2013.6639212]
Lin Y P, Wang C H, Jung T P, Wu T L, Jeng S K, Duann J R and Chen J H. 2010. EEG-based emotion recognition in music listening. IEEE Transactions on Biomedical Engineering, 57(7): 1798-1806 [DOI: 10.1109/TBME.2010.2048568]
Liu F, Mao Q R, Wang L J, Ruwa N, Gou J P and Zhan Y Z. 2019. An emotion-based responding model for natural language conversation. World Wide Web, 22(2): 843-861 [DOI: 10.1007/s11280-018-0601-2]
Liu S. 2022. Design of Intelligent Service Robot based on Situation Model. Jinan: Shandong Jianzhu University
刘双. 2022. 基于情境模型的智能服务机器人设计研究. 济南: 山东建筑大学 [DOI: 10.27273/d.cnki.gsajc.2022.000213]
Liu S S, Maljovec D, Wang B, Bremer P T and Pascucci V. 2017. Visualizing high-dimensional data: advances in the past decade. IEEE Transactions on Visualization and Computer Graphics, 23(3): 1249-1268 [DOI: 10.1109/TVCG.2016.2640960]
Liu T T, Liu Z, Chai Y J, Wang J and Wang Y Y. 2021. Agent affective computing in human-computer interaction. Journal of Image and Graphics, 26(12): 2767-2777
刘婷婷, 刘箴, 柴艳杰, 王瑾, 王媛怡. 2021. 人机交互中的智能体情感计算研究. 中国图象图形学报, 26(12): 2767-2777 [DOI: 10.11834/jig.200498]
Liu Z C, Wu N Q, Zhang Y J and Ling Z H. 2022. Integrating discrete word-level style variations into non-autoregressive acoustic models for speech synthesis//Proceedings of Interspeech 2022. Incheon, Korea (South): [s.n.]: 5508-5512 [DOI: 10.21437/Interspeech.2022-984]
Long M L and Kang H Y. 2022. Malicious code detection for industrial internet based on code visualization. Computer-Integrated Manufacturing Systems: 1-14
龙墨澜, 康海燕. 2022. 基于代码可视化的工业互联网恶意代码检测方法. 计算机集成制造系统, 1-14
Lorenzo-Trueba J, Henter G E, Takaki S, Yamagishi J, Morino Y and Ochiai Y. 2018. Investigating different representations for modeling and controlling multiple emotions in DNN-based speech synthesis. Speech Communication, 99: 135-143 [DOI: 10.1016/j.specom.2018.03.002]
Lu L, Cai R Y and Gursoy D. 2019. Developing and validating a service robot integration willingness scale. International Journal of Hospitality Management, 80: 36-51 [DOI: 10.1016/j.ijhm.2019.01.005]
Lu W, Wang J D, Chen Y Q, Pan S J, Hu C Y and Qin X. 2022. Semantic-discriminative mixup for generalizable sensor-based cross-domain activity recognition. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 6(2): #65 [DOI: 10.1145/3534589]
Luck J E. 1969. Automatic speaker verification using cepstral measurements. Journal of the Acoustical Society of America, 46(4): #1026 [DOI: 10.1121/1.1911795]
Luo X N, Yuan Y, Zhang K Y, Xia J Z, Zhou Z G, Chang L and Gu T L. 2019. Enhancing statistical charts: toward better data visualization and analysis. Journal of Visualization, 22(4): 819-832 [DOI: 10.1007/s12650-019-00569-2]
Müller M. 2007. Dynamic time warping//Information Retrieval for Music and Motion. Berlin, Heidelberg: Springer: 69-84 [DOI: 10.1007/978-3-540-74048-3_4]
Ma H T. 2022. Safety Analysis and Evaluation of Human-Machine Interaction of Automated Lane Keeping System. Jilin: Jilin University
马海涛. 2022. 自动车道保持系统人机交互安全分析与评价. 吉林: 吉林大学 [DOI: 10.27162/d.cnki.gjlin.2022.006085]
Ma N and Yuan X R. 2020. Tabular data visualization interactive construction for analysis tasks. Journal of Computer-Aided Design and Computer Graphics, 32(10): 1628-1636
马楠, 袁晓如. 2020. 面向分析任务的表格数据可视化交互构建. 计算机辅助设计与图形学学报, 32(10): 1628-1636 [DOI: 10.3724/SP.J.1089.2020.18467]
Madotto A, Wu C S and Fung P. 2018. Mem2Seq: effectively incorporating knowledge bases into end-to-end task-oriented dialog systems [EB/OL]. [2023-01-13]. https://arxiv.org/pdf/1804.08217.pdf
Marchetti E, Grimme S, Hornecker E, Kollakidou A and Graf P. 2022. Pet-robot or appliance? Care home residents with dementia respond to a zoomorphic floor washing robot//Proceedings of 2022 CHI Conference on Human Factors in Computing Systems. New Orleans, USA: Association for Computing Machinery: #521 [DOI: 10.1145/3491102.3517463]
Min S, Lee B and Yoon S. 2017. Deep learning in bioinformatics. Briefings in Bioinformatics, 18(5): 851-869 [DOI: 10.1093/bib/bbw068]
Mohamed A R, Dahl G and Hinton G. 2010. Deep belief networks for phone recognition//Proceedings of NIPS Workshop on Deep Learning for Speech Recognition and Related Applications. [s.l.]: [s.n.]: #39
Moseler O, Kreber L and Diehl S. 2022. The ThreadRadar visualization for debugging concurrent Java programs. Journal of Visualization, 25(6): 1267-1289 [DOI: 10.1007/s12650-022-00843-w]
Mu W B, Xu B Y, Guo W T, Wahafu T, Zou C, Ji B C and Cao L. 2022. Early clinical outcomes of MAKO robot-assisted total knee arthroplasty for knee osteoarthritis with severe varus deformity. Chinese Journal of Bone and Joint Surgery, 15(8): 612-618
穆文博, 胥伯勇, 郭文涛, 吐尔洪江·瓦哈甫, 邹晨, 纪保超, 曹力. 2022. MAKO机器人辅助下全膝关节置换术治疗伴有严重内翻畸形膝骨关节炎的早期疗效研究. 中华骨与关节外科杂志, 15(8): 612-618 [DOI: 10.3969/j.issn.2095-9958.2022.08.09]
Murphy J, Gretzel U and Pesonen J. 2019. Marketing robot services in hospitality and tourism: the role of anthropomorphism. Journal of Travel and Tourism Marketing, 36(7): 784-795 [DOI: 10.1080/10548408.2019.1571983]
Niederer C, Stitz H, Hourieh R, Grassinger F, Aigner W and Streit M. 2018. TACO: visualizing changes in tables over time. IEEE Transactions on Visualization and Computer Graphics, 24(1): 677-686 [DOI: 10.1109/TVCG.2017.2745298]
Niu K, Zhang F S, Chang Z X and Zhang D Q. 2018. A Fresnel diffraction model based human respiration detection system using COTS Wi-Fi devices//Proceedings of 2018 ACM International Joint Conference and 2018 International Symposium on Pervasive and Ubiquitous Computing and Wearable Computers. Singapore, Singapore: ACM: 416-419 [DOI: 10.1145/3267305.3267561]
Obuchi M, Huckins J F, Wang W C, daSilva A, Rogers C, Murphy E, Hedlund E, Holtzheimer P, Mirjafari S and Campbell A. 2020. Predicting brain functional connectivity using mobile sensing. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 4(1): #23 [DOI: 10.1145/3381001]
Ochiai T, Watanabe S, Katagiri S, Hori T and Hershey J. 2018. Speaker adaptation for multichannel end-to-end speech recognition//Proceedings of 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Calgary, Canada: IEEE: 6707-6711 [DOI: 10.1109/ICASSP.2018.8462161]
Ogawa K, Nishio S, Koda K, Taura K, Minato T, Ishii C T and Ishiguro H. 2011. Telenoid: tele-presence android for communication//Proceedings of ACM SIGGRAPH 2011 Emerging Technologies. Vancouver, Canada: Association for Computing Machinery: #15 [DOI: 10.1145/2048259.2048274]
Osawa H, Ema A, Hattori H, Akiya N, Kanzaki N, Kubo A, Koyama T and Ichise R. 2017. Analysis of robot hotel: reconstruction of works with robots//Proceedings of the 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). Lisbon, Portugal: IEEE: 219-223 [DOI: 10.1109/ROMAN.2017.8172305]
Pajer S, Streit M, Torsney-Weir T, Spechtenhauser F, Möller T and Piringer H. 2017. WeightLifter: visual weight space exploration for multi-criteria decision making. IEEE Transactions on Visualization and Computer Graphics, 23(1): 611-620 [DOI: 10.1109/TVCG.2016.2598589]
Pakarinen T, Pietilä J and Nieminen H. 2019. Prediction of self-perceived stress and arousal based on electrodermal activity//Proceedings of the 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). Berlin, Germany: IEEE: 2191-2195 [DOI: 10.1109/EMBC.2019.8857621]
Pang G F. 2022. Research on Interaction Design of Home Elderly Service Robot based on User Experience. Yangzhou: Yangzhou University
庞广风. 2022. 基于用户体验的家庭助老服务机器人交互设计研究. 扬州: 扬州大学 [DOI: 10.27441/d.cnki.gyzdu.2022.002178]
Paul P and George T. 2015. An effective approach for human activity recognition on smartphone//Proceedings of 2015 IEEE International Conference on Engineering and Technology (ICETECH). Coimbatore, India: IEEE: #7275024 [DOI: 10.1109/ICETECH.2015.7275024]
Peng Y K and Ling Z H. 2022. Decoupled pronunciation and prosody modeling in meta-learning-based multilingual speech synthesis [EB/OL]. [2022-09-14]. https://arxiv.org/pdf/2209.06789.pdf
Powers A, Kiesler S, Fussell S and Torrey C. 2007. Comparing a computer agent with a humanoid robot//Proceedings of 2007 ACM/IEEE International Conference on Human-Robot Interaction. Arlington, USA: ACM: 145-152 [DOI: 10.1145/1228716.1228736]
Prentice C, Dominique Lopes S and Wang X Q. 2020. The impact of artificial intelligence and employee service quality on customer satisfaction and loyalty. Journal of Hospitality Marketing and Management, 29(7): 739-756 [DOI: 10.1080/19368623.2020.1722304]
Prescott T J, Camilleri D, Martinez-Hernandez U, Damianou A and Lawrence N D. 2019. Memory and mental time travel in humans and social robots. Philosophical Transactions of the Royal Society B: Biological Sciences, 374(1771): #20180025 [DOI: 10.1098/rstb.2018.0025]
Price B A, Baecker R M and Small I S. 1993. A principled taxonomy of software visualization. Journal of Visual Languages and Computing, 4(3): 211-266 [DOI: 10.1006/jvlc.1993.1015]
Pripfl J, Körtner T, Batko-Klein D, Hebesberger D, Weninger M, Gisinger C, Frennert S, Eftring H, Antona M, Adami I, Weiss A, Bajones M and Vincze M. 2016. Results of a real world trial with a mobile social service robot for older adults//Proceedings of the 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI). Christchurch, New Zealand: IEEE: 497-498 [DOI: 10.1109/HRI.2016.7451824]
Pu X Y, Kross S, Hofman J M and Goldstein D G. 2021. Datamations: animated explanations of data analysis pipelines//Proceedings of 2021 CHI Conference on Human Factors in Computing Systems. Yokohama, Japan: ACM: #3445063 [DOI: 10.1145/3411764.3445063]
Qi J, Yang P, Waraich A, Deng Z K, Zhao Y B and Yang Y. 2018. Examining sensor-based physical activity recognition and monitoring for healthcare using internet of things: a systematic review. Journal of Biomedical Informatics, 87: 138-153 [DOI: 10.1016/j.jbi.2018.09.002]
Qian Y M, Gong X and Huang H J. 2022. Layer-wise fast adaptation for end-to-end multi-accent speech recognition. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 30: 2842-2853 [DOI: 10.1109/TASLP.2022.3198546]
Qian Y M and Zhou Z K. 2022. Optimizing data usage for low-resource speech recognition. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 30: 394-403 [DOI: 10.1109/TASLP.2022.3140552]
Qin L B, Xu X, Che W X and Liu T. 2020a. AGIF: an adaptive graph-interactive framework for joint multiple intent detection and slot filling [EB/OL]. [2023-01-13]. https://arxiv.org/pdf/2004.10087.pdf
Qin L B, Xu X, Che W X, Zhang Y and Liu T. 2020b. Dynamic fusion network for multi-domain end-to-end task-oriented dialog//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Online: Association for Computational Linguistics: 6344-6354 [DOI: 10.18653/v1/2020.acl-main.565]
Rabiner L and Juang B H. 1993. Fundamentals of Speech Recognition. Englewood Cliffs, USA: Prentice-Hall, Inc.
Rae I, Takayama L and Mutlu B. 2013. The influence of height in robot-mediated communication//Proceedings of the 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI). Tokyo, Japan: IEEE: 1-8 [DOI: 10.1109/HRI.2013.6483495]
Rae I, Mutlu B and Takayama L. 2014. Bodies in motion: mobility, presence, and task awareness in telepresence//Proceedings of 2014 SIGCHI Conference on Human Factors in Computing Systems. Toronto, Canada: Association for Computing Machinery: 2153-2162 [DOI: 10.1145/2556288.2557047]
Rae I, Venolia G, Tang J C and Molnar D. 2015. A framework for understanding and designing telepresence//Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work and Social Computing. Vancouver, Canada: Association for Computing Machinery: 1552-1566 [DOI: 10.1145/2675133.2675141]
Reynolds D A, Quatieri T F and Dunn R B. 2000. Speaker verification using adapted Gaussian mixture models. Digital Signal Processing, 10(1/3): 19-41 [DOI: 10.1006/dspr.1999.0361]
Reynolds D A and Rose R C. 1995. Robust text-independent speaker identification using Gaussian mixture speaker models. IEEE Transactions on Speech and Audio Processing, 3(1): 72-83 [DOI: 10.1109/89.365379]
Roberts J and Arnold D. 2012. Robots, the internet and teaching history in the age of the NBN and the Australian curriculum. Teaching History, 46(4): 32-34
Rodríguez-Guerra D, Sorrosal G, Cabanes I and Calleja C. 2021. Human-robot interaction review: challenges and solutions for modern industrial environments. IEEE Access, 9: 108557-108578 [DOI: 10.1109/ACCESS.2021.3099287]
Rohdin J, Stafylakis T, Silnova A, Zeinali H, Burget L and Plchot O. 2019. Speaker verification using end-to-end adversarial language adaptation//Proceedings of 2019 IEEE International Conference on Acoustics, Speech and Signal Processing. Brighton, UK: IEEE: 6006-6010 [DOI: 10.1109/ICASSP.2019.8683616]
Ronao C A and Cho S B. 2016. Human activity recognition with smartphone sensors using deep learning neural networks. Expert Systems with Applications, 59: 235-244 [DOI: 10.1016/j.eswa.2016.04.032]
Sadri A, Salim F D, Ren Y L, Shao W, Krumm J C and Mascolo C. 2018. What will you do for the rest of the day? An approach to continuous trajectory prediction. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2(4): #186 [DOI: 10.1145/3287064]
Sakoe H and Chiba S. 1978. Dynamic programming algorithm optimization for spoken word recognition. IEEE Transactions on Acoustics, Speech, and Signal Processing, 26(1): 43-49 [DOI: 10.1109/TASSP.1978.1163055]
Salichs M A, Castro-González Á, Salichs E, Fernández-Rodicio E, Maroto-Gómez M, Gamboa-Montero J J, Marques-Villarroya S, Castillo J C, Alonso-Martín F and Malfaz M. 2020. Mini: a new social robot for the elderly. International Journal of Social Robotics, 12(6): 1231-1249 [DOI: 10.1007/s12369-020-00687-0]
Samani H, Saadatian E, Pang N, Polydorou D, Fernando O N N, Nakatsu R and Koh J T K V. 2013. Cultural robotics: the culture of robotics and robotics in culture. International Journal of Advanced Robotic Systems, 10(12): #400 [DOI: 10.5772/57260]
Sano A, Phillips A J, Yu A Z, McHill A W, Taylor S, Jaques N, Czeisler C A, Klerman E B and Picard R W. 2015. Recognizing academic performance, sleep quality, stress level, and mental health using personality traits, wearable sensors and mobile phones//Proceedings of the 12th IEEE International Conference on Wearable and Implantable Body Sensor Networks (BSN). Cambridge, USA: IEEE: 1-6 [DOI: 10.1109/BSN.2015.7299420]
Seide F, Li G and Yu D. 2011. Conversational speech transcription using context-dependent deep neural networks//Proceedings of the 12th Annual Conference of the International Speech Communication Association (INTERSPEECH). Florence, Italy: [s.n.]: 437-440 [DOI: 10.21437/Interspeech.2011-169]
Shao W, Nguyen T, Qin K, Youssef M and Salim F D. 2018. BLEDoorGuard: a device-free person identification framework using bluetooth signals for door access. IEEE Internet of Things Journal, 5(6): 5227-5239 [DOI: 10.1109/JIOT.2018.2868243]
Shao W, Salim F D, Nguyen T and Youssef M. 2017. Who opened the room? Device-free person identification using bluetooth signals in door access//Proceedings of 2017 IEEE International Conference on Internet of Things (iThings) and IEEE Green Computing and Communications (GreenCom) and IEEE Cyber, Physical and Social Computing (CPSCom) and IEEE Smart Data (SmartData). Exeter, UK: IEEE: 68-75 [DOI: 10.1109/iThings-GreenCom-CPSCom-SmartData.2017.16]
Shao Z H, Wu Z Q and Huang M L. 2022. AdvExpander: generating natural language adversarial examples by expanding text. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 30: 1184-1196 [DOI: 10.1109/TASLP.2021.3129339]
Sheridan T B. 2016. Human-robot interaction: status and challenges. Human Factors, 58(4): 525-532 [DOI: 10.1177/0018720816644364]
Shi J, Qin K J and Guo K Y. 2018. Research on automated driving classification methods and related test evaluation techniques. Auto Industry Research, (7): 30-37
石娟, 秦孔建, 郭魁元. 2018. 自动驾驶分级方法及相关测试评价技术研究. 汽车工业研究, (7): 30-37 [DOI: 10.3969/j.issn.1009-847X.2018.07.006]
Shomin M, Forlizzi J and Hollis R. 2015. Sit-to-stand assistance with a balancing mobile robot//Proceedings of 2015 IEEE International Conference on Robotics and Automation (ICRA). Seattle, USA: IEEE: 3795-3800 [DOI: 10.1109/ICRA.2015.7139727]
Shrestha N, Barik T and Parnin C. 2021. Unravel: a fluent code explorer for data wrangling//Proceedings of the 34th Annual ACM Symposium on User Interface Software and Technology. Virtual Event, USA: ACM: 198-207 [DOI: 10.1145/3472749.3474744]
Siddhant A, Goyal A and Metallinou A. 2019. Unsupervised transfer learning for spoken language understanding in intelligent agents. Proceedings of the AAAI Conference on Artificial Intelligence, 33(1): 4959-4966 [DOI: 10.1609/aaai.v33i01.33014959]
Singer P W. 2009. Wired for War: The Robotics Revolution and Conflict in the 21st Century. New York, USA: Penguin
Slade P, Tambe A and Kochenderfer M J. 2021. Multimodal sensing and intuitive steering assistance improve navigation and mobility for people with impaired vision. Science Robotics, 6(59): #eabg6594 [DOI: 10.1126/scirobotics.abg6594]
Snyder D, Garcia-Romero D, Sell G, Povey D and Khudanpur S. 2018. X-vectors: robust DNN embeddings for speaker recognition//Proceedings of 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Calgary, Canada: IEEE: 5329-5333 [DOI: 10.1109/ICASSP.2018.8461375]
Song Y P, Liu Z Q, Bi W, Yan R and Zhang M. 2020. Learning to customize model structures for few-shot dialogue generation tasks//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Online: Association for Computational Linguistics: 5832-5841 [DOI: 10.18653/v1/2020.acl-main.517]
Sotelo J, Mehri S, Kumar K, Santos J F, Kastner K, Courville A C and Bengio Y. 2017. Char2Wav: end-to-end speech synthesis//Proceedings of the 5th International Conference on Learning Representations. Toulon, France: [s.n.]
Stahnke J, Dörk M, Müller B and Thom A. 2016. Probing projections: interaction techniques for interpreting arrangements and errors of dimensionality reductions. IEEE Transactions on Visualization and Computer Graphics, 22(1): 629-638 [DOI: 10.1109/TVCG.2015.2467717]
Tanaka F, Isshiki K, Takahashi F, Uekusa M, Sei R and Hayashi K. 2015. Pepper learns together with children: development of an educational application//Proceedings of the 15th IEEE-RAS International Conference on Humanoid Robots (Humanoids). Seoul, Korea (South): IEEE: 270-275 [DOI: 10.1109/HUMANOIDS.2015.7363546]
Tomashenko N and Estève Y. 2018. Evaluation of feature-space speaker adaptation for end-to-end acoustic models//Proceedings of the 11th International Conference on Language Resources and Evaluation (LREC 2018). Miyazaki, Japan: European Language Resources Association (ELRA)
Tu Y Z, Mak M W and Chien J T. 2020. Variational domain adversarial learning with mutual information maximization for speaker verification. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 28: 2013-2024 [DOI: 10.1109/TASLP.2020.3004760]
van den Oord A, Dieleman S, Zen H, Simonyan K, Vinyals O, Graves A, Kalchbrenner N, Senior A and Kavukcuoglu K. 2016. WaveNet: a generative model for raw audio [EB/OL]. [2016-09-19]. https://arxiv.org/pdf/1609.03499.pdf
Varela-Aldás J, Guamán J, Paredes B and Chicaiza F A. 2020. Robotic cane for the visually impaired//Proceedings of the 14th International Conference on Universal Access in Human-Computer Interaction. Copenhagen, Denmark: Springer: 506-517 [DOI: 10.1007/978-3-030-49282-3_36]
Variani E, Lei X, McDermott E, Moreno I L and Gonzalez-Dominguez J. 2014. Deep neural networks for small footprint text-dependent speaker verification//Proceedings of 2014 IEEE International Conference on Acoustics, Speech and Signal Processing. Florence, Italy: IEEE: 4052-4056 [DOI: 10.1109/ICASSP.2014.6854363]
Wainer J, Feil-Seifer D J, Shell D A and Mataric M J. 2007. Embodiment and human-robot interaction: a task-based perspective//The 16th IEEE International Symposium on Robot and Human Interactive Communication. Jeju, Korea (South): IEEE: 872-877 [DOI: 10.1109/ROMAN.2007.4415207]
Wan L, Wang Q, Papir A and Moreno I L. 2018. Generalized end-to-end loss for speaker verification//Proceedings of 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Calgary, Canada: IEEE: 4879-4883 [DOI: 10.1109/ICASSP.2018.8462665]
Wang B Y, Liang Y, Xu D Z, Wang Z H and Ji J. 2021a. Design on electrohydraulic servo driving system with walking assisting control for lower limb exoskeleton robot. International Journal of Advanced Robotic Systems, 18(1): #172988142199228 [DOI: 10.1177/1729881421992286]
Wang L, Yu Z W, Guo B, Ku T and Yi F. 2017a. Moving destination prediction using sparse dataset: a mobility gradient descent approach. ACM Transactions on Knowledge Discovery from Data, 11(3): #37 [DOI: 10.1145/3051128]
Wang L H, Mohammed A and Onori M. 2014. Remote robotic assembly guided by 3D models linking to a real robot. CIRP Annals, 63(1): 1-4 [DOI: 10.1016/j.cirp.2014.03.013]
Wang L Y, Zhao J X and Zhang L J. 2021b. NavDog: robotic navigation guide dog via model predictive control and human-robot modeling//Proceedings of the 36th Annual ACM Symposium on Applied Computing. Virtual Event, Republic of Korea: Association for Computing Machinery: 815-818 [DOI: 10.1145/3412841.3442098]
Wang M N. 2019. Research on the Effect of Joint Attention Intervention of Autistic Children Based on Social Robot. Hangzhou: Zhejiang University of Technology
王蒙娜. 2019. 基于社交机器人的孤独症儿童共同注意干预效果研究. 杭州: 浙江工业大学 [DOI: 10.27463/d.cnki.gzgyu.2019.000530]
Wang R, Wang W C, DaSilva A, Huckins J F, Kelley W M, Heatherton T F and Campbell A T. 2018. Tracking depression dynamics in college students using mobile phone and wearable sensing. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2(1): #43 [DOI: 10.1145/3191775]
Wang R Y, Wang S H, Du S Y, Xiao E D, Yuan W Z and Feng C. 2020a. Real-time soft body 3D proprioception via deep vision-based sensing. IEEE Robotics and Automation Letters, 5(2): 3382-3389 [DOI: 10.1109/LRA.2020.2975709]
Wang T, Tao J H, Fu R B, Yi J Y, Wen Z Q and Zhong R X. 2020b. Spoken content and voice factorization for few-shot speaker adaptation//Proceedings of the 21st Annual Conference of the International Speech Communication Association. Shanghai, China: [s.n.]: 796-800
Wang W S, Na X X, Cao D P, Gong J W, Xi J Q, Xing Y and Wang F Y. 2020c. Decision-making in driver-automation shared control: a review and perspectives. IEEE/CAA Journal of Automatica Sinica, 7(5): 1289-1307 [DOI: 10.1109/JAS.2020.1003294]
Wang Y X, Skerry-Ryan R J, Stanton D, Wu Y H, Weiss R J, Jaitly N, Yang Z H, Xiao Y, Chen Z F, Bengio S, Le Q, Agiomyrgiannakis Y, Clark R and Saurous R A. 2017b. Tacotron: towards end-to-end speech synthesis//Proceedings of INTERSPEECH 2017. Stockholm, Sweden: ISCA: 4006-4010 [DOI: 10.21437/Interspeech.2017-1452]
Wei H B. 2022. Semi-Autonomous Teleoperated Robot System for Power Grid Operations. Guangzhou: Guangdong University of Technology
韦海彬. 2022. 面向电网作业的半自主遥操作机器人系统研究. 广州: 广东工业大学 [DOI: 10.27029/d.cnki.ggdgu.2022.000296]
Wei K, Zhang Y K, Sun S N, Xie L and Ma L. 2022a. Leveraging acoustic contextual representation by audio-textual cross-modal learning for conversational ASR [EB/OL]. [2022-07-03]. https://arxiv.org/pdf/2207.01039v1.pdf
Wei K, Zhang Y K, Sun S N, Xie L and Ma L. 2022b. Conversational speech recognition by learning conversation-level characteristics//Proceedings of 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Singapore, Singapore: IEEE: 6752-6756 [DOI: 10.1109/ICASSP43922.2022.9746884]
Wei Y T, Mei H H, Huang W Q, Wu X Y, Xu M L and Chen W. 2022c. An evolutional model for operation-driven visualization design. Journal of Visualization, 25(1): 95-110 [DOI: 10.1007/s12650-021-00784-w]
Wilk R and Johnson M J. 2014. Usability feedback of patients and therapists on a conceptual mobile service robot for inpatient and home-based stroke rehabilitation//Proceedings of the 5th IEEE RAS/EMBS International Conference on Biomedical Robotics and Biomechatronics. Sao Paulo, Brazil: IEEE: 438-443 [DOI: 10.1109/BIOROB.2014.6913816]
Witt P L, Wheeless L R and Allen M. 2004. A meta-analytical review of the relationship between teacher immediacy and student learning. Communication Monographs, 71(2): 184-207 [DOI: 10.1080/036452042000228054]
Wu C S, Socher R and Xiong C M. 2019a. Global-to-local memory pointer networks for task-oriented dialogue [EB/OL]. [2023-01-13]. https://arxiv.org/pdf/1901.04713.pdf
Wu P F, Ling Z H, Liu L J, Jiang Y, Wu H C and Dai L R. 2019b. End-to-end emotional speech synthesis using style tokens and semi-supervised training//Proceedings of 2019 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference. Lanzhou, China: IEEE: 623-627 [DOI: 10.1109/APSIPAASC47483.2019.9023186]
Wu Z Z and King S. 2016. Investigating gated recurrent networks for speech synthesis//Proceedings of 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Shanghai, China: IEEE: 5140-5144 [DOI: 10.1109/ICASSP.2016.7472657]
Xiao A X, Tong W Z, Yang L Z, Zeng J, Li Z Y and Sreenath K. 2021. Robotic guide dog: leading a human with leash-guided hybrid physical interaction//Proceedings of 2021 IEEE International Conference on Robotics and Automation (ICRA). Xi’an, China: IEEE: 11470-11476 [DOI: 10.1109/ICRA48506.2021.9561786]
Xiao H. 2020. China’s new energy vehicles enter a new phase of accelerated development. Sinopec Monthly, (11): #75
萧河. 2020. 中国新能源汽车进入加速发展新阶段. 中国石化, (11): #75 [DOI: 10.3969/j.issn.1005-457X.2020.11.025]
Xiong K, Fu S W, Ding G M, Luo Z S, Yu R, Chen W, Bao H J and Wu Y C. 2022. Visualizing the scripts of data wrangling with SOMNUS. IEEE Transactions on Visualization and Computer Graphics [DOI: 10.1109/TVCG.2022.3144975]
Xu T. 2022. Research on the Design of Human-Computer Interface in the Situation of Autonomous Driving Take-Over. Mianyang: Southwest University of Science and Technology (徐韬. 2022. 自动驾驶接管情境下的人机交互界面设计研究. 绵阳: 西南科技大学) [DOI: 10.27415/d.cnki.gxngc.2022.000665]
Xu Y. 2021. Spatial memory and visual characteristics of drivers in L2 autonomous driving scenarios. Tianjin: Tianjin Normal University
徐杨. 2021. L2自动驾驶情境下驾驶员空间记忆与视觉特征. 天津: 天津师范大学 [DOI: 10.27363/d.cnki.gtsfu.2021.000611]
Yalçın M A, Elmqvist N and Bederson B B. 2018. Keshif: rapid and expressive tabular data exploration for novices. IEEE Transactions on Visualization and Computer Graphics, 24(8): 2339-2352 [DOI: 10.1109/TVCG.2017.2723393]
Yang C Y, Zhou S R, Guo J L C and Kästner C. 2021. Subtle bugs everywhere: generating documentation for data wrangling code//Proceedings of the 36th IEEE/ACM International Conference on Automated Software Engineering (ASE). Melbourne, Australia: IEEE: 304-316 [DOI: 10.1109/ASE51524.2021.9678520]
Yang L Q, Ting K and Srivastava M B. 2014. Inferring occupancy from opportunistically available sensor data//Proceedings of 2014 IEEE International Conference on Pervasive Computing and Communications (PerCom). Budapest, Hungary: IEEE: 60-68 [DOI: 10.1109/PerCom.2014.6813945]
Yang Q Y. 2022. Motion Control Strategy and Model Optimization based on Master-Slave Teleoperation Robot. Nanning: Guangxi University
杨启业. 2022. 基于主从遥操作机器人的运动控制策略及模型优化研究. 南宁: 广西大学 [DOI: 10.27034/d.cnki.ggxiu.2022.001210]
Yang X D and Tian Y L. 2017. Super normal vector for human activity recognition with depth cameras. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(5): 1028-1039 [DOI: 10.1109/TPAMI.2016.2565479]
Yeh C F, Mahadeokar J, Kalgaonkar K, Wang Y Q, Le D, Jain M, Schubert K, Fuegen C and Seltzer M L. 2019. Transformer-transducer: end-to-end speech recognition with self-attention [EB/OL]. [2023-01-13]. https://arxiv.org/pdf/1910.12977.pdf
Yeh C K, Chen J S, Yu C Z and Yu D. 2018. Unsupervised speech recognition via segmental empirical output distribution matching [EB/OL]. [2023-01-13]. https://arxiv.org/pdf/1812.09323.pdf
Yu D, Yao K S, Su H, Li G and Seide F. 2013. KL-divergence regularized deep neural network adaptation for improved large vocabulary speech recognition//Proceedings of 2013 IEEE International Conference on Acoustics, Speech and Signal Processing. Vancouver, Canada: IEEE: 7893-7897 [DOI: 10.1109/ICASSP.2013.6639201]
Yu Z W and Wang Z. 2020. Human Behavior Analysis: Sensing and Understanding. Singapore, Singapore: Springer [DOI: 10.1007/978-981-15-2109-6]
Yuan W, Li Z J and Su C Y. 2021. Multisensor-based navigation and control of a mobile service robot. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 51(4): 2624-2634 [DOI: 10.1109/TSMC.2019.2916932]
Zhai X H, Oliver A, Kolesnikov A and Beyer L. 2019. S4L: self-supervised semi-supervised learning//Proceedings of 2019 IEEE/CVF International Conference on Computer Vision (ICCV). Seoul, Korea (South): IEEE: 1476-1485 [DOI: 10.1109/ICCV.2019.00156]
Zhang B Q, Barbareschi G, Herrera R R, Carlson T and Holloway C. 2022a. Understanding interactions for smart wheelchair navigation in crowds//Proceedings of 2022 CHI Conference on Human Factors in Computing Systems. New Orleans, USA: Association for Computing Machinery: #194 [DOI: 10.1145/3491102.3502085]
Zhang F S, Chang Z X, Niu K, Xiong J, Jin B H, Lyu Q and Zhang D Q. 2020a. Exploring LoRa for long-range through-wall sensing. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 4(2): #86 [DOI: 10.1145/3397326]
Zhang H J and Qian J. 2019. Design of telepresence mobile robot system based on ROS. Manufacturing Automation, 41(9): 111-114
张华健, 钱钧. 2019. 基于ROS的远程呈现移动机器人系统设计. 制造业自动化, 41(9): 111-114 [DOI: 10.3969/j.issn.1009-0134.2019.09.025]
Zhang K L, Liu T T, Liu Z, Zhuang Y and Chai Y J. 2020. Multimodal human-computer interactive technology for emotion regulation. Journal of Image and Graphics, 25(11): 2451-2464
张凯乐, 刘婷婷, 刘箴, 庄寅, 柴艳杰. 2020. 面向情绪调节的多模态人机交互技术. 中国图象图形学报, 25(11): 2451-2464 [DOI: 10.11834/jig.200251]
Zhang Q, Lu H, Sak H, Tripathi A, McDermott E, Koo S and Kumar S. 2020b. Transformer transducer: a streamable speech recognition model with transformer encoders and RNN-T loss//Proceedings of 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Barcelona, Spain: IEEE: 7829-7833 [DOI: 10.1109/ICASSP40776.2020.9053896]
Zhang R S, Zheng Y H, Shao J Z, Mao X X, Xi Y D and Huang M L. 2020c. Dialogue distillation: open-domain dialogue augmentation using unpaired data//Proceedings of 2020 Conference on Empirical Methods in Natural Language Processing. Virtual Event: Association for Computational Linguistics: 3449-3460 [DOI: 10.18653/v1/2020.emnlp-main.277]
Zhang X, Li W Z, Chen X and Lu S L. 2018. MoodExplorer: towards compound emotion detection via smartphone sensing. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 1(4): #176 [DOI: 10.1145/3161414]
Zhang X S. 2013. The Design and Development for Visual Programming in Mobile Devices. Taiyuan: Shanxi Normal University
张秀深. 2013. 可视化编程移动学习平台研究与实现. 太原: 山西师范大学
Zhang Y, Lyu Z Q, Wu H B, Zhang S S, Hu P F, Wu Z Y, Lee H Y and Meng H L. 2022b. MFA-conformer: multi-scale feature aggregation conformer for automatic speaker verification [EB/OL]. [2022-11-11]. https://arxiv.org/pdf/2203.15249.pdf
Zhang Y, Li Z Y, Guo H L, Wang L Y, Chen Q H, Jiang W J, Fan M M, Zhou G Y and Gong J T. 2023. “I am the follower, also the boss”: exploring different levels of autonomy and machine forms of guiding robots for the visually impaired//Proceedings of 2023 CHI Conference on Human Factors in Computing Systems. Hamburg, Germany: ACM: 1-22 [DOI: 10.1145/3544548.3580884]
Zhao Y G, Li M R, Li B H and Han B. 2010. A method and implementation of program visualization based on speculative multithreading technology. Journal of Xi’an University of Posts and Telecommunications, 15(5): 69-74
赵永刚, 李美蓉, 李保红, 韩博. 2010. 基于推测多线程技术的程序可视化方法与实现. 西安邮电学院学报, 15(5): 69-74 [DOI: 10.13682/j.issn.2095-6533.2010.05.028]
Zheng Y, Li Q N, Chen Y K, Xie X and Ma W Y. 2008. Understanding mobility based on GPS data//Proceedings of the 10th International Conference on Ubiquitous Computing. Seoul, Korea (South): ACM: 312-321 [DOI: 10.1145/1409635.1409677]
Zhong Z, Lei M Y, Cao D L, Fan J P and Li S Z. 2017. Class-specific object proposals re-ranking for object detection in automatic driving. Neurocomputing, 242: 187-194 [DOI: 10.1016/j.neucom.2017.02.068]