多源融合SLAM的现状与挑战
Review of multi-source fusion SLAM: current status and challenges
2022, Vol. 27, No. 2, Pages 368-389
Print publication date: 2022-02-16
Accepted: 2021-11-02
DOI: 10.11834/jig.210547
王金科, 左星星, 赵祥瑞, 吕佳俊, 刘勇. 多源融合SLAM的现状与挑战[J]. 中国图象图形学报, 2022,27(2):368-389.
Jinke Wang, Xingxing Zuo, Xiangrui Zhao, Jiajun Lyu, Yong Liu. Review of multi-source fusion SLAM: current status and challenges[J]. Journal of Image and Graphics, 2022,27(2):368-389.
Simultaneous localization and mapping (SLAM) technology has made remarkable progress over the past few decades and has been deployed at scale in real-world applications. Owing to insufficient accuracy and robustness, as well as the complexity of real scenes, SLAM systems that rely on a single sensor (e.g., a camera or LiDAR) often cannot meet application requirements, so researchers have progressively explored and improved multi-source fusion SLAM solutions. This paper reviews and summarizes existing methods in this field at three levels: 1) multi-sensor fusion (hybrid systems composed of two or more sensors, such as cameras, LiDAR, and inertial measurement units, which can be divided into loosely-coupled and tightly-coupled schemes); 2) multi-feature-primitive fusion (points, lines, planes, and other high-dimensional geometric features, combined with direct methods); and 3) multi-dimensional information fusion (fusion of geometric, semantic, and physical information together with information inferred by deep neural networks). Fusing an inertial measurement unit with vision or LiDAR can resolve the drift and missing-scale problems of visual odometry and improve robustness in unstructured or degenerate scenes. Moreover, fusing different geometric feature primitives can greatly reduce the degeneration of valid constraints and provides richer information for autonomous navigation tasks. In addition, data-driven deep-learning-based strategies open a new path for SLAM systems: supervised, unsupervised, and hybrid-supervised learning are gradually being applied to individual SLAM modules such as relative pose estimation, map representation, loop closure detection, and back-end optimization. Combining learned and conventional methods is an effective way to improve SLAM performance. This paper analyzes and summarizes the above multi-source fusion SLAM methods and points out their challenges and future development directions.
Simultaneous localization and mapping (SLAM) technology is widely used in mobile robot applications; it simultaneously estimates the robot's motion state and reconstructs a model (map) of the environment. The SLAM community has pushed the technique into various real-life applications, such as virtual reality, augmented reality, autonomous driving, and service robots. In complicated scenarios, SLAM systems equipped with a single sensor, such as a camera or light detection and ranging (LiDAR), often fail to meet the requirements of the targeted applications due to deficiencies in accuracy and robustness. Researchers have therefore gradually improved SLAM solutions based on multiple sensors, multiple feature primitives, and the integration of multi-dimensional information. This review surveys current methods in the multi-source fusion SLAM field at three levels: 1) multi-sensor fusion (hybrid systems with two or more kinds of sensors, such as camera, LiDAR, and inertial measurement unit (IMU), whose combination methods can be divided into two categories, loosely-coupled and tightly-coupled); 2) multi-feature-primitive fusion (points, lines, planes, other high-dimensional geometric features, and featureless direct methods); and 3) multi-dimensional information fusion (geometric, semantic, and physical information, together with information inferred from deep neural networks). The challenges and future research directions of multi-source fusion SLAM are discussed as well. Multi-source fusion systems can achieve accurate and robust state estimation and mapping, meeting the requirements of a wider variety of applications. For instance, the fusion of vision and inertial sensors can resolve the drift and missing-scale issues of visual odometry, while the fusion of LiDAR and an inertial measurement unit can improve the system's robustness, especially in unstructured or degenerate scenes. Fusing other sensors, such as sonar, radar, and GPS (global positioning system), can further extend applicability. In addition, the fusion of diverse geometric feature primitives, such as feature points, lines, curves, planes, curved surfaces, and cuboids, together with featureless direct methods, can greatly reduce the degeneration of valid constraints, which is of great importance for state estimation systems. The environmental map reconstructed with multiple feature primitives is also informative for autonomous navigation tasks. Furthermore, data-driven deep-learning-based methods, combined with classical probabilistic model-based methods, pave a new path to overcome the challenges of traditional SLAM systems. Learning-based methods (supervised learning, unsupervised learning, and hybrid supervised learning) are gradually being applied to various modules of the SLAM system, including relative pose regression, map representation, loop closure detection, and unrolled back-end optimization. Learning-based methods will further improve SLAM performance as cutting-edge research closes the gap between neural networks and classical methods. This review is organized as follows: 1) the fundamental mechanisms of multi-sensor fusion and current multi-sensor fusion methods are analyzed; 2) multi-feature-primitive fusion and multi-dimensional information fusion are surveyed; 3) the current difficulties and challenges of multi-source fusion SLAM are discussed; and 4) a summary is given at the end.
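To make the loosely-coupled versus tightly-coupled distinction concrete, the following is an illustrative sketch of a tightly-coupled visual-inertial objective in the style of sliding-window optimization-based estimators such as VINS-Mono; the symbols are generic and are not taken from this paper. All window states are refined jointly against raw-measurement residuals:

\[
\min_{\mathcal{X}}\;\big\|r_{p}-H_{p}\mathcal{X}\big\|^{2}
+\sum_{k\in\mathcal{B}}\big\|r_{\mathcal{B}}\big(\hat{z}_{b_{k}b_{k+1}},\mathcal{X}\big)\big\|^{2}_{P_{b_{k}b_{k+1}}}
+\sum_{(l,j)\in\mathcal{C}}\rho\Big(\big\|r_{\mathcal{C}}\big(\hat{z}^{c_{j}}_{l},\mathcal{X}\big)\big\|^{2}_{P^{c_{j}}_{l}}\Big),
\]

where \(\mathcal{X}\) stacks the IMU poses, velocities, biases, and feature depths in the window, \(r_{\mathcal{B}}\) and \(r_{\mathcal{C}}\) are the IMU preintegration and visual reprojection residuals weighted by their covariances, \(\rho(\cdot)\) is a robust kernel, and \((r_{p},H_{p})\) is the prior obtained from marginalization. A loosely-coupled design would instead run a vision-only (or LiDAR-only) odometry first and fuse its pose estimate with the IMU as an independent measurement, for example in an extended Kalman filter; this is simpler and more modular but discards the cross-correlations that the joint cost above exploits.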
simultaneous localization and mapping (SLAM); multi-source fusion; multi-sensor fusion; multi-feature fusion; multi-dimension information fusion