图像匹配方法研究综述
Image matching methods
2019年24卷第5期 页码:677-699
纸质出版日期: 2019-05-16
DOI: 10.11834/jig.180501
贾迪, 朱宁丹, 杨宁华, 吴思, 李玉秀, 赵明远. 图像匹配方法研究综述[J]. 中国图象图形学报, 2019,24(5):677-699.
Di Jia, Ningdan Zhu, Ninghua Yang, Si Wu, Yuxiu Li, Mingyuan Zhao. Image matching methods[J]. Journal of Image and Graphics, 2019, 24(5):677-699.
目的
图像匹配作为计算机视觉的核心任务,是后续高级图像处理的关键,如目标识别、图像拼接、3维重建、视觉定位、场景深度计算等。本文从局部不变特征点、直线、区域匹配3个方面对图像匹配方法予以综述。
方法
局部不变特征点匹配在图像匹配领域发展中最早出现,对这类方法中经典的算法本文仅予以简述,对于近年来新出现的方法予以重点介绍,尤其是基于深度学习的匹配方法,包括时间不变特征检测器(TILDE)、Quad-networks、深度卷积特征点描述符(DeepDesc)、基于学习的不变特征变换(LIFT)等。由于外点剔除类方法常用于提高局部不变点特征匹配的准确率,因此也对这类方法予以介绍,包括用于全局运动建模的双边函数(BF)、基于网格的运动统计(GMS)、向量场一致性估计(VFC)等。与局部不变特征点相比,线包含更多场景和对象的结构信息,更适用于具有重复纹理信息的像对匹配中,线匹配的研究需要克服包括端点位置不准确、线段外观不明显、线段碎片等问题,解决这类问题的方法有线带描述符(LBD)、基于上下文和表面的线匹配(CA)、基于点对应的线匹配(LP)、共面线点投影不变量法等,本文从问题解决过程的角度对这类方法予以介绍。区域匹配从区域特征提取与匹配、模板匹配两个角度对这类算法予以介绍,典型的区域特征提取与匹配方法包括最大稳定极值区域(MSER)、基于树的莫尔斯区域(TBMR),模板匹配包括快速仿射模板匹配(FAsT-Match)、彩色图像的快速仿射模板匹配(CFAST-Match)、具有变形和多样性的相似性度量(DDIS)、遮挡感知模板匹配(OATM),以及深度学习类的方法MatchNet、L2-Net、PN-Net、DeepCD等。
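上述局部不变特征点匹配方法大多遵循"检测—描述—匹配"流程,其中描述子匹配环节通常以最近邻搜索配合Lowe比率测试剔除歧义匹配。下面给出一个纯Python的最小示意(描述子为假设的玩具数据,比率阈值0.8仅作演示;实际应用中通常使用SIFT等描述子并配合高效近邻搜索):

```python
import math

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """对desc_a中的每个描述子,在desc_b中找最近邻,
    并用Lowe比率测试(最近距离须明显小于次近距离)剔除歧义匹配。"""
    def dist(u, v):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(u, v)))

    matches = []
    for i, da in enumerate(desc_a):
        # 计算到第二幅图像所有候选描述子的距离并按距离排序
        d = sorted((dist(da, db), j) for j, db in enumerate(desc_b))
        best, second = d[0], d[1]
        if best[0] < ratio * second[0]:  # 比率测试:拒绝模棱两可的匹配
            matches.append((i, best[1]))
    return matches

# 玩具4维"描述子":a0与b1明显匹配;a1存在两个相近候选,被比率测试剔除
A = [[1.0, 0.0, 0.0, 0.0], [0.5, 0.5, 0.0, 0.0]]
B = [[0.0, 1.0, 0.0, 0.0], [1.0, 0.1, 0.0, 0.0],
     [0.5, 0.45, 0.05, 0.0], [0.5, 0.44, 0.06, 0.0]]
print(match_descriptors(A, B))  # [(0, 1)]
```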
结果
本文从局部不变特征点、直线、区域3个方面对图像匹配方法进行总结对比,包括特征匹配方法中影响因素的比较、基于深度学习类匹配方法的比较等,给出这类方法对应的论文及代码下载地址,并对未来的研究方向予以展望。
结论
图像匹配是计算机视觉领域后续高级处理的基础,目前在宽基线匹配、实时匹配方面仍需进一步深入研究。
Objective
Image matching, a core task in computer vision, is the key to subsequent advanced image processing, such as object recognition, image mosaicking, 3D reconstruction, visual localization, and scene depth calculation. Although many excellent methods have been proposed by domestic and foreign scholars in this field in recent years, no comprehensive summary of image matching methods has been reported. On this basis, this study reviews these methods from three aspects, namely, locally invariant feature points, straight lines, and regions.
Method
Locally invariant feature point matching appeared earliest in the development of image matching, with classic examples such as the Harris corner detector, features from accelerated segment test (FAST), and scale-invariant feature transform (SIFT). The classical algorithms of this type are only briefly described in this paper, while recently proposed methods, especially deep learning-based matching methods, including the temporally invariant learned detector (TILDE), Quad-networks, discriminative learning of deep convolutional feature point descriptors (DeepDesc), and the learned invariant feature transform (LIFT), are introduced in detail. Outlier culling methods, including bilateral functions for global motion modeling (BF), grid-based motion statistics (GMS), and vector field consensus (VFC), are also introduced because they are often used to improve the accuracy of locally invariant feature matching. Compared with locally invariant feature points, lines contain more structural information of scenes and objects and are better suited to matching image pairs with repeated texture. Research on line matching must overcome problems such as inaccurate endpoint positions, inconspicuous line segment appearance, and segment fragmentation. Methods that address these problems include the line band descriptor (LBD), a two-view line matching algorithm based on context and appearance (CA), line matching leveraged by point correspondences (LP), and a new coplanar line-point projection invariant; this paper introduces such methods from the perspective of the problem-solving process. Region matching is introduced from two aspects: region feature extraction and matching, and template matching. Typical region feature extraction and matching methods include maximally stable extremal regions (MSER) and tree-based Morse regions (TBMR); template matching methods include fast affine template matching (FAsT-Match), fast affine template matching for color images (CFAST-Match), deformable diversity similarity (DDIS), occlusion aware template matching (OATM), and deep learning methods such as MatchNet, L2-Net, PN-Net, and DeepCD. Medical image matching is an important application in the image matching field and is significant for clinically precise diagnosis and treatment. This work introduces such methods from the viewpoint of practical applications, such as the fractional-order TV-L1 optical flow model and feature matching with learned nonlinear descriptors.
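As context for the outlier culling methods above (BF, GMS, VFC), the classic baseline they are compared against is random sample consensus (RANSAC). Its hypothesize-and-verify loop can be sketched in pure Python on a toy line-fitting problem (all data and parameter values here are synthetic illustrations, not from the surveyed experiments):

```python
import random

def ransac_line(points, iters=200, tol=0.05, seed=0):
    """Fit y = a*x + b by RANSAC: repeatedly fit a line to a random
    2-point sample and keep the model with the largest inlier set."""
    rng = random.Random(seed)
    best_inliers, best_model = [], None
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:          # vertical sample, cannot fit y = a*x + b
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        # verify: count points whose residual to the candidate line is small
        inliers = [p for p in points if abs(p[1] - (a * p[0] + b)) < tol]
        if len(inliers) > len(best_inliers):
            best_inliers, best_model = inliers, (a, b)
    return best_model, best_inliers

# 20 points on y = 2x + 1 plus 5 gross outliers (simulated mismatches)
pts = [(i / 10, 2 * i / 10 + 1) for i in range(20)] + \
      [(0.3, 9.0), (0.7, -4.0), (1.1, 7.5), (1.5, -2.0), (1.9, 8.8)]
model, inliers = ransac_line(pts)
print(model, len(inliers))
```

The same consensus idea underlies mismatch removal in feature matching, where the hypothesized model is a homography or fundamental matrix rather than a 2D line; GMS and VFC replace the random sampling loop with motion-smoothness statistics.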
Result
In the analysis and comparison of multiple image matching algorithms, a computer with a dual-core 3.4 GHz CPU and an NVIDIA GTX TITAN X GPU is used as the experimental environment. The test datasets are the Technical University of Denmark dataset and the Oxford University Graf dataset. This paper summarizes and compares these methods from three aspects, namely, locally invariant feature points, straight lines, and regions. Comparisons of the influential factors in feature matching methods, of mismatched point removal methods, between hand-crafted and learning-based descriptors, and of the matching objects and implementation forms of semantic matching methods are also presented. The corresponding papers and code download addresses of these methods are provided, and future research directions of image matching algorithms are discussed.
Conclusion
Image matching is the basis for subsequent advanced processing in the computer vision field. It is widely used in medical image analysis, satellite image processing, remote sensing image processing, and computer vision in general. At present, further research is required on wide-baseline and real-time matching.
图像匹配; 局部不变特征匹配; 直线匹配; 区域匹配; 语义匹配; 深度学习
image matching; local invariant feature matching; line matching; region matching; semantic matching; deep learning
Harris C, Stephens M. A combined corner and edge detector[C]//Proceedings of the 4th Alvey Vision Conference. Manchester: AVC, 1988: 147-151.[DOI:10.5244/C.2.23]
Rosten E, Drummond T. Machine learning for high-speed corner detection[C]//Proceedings of the 9th European Conference on Computer Vision. Graz, Austria: Springer, 2006: 430-443.[DOI:10.1007/11744023_34]
Lowe D G. Distinctive image features from scale-invariant keypoints[J]. International Journal of Computer Vision, 2004, 60(2):91-110.[DOI:10.1023/B:VISI.0000029664.99615.94]
Liu L, Zhan Y Y, Luo Y, et al. Summarization of the scale invariant feature transform[J]. Journal of Image and Graphics, 2013, 18(8):885-892.
刘立, 詹茵茵, 罗扬, 等.尺度不变特征变换算子综述[J].中国图象图形学报, 2013, 18(8):885-892.[DOI:10.11834/jig.20130801]
Xu Y X, Chen F. Recent advances in local image descriptor[J]. Journal of Image and Graphics, 2015, 20(9):1133-1150.
许允喜, 陈方.局部图像描述符最新研究进展[J].中国图象图形学报, 2015, 20(9):1133-1150.[DOI:10.11834/jig.20150901]
Zhang X H, Li B, Yang D. A novel Harris multi-scale corner detection algorithm[J]. Journal of Electronics and Information Technology, 2007, 29(7):1735-1738.
张小洪, 李博, 杨丹.一种新的Harris多尺度角点检测[J].电子与信息学报, 2007, 29(7):1735-1738.[DOI:10.3724/SP.J.1146.2005.01332]
He H Q, Huang S X. Improved algorithm for Harris rapid sub-pixel corners detection[J]. Journal of Image and Graphics, 2012, 17(7):853-857.
何海清, 黄声享.改进的Harris亚像素角点快速定位[J].中国图象图形学报, 2012, 17(7):853-857.[DOI:10.11834/jig.20120715]
Zhang L T, Huang X L, Lu L L, et al. Fast Harris corner detection based on gray difference and template[J]. Chinese Journal of Scientific Instrument, 2018, 39(2):218-224.
张立亭, 黄晓浪, 鹿琳琳, 等.基于灰度差分与模板的Harris角点检测快速算法[J].仪器仪表学报, 2018, 39(2):218-224.
Ke Y, Sukthankar R. PCA-SIFT: a more distinctive representation for local image descriptors[C]//Proceedings of 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Washington, DC: IEEE, 2004: 506-513.[DOI:10.1109/CVPR.2004.1315206]
Bay H, Tuytelaars T, Van Gool L. SURF: speeded up robust features[C]//Proceedings of the 9th European Conference on Computer Vision. Graz, Austria: Springer, 2006: 404-417.[DOI:10.1007/11744023_32]
Liu L, Peng F Y, Zhao K, et al. Simplified SIFT algorithm for fast image matching[J]. Infrared and Laser Engineering, 2008, 37(1):181-184.
刘立, 彭复员, 赵坤, 等.采用简化SIFT算法实现快速图像匹配[J].红外与激光工程, 2008, 37(1):181-184.[DOI:10.3969/j.issn.1007-2276.2008.01.042]
Abdel-Hakim A E, Farag A A. CSIFT: a SIFT descriptor with color invariant characteristics[C]//Proceedings of 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. New York, NY: IEEE, 2006: 1978-1983.[DOI:10.1109/CVPR.2006.95]
Mikolajczyk K, Schmid C. A performance evaluation of local descriptors[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2005, 27(10):1615-1630.[DOI:10.1109/TPAMI.2005.188]
Morel J M, Yu G S. ASIFT: a new framework for fully affine invariant image comparison[J]. SIAM Journal on Imaging Sciences, 2009, 2(2):438-469.[DOI:10.1137/080732730]
Rosten E, Porter R, Drummond T. Faster and better: a machine learning approach to corner detection[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2010, 32(1):105-119.[DOI:10.1109/TPAMI.2008.275]
Verdie Y, Yi K M, Fua P, et al. TILDE: a temporally invariant learned DEtector[C]//Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition. Boston, MA: IEEE, 2015: 5279-5288.[DOI:10.1109/CVPR.2015.7299165]
Zhang X, Yu F X, Karaman S, et al. Learning discriminative and transformation covariant local feature detectors[C]//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, HI: IEEE, 2017: 4923-4931.[DOI:10.1109/CVPR.2017.523]
Savinov N, Seki A, Ladicky L, et al. Quad-networks: unsupervised learning to rank for interest point detection[C]//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, HI: IEEE, 2017: 3929-3937.[DOI:10.1109/CVPR.2017.418]
Simo-Serra E, Trulls E, Ferraz L, et al. Discriminative learning of deep convolutional feature point descriptors[C]//Proceedings of 2015 IEEE International Conference on Computer Vision. Santiago, Chile: IEEE, 2015: 118-126.[DOI:10.1109/ICCV.2015.22]
Yi K M, Trulls E, Lepetit V, et al. LIFT: learned invariant feature transform[C]//Proceedings of the 14th European Conference on Computer Vision. Amsterdam, The Netherlands: Springer, 2016: 467-483.[DOI:10.1007/978-3-319-46466-4_28]
Jaderberg M, Simonyan K, Zisserman A, et al. Spatial transformer networks[C]//Proceedings of the 28th International Conference on Neural Information Processing Systems. Montreal, Canada: ACM, 2015: 2017-2025.
Yi K M, Verdie Y, Fua P, et al. Learning to assign orientations to feature points[C]//Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, NV: IEEE, 2016: 107-116.[DOI:10.1109/CVPR.2016.19]
Liu C, Yuen J, Torralba A. SIFT flow: dense correspondence across scenes and its applications[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2011, 33(5):978-994.[DOI:10.1109/TPAMI.2010.147]
Bristow H, Valmadre J, Lucey S. Dense semantic correspondence where every pixel is a classifier[C]//Proceedings of 2015 IEEE International Conference on Computer Vision. Santiago, Chile: IEEE, 2015: 4024-4031.[DOI:10.1109/ICCV.2015.458]
Novotny D, Larlus D, Vedaldi A. AnchorNet: a weakly supervised network to learn geometry-sensitive features for semantic matching[C]//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, HI: IEEE, 2017: 2867-2876.[DOI:10.1109/CVPR.2017.306]
Kar A, Tulsiani S, Carreira J, et al. Category-specific object reconstruction from a single image[C]//Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition. Boston, MA: IEEE, 2015: 1966-1974.[DOI:10.1109/CVPR.2015.7298807]
Thewlis J, Bilen H, Vedaldi A. Unsupervised learning of object landmarks by factorized spatial embeddings[C]//Proceedings of 2017 IEEE International Conference on Computer Vision. Venice, Italy: IEEE, 2017: 3229-3238.[DOI:10.1109/ICCV.2017.348]
Wang Q Q, Zhou X W, Daniilidis K. Multi-image semantic matching by mining consistent features[C]//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, UT: IEEE, 2018: 685-694.[DOI:10.1109/CVPR.2018.00078]
Yu D D, Yang F, Yang C Y, et al. Fast rotation-free feature-based image registration using improved N-SIFT and GMM-based parallel optimization[J]. IEEE Transactions on Biomedical Engineering, 2016, 63(8):1653-1664.[DOI:10.1109/TBME.2015.2465855]
Pock T, Urschler M, Zach C, et al. A duality based algorithm for TV-L1-optical-flow image registration[C]//Proceedings of the 10th International Conference on Medical Image Computing and Computer-Assisted Intervention. Brisbane, Australia: Springer, 2007: 511-518.[DOI:10.1007/978-3-540-75759-7_62]
Zhang G M, Sun X X, Liu J X, et al. Research on TV-L1 optical flow model for image registration based on fractional-order differentiation[J]. Acta Automatica Sinica, 2017, 43(12):2213-2224.
张桂梅, 孙晓旭, 刘建新, 等.基于分数阶微分的TV-L1光流模型的图像配准方法研究[J].自动化学报, 2017, 43(12):2213-2224.[DOI:10.16383/j.aas.2017.c160367]
Lu X S, Tu S X, Zhang S. A metric method using multidimensional features for nonrigid registration of medical images[J]. Acta Automatica Sinica, 2016, 42(9):1413-1420.
陆雪松, 涂圣贤, 张素.一种面向医学图像非刚性配准的多维特征度量方法[J].自动化学报, 2016, 42(9):1413-1420.[DOI:10.16383/j.aas.2016.c150608]
Yang W, Zhong L M, Chen Y, et al. Predicting CT image from MRI data through feature matching with learned nonlinear local descriptors[J]. IEEE Transactions on Medical Imaging, 2018, 37(4):977-987.[DOI:10.1109/TMI.2018.2790962]
Cao X H, Yang J H, Gao Y Z, et al. Region-adaptive deformable registration of CT/MRI pelvic images via learning-based image synthesis[J]. IEEE Transactions on Image Processing, 2018, 27(7):3500-3512.[DOI:10.1109/TIP.2018.2820424]
He M M, Guo Q, Li A, et al. Automatic fast feature-level image registration for high-resolution remote sensing images[J]. Journal of Remote Sensing, 2018, 22(2):277-292.
何梦梦, 郭擎, 李安, 等.特征级高分辨率遥感图像快速自动配准[J].遥感学报, 2018, 22(2):277-292.[DOI:10.11834/jrs.20186420]
Fischler M A, Bolles R C. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography[J]. Communications of the ACM, 1981, 24(6):381-395.[DOI:10.1145/358669.358692]
Torr P H S, Zisserman A. MLESAC: a new robust estimator with application to estimating image geometry[J]. Computer Vision and Image Understanding, 2000, 78(1):138-156.[DOI:10.1006/cviu.1999.0832]
Li X R, Hu Z Y. Rejecting mismatches by correspondence function[J]. International Journal of Computer Vision, 2010, 89(1):1-17.[DOI:10.1007/s11263-010-0318-x]
Liu H R, Yan S C. Common visual pattern discovery via spatially coherent correspondences[C]//Proceedings of 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. San Francisco, CA: IEEE, 2010: 1609-1616.[DOI:10.1109/CVPR.2010.5539780]
Liu H R, Yan S C. Robust graph mode seeking by graph shift[C]//Proceedings of the 27th International Conference on International Conference on Machine Learning. Haifa, Israel: ACM, 2010: 671-678.
Lin W Y D, Cheng M M, Lu J B, et al. Bilateral functions for global motion modeling[C]//Proceedings of the 13th European Conference on Computer Vision. Zurich, Switzerland: Springer, 2014: 341-356.[DOI:10.1007/978-3-319-10593-2_23]
Bian J W, Lin W Y, Matsushita Y, et al. GMS: grid-based motion statistics for fast, ultra-robust feature correspondence[C]//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, HI: IEEE, 2017: 2828-2837.[DOI:10.1109/CVPR.2017.302]
Chen F J, Han J, Wang Z W, et al. Image registration algorithm based on improved GMS and weighted projection transformation[J]. Laser & Optoelectronics Progress, 2018, 55(11):111006.
陈方杰, 韩军, 王祖武, 等.基于改进GMS和加权投影变换的图像配准算法[J].激光与光电子学进展, 2018, 55(11):111006.
Ma J Y, Zhao J, Tian J W, et al. Robust point matching via vector field consensus[J]. IEEE Transactions on Image Processing, 2014, 23(4):1706-1721.[DOI:10.1109/TIP.2014.2307478]
Aronszajn N. Theory of reproducing kernels[J]. Transactions of the American Mathematical Society, 1950, 68(3):337-404.[DOI:10.2307/1990404]
Charles R Q, Su H, Mo K, et al. PointNet: deep learning on point sets for 3D classification and segmentation[C]//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, HI: IEEE, 2017: 77-85.[DOI:10.1109/CVPR.2017.16]
Qi C R, Yi L, Su H, et al. PointNet++: deep hierarchical feature learning on point sets in a metric space[C]//Proceedings of the 31st Conference on Neural Information Processing Systems. Long Beach, CA: ACM, 2017.
Deng H W, Birdal T, Ilic S. PPFNet: global context aware local features for robust 3D point matching[C]//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, UT: IEEE, 2018.[DOI:10.1109/CVPR.2018.00028]
Zhou L, Zhu S Y, Luo Z X, et al. Learning and matching multi-view descriptors for registration of point clouds[C]//Proceedings of the 15th European Conference on Computer Vision. Munich, Germany: Springer, 2018.[DOI:10.1007/978-3-030-01267-0_31]
Wang H Y, Guo J W, Yan D M, et al. Learning 3D keypoint descriptors for non-rigid shape matching[C]//Proceedings of the 15th European Conference on Computer Vision. Munich, Germany: Springer, 2018.[DOI:10.1007/978-3-030-01237-3_1]
Georgakis G, Karanam S, Wu Z Y, et al. End-to-end learning of keypoint detector and descriptor for pose invariant 3D matching[C]//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, UT: IEEE, 2018.[DOI:10.1109/CVPR.2018.00210]
Ren S Q, He K M, Girshick R, et al. Faster R-CNN: towards real-time object detection with region proposal networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6):1137-1149.[DOI:10.1109/TPAMI.2016.2577031]
Wang Z H, Wu F C, Hu Z Y. MSLD: a robust descriptor for line matching[J]. Pattern Recognition, 2009, 42(5):941-953.[DOI:10.1016/j.patcog.2008.08.035]
Wang J X, Zhang X, Zhu H, et al. MSLD descriptor combined regional affine transformation and straight line matching[J]. Journal of Signal Processing, 2018, 34(2):183-191.
王竞雪, 张雪, 朱红, 等.结合区域仿射变换的MSLD描述子与直线段匹配[J].信号处理, 2018, 34(2):183-191.[DOI:10.16798/j.issn.1003-0530.2018.02.008]
Zhang L L, Koch R. An efficient and robust line segment matching approach based on LBD descriptor and pairwise geometric consistency[J]. Journal of Visual Communication and Image Representation, 2013, 24(7):794-805.[DOI:10.1016/j.jvcir.2013.05.006]
Wang L, Neumann U, You S Y. Wide-baseline image matching using line signatures[C]//Proceedings of the 12th International Conference on Computer Vision. Kyoto, Japan: IEEE, 2009: 1311-1318.[DOI:10.1109/ICCV.2009.5459316]
López J, Santos R, Fdez-Vidal X R, et al. Two-view line matching algorithm based on context and appearance in low-textured images[J]. Pattern Recognition, 2015, 48(7):2164-2184.[DOI:10.1016/j.patcog.2014.11.018]
Fan B, Wu F C, Hu Z Y. Line matching leveraged by point correspondences[C]//Proceedings of 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. San Francisco, CA: IEEE, 2010: 390-397.[DOI:10.1109/CVPR.2010.5540186]
Fan B, Wu F C, Hu Z Y. Robust line matching through line-point invariants[J]. Pattern Recognition, 2012, 45(2):794-805.[DOI:10.1016/j.patcog.2011.08.004]
Lourakis M I A, Halkidis S T, Orphanoudakis S C. Matching disparate views of planar surfaces using projective invariants[J]. Image and Vision Computing, 2000, 18(9):673-683.[DOI:10.1016/S0262-8856(99)00071-2]
Jia Q, Gao X K, Fan X, et al. Novel coplanar line-points invariants for robust line matching across views[C]//Proceedings of the 14th European Conference on Computer Vision. Amsterdam, The Netherlands: Springer, 2016: 599-611.[DOI:10.1007/978-3-319-46484-8_36]
Luo Z X, Zhou X C, Gu D X. From a projective invariant to some new properties of algebraic hypersurfaces[J]. Science China Mathematics, 2014, 57(11):2273-2284.[DOI:10.1007/s11425-014-4877-0]
Ouyang H, Fan D Z, Ji S, et al. Line matching based on discrete description and conjugate point constraint[J]. Acta Geodaetica et Cartographica Sinica, 2018, 47(10):1363-1371.
欧阳欢, 范大昭, 纪松, 等.结合离散化描述与同名点约束的线特征匹配[J].测绘学报, 2018, 47(10):1363-1371.[DOI:10.11947/j.AGCS.2018.20170231]
Matas J, Chum O, Urban M, et al. Robust wide baseline stereo from maximally stable extremal regions[C]//Proceedings of the 13th British Machine Vision Conference. Cardiff: BMVC, 2002: 1041-1044.
Nistér D, Stewénius H. Linear time maximally stable extremal regions[C]//Proceedings of the 10th European Conference on Computer Vision. Marseille, France: Springer, 2008: 183-196.[DOI:10.1007/978-3-540-88688-4_14]
Elnemr H A. Combining SURF and MSER along with color features for image retrieval system based on bag of visual words[J]. Journal of Computer Science, 2016, 12(4):213-222.[DOI:10.3844/jcssp.2016.213.222]
Mo H Y, Wang Z P. A feature detection method combined MSER and SIFT[J]. Journal of Donghua University:Natural Science, 2011, 37(5):624-628.
莫会宇, 王祝萍.一种结合MSER与SIFT算子的特征检测方法[J].东华大学学报:自然科学版, 2011, 37(5):624-628.[DOI:10.3969/j.issn.1671-0444.2011.05.017]
Xu Y C, Monasse P, Géraud T, et al. Tree-based Morse regions:a topological approach to local feature detection[J]. IEEE Transactions on Image Processing, 2014, 23(12):5612-5625.[DOI:10.1109/TIP.2014.2364127]
Korman S, Reichman D, Tsur G, et al. FasT-Match: fast affine template matching[C]//Proceedings of 2013 IEEE Conference on Computer Vision and Pattern Recognition. Portland, OR: IEEE, 2013: 2331-2338.[DOI:10.1109/CVPR.2013.302]
Jia D, Cao J, Song W D, et al. Colour FAST (CFAST) match: fast affine template matching for colour images[J]. Electronics Letters, 2016, 52(14):1220-1221.[DOI:10.1049/el.2016.1331]
Jia D, Yang N H, Sun J G. Template selection and matching algorithm for image matching[J]. Journal of Image and Graphics, 2017, 22(11):1512-1520.
贾迪, 杨宁华, 孙劲光.像对匹配的模板选择与匹配[J].中国图象图形学报, 2017, 22(11):1512-1520.[DOI:10.11834/jig.170156]
Dekel T, Oron S, Rubinstein M, et al. Best-buddies similarity for robust template matching[C]//Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition. Boston, MA: IEEE, 2015: 2021-2029.[DOI:10.1109/CVPR.2015.7298813]
Oron S, Dekel T, Xue T F, et al. Best-buddies similarity-robust template matching using mutual nearest neighbors[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 40(8):1799-1813.[DOI:10.1109/TPAMI.2017.2737424]
Wang G, Sun X L, Shang Y, et al. A robust template matching algorithm based on best-buddies similarity[J]. Acta Optica Sinica, 2017, 37(3):274-280.
王刚, 孙晓亮, 尚洋, 等.一种基于最佳相似点对的稳健模板匹配算法[J].光学学报, 2017, 37(3):274-280.[DOI:10.3788/aos201737.0315003]
Talmi I, Mechrez R, Zelnik-Manor L. Template matching with deformable diversity similarity[C]//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, HI: IEEE, 2017: 1311-1319.[DOI:10.1109/CVPR.2017.144]
Talker L, Moses Y, Shimshoni I. Efficient sliding window computation for NN-based template matching[C]//Proceedings of the 15th European Conference on Computer Vision. Munich, Germany: Springer, 2018: 409-424.[DOI:10.1007/978-3-030-01249-6_25]
Korman S, Soatto S, Milam M. OATM: occlusion aware template matching by consensus set maximization[C]//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, UT: IEEE, 2018.[DOI:10.1109/CVPR.2018.00283]
Kat R, Jevnisek R J, Avidan S. Matching pixels using co-occurrence statistics[C]//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, UT: IEEE, 2018.[DOI:10.1109/CVPR.2018.00188]
Han X F, Leung T, Jia Y Q, et al. MatchNet: unifying feature and metric learning for patch-based matching[C]//Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition. Boston, MA: IEEE, 2015: 3279-3286.[DOI:10.1109/CVPR.2015.7298948]
Zagoruyko S, Komodakis N. Learning to compare image patches via convolutional neural networks[C]//Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition. Boston, MA: IEEE, 2015: 4353-4361.[DOI:10.1109/CVPR.2015.7299064]
Fan D Z, Dong Y, Zhang Y S. Satellite image matching method based on deep convolution neural network[J]. Acta Geodaetica et Cartographica Sinica, 2018, 47(6):844-853.
范大昭, 董杨, 张永生.卫星影像匹配的深度卷积神经网络方法[J].测绘学报, 2018, 47(6):844-853.[DOI:10.11947/j.AGCS.2018.20170627]
Balntas V, Johns E, Tang L L, et al. PN-Net: conjoined triple deep network for learning local image descriptors[EB/OL]. [2018-08-09]. https://arxiv.org/pdf/1601.05030.pdf
Yang T Y, Hsu J H, Lin Y Y, et al. DeepCD: learning deep complementary descriptors for patch representations[C]//Proceedings of 2017 IEEE International Conference on Computer Vision. Venice, Italy: IEEE, 2017: 3334-3342.[DOI:10.1109/ICCV.2017.359]
Tian Y R, Fan B, Wu F C. L2-Net: deep learning of discriminative patch descriptor in Euclidean space[C]//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, HI: IEEE, 2017: 6128-6136.[DOI:10.1109/CVPR.2017.649]