Analysis of quality objective assessment metrics for visible and infrared image fusion
2023, Vol. 28, No. 1: 144-155
Print publication date: 2023-01-16
Accepted: 2022-03-17
DOI: 10.11834/jig.210719
Bin Sun, Yunxiang Gao, Wuwei Zhuge, Zixuan Wang. Analysis of quality objective assessment metrics for visible and infrared image fusion[J]. Journal of Image and Graphics, 2023,28(1):144-155.
Objective
Objective assessment, an important research area in image fusion, is a powerful tool for evaluating the performance of fusion algorithms. Dozens of metrics of different types already exist, yet many application fields, including visible and infrared image fusion, still lack a unified basis for choosing among them. To facilitate the comparison of different fusion algorithms, this paper proposes a general analysis method for objective assessment metrics and applies it to visible and infrared image fusion.
Method
The objective assessment metrics in the visible and infrared image benchmark dataset are divided into two categories: metrics based on the fused image alone, and metrics based on both the source images and the fused image. The Kendall correlation coefficient is used to analyze the correlation between fusion metrics, and the metrics are clustered into groups accordingly. The Borda count is used to derive an overall ranking of the algorithms; the correlation between each single-metric ranking and the overall ranking is then analyzed to obtain a set of metrics with high consistency. The coefficient of variation is used to analyze how strongly each metric's mean fluctuates across different algorithms, so as to select metrics that fully reflect the differences between algorithms. Combining the correlation analysis, consistency analysis, and coefficient-of-variation analysis yields a representative set of recommended metrics.
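The correlation-and-clustering step can be sketched in plain Python. This is a minimal illustration rather than the paper's code: the function names, the greedy grouping strategy, and the 0.8 threshold are assumptions for demonstration; metrics whose pairwise Kendall coefficient reaches the threshold land in the same group.

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall rank correlation between two equal-length score lists
    (no tie correction, kept simple for clarity)."""
    n = len(x)
    concordant = discordant = 0
    for i, j in combinations(range(n), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

def group_metrics(scores, threshold=0.8):
    """Greedily cluster metrics: a metric joins a group when its tau with
    every member already in that group reaches the threshold.
    scores: dict metric_name -> list of values, one per fusion algorithm."""
    groups = []
    for name in scores:
        for g in groups:
            if all(kendall_tau(scores[name], scores[m]) >= threshold for m in g):
                g.append(name)
                break
        else:
            groups.append([name])
    return groups
```

Metrics in the same group rank algorithms almost identically, so keeping one representative per group avoids redundant evaluation.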
Result
Objective assessment data for different image fusion algorithms are statistically analyzed on two groups of source images: 13 pairs of color visible and infrared images and 8 pairs of grayscale visible and infrared images. The resulting recommended metric set for visible and infrared image fusion (standard deviation and edge preservation) serves as an important reference for evaluating fusion algorithm performance. Compared with existing methods, the experiments cover 20 fusion algorithms and 13 objective assessment metrics and do not rely on subjective evaluation results.
Conclusion
For visible and infrared image fusion, an objective assessment metric analysis method based on statistical analysis is proposed. The method can be generalized to other image fusion applications to guide the selection of representative objective assessment metrics.
Objective
As a research branch in the field of image fusion, objective assessment metrics overcome the shortcomings of subjective evaluation, which is easily affected by psychological interference, the surrounding environment, and individual visual characteristics. They can be used to evaluate algorithms and tune design parameters, and the advantages of a proposed algorithm can be demonstrated through them. However, there is still a lack of benchmarks and agreed-upon metrics in many application fields, including visible and infrared image fusion; in practice, a handful of metrics is often selected based on prior experience. To facilitate the comparative analysis of different fusion algorithms, this research focuses on a general selection method for objective assessment metrics and a recommended metric set for the fusion of visible and infrared images.
Method
A new selection method for objective assessment metrics is built, consisting of three parts: 1) correlation analysis, 2) consistency analysis, and 3) dispersion analysis. The Kendall correlation coefficient is used to perform correlation analysis across all metrics, and the metrics are clustered by coefficient value: if the Kendall coefficient between two metrics is higher than a threshold, the two metrics are placed in the same group. The Borda count is used for consistency analysis. Each metric induces a ranking of all algorithms, and an overall ranking is generated by the Borda count from these single-metric rankings. The correlation coefficient between each single ranking and the overall ranking then measures that metric's consistency: the higher the coefficient, the more consistent the metric. A good metric should clearly reflect the fusion quality of different algorithms, so its value should fluctuate noticeably across algorithms of differing quality; in our experiments, this varied quality comes from running multiple algorithms. Because different metrics use different measurement scales, the coefficient of variation, which normalizes the fluctuation by the mean, is used to quantify this dispersion. The final selected metric set therefore has three characteristics: 1) high consistency, 2) high coefficient of variation, and 3) membership in different groups.
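The consistency and dispersion steps above can likewise be sketched in plain Python. This is a minimal illustration under the assumption that a higher metric value always means a better fusion (some real metrics invert this and would need their scores negated first); the function names are hypothetical.

```python
def borda_overall_ranking(metric_scores):
    """Borda-count consensus ranking.
    metric_scores: dict metric_name -> list of scores, one per algorithm,
    higher assumed better. Each metric ranks the algorithms and awards
    Borda points (n-1 for first place down to 0); the overall ranking
    orders algorithms by total points."""
    n = len(next(iter(metric_scores.values())))
    points = [0] * n
    for scores in metric_scores.values():
        order = sorted(range(n), key=lambda a: scores[a], reverse=True)
        for rank, alg in enumerate(order):
            points[alg] += n - 1 - rank
    return sorted(range(n), key=lambda a: points[a], reverse=True)

def coefficient_of_variation(values):
    """Standard deviation divided by the mean: a scale-free measure of how
    strongly a metric's value fluctuates across algorithms."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return var ** 0.5 / mean
```

A metric whose single ranking correlates strongly with the Borda overall ranking is consistent; a metric with a large coefficient of variation separates algorithms clearly despite its own measurement scale.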
Result
The experiments are conducted on the visible and infrared fusion benchmark (VIFB) dataset and are divided into two groups according to whether the visible source images are grayscale or RGB color. The recommended objective assessment metric sets are as follows. Color visible and infrared image fusion: {standard deviation (SD), $Q^{AB/F}$} or {SD, $Q_{\mathrm{CB}}$}; gray visible and infrared image fusion: {SD, $Q^{AB/F}$} or {$Q_{\mathrm{CB}}$}. For color visible and infrared image fusion, $Q^{AB/F}$ and $Q_{\mathrm{CB}}$ both showed good consistency and coefficient of variation within the same group, so it makes little difference which of the two is chosen. Combining the results of the two sets of experiments, {SD, $Q^{AB/F}$} is recommended for visible and infrared image fusion. SD evaluates the contrast of the fused image, which intuitively reflects fusion quality, while $Q^{AB/F}$ evaluates how well edge details are preserved. Comparative analysis shows that the algorithms selected by these two metrics agree with individual subjective evaluation results, so the metric set chosen by our method can serve as a basis for evaluating the performance of visible and infrared fusion algorithms. Compared with existing methods, this approach covers more fusion algorithms and objective assessment metrics and does not rely on subjective evaluation results.
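Of the two recommended metrics, SD is simple enough to state exactly: it is the standard deviation of the fused image's pixel intensities, a proxy for contrast. A minimal sketch follows; the nested-list grayscale representation and the function name are illustrative assumptions, not the benchmark's implementation.

```python
def image_sd(img):
    """SD fusion metric: standard deviation of the pixel intensities of
    the fused image. img is a 2-D nested list of grayscale values;
    a higher SD usually indicates richer contrast."""
    pixels = [p for row in img for p in row]
    mean = sum(pixels) / len(pixels)
    return (sum((p - mean) ** 2 for p in pixels) / len(pixels)) ** 0.5
```

$Q^{AB/F}$, by contrast, compares gradient strength and orientation between each source image and the fused image and requires edge operators, so it is not reproduced here.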
Conclusion
A general selection method for objective assessment metrics is proposed. It applies not only to visible and infrared image fusion but also to image fusion in other scenarios, and can quickly screen out the most representative objective assessment metrics for a given scene. Based on the visible and infrared image fusion benchmark, the recommended representative metrics for visible and infrared image fusion are SD and $Q^{AB/F}$.
Keywords: image fusion; objective assessment metrics; correlation analysis; consistency analysis; coefficient of variation
Chen H and Varshney P K. 2007. A human perception inspired quality metric for image fusion based on regional information. Information Fusion, 8(2): 193-207 [DOI: 10.1016/j.inffus.2005.10.001]
Chen M S. 2016. Image fusion of visual and infrared image based on NSCT and compressed sensing. Journal of Image and Graphics, 21(1): 39-44 [DOI: 10.11834/jig.20160105]
Chen Y and Blum R S. 2009. A new automated quality assessment algorithm for image fusion. Image and Vision Computing, 27(10): 1421-1432 [DOI: 10.1016/j.imavis.2007.12.002]
Emerson P. 2013. The original Borda count and partial voting. Social Choice and Welfare, 40(2): 353-358 [DOI: 10.1007/s00355-011-0603-9]
Gong R and Wang X C. 2019. Infrared and visible image fusion based on BEMD and W-transform. Journal of Image and Graphics, 24(6): 987-999 [DOI: 10.11834/jig.180530]
Li H and Wu X J. 2019. DenseFuse: a fusion approach to infrared and visible images. IEEE Transactions on Image Processing, 28(5): 2614-2623 [DOI: 10.1109/TIP.2018.2887342]
Li S T, Yang B and Hu J W. 2011. Performance comparison of different multi-resolution transforms for image fusion. Information Fusion, 12(2): 74-84 [DOI: 10.1016/j.inffus.2010.03.002]
Liu Z, Blasch E, Bhatnagar G, John V, Wu W and Blum R S. 2018. Fusing synergistic information from multi-sensor images: an overview from implementation to performance assessment. Information Fusion, 42: 127-145 [DOI: 10.1016/j.inffus.2017.10.010]
Liu Z, Blasch E, Xue Z Y, Zhao J Y, Laganiere R and Wu W. 2012. Objective assessment of multiresolution image fusion algorithms for context enhancement in night vision: a comparative study. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(1): 94-109 [DOI: 10.1109/TPAMI.2011.109]
Liu Z W, Luo X Q and Zhang Z C. 2020. Multi-focus image fusion with a self-learning fusion rule. Journal of Image and Graphics, 25(8): 1637-1648 [DOI: 10.11834/jig.190614]
Ma J Y, Liang P W, Yu W, Chen C, Guo X J, Wu J and Jiang J J. 2020. Infrared and visible image fusion via detail preserving adversarial learning. Information Fusion, 54: 85-98 [DOI: 10.1016/j.inffus.2019.07.005]
Ma J Y, Ma Y and Li C. 2019. Infrared and visible image fusion methods and applications: a survey. Information Fusion, 45: 153-178 [DOI: 10.1016/j.inffus.2018.02.004]
Martinez J, Pistonesi S, Maciel M C and Flesia A G. 2019. Multi-scale fidelity measure for image fusion quality assessment. Information Fusion, 50: 197-211 [DOI: 10.1016/j.inffus.2019.01.003]
Qu G H, Zhang D L and Yan P F. 2002. Information measure for performance of image fusion. Electronics Letters, 38(7): 313-315 [DOI: 10.1049/el:20020212]
Xydeas C S and Petrović V. 2000. Objective image fusion performance measure. Electronics Letters, 36(4): 308-309 [DOI: 10.1049/el:20000267]
Yang M H, Cao Y D, Tan L, Zhang C Y and Yu J. 2007. A new multi-quality image fusion method in visual sensor network//Proceedings of the 3rd International Conference on Intelligent Information Hiding and Multimedia Signal Processing. Kaohsiung, China: IEEE: 667-670 [DOI: 10.1109/IIH-MSP.2007.42]
Yang Y C, Li J and Wang Y P. 2018. Review of image fusion quality evaluation methods. Journal of Frontiers of Computer Science and Technology, 12(7): 1021-1035 [DOI: 10.3778/j.issn.1673-9418.1710001]
Zhang X C, Ye P and Xiao G. 2020. VIFB: a visible and infrared image fusion benchmark//Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. Seattle, USA: IEEE: 468-478 [DOI: 10.1109/CVPRW50498.2020.00060]
Zhang X L, Li X F and Li J. 2014. Validation and correlation analysis of metrics for evaluating performance of image fusion. Acta Automatica Sinica, 40(2): 306-315 [DOI: 10.3724/SP.J.1004.2014.00306]
Zheng Y F, Essock E A, Hansen B C and Haun A M. 2007. A new metric based on extended spatial frequency and its application to DWT based fusion algorithms. Information Fusion, 8(2): 177-192 [DOI: 10.1016/j.inffus.2005.04.003]