联合多重对抗与通道注意力的高安全性图像隐写
High-security image steganography with the combination of multiple competition and channel attention
2024年29卷第2期 页码: 355-368
纸质出版日期: 2024-02-16
DOI: 10.11834/jig.230134
马宾, 李坤, 徐健, 王春鹏, 李健, 张立伟. 2024. 联合多重对抗与通道注意力的高安全性图像隐写. 中国图象图形学报, 29(02):0355-0368
Ma Bin, Li Kun, Xu Jian, Wang Chunpeng, Li Jian, Zhang Liwei. 2024. High-security image steganography with the combination of multiple competition and channel attention. Journal of Image and Graphics, 29(02):0355-0368
目的
现有基于对抗图像的隐写算法大多只能针对一种隐写分析器设计对抗图像,且无法抵御隐写分析残差网络(steganalysis residual network,SRNet)、Zhu-Net等最新基于卷积神经网络隐写分析器的检测。针对这一现状,提出了一种联合多重对抗与通道注意力的高安全性图像隐写方法。
方法
采用基于U-Net结构的生成对抗网络生成对抗样本图像,利用对抗网络的自学习特性实现多重对抗隐写网络参数迭代优化,并通过针对多种隐写分析算法的对抗训练,生成更适合内容隐写的载体图像。同时,通过在生成器中添加多个轻量级通道注意力模块,自适应调整对抗噪声在原始图像中的分布,提高生成对抗图像的抗隐写分析能力。其次,设计基于多重判别损失和均方误差损失相结合的动态加权组合方案,进一步增强对抗图像质量,并保障网络快速稳定收敛。
结果
实验在BOSS Base 1.01数据集上与当前主流的4种方法进行比较,在使用原始隐写图像训练后,相比于基于 U-Net 结构的生成式多重对抗隐写算法等其他4种方法,使得当前性能优异的5种隐写分析器平均判别准确率降低了1.6%;在使用对抗图像和增强隐写图像再训练后,相比其他4种方法,仍使得当前性能优异的5种隐写分析器平均判别准确率降低了6.8%。同时也对对抗图像质量进行分析,基于测试集生成的2 000幅对抗图像的平均峰值信噪比(peak signal-to-noise ratio,PSNR)可达到39.925 1 dB,实验结果表明本文提出的隐写网络极大提升了隐写算法的安全性。
结论
本文方法在隐写算法安全性领域取得了较优秀的性能,且生成的对抗图像具有很高的视觉质量。
Objective
The advancement of current steganographic techniques faces many challenges. Methods that hide secret information by modifying the original image leave detectable traces, rendering them susceptible to detection by steganalyzers. Coverless steganography improves security but has limitations, such as small embedding capacity, the need for a large image database, and difficulty in extracting the secret information. Cover-image generative steganography likewise produces small and unnatural generated images. Introducing adversarial examples provides a new approach to address these limitations: subtle perturbations are added to the original image to form an adversarial image that is visually indistinguishable from the original yet causes a classifier to output wrong results with high confidence, thereby enhancing the security of image steganography. However, most existing steganographic algorithms based on adversarial examples can only design adversarial samples against a single steganalyzer, leaving them vulnerable to the latest convolutional neural network-based steganalyzers, such as SRNet and Zhu-Net. In response to this problem, a high-security image steganography method with the combination of multiple competition and channel attention is proposed in this study.
Method
In the proposed method, we generate the adversarial noise V using the generator G, which employs the U-Net architecture with added channel-attention modules. The adversarial noise V is then added to the original image X to obtain the adversarial image. The pixel-space minimum mean square error loss MSE_loss is adopted to train the generator network G; thus, high-quality and semantically meaningful adversarial images are generated. Then, we generate the stego image from the original image X using the steganography network (SN) and input the original image X and its corresponding stego image into the steganalysis optimization network to optimize its parameters. Moreover, we build multiple steganalysis adversarial networks (SANs) to discriminate the original image X from its adversarial image; the SANs assign different scores to the adversarial and original images, providing the multiple discriminant loss SDO_loss1. Furthermore, we embed secret messages into the adversarial image through the SN to generate the enhanced stego image. The adversarial image and the enhanced stego image are re-input into the optimized multiple steganalyzers to improve the antisteganalysis performance of the adversarial image, and the SANs evaluate the data-hiding capability of the adversarial image, providing the multiple discriminant loss SDO_loss2. Additionally, the weighted superposition of MSE_loss and the multiple steganalysis discrimination losses SDO_loss1 and SDO_loss2 is employed as the cumulative loss function of generator G to improve both the image quality and the antisteganalysis ability of the adversarial image. Finally, the proposed method enables fast and stable network convergence, high stego image visual quality, and strong antisteganalysis ability.
Result
First, we select four high-performance deep-learning steganalyzers, namely, Xu-Net, Ye-Net, SRNet, and Zhu-Net, for simultaneous adversarial training to improve the antisteganalysis ability of the adversarial images. However, simultaneously conducting experiments with four steganalysis networks may sharply increase the number of model parameters, resulting in slow training and a long training period. Furthermore, each iteration of adversarial noise is generated according to the gradient feedback of the four steganalysis networks during adversarial image generation, so the original image is subjected to excessive, unnecessary adversarial noise, leading to low-quality adversarial images. In response to this issue, we execute ablation experiments on the different steganalysis networks employed in training. These experiments aim to decrease the model parameters, reduce the training time, and ultimately enhance the quality of the adversarial images while improving their antisteganalysis capability. The role of the generator is to produce adversarial noise, which is subsequently incorporated into the original image to generate adversarial images. Different positions of the adversarial noise in the original image perturb the steganalysis networks differently and influence the quality of the generated adversarial images, so we also perform ablation experiments that add the channel attention module at various positions of the generator to examine its effectiveness. The parameters of the generator loss function are fine-tuned through a further ablation experiment. Subsequently, we generate 2 000 adversarial images with the proposed model and evaluate their quality. The results reveal that the average peak signal-to-noise ratio (PSNR) of the 2 000 generated adversarial images is 39.925 1 dB; more than 99.55% of these images have a PSNR greater than 39 dB, and more than 75% have a PSNR greater than 40 dB. Additionally, the average structural similarity index measure (SSIM) of the generated adversarial images is 0.962 5; more than 69.85% have an SSIM greater than 0.955, and more than 55.6% have an SSIM greater than 0.960. These results indicate that the generated adversarial images are visually highly similar to the original images. Finally, we compare the proposed method with four current state-of-the-art methods on the BOSS Base 1.01 dataset. Compared with the four methods, the proposed method reduces the average detection accuracy of five high-performance steganalyzers by 1.6% after the steganalyzers are trained on the original stego images, and by 6.8% after they are retrained with adversarial images and enhanced stego images. The experimental results indicate that the proposed steganographic method significantly improves the security of the steganographic algorithm.
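As a side note on the quality evaluation above, the reported PSNR and SSIM statistics can be computed with standard image-quality metrics. The minimal sketch below assumes scikit-image is available and that the cover and adversarial images are paired 8-bit grayscale arrays; the function name is hypothetical.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def average_quality(cover_images, adversarial_images):
    """Mean PSNR (dB) and SSIM over paired 8-bit grayscale images in [0, 255]."""
    psnrs, ssims = [], []
    for cover, adv in zip(cover_images, adversarial_images):
        cover = cover.astype(np.float64)
        adv = adv.astype(np.float64)
        psnrs.append(peak_signal_noise_ratio(cover, adv, data_range=255))
        ssims.append(structural_similarity(cover, adv, data_range=255))
    return float(np.mean(psnrs)), float(np.mean(ssims))
```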
Conclusion
In this study, we propose a steganographic architecture based on the U-Net framework with lightweight channel attention modules to generate adversarial images that can resist multiple steganalysis networks. The experimental results demonstrate that the security and generalization of the proposed algorithm exceed those of the compared steganographic methods.
隐写；隐写分析；对抗图像；通道注意力；生成对抗网络(GAN)
steganography; steganalysis; adversarial images; channel attention; generative adversarial network (GAN)
Boroumand M, Chen M and Fridrich J. 2019. Deep residual network for steganalysis of digital images. IEEE Transactions on Information Forensics and Security, 14(5): 1181-1193 [DOI: 10.1109/TIFS.2018.2871749]
Chen X Y, Zhang Z T, Qiu A Q, Xia Z H and Xiong N N. 2022. Novel coverless steganography method based on image selection and starGAN. IEEE Transactions on Network Science and Engineering, 9(1): 219-230 [DOI: 10.1109/TNSE.2020.3041529]
Filler T, Judas J and Fridrich J. 2011. Minimizing additive distortion in steganography using syndrome-trellis codes. IEEE Transactions on Information Forensics and Security, 6(3): 920-935 [DOI: 10.1109/TIFS.2011.2134094]
Fridrich J and Kodovsky J. 2012. Rich models for steganalysis of digital images. IEEE Transactions on Information Forensics and Security, 7(3): 868-882 [DOI: 10.1109/TIFS.2012.2190402]
Goodfellow I J, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, Courville A and Bengio Y. 2014. Generative adversarial networks//Proceedings of the 27th International Conference on Neural Information Processing Systems. Montreal, Canada: MIT Press: 2672-2680
Guo L J, Ni J Q and Shi Y Q. 2012. An efficient JPEG steganographic scheme using uniform embedding//Proceedings of 2012 IEEE International Workshop on Information Forensics and Security. Costa Adeje, Spain: IEEE: 169-174 [DOI: 10.1109/WIFS.2012.6412644]
Holub V and Fridrich J. 2013. Designing steganographic distortion using directional filters//Proceedings of 2012 IEEE International Workshop on Information Forensics and Security. Costa Adeje, Spain: IEEE: 234-239 [DOI: 10.1109/WIFS.2012.6412655]
Holub V, Fridrich J and Denemark T. 2014. Universal distortion function for steganography in an arbitrary domain. EURASIP Journal on Information Security, 2014(1): #1 [DOI: 10.1186/1687-417X-2014-1]
Hu J, Shen L and Sun G. 2018. Squeeze-and-excitation networks//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE: 7132-7141 [DOI: 10.1109/CVPR.2018.00745]
Liu M L, Luo W Q, Zheng P J and Huang J W. 2021. A new adversarial embedding method for enhancing image steganography. IEEE Transactions on Information Forensics and Security, 16: 4621-4634 [DOI: 10.1109/TIFS.2021.3111748]
Liu Q, Xiang X Y, Qin J H, Tan Y and Zhang Q. 2022. A robust coverless steganography scheme using camouflage image. IEEE Transactions on Circuits and Systems for Video Technology, 32(6): 4038-4051 [DOI: 10.1109/TCSVT.2021.3108772]
Luo Y J, Qin J H, Xiang X Y and Tan Y. 2021. Coverless image steganography based on multi-object recognition. IEEE Transactions on Circuits and Systems for Video Technology, 31(7): 2779-2791 [DOI: 10.1109/TCSVT.2020.3033945]
Ma B, Han Z W, Xu J, Wang C P, Li J and Wang Y L. 2023. Generative multiple adversarial steganography algorithm based on U-Net structure. Journal of Software, 34(7): 3385-3407
马宾, 韩作伟, 徐健, 王春鹏, 李健, 王玉立. 2023. 基于U-Net结构的生成式多重对抗隐写算法. 软件学报, 34(7): 3385-3407 [DOI: 10.13328/j.cnki.jos.006537]
Mielikainen J. 2006. LSB matching revisited. IEEE Signal Processing Letters, 13(5): 285-287 [DOI: 10.1109/LSP.2006.870357]
Peng F, Chen G F and Long M. 2022. A robust coverless steganography based on generative adversarial networks and gradient descent approximation. IEEE Transactions on Circuits and Systems for Video Technology, 32(9): 5817-5829 [DOI: 10.1109/TCSVT.2022.3161419]
Petitcolas F A P, Anderson R J and Kuhn M G. 1999. Information hiding-a survey. Proceedings of the IEEE, 87(7): 1062-1078 [DOI: 10.1109/5.771065]
Pevný T, Filler T and Bas P. 2010. Using high-dimensional image models to perform highly undetectable steganography//Proceedings of the 12th International Conference on Information Hiding. Calgary, Canada: Springer: 161-177 [DOI: 10.1007/978-3-642-16435-4_13]
Ronneberger O, Fischer P and Brox T. 2015. U-Net: convolutional networks for biomedical image segmentation//Proceedings of the 18th International Conference on Medical Image Computing and Computer-Assisted Intervention. Munich, Germany: Springer: 234-241 [DOI: 10.1007/978-3-319-24574-4_28]
Tan J X, Liao X, Liu J T, Cao Y and Jiang H B. 2022. Channel attention image steganography with generative adversarial networks. IEEE Transactions on Network Science and Engineering, 9(2): 888-903 [DOI: 10.1109/TNSE.2021.3139671]
Tang W X, Tan S Q, Li B and Huang J W. 2017. Automatic steganographic distortion learning using a generative adversarial network. IEEE Signal Processing Letters, 24(10): 1547-1551 [DOI: 10.1109/LSP.2017.2745572]
Wu K C and Wang C M. 2015. Steganography using reversible texture synthesis. IEEE Transactions on Image Processing, 24(1): 130-139 [DOI: 10.1109/TIP.2014.2371246]
Xu G S, Wu H Z and Shi Y Q. 2016. Structural design of convolutional neural networks for steganalysis. IEEE Signal Processing Letters, 23(5): 708-712 [DOI: 10.1109/LSP.2016.2548421]
Yang J H, Ruan D Y, Huang J W, Kang X G and Shi Y Q. 2019. An embedding cost learning framework using GAN. IEEE Transactions on Information Forensics and Security, 15: 839-851 [DOI: 10.1109/TIFS.2019.2922229]
Ye J, Ni J Q and Yi Y. 2017. Deep learning hierarchical representations for image steganalysis. IEEE Transactions on Information Forensics and Security, 12(11): 2545-2557 [DOI: 10.1109/TIFS.2017.2710946]
Yin X L, Lu W, Zhang J H and Luo X Y. 2022. Robust JPEG steganography based on lossless carrier and robust cost. Journal of Image and Graphics, 27(1): 238-251
尹晓琳, 卢伟, 张俊鸿, 罗向阳. 2022. 无损载体和鲁棒代价结合的JPEG图像鲁棒隐写. 中国图象图形学报, 27(1): 238-251 [DOI: 10.11834/jig.210406]
Zhang R, Zhu F, Liu J Y and Liu G S. 2020. Depth-wise separable convolutions and multi-level pooling for an efficient spatial CNN-based steganalysis. IEEE Transactions on Information Forensics and Security, 15: 1138-1150 [DOI: 10.1109/TIFS.2019.2936913]
Zhang X, Peng F and Long M. 2018a. Robust coverless image steganography based on DCT and LDA topic classification. IEEE Transactions on Multimedia, 20(12): 3223-3238 [DOI: 10.1109/TMM.2018.2838334]
Zhang Y W, Zhang W M, Chen K J, Liu J Y, Liu Y J and Yu N H. 2018b. Adversarial examples against deep neural network based steganalysis//Proceedings of the 6th ACM Workshop on Information Hiding and Multimedia Security. Innsbruck, Austria: ACM: 67-72 [DOI: 10.1145/3206004.3206012]
Zheng G, Hu D H, Ge H and Zheng S L. 2021. End-to-end image steganography and watermarking driven by generative adversarial networks. Journal of Image and Graphics, 26(10): 2485-2502
郑钢, 胡东辉, 戈辉, 郑淑丽. 2021. 生成对抗网络驱动的图像隐写与水印模型. 中国图象图形学报, 26(10): 2485-2502 [DOI: 10.11834/jig.200404]
Zhou L C, Feng G R, Shen L Q and Zhang X P. 2020. On security enhancement of steganography via generative adversarial image. IEEE Signal Processing Letters, 27: 166-170 [DOI: 10.1109/LSP.2019.2963180]