LDH: least dependent hiding for screen-shooting resilient watermarking
2024, Vol. 29, No. 2, pp. 408-418
Print publication date: 2024-02-16
DOI: 10.11834/jig.220811
Song Jiawei, Liu Chunxiao, Zhang Xinyi. 2024. LDH: least dependent hiding for screen-shooting resilient watermarking. Journal of Image and Graphics, 29(02):0408-0418
Objective
Existing screen-shooting watermarking methods cannot effectively balance three indicators: computational complexity, watermarked-image quality, and watermark robustness. They also widely rely on perspective-distortion-correction preprocessing, which greatly limits the practical commercial use of screen-shooting watermarks. On the basis of a redesigned noise layer, this paper proposes a screen-shooting resilient watermarking method that hides the watermark message with the least dependence on the cover image, keeping the dependence of the screen-shooting watermark on the cover image to a minimum.
Method
To ensure watermark embedding efficiency, the encoding network of the dependent deep hiding framework is greatly simplified, achieving the least dependence on the cover image and substantially reducing computational complexity. To offset the loss of feature-extraction capability caused by the reduced network depth, a Sobel operator is added to introduce the edge information of the cover image. A scaling-attack operation is added to the noise layer, which makes it possible to remove the perspective-distortion-correction preprocessing that restricts the application range of screen-shooting watermarks, further widening that range. To train the network for screen-shooting robustness, the noise layer is redefined and its original design improved: the types and parameters of the image perturbations are selected at random, giving the input data of the decoding network higher sample balance and diversity.
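A minimal PyTorch sketch of this random-decision idea follows (PyTorch is the framework the paper reports using); the perturbation set and parameter ranges below are illustrative assumptions, not the paper's exact values:

```python
import random
import torch
import torch.nn.functional as F

class RandomNoiseLayer(torch.nn.Module):
    """Per batch, randomly pick one perturbation type and random parameters,
    so the reveal network sees balanced and diverse training samples."""

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        choice = random.choice(["identity", "scale", "blur", "noise"])
        if choice == "scale":
            # Scaling attack: resample to a random size and back, losing detail.
            s = random.uniform(0.5, 2.0)
            h, w = img.shape[-2:]
            img = F.interpolate(img, scale_factor=s, mode="bilinear",
                                align_corners=False)
            img = F.interpolate(img, size=(h, w), mode="bilinear",
                                align_corners=False)
        elif choice == "blur":
            # Mean blur with a random kernel size, applied depthwise per channel.
            k = random.choice([3, 5])
            kernel = torch.ones(img.shape[1], 1, k, k,
                                device=img.device) / (k * k)
            img = F.conv2d(img, kernel, padding=k // 2, groups=img.shape[1])
        elif choice == "noise":
            img = img + torch.randn_like(img) * random.uniform(0.0, 0.05)
        return img.clamp(0.0, 1.0)
```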
Result
In comparative experiments with three other methods on the DIVerse 2K (DIV2K) dataset, the proposed method achieves the highest peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM), improving PSNR by 12 dB and SSIM by 0.006 over the second-ranked universal deep hiding method. Both with and without attacks, the proposed method maintains high accuracy (ACC) and F1 scores, and under attacks it improves the F1 score by 0.262 over the second-ranked StegaStamp (steganography stamp) method. Compared with an existing noise layer under the same network framework, the proposed algorithm improves ACC by 0.124 and F1 by 0.284 without attacks, and ACC by 0.316 and F1 by 0.524 under attacks, extracting the watermark more accurately.
Conclusion
The proposed algorithm achieves better image quality and watermark robustness, removes the restriction of perspective distortion correction, and widens the application range of screen-shooting watermarks.
Objective
With the rapid development of internet and communication technology, remote desktop techniques make it possible to separate confidential information from the screen in space. However, they also create information security risks, because confidential information can be leaked through illegal screen shooting. How can illegal screen shooting be prevented and the related responsibility identified? Adding a robust watermark and revealing the message hidden in the shot image is the preferred approach. Taking photos of files displayed on a screen is an efficient, high-quality way of recording information, but such photos not only record the effective information; they also destroy, to a large extent, any watermark signal the image carries, making photo-based leakage concealed and difficult to trace. Screen-shooting watermarking is therefore a challenging subject in digital watermarking. In screen shooting, the information displayed on the screen is transmitted to the camera through an optical channel, via camera capture and postprocessing operations, involving optical changes, digital-to-analog conversion, image scaling, and image distortion. Four main classes of methods address this subject: key-point-, template-, frequency-domain-, and deep neural network (DNN)-based methods. Both traditional and DNN-based methods offer partial solutions, but none of them balances computational complexity, image quality, and watermark robustness. The calculation of key points in key-point-based methods is usually too time-consuming for practical use. Template-based methods often change the cover images considerably, degrading image quality. Watermarks generated by frequency-domain-based methods have poor robustness and are easily destroyed. Almost all methods must correct and resize the warped image to its original size before the watermark extraction stage, which is the main reason the watermarks in these methods cannot achieve robustness to clipping and scaling in practice. To solve the above problems, the least dependent hiding for screen-shooting resilient watermarking method is proposed, which considers computational complexity, image quality, and robustness comprehensively. The decoder-based reveal network only needs to disclose the watermark message from the corresponding location of the container image, which guarantees the semantic consistency of the reveal network and the embedding network. The embedded watermark, such as a user name, time, or IP address, can be extracted under screen-shooting and other attacks; to imitate the information loss in screen shooting, an improved noise layer is designed for training the model.
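To make the pipeline concrete, the following is a minimal sketch of the embed, attack, and reveal flow described above; `embed_net`, `noise_layer`, and `reveal_net` are hypothetical names standing in for the paper's networks:

```python
import torch

class LDHPipeline(torch.nn.Module):
    """End-to-end flow implied by the paper: embed -> (simulated) screen
    shooting -> reveal. The submodules are placeholders, not the paper's
    exact architectures."""

    def __init__(self, embed_net, noise_layer, reveal_net):
        super().__init__()
        self.embed_net = embed_net
        self.noise_layer = noise_layer
        self.reveal_net = reveal_net

    def forward(self, cover: torch.Tensor, watermark: torch.Tensor):
        container = self.embed_net(cover, watermark)   # watermarked image
        attacked = self.noise_layer(container)         # simulated screen shooting
        revealed = self.reveal_net(attacked)           # single-channel binary map
        return container, revealed
```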
Method
First, the watermark embedding network in the dependent deep hiding (DDH) framework is greatly simplified, and a Sobel operator is added to introduce the edge information of the cover image. A scaling-attack operation is added to the noise layer, and the perspective-distortion-correction preprocessing is removed because it limits the application range of screen-shooting resilient watermarking. The existing noise layer is redefined so that the image disturbance types are randomly selected and the parameters of each disturbance type are randomly changed, which increases the sample balance and diversity of the training data of the reveal network. An investigation of previous DNN-based methods reveals that their watermark residuals visually approximate the edges of the cover images, so a strong correlation exists between the edges of the cover image and the invisibility of the watermark. To improve robustness and reduce computational complexity, the edge map of the cover image extracted by the Sobel operator is concatenated with the feature map of the watermark. The watermark embedding network is divided into two parts according to whether the cover image participates in the convolution, because the part that does not involve the cover image can be precomputed in practice. Second, the existing noise layer is modified to simulate the image scaling operation in screen shooting, so the widely used perspective distortion correction can be dropped. Following the class-balance principle, a new noise-layer design is proposed in which random decision modules are added so that the data augmentation is stronger than the original fixed image disturbances. When training the network, learned perceptual image patch similarity (LPIPS) loss, L2 loss, and structural similarity index measure (SSIM) loss constrain the visual similarity of the cover image and the container image, while an information entropy loss and a weighted cross entropy loss reconstruct the watermark in the form of a single-channel binary image. Least dependent hiding (LDH) is implemented in PyTorch, and the model is trained and tested on an NVIDIA GeForce 2080Ti GPU and an Intel Core i7-9700 3.00 GHz CPU. The whole neural network is optimized by the Adam optimizer. The initial learning rate is set to 1e-3 and is reduced by 90% every 20 epochs. In training, the input image resolution is 256 × 256 and the batch size is 2. A pretrained model trained without geometric transformation in the noise layer is used to initialize the model.
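The following sketch illustrates two of the steps above under stated assumptions: a fixed-kernel Sobel edge branch concatenated with the watermark feature map (the gray conversion and the function names are assumptions, not the paper's exact design), and the reported Adam schedule (initial lr 1e-3, reduced by 90% every 20 epochs):

```python
import torch
import torch.nn.functional as F

def sobel_edges(img: torch.Tensor) -> torch.Tensor:
    """Edge map of an RGB cover batch (N, 3, H, W) in [0, 1] via fixed Sobel
    kernels; averaging to gray first is a simplification."""
    gray = img.mean(dim=1, keepdim=True)
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=img.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(gray, kx, padding=1)
    gy = F.conv2d(gray, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)

def embed_input(cover: torch.Tensor, wm_features: torch.Tensor) -> torch.Tensor:
    # Concatenate the cover's edge map with the (precomputable) watermark
    # feature map along the channel axis before the lightweight encoder.
    return torch.cat([sobel_edges(cover), wm_features], dim=1)

def configure_optimizer(model: torch.nn.Module):
    # Adam with initial lr 1e-3, reduced by 90% every 20 epochs as reported.
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    sched = torch.optim.lr_scheduler.StepLR(opt, step_size=20, gamma=0.1)
    return opt, sched
```

Splitting the embedding network around `embed_input` reflects the precomputation idea: everything that depends only on the watermark can be run once offline, leaving only the edge extraction and a shallow encoder at embedding time.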
Result
Experimental results show that the proposed method is more effective than three recent methods on the DIVerse 2K (DIV2K) dataset. The proposed method achieves the highest peak signal-to-noise ratio (PSNR) and SSIM, improving PSNR by 12 dB and SSIM by 0.006 over the second-best method, universal deep hiding (UDH), when no image attacks are applied. Moreover, it ranks second in accuracy and F1 score when no image attacks are applied. Compared with the same network framework using the noise layer proposed in previous work, our algorithm achieves better indicators and higher watermark-extraction accuracy both with and without image attacks, which shows that the proposed noise layer indeed helps training and improves the accuracy and robustness of watermark extraction. The watermark can be extracted from screen-shot images at shooting distances from 10 cm to more than 50 cm, with a high extraction success rate at usual distances.
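For reference, ACC and F1 here are computed bitwise over the revealed single-channel binary watermark; a minimal sketch, assuming a 0.5 binarization threshold:

```python
import torch

def watermark_acc_f1(pred: torch.Tensor, target: torch.Tensor):
    """Bitwise accuracy and F1 between a revealed watermark map and the
    ground-truth binary watermark (both with values in [0, 1])."""
    p = (pred > 0.5).float()
    t = (target > 0.5).float()
    acc = (p == t).float().mean().item()
    tp = (p * t).sum().item()            # correctly revealed 1-bits
    fp = (p * (1 - t)).sum().item()      # spurious 1-bits
    fn = ((1 - p) * t).sum().item()      # missed 1-bits
    precision = tp / (tp + fp + 1e-8)
    recall = tp / (tp + fn + 1e-8)
    f1 = 2 * precision * recall / (precision + recall + 1e-8)
    return acc, f1
```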
Conclusion
In this paper, least dependent hiding for screen-shooting resilient watermarking is proposed, which comprehensively balances computational complexity, image quality, and robustness. An effective noise-layer improvement is also designed, which helps our algorithm perform better in image quality and watermark robustness. The proposed algorithm has the advantages of high embedding efficiency, high robustness, and high transparency, which means a wider application range than the existing methods.
Keywords: digital watermark; screen-to-camera channel; fully convolutional network; dependent hiding; noise layer
References
Ahmadi M, Norouzi A, Karimi N, Samavi S and Emami A. 2020. ReDMark: framework for residual diffusion watermarking based on deep networks. Expert Systems with Applications, 146: #113157 [DOI: 10.1016/j.eswa.2019.113157]
Fang H, Chen D D, Huang Q D, Zhang J, Ma Z H, Zhang W M and Yu N H. 2021. Deep template-based watermarking. IEEE Transactions on Circuits and Systems for Video Technology, 31(4): 1436-1451 [DOI: 10.1109/tcsvt.2020.3009349]
Fang H, Chen D D, Wang F, Ma Z H, Liu H G, Zhou W B, Zhang W M and Yu N H. 2022. TERA: screen-to-camera image code with transparency, efficiency, robustness and adaptability. IEEE Transactions on Multimedia, 24: 955-967 [DOI: 10.1109/tmm.2021.3061801]
Fang H, Zhang W M, Zhou H, Cui H and Yu N H. 2019. Screen-shooting resilient watermarking. IEEE Transactions on Information Forensics and Security, 14(6): 1403-1418 [DOI: 10.1109/tifs.2018.2878541]
Goodfellow I J, Shlens J and Szegedy C. 2015. Explaining and harnessing adversarial examples [EB/OL]. [2023-11-20]. https://arxiv.org/pdf/1412.6572.pdf
Jetley S, Lord N and Torr P. 2018. With friends like these, who needs adversaries?//Proceedings of the 32nd International Conference on Neural Information Processing Systems. Montréal, Canada: Curran Associates Inc.: 10772-10782
Kandi H, Mishra D and Gorthi S R K S. 2017. Exploring the learning capabilities of convolutional neural networks for robust image watermarking. Computers and Security, 65: 247-268 [DOI: 10.1016/j.cose.2016.11.016]
Kingma D P and Ba J. 2017. Adam: a method for stochastic optimization [EB/OL]. [2023-11-20]. https://arxiv.org/pdf/1412.6980.pdf
Li L, Bai R, Zhang S Q, Chang C C and Shi M T. 2021. Screen-shooting resilient watermarking scheme via learned invariant keypoints and QT. Sensors, 21(19): #6554 [DOI: 10.3390/s21196554]
Liu Y, Guo M X, Zhang J, Zhu Y S and Xie X D. 2019. A novel two-stage separable deep learning framework for practical blind watermarking//Proceedings of the 27th ACM International Conference on Multimedia. Nice, France: ACM: 1509-1517 [DOI: 10.1145/3343031.3351025]
Moosavi-Dezfooli S M, Fawzi A, Fawzi O and Frossard P. 2017. Universal adversarial perturbations//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA: IEEE: 86-94 [DOI: 10.1109/cvpr.2017.17]
Qin J H, Luo Y J, Xiang X Y, Tan Y and Huang H J. 2019. Coverless image steganography: a survey. IEEE Access, 7: 171372-171394 [DOI: 10.1109/access.2019.2955452]
Rathika L and Loganathan B. 2017. Approaches and methods for steganalysis: a survey. International Journal of Advanced Research in Computer and Communication Engineering, 6(6): 433-438 [DOI: 10.17148/ijarcce.2017.6678]
Ronneberger O, Fischer P and Brox T. 2015. U-Net: convolutional networks for biomedical image segmentation//Proceedings of the 18th International Conference on Medical Image Computing and Computer-Assisted Intervention. Munich, Germany: Springer: 234-241 [DOI: 10.1007/978-3-319-24574-4_28]
Ruanaidh J J K Ó and Pun T. 1998. Rotation, scale and translation invariant spread spectrum digital image watermarking. Signal Processing, 66(3): 303-317 [DOI: 10.1016/s0165-1684(98)00012-7]
Singh D and Singh S K. 2017. DWT-SVD and DCT based robust and blind watermarking scheme for copyright protection. Multimedia Tools and Applications, 76(11): 13001-13024 [DOI: 10.1007/s11042-016-3706-6]
Tancik M, Mildenhall B and Ng R. 2020. StegaStamp: invisible hyperlinks in physical photographs//Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE: 2114-2123 [DOI: 10.1109/cvpr42600.2020.00219]
Wengrowski E and Dana K. 2019. Light field messaging with deep photographic steganography//Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach, USA: IEEE: 1515-1524 [DOI: 10.1109/cvpr.2019.00161]
Zhang C N, Karjauv A, Benz P and Kweon I S. 2020. Towards robust data hiding against (JPEG) compression: a pseudo-differentiable deep learning approach [EB/OL]. [2023-11-20]. https://arxiv.org/pdf/2101.00973v1.pdf
Zhang C N, Benz P, Karjauv A, Sun G and Kweon I S. 2020. UDH: universal deep hiding for steganography, watermarking, and light field messaging//Proceedings of the 34th International Conference on Neural Information Processing Systems. Vancouver, Canada: Curran Associates Inc.: 10223-10234
Zhang R, Isola P, Efros A A, Shechtman E and Wang O. 2018. The unreasonable effectiveness of deep features as a perceptual metric//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE: 586-595 [DOI: 10.1109/cvpr.2018.00068]
Zhong X, Huang P C, Mastorakis S and Shih F Y. 2021. An automated and robust image watermarking scheme based on deep neural networks. IEEE Transactions on Multimedia, 23: 1951-1961 [DOI: 10.1109/tmm.2020.3006415]
Zhu J R, Kaplan R, Johnson J and Li F F. 2018. HiDDeN: hiding data with deep networks//Proceedings of the 15th European Conference on Computer Vision. Munich, Germany: Springer: 682-697 [DOI: 10.1007/978-3-030-01267-0_40]