Overview of artificial intelligence model watermarking
2023, Vol. 28, No. 6, pages 1792-1810
Print publication date: 2023-06-16
DOI: 10.11834/jig.230010
Wu Hanzhou, Zhang Jie, Li Yue, Yin Zhaoxia, Zhang Xinpeng, Tian Hui, Li Bin, Zhang Weiming, Yu Nenghai. 2023. Overview of artificial intelligence model watermarking. Journal of Image and Graphics, 28(06): 1792-1810
Artificial intelligence (AI) techniques represented by neural networks have achieved great success in many application domains such as computer vision, pattern recognition, and natural language processing, and many technology companies, including Google and Microsoft, have deployed AI models in commercial products to improve service quality and economic benefit. However, building a high-performance AI model consumes large amounts of data, computing resources, and expert knowledge, and AI models can easily be stolen, tampered with, or resold by unauthorized users. As AI technology develops rapidly, protecting the intellectual property of AI models is of significant academic and industrial importance. Against this background, this paper surveys intellectual property protection techniques for AI models based on digital watermarking. By contrasting them with conventional multimedia watermarking, we first outline the research motivation, basic concepts, and evaluation metrics of AI model watermarking. Then, according to whether the watermark extractor needs to know the internal details of the target model and whether it needs to interact with the target model, we review the state of the art at home and abroad from three perspectives, namely "white-box", "black-box", and "box-free" model watermarking, and summarize the differences among methods; fragile model watermarking is also analyzed and discussed. Finally, by comparing the characteristics, strengths, and weaknesses of different methods, we summarize the common technical problems of model watermarking across scenarios and offer an outlook on development trends.
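The black-box verification idea summarized above, checking a suspect model's predictions against the pre-specified labels of a trigger set, can be sketched in a few lines. Everything below (the model stand-ins, the trigger-set size, the 90% match threshold) is an illustrative assumption, not a specific method from the literature:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical trigger set: random inputs with owner-chosen labels that a
# watermarked model has been trained to memorize (sizes are illustrative).
triggers = rng.normal(size=(20, 8))
trigger_labels = rng.integers(0, 10, size=20)

def watermarked_model(x):
    # Stand-in for a stolen-but-watermarked classifier: it reproduces the
    # memorized trigger labels (a real model would also do its main task).
    idx = int(np.argmin(np.linalg.norm(triggers - x, axis=1)))
    return int(trigger_labels[idx])

def independent_model(x):
    # Stand-in for an unrelated model whose answers ignore the trigger set.
    return 0

def verify_ownership(query_fn, threshold=0.9):
    # Black-box check: query the suspect model on the trigger samples and
    # compare its predictions with the pre-specified labels.
    hits = sum(query_fn(x) == y for x, y in zip(triggers, trigger_labels))
    return hits / len(triggers) >= threshold

print(verify_ownership(watermarked_model))    # True: watermark detected
print(verify_ownership(independent_model))    # False: no ownership claim
```

The extractor never inspects weights here; only query access to the prediction API is needed, which is what distinguishes the black-box setting from the white-box one.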
Artificial intelligence (AI) techniques based on deep neural networks (DNNs) have been developing rapidly in domains such as computer vision, pattern analysis, natural language processing, bioinformatics, and games. In particular, technology companies have widely deployed AI models in the cloud to provide smart and personalized services. However, creating a state-of-the-art AI model requires large amounts of high-quality data, powerful computing resources, and expert knowledge of architecture design. Furthermore, AI models are at risk of being copied, tampered with, and redistributed without authorization. It is therefore necessary to protect AI models against intellectual property infringement, which has led researchers to study intellectual property protection for AI models through digital watermarking, referred to as AI model watermarking. The core of AI model watermarking is to imperceptibly embed into the protected model a secret watermark that reveals its ownership. However, unlike many multimedia watermarking methods, which treat media data as a static signal, AI model watermarking must embed information into a model trained for a specific task. Conventional multimedia watermarking methods cannot be applied directly, since naively modifying a given model may significantly impair its performance on the original task. This motivates watermarking methods designed specifically for AI models: embedding a watermark should not significantly degrade the performance of the watermarked model on its original task, and the concealed watermark should be extractable to identify the ownership of the model when disputes arise.
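As a concrete illustration of embedding without destroying the original task, the sketch below nudges a weight vector so that a secret linear projection of the weights encodes the watermark bits, while a proximity term stands in for preserving task performance. This loosely follows the regularizer-based white-box idea; all names, sizes, and hyperparameters are illustrative assumptions rather than any cited scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a flattened layer of "trained" weights and a
# 32-bit ownership watermark (all sizes are illustrative).
w0 = rng.normal(0.0, 0.1, size=256)     # original weights
bits = rng.integers(0, 2, size=32)      # secret watermark b
X = rng.normal(size=(32, 256))          # secret projection matrix

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Embed: gradient descent on a binary cross-entropy regularizer that makes
# sigmoid(X @ w) encode the bits, plus a pull toward the original weights
# standing in for the task loss.
w = w0.copy()
lr, lam = 0.01, 0.05
for _ in range(1000):
    p = sigmoid(X @ w)
    grad_wm = X.T @ (p - bits)          # gradient of the BCE regularizer
    grad_task = lam * (w - w0)          # stay close to the original weights
    w -= lr * (grad_wm + grad_task)

# Extract (white-box): project the weights and threshold at zero.
recovered = (X @ w > 0).astype(int)
print((recovered == bits).all())        # True: watermark extracted intact
```

Because the weight space (256 dimensions) is much larger than the payload (32 bits), the embedding only needs a small perturbation, which is why the original task can survive; a real scheme would train with the actual task loss instead of the proximity term.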
Depending on whether the watermark extractor needs to know the internal details of the target model, existing methods fall into two categories: white-box and black-box AI model watermarking. In white-box watermarking, the extractor knows the internal details of the watermarked model and can extract the embedded watermark from its parameters or structure. In black-box watermarking, the extractor does not know the internal details of the model but can query its predictions on a set of carefully crafted trigger samples; by checking whether the predictions are consistent with the pre-specified labels of the trigger samples, the extractor can determine the ownership of the target model. A special case of black-box watermarking is box-free watermarking, in which the extractor has no access to the target model at all: it can neither inspect the model's internal details nor interact with it, but it can extract the watermark from any sample generated by the model, so ownership is verified from the model's outputs. In addition, fragile AI model watermarking has been investigated recently. Unlike methods that focus on robust ownership verification, fragile watermarking detects whether the target model has been modified, thereby enabling integrity verification. To review the latest developments and trends, advanced AI model watermarking methodologies are analyzed as follows: 1) the aims and objectives, basic concepts, evaluation metrics, and technical classification of AI model watermarking are introduced.
2) The current development status of AI model watermarking is summarized and analyzed. 3) The pros and cons of different methods are compared. 4) Prospects for the development of AI model watermarking, and its potential in relation to AI security, are discussed.
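The fragile-watermarking idea mentioned above can also be illustrated compactly: the sketch below clears the mantissa LSB of each float32 weight, hashes the result, and stores the hash bits back in those LSBs, so that even a tiny later modification of any weight breaks verification. The parameter size and helper names are hypothetical, not taken from any cited scheme:

```python
import hashlib
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical parameters of a model to be protected (size chosen so the
# 256-bit SHA-256 digest covers exactly one bit per weight).
params = rng.normal(size=256).astype(np.float32)

def embed_fragile(p):
    # Embed an integrity tag into the weights themselves: hash the weights
    # with their mantissa LSBs cleared, then write the hash bits into those
    # LSBs. The distortion is at most one unit in the last place per weight.
    u = p.view(np.uint32) & ~np.uint32(1)             # clear LSBs
    digest = hashlib.sha256(u.tobytes()).digest()
    bits = np.unpackbits(np.frombuffer(digest, np.uint8))[: p.size]
    return (u | bits.astype(np.uint32)).view(np.float32)

def verify_fragile(p):
    # Recompute the tag from the LSB-cleared weights and compare it with the
    # bits actually stored in the LSBs; any tampering breaks the match.
    u = p.view(np.uint32)
    digest = hashlib.sha256((u & ~np.uint32(1)).tobytes()).digest()
    bits = np.unpackbits(np.frombuffer(digest, np.uint8))[: p.size]
    return bool(((u & 1) == bits).all())

wm = embed_fragile(params)
print(verify_fragile(wm))            # True: model is intact

tampered = wm.copy()
tampered[7] *= 1.0001                # a tiny, hard-to-notice modification
print(verify_fragile(tampered))      # False: integrity check fails
```

This is the opposite design goal from robust ownership watermarking: here the watermark is meant to break under any modification, which is exactly what makes it usable for integrity verification.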
model watermarking; digital watermarking; information hiding; artificial intelligence security; intellectual property protection