Protecting Intellectual Property With Reliable Availability of Learning Models in AI-Based Cybersecurity Services

Ge Ren, Jun Wu*, Gaolei Li*, Shenghong Li*, Mohsen Guizani


Research output: Article › peer-review

1 citation (Scopus)


Artificial intelligence (AI)-based cybersecurity services offer significant promise in many scenarios, including malware detection, content supervision, and so on. Meanwhile, many commercial and government applications have raised the need for intellectual property protection of deep neural networks (DNNs). Existing studies on intellectual property protection (e.g., watermarking techniques) only aim at inserting secret information into DNNs, allowing producers to detect whether a given DNN infringes on their copyrights. However, since availability protection of learning models is rarely considered, a pirated model can still work with high accuracy. In this paper, a novel model locking (M-LOCK) scheme for DNNs is proposed to enhance availability protection: the DNN produces poor accuracy if a specific token is absent, and maps only tokenized inputs to correct predictions. The proposed scheme performs verification during DNN inference, actively protecting the model's intellectual property at each query. Specifically, to train the token-sensitive decision boundaries of DNNs, a data poisoning-based model manipulation (DPMM) method is also proposed, which minimizes the correlation between the dummy outputs and correct predictions. Extensive experiments demonstrate that the proposed scheme achieves high reliability and effectiveness across various benchmark datasets as well as typical model protection methods.
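The abstract's core idea can be illustrated with a minimal sketch of token-sensitive training-set construction. This is not the authors' DPMM implementation; the 2x2 corner token, function names, and the dummy-label strategy (shifting each true label by a random nonzero offset so the poisoned label never matches the correct one) are all assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_token(x, token_value=1.0):
    """Stamp a hypothetical trigger token (2x2 patch) into the
    top-left corner of each input; the patch location and value
    are illustrative assumptions, not the paper's design."""
    x = x.copy()
    x[..., :2, :2] = token_value
    return x

def poison_training_set(images, labels, num_classes):
    """Build a token-sensitive training set in the spirit of DPMM:
    tokenized inputs keep their true labels, while untokenized inputs
    are paired with dummy labels guaranteed to differ from the correct
    class, decorrelating the locked model's output from the true
    prediction whenever the token is absent."""
    tokenized = add_token(images)
    # Dummy labels: shift each true label by a random nonzero offset,
    # so every dummy label differs from the corresponding true label.
    offsets = rng.integers(1, num_classes, size=labels.shape)
    dummy = (labels + offsets) % num_classes
    x = np.concatenate([tokenized, images])
    y = np.concatenate([labels, dummy])
    return x, y

# Toy data: 8 "images" of 4x4 pixels, 3 classes.
imgs = rng.random((8, 4, 4))
labs = rng.integers(0, 3, size=8)
px, py = poison_training_set(imgs, labs, num_classes=3)
print(px.shape, py.shape)  # → (16, 4, 4) (16,)
```

A model trained on such a set learns high accuracy only on tokenized inputs, so a stolen copy queried without the token yields near-useless predictions, which is the availability protection the paper targets.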

Journal: IEEE Transactions on Dependable and Secure Computing
Publication status: Published - 1 Mar 2024

ASJC Scopus subject areas

  • General Computer Science
  • Electrical and Electronic Engineering
