
tent performance. Both circular and triangular patterns had lower SDs than the square pattern, which also converged more slowly to a validation accuracy of around 82%. For watermark verification, a threshold of T = 90% proved effective. The peripheral pattern showed the best verification results, closely followed by the triangular pattern, making the triangular pattern the most balanced choice overall.
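To make the verification rule concrete, a minimal sketch of the threshold test follows; the Keras-style predict() call and the held-out set of watermark-triggered inputs are illustrative assumptions on our part, not the exact verification pipeline of (Li et al., 2020).

    import numpy as np

    def verify_watermark(model, wm_inputs, wm_labels, threshold=0.90):
        # Classify the watermark-triggered inputs; model.predict is assumed
        # to return per-class probabilities (Keras-style, hypothetical).
        predictions = np.argmax(model.predict(wm_inputs), axis=1)
        # Ownership is accepted iff trigger-set accuracy reaches T.
        accuracy = float(np.mean(predictions == wm_labels))
        return accuracy >= threshold, accuracy

The same rule carries over to MNIST below with a stricter threshold (T = 99%), since the baseline task accuracy there is much higher.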
MNIST Dataset. On MNIST, the classification task was significantly easier for the trained DNNs (baseline ∼ 99.4% accuracy), so the training-set classification accuracy quickly converged to values above 99%. On the validation set, all watermarking methods likewise yielded an accuracy of around 99.4%. Since almost all of the validation accuracies lie within one SD of each other, little can be inferred from these values alone. Our results show that the Square and Random watermarked models performed closest to the non-WM model. Although the Random watermarked model appears to have the most accurate and reliable validation performance, its verifiability is significantly lower than that of the other watermarked models. The Triangular watermarked model offers the best watermark verifiability among all methods tested; combined with its validation accuracy, which closely matches that of the non-watermarked model, it proves to be the most effective option. Our results suggest that a verification threshold of T = 99% is suitable on MNIST.
Comparison with Li et al. Compared to the results in (Li et al., 2020), our findings show some notable differences. First, our CIFAR-10 models did not reach the reported 88% normal classification (NC) accuracy, despite closely following the original procedure. Even with 100 training epochs, accuracy stayed around 84%, likely due to hardware limitations. In contrast, our MNIST models achieved over 99% NC accuracy, exceeding the 98.7% reported by (Li et al., 2020). While they used a 50% embedding rate for MNIST, we found no benefit in doing so and used a consistent 10% rate for both datasets. On CIFAR-10, models using circular or triangular watermark patterns performed about 0.5% better in NC accuracy than those using the square pattern. This suggests that non-square shapes are more effective for preserving model accuracy when using null embedding.
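For illustration, the sketch below shows one plausible way to realize a non-square pattern and a fixed embedding rate: a circular boolean mask stamped into a randomly chosen 10% of the training images. The function names, mask geometry, and plain pixel overwrite are our own simplifications; the actual null-embedding construction of (Li et al., 2020) pairs true and null patterns rather than overwriting pixels directly.

    import numpy as np

    def circular_mask(size=32, radius=4, center=(16, 16)):
        # Boolean (size x size) mask marking a circular watermark region.
        yy, xx = np.ogrid[:size, :size]
        return (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2

    def embed_pattern(images, mask, value=0.0, rate=0.10, seed=0):
        # Stamp the masked pixels to a fixed value in a random `rate`
        # fraction of the (N, H, W, C) images; simplified for illustration.
        rng = np.random.default_rng(seed)
        chosen = rng.choice(len(images), size=int(rate * len(images)),
                            replace=False)
        marked = images.copy()
        for i in chosen:
            marked[i][mask] = value
        return marked, chosen

Swapping circular_mask for a triangular or square mask changes only the geometry, which is exactly what our pattern comparison varies.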
6 FUTURE WORK AND CONCLUSION
We proposed four new ways to embed a watermark into a DNN with the null embedding method from (Li et al., 2020). Experiments on CIFAR-10 showed that the circular pattern preserves model accuracy best, improving performance by about 0.5% compared to the square pattern and matching the accuracy of a non-watermarked model. Future work includes testing the impact of transfer learning and fine-tuning.
ACKNOWLEDGEMENTS
This paper is supported by the European Union's Horizon Europe research and innovation program under grant agreement No. 101094901 (the SEPTON project) and grant agreement No. 101168490 (the RECITALS project). Devriş İşler was supported by the European Union's HORIZON project DataBri-X (101070069).
REFERENCES
Adi, Y., Baum, C., Cissé, M., Pinkas, B., and Keshet, J. (2018). Turning your weakness into a strength: Watermarking deep neural networks by backdooring. In USENIX Security.
Guo, J. and Potkonjak, M. (2018). Watermarking deep neural networks for embedded systems. In IEEE/ACM ICCAD.
İşler, D., Cabana, E., García-Recuero, Á., Koutrika, G., and Laoutaris, N. (2024). FreqyWM: Frequency watermarking for the new data economy. In IEEE ICDE.
Katz, J. and Lindell, Y. (2014). Introduction to Modern Cryptography, Second Edition. CRC Press.
Li, H., Wenger, E., Shan, S., Zhao, B. Y., and Zheng, H. (2020). Piracy resistant watermarks for deep neural networks. arXiv:1910.01226 [cs, stat].
Ribeiro, M., Grolinger, K., and Capretz, M. (2016). MLaaS: Machine learning as a service.
Sion, R., Atallah, M. J., and Prabhakar, S. (2004). wmdb.: Rights protection for numeric relational data. In IEEE ICDE.
Zhang, J., Gu, Z., Jang, J., Wu, H., Stoecklin, M., Huang, H., and Molloy, I. (2018). Protecting intellectual property of deep neural networks with watermarking. In ACM AsiaCCS.