
images were preprocessed using ResNet's preprocessing function (TensorFlow). Any differences observed in Figure 12 can therefore be attributed to the preprocessing function. The original paper of ObfNet
(Xu et al., 2020) states that ObfNet is lightweight
and can run on devices without acceleration for in-
ference. This claim holds primarily for MNIST and
other small-scale, simple datasets. However, this fea-
sibility diminishes as dataset size and complexity in-
crease, leading to a substantial rise in network pa-
rameters, size, and FLOPs. This observation is not
unique to ObfNet but reflects a common challenge for
many UPDT methods when scaling to more complex
data: the trade-off between computational efficiency
and robust privacy preservation as the size of the data
that needs to be transformed grows. The original paper (Xu et al., 2020) also claims that "when more neurons are used in the first hidden layer of O_M, the overall darkness levels of the obfuscation results of all digits are equalized, suggesting a better obfuscation quality"; however, our test results in Figure 7 show
the opposite. As the bottleneck size increases, ObfNet
inadvertently retains more information, making sensitive data more susceptible to leakage. This reliance
on visual indicators of obfuscation, rather than robust
privacy metrics, is a broader issue across UPDT tech-
niques. Each dataset comes with its unique privacy
requirements and characteristics, making it difficult
to establish a universal privacy metric that applies to
all cases. Furthermore, the lack of well-defined de-
sign principles in UPDT methods is a common chal-
lenge as each dataset is different (Malekzadeh et al.,
2020). For example, the results from LightNet (Fig-
ure 12) demonstrate that without explicit mechanisms
to enforce effective obfuscation, networks trained to
prioritize utility, such as inference accuracy, may inadvertently leave sensitive data insufficiently transformed. This issue is further exacerbated by training methodologies that do not impose strong constraints for selective feature removal, which can leave residual sensitive information in the transformed datasets. Addressing these shortcomings
is essential for improving the scalability, robustness,
and privacy guarantees of UPDT architectures.
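The scaling concern above can be made concrete with a back-of-the-envelope parameter count for a single-hidden-layer MLP obfuscation network that maps an input to a bottleneck and back. The bottleneck width and input dimensions below are illustrative assumptions, not the exact configurations evaluated in this work:

```python
def mlp_obfnet_params(input_dim: int, bottleneck: int) -> int:
    """Weights + biases of a dense input -> bottleneck -> input network."""
    encoder = input_dim * bottleneck + bottleneck   # W1 + b1
    decoder = bottleneck * input_dim + input_dim    # W2 + b2
    return encoder + decoder

mnist_dim = 28 * 28            # 784 grayscale pixels
imagenet_dim = 224 * 224 * 3   # 150,528 RGB values

print(mlp_obfnet_params(mnist_dim, bottleneck=128))     # 201616 (~0.2M params)
print(mlp_obfnet_params(imagenet_dim, bottleneck=128))  # 38685824 (~38.7M params)
```

Even with the bottleneck width held fixed, moving from MNIST-sized to ImageNet-sized inputs inflates the obfuscation network by roughly two orders of magnitude, which is why the "runs without acceleration" claim does not survive the transition to complex datasets.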
REFERENCES
(2017). Geo. L. Tech. Rev., 202.
Abadi, M., Chu, A., Goodfellow, I., McMahan, H., Mironov, I., Talwar, K., and Zhang, L. (2016). Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pages 308–318.
Chen, J. and Ran, X. (2019). Deep learning with edge
computing: A review. Proceedings of the IEEE,
107(8):1655–1674.
Deng, L. (2012). The mnist database of handwritten digit
images for machine learning research. IEEE Signal
Processing Magazine, 29(6):141–142.
Dhinakaran, D., Sankar, S. M. U., Selvaraj, D., and Raja,
S. E. (2024). Privacy-preserving data in iot-based
cloud systems: A comprehensive survey with ai in-
tegration.
Ding, X., Fang, H., Zhang, Z., Choo, K.-K. R., and Jin, H.
(2022). Privacy-preserving feature extraction via ad-
versarial training. IEEE Transactions on Knowledge
and Data Engineering, 34(4):1967–1979.
Feng, T. and Narayanan, S. (2021). Privacy and utility pre-
serving data transformation for speech emotion recog-
nition. In 2021 9th International Conference on Af-
fective Computing and Intelligent Interaction (ACII),
pages 1–7.
Ganin, Y. and Lempitsky, V. (2015). Unsupervised domain
adaptation by backpropagation.
Ha, T., Dang, T., Dang, T. T., Truong, T., and Nguyen,
M. (2019). Differential privacy in deep learning: An
overview. pages 97–102.
He, K., Zhang, X., Ren, S., and Sun, J. (2015). Deep resid-
ual learning for image recognition.
Howard, J. (2019). Imagenette. https://github.com/fastai/
imagenette.
Malekzadeh, M., Clegg, R. G., Cavallaro, A., and Haddadi,
H. (2020). Privacy and utility preserving sensor-data
transformations. Pervasive and Mobile Computing,
63:101132.
Nieto, G., de la Iglesia, I., Lopez-Novoa, U., and Perfecto,
C. (2024). Deep reinforcement learning techniques for
dynamic task offloading in the 5g edge-cloud contin-
uum. Journal of Cloud Computing, 13(1):94.
Raynal, M., Achanta, R., and Humbert, M. (2020). Image
obfuscation for privacy-preserving machine learning.
Romanelli, M., Palamidessi, C., and Chatzikokolakis, K.
(2019). Generating optimal privacy-protection mech-
anisms via machine learning. CoRR, abs/1904.01059.
TensorFlow. Preprocesses a tensor or numpy array encoding a batch of images. https://www.tensorflow.org/api_docs/python/tf/keras/applications/resnet/preprocess_input.
Xu, D., Zheng, M., Jiang, L., Gu, C., Tan, R., and Cheng,
P. (2020). Lightweight and unobtrusive data obfusca-
tion at iot edge for remote inference. IEEE Internet of
Things Journal, 7(10):9540–9551.
Zheng, M., Xu, D., Jiang, L., Gu, C., Tan, R., and Cheng,
P. (2019). Challenges of privacy-preserving machine
learning in iot. In Proceedings of the First Interna-
tional Workshop on Challenges in Artificial Intelli-
gence and Machine Learning for Internet of Things,
SenSys ’19. ACM.
SECRYPT 2025 - 22nd International Conference on Security and Cryptography