Konečný, J., McMahan, H. B., Yu, F. X., Richtárik, P., Suresh, A. T., and Bacon, D. (2016b). Federated learning: Strategies for improving communication efficiency. arXiv preprint arXiv:1610.05492.
Krizhevsky, A., Hinton, G., et al. (2009). Learning multiple
layers of features from tiny images.
Laishram, R. and Phoha, V. V. (2016). Curie: A method for protecting SVM classifier from poisoning attack. arXiv preprint arXiv:1606.01584.
LeCun, Y. (1998). The MNIST database of handwritten digits. http://yann.lecun.com/exdb/mnist/.
Li, Z., Zhao, Y., Botta, N., Ionescu, C., and Hu, X. (2020). COPOD: Copula-based outlier detection. In 2020 IEEE International Conference on Data Mining (ICDM), pages 1118–1123. IEEE.
Mei, S. and Zhu, X. (2015). Using machine teaching to
identify optimal training-set attacks on machine learn-
ers. In Twenty-Ninth AAAI Conference on Artificial
Intelligence.
Melis, M., Demontis, A., Pintor, M., Sotgiu, A., and Biggio, B. (2019). secml: A Python library for secure and explainable machine learning. arXiv preprint arXiv:1912.10013.
Muñoz-González, L., Biggio, B., Demontis, A., Paudice, A., Wongrassamee, V., Lupu, E. C., and Roli, F. (2017). Towards poisoning of deep learning algorithms with back-gradient optimization. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, pages 27–38.
Paudice, A., Muñoz-González, L., Gyorgy, A., and Lupu, E. C. (2018). Detection of adversarial training examples in poisoning attacks through anomaly detection. arXiv preprint arXiv:1802.03041.
Paudice, A., Muñoz-González, L., and Lupu, E. C. (2019). Label sanitization against label flipping poisoning attacks. In Alzate, C., Monreale, A., Assem, H., Bifet, A., Buda, T. S., Caglayan, B., Drury, B., García-Martín, E., Gavaldà, R., Koprinska, I., Kramer, S., Lavesson, N., Madden, M., Molloy, I., Nicolae, M.-I., and Sinn, M., editors, ECML PKDD 2018 Workshops, pages 5–15, Cham. Springer International Publishing.
Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V.,
Thirion, B., Grisel, O., Blondel, M., Prettenhofer,
P., Weiss, R., Dubourg, V., Vanderplas, J., Passos,
A., Cournapeau, D., Brucher, M., Perrot, M., and
Duchesnay, E. (2011). Scikit-learn: Machine learning
in Python. Journal of Machine Learning Research,
12:2825–2830.
Pitropakis, N., Panaousis, E., Giannetsos, T., Anastasiadis,
E., and Loukas, G. (2019). A taxonomy and survey of
attacks against machine learning. Computer Science
Review, 34:100199.
Radford, B. J., Apolonio, L. M., Trias, A. J., and Simpson,
J. A. (2018). Network traffic anomaly detection using
recurrent neural networks. CoRR, abs/1803.10769.
Ramaswamy, S., Rastogi, R., and Shim, K. (2000). Efficient
algorithms for mining outliers from large data sets. In
Proceedings of the 2000 ACM SIGMOD international
conference on Management of data, pages 427–438.
Shejwalkar, V., Houmansadr, A., Kairouz, P., and Ramage,
D. (2022). Back to the drawing board: A critical eval-
uation of poisoning attacks on production federated
learning. In IEEE Symposium on Security and Pri-
vacy.
Steinhardt, J., Koh, P. W., and Liang, P. (2017). Certified
defenses for data poisoning attacks. In Proceedings of
the 31st International Conference on Neural Informa-
tion Processing Systems, pages 3520–3532.
Sun, G., Cong, Y., Dong, J., Wang, Q., Lyu, L., and Liu, J.
(2021). Data poisoning attacks on federated machine
learning. IEEE Internet of Things Journal.
Suykens, J. A., De Brabanter, J., Lukas, L., and Vandewalle,
J. (2002). Weighted least squares support vector ma-
chines: robustness and sparse approximation. Neuro-
computing, 48(1-4):85–105.
Tolpegin, V., Truex, S., Gursoy, M. E., and Liu, L. (2020).
Data poisoning attacks against federated learning sys-
tems. In European Symposium on Research in Com-
puter Security, pages 480–501. Springer.
Wang, S., Chen, M., Saad, W., and Yin, C. (2020). Fed-
erated learning for energy-efficient task computing in
wireless networks. In ICC 2020-2020 IEEE Interna-
tional Conference on Communications (ICC), pages
1–6. IEEE.
Xiao, H., Biggio, B., Brown, G., Fumera, G., Eckert, C., and Roli, F. (2015). Is feature selection secure against training data poisoning? In International Conference on Machine Learning, pages 1689–1698. PMLR.
Yin, D., Chen, Y., Kannan, R., and Bartlett, P. (2018).
Byzantine-robust distributed learning: Towards opti-
mal statistical rates. In International Conference on
Machine Learning, pages 5650–5659. PMLR.
Zhang, R. and Zhu, Q. (2017). A game-theoretic defense
against data poisoning attacks in distributed support
vector machines. In 2017 IEEE 56th Annual Confer-
ence on Decision and Control (CDC), pages 4582–
4587. IEEE.
Zhao, Y., Nasrullah, Z., and Li, Z. (2019). PyOD: A Python toolbox for scalable outlier detection. Journal of Machine Learning Research, 20(96):1–7.
Zhou, Y., Kantarcioglu, M., Thuraisingham, B., and Xi, B.
(2012). Adversarial support vector machine learning.
In Proceedings of the 18th ACM SIGKDD interna-
tional conference on Knowledge discovery and data
mining, pages 1059–1067.
Zhu, Y., Cui, L., Ding, Z., Li, L., Liu, Y., and Hao, Z.
(2022). Black box attack and network intrusion de-
tection using machine learning for malicious traffic.
Computers & Security, 123:102922.