
clients, making them suitable for federated health
monitoring.
• In heterogeneous data settings, FedProx and FedYogi are the best choices. FedProx prevents divergence by regularizing local updates toward the global model, while FedYogi adapts learning rates to client variability, making them ideal for multi-institutional studies and mobile health (a sketch of the proximal mechanism appears at the end of this section).
• For communication efficiency, FedAvg and FedMedian minimize overhead. FedAvg cuts communication rounds by letting clients perform several local updates before synchronizing, and FedMedian adds robustness without extra communication, making them suitable for remote healthcare and edge-based AI.
• For adversarial robustness, FedMedian and FedYogi resist malicious updates. FedMedian limits the influence of poisoned updates through median-based aggregation, while FedYogi stabilizes training, making them ideal for secure wearable health and remote diagnosis (see the aggregation sketch after this list).
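To make the robustness criterion concrete, the following is a minimal NumPy sketch, not the evaluation code of this study, contrasting FedAvg's sample-weighted mean with FedMedian's coordinate-wise median and a FedTrimmedAvg-style trimmed mean. Client updates are simulated as flat parameter vectors, and the function names, client counts, and magnitudes are illustrative assumptions.

import numpy as np

def fed_avg(updates, num_examples):
    # FedAvg-style aggregation: mean of client updates weighted by sample counts.
    w = np.asarray(num_examples, dtype=float)
    return np.average(np.asarray(updates), axis=0, weights=w / w.sum())

def fed_median(updates):
    # FedMedian-style aggregation: coordinate-wise median across clients.
    return np.median(np.asarray(updates), axis=0)

def fed_trimmed_avg(updates, beta=0.2):
    # FedTrimmedAvg-style aggregation: per coordinate, drop the beta fraction of
    # smallest and largest values, then average what remains.
    u = np.sort(np.asarray(updates), axis=0)
    k = int(beta * u.shape[0])
    return u[k:u.shape[0] - k].mean(axis=0) if k > 0 else u.mean(axis=0)

# Four honest clients near the same update, plus one poisoned update.
rng = np.random.default_rng(0)
honest = [np.array([0.10, -0.20, 0.05]) + 0.01 * rng.standard_normal(3) for _ in range(4)]
updates = honest + [np.array([50.0, 50.0, 50.0])]
counts = [120, 90, 110, 100, 100]  # hypothetical per-client sample counts

print("FedAvg       :", fed_avg(updates, counts))  # dragged toward the outlier
print("FedMedian    :", fed_median(updates))       # stays near the honest consensus
print("FedTrimmedAvg:", fed_trimmed_avg(updates))  # outlier trimmed away

With the weighted mean, the single poisoned client (weight roughly one fifth) shifts every coordinate by about 10, whereas the median and the trimmed mean stay close to the honest consensus; this is the behaviour the robustness recommendation relies on.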
By aligning FL model selection with these criteria, healthcare practitioners and researchers can balance predictive performance, privacy, and communication efficiency in federated medical AI systems.
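The heterogeneity recommendation above rests on FedProx's proximal term, which penalizes local drift away from the current global model. The following is a minimal sketch of one local FedProx-style step on a toy quadratic client loss; the function name, learning rate, and mu value are illustrative assumptions rather than the configuration used in this study.

import numpy as np

def proximal_sgd_step(w, w_global, grad_local, lr=0.05, mu=0.5):
    # One local step on the FedProx objective F_k(w) + (mu/2) * ||w - w_global||^2.
    # grad_local is the gradient of the client's own loss F_k at w; the extra
    # mu * (w - w_global) term pulls the client back toward the global model.
    return w - lr * (grad_local + mu * (w - w_global))

# Toy client loss F_k(w) = 0.5 * ||w - c_k||^2 with gradient (w - c_k);
# c_k plays the role of this client's own (heterogeneous) optimum.
w_global = np.zeros(3)
c_k = np.array([2.0, -1.0, 0.5])
w = w_global.copy()
for _ in range(200):
    w = proximal_sgd_step(w, w_global, grad_local=w - c_k)
print(w)  # settles between w_global and c_k (here at c_k / (1 + mu))

With mu = 0 the step reduces to plain local SGD, i.e. the local update FedAvg performs; larger mu keeps heterogeneous clients closer to the shared model, which is exactly the divergence-prevention behaviour noted in the list above.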
5 CONCLUSION
This study evaluated a range of federated learning (FL) optimization strategies on tabular health datasets, analyzing accuracy, privacy, and the trade-offs between them. Adaptive
optimizers like FedYogi and FedAdam showed
improved performance in non-IID settings, while
robust aggregation methods like FedMedian and
FedTrimmedAvg enhanced resilience against noisy
updates. The findings emphasize that no single
FL model is universally optimal; instead, selection
should be guided by specific healthcare requirements,
such as the need for high accuracy or privacy. Future
work should explore stricter privacy mechanisms,
such as differential privacy, and assess scalability in
real-world deployments. We also plan to examine how these FL models perform when data is partitioned vertically rather than horizontally.
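As a concrete reference point for the adaptive behaviour summarized above, the following is a minimal sketch of the server-side update that distinguishes FedYogi from FedAdam, assuming the standard adaptive federated optimization formulation in which the averaged client update of a round is treated as a pseudo-gradient; the hyperparameter values and moment initialization are illustrative assumptions.

import numpy as np

def fed_yogi_step(x, delta, m, v, eta=1e-2, beta1=0.9, beta2=0.99, tau=1e-3):
    # delta: (weighted) average of this round's client updates, used as a pseudo-gradient.
    m = beta1 * m + (1 - beta1) * delta                     # first moment
    v = v - (1 - beta2) * delta**2 * np.sign(v - delta**2)  # FedYogi second moment
    # FedAdam would instead use: v = beta2 * v + (1 - beta2) * delta**2
    return x + eta * m / (np.sqrt(v) + tau), m, v

# One illustrative server round.
tau0 = 1e-3
x = np.zeros(3)                            # global model parameters
m, v = np.zeros(3), np.full(3, tau0 ** 2)  # moments; v is commonly initialized near tau^2
delta = np.array([0.20, -0.10, 0.05])      # hypothetical averaged client update
x, m, v = fed_yogi_step(x, delta, m, v)
print(x)

The two optimizers differ only in the second-moment rule: FedYogi's additive, sign-controlled update adjusts v more conservatively when the squared pseudo-gradient is large, which is commonly credited with its stability under heterogeneous (non-IID) client updates.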