
Paper: Synthesis for Dataset Augmentation of H&E Stained Images with Semantic Segmentation Masks

Authors: Peter Sakalik 1; Lukas Hudec 1; Marek Jakab 1; Vanda Benešová 1 and Ondrej Fabian 2,3

Affiliations: 1 Faculty of Informatics and Information Technologies, Slovak University of Technology, Ilkovicova 2, Bratislava, Slovakia; 2 Clinical and Transplant Pathology Centre, Institute for Clinical and Experimental Medicine, Videnska 9, Prague 4, Czechia; 3 Department of Pathology and Molecular Medicine, 3rd Faculty of Medicine, Charles University and Thomayer Hospital, Videnska 800, Prague 4, Czechia

Keyword(s): Medical data, Annotated Data Synthesis, Generative Adversarial Networks.

Abstract: The automatic analysis of medical images with deep learning methods relies heavily on the amount and quality of annotated data. Most diagnostic processes start with the segmentation and classification of cells. Manual annotation of a sufficient amount of high-variability data is extremely time-consuming, and semi-automatic methods may introduce an error bias. Another research option is to use deep learning generative models to synthesize medical data with annotations as an extension to real datasets. Enhancing training with synthetic data has been shown to improve the robustness and generalization of models used in industrial problems. This paper presents a deep learning-based approach to generating synthetic histologically stained images with corresponding multi-class annotated masks, evaluated on cell semantic segmentation. We train conditional generative adversarial networks to synthesize a 6-channel image. The six channels consist of the histological image and the annotations corresponding to the cell and organ type specified in the input. We evaluate the impact of the synthetic data on training with the standard U-Net network. We observe quantitative and qualitative changes in segmentation results from models trained on different proportions of real and synthetic data in the training batch.
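To make the described setup concrete, below is a minimal PyTorch sketch of a class-conditional generator that emits a 6-channel tensor, with the first three channels read as the synthetic H&E image and the last three as the multi-class annotation mask. This is not the authors' implementation: the class name `ConditionalGenerator`, the architecture, the output resolution, and the parameters `latent_dim`, `num_classes`, and `base_ch` are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's code): a conditional GAN
# generator that outputs 6 channels -- 3 for the synthetic H&E image and
# 3 for the multi-class annotation mask -- conditioned on cell/organ type.
import torch
import torch.nn as nn


class ConditionalGenerator(nn.Module):
    def __init__(self, latent_dim=128, num_classes=4, base_ch=64):
        super().__init__()
        # embed the requested cell/organ type into the latent space
        self.label_emb = nn.Embedding(num_classes, latent_dim)
        self.net = nn.Sequential(
            # project the conditioned latent vector to a 4x4 feature map
            nn.ConvTranspose2d(latent_dim, base_ch * 8, 4, 1, 0),
            nn.BatchNorm2d(base_ch * 8), nn.ReLU(True),
            # upsample 4x4 -> 8x8 -> 16x16 -> 32x32 -> 64x64
            nn.ConvTranspose2d(base_ch * 8, base_ch * 4, 4, 2, 1),
            nn.BatchNorm2d(base_ch * 4), nn.ReLU(True),
            nn.ConvTranspose2d(base_ch * 4, base_ch * 2, 4, 2, 1),
            nn.BatchNorm2d(base_ch * 2), nn.ReLU(True),
            nn.ConvTranspose2d(base_ch * 2, base_ch, 4, 2, 1),
            nn.BatchNorm2d(base_ch), nn.ReLU(True),
            # 6 output channels: 3 image channels + 3 mask channels
            nn.ConvTranspose2d(base_ch, 6, 4, 2, 1),
            nn.Tanh(),
        )

    def forward(self, z, labels):
        # condition the noise vector on the requested class label
        h = z * self.label_emb(labels)
        out = self.net(h.unsqueeze(-1).unsqueeze(-1))   # (B, 6, 64, 64)
        image, mask = out[:, :3], out[:, 3:]            # H&E image / annotation mask
        return image, mask


if __name__ == "__main__":
    g = ConditionalGenerator()
    z = torch.randn(2, 128)
    labels = torch.tensor([0, 2])           # hypothetical cell/organ type indices
    image, mask = g(z, labels)
    print(image.shape, mask.shape)          # (2, 3, 64, 64) and (2, 3, 64, 64)
```

Generated image/mask pairs produced this way could then be mixed with real annotated patches in varying proportions per training batch when fitting the downstream U-Net segmentation model, which is the evaluation axis reported in the paper.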

CC BY-NC-ND 4.0


Paper citation in several formats:
Sakalik, P.; Hudec, L.; Jakab, M.; Benešová, V. and Fabian, O. (2023). Synthesis for Dataset Augmentation of H&E Stained Images with Semantic Segmentation Masks. In Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2023) - Volume 4: VISAPP; ISBN 978-989-758-634-7; ISSN 2184-4321, SciTePress, pages 873-880. DOI: 10.5220/0011679300003417

@conference{visapp23,
author={Sakalik, Peter and Hudec, Lukas and Jakab, Marek and Benešová, Vanda and Fabian, Ondrej},
title={Synthesis for Dataset Augmentation of H\&E Stained Images with Semantic Segmentation Masks},
booktitle={Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2023) - Volume 4: VISAPP},
year={2023},
pages={873-880},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0011679300003417},
isbn={978-989-758-634-7},
issn={2184-4321},
}

TY - CONF
JO - Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2023) - Volume 4: VISAPP
TI - Synthesis for Dataset Augmentation of H&E Stained Images with Semantic Segmentation Masks
SN - 978-989-758-634-7
IS - 2184-4321
AU - Sakalik, P.
AU - Hudec, L.
AU - Jakab, M.
AU - Benešová, V.
AU - Fabian, O.
PY - 2023
SP - 873
EP - 880
DO - 10.5220/0011679300003417
PB - SciTePress
ER - 