Publication:
Conditional Image-to-Image Translation Generative Adversarial Network (cGAN) for Fabric Defect Data Augmentation

dc.authorscopusid: 57697992100
dc.authorscopusid: 16229976900
dc.contributor.author: Mohammed, S.S.
dc.contributor.author: Gökalp, H.G.
dc.date.accessioned: 2025-12-11T00:33:34Z
dc.date.issued: 2024
dc.department: Ondokuz Mayıs Üniversitesi [en_US]
dc.department-temp: [Mohammed] Swash Sami, Department of Electrical and Electronic Engineering, Ondokuz Mayis Üniversitesi, Samsun, Turkey; [Gökalp] Hülya, Department of Electrical and Electronic Engineering, Ondokuz Mayis Üniversitesi, Samsun, Turkey [en_US]
dc.description.abstract: The availability of comprehensive datasets is a crucial challenge for developing artificial intelligence (AI) models in various applications and fields. The lack of large and diverse public fabric defect datasets forms a major obstacle to properly and accurately developing and training AI models for detecting and classifying fabric defects in real-life applications. Models trained on limited datasets struggle to identify underrepresented defects, reducing their practicality. To address these issues, this study suggests using a conditional generative adversarial network (cGAN) for fabric defect data augmentation. The proposed image-to-image translator GAN features a conditional U-Net generator and a 6-layered PatchGAN discriminator. The conditional U-Net generator can produce highly realistic synthetic defective samples and offers the ability to control various characteristics of the generated samples by taking two input images: a segmented defect mask and a clean fabric image. The segmented defect mask provides information about various aspects of the defects to be added to the clean fabric sample, including their type, shape, size, and location. By augmenting the training dataset with diverse and realistic synthetic samples, the AI models can learn to identify a broader range of defects more accurately. This technique helps overcome the limitations of small or unvaried datasets, leading to improved defect detection accuracy and generalizability. Moreover, this proposed augmentation method can find applications in other challenging fields, such as generating synthetic samples for medical imaging datasets related to brain and lung tumors. © The Author(s) 2024. [en_US]
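The abstract describes a generator conditioned on two input images (a segmented defect mask plus a clean fabric image) and a 6-layered PatchGAN discriminator. The sketch below illustrates, under illustrative assumptions only (the paper's exact layer hyperparameters are not given in this record), how such a conditioning input is typically formed by channel-wise concatenation, and how the patch-score grid of a 6-layer PatchGAN shrinks if every layer were a stride-2, kernel-4, padding-1 convolution — a simplification; real PatchGANs often use stride 1 in the final layers.

```python
import numpy as np

def make_generator_input(clean_fabric, defect_mask):
    """Stack a clean fabric image (H, W, 3) and a single-channel segmented
    defect mask (H, W, 1) into one (H, W, 4) conditioning input.
    Shapes are illustrative assumptions, not taken from the paper."""
    assert clean_fabric.shape[:2] == defect_mask.shape[:2]
    return np.concatenate([clean_fabric, defect_mask], axis=-1)

def conv_out_size(size, kernel=4, stride=2, pad=1):
    """Spatial size after one conv layer (standard output-size formula)."""
    return (size + 2 * pad - kernel) // stride + 1

def patchgan_output_size(size, layers=6):
    """Side length of the patch-score grid after `layers` stride-2 convs;
    each output cell classifies one receptive-field patch as real or fake."""
    for _ in range(layers):
        size = conv_out_size(size)
    return size

clean = np.zeros((256, 256, 3))
mask = np.ones((256, 256, 1))
print(make_generator_input(clean, mask).shape)  # (256, 256, 4)
print(patchgan_output_size(256))                # 4
```

With a 256x256 input, six stride-2 halvings give a 4x4 grid of patch scores; conditioning by concatenation lets the U-Net generator see both the defect specification and the clean background at every scale of its encoder.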
dc.identifier.doi: 10.1007/s00521-024-10179-1
dc.identifier.endpage: 20244 [en_US]
dc.identifier.issn: 0941-0643
dc.identifier.issn: 1433-3058
dc.identifier.issue: 32 [en_US]
dc.identifier.scopus: 2-s2.0-85200971014
dc.identifier.scopusquality: Q1
dc.identifier.startpage: 20231 [en_US]
dc.identifier.uri: https://doi.org/10.1007/s00521-024-10179-1
dc.identifier.uri: https://hdl.handle.net/20.500.12712/37423
dc.identifier.volume: 36 [en_US]
dc.language.iso: en [en_US]
dc.publisher: Springer Science and Business Media Deutschland GmbH [en_US]
dc.relation.ispartof: Neural Computing and Applications [en_US]
dc.relation.publicationcategory: Article - International Peer-Reviewed Journal - Institutional Faculty Member [en_US]
dc.rights: info:eu-repo/semantics/openAccess [en_US]
dc.subject: Conditional GAN [en_US]
dc.subject: Data Augmentation [en_US]
dc.subject: Defect Detection [en_US]
dc.subject: Fabric Defect [en_US]
dc.subject: Image-to-Image Translation [en_US]
dc.subject: U-Net [en_US]
dc.title: Conditional Image-to-Image Translation Generative Adversarial Network (cGAN) for Fabric Defect Data Augmentation [en_US]
dc.type: Article [en_US]
dspace.entity.type: Publication
