SAR-to-Optical Image Translation Using Pix2Pix GAN
DOI: https://doi.org/10.47392/IRJAEM.2026.0224

Keywords: Synthetic Aperture Radar, Pix2Pix GAN, Image Translation, Attention U-Net, Remote Sensing

Abstract
Synthetic aperture radar (SAR) is valuable in remote sensing because it can acquire images day and night, year round. However, its complex electromagnetic scattering properties make SAR imagery difficult for humans to interpret. Deep-learning-based generative models have shown promise in translating between SAR and optical images, but their effectiveness depends on large training datasets, which are logistically and financially costly to assemble. To address these issues, we propose an improved image-to-image translation framework based on a Pix2Pix conditional generative adversarial network (cGAN) for high-quality SAR-to-optical image transformation. The primary goal of this work is to increase the diversity and structural accuracy of the generated images when training samples are scarce, a scenario often called few-shot image generation (FSIG). The methodology combines a U-Net generator with Spectral Normalization (SN) in the discriminator to stabilize adversarial training. A key innovation of the framework is a new Pairwise Distance (PD) loss, which regularizes the synthesized results using simulated SAR data so that the model reflects the inherent variability of real scattering phenomena. We expect improved performance in structural similarity (SSIM) and higher classification accuracy, particularly when the generated images serve as data augmentation. The proposed approach converts the complex geometric and radiometric features of SAR into optical representations that are easier to interpret, enhancing the usefulness of SAR data in resource-limited environments. Overall, the model offers a scalable path toward automated target recognition and environmental surveillance by synthesizing cross-modal features effectively.
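The Spectral Normalization mentioned in the abstract constrains each discriminator weight matrix to have a largest singular value of (approximately) 1, which bounds the discriminator's Lipschitz constant and stabilizes adversarial training. The sketch below is a minimal, standalone NumPy illustration of the underlying operation, estimating the top singular value by power iteration; it is not the paper's implementation (deep-learning frameworks provide this as a built-in layer wrapper, e.g. `torch.nn.utils.spectral_norm` in PyTorch).

```python
import numpy as np

def spectral_normalize(w, n_iter=50, seed=1):
    """Return w scaled so its largest singular value is ~1.

    Uses power iteration to estimate the top singular value, the
    same estimate spectral normalization applies to GAN
    discriminator weights (Miyato et al., 2018). Hypothetical
    standalone helper for illustration only.
    """
    u = np.random.default_rng(seed).normal(size=w.shape[0])
    for _ in range(n_iter):
        v = w.T @ u                 # right singular direction
        v /= np.linalg.norm(v)
        u = w @ v                   # left singular direction
        u /= np.linalg.norm(u)
    sigma = u @ w @ v               # estimated top singular value
    return w / sigma

# Normalize a random "discriminator layer" weight matrix.
w = np.random.default_rng(0).normal(size=(16, 32))
w_sn = spectral_normalize(w)
print(np.linalg.svd(w_sn, compute_uv=False)[0])  # very close to 1.0
```

In a real discriminator this renormalization is applied to every layer's weights at each forward pass, so the per-layer (and hence overall) Lipschitz constant stays bounded throughout training.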
License
Copyright (c) 2026 International Research Journal on Advanced Engineering and Management (IRJAEM)

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.