Spatial Attention GAN for Unsupervised Image-to-Image Translation
Image-to-image translation aims to learn a mapping
between images from a source domain and images from a target
domain. In this paper, we introduce the attention mechanism
directly to the generative adversarial network (GAN) architecture
and propose a novel spatial attention GAN model (SPA-GAN)
for image-to-image translation tasks. SPA-GAN computes the
attention in its discriminator and uses it to help the generator
focus more on the most discriminative regions between the
source and target domains, leading to more realistic output
images. We also find it helpful to introduce an additional feature
map loss in SPA-GAN training to preserve domain-specific
features during translation. Compared with existing attention-guided
GAN models, SPA-GAN is a lightweight model that does
not need additional attention networks or supervision. Qualitative
and quantitative comparison against state-of-the-art methods on
benchmark datasets demonstrates the superior performance of SPA-GAN.
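The core mechanism described above can be illustrated with a minimal sketch: a spatial attention map is derived from discriminator feature activations and applied elementwise to the generator's input, so the generator weights the most discriminative regions more heavily. The NumPy code below is a simplified illustration under assumed shapes and a channel-mean saliency heuristic, not the paper's exact implementation; the function names `attention_map` and `apply_attention` are hypothetical.

```python
import numpy as np

def attention_map(disc_features):
    """Collapse discriminator feature maps (C, H, W) into a spatial
    attention map (H, W), normalized to [0, 1].

    Assumption: channel-wise mean as a saliency proxy; the actual
    SPA-GAN attention computation may differ.
    """
    a = disc_features.mean(axis=0)
    a = (a - a.min()) / (a.max() - a.min() + 1e-8)
    return a

def apply_attention(image, attn):
    """Broadcast the (H, W) attention map across the image channels
    (C, H, W), emphasizing discriminative regions in the generator input."""
    return image * attn[None, :, :]

# Toy example with random data standing in for real activations.
rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 4, 4))   # discriminator feature maps
img = rng.standard_normal((3, 4, 4))     # RGB image tensor
attn = attention_map(feats)
attended = apply_attention(img, attn)
```

Because the attention map is computed from the discriminator's own activations, no separate attention network or attention supervision is required, which is what makes the model lightweight relative to prior attention-guided GANs.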