Attention-guided GANs for human pose transfer
Paper, 18 November 2019
Jinsong Zhang, Yuyang Zhao, Kun Li, Yebin Liu, Jingyu Yang, Qionghai Dai
Abstract
This paper presents a novel generative adversarial network for human pose transfer, the task of re-rendering a given person in a target pose. To handle the pixel-to-pixel misalignment caused by pose differences, we introduce an attention mechanism and propose Pose-Guided Attention Blocks. With these blocks, the generator learns to transfer details from the conditional image to the target image according to the target pose, so that the target pose truly guides the transfer of features. The effectiveness of the proposed network is validated on the DeepFashion and Market-1501 datasets. Compared with state-of-the-art methods, our generated images are more realistic and show better facial details.
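The abstract does not give the exact formulation of the Pose-Guided Attention Blocks, so the snippet below is only a hypothetical sketch of the general idea of pose-guided attention: features extracted from the target pose produce a soft spatial mask that gates which appearance features from the conditional image are transferred. All names (`pose_guided_attention`, `w_mask`) and shapes are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def sigmoid(x):
    # numerically standard logistic function, maps values into (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def pose_guided_attention(appearance_feat, pose_feat, w_mask):
    """Hypothetical sketch: the target-pose features produce a soft
    attention mask (one weight per spatial location) that gates the
    appearance features transferred from the conditional image."""
    mask = sigmoid(pose_feat @ w_mask)      # (locations, 1), values in (0, 1)
    return appearance_feat * mask           # broadcast gating per location

# Toy example: 4 spatial locations, 3 appearance channels, 2 pose channels.
rng = np.random.default_rng(0)
app = rng.standard_normal((4, 3))           # conditional-image features
pose = rng.standard_normal((4, 2))          # target-pose features
w = rng.standard_normal((2, 1))             # learned projection (assumed)
out = pose_guided_attention(app, pose, w)   # gated features, shape (4, 3)
```

In a real network such a mask would be produced by convolutions over pose feature maps and learned end-to-end with the generator; the dot product here stands in for that learned projection.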
© (2019) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Jinsong Zhang, Yuyang Zhao, Kun Li, Yebin Liu, Jingyu Yang, and Qionghai Dai "Attention-guided GANs for human pose transfer", Proc. SPIE 11187, Optoelectronic Imaging and Multimedia Technology VI, 111870W (18 November 2019); https://doi.org/10.1117/12.2538638
KEYWORDS
Generative adversarial networks, Computer programming, Computer vision technology, Image processing, Machine vision, Multimedia, Network architectures