Compositional GAN: Learning image-conditional binary composition

S. Azadi, D. Pathak, S. Ebrahimi, T. Darrell - International Journal of Computer Vision, 2020 - Springer
Abstract
Generative Adversarial Networks can produce images of remarkable complexity and realism, but they are generally structured to sample from a single latent source, ignoring the explicit spatial interaction between multiple entities that could be present in a scene. Capturing such complex interactions between different objects in the world, including their relative scaling, spatial layout, occlusion, or viewpoint transformation, is a challenging problem. In this work, we propose a novel self-consistent Composition-by-Decomposition network to compose a pair of objects. Given object images from two distinct distributions, our model can generate a realistic composite image from their joint distribution following the texture and shape of the input objects. We evaluate our approach through qualitative experiments and user evaluations. Our results indicate that the learned model captures potential interactions between the two object domains, and generates realistic composed scenes at test time.
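To make the composition-by-decomposition idea concrete, the sketch below shows one possible self-consistency setup in PyTorch: a composition network maps two object images to a composite, a decomposition network maps the composite back to the two objects, and an L1 reconstruction term ties the two together. This is a minimal illustration only; the module names, layer choices, and image sizes are assumptions, and the paper's actual architecture (including its adversarial losses and spatial handling of the objects) is not reproduced here.

```python
import torch
import torch.nn as nn

# Hypothetical module names for illustration; not the paper's actual networks.
class ComposeNet(nn.Module):
    """Maps a pair of 3-channel object images to one composite image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, obj_a, obj_b):
        # Concatenate the two objects along the channel axis and compose.
        return self.net(torch.cat([obj_a, obj_b], dim=1))

class DecomposeNet(nn.Module):
    """Maps a composite image back to the two constituent object images."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 6, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, composite):
        out = self.net(composite)
        return out[:, :3], out[:, 3:]  # split back into the two objects

compose, decompose = ComposeNet(), DecomposeNet()
obj_a = torch.randn(1, 3, 64, 64)  # e.g. an image of the first object
obj_b = torch.randn(1, 3, 64, 64)  # e.g. an image of the second object

# Self-consistency: decomposing the composed image should recover the inputs.
composite = compose(obj_a, obj_b)
rec_a, rec_b = decompose(composite)
self_consistency_loss = (nn.functional.l1_loss(rec_a, obj_a)
                         + nn.functional.l1_loss(rec_b, obj_b))
# In a full model, this term would be combined with adversarial losses from
# discriminators judging the realism of the composite scene.
```

In a training loop one would optimize the composition and decomposition networks jointly on this self-consistency term alongside GAN losses, so that the composite is both realistic and faithful to the input objects' texture and shape.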