Automatic generation of images from textual descriptions would be a useful task in domains like art generation or computer-aided design. In order to generate realistic images, it is necessary to infer spatial relations between entities.

Computer-aided image editing. Image credit: Free-Photos | Free picture via Pixabay


Existing datasets pair subject, object, and relation triplets with bounding boxes in the image, but these triplets require manual annotation. Therefore, the authors of a recent study on arXiv.org propose a method that infers spatial relations directly from the textual descriptions of images.

The researchers created a publicly available dataset consisting of image-caption pairs together with tokens in the description and the bounding boxes of the subject and object. Their method successfully infers the size and location of an object with respect to a given subject directly from the caption, and it places the object more accurately than systems that rely on manually annotated triplets.
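To make the data layout concrete, the sketch below shows what a single example of this kind might look like in Python. The field names and the [x, y, width, height] box convention are illustrative assumptions, not the dataset's actual schema.

```python
# A minimal sketch of what one REC-COCO-style example could contain.
# Field names and the [x, y, width, height] box convention are assumptions
# for illustration; consult the released dataset for the real schema.
example = {
    "image_id": 391895,                          # hypothetical MS-COCO image id
    "caption": "A man riding a bike down a dirt road.",
    "subject_token": "man",                      # caption token acting as subject
    "object_token": "bike",                      # caption token acting as object
    "subject_box": [120.0, 60.0, 80.0, 160.0],   # known at prediction time
    "object_box": [110.0, 150.0, 100.0, 90.0],   # target to be predicted
}

# The task: given the caption, the subject token, and subject_box,
# predict object_box (the object's location and size relative to the subject).
print(example["caption"], "->", example["object_box"])
```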

Generating an image from its textual description requires both a certain level of language understanding and common sense knowledge about the spatial relations of the physical entities being described. In this work, we focus on inferring the spatial relation between entities, a key step in the process of composing scenes based on text. More specifically, given a caption containing a mention of a subject and the location and size of the bounding box of that subject, our goal is to predict the location and size of an object mentioned in the caption. Previous work did not use the caption text information, but a manually provided relation holding between the subject and the object. In fact, the evaluation datasets used contain manually annotated ontological triplets but no captions, making the exercise unrealistic: a manual step was required, and systems did not leverage the richer information in captions. Here we present a system that uses the full caption, and Relations in Captions (REC-COCO), a dataset derived from MS-COCO which allows spatial relation inference from captions to be evaluated directly. Our experiments show that: (1) it is possible to infer the size and location of an object with respect to a given subject directly from the caption; (2) using the full text allows the object to be placed better than using a manually annotated relation. Our work paves the way for systems that, given a caption, decide which entities need to be depicted and their respective locations and sizes, in order to then generate the final image.
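As a rough illustration of the prediction setup described above, the sketch below frames the problem as a regression from the caption plus the subject box to the object box. It is a minimal toy model, assuming a fixed vocabulary, a bag-of-words caption encoding, and normalized [x, y, w, h] boxes; it is not the architecture used in the paper.

```python
import torch
import torch.nn as nn

class CaptionToObjectBox(nn.Module):
    """Toy regressor: caption tokens + subject box -> object box (x, y, w, h).

    An illustrative sketch, not the model from the paper. It assumes a fixed
    vocabulary, a bag-of-words caption encoding, and normalized box coordinates.
    """

    def __init__(self, vocab_size: int, embed_dim: int = 64):
        super().__init__()
        # Average the embeddings of all caption tokens into one vector.
        self.embedding = nn.EmbeddingBag(vocab_size, embed_dim, mode="mean")
        self.regressor = nn.Sequential(
            nn.Linear(embed_dim + 4, 128),  # caption embedding + subject box
            nn.ReLU(),
            nn.Linear(128, 4),              # predicted object box (x, y, w, h)
        )

    def forward(self, caption_token_ids, subject_box):
        caption_vec = self.embedding(caption_token_ids)
        features = torch.cat([caption_vec, subject_box], dim=-1)
        return self.regressor(features)


# Example usage with made-up token ids and a normalized subject box.
model = CaptionToObjectBox(vocab_size=10_000)
caption = torch.tensor([[12, 845, 3, 99, 7]])           # token ids for one caption
subject_box = torch.tensor([[0.30, 0.20, 0.25, 0.50]])  # x, y, w, h in [0, 1]
predicted_object_box = model(caption, subject_box)
print(predicted_object_box.shape)  # torch.Size([1, 4])
```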

Research paper: Elu, A., Azkune, G., Lopez de Lacalle, O., Arganda-Carreras, I., Soroa, A., and Agirre, E., “Inferring spatial relations from textual descriptions of images”, 2021. Link: https://arxiv.org/abs/2102.00997





