Intelligent machines can now read and comprehend natural language text well enough to answer questions about it. However, real-world documents convey information not only through the text itself but also through its visual layout and content (for instance, text appearance, tables, or charts). A recent research paper addresses this gap.

Image credit: pxhere.com, CC0 Public Domain

A new dataset, called VisualMRC (Visual Machine Reading Comprehension), was created. It contains more than 30,000 questions defined over more than 10,000 document images. Given a question and a document image, a machine has to read and comprehend the text in the image and answer the question in natural language.
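For illustration, a single example in such a dataset might look like the sketch below. The field names and values are hypothetical, reconstructed from the task description rather than taken from the released data.

```python
# Hypothetical shape of one VisualMRC-style example. Field names ("image",
# "question", "answer", "regions", "bbox", "label") are illustrative
# assumptions, not the dataset's actual schema.
example = {
    "image": "webpage_screenshot.png",       # the document image to read
    "question": "By how much did revenue grow last quarter?",
    "answer": "Revenue grew by 8 percent.",  # free-form, abstractive answer
    "regions": [  # OCR-extracted text with layout information
        {"text": "Quarterly revenue", "bbox": [40, 12, 320, 40], "label": "heading"},
        {"text": "Revenue grew 8% over the previous quarter.",
         "bbox": [40, 52, 600, 130], "label": "paragraph"},
    ],
}
```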

The proposed model builds on current natural language understanding and generation abilities: it extends sequence-to-sequence models pre-trained on large-scale text corpora and additionally learns the visual layout and content of document images. The suggested approach outperformed both a state-of-the-art visual question answering model and encoder-decoder models trained only on textual data.
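To make this concrete, here is a minimal sketch of one way such layout-aware conditioning could work: token-level bounding boxes from OCR are embedded and added to the token embeddings of a pre-trained seq2seq model. It assumes a T5 backbone from the Hugging Face transformers library; the class name, coordinate binning, and additive fusion are assumptions made for illustration, not necessarily the paper's exact design.

```python
import torch
import torch.nn as nn
from transformers import T5ForConditionalGeneration

class LayoutAwareT5(nn.Module):
    """Hypothetical layout-aware wrapper around a pre-trained T5."""

    def __init__(self, model_name="t5-base", coord_bins=1000):
        super().__init__()
        self.t5 = T5ForConditionalGeneration.from_pretrained(model_name)
        d = self.t5.config.d_model
        # One embedding table per axis; box coordinates are binned to [0, coord_bins).
        self.x_emb = nn.Embedding(coord_bins, d)
        self.y_emb = nn.Embedding(coord_bins, d)

    def forward(self, input_ids, boxes, attention_mask=None, labels=None):
        # boxes: (batch, seq_len, 4) integer-binned (x0, y0, x1, y1) per token.
        tok = self.t5.get_input_embeddings()(input_ids)
        layout = (self.x_emb(boxes[..., 0]) + self.y_emb(boxes[..., 1])
                  + self.x_emb(boxes[..., 2]) + self.y_emb(boxes[..., 3]))
        # Fuse layout into the token embeddings, then run the usual seq2seq pass.
        return self.t5(inputs_embeds=tok + layout,
                       attention_mask=attention_mask, labels=labels)

# Toy forward pass with dummy token ids and all-zero boxes.
model = LayoutAwareT5()
ids = torch.tensor([[37, 19, 3, 9, 1]])
boxes = torch.zeros(1, 5, 4, dtype=torch.long)
loss = model(ids, boxes, attention_mask=torch.ones_like(ids),
             labels=torch.tensor([[37, 19, 1]])).loss
```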

Recent studies on machine reading comprehension have focused on text-level understanding but have not yet reached the level of human understanding of the visual layout and content of real-world documents. In this study, we introduce a new visual machine reading comprehension dataset, named VisualMRC, wherein given a question and a document image, a machine reads and comprehends texts in the image to answer the question in natural language. Compared with existing visual question answering (VQA) datasets that contain texts in images, VisualMRC focuses more on developing natural language understanding and generation abilities. It contains 30,000+ pairs of a question and an abstractive answer for 10,000+ document images sourced from multiple domains of webpages. We also introduce a new model that extends existing sequence-to-sequence models, pre-trained with large-scale text corpora, to take into account the visual layout and content of documents. Experiments with VisualMRC show that this model outperformed the base sequence-to-sequence models and a state-of-the-art VQA model. However, its performance is still below that of humans on most automatic evaluation metrics. The dataset will facilitate research aimed at connecting vision and language understanding.

Research paper: Tanaka, R., Nishida, K., and Yoshida, S., “VisualMRC: Machine Reading Comprehension on Document Images”, 2021. Link: https://arxiv.org/abs/2101.11272
