Deep Reader: Information Extraction from Document Images via Relation Extraction and Natural Language

D. Vishwanath, Rohit Rahul, Gunjan Sehgal, Swati, Arindam Chowdhury, Monika Sharma, Lovekesh Vig, Gautam Shroff, Ashwin Srinivasan

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Abstract

Recent advancements in the area of computer vision with state-of-the-art neural networks have given a boost to Optical Character Recognition (OCR) accuracies. However, extracting characters/text alone is often insufficient for relevant information extraction, as documents also have a visual structure that is not captured by OCR. Extracting information from tables, charts, footnotes, boxes and headings, and retrieving the corresponding structured representation of the document, remains a challenge and finds application in a large number of real-world use cases. In this paper, we propose a novel enterprise-based, end-to-end framework called DeepReader which facilitates information extraction from document images by identifying visual entities and populating a meta relational model across the different entities in the document image. The model schema allows for an easy-to-understand abstraction of the entities detected by the deep vision models and of the relationships between them. DeepReader has a suite of state-of-the-art vision algorithms which are applied to recognize handwritten and printed text, eliminate noisy effects, identify the type of document and detect visual entities like tables, lines and boxes. DeepReader maps the extracted entities into a rich relational schema so as to capture all the relevant relationships between entities (words, text boxes, lines, etc.) detected in the document. Relevant information and fields can then be extracted from the document by writing SQL queries on top of the relationship tables. A natural-language interface is added on top of the relational schema so that a non-technical user, specifying queries in natural language, can fetch the information with minimal effort. In this paper, we also demonstrate many different capabilities of DeepReader and report results on a real-world use case.
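
To make the "SQL queries on top of the relationship tables" idea concrete, the following is a minimal sketch, not the authors' implementation: it assumes a hypothetical schema with `words`, `lines` and `word_in_line` tables populated from the vision output (the actual DeepReader schema and table names may differ), and shows how a field such as an invoice number could be fetched by a spatial SQL query of the kind the abstract describes.

```python
# Minimal sketch (not the paper's code): a hypothetical relational schema for
# document entities and one SQL query over it, illustrating the
# "write SQL on top of the relationship tables" workflow from the abstract.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hypothetical tables: OCR words with bounding boxes, text lines, and a
# word-to-line membership relation produced by the vision pipeline.
cur.executescript("""
CREATE TABLE words (word_id INTEGER PRIMARY KEY, text TEXT,
                    x0 REAL, y0 REAL, x1 REAL, y1 REAL);
CREATE TABLE lines (line_id INTEGER PRIMARY KEY, y REAL);
CREATE TABLE word_in_line (word_id INTEGER, line_id INTEGER);
""")

# Toy content for one document line: "Invoice No: INV-1234"
cur.executemany("INSERT INTO words VALUES (?, ?, ?, ?, ?, ?)", [
    (1, "Invoice",  10, 50,  60, 62),
    (2, "No:",      65, 50,  90, 62),
    (3, "INV-1234", 95, 50, 160, 62),
])
cur.execute("INSERT INTO lines VALUES (1, 50)")
cur.executemany("INSERT INTO word_in_line VALUES (?, ?)", [(1, 1), (2, 1), (3, 1)])

# Query: the word immediately to the right of the label "No:" on the same line.
cur.execute("""
SELECT w2.text
FROM words w1
JOIN word_in_line l1 ON l1.word_id = w1.word_id
JOIN word_in_line l2 ON l2.line_id = l1.line_id
JOIN words w2       ON w2.word_id = l2.word_id
WHERE w1.text = 'No:' AND w2.x0 > w1.x1
ORDER BY w2.x0 LIMIT 1
""")
print(cur.fetchone()[0])  # -> INV-1234
```

In the framework described in the abstract, a natural-language question such as "What is the invoice number?" would be translated into a query of this form, so that a non-technical user never writes the SQL directly.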

Original language: English
Title of host publication: Computer Vision – ACCV 2018 Workshops - 14th Asian Conference on Computer Vision, 2018, Revised Selected Papers
Editors: Gustavo Carneiro, Shaodi You
Publisher: Springer Verlag
Pages: 186-201
Number of pages: 16
ISBN (Print): 9783030210731
State: Published - 2019
Externally published: Yes
Event: 14th Asian Conference on Computer Vision, ACCV 2018 - Perth, Australia
Duration: Dec 2, 2018 – Dec 6, 2018

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 11367 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 14th Asian Conference on Computer Vision, ACCV 2018
Country/Territory: Australia
City: Perth
Period: 12/2/18 – 12/6/18
