A BERT-based sequential deep neural architecture to identify contribution statements and extract phrases for triplets from scientific publications

Komal Gupta, Ammaar Ahmad, Tirthankar Ghosal, Asif Ekbal

Research output: Contribution to journal › Article › peer-review


Abstract

Research in Natural Language Processing (NLP) is growing rapidly, and as a result a large number of research papers are being published. Finding the contributions of a research paper in a specific domain within this huge amount of unstructured data is challenging, so there is a need to structure the relevant contributions in a Knowledge Graph (KG). In this paper, we describe our work on four tasks toward building a Scientific Knowledge Graph (SKG). We propose a pipelined system that performs contribution sentence identification, phrase extraction from contribution sentences, Information Unit (IU) classification, and organization of phrases into (subject, predicate, object) triplets from NLP scholarly publications. We develop a multitask system (ContriSci) for contribution sentence identification with two supporting tasks, viz., Section Identification and Citance Classification. For phrase extraction, we use a Bidirectional Encoder Representations from Transformers (BERT)-Conditional Random Field (CRF) model, trained with two additional datasets: SciERC and SciClaim. To classify contribution sentences into IUs, we use a BERT-based model. For triplet extraction, we group the triplets into five categories and classify them with a BERT-based classifier. Our proposed approach yields F1 scores of 64.21%, 77.47%, 84.52%, and 62.71% for contribution sentence identification, phrase extraction, IU classification, and triplet extraction, respectively, in the non-end-to-end setting. The relative improvements for contribution sentence identification, IU classification, and triplet extraction are 8.08, 2.46, and 2.31 F1 points on the NLPContributionGraph (NCG) dataset. Our system achieves the best performance (57.54% F1 score) in the end-to-end pipeline with all four sub-tasks combined. We make our code available at: https://github.com/92Komal/pipeline_triplet_extraction.
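To make the phrase-extraction component concrete, the sketch below shows a minimal BERT-CRF token tagger of the kind the abstract describes: a BERT encoder produces per-token emission scores and a CRF layer decodes the phrase tag sequence. This is an illustrative sketch under stated assumptions, not the authors' released code: the bert-base-uncased checkpoint, the three-tag BIO label set, and the pytorch-crf package are assumptions introduced here for the example.

    import torch
    import torch.nn as nn
    from torchcrf import CRF  # pip install pytorch-crf
    from transformers import AutoModel, AutoTokenizer

    class BertCrfTagger(nn.Module):
        """BERT encoder + linear emission layer + CRF for BIO phrase tagging.

        Hypothetical configuration: 3 tags (B, I, O); the paper's actual
        label set and hyperparameters may differ.
        """
        def __init__(self, model_name="bert-base-uncased", num_tags=3):
            super().__init__()
            self.encoder = AutoModel.from_pretrained(model_name)
            self.emit = nn.Linear(self.encoder.config.hidden_size, num_tags)
            self.crf = CRF(num_tags, batch_first=True)

        def forward(self, input_ids, attention_mask, tags=None):
            hidden = self.encoder(input_ids=input_ids,
                                  attention_mask=attention_mask).last_hidden_state
            emissions = self.emit(hidden)
            mask = attention_mask.bool()
            if tags is not None:
                # Training: negative log-likelihood of the gold tag sequence
                return -self.crf(emissions, tags, mask=mask, reduction="mean")
            # Inference: Viterbi-decoded tag sequence for each sentence
            return self.crf.decode(emissions, mask=mask)

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = BertCrfTagger().eval()
    batch = tokenizer(["We propose a multitask contribution classifier."],
                      return_tensors="pt")
    with torch.no_grad():
        predicted = model(batch["input_ids"], batch["attention_mask"])
    print(predicted)  # e.g. [[0, 1, 2, 2, ...]] over the BIO tag ids

The same encoder-plus-classification-head pattern, with a softmax layer in place of the CRF, would serve for the sentence-level tasks the abstract mentions (contribution sentence identification, IU classification, and triplet classification).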

Original language: English
Journal: International Journal on Digital Libraries
State: Accepted/In press - 2024

Keywords

  • Information extraction
  • Knowledge graph
  • Machine learning
  • Multitask learning
  • Scholarly article
