TY - GEN
T1 - ARGUABLY @ AI Debater-NLPCC 2021 Task 3
T2 - 10th CCF International Conference on Natural Language Processing and Chinese Computing, NLPCC 2021
AU - Kohli, Guneet Singh
AU - Kaur, Prabsimran
AU - Singh, Muskaan
AU - Ghosal, Tirthankar
AU - Rana, Prashant Singh
N1 - Publisher Copyright:
© 2021, Springer Nature Switzerland AG.
PY - 2021
Y1 - 2021
N2 - This paper describes our participating system for the argumentative text understanding shared task of AI Debater at NLPCC 2021 (http://www.fudan-disc.com/sharedtask/AIDebater21/tracks.html). The tasks are motivated by the goal of developing an autonomous debating system. We make an initial attempt at Track 3, namely argument pair extraction from peer review and rebuttal, where we extract arguments from peer reviews and their corresponding rebuttals from author responses. Compared to the multi-task baseline provided by the organizers, we introduce two significant changes: (i) we use ERNIE 2.0 token embeddings, which better capture lexical, syntactic, and semantic information in the training data, and (ii) we perform double attention learning to capture long-term dependencies. Our proposed model achieves state-of-the-art results with a relative improvement of 8.81% in F1 score over the baseline model. We make our code publicly available at https://github.com/guneetsk99/ArgumentMining_SharedTask. Our team, ARGUABLY, is one of the third-prize-winning teams in Track 3 of the shared task.
AB - This paper describes our participating system for the argumentative text understanding shared task of AI Debater at NLPCC 2021 (http://www.fudan-disc.com/sharedtask/AIDebater21/tracks.html). The tasks are motivated by the goal of developing an autonomous debating system. We make an initial attempt at Track 3, namely argument pair extraction from peer review and rebuttal, where we extract arguments from peer reviews and their corresponding rebuttals from author responses. Compared to the multi-task baseline provided by the organizers, we introduce two significant changes: (i) we use ERNIE 2.0 token embeddings, which better capture lexical, syntactic, and semantic information in the training data, and (ii) we perform double attention learning to capture long-term dependencies. Our proposed model achieves state-of-the-art results with a relative improvement of 8.81% in F1 score over the baseline model. We make our code publicly available at https://github.com/guneetsk99/ArgumentMining_SharedTask. Our team, ARGUABLY, is one of the third-prize-winning teams in Track 3 of the shared task.
KW - Argument pair extraction
KW - Deep learning
KW - Peer review
UR - http://www.scopus.com/inward/record.url?scp=85118142227&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-88483-3_48
DO - 10.1007/978-3-030-88483-3_48
M3 - Conference contribution
AN - SCOPUS:85118142227
SN - 9783030884826
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 590
EP - 602
BT - Natural Language Processing and Chinese Computing - 10th CCF International Conference, NLPCC 2021, Proceedings
A2 - Wang, Lu
A2 - Feng, Yansong
A2 - Hong, Yu
A2 - He, Ruifang
PB - Springer Science and Business Media Deutschland GmbH
Y2 - 13 October 2021 through 17 October 2021
ER -