ELITR Minuting Corpus: A Novel Dataset for Automatic Minuting from Multi-Party Meetings in English and Czech

Anna Nedoluzhko, Muskaan Singh, Marie Hledíková, Tirthankar Ghosal, Ondřej Bojar

Research output: Chapter in Book/Report/Conference proceeding > Conference contribution > peer-review

14 Scopus citations

Abstract

Taking minutes is an essential component of every meeting, although the goals, style, and procedure of this activity (“minuting” for short) can vary. Minuting is a relatively unstructured writing act and is affected by who takes the minutes and for whom the minutes are intended. With the rise of online meetings, automatic minuting would be an important use case for meeting participants as well as for those who missed the meeting. However, automatically generating meeting minutes is a challenging problem due to various factors, including the quality of automatic speech recognition (ASR), the limited public availability of meeting data, and the subjective knowledge of the minuter. In this work, we present a first-of-its-kind dataset for automatic minuting. We develop a dataset of English and Czech technical project meetings consisting of ASR-generated transcripts that were manually corrected and minuted by several annotators. Our dataset, the ELITR Minuting Corpus, consists of 120 English and 59 Czech meetings, covering about 180 hours of meeting content. The corpus is publicly available at http://hdl.handle.net/11234/1-4692 as a set of meeting transcripts and minutes, excluding the recordings for privacy reasons. A unique feature of our dataset is that most meetings are accompanied by more than one set of minutes, each created independently. Our corpus thus allows studying differences in what people find important when taking minutes. We also provide baseline experiments for the community to explore this novel problem further. To the best of our knowledge, the ELITR Minuting Corpus is the first resource on minuting in English and also in a language other than English (Czech).
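To make the structure of the released data concrete, here is a minimal loading sketch that pairs each meeting transcript with all of its independently created minutes. The CORPUS_ROOT path, the one-directory-per-meeting layout with transcript*.txt and minutes*.txt files, and the load_meetings helper are illustrative assumptions, not the corpus's documented interface.

```python
from pathlib import Path

# Minimal sketch (not an official loader) for pairing each meeting
# transcript with all of its independently created minutes.
# Assumed layout: <CORPUS_ROOT>/<meeting_id>/{transcript*.txt, minutes*.txt}
CORPUS_ROOT = Path("elitr_minuting_corpus/en")  # hypothetical local path


def load_meetings(root: Path):
    """Yield (meeting_id, transcript_text, list_of_minutes_texts)."""
    for meeting_dir in sorted(p for p in root.iterdir() if p.is_dir()):
        transcripts = sorted(meeting_dir.glob("transcript*.txt"))
        minutes = sorted(meeting_dir.glob("minutes*.txt"))
        if not transcripts:
            continue  # skip folders without a corrected transcript
        transcript_text = transcripts[0].read_text(encoding="utf-8")
        minutes_texts = [m.read_text(encoding="utf-8") for m in minutes]
        yield meeting_dir.name, transcript_text, minutes_texts


if __name__ == "__main__":
    if not CORPUS_ROOT.exists():
        raise SystemExit(f"Adjust CORPUS_ROOT: {CORPUS_ROOT} not found")
    for meeting_id, transcript, all_minutes in load_meetings(CORPUS_ROOT):
        # Most meetings carry more than one set of minutes, so each can
        # serve as an alternative human reference for summarization models.
        print(f"{meeting_id}: {len(transcript.split())} transcript tokens, "
              f"{len(all_minutes)} minutes file(s)")
```

Keeping all minutes for a meeting, rather than a single reference, is what allows the multiple independently written minutes to serve as alternative human references in summarization baselines.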

Original language: English
Title of host publication: 2022 Language Resources and Evaluation Conference, LREC 2022
Editors: Nicoletta Calzolari, Frederic Bechet, Philippe Blache, Khalid Choukri, Christopher Cieri, Thierry Declerck, Sara Goggi, Hitoshi Isahara, Bente Maegaard, Joseph Mariani, Helene Mazo, Jan Odijk, Stelios Piperidis
Publisher: European Language Resources Association (ELRA)
Pages: 3174-3182
Number of pages: 9
ISBN (Electronic): 9791095546726
State: Published - 2022
Externally published: Yes
Event: 13th International Conference on Language Resources and Evaluation, LREC 2022 - Marseille, France
Duration: Jun 20, 2022 – Jun 25, 2022

Publication series

Name: 2022 Language Resources and Evaluation Conference, LREC 2022

Conference

Conference: 13th International Conference on Language Resources and Evaluation, LREC 2022
Country/Territory: France
City: Marseille
Period: 06/20/22 – 06/25/22

Funding

This work has received funding from the European Union's Horizon 2020 Research and Innovation Programme under Grant Agreement No 825460 (ELITR), the grant 19-26934X (NEUREM3) of the Czech Science Foundation, and has also been supported by the Ministry of Education, Youth and Sports of the Czech Republic, Project No. LM2018101 LINDAT/CLARIAH-CZ.

Funders and funder numbers:
• Horizon 2020 Framework Programme
• Ministerstvo Školství, Mládeže a Tělovýchovy: LM2018101 LINDAT/CLARIAH-CZ
• Grantová Agentura České Republiky
• Horizon 2020: 825460, 19-26934X

Keywords

• automatic minuting
• meeting summarization
• multi-party dialogues
