TY - JOUR
T1 - Emotion aided multi-task framework for video embedded misinformation detection
AU - Kumari, Rina
AU - Gupta, Vipin
AU - Ashok, Nischal
AU - Ghosal, Tirthankar
AU - Ekbal, Asif
N1 - Publisher Copyright:
© The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2023.
PY - 2024/4
Y1 - 2024/4
N2 - Online news consumption via social media platforms has accelerated the growth of digital journalism. Unlike traditional media, digital media has lower entry barriers and allows anyone to become a content creator, resulting in the production of abundant fake news designed to attract public attention. Since multimedia content is a more convenient vehicle for users to express their feelings than text, image- and video-embedded fake news now circulates rapidly on social media. The emotional appeal of fake news is another driving factor in its rapid dissemination. Although prior studies have made remarkable efforts toward fake news detection, they place less emphasis on the video modality and on the emotional appeal of fake news. To bridge this gap, this paper presents the following two contributions: i) it develops a video-based multimodal fake news detection dataset named FakeClips, and ii) it introduces a deep multitask framework dedicated to video-embedded multimodal fake news detection, in which fake news detection is the main task and emotion recognition is the auxiliary task. The results reveal that investigating emotion and fake news together in a multitask framework achieves gains of 9.04% in accuracy and 5.27% in F-score over the state-of-the-art model, i.e., the Fake Video Detection Model.
AB - Online news consumption via social media platforms has accelerated the growth of digital journalism. Unlike traditional media, digital media has lower entry barriers and allows anyone to become a content creator, resulting in the production of abundant fake news designed to attract public attention. Since multimedia content is a more convenient vehicle for users to express their feelings than text, image- and video-embedded fake news now circulates rapidly on social media. The emotional appeal of fake news is another driving factor in its rapid dissemination. Although prior studies have made remarkable efforts toward fake news detection, they place less emphasis on the video modality and on the emotional appeal of fake news. To bridge this gap, this paper presents the following two contributions: i) it develops a video-based multimodal fake news detection dataset named FakeClips, and ii) it introduces a deep multitask framework dedicated to video-embedded multimodal fake news detection, in which fake news detection is the main task and emotion recognition is the auxiliary task. The results reveal that investigating emotion and fake news together in a multitask framework achieves gains of 9.04% in accuracy and 5.27% in F-score over the state-of-the-art model, i.e., the Fake Video Detection Model.
KW - Deep learning
KW - Multimodal emotion
KW - Multimodal fake news detection
KW - Supervised contrastive learning
KW - Video embedded fake news
UR - http://www.scopus.com/inward/record.url?scp=85174388297&partnerID=8YFLogxK
U2 - 10.1007/s11042-023-17208-6
DO - 10.1007/s11042-023-17208-6
M3 - Article
AN - SCOPUS:85174388297
SN - 1380-7501
VL - 83
SP - 37161
EP - 37185
JO - Multimedia Tools and Applications
JF - Multimedia Tools and Applications
IS - 12
ER -