Please use this identifier to cite or link to this item: http://hdl.handle.net/20.500.12188/33984
Title: Multimodal Deep Learning for Online Meme Classification
Authors: Han, Stephanie
Leal-Arenas, Sebastian
Zdravevski, Eftim 
Cavalcante, Charles C.
Boukouvalas, Zois
Corizzo, Roberto
Keywords: Memes, multimodal learning, data fusion, neural networks, deep learning
Issue Date: 15-Dec-2024
Publisher: IEEE
Conference: 2024 IEEE International Conference on Big Data (BigData)
Abstract: Memes are typically humorous in intent, yet they can also be used for malicious purposes. Analysing meme data can enhance content monitoring, reveal emerging topics, and support content moderation on online platforms. Memes are also an interesting use case for multimodal machine learning, as they combine text and image data. In this study, we explored the linguistic characteristics of five meme classes and analysed their convergent themes through common word extraction. We also compared the effectiveness of unimodal (text-only or image-only) and multimodal (early fusion and late fusion) machine learning models on binary and multiclass meme classification tasks (the two fusion strategies are illustrated in the sketch after the item record). Our results on a large meme dataset showed that memes closely track current affairs, as evidenced by the high frequency of topical words across meme classes. In terms of accuracy, early fusion outperformed late fusion, and binary classification outperformed multiclass classification. However, the fusion models did not consistently surpass the accuracy of the independent text-based or image-based models.
URI: http://hdl.handle.net/20.500.12188/33984
Appears in Collections:Faculty of Computer Science and Engineering: Conference papers
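
For readers unfamiliar with the fusion strategies compared in the abstract, the following is a minimal PyTorch sketch contrasting early fusion (concatenating modality features before a joint classifier) with late fusion (combining per-modality predictions). This is not the authors' implementation: the embedding dimensions (768 for text, 512 for image), the hidden size, the five-class output, and the choice of averaging logits are illustrative assumptions.

import torch
import torch.nn as nn

class EarlyFusionClassifier(nn.Module):
    """Early fusion: concatenate text and image embeddings, then classify jointly."""
    def __init__(self, text_dim=768, image_dim=512, hidden_dim=256, num_classes=5):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(text_dim + image_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, text_emb, image_emb):
        # Join modality features into a single vector and learn a shared representation.
        fused = torch.cat([text_emb, image_emb], dim=-1)
        return self.head(fused)

class LateFusionClassifier(nn.Module):
    """Late fusion: each modality predicts independently; decisions are combined afterwards."""
    def __init__(self, text_dim=768, image_dim=512, num_classes=5):
        super().__init__()
        self.text_head = nn.Linear(text_dim, num_classes)
        self.image_head = nn.Linear(image_dim, num_classes)

    def forward(self, text_emb, image_emb):
        # Combine per-modality class scores; averaging logits is one simple choice.
        return 0.5 * (self.text_head(text_emb) + self.image_head(image_emb))

if __name__ == "__main__":
    text_emb = torch.randn(4, 768)   # e.g., sentence-encoder output for the meme text
    image_emb = torch.randn(4, 512)  # e.g., CNN feature vector for the meme image
    print(EarlyFusionClassifier()(text_emb, image_emb).shape)  # torch.Size([4, 5])
    print(LateFusionClassifier()(text_emb, image_emb).shape)   # torch.Size([4, 5])

In practice, the embeddings would come from pretrained text and image encoders; the key architectural difference is only where the modalities meet, before (early) or after (late) the classification step.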

Files in This Item:
File: Multimodal_Deep_Learning_of_Online_Memes__Copy_-8.pdf
Size: 1.16 MB
Format: Adobe PDF

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.