Please use this identifier to cite or link to this item: http://hdl.handle.net/20.500.12188/17479
Title: COMPARATIVE ANALYSIS OF WORD EMBEDDINGS FOR CAPTURING WORD SIMILARITIES
Authors: Toshevska, Martina
Stojanovska, Frosina
Kalajdjieski, Jovan
Keywords: Word Embeddings, Distributed Word Representation, Word Similarity
Issue Date: 8-May-2020
Journal: arXiv preprint arXiv:2005.03812
Abstract: Distributed language representation has become the most widely used technique for language representation in various natural language processing tasks. Most natural language processing models based on deep learning techniques use already pre-trained distributed word representations, commonly called word embeddings. Determining the highest-quality word embeddings is crucial for such models. However, selecting the appropriate word embeddings is a challenging task, since the projected embedding space is not intuitive to humans. In this paper, we explore different approaches for creating distributed word representations. We perform an intrinsic evaluation of several state-of-the-art word embedding methods, analysing their performance on capturing word similarities with existing benchmark datasets of word-pair similarities. We conduct a correlation analysis between ground-truth word similarities and the similarities obtained by the different word embedding methods.
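The intrinsic evaluation the abstract describes can be sketched as follows: for each benchmark word pair, compute the cosine similarity of the two embedding vectors, then measure the Spearman rank correlation between those model similarities and the human-judged scores. This is a minimal illustration with toy three-dimensional vectors and a made-up word-pair list (not the paper's actual embeddings or benchmark data):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def spearman_correlation(xs, ys):
    """Spearman rank correlation (no tie handling; fine for this toy data)."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0.0] * len(values)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mean = (n - 1) / 2.0
    cov = sum((a - mean) * (b - mean) for a, b in zip(rx, ry))
    var = sum((a - mean) ** 2 for a in rx)
    return cov / var

# Hypothetical 3-d embeddings standing in for a pre-trained model.
embeddings = {
    "cat": [0.9, 0.1, 0.0],
    "dog": [0.8, 0.2, 0.1],
    "car": [0.0, 0.9, 0.4],
}
# (word1, word2, human-judged score), as in WordSim-style benchmarks.
gold = [("cat", "dog", 8.5), ("cat", "car", 2.0), ("dog", "car", 2.5)]

human = [score for _, _, score in gold]
model = [cosine_similarity(embeddings[a], embeddings[b]) for a, b, _ in gold]
rho = spearman_correlation(human, model)
```

A higher correlation means the embedding space orders word pairs the way human annotators do, which is the comparison criterion used across the embedding methods in the paper.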
URI: http://hdl.handle.net/20.500.12188/17479
Appears in Collections:Faculty of Computer Science and Engineering: Journal Articles

Files in This Item:
File: 2005.03812.pdf | Size: 1.42 MB | Format: Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.