Please use this identifier to cite or link to this item: http://hdl.handle.net/20.500.12188/17484
Title: Exploration into Deep Learning Text Generation Architectures for Dense Image Captioning
Authors: Toshevska, Martina
Lameski, Petre 
Stojanovska, Frosina
Zdravevski, Eftim 
Gievska, Sonja 
Issue Date: 6-Sep-2020
Publisher: IEEE
Conference: 2020 15th Conference on Computer Science and Information Systems (FedCSIS)
Abstract: Image captioning is the process of generating a textual description that best fits the image scene. It is one of the most important tasks in computer vision and natural language processing, with the potential to improve many applications in robotics, assistive technologies, storytelling, medical imaging, and more. This paper analyses different encoder-decoder architectures for dense image caption generation, focusing on the text generation component. Pre-trained models are used to extract image features via transfer learning, and these features are then used to describe image regions with three different text generation models. We propose three deep learning architectures for generating one-sentence captions of Regions of Interest (RoIs), reflecting several ways of integrating image and text features. The proposed models were evaluated and compared using several natural language generation metrics. The experimental results demonstrate that injecting image features into a decoder RNN while generating a caption word by word is the best-performing architecture among those explored in this paper.
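
For readers unfamiliar with the "inject" pattern the abstract identifies as best-performing, below is a minimal PyTorch sketch (not the paper's actual code) of a decoder that injects pre-extracted RoI features alongside each word embedding at every decoding step. The module name, all dimensions, the LSTM choice, and the 2048-d feature size are illustrative assumptions.

import torch
import torch.nn as nn

class InjectCaptionDecoder(nn.Module):
    """Hypothetical 'inject'-style caption decoder: image features
    accompany every word input of the RNN (a par-inject variant)."""
    def __init__(self, vocab_size, embed_dim=256, feat_dim=2048, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Project region features from a pre-trained image model
        # (transfer learning) into the embedding space.
        self.feat_proj = nn.Linear(feat_dim, embed_dim)
        self.rnn = nn.LSTM(embed_dim * 2, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, roi_features, captions):
        # roi_features: (batch, feat_dim) pre-extracted RoI features
        # captions:     (batch, seq_len) token ids of the target sentence
        words = self.embed(captions)                        # (B, T, E)
        feats = self.feat_proj(roi_features)                # (B, E)
        feats = feats.unsqueeze(1).expand(-1, words.size(1), -1)
        # Inject: concatenate image features with every word embedding.
        rnn_in = torch.cat([words, feats], dim=-1)          # (B, T, 2E)
        hidden, _ = self.rnn(rnn_in)
        return self.out(hidden)                             # (B, T, V)

# Toy usage with random tensors, only to illustrate the shapes.
model = InjectCaptionDecoder(vocab_size=10000)
feats = torch.randn(4, 2048)               # 4 RoIs, 2048-d features
caps = torch.randint(0, 10000, (4, 12))    # 4 captions of length 12
logits = model(feats, caps)                # (4, 12, 10000)

The alternative explored in this line of work is a "merge" architecture, where image features are combined with the RNN output only after the text has been encoded; the sketch above shows the injecting variant the abstract reports as strongest.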
URI: http://hdl.handle.net/20.500.12188/17484
Appears in Collections: Faculty of Computer Science and Engineering: Conference papers

Files in This Item:
File: 57.pdf | Size: 1.4 MB | Format: Adobe PDF

Page view(s): 39 (checked on Apr 22, 2024)
Download(s): 36 (checked on Apr 22, 2024)


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.