Title: Exploration into Deep Learning Text Generation Architectures for Dense Image Captioning
Authors: Toshevska, Martina
Lameski, Petre 
Stojanovska, Frosina
Zdravevski, Eftim 
Gievska, Sonja 
Issue Date: 6-Sep-2020
Publisher: IEEE
Conference: 2020 15th Conference on Computer Science and Information Systems (FedCSIS)
Abstract: Image captioning is the task of generating a textual description that best fits an image scene. It is one of the central tasks at the intersection of computer vision and natural language processing, with the potential to improve applications in robotics, assistive technologies, storytelling, medical imaging, and more. This paper analyses different encoder-decoder architectures for dense image caption generation, focusing on the text generation component. Pre-trained models are used, via transfer learning, to extract image features; these features are then fed to three different text-generation models to describe image regions. We propose three deep learning architectures for generating one-sentence captions of Regions of Interest (RoIs), reflecting several ways of integrating image and text features. The proposed models were evaluated and compared using several natural language generation metrics. The experimental results demonstrate that injecting image features into a decoder RNN while generating the caption word by word is the best-performing architecture among those explored in this paper.
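The best-performing variant described in the abstract, where the image feature vector is injected into the decoder RNN at every generation step (often called "par-inject"), can be sketched in outline. The snippet below is a minimal illustrative NumPy sketch, not the authors' implementation: the toy dimensions, the single-layer tanh RNN cell, the random parameters, and the greedy decoding loop are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB = 12    # toy vocabulary size (assumption)
EMBED = 8     # word-embedding size (assumption)
IMG = 16      # size of the pre-extracted RoI feature vector (assumption)
HIDDEN = 10   # RNN hidden-state size (assumption)

# Toy parameters; in the paper these would be learned end to end.
E = rng.normal(size=(VOCAB, EMBED))                  # word embeddings
W_x = rng.normal(size=(EMBED + IMG, HIDDEN)) * 0.1   # input-to-hidden weights
W_h = rng.normal(size=(HIDDEN, HIDDEN)) * 0.1        # hidden-to-hidden weights
W_o = rng.normal(size=(HIDDEN, VOCAB)) * 0.1         # hidden-to-vocabulary weights

def par_inject_decode(img_feat, max_len=5, start_tok=0):
    """Greedy decoding where the RoI feature is concatenated with the
    current word embedding at *every* step ('par-inject' style)."""
    h = np.zeros(HIDDEN)
    tok, caption = start_tok, []
    for _ in range(max_len):
        x = np.concatenate([E[tok], img_feat])  # inject the image feature
        h = np.tanh(x @ W_x + h @ W_h)          # simple RNN cell update
        logits = h @ W_o
        tok = int(np.argmax(logits))            # greedy word choice
        caption.append(tok)
    return caption

roi_feature = rng.normal(size=IMG)  # stands in for CNN-extracted RoI features
print(par_inject_decode(roi_feature))
```

In a real system the RoI features would come from a pre-trained CNN (the transfer-learning step mentioned in the abstract), and the alternative architectures would differ mainly in where this concatenation happens, e.g. only at initialization rather than at every step.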
Appears in Collections:Faculty of Computer Science and Engineering: Conference papers

Files in This Item:
57.pdf (1.4 MB, Adobe PDF)
