International Journal of Advance Computational Engineering and Networking (IJACEN)
  Journal Paper


Paper Title:
A Comparative Study of Various Deep Learning Models for Image Caption Generation on Indian Historical Data

Author: Krishna Desai, Ashwini Joshi

Article Citation: Krishna Desai, Ashwini Joshi, (2022) "A Comparative Study of Various Deep Learning Models for Image Caption Generation on Indian Historical Data", International Journal of Advance Computational Engineering and Networking (IJACEN), pp. 54-58, Volume-10, Issue-7

Abstract: Image captioning follows an encoder-decoder architecture that poses a dual challenge of image analysis and text generation. Owing to the success of attention-based deep learning models in both language translation and image processing, the automatic image captioning problem has received considerable attention. Improving the performance of each part of the framework, or employing a more effective attention mechanism, has a positive impact on overall performance. We propose a newly created dataset containing historical sites in India, such as historical temples, step-wells, carved columns, and sculptures of gods, goddesses, and people. In this research, we used several deep learning encoder-decoder architectures for image captioning: VGG16-LSTM, ResNet50-LSTM, ResNet152-LSTM, InceptionV3-LSTM, EfficientNetB0-LSTM, and the Transformer, where LSTM serves as a powerful recurrent decoder. Recent work shows that the Transformer is superior to the LSTM in efficiency and performance on some NLP and captioning tasks. The Transformer consists of multiple encoder-decoder pairs, where the encoders represent the image feature vectors using self-attention to extract important features, allowing all feature vectors to interact with each other and determine which ones to attend to more strongly. The decoders use multi-head attention, which helps generate the word sequence one token at a time conditioned on the contextualized encoding sequence. This study presents a comparative analysis of the performance of the six models implemented on the newly created dataset. The performance measures considered in this study are the BLEU-1 through BLEU-4 scores.

Keywords: Image captioning, Encoder-Decoder, Multi-Head Attention
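The self-attention step described in the abstract, where every image feature vector scores every other vector to decide which to attend to, can be illustrated with a minimal NumPy sketch. This is an assumption-laden illustration of scaled dot-product self-attention, not the authors' implementation; the projection matrices and feature dimensions are made up for the example.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a set of feature vectors.

    X: (n, d) array — n image feature vectors of dimension d.
    Wq, Wk, Wv: (d, d) learned projection matrices (random here).
    Returns an (n, d) array where each row is a weighted mix of all values.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    # Pairwise interaction scores: how much each vector attends to each other.
    scores = Q @ K.T / np.sqrt(d_k)                       # (n, n)
    # Row-wise softmax turns scores into attention weights summing to 1.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                                     # (n, d)

rng = np.random.default_rng(0)
n, d = 4, 8  # e.g. 4 region features produced by a CNN encoder
X = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8)
```

In the full Transformer this operation is run in several parallel "heads" whose outputs are concatenated (multi-head attention), and the decoder applies the same mechanism over the partially generated caption plus the encoder outputs.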

Type: Research paper

Published: Volume-10, Issue-7


DOI Online No.: IJACEN-IRAJ-DOIONLINE-18840

Copyright: © Institute of Research and Journals

Published on: 2022-10-10