Contextualise, Attend, Modulate and Tell: Visual Storytelling

Zainy M. Malakan, Nayyer Aafaq, Ghulam Mubashar Hassan, Ajmal Mian

2021

Abstract

Automatic natural language description of visual content is an emerging, fast-growing topic that has recently attracted extensive research attention. However, unlike typical ‘image captioning’ or ‘video captioning’, coherent story generation from a sequence of images is a relatively less studied problem. Story generation poses the challenges of diverse language style, context modelling, coherence and latent concepts that are not even visible in the visual content. Contemporary methods fall short of modelling the context and visual variance, and generate stories that lack language coherence across multiple sentences. To this end, we propose a novel framework, Contextualise, Attend, Modulate and Tell (CAMT), that models the temporal relationships within the image sequence in both the forward and backward directions. The contextual information and the regional image features are projected into a joint space and then subjected to an attention mechanism that captures the spatio-temporal relationships among the images. Before the attentive representations of the input images are fed into a language model, gated modulation between the attentive representation and the input word embeddings is performed to capture the interaction between the inputs and their context. To the best of our knowledge, this is the first method that exploits such a modulation technique for story generation. We evaluate our model on the Visual Storytelling Dataset (VIST) using both automatic and human evaluation measures, and demonstrate that CAMT achieves better performance than existing baselines.
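The gated modulation step described above can be illustrated with a minimal sketch. The snippet below is not the authors' implementation: the layer sizes, the sigmoid-gate formulation and the class name GatedModulation are assumptions made purely to show how an attentive visual representation might modulate a word embedding before it enters the language model.

# Minimal sketch (assumed formulation, not the paper's released code) of a
# gated modulation between an attentive visual feature and a word embedding.
import torch
import torch.nn as nn

class GatedModulation(nn.Module):
    def __init__(self, vis_dim, emb_dim):
        super().__init__()
        # Gate is computed from both the visual context and the word embedding.
        self.gate = nn.Linear(vis_dim + emb_dim, emb_dim)
        # Project the visual feature into the word-embedding space.
        self.proj = nn.Linear(vis_dim, emb_dim)

    def forward(self, vis_feat, word_emb):
        # vis_feat: (batch, vis_dim)  attentive image representation
        # word_emb: (batch, emb_dim)  input word embedding
        g = torch.sigmoid(self.gate(torch.cat([vis_feat, word_emb], dim=-1)))
        # Element-wise gate decides how much visual context flows into the token.
        return g * self.proj(vis_feat) + (1.0 - g) * word_emb

# Example usage with dummy dimensions (hypothetical values).
mod = GatedModulation(vis_dim=512, emb_dim=300)
fused = mod(torch.randn(4, 512), torch.randn(4, 300))
print(fused.shape)  # torch.Size([4, 300])

The fused representation would then be fed to the language model in place of the plain word embedding, letting the visual context condition each decoding step.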



Paper Citation


in Harvard Style

Malakan Z., Aafaq N., Hassan G. and Mian A. (2021). Contextualise, Attend, Modulate and Tell: Visual Storytelling. In Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2021) - Volume 5: VISAPP; ISBN 978-989-758-488-6, SciTePress, pages 196-205. DOI: 10.5220/0010314301960205


in BibTeX Style

@conference{visapp21,
author={Zainy M. Malakan and Nayyer Aafaq and Ghulam Mubashar Hassan and Ajmal Mian},
title={Contextualise, Attend, Modulate and Tell: Visual Storytelling},
booktitle={Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2021) - Volume 5: VISAPP},
year={2021},
pages={196-205},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0010314301960205},
isbn={978-989-758-488-6},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2021) - Volume 5: VISAPP
TI - Contextualise, Attend, Modulate and Tell: Visual Storytelling
SN - 978-989-758-488-6
AU - Malakan Z.
AU - Aafaq N.
AU - Hassan G.
AU - Mian A.
PY - 2021
SP - 196
EP - 205
DO - 10.5220/0010314301960205
PB - SciTePress