Full description
Visual storytelling, also known as multi-image captioning, is the task of describing a set of images rather than a single image. The Visual Storytelling Task (VST) takes a set of images as input and aims to generate a coherent story relevant to those images. To support expressive and coherent story generation, we present the Sequential Storytelling Image Dataset (SSID), which consists of open-source video frames accompanied by story-like annotations, with four annotations (i.e., stories) provided for each set of five images. The image sets were collected manually from publicly available videos in three domains (documentaries, lifestyle, and movies) and then annotated manually using Amazon Mechanical Turk. In summary, the SSID dataset comprises 17,365 images, grouped into 3,473 unique sets of five images. Each set is associated with four ground truths, for a total of 13,892 unique ground truths (i.e., written stories), and each ground truth consists of five connected sentences written in the form of a story.
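As a minimal sketch of how the dataset's structure (sets of five images, each paired with four five-sentence stories) might be consumed, the following Python snippet assumes a hypothetical layout with an annotations.json file mapping each set ID to five image filenames and four stories; the file names and fields are illustrative, not the official loader.

    # Hypothetical loader sketch: yields each five-image set with its four stories.
    import json
    from pathlib import Path

    def load_ssid(root):
        """Yield (set_id, image_paths, stories) from an assumed SSID layout:
        root/annotations.json maps a set id to 5 image filenames and 4 stories."""
        root = Path(root)
        with open(root / "annotations.json", encoding="utf-8") as f:
            annotations = json.load(f)
        for set_id, entry in annotations.items():
            image_paths = [root / "images" / name for name in entry["images"]]  # 5 video frames
            stories = entry["stories"]  # 4 ground-truth stories, 5 sentences each
            assert len(image_paths) == 5 and len(stories) == 4
            yield set_id, image_paths, stories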
External Organisations
King Fahd University of Petroleum and Minerals; Umm Al-Qura University
Associated Persons
Zainy M. Malakan Aljawy (Creator)
Saeed Anwar (Contributor)
Issued: 2023-07-10
Identifiers
- DOI: 10.21227/DBR9-DQ51
- global : bf980c65-45df-4d9a-b6b9-7a9a670a4b87