Storytelling with Transformers: A Comparative Analysis of GPT-2 and BART
- DOI
- 10.2991/978-94-6463-852-3_9
- Keywords
- Story generation; GPT-2; BART; Transformers; NLP; Narrative evaluation
- Abstract
This study presents a comparative analysis of two widely used transformer-based models, GPT-2 and BART, in the context of interactive storytelling. The evaluation emphasizes model performance in generating coherent, creative, and contextually aligned narratives. We utilize a Reddit-based dataset of writing prompts and fine-tune both models on it. The experiments show that GPT-2 produces more semantically rich and creative stories, whereas BART generates stories that are more emotionally consistent and grammatically accurate. These findings reflect the architectural differences between the models: GPT-2 is autoregressive and thus inclined toward imaginative generation, whereas BART's encoder-decoder structure favors controlled generation. Qualitative analysis showed that GPT-2 tends to repeat similar words when given vague prompts, while BART tends to maintain a coherent narrative even when given unusual prompts. Our findings highlight each model's strengths and limitations, offering insights into their capabilities for creative text generation and guiding future work in transformer-based storytelling systems.
- Copyright
- © 2025 The Author(s)
- Open Access
- Open Access This chapter is licensed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (http://creativecommons.org/licenses/by-nc/4.0/), which permits any noncommercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
Cite this article
TY  - CONF
AU  - Hrithik Singh
AU  - Vanita Mane
AU  - Tushar Ghorpade
PY  - 2025
DA  - 2025/10/07
TI  - Storytelling with Transformers: A Comparative Analysis of GPT-2 and BART
BT  - Proceedings of the MULTINOVA: First International Conference on Artificial Intelligence in Engineering, Healthcare and Sciences (ICAIEHS-2025)
PB  - Atlantis Press
SP  - 139
EP  - 148
SN  - 1951-6851
UR  - https://doi.org/10.2991/978-94-6463-852-3_9
DO  - 10.2991/978-94-6463-852-3_9
ID  - Singh2025
ER  -