Proceedings of the 6th International Conference on Deep Learning, Artificial Intelligence and Robotics (ICDLAIR 2024)

PROSPECT-SCI: Performance Review and Optimization of Summarization Techniques for Scientific Content

Authors
Y. P. Pragathi1,*, Shraddha Khanapur1, D. R. Manjunath1
1Department of Computer Science and Engineering, B.M.S. College of Engineering, Bengaluru, Karnataka, 560019, India
*Corresponding author. Email: pragathiy.cs20@bmsce.ac.in
Available Online 25 June 2025.
DOI
10.2991/978-94-6463-740-3_15
Keywords
Comparative Analysis; Large Language Models; Natural Language Processing; Scientific Text Summarization
Abstract

Extensive research has been conducted in the field of text summarization, leading to significant advancements in various domains. However, scientific summarization, which entails capturing nuanced details such as mathematical equations, complex terminology, and intricate symbols, remains largely underexplored. Existing summarization models often fail to adequately address the specific challenges posed by scientific texts, particularly in their handling of highly technical content. Moreover, there is a notable lack of comprehensive comparative studies that thoroughly examine the strengths and limitations of existing models in scientific summarization.

To address these gaps, our work presents a detailed comparison of eight state-of-the-art summarization methods: Latent Semantic Analysis, TextRank, Gemini, BART-Large, DistilBART, PEGASUS, T5-Large, and T5-Small. Each trainable model is fine-tuned on the SciTLDR dataset, which is specifically designed for scientific summarization, and performance is evaluated using established metrics such as ROUGE-1, ROUGE-2, and ROUGE-L. Additionally, a robust analysis of each model's advantages and drawbacks is provided, along with an in-depth literature review that categorizes these models based on key criteria, including their ability to capture complex details and their computational complexity.
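
As a concrete illustration of this evaluation protocol, the Python sketch below summarizes SciTLDR abstracts and scores the outputs with ROUGE-1, ROUGE-2, and ROUGE-L. It is a minimal sketch, not the paper's exact pipeline: the checkpoint (facebook/bart-large-cnn, standing in for the fine-tuned BART-Large), the Hugging Face dataset ID (allenai/scitldr), and its source/target field names are assumptions.

# Minimal sketch of the evaluation loop: summarize SciTLDR abstracts
# and score the outputs with ROUGE-1/2/L F-measures.
# Assumed: the "allenai/scitldr" Hub dataset with "source" (abstract
# sentences) and "target" (gold TLDRs) fields, and a stand-in
# pretrained checkpoint rather than the paper's fine-tuned one.
from datasets import load_dataset
from rouge_score import rouge_scorer
from transformers import pipeline

ds = load_dataset("allenai/scitldr", "Abstract", split="test")
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"],
                                  use_stemmer=True)

totals = {"rouge1": 0.0, "rouge2": 0.0, "rougeL": 0.0}
n = 50  # a small sample keeps the illustration fast
for ex in ds.select(range(n)):
    source = " ".join(ex["source"])   # join abstract sentences into one string
    reference = ex["target"][0]       # first gold TLDR as the reference
    pred = summarizer(source, max_length=64, min_length=8,
                      truncation=True)[0]["summary_text"]
    scores = scorer.score(reference, pred)
    for key in totals:
        totals[key] += scores[key].fmeasure

print({key: round(value / n, 3) for key, value in totals.items()})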

Our findings demonstrate that abstractive models such as BART, PEGASUS, and T5 outperform extractive and pre-trained models, with BART-Large achieving the highest scores on both ROUGE-1 (0.347) and ROUGE-L (0.302). Among the extractive techniques examined, TextRank achieved the lowest ROUGE-1 score, at 0.147. This comprehensive evaluation not only highlights the strengths of current approaches but also identifies areas for further improvement, contributing to a deeper understanding of the challenges and opportunities in scientific summarization.

Copyright
© 2025 The Author(s)
Open Access
This chapter is licensed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (http://creativecommons.org/licenses/by-nc/4.0/), which permits any noncommercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

Volume Title
Proceedings of the 6th International Conference on Deep Learning, Artificial Intelligence and Robotics (ICDLAIR 2024)
Series
Advances in Intelligent Systems Research
Publication Date
25 June 2025
ISBN
978-94-6463-740-3
ISSN
1951-6851
DOI
10.2991/978-94-6463-740-3_15

Cite this article

TY  - CONF
AU  - Y. P. Pragathi
AU  - Shraddha Khanapur
AU  - D. R. Manjunath
PY  - 2025
DA  - 2025/06/25
TI  - PROSPECT-SCI: Performance Review and Optimization of Summarization Techniques for Scientific Content
BT  - Proceedings of the 6th International Conference on Deep Learning, Artificial Intelligence and Robotics (ICDLAIR 2024)
PB  - Atlantis Press
SP  - 169
EP  - 181
SN  - 1951-6851
UR  - https://doi.org/10.2991/978-94-6463-740-3_15
DO  - 10.2991/978-94-6463-740-3_15
ID  - Pragathi2025
ER  -