Proceedings of the International Workshop on Advances in Deep Learning for Image Analysis and Computer Vision (IWADIC 2025)

Research and Analysis on Chain of Thought (CoT) Reasoning and Interpretability in Large Language Models

Authors
Pengyu Liao1, *
1College of Information and Engineering, Hefei University of Economics, Hefei, China
*Corresponding author. Email: lllzz6662222@gmail.com
Available Online 24 April 2026.
DOI
10.2991/978-94-6239-648-7_56
Keywords
Large Language Models; Chain of Thought; Interpretability; Multi-path reasoning; Trusted AI
Abstract

As an important reasoning paradigm for Large Language Models (LLMs), Chain-of-Thought (CoT) prompting guides a model to generate answers through stepwise reasoning and has achieved breakthroughs in logical consistency, interpretability, and task accuracy. This paper systematically reviews the research progress of CoT reasoning over the last five years (2021–2025) and analyses it along three main dimensions. First, the evolution and limitations of chain-of-thought reasoning: from zero-shot and few-shot CoT to structured reasoning methods such as Self-Consistency and Tree-of-Thought (ToT), the reasoning depth of models has improved markedly, yet problems such as unstable reasoning and insufficient generalisation remain. Second, the verifiability and consistency of the reasoning chain: researchers have increasingly focused on how to verify the correctness of the reasoning process and ensure logical closure between reasoning steps and conclusions. Third, the development of tree-structured and multi-path reasoning: through multi-path exploration and path-weighting and aggregation mechanisms, the robustness and diversity of model reasoning on complex tasks have been improved. This paper concludes that current CoT methods still face challenges in stability, faithfulness, and cross-domain transferability. Future research should focus on three directions: establishing a verifiable reasoning framework, developing multi-level structured thought modelling, and constructing a unified interpretability evaluation system, in order to move large language models towards more reliable and transparent intelligent reasoning.
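The Self-Consistency mechanism surveyed above, sampling several reasoning paths and aggregating their final answers by (optionally weighted) majority vote, can be sketched as follows. This is a minimal illustration, not code from the paper; the function name and the example answers are hypothetical, and in practice each element of `answers` would be the final answer extracted from one sampled chain-of-thought.

```python
from collections import Counter

def self_consistency_vote(answers, weights=None):
    """Aggregate final answers from multiple sampled reasoning paths.

    `answers` holds the final answer extracted from each CoT sample;
    `weights` optionally scores each path (e.g. via a verifier) and
    defaults to uniform weighting, i.e. plain majority voting.
    """
    if weights is None:
        weights = [1.0] * len(answers)
    scores = Counter()
    for ans, w in zip(answers, weights):
        scores[ans] += w
    # Return the answer with the highest total weight, marginalising
    # over the individual reasoning paths.
    return scores.most_common(1)[0][0]

# Five hypothetical sampled paths for the same maths question:
paths = ["42", "42", "41", "42", "40"]
print(self_consistency_vote(paths))  # majority answer: "42"
```

Path weighting generalises this from simple voting to the aggregation mechanisms discussed for tree and multi-path reasoning, where paths judged more reliable contribute more to the final answer.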

Copyright
© 2026 The Author(s)
Open Access
Open Access This chapter is licensed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (http://creativecommons.org/licenses/by-nc/4.0/), which permits any noncommercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.


Volume Title
Proceedings of the International Workshop on Advances in Deep Learning for Image Analysis and Computer Vision (IWADIC 2025)
Series
Advances in Computer Science Research
Publication Date
24 April 2026
ISBN
978-94-6239-648-7
ISSN
2352-538X
DOI
10.2991/978-94-6239-648-7_56

Cite this article

TY  - CONF
AU  - Pengyu Liao
PY  - 2026
DA  - 2026/04/24
TI  - Research and Analysis on Chain of Thought (CoT) Reasoning and Interpretability in Large Language Models
BT  - Proceedings of the International Workshop on Advances in Deep Learning for Image Analysis and Computer Vision (IWADIC 2025)
PB  - Atlantis Press
SP  - 509
EP  - 515
SN  - 2352-538X
UR  - https://doi.org/10.2991/978-94-6239-648-7_56
DO  - 10.2991/978-94-6239-648-7_56
ID  - Liao2026
ER  -