Proceedings of the 2025 4th International Conference on Mathematical Statistics and Economic Analysis (MSEA 2025)

Evaluation and Capacity of Large Language Model in Natural Language Processing

Authors
Zihan Yu1, *
1Tongji University, Economy and Management College, 200095, Shanghai, China
*Corresponding author. Email: 2350304@tongji.edu.cn
Available Online 20 February 2026.
DOI
10.2991/978-94-6463-992-6_6
Keywords
Large Language Model; Evaluation; Text Generation; Knowledge Completion; Complex Reasoning
Abstract

The structural complexity and logical coherence of human languages have positioned them as a central concern in artificial intelligence research, particularly within the domain of language modeling. This field has progressed from early statistical frameworks to advanced neural architectures, exemplified by pre-trained language models (PLMs) built on the Transformer architecture, which demonstrate robust performance across natural language processing tasks. Recent investigations into scaling these models have given rise to large language models (LLMs), revealing notable enhancements in capability. This study examines the continuum between PLMs and LLMs, evaluating LLMs along three primary dimensions: text generation, knowledge completion, and complex reasoning. The analysis centers on their proficiency in producing contextually appropriate text, answering knowledge-based queries, and resolving tasks that require multi-step inference. Results indicate that LLMs, upon exceeding a critical parameter threshold, exhibit significant performance gains and emergent properties, such as in-context learning, that are absent in smaller-scale models like BERT. These findings affirm the critical role of LLMs in advancing computational linguistics and provide a structured basis for future inquiry.

Copyright
© 2026 The Author(s)
Open Access
This chapter is licensed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (http://creativecommons.org/licenses/by-nc/4.0/), which permits any noncommercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.


Series
Advances in Economics, Business and Management Research
Publication Date
20 February 2026
ISBN
978-94-6463-992-6
ISSN
2352-5428

Cite this article

TY  - CONF
AU  - Zihan Yu
PY  - 2026
DA  - 2026/02/20
TI  - Evaluation and Capacity of Large Language Model in Natural Language Processing
BT  - Proceedings of the 2025 4th International Conference on Mathematical Statistics and Economic Analysis (MSEA 2025)
PB  - Atlantis Press
SP  - 35
EP  - 48
SN  - 2352-5428
UR  - https://doi.org/10.2991/978-94-6463-992-6_6
DO  - 10.2991/978-94-6463-992-6_6
ID  - Yu2026
ER  -