Abstract: Since its public release at the end of 2022, ChatGPT has attracted worldwide attention, and much research has been conducted on the opportunities and challenges it has brought to linguistic studies. At the same time, scholars hold different views on the role of ChatGPT in these studies. This paper begins with Norvig's (2011) argument concerning two competing goals of linguistics: descriptive accuracy (of linguistic performance, that is, the how) and scientific explanation (of linguistic competence, that is, the why). Centered on this issue, a series of related questions is discussed, leading to the following conclusions: (1) ChatGPT and Large Language Models (LLMs) surpass the Markov Process Model in capturing long-distance dependencies between words in a sentence, and they implicitly learn basic syntactic and semantic knowledge, which enables them to understand, recognize, and generate semantically anomalous sentences. (2) Descriptive accuracy and scientific explanation do not contradict each other, and the former is more important than the latter in linguistic studies. (3) Categorical grammar within the "principles and parameters" paradigm of generative grammar faces insurmountable difficulties in describing human natural language. (4) The study of grammar should prioritize semantics over syntax. (5) The success of LLMs shows that accurate description of linguistic performance is far more fundamental than abstract explanation of linguistic competence.
Yuan Yulin (袁毓林). How versus Why: Reflections on the Two Objectives of Linguistics by Means of ChatGPT [J]. Chinese Journal of Language Policy and Planning (语言战略研究), 2025, 10(1): 62-74.