Abstract

Since its public release at the end of 2022, ChatGPT has attracted worldwide attention, and much research has been conducted on the opportunities and challenges it brings to linguistic studies. At the same time, scholars hold differing views on the role of ChatGPT in linguistics. This paper begins with Norvig’s (2011) argument concerning two competing goals in linguistic studies: descriptive accuracy (of linguistic performance, that is, the how) and scientific explanation (of linguistic competence, that is, the why). Centered on this issue, the paper discusses a series of related questions and reaches the following conclusions: (1) ChatGPT and large language models (LLMs) surpass Markov process models in capturing long-distance dependencies between words in a sentence; they implicitly learn basic syntactic and semantic knowledge, enabling them to understand, recognize, and generate semantically anomalous sentences. (2) Descriptive accuracy and scientific explanation do not contradict each other, and the former is more important than the latter in linguistic studies. (3) Categorical grammar within the “principles and parameters” paradigm of generative grammar faces insurmountable difficulties in describing human natural language. (4) The study of grammar should prioritize semantics over syntax. (5) The success of LLMs shows that descriptive accuracy with respect to linguistic performance is far more basic than the abstract explanation of linguistic competence.
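To make conclusion (1) concrete, the minimal sketch below (illustrative only, not from the paper; the toy corpus and all names are hypothetical) trains a first-order Markov (bigram) model on two sentences in which subject-verb agreement spans an intervening relative clause. Because such a model conditions only on the immediately preceding word, the information needed to choose between "is" and "are" is lost, which is precisely the kind of long-distance dependency that attention-based LLMs can capture.

```python
from collections import defaultdict

# Hypothetical toy corpus: subject-verb agreement ("key ... is" vs.
# "keys ... are") spans an intervening relative clause.
corpus = [
    "the key that the workers lost is missing".split(),
    "the keys that the worker lost are missing".split(),
]

# Train a bigram (first-order Markov) model: counts of P(next | current).
bigram_counts = defaultdict(lambda: defaultdict(int))
for sent in corpus:
    for prev, nxt in zip(sent, sent[1:]):
        bigram_counts[prev][nxt] += 1

# The model sees only the immediately preceding word ("lost"), so it cannot
# recover whether the subject was "key" or "keys": both continuations are
# equally likely, i.e. P(is | lost) = P(are | lost) = 0.5.
print(dict(bigram_counts["lost"]))  # {'is': 1, 'are': 1}
```

A Transformer's self-attention, by contrast, lets the position of the verb attend directly to the subject however many words intervene, which is one reason LLMs can model agreement and other long-distance dependencies that fixed-order Markov models cannot.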