Abstract: With the great success of ChatGPT and other large language models (LLMs) in practical applications, a heated debate has arisen regarding whether the language faculty is unique to human beings. Two contrasting perspectives have emerged within the international academic community. One perspective argues that LLMs have achieved human-level proficiency in language understanding and production, thereby challenging Chomsky's linguistic theories and even potentially replacing the theoretical framework of Generative Grammar. The opposing perspective argues that while humans acquire language despite the "poverty of stimulus", demonstrating a remarkable generative capacity, LLMs "learn" language by leveraging massive data input. Therefore, LLMs fundamentally differ from the human language faculty in their core attributes and cannot adequately explain the essential nature of human language. Empirical studies have also criticized the tendency to overstate the significance of LLMs for linguistic theory. This paper argues that discussion of this question should begin by addressing the following key issues: (1) the differentiation between scientific theory formulation and engineering applications; (2) principled predictions and explanations regarding the distinction between "possible languages" and "impossible languages"; (3) the underlying factors accounting for the contrast between natural language acquisition under the "poverty of stimulus" and LLMs' reliance on massive data input; and (4) multi-dimensional and systematic comparative evaluations of the role of syntax in human language versus LLMs.
Shi Zhong, Tian Yinghui and Si Fuzhen. ChatGPT's Linguistic Competence: Debates and Reflections in the International Academic Community [J]. Chinese Journal of Language Policy and Planning (语言战略研究), 2025, 10(1): 75-86.