Abstract: Large language models (LLMs) offer new approaches to automatic text simplification. To explore the capabilities of LLMs in simplifying Chinese texts, this study constructed a Chinese passage-level text simplification dataset and analyzed the linguistic features of its parallel text pairs. On this basis, an experiment was designed to assess the automatic text simplification performance of LLMs under four prompting strategies: zero-shot, few-shot, few-shot with a lexicon, and few-shot with rules. The study evaluated six widely used Chinese and international LLMs on Chinese text simplification under these strategies, using a combination of existing evaluation metrics and linguistic feature metrics developed for this study. The findings show that the few-shot strategy performed best on text features and significantly improved information retention. Incorporating an external lexicon into the prompt helped the LLMs choose relatively simpler words, while incorporating simplification rules led them to use more concise syntactic structures. Different LLMs exhibited distinct strengths and limitations in controlling complexity and preserving semantics, but all fell noticeably short of human experts in discourse cohesion, coherence, and paragraph segmentation, and all exhibited hallucination to varying degrees. Future research should focus on constructing larger-scale, high-quality Chinese simplification datasets and on exploring multi-faceted approaches to improving the text simplification capabilities of LLMs.
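To make the four prompting strategies concrete, the sketch below shows one plausible way such prompts could be assembled. It is a minimal illustration only: the instruction wording, few-shot pairs, lexicon entries, and rule texts are hypothetical placeholders, not the study's actual prompts or materials.

```python
# Illustrative sketch of the four prompting strategies named in the abstract:
# zero-shot, few-shot, few-shot with lexicon, and few-shot with rules.
# All templates and data below are placeholders, not the paper's materials.

FEW_SHOT_EXAMPLES = [
    # (complex passage, expert simplification) pairs drawn from a
    # parallel simplification dataset; placeholder strings shown here.
    ("原文示例（复杂段落）……", "简化示例（简化段落）……"),
]

EASY_LEXICON = ["学校", "朋友", "帮助"]  # placeholder "simple word" list

SIMPLIFICATION_RULES = [
    "将长句拆分为短句。",    # split long sentences into short ones
    "用常用词替换低频词。",  # replace rare words with common ones
    "保留原文的主要信息。",  # preserve the main information
]


def build_prompt(passage: str, strategy: str) -> str:
    """Assemble a Chinese text simplification prompt under one strategy."""
    parts = ["请将下面的中文段落改写为更简单易懂的版本。\n"]
    if strategy.startswith("few_shot"):
        # Few-shot variants prepend worked examples before the target passage.
        for src, tgt in FEW_SHOT_EXAMPLES:
            parts.append(f"原文：{src}\n简化：{tgt}\n")
    if strategy == "few_shot_lexicon":
        parts.append("请尽量使用以下简单词表中的词：" + "、".join(EASY_LEXICON) + "\n")
    elif strategy == "few_shot_rules":
        parts.append("请遵循以下简化规则：\n" + "\n".join(SIMPLIFICATION_RULES) + "\n")
    parts.append(f"原文：{passage}\n简化：")
    return "".join(parts)


if __name__ == "__main__":
    for s in ("zero_shot", "few_shot", "few_shot_lexicon", "few_shot_rules"):
        print(f"--- {s} ---")
        print(build_prompt("这里是一段待简化的中文文本。", s))
```

The resulting string would be sent to each LLM under evaluation; the design choice of keeping the strategies as variations of a single template makes the comparison across strategies controlled, since only the injected examples, lexicon, or rules differ.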
Yang Erhong, Zhu Junhui, Zhu Haonan, Zong Xuquan and Yang Lin'er. A Study on the Evaluation of Large Language Models' Capabilities in Chinese Text Simplification. Chinese Journal of Language Policy and Planning (语言战略研究), 2024, 9(5): 34-47.