Machine translation (MT) is one of the major research fields of natural language processing (NLP) and has always spearheaded the research frontier in NLP. In this paper, after a systematic survey of the development history of MT from a macroscopic perspective, with particular emphasis on the main development path of its underlying methodologies and core technologies, we draw a general picture of the milestones that have marked the key points of a seven-decade journey in both theoretical study and practical accomplishment. The latest fruitful developments in MT applications show that the paradigm shift from traditional linguistic rule-based approaches to so-called empirical approaches, based on increasingly available amounts of “raw data” in the form of massive collections of texts and their translations, and compounded by the phenomenal advancement of computer technology, will become the driving force that may lead to a breakthrough in MT. Based on these observations and analyses, some suggestions on the short-term development strategy for machine translation, as well as natural language processing in China, are proposed.
Over the past 60 years, research on Chinese language processing has made great achievements. With the rapid development and popularization of the Internet and communication technology, Chinese language processing technology has attracted worldwide attention in recent years. This article summarizes the achievements of Chinese language processing and analyzes the present status of the technology in this field, particularly the problems that the field may face in terms of development. The author argues that it is still difficult for artificial intelligence to “understand” rather than merely “process” naturally produced Chinese, for the following three reasons: (1) current information processing technology is inadequate for processing grammatically complex Chinese sentences; (2) there are unsolved problems in machine learning technologies; and (3) our understanding of how the human brain processes language is still very limited. This paper concludes that, for artificial intelligence to understand naturally produced Chinese, we need a better understanding of how the Chinese language is decoded in the human brain and must build a computational model that specifically targets the Chinese language.
In recent years there has been an enormous boom in Computational Intelligence in Information Systems. This paper attempts to provide rich information and professional observations about the recent progress made in adapting Chinese language processing and the computing industry to the new challenges arising from the rapid advancement of the Internet as well as the worldwide proliferation of mobile devices and social media. In the process of language digitization, search engines and machine translation are the two major areas pertinent to large-scale industrialization. By tracing the development trajectories of these two areas as exemplar cases, we attempt to demonstrate how language digitization, as a technology and an industry, deals with a range of new challenges involving intelligent applications and big data, such as business intelligence, social analytics, data/text mining, machine learning, text summarization and information retrieval. In conclusion, we are optimistic about the future of these fields in achieving even better quality, based on a paradigm shift away from linguistic/rule-based methods towards empirical/data-driven methods, which has been made possible by the availability of large amounts of training data and large computational resources.
With social and technological developments, the contents and means of human communication have undergone tremendous changes, which, in turn, lead to the evolution of word forms and their meanings in human language. In the literature, much scholarship has been devoted to the semantic dynamics of words from the perspective of usage frequency, yet this frequency-based method cannot clearly explain lexical-semantic change because it fails to cover word senses. In this paper, a large-scale Chinese newspaper text corpus is employed, and distributed representations of selected words and their senses are derived in order to observe the diachronic evolution of word semantics. The semantic changes of these words along the timeline suggest that the distributional method proposed in this paper is effective for exploring lexical-semantic dynamics. The implication of this study is that the corpus-based distributional method can become a useful tool for studies in other fields, such as language evolution, sociolinguistics and language planning.
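To make the general approach concrete, the following is a minimal sketch, not the authors' actual pipeline, of how a corpus-based distributional method can quantify semantic change across time slices. It assumes pre-tokenized Chinese sentences grouped by period, uses gensim's Word2Vec as a stand-in embedding model, and compares a word's similarity profile against a shared set of anchor words; all names (train_slice, semantic_shift, corpus_1990s, corpus_2010s, anchor_words, target_word) are hypothetical.

```python
# Minimal sketch (hypothetical names and data): measuring diachronic semantic
# change with per-period distributional representations. gensim's Word2Vec is
# a stand-in for whichever embedding method a given study actually applies.
from gensim.models import Word2Vec
import numpy as np

def train_slice(sentences, dim=100):
    """Train one embedding space on pre-tokenized sentences of a single time slice."""
    return Word2Vec(sentences, vector_size=dim, window=5, min_count=5, sg=1)

def semantic_shift(model_t1, model_t2, word, anchors):
    """Second-order comparison: represent `word` in each period by its cosine
    similarity to a shared list of anchor words, then return 1 - cosine between
    the two profiles. Larger values suggest greater semantic change; profiles
    are comparable without aligning the two independently trained spaces."""
    shared = [a for a in anchors if a in model_t1.wv and a in model_t2.wv]
    if word not in model_t1.wv or word not in model_t2.wv or not shared:
        return None
    p1 = np.array([model_t1.wv.similarity(word, a) for a in shared])
    p2 = np.array([model_t2.wv.similarity(word, a) for a in shared])
    return 1.0 - float(np.dot(p1, p2) / (np.linalg.norm(p1) * np.linalg.norm(p2)))

# Usage (hypothetical corpora: lists of token lists per sentence):
# model_1990s = train_slice(corpus_1990s)
# model_2010s = train_slice(corpus_2010s)
# print(semantic_shift(model_1990s, model_2010s, target_word, anchor_words))
```

Comparing second-order similarity profiles sidesteps the need to align independently trained vector spaces; alternatives such as orthogonal Procrustes alignment of the embedding matrices would serve the same purpose.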
As a fundamental feature that distinguishes human beings from other species, language capacity is not only a principal theoretical issue for linguistics and other relevant disciplines, but also a significant practical concern for national and social development. Based on a thorough examination of the fundamental role of language capacity in human cognition and social development, this article reviews the latest developments in research on language capacity in the international literature, and argues for the urgent need to conduct scientific research on language capacity from a holistic perspective. We also summarize the awareness among international communities of the great demand for language capacity and the practical undertakings by different countries to improve it. Based on the above, we offer the following suggestions: in order to meet the demands of national and social development, we should actively organize collaborative studies on language capacity, implement practical programs aimed at enhancing language capacity, and push the state to make and enforce significant policy decisions.