The concept of crowdsourcing can be understood from three perspectives: the crowdsourcing strategy itself, the methods that implement the strategy, and the cases in which the strategy is applied. The gist of the crowdsourcing strategy is to harness the collective power and wisdom of crowds to solve problems through open calls issued over the Internet. Language resource construction comprises natural language resource construction and extended language resource construction. Within it, crowdsourcing can be used to collect language data, to process language data, to conduct language-related surveys, to fund resource construction, to publicize it, and to cultivate the social forces that sustain it. This paper gives a detailed account of how the crowdsourcing strategy developed and how it can be applied to building language resources, illustrated with Amazon's Mechanical Turk, currently the most mature and widely deployed crowdsourcing platform and one with extensive applications in language resource construction. The paper closes by elaborating the strengths of this approach: it offers users a strategic opportunity to build collaborative digital enterprises and to tap the contributions of diverse audiences through social media and collaborative software, and it grounds our optimism about outsourcing work to the crowd to obtain the services and ideas needed to solve problems.
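To make the Mechanical Turk workflow mentioned above concrete, the sketch below posts a single rating task (a HIT) to MTurk's requester sandbox using the AWS boto3 SDK and notes how responses are later retrieved. It is a minimal illustration under stated assumptions, not the procedure used in the paper: the task wording, reward, and example compound are hypothetical, and only standard boto3 MTurk calls (create_hit, list_assignments_for_hit) are used.

```python
# Minimal sketch: publishing a naturalness-rating HIT to the Mechanical Turk
# *sandbox* via boto3. AWS credentials are read from the standard AWS config;
# the task content below is hypothetical.
import boto3

client = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# An HTMLQuestion wraps an ordinary web form; workers' answers are posted
# back to MTurk when the form submits to the externalSubmit endpoint.
question_xml = """
<HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
  <HTMLContent><![CDATA[
    <!DOCTYPE html>
    <html><body>
      <form action="https://workersandbox.mturk.com/mturk/externalSubmit" method="post">
        <input type="hidden" id="assignmentId" name="assignmentId" value="">
        <p>How natural is the compound below, on a scale of 1 (least) to 7 (most)?</p>
        <p><b>电脑桌</b></p>
        <input type="number" name="rating" min="1" max="7" required>
        <input type="submit" value="Submit">
      </form>
      <script>
        // MTurk passes the assignment id in the URL; copy it into the form.
        document.getElementById("assignmentId").value =
          new URLSearchParams(window.location.search).get("assignmentId");
      </script>
    </body></html>
  ]]></HTMLContent>
  <FrameHeight>400</FrameHeight>
</HTMLQuestion>
"""

response = client.create_hit(
    Title="Rate the naturalness of a Chinese compound (example task)",
    Description="Judge one Chinese nominal compound on a 7-point scale.",
    Keywords="linguistics, Chinese, rating",
    Reward="0.05",                    # USD per assignment
    MaxAssignments=20,                # number of independent judgments collected
    LifetimeInSeconds=3 * 24 * 3600,  # HIT visible for three days
    AssignmentDurationInSeconds=300,  # five minutes per worker
    Question=question_xml,
)
print("HIT created:", response["HIT"]["HITId"])

# Submitted judgments can later be fetched for aggregation, e.g.:
# client.list_assignments_for_hit(HITId=..., AssignmentStatuses=["Submitted"])
```

In a real study, quality control matters as much as the posting itself: qualification requirements, attention checks, and per-worker limits are typically layered on top of a basic HIT like this one.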
Huang, Chu-Ren and Shichang Wang. 2016. The Application of Crowdsourcing Strategy in Language Resource Construction. Chinese Journal of Language Policy and Planning (语言战略研究) 1(6), 36-46.