In 2006, Jeff Howe coined the term “crowdsourcing” to describe the practice of using the internet to outsource work to a large, undefined group of people. Although the term is relatively new, the concept has existed for a long time; one of the earlier examples is the Oxford English Dictionary, for which the public was called on to submit words and usage examples.

The task of compiling, categorising, and analysing vast quantities of information is an arduous endeavour for any researcher, and Digital Humanities is no exception. Crowdsourcing provides an excellent method for delegating large quantities of work that may not necessarily require the expertise of a professional, freeing up those managing the project for both broader and more specialised work. The trend towards using crowdsourcing as a tool in digital humanities can be seen in the following three papers from Digital Humanities 2013.

Incidental Crowdsourcing: Crowdsourcing in the Periphery

Peter Organisciak presents the concept of “incidental crowdsourcing” (IC), which he defines as follows: “Incidental crowdsourcing is the gathering of contributions from online groups in an unobtrusive and non-critical way.” The paper is divided into two main sections: the first an analysis of the pros and cons of IC from the perspectives of both the system and the user, and the second a study of the differences in user engagement between IC and non-IC systems.

Returning to the definition, “unobtrusive” refers to the fact that IC must not hinder the user’s ability to complete their task, while “non-critical” means that user contributions are not compulsory and simply add value to the system, a point emphasised multiple times throughout the paper. The author provides the following table of common examples of IC:

[Table: Common forms of incidental crowdsourcing and examples]

In the comparative study, the author compared the app-rating systems of Google Play and the Amazon Appstore, representing IC and non-IC systems respectively. Google Play users were found to vote more heavily at the higher end of the scale (4 out of 5 and above). The author suggests that this “distinct pivot” could be used to adjust the results obtained from Google Play to bring them more in line with those from Amazon, while maintaining the IC approach.
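
The abstract does not spell out how such an adjustment would be made. As a purely illustrative sketch, one could imagine something like quantile mapping between the two rating distributions; the data and function below are invented for the example and are not taken from the paper:

```python
import numpy as np

def quantile_map(ic_ratings, reference_ratings):
    """Map each IC rating to the value at the same quantile of a reference distribution."""
    ic = np.asarray(ic_ratings, dtype=float)
    ref = np.sort(np.asarray(reference_ratings, dtype=float))
    # Rank each IC rating within its own distribution (values between 0 and 1)...
    quantiles = (np.argsort(np.argsort(ic)) + 0.5) / len(ic)
    # ...then read off the value at the same quantile of the reference distribution.
    idx = np.clip((quantiles * len(ref)).astype(int), 0, len(ref) - 1)
    return ref[idx]

ic_sample = [5, 5, 4, 5, 4, 5, 3, 5, 4, 5]      # skews high, as in an IC rating system
non_ic_sample = [5, 4, 3, 2, 4, 1, 3, 5, 2, 4]  # more spread out, non-IC style
print(quantile_map(ic_sample, non_ic_sample))
```

This is only one way the “distinct pivot” could be exploited; the paper itself does not commit to a particular correction method.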

Interfaces for Crowdsourcing Interpretation

This paper focuses on crowdsourcing interpretation, the application of crowdsourcing to the interpretation of texts. The author discusses the tool Prism, which has users categorise portions of a text into predetermined categories. This data is then used to create a visualisation of the trends in users’ responses to the text. However, it is noted that “Prism is not a device for rich, individual exegesis”, and the author discusses ideas for the tool’s future development.
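
To make the mechanics a little more concrete, here is a minimal sketch of how per-word category counts might be tallied across users before visualisation. This is not Prism’s actual data model or code, just an assumed shape for such markings:

```python
from collections import Counter, defaultdict

text = ["It", "was", "the", "best", "of", "times"]

# Each user's marking: word index -> chosen category (from a fixed, predetermined set).
user_markings = [
    {0: "rhetoric", 3: "sentiment", 5: "sentiment"},
    {3: "sentiment", 5: "rhetoric"},
    {0: "rhetoric", 3: "sentiment"},
]

# Tally how often each word received each category across all users.
totals = defaultdict(Counter)
for marking in user_markings:
    for index, category in marking.items():
        totals[index][category] += 1

for index, counts in sorted(totals.items()):
    print(text[index], dict(counts))
```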

The author discusses earlier research (Owens 2012) that divides crowdsourcing into two categories: “human computation” and the “wisdom of crowds.” Human computation involves having users perform tasks that are computationally expensive (transcription is given as an example), while the “wisdom of crowds” approach is not necessarily limited to simple processing and has users engage in “open-ended socially-negotiated tasks”, an example of which is Wikipedia. It is suggested that both approaches are applicable to Digital Humanities, and the author goes on to suggest an additional area of interest, the “Wisdom of the Individual”, which involves preserving participants’ information as part of their contributions, allowing further analysis such as breaking contributions down by demographic.
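
As a rough illustration of that last idea, the sketch below assumes contributions are stored alongside contributor metadata so they can be broken down by group afterwards; the field names and values are invented, not taken from the paper:

```python
from collections import Counter, defaultdict

# Hypothetical contributions, each keeping information about its contributor.
contributions = [
    {"category": "sentiment", "contributor": {"role": "student"}},
    {"category": "rhetoric",  "contributor": {"role": "faculty"}},
    {"category": "sentiment", "contributor": {"role": "student"}},
]

# Break contributions down by contributor group ("wisdom of the individual" style analysis).
by_role = defaultdict(Counter)
for c in contributions:
    by_role[c["contributor"]["role"]][c["category"]] += 1

print({role: dict(counts) for role, counts in by_role.items()})
```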

Text Theory, Digital Document, and the Practice of Digital Editions

This abstract details a panel to be held on the topic of crowdsourcing and the use of digital tools in transcribing and annotating texts. The main topic is how well digital texts produced through these methods measure up to theoretical standards, and how this should be assessed. The current practice of transforming a physical book into an essentially identical digital version is contrasted with text theory in other digital applications, where information is more fragmented and fluid, adapting to different contexts and devices. The panel asks whether, given this disparity, a shift toward the patterns found in other areas of digital information representation would prove beneficial, or whether it is better to stay in line with current practice.

Conclusion

As can be seen from the first two abstracts, there is definite interest in the application of crowdsourcing in digital humanities, especially in the processing (transcription, interpretation) of texts. In addition, the third abstract can be seen as an effort to maintain quality standards and to ensure that the new methods and technologies being adopted (in this case crowdsourcing) are in fact useful and appropriate ways to tackle problems in the field.

References:

[1] Incidental Crowdsourcing: Crowdsourcing in the Periphery http://dh2013.unl.edu/abstracts/ab-273.html

[2] Interfaces for Crowdsourcing Interpretation http://dh2013.unl.edu/abstracts/ab-294.html

[3] Text Theory, Digital Document, and the Practice of Digital Editions http://dh2013.unl.edu/abstracts/ab-169.html

[4] Crowdsourcing http://en.wikipedia.org/wiki/Crowdsourcing

[5] The Praxis Program (Prism) http://praxis.scholarslab.org/
