Take a moment and imagine being seated in a luxurious town house, contemplating in a single glance the most brilliant and cultivated minds of your time, writers such as Corneille, Malherbe or La Rochefoucauld. You are in a Parisian “salon littéraire” in the seventeenth century, experiencing an early form of crowdsourcing, where contemporary literature was discussed in a codified and formal way, giving birth to an era that still lives on in today’s literary cafés. At this point, a shrewd reader may ask why we are letting ourselves drift through the tides of time so early in this article. It is simply to illustrate that, though not immediately obvious, the relation between the two terms crowdsourcing and literature is an old and durable one, which leads us to our main discussion and its underlying question. One might also ask why we use the appealing analogy to Bryan Singer’s famous masterpiece; it is simply a vivid way of introducing the following argument: is our elderly duo going to fit neatly into its usual-suspect profile in the Digital Humanities, or is it going to shape the future of the field like a surprising mastermind? We will examine three conference abstracts and attempt to extract from them a trend in DH.
Our first abstract is What do you do with a million readers? by Roja Bandari, Timothy Roland Tangherlini and Vwani Roychowdhury. An interesting question indeed, as it is a complex problem in many respects. The article places us in a literary context dominated by digitization and large-scale reading and, more importantly, by abundant and easily accessible reader commentary provided by electronic venues. In this context, the goal of the project was to collect and compile a large number of user reviews. One might say at this point that the experiment is just an internet “salon littéraire”, a scaled-up version of the original. But let us not jump to premature conclusions, and instead go deeper into this reader-response data-mining experiment. The “digital miners” chose a corpus of sixteen highly rated and heavily commented science fiction novels with particular properties, mainly a wide range of narrative content and characters. The goal was then to harvest user reviews according to their completeness and accuracy, and to apply iterative computations to the collection in order to extract increasingly complex data about each novel. This mining ranges from identifying the main dramatis personae to deriving an entity-based graph depicting pairwise relations between entities and their intensity. The miners designed and used standardized procedures so that the process could be applied to different sources. As far as abstraction goes, this experiment pushes the literary café experience one step further by adding a big-data component to the mix, but the core purpose and the information extracted stay the same. Such an experiment, however, lets us glimpse powerful ideas for the future. What if, based on such computations over user reviews, one could rewrite the “perfect story” according to reader feedback? Or, more realistically, consult a consensus of multidisciplinary experts to model and link data together and obtain the big picture?
Just imagine applying such ideas to the Venice Time Machine, linking the historians’, linguists’, architects’ and other specialists’ knowledge together in a structured way…
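The entity-graph idea described above can be sketched very simply: count how often pairs of characters are mentioned together in the same review and use those counts as edge weights. Everything here is illustrative, the character names and reviews are invented, and the actual pipeline in the abstract is far more elaborate (name resolution, iterative refinement passes).

```python
# Minimal sketch of pairwise-relation extraction from reader reviews.
# Names and reviews are hypothetical placeholders.
from collections import Counter
from itertools import combinations

characters = ["Paul", "Jessica", "Leto", "Duncan"]  # assumed dramatis personae

reviews = [
    "Paul and Jessica flee into the desert while Leto stays behind.",
    "Duncan protects Paul; Jessica trains him in the old ways.",
    "Leto trusts Duncan more than anyone.",
]

# Edge weight = number of reviews mentioning both characters.
edges = Counter()
for review in reviews:
    present = [c for c in characters if c in review]
    for a, b in combinations(sorted(present), 2):
        edges[(a, b)] += 1

for (a, b), weight in sorted(edges.items()):
    print(f"{a} -- {b}: {weight}")
```

Heavier co-occurrence suggests a more intense pairwise relation; a real system would of course normalize for review length and handle aliases and pronouns.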
The Anatomy of a Drop-Off Reading Curve by Cyril Bornet and Frédéric Kaplan is a reader-oriented crowdsourcing experiment on a precise corpus: La simulation humaine by Daniel de Roulet, a fairly large aggregation of chapters offering the reader different paths and subsequent stories within one big picture. The question asked by the two authors is really simple: when do readers stop reading a book? By applying standard web analytics and knowledge about reading speed, one can derive the drop-off rate for each chapter read. This yields some trivial information, such as a large drop-off on the first chapter, but also more intricate knowledge about the chapters’ structure, content and flow. A two-sided classification is proposed for the reading regime of each chapter: immersion (small drop-off) and critical (dropping and skimming) mode. This classification was then used to predict the drop-off rate of a given chapter with trained machine-learning models. Though corpus-dependent, useful insights can be extracted: for example, using shorter chapters during critical mode helps reduce the drop-off rate. Such experimentation is heavily based on the reader as the subject, more than on the content itself. It reveals intimate relations between reader and work that would be out of reach without digitization, and thereby makes crowdsourcing and literature a whole new association, focused on the reader’s end.
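The drop-off curve itself is easy to picture: from the number of readers reaching each chapter, compute the fraction lost at every transition, then label each transition by regime. The figures and the threshold below are invented; the real study derives reader counts from web analytics and estimated reading speed, and uses trained models rather than a fixed cut-off.

```python
# Sketch of a drop-off reading curve (all numbers are hypothetical).
readers_per_chapter = [1000, 620, 590, 565, 430, 415]

# Drop-off rate for each chapter transition.
dropoff_rates = [
    (before - after) / before
    for before, after in zip(readers_per_chapter, readers_per_chapter[1:])
]

# Crude two-mode labelling in the spirit of the abstract's
# immersion (low drop-off) vs. critical (high drop-off) regimes.
THRESHOLD = 0.10  # assumed cut-off, for illustration only
modes = ["critical" if r > THRESHOLD else "immersion" for r in dropoff_rates]

for i, (rate, mode) in enumerate(zip(dropoff_rates, modes), start=1):
    print(f"chapter {i} -> {i + 1}: drop-off {rate:.0%}, {mode} mode")
```

With such labels in hand, one could test editorial hypotheses like the one above, e.g. whether shortening chapters in critical stretches lowers the drop-off.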
Finally, Mapping the Emotions of London in Fiction, 1700-1900: A Crowdsourcing Experiment takes us on an emotional time-travel experience. To fully grasp the experiment, we must introduce Digital Literary Geography: though quite modern in its technology, this field has old precursors, going back to the beginning of the twentieth century, when maps of fictional settings were drawn to distinguish “gravity centers” from “unwritten regions”. DLG can also wear many faces, linking geopolitics, geoeconomics, geosocial networking, geohistory and so on to the text, offering an opportunity to strip down the mechanisms and machinery of literature, as such metrics are the anchors tethering the story to a powerful and meaningful context for the reader. As for the experiment itself, two methods are proposed. The first is to go through a corpus of texts and link place-names to actual geographic areas or discrete places, in order to estimate the likelihood that fiction at a given time mentions a certain place. Applied to two centuries of fiction, this yields a gravity center of fictional attention located in the City of London itself and its West End, one that remains stable over time. This confirms the idea that geosocial facts (such as population changes, in that example) give strong meaning to place-names, transforming them into conceptualized and substantial entities. To take the idea further, a second experiment was carried out, linking reader-reported dichotomous emotions (fear or happiness) to certain places. The result was a splitting of London’s geography and topology: prisons, hills, pre-modern buildings and places in the City associated with fear, versus parks, churches, squares, theatres, modern buildings and places in the West End associated with happiness. Once again, a place and its underlying social and architectural properties prove to be a powerful mechanism of literature, driving the reader’s emotion.
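The two methods can be sketched side by side: first count how often each place-name occurs across passages, then tally crowdsourced emotion labels per place. Passages, places and labels below are all invented for illustration; the actual experiment works on a gazetteer-matched corpus and thousands of reader annotations.

```python
# Sketch of place-name frequency and emotion tallying (toy data).
from collections import Counter, defaultdict

passages = [
    "He was taken to Newgate at dawn.",
    "They strolled through Hyde Park in the sunshine.",
    "Newgate loomed over the street.",
]
places = ["Newgate", "Hyde Park"]  # assumed gazetteer entries

# Method 1: likelihood that a passage mentions a given place.
mentions = Counter()
for passage in passages:
    for place in places:
        if place in passage:
            mentions[place] += 1
likelihood = {place: mentions[place] / len(passages) for place in places}

# Method 2: crowdsourced fear/happiness labels, tallied per place.
annotations = [("Newgate", "fear"), ("Newgate", "fear"), ("Hyde Park", "happiness")]
emotions = defaultdict(Counter)
for place, label in annotations:
    emotions[place][label] += 1

for place in places:
    dominant = emotions[place].most_common(1)[0][0]
    print(f"{place}: in {likelihood[place]:.0%} of passages, dominant emotion: {dominant}")
```

Aggregating such per-place tallies over a whole corpus is what produces the fear/happiness split of London’s map described above.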
These experiments can be synthesized into a single causal chain: geographic facts (social, economic, architectural) included in the story cause reader emotion, and therefore association. Once again, such an abstract shows us a new dynamic in the crowdsourcing-and-literature couple.
In conclusion, we will unmask our not-so-usual suspect by linking it to a trend, both induced by and acting on the Digital Humanities field. Through the analysis of the three abstracts, an idea that would certainly have been considered harebrained if submitted during one of the Duchesse de Rambouillet’s “salons littéraires” begins to impose itself. Though still central, the literary work is surreptitiously and progressively eclipsed by its relation with its reader, and the limit between the art and its public is blurred nearly to the point of fusion. Somewhat humorously, we could say that, contrary to expectations, the digitization of literature is in some way humanizing its perception, by incorporating living, fresh and volatile human experiences between the encoded lines of a frozen tale. The impact of the story on the reader is becoming the centerpiece of a new trend in the field. Finally, one could say that the Digital Humanities are now turning the virtual pages of literature and, in that gesture, opening new possibilities, far beyond the dry ink on a frail page.
 What do you do with a million readers ? by Roja Bandari, Timothy Roland Tangherlini and Vwani Roychowdhury. DH2015 Conference Abstract.
 Anatomy of a Drop-Off Reading Curve by Cyril Bornet and Frédéric Kaplan. DH2015 Conference Abstract.
La simulation humaine by Daniel de Roulet.
Mapping the Emotions of London in Fiction, 1700-1900: A Crowdsourcing Experiment by Ryan Heuser, Mark Algee-Hewitt, Van Tran, Annalise Lockhart and Erik Steiner, Stanford University.