One of the major areas within Digital Humanities is the analysis of ancient texts. The introduction of computers into this process has allowed humanist researchers to do more, and to perform more complex analyses of these texts, generating ever-increasing amounts of information about them. At this point, however, two major challenges remain:
- How to communicate all this information and make it publicly available to other researchers
- How to visualize the enormous amounts of generated information effectively (visualization for humans)
To begin with, one of the abstracts presented at this year’s conference focused on preparing workshops to discuss “phylogenetic analysis of textual variation”, categorization, computer simulation of textual transmission, and similar topics (the “Digital New Testament” abstract). The Bible is also presented as a good place to test many of these techniques: its sheer volume and historical reach allow researchers to assert with reasonable confidence that a technique that works on it will also work well on other bodies of ancient text.
The use of the Bible as a testbed also appears in another abstract from this year’s conference, concerning a visualization technique called “A Distant Reading Visualization for Variant Graphs”. The technique visualizes how a text varies from version to version: colored lines run through the phrase, and the most common variant is rendered in a larger font, closer to the vertical center of the alignment. Again, the Bible is a good candidate for this type of analysis because of the enormous number of translations and versions of the text.
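To make the idea concrete, here is a minimal sketch (not the authors’ actual implementation) of the data behind such a variant graph. It assumes a simplified word-for-word alignment across versions of equal length; each word position holds variant counts, and the most common reading at each position would be the one drawn largest and nearest the vertical center:

```python
from collections import Counter

def build_variant_graph(versions):
    """Align word-for-word across versions (assuming equal length,
    for simplicity) and count the variants at each position."""
    tokenized = [v.split() for v in versions]
    positions = []
    for variants in zip(*tokenized):
        counts = Counter(variants)
        # Most common variant first: it would get the largest font
        # and sit nearest the vertical center of the alignment.
        positions.append(counts.most_common())
    return positions

versions = [
    "in the beginning was the word",
    "in the beginning was the logos",
    "in a beginning was the word",
]
for position in build_variant_graph(versions):
    print(position)
```

A real variant graph would of course need proper alignment (versions rarely match word-for-word), which is exactly where phylogenetic and alignment techniques come in.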
The main problem is that the simple graph visualization described above becomes unreadable for long bodies of text. This illustrates one of the field’s open challenges: how to adequately visualize the enormous amount of information these comparisons generate.
The work of the team behind the distant reading visualization focuses on fixing this problem and finding a way to display the results meaningfully for a human researcher, extending their approach to suit varying levels of granularity: from sentences and verses up to chapters, books, and even the entire Bible.
Another interesting feature the team presented is the ability to adjust the visualization’s highlights with sliders, showing only selected levels of variation between texts. This lets a researcher zero in on the “controversial” passages with very high variance between translations and versions, or simply get a general idea of which books have remained most consistent across time and translations.
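A rough sketch of the filtering idea, with a hypothetical per-position variance score (the fraction of versions disagreeing with the majority reading) and the slider acting as a minimum threshold:

```python
def variance_score(variants):
    """variants: mapping of reading -> count across versions.
    Returns the fraction of versions disagreeing with the majority."""
    total = sum(variants.values())
    return 1 - max(variants.values()) / total

def controversial(positions, threshold):
    """Keep only positions whose variance meets the slider threshold."""
    return [i for i, v in enumerate(positions) if variance_score(v) >= threshold]

# Hypothetical per-word variant counts across 10 versions of a verse.
positions = [
    {"in": 10},                            # no variation
    {"the": 7, "a": 3},                    # some variation
    {"word": 4, "logos": 3, "verbum": 3},  # high variation
]
print(controversial(positions, 0.5))  # prints [2]
```

Sliding the threshold down would reveal progressively less controversial passages; at zero, the full variant graph is shown.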
Concerning the final piece of the technological integration puzzle, it is worth mentioning the ARTFL project’s work on the PhiloLogic API. Their creation of a public API for consumption by an Android client is an example of a step in the right direction. They mention that their API endpoints are not yet completely RESTful and that the main work ahead is to make the API more generic and easier to use. The system can be queried for words, can look up frequencies, and can analyze links to other relevant texts.
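As a sketch of what consuming such a service looks like, here is a hypothetical word-frequency query and response parser. The endpoint URL, parameter names, and JSON shape are all invented for illustration; they are not the actual PhiloLogic API, whose endpoints are still evolving:

```python
import json
from urllib.parse import urlencode

# Hypothetical base URL, purely for illustration.
BASE_URL = "https://example.org/texts/query"

def frequency_query_url(word, corpus):
    """Build a word-frequency query URL for the hypothetical endpoint."""
    params = {"word": word, "corpus": corpus, "report": "frequency"}
    return BASE_URL + "?" + urlencode(params)

def parse_frequency_response(payload):
    """Extract (text, count) pairs from a hypothetical JSON response."""
    data = json.loads(payload)
    return [(hit["text"], hit["count"]) for hit in data["results"]]

url = frequency_query_url("candide", "voltaire")
sample = '{"results": [{"text": "Candide, ch. 1", "count": 42}]}'
print(parse_frequency_response(sample))
```

The point is that once the query and response shapes are stable and documented, any client (Android app, visualization tool, another researcher’s script) can consume the same service.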
I believe this is a central part of building and publishing these new visualization methods. I would really like to see work done on the following fronts:
- Building a standard format for digital humanities APIs covering texts, references, and related data (e.g., standardized JSON object formats)
- When developing new visualization or analysis tools, considering the option of accepting input from these APIs. Any new tool could then work as an extension of these bodies of processed text, making its deployment and adoption much faster.
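To illustrate the first point, here is one possible shape such a standardized passage object could take. The field names and the identifier scheme are hypothetical, not an existing specification:

```python
import json

# A hypothetical standardized object for a passage and its variants,
# one possible shape for a shared DH interchange format.
passage = {
    "id": "bible/john/1/1",
    "text": "In the beginning was the Word",
    "language": "en",
    "variants": [
        {"source": "KJV", "text": "In the beginning was the Word"},
        {"source": "Vulgate", "text": "In principio erat Verbum", "language": "la"},
    ],
    "references": ["bible/genesis/1/1"],
}

# Any tool that accepts this shared format could consume the output of
# any compliant API, which is what makes new visualizations portable.
payload = json.dumps(passage, indent=2)
restored = json.loads(payload)
print(restored["variants"][1]["source"])
```

The value is not in these particular fields but in the agreement itself: a common format turns every processed corpus into a potential input for every new tool.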