Project Description

Plague epidemics were severe during the Middle Ages, and Venice, as a major city, was hit badly. This project aims to visualize the propagation of the disease through the town, as well as the major changes the Venetian administration made in order to handle the epidemics. The data will be displayed using maps and graphs for population and disease statistics, while relevant documents such as drawings, texts and manuscripts will be integrated into the interface. A dynamic timeline will allow the user to follow the spatiotemporal evolution of Venice, changes in religious beliefs and even the sanitary precautions taken by the city.


[Trend] The process of education with the Digital Humanities

The Digital Humanities are, and have long been, present in teaching, learning and research, providing tools that support the entire educational process.

When we talk about social and moral education with a focus on gender, the topic is usually handled as if gender were polarized into only two options: boy or girl, male or female. As a practical goal, the Digital Humanities (DH) seek to integrate the many dimensions of gender throughout teaching and research.

Over the years, science has observed hermaphroditism and other variations of sex in plants and animals; if such diversity is found in the nature that surrounds us, why can it not be accepted just as naturally in the human race, leaving behind the binary stigma that is currently applied?

One goal of DH is therefore to understand the diversity of gender; it uses technology-based media to open closed mentalities and to help society incorporate and accept something natural: the variety of gender. It is important to note that the Digital Humanities, in their quest for integration, encourage a social and moral conscience that allows each individual to take part in society free of stigma.

What, then, is the contribution of DH to Open Access and online teaching materials? The Digital Humanities can help ensure that information reaches everyone openly, taking advantage of technological means to remove linguistic, economic, social and cultural barriers to access. This benefits the quality of teaching and learning, since modern media promote creative activities and the distribution and manipulation of information and knowledge, leading to meaningful learning in all areas.

What is the contribution of DH to access to information, and what happens when there are no frontiers to education? This question refers to the impact of migration on teaching and learning. Many institutions open their doors to the world by offering programs in a universal language; for the student, the drive for self-improvement leads them to seek education in countries with higher academic levels, and since they adopt the customs of the place they migrated to, they often prefer to stay or to seek countries with better employment opportunities, resulting in a brain drain. However, some developing countries have government programs that train citizens who are then able to contribute to the development of their own country.

In each article, the Digital Humanities pursue larger goals in order to solve problems that involve humanity: a moral goal, to reach a fuller understanding of the true nature of the human race with respect to gender diversity; a social goal, that each individual has access to information; and a technological goal, to use technological means to teach regardless of distance (with virtual education, an academic institution may no longer need to expand its physical infrastructure, only its virtual space for online learning), employing engaging resources to achieve meaningful learning. But in pursuing these goals there may also be losses, such as the alienation of cultures and, to some extent, the loss of a country's identity and knowledge.

The three articles take different approaches, but their common denominator is the intervention of the Digital Humanities to spread education universally, using technological media that promote the dissemination of information for the benefit of an individual, country or nation. They also share a search for technological, social and cultural unification that removes the economic, linguistic and gender barriers that have so far prevented the diffusion of teaching as a common good of humanity, with DH continually researching new ways of teaching and learning.

Today we speak of new teaching skills, personal, social and professional, that enable teachers to adapt to the world of online education and to use digital tools for the benefit of humanity. The Digital Humanities accompany the great change that technology has brought to humanity and seek a leadership role in promoting the human values that must be upheld in the pursuit of progress.


This post considers the following articles:

“Against the Binary of Gender: A Case for Considering the Many Dimensions of Gender in Digital Humanities Teaching and Research”. http://dh2013.unl.edu/abstracts/ab-154.html

“Should the Digital Humanities be taking a lead in Open Access and Online Teaching Materials?”, http://dh2013.unl.edu/abstracts/ab-283.html

“Academic Migrants: A Digital Discussion of Transnational Teaching and Learning”, http://dh2013.unl.edu/abstracts/ab-122.html.


Benefits of technology to various branches of research

As technology improves day by day, it becomes indispensable to our lives and affects them in every respect. Cities are full of cars, nearly every home has at least one computer, and digital devices are everywhere. But how did technology enter our lives, and what drove it? In my opinion, the greatest value of technology, the thing that makes it important, is that it makes life easier. In the past, someone who wished to see or speak with distant relatives had to travel to them; now, with the many varieties of chat and video calls, one can contact relatives and friends easily. There are many such examples of technology helping everyday life. One of its most important effects, however, is on researchers. In this post I will try to describe the benefits of technology to researchers, using three papers as real-life examples.

The Atlanta Map Project:

This paper describes the Atlanta Map Project. The project aims to describe Atlanta around 1928, recording street names, residents' names, their race, their health information and more. To keep all this data, the project's developers are using two kinds of tools: TEI and GIS. The TEI side supports text-based searches and similar functions: there is a database holding names and street locations taken from the city directory, and records from a funeral home will also be encoded and inserted into the database (this has not been done yet; the project is still in development). The benefit of the database is that users can ask any query they want and get answers quickly. The GIS side is a little different and looks more like Google Maps: the data are encoded in different layers on geographic maps, so GIS users have a geodatabase holding information about specific places, with functions broadly similar to Google Maps. All in all, the Atlanta Map Project is something like Google Maps, but it will contain far more information, even racial and health information about the city, which makes it much easier to research the characteristics of Atlanta in the 1930s.
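The abstract does not describe the project's database schema, so as a rough illustration of the kind of query such a city-directory database could answer, here is a minimal sketch in Python using SQLite; every table name, column and record below is hypothetical.

    import sqlite3

    # Hypothetical schema: the real Atlanta Map Project database is not described
    # in the abstract; this only illustrates the kind of query a researcher might run.
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE residents (name TEXT, street TEXT, race TEXT, occupation TEXT, year INTEGER)"
    )
    conn.executemany(
        "INSERT INTO residents VALUES (?, ?, ?, ?, ?)",
        [
            ("John Doe", "Auburn Ave", "Black", "barber", 1928),
            ("Mary Smith", "Peachtree St", "White", "clerk", 1928),
        ],
    )

    # Example research question: who lived on a given street in 1928?
    for row in conn.execute(
        "SELECT name, occupation FROM residents WHERE street = ? AND year = ?",
        ("Auburn Ave", 1928),
    ):
        print(row)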

Coding Media History:

This paper is about three ongoing, interrelated projects: the Media History Digital Library (an open-access digital resource), Lantern (a search tool), and Coding Media History (a text-mining research project). These projects support each other. The first, the Media History Digital Library, is the pool of information: so far, more than 500,000 pages related to the history of film, broadcasting and recorded sound have been added to it. The second, Lantern, is a co-production of the Media History Digital Library and UW-Madison's Department of Communication Arts; it allows users to perform full-text searches across the library's entire corpus and is eventually expected to offer powerful functionality beyond search, such as topic modeling and network visualizations. The third, "Computational Analysis of the Hollywood Trade Press", is closer to machine learning: it will measure similarities between different papers, run statistical analyses across them and output results, answering questions such as how the buyers and the amount of advertising changed over time. Taken as a whole, one project keeps track of the information, another enables search over it, and the last helps draw results from all the data that has been kept. I think these are outstanding projects that make research in media history much easier.
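The abstract does not give implementation details for Lantern, but the basic idea of full-text retrieval over a digitized corpus can be illustrated in a few lines of Python with scikit-learn; the pages and the query below are invented.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Tiny stand-in corpus; the real Media History Digital Library holds 500,000+ pages.
    pages = [
        "Exhibitors report strong attendance for the new sound pictures this season.",
        "Radio broadcasting advertising rates continue to climb in the trade press.",
        "The studio announces a slate of musical features for the coming year.",
    ]

    vectorizer = TfidfVectorizer(stop_words="english")
    page_vectors = vectorizer.fit_transform(pages)

    # Rank pages by similarity to a free-text query.
    query = vectorizer.transform(["advertising in radio broadcasting"])
    scores = cosine_similarity(query, page_vectors).ravel()
    for idx in scores.argsort()[::-1]:
        print(f"{scores[idx]:.3f}  {pages[idx]}")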

Developing a virtual research environment for scholarly editing:

This paper is mainly about a project researching the German-language author Arthur Schnitzler. It is an 18-year project with several stages. The initial situation has three aims: providing the first edition of most of the works published during the author's lifetime, providing a genetic edition of the major part of the literary estate (both published and unpublished), and making all of the edited material accessible with text-critical commentary that contextualizes its literary-historical background. Beyond these aims there are the challenges of scholarly editing: it is difficult to define the links among Schnitzler's writings, and anyone who wants to publish an unpublished text must consider not only semantic links but also physical form and processuality. Unpublished manuscripts can be in very poor condition and unsuitable for reading. As for literary computing and software development, two aspects must be considered: first, the needs of scholarly editing, which can only be met by providing appropriate tools; second, the tools must be used properly. The paper mentions a development that helps clarify unpublished writings: Transcribo, a tool developed in collaboration between computer scientists and the philologists of the project partners, which helps determine what was actually written in handwritten texts. With these technological methods and tools, writings that were never published can become digitized material that can be read easily.

As these examples show, technology is used to support research: research in history and the exact situation of past times, research on media texts, and even the publication of handwritten material that has never been published. The first two papers describe technologies for searching and storing data, while the end of the second paper and the third paper show more advanced uses of technology such as analysis and machine learning. I think the project owners mentioned in these papers are doing very important work in support of research: they save researchers enormous amounts of time. Another nice property of these projects is that people build them once, but they keep helping researchers for many years. All of these projects and technologies are valuable, and I am thankful to their creators for such work.

http://dh2013.unl.edu/abstracts/ab-416.html

http://dh2013.unl.edu/abstracts/ab-181.html

http://dh2013.unl.edu/abstracts/ab-412.html

TREND: The use of technological tools to visualize poetry


A recent trend in the digital humanities has been the development of new visualization tools to aid the analysis and understanding of poetry. Historically, poetry has been analyzed and understood from a literary viewpoint focused on the syntactic and semantic elements of the texts at hand. Recent work, as is evident in the referenced abstracts, has instead turned to a more quantitative, data-driven and even graphical analysis of poetic literature.

The authors of the three abstracts all extract features, key elements such as margin size, width, height, spacing of the text, syntax, rhyme, narrative structure, the organization of the poem and language elements. These features are used by the different tools to represent the corresponding poems graphically and to help analyze and compare them visually. After organizing all of this information, the authors are able to combine those features via pattern recognition.

It is at this point that their tools differ in approach and motivation. Houston et al. [1] use the recognized patterns to find clusters of poems, which helps them analyze large corpora of literary work, for example from the Victorian era. Their tool enables them to identify significant trends and patterns in the graphical design of Victorian books, and to find out which texts, including previously little-studied or unstudied ones, are similar to other works from the same era and are thus representative of it rather than simply anecdotal.
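Neither the abstract nor this summary specifies the clustering algorithm used; as a minimal sketch of the general idea, grouping poems by layout features, here is a Python example with scikit-learn in which all feature values are invented.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    # Invented layout features per poem:
    # [margin width (cm), line spacing, mean line length (chars), stanza count]
    poems = {
        "Poem A": [2.1, 1.2, 38.0, 4],
        "Poem B": [2.0, 1.3, 40.0, 4],
        "Poem C": [3.5, 2.0, 22.0, 12],
        "Poem D": [3.4, 1.9, 24.0, 10],
    }

    features = StandardScaler().fit_transform(np.array(list(poems.values())))
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

    for title, cluster in zip(poems, labels):
        print(title, "-> cluster", cluster)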

On the other hand, Abdul-Rahman et al. [2] map their features onto a 26-dimensional space, allowing the end user to decide which dimensions to display in the tool. The dimensions correspond to the features, or attributes, that the authors defined as relevant to the analyzed poems, such as meter, sound, tone or rhyme. Here too the authors perform pattern recognition, which enables them not only to compare poems with each other but also to place the poems in their historical, societal and technological context.

Lastly, Meneses et al. [3] take the approaches of the other two groups one step further: their tool not only analyzes poems according to certain features and patterns, but also lets poets, and even other writers and readers, interact with it directly in real time. In other words, it is a framework that affords a symbiotic relationship between writing and visualizing a poem. As an author writes new poetry, its visualization gives direct feedback, allowing the writing process itself to be understood from a unique new viewpoint.

Even though each group of authors uses its tool with a different motivation, what they all have in common is the use of visualization technology to gain a better understanding of poetry.

References

  1. Houston, Natalie M.; Audenaert, Neal. “Reading the Visual Page of Victorian Poetry.” Digital Humanities 2013, July 2013. http://dh2013.unl.edu/abstracts/ab-274.html
  2. Abdul-Rahman, Alfie; Coles, Katharine; Lein, Julie; Wynne, Martin. “Reading Freedom and Flow: A New Approach to Visualizing Poetry.” Digital Humanities 2013, July 2013. http://dh2013.unl.edu/abstracts/ab-143.html
  3. Meneses, Luis; Furuta, Richard; Mandell, Laura. “Ambiances: A Framework to Write and Visualize Poetry.” Digital Humanities 2013, July 2013. http://dh2013.unl.edu/abstracts/ab-365.html

Data management from the perspective of the Digital Humanities


After decades of data storage it is easy to lose track of the data that really matters. Current research trends force the adoption of new technologies to solve specific problems while data preservation is left behind, and since technology evolves rapidly, data is left at risk. In the domain of the Digital Humanities, is there any solution capable of addressing the reality of contemporary humanities data so that it remains accessible and reusable?

The paper “Lost in the Data, Aerial Views of an Archaeological Collection” [1] presents a visual analytic tool that displays “aerial views” of digital collections and a tool to navigate the curation process. The study comprises a collection of more than a million files, representing more than forty years of research activities by the Institute of Classical Archaeology (ICA) at the University of Texas at Austin. Now that the Institute’s focus has shifted from fieldwork to publication, a crisis arises as researchers try to retrieve, assimilate and share those digital resources for study and dissemination.

In response, they adopted a new data management strategy that does not interrupt ongoing research while documenting and archiving the collection and providing web access for collaboration. The collection is presented visually as directories, so users can navigate, search, browse and select them for closer observation. The main advantage is that information can be explored more effectively, since the visualization gives a clearer view of the collection's content and significance.


Figure 1. View of the entire collection.
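The abstract does not explain how the aerial views are computed, but the raw material for such a view, per-directory file counts and sizes, can be gathered with a short Python script; the starting path below is a placeholder.

    import os
    from collections import defaultdict

    def directory_summary(root):
        """Aggregate file counts and total bytes for each directory under root."""
        summary = defaultdict(lambda: {"files": 0, "bytes": 0})
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                try:
                    size = os.path.getsize(os.path.join(dirpath, name))
                except OSError:
                    continue  # skip unreadable files
                summary[dirpath]["files"] += 1
                summary[dirpath]["bytes"] += size
        return summary

    # Placeholder path; point this at a real collection to get per-directory totals
    # that a treemap-style "aerial view" could then display.
    for directory, stats in sorted(directory_summary(".").items()):
        print(f"{directory}: {stats['files']} files, {stats['bytes']} bytes")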

The project “ChartEx (Charter Excavator)” [2] is another innovative application, based on a novel instrumental interaction technique, capable of exploring the full-text content of digital historical records, in this case charters, a fundamental source for studying the lives of people in the past. It uses a combination of natural language processing (NLP) and data mining to extract from the charters information about places, people and events in their lives. ChartEx is intended to assist researchers in the whole process of searching, extracting, analyzing, linking and understanding the charters' contents, a purpose similar to that of the archaeological collection project presented above.
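ChartEx relies on its own NLP and data-mining pipeline, which the abstract does not detail; purely as an illustration of extracting people and places from charter-like text, here is a sketch using the off-the-shelf spaCy library (the sentence is invented and the small English model must be downloaded separately).

    import spacy

    # Off-the-shelf named entity recognition as a stand-in for ChartEx's own pipeline.
    # Requires: python -m spacy download en_core_web_sm
    nlp = spacy.load("en_core_web_sm")

    charter_text = (
        "William son of Robert grants to the church of St Mary in York "
        "two acres of land in Fulford, witnessed by Thomas de Bolton."
    )

    doc = nlp(charter_text)
    for ent in doc.ents:
        print(ent.text, "->", ent.label_)  # e.g. PERSON, GPE, ORG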

Since historians work with vast amounts of information contained in charters, the project is also creating a virtual workbench to achieve full interaction between computational systems and humans. The archaeological collection applies a similarly innovative method from a different angle; in both cases the intention is that users interact with a visual representation of the collection.

ChartEx designed a markup scheme that represents how historians currently read charters and extract information from them. The scheme was created through a collaborative process involving researchers. The same methodology of gathering knowledge from various experts was also applied in the first project, allowing a better understanding of the context of both collections in order to plan their preservation.

 

The paper “A concept of data modeling for the humanities” [3] treats data modeling as a core task of digital humanists, since it underlies the creation of databases, digital editions, geographical information systems, research collections and digital libraries, among other activities carried out by the field.

“Data modeling is referred to the activity of designing a model of some real (or fictional) world segment to fulfill a specific set of user requirements using one or more of the meta models available in order to make some aspects of the data computable, to enable consistency constraints and to establish a common field of perception.”[3] 

The paper considers the possibility of a data modeling activity shared by all the humanities, which could lead to the definition of a general theory of data modeling specific to DH. To this end, two features common to all activities in the field are considered. First, the objects of data modeling in DH are artifacts, and most of their properties are intentionally created. Second, the objects of humanities research, as well as the research carried out on them, have a long history, so the models have to convey these complexities.

The hypothesis, then, is that a theory of data modeling of this kind could help close the gap: it would support the continuous retrieval and preservation of data with a long history and make the data collections of the projects presented here more accessible for continued access and reuse.

 

Nowadays, researchers have access to huge amounts of data in the form of digitized historical records, but current search engines are not enough to exploit them in detail. The projects presented here give insight into applications developed to search, extract, analyze, link and understand a collection's content, providing functionality not seen before. However, there is still a gap for projects with a long history: the solutions provided need to be adapted to the realities of contemporary humanities data, and to accomplish this, new theories encompassing the field of digital humanities need to emerge.

References:

1. Lost in the Data, Aerial Views of an Archaeological Collection

http://dh2013.unl.edu/abstracts/ab-371.html

2. ChartEx: a project to extract information from the content of medieval charters and create a virtual workbench for historians to work with this information

http://dh2013.unl.edu/abstracts/ab-431.html

3. A concept of data modeling for the humanities

http://dh2013.unl.edu/abstracts/ab-313.html

3D modeling and representation – A powerful visualisation tool


During the past 20 years, the rapid development of personal computers and computer science has revolutionized the way people extract, analyse, represent and interact with information. Recognizing this, the Digital Humanities try to take full advantage of the new techniques and technologies developed in order to efficiently manipulate and represent the vast information coming from the past and the present. Specifically in the field of representation and visualization, recent years have seen an increasing demand for three-dimensional (3D) representation of spatial information such as historical buildings, maps and landscapes. Modern hardware and software, through virtual environments and advanced computer graphics, make it possible to actually bring historical sites to life and even interact with them. Moving even further, 3D printing technology is finally maturing and becoming more accessible to the general public, realizing the concept of transferring these representations from the virtual to the real world.

3D modeling and reconstruction of historical buildings and mechanisms

Nowadays, sophisticated computer programs and platforms are widely used to model and reconstruct 3D representations of buildings and fabrications from various historical data. For example, [1] describes an effort to reconstruct European and Chinese astronomical clock towers using a number of software programs and platforms, each corresponding to a different stage of the reconstruction procedure. The endeavor is not as simple as it might seem at first glance, as it is not only about a static representation of historical buildings but also about geometrically modeling the different parts and formulating and visualizing their relative motion. To this end, the researchers will first use modeling software such as 3DSMAX and SOLIDWORKS to create the different static and dynamic parts, as well as JavaScript and VRML technology to control the display. They will also use procedural modeling to facilitate and automate the modeling and production of the solid components of the structure. Finally, they will use ADAMS software to link all these different parts into one complete system and conduct simulations and experiments on it to determine missing information and unknown parameters. Doing this for European clock towers will be easier, since there are many surviving examples across the continent, whereas information about the ancient Chinese water-driven astronomical clock towers can be derived only from literary sources.

Considering the potential of using online 3D game engines

Taking a step forward, DH scholars and researchers have considered using the sophisticated 3D engines of online games to visualize archaeological sites through virtual environments. Computer game graphics have come a long way since their early applications and look nothing like the simple two-dimensional pixel depictions of their predecessors. New and powerful online 3D game engines such as Unity 3D allow the reconstruction of large and detailed archaeological environments containing vast amounts of data. Users from all over the world are just a few clicks away from this historical treasure; the only prerequisites are an internet connection and a browser with the proper 3D engine plug-ins. Users will not only be able to walk virtually through ancient buildings and cities but also interact with the environment. Depending on the purpose and the coding structure of the project, the user may be able to make annotations live while wandering the site, or even actively change the environment according to their interpretation and historical knowledge. This could prove very useful for collaboration between scholars, for restoring missing pieces of information and for clarifying ambiguous historical data. Of course, some control might be needed over the level of public access to such features, especially if users can make significant alterations.

Virtual Hadrian’s Villa simulation on Unity 3D engine by IDIA Lab

Shah Jahan Mosque interactive environment running on Unity 3D engine – property of Islam In British Stone Website

As an example of the above, the author of [2] has created a real-time reconstruction of an 18th-century North American imperial fort based on the Unity 3D engine. In this reconstruction the user can witness the actual construction stages of the fort as it developed through time, as well as different interpretations of architectural features and additional data provided through links to documents, maps, multimedia and more, dispersed across the layout.

Transforming virtual 3D models into physical objects

Desktop fabrication is a disruptive technology that enables the transformation of digital models into solid physical objects, made mostly of plastic. Desktop 3D printers, milling machines and laser cutters are some examples of this technology, used until recently mainly for prototyping and manufacturing. Although this fascinating technology is still not widely used in the Digital Humanities, it presents great potential for preserving and exhibiting cultural heritage as real 3D models. The importance of 3D modeling and desktop fabrication research in DH is underlined in [3], which focuses on

  • Describing the current workflow of a desktop fabrication procedure: photographing and digitizing the object of interest, using appropriate computer software to extract, modify and bring the virtual 3D models to a printable format, and finally exhibiting them live, interactively or online (a minimal sketch of the mesh-to-STL step follows this list). Suggestions are also made about this workflow, as well as basic ways to contribute to desktop fabrication research in DH contexts.
  • Highlighting the importance of receiving feedback on the relevance of this kind of fabrication to different aspects of the Digital Humanities, as well as of using makerspaces in DH research. The basic elements and characteristics of a makerspace are also identified.
  • Pointing out the need to define optimized techniques for error-correcting 3D models, attributing the materialized 3D artifacts, enhancing desktop fabrication and more.
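As a minimal sketch of the "bring to a printable format" step mentioned in the first point above (not the workflow from [3] itself), here is how the trimesh library can load a digitized mesh, apply basic repairs and export an STL file for printing; the file names are placeholders.

    import trimesh

    # Placeholder input: a mesh produced by photogrammetry or 3D scanning.
    mesh = trimesh.load("digitized_artifact.obj")

    # Basic clean-up before printing; real workflows involve far more careful repair.
    trimesh.repair.fix_normals(mesh)   # make face orientations consistent
    trimesh.repair.fill_holes(mesh)    # close small gaps in the surface

    print("watertight:", mesh.is_watertight)  # printable meshes should be watertight

    # Export to STL, the common input format for desktop 3D printers.
    mesh.export("printable_artifact.stl")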


To conclude, all the topics discussed above agree on one thing: 3D representation techniques are becoming extremely important in the Digital Humanities. Technology provides us with the tools to accurately reconstruct 3D cultural heritage objects, from buildings and cities to sculptures and mechanisms, not only in a virtual environment but in reality too. All these practices make history more tangible and accessible to the general public and to the DH community. With the right amount of effort and investment in 3D simulation and desktop fabrication, the possibilities are endless.

References :

[1] A Comparative Study of Astronomical Clock Towers in Europe and China Based on their Detailed 3D Modeling. Li, Guoqiang; Van Gool, Luc.
http://dh2013.unl.edu/abstracts/ab-130.html

[2] A 3D Common Ground: Bringing Humanities Data Together Inside Online Game Engines. Coltrain, James Joel.
http://dh2013.unl.edu/abstracts/ab-420.html

[3] Made to Make: Expanding Digital Humanities through Desktop Fabrication. Sayers, Jentery; Boggs, Jeremy; Elliott, Devon; Turkel, William J.
http://dh2013.unl.edu/abstracts/ab-441.html

 

Breaking down abstract concepts through computer science


How would you define a promenade in tango? Or an impressionist painting? In these cases, as in many artistic domains, concepts are identified by qualitative and rather abstract definitions that leave significant room for personal interpretation. It can therefore be quite difficult for different people to discuss a given concept when they have different interpretations in mind. The recent emergence of computer science has opened new perspectives for defining such concepts: through large-scale data analysis and modeling, we are now able to study artistic disciplines in a quantitative way. In this context, we will discuss three interesting projects that used new technologies to offer new insight into human behavior in very different domains.

The ARTeFACT project aimed to provide a computer tool, in the spirit of natural language processing (NLP), for the universe of dance. It is divided into two main parts. The initial work was the development of methods for the computer identification of dance movements in 3D. A library of codified dance steps was created (along with relationships between steps); each step in the library was then performed by professional dancers and captured by a motion capture system (cameras and reflective markers), from which several relevant features were extracted, such as foot-ground contact and knee and hip angles. From there, data analysis and classification were performed: each codified step was associated with a certain set of features. Finally, the classification model was tested on another set of codified steps to assess its reliability, showing very good results (97.3% correct step classification after model re-adjustment). The second part consisted of developing a parallel library matching physical features with "abstract movements" as they are sometimes described in dance: "struggle", "victory", "attack" and so on. A similar procedure was implemented in order to identify movement patterns in a variety of dance works. Although this part is still ongoing, it already shows encouraging results. The overall project can thus be used to identify dance steps in a 2D dance film, and it extends to other movement-based disciplines.
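The abstract does not name the classifier used by ARTeFACT, but the step-classification idea, mapping motion-capture feature vectors to labelled steps, can be sketched with a generic supervised classifier from scikit-learn; the features, labels and step names below are invented.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Invented motion-capture features per sample:
    # [left knee angle, right knee angle, hip angle, foot-ground contact (0/1)]
    X = np.array([
        [165, 120, 170, 1], [160, 125, 168, 1],   # label 0: a codified "step"
        [ 90, 150, 140, 0], [ 95, 145, 138, 0],   # label 1: a codified "leap"
        [130, 130, 155, 1], [128, 132, 150, 1],   # label 2: a codified "turn"
    ] * 10)
    y = np.array([0, 0, 1, 1, 2, 2] * 10)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))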

The VisualPage project focused on a global understanding of the poetry of the Victorian era. It started from the assumption that a poem cannot be reduced to plain text only, and that many other characteristics of the poem's printed page should be taken into account. These additional features could then be used to identify undiscovered similarities and evolutions in poems of Victorian culture. The project was divided into three tasks. The feature extraction module performs image processing of the poem's page and extracts relevant features such as typeface size, margin size and spacing of text lines, before gathering all these features in a library. The pattern recognition module was designed to find relationships within the collection of poems in terms of the studied features. Finally, the analysis module provides a data visualization and exploration interface where new queries can be defined and assessed. Although this project is still at the proof-of-concept stage, it could make it possible to identify significant patterns in the graphical design of Victorian books, as well as the evolution of these features during the Victorian period and their differences across authors.

The final project consisted of a deep analysis of character networks in a large set of theater plays and movies, in order to discover similarities in literature and film across genres and over time. The first task was to develop methods for automatically extracting character interactions from the scripts of movies and plays. These interactions were then gathered into networks using four different algorithms, each defining character interaction in a particular way (number of scenes of common appearance, total number of words exchanged and so on). Next, several properties were computed for each network so that they could be compared: the relative importance of the top or main characters, the centrality of the main character, the strength of relationships, the number of storylines and the number of characters. These network properties were used to determine characteristics of the movies and plays along various dimensions (type, date, rating and critical reception, genre, author). Finally, these assignments were tested on additional data using regression classifiers and decision trees. The results showed significant differences between the networks of plays and movies (plays usually have one central character holding all the important relationships, whereas movies use several main characters), between dates (older plays tend to have more disjoint groups of characters and more distinct storylines than newer ones), between genres (e.g. horror movies with only a few characters and a simple storyline) and between authors. The project thus produced a classification tool for categorizing movies and plays according to several general characteristics.
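The study defines four different interaction measures; as a minimal sketch of just one of them, co-appearance in scenes, here is how a character network and a simple importance measure could be computed with networkx, using invented scenes.

    import itertools
    import networkx as nx

    # Invented scenes, each listed as the characters who appear in it.
    scenes = [
        ["Alice", "Bob"],
        ["Alice", "Bob", "Carol"],
        ["Carol", "Dan"],
        ["Alice", "Dan"],
    ]

    G = nx.Graph()
    for scene in scenes:
        for a, b in itertools.combinations(scene, 2):
            # Edge weight = number of scenes in which the two characters co-appear.
            if G.has_edge(a, b):
                G[a][b]["weight"] += 1
            else:
                G.add_edge(a, b, weight=1)

    # One possible "importance" measure: weighted degree (sum of co-appearance counts).
    for character, degree in sorted(G.degree(weight="weight"), key=lambda x: -x[1]):
        print(character, degree)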

As a conclusion, one could say that although the three projects presented operate in very different domains, they all provide a way of defining an abstract concept through concrete aspects. Where dance was once described with metaphorical vocabulary, the ARTeFACT project breaks this wide artistic discipline down into small but well-defined elements. Similarly, where a poem's layout is a visual signal that usually accompanies a text to emphasize its effect, the VisualPage project defines it through quantitative geometrical properties. Finally, where aspects of a movie are rather vaguely defined, the character network project refers to quantitative features in order to describe them clearly. This "translation" makes it possible to compare different entities in a stable and repeatable way by referring to the newly identified concrete aspects, and to discover similarities or differences that had gone unnoticed until then.

References:

Visualization of Uncertainty


Uncertainty is a state that cannot be entirely described because of limited knowledge. It can arise in data when they are acquired, processed or visualized (the last case is called uncertainty of visualization).

This short post focuses on the visualization of uncertainty, which here means the use of special visual variables (e.g. colour, size or texture) to include uncertainty in a diagram. This is important because people tend to treat data differently when they can visualize them instead of simply reading them: data are not questioned much once visualized [2]. How uncertainty is included in a diagram depends mostly on the diagram's properties. For some kinds of diagram there are a few simple variables that can be used to encode uncertainty, whereas for others it is much more challenging. In terms of modeling, including uncertainty in a visualization amounts, in most cases, to adding a dimension [4]. Three examples are given below to show how uncertainty can be visualized.

In “Digging into Human Rights Violations: phrase mining and trigram visualization”, the data under study come from multiple testimonies. In this case, temporal, locative and entity uncertainties are introduced at the very start, during acquisition. The authors chose a trigram-based visualization called an event trigraph: in short, a 2D planar diagram whose building blocks are events. The weights on the lines represent confidence values, which peak at 1. This representation makes it possible to visually associate events that appear in different documents at the same time [1].


Figure 1: Event trigraph with uncertainty values and voids

Instead of using a special kind of graph to show uncertainty visually, the authors of [2] propose an approach that only slightly modifies the original representation. One of the most natural ways to visualize uncertain data is probably to use blurriness. The analogy is simple but strong, and, just like location, size, texture, colour, orientation, shape, colour saturation or transparency, blurriness is a good variable because these features correspond to the basic feature channels in our primary visual cortex and are thus perceptually distinct (Ware 2012) [2].
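As a minimal sketch of this idea (not taken from either paper), here is how colour saturation and marker size could encode a per-point confidence value with matplotlib; the data are invented.

    import numpy as np
    import matplotlib.pyplot as plt

    # Invented measurements with a confidence value in [0, 1] for each point.
    rng = np.random.default_rng(0)
    x = np.arange(20)
    y = np.cumsum(rng.normal(size=20))
    confidence = rng.uniform(0.2, 1.0, size=20)

    # Encode uncertainty visually: low-confidence points are paler and smaller.
    points = plt.scatter(x, y, c=confidence, cmap="Blues", s=20 + 80 * confidence,
                         edgecolors="black")
    plt.plot(x, y, color="grey", linewidth=0.5)
    plt.colorbar(points, label="confidence")
    plt.xlabel("observation")
    plt.ylabel("value")
    plt.show()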

In the paper “Visualizing Uncertainty: How to Use the Fuzzy Data of 550 Medieval Texts?” [3], the approach is very different: a web-based tool offers a robust geospatial-versus-temporal visualization that includes uncertainty. The thresholds for the different types of uncertainty can be changed dynamically, and, more conventionally, the genres and time range can be selected to control the amount of data displayed. The uncertainty in this study seems to be higher than in the first two examples.

Uncertainty visualization is challenging because most visualizations are used under the assumption that the data are accurate. The problem is that data exist in different forms and can have different components; it is not always possible to simply draw an error bar or add a spatial dimension. With the examples given in this post, it should be a bit easier to see how to get around these difficulties, or at least to realize that they exist. One should also see that including uncertainty in a dataset not only makes it possible to use all of the data but can also lead to very interesting or unexpected results.

References:

[1] Digging into Human Rights Violations: phrase mining and trigram visualization. Miller, Ben; Li, Fuxin; Shrestha, Ayush; Umapathy, Karthikeyan. http://dh2013.unl.edu/abstracts/ab-368.html

[2] Bindings of Uncertainty: Visualizing Uncertain and Imprecise Data in Automatically Generated Bookbinding Structure Diagrams. Campagnolo, Alberto; Velios, Athanasios. http://dh2013.unl.edu/abstracts/ab-187.html

[3] Visualizing Uncertainty: How to Use the Fuzzy Data of 550 Medieval Texts? Jänicke, Stefan; Wrisley, David Joseph. http://dh2013.unl.edu/abstracts/ab-158.html

Additional:

[4] A Review of Uncertainty in Data Visualization. Ken Brodlie, Rodolfo Allendes Osorio and Adriano Lopes. http://www.comp.leeds.ac.uk/kwb/publication_repository/2012/uncert.pdf

Digital techniques for analyzing the human face and their different applications


When you meet your friends, the way you identify them is by their faces. The face is perhaps the part of the human body that carries the most information, and this is reflected in current research trends in the digital humanities. Almost everyone has used Facebook's photo tagging or a find-your-celebrity-lookalike service. Through digital tools such as facial recognition, data mining, graph theory and visualization, we can develop methodologies for analyzing the human face and putting that analysis to practical use. Here are several intriguing attempts to do exactly that.

First, the article "Understanding the Representation of the Human Through the Analysis of Faces in World Painting" describes an effort to analyze the relation between the face and emotion. Human facial representations form an archive of human expressions and emotions that can help us understand, through a science of the face (Cleese and Ekman 2001), various traits of the human condition as they evolved through time and space. The authors sought to answer questions about periods in art history, such as the significance of the Baroque as a culture derived from human expression.


Figure 1: Example of a face graph for two randomly selected paintings. The red crosses mark basic features; the blue lines show the distances between them.

The methodology borrows ideas from the Culturomics concept (Michel et al. 2011) to deal with a huge amount of data, and applies a face recognition algorithm of the kind also used in Facebook's photo-tagging system. It tackles the study of this huge set of features in three steps (a minimal sketch of the first step follows the list):

  1. Building a graph from the set of basic features.
  2. Finding clusters in the extended features.
  3. Comparing the graphs and the clusters, corresponding to the basic and extended features respectively.
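The abstract does not spell out the exact feature set or algorithms; as a rough sketch of step 1, the face graph, here is how pairwise distances between a handful of facial landmarks could be turned into a weighted graph with networkx, using invented landmark coordinates.

    import itertools
    import math
    import networkx as nx

    # Invented 2D landmark positions for one painted face (the "basic features" of step 1).
    landmarks = {
        "left_eye": (120, 95),
        "right_eye": (180, 96),
        "nose_tip": (150, 140),
        "mouth_left": (130, 175),
        "mouth_right": (170, 176),
    }

    G = nx.Graph()
    for (name_a, pos_a), (name_b, pos_b) in itertools.combinations(landmarks.items(), 2):
        distance = math.dist(pos_a, pos_b)  # Euclidean distance between landmarks
        G.add_edge(name_a, name_b, weight=round(distance, 1))

    # These distances are the kind of features that could later be clustered and compared (steps 2-3).
    for a, b, data in G.edges(data=True):
        print(f"{a} - {b}: {data['weight']}")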

On the other hand, here is another attempt at a similarly holistic analysis of human relations using different digital techniques. In the study "Research to clarify the interrelationships between family members through the analysis of family photographs", digital technologies are also indispensable. Digital tools were used so that a network analysis of all the people depicted in over 100 photographs could be carried out; this requires not only functions that assign person information to images but also computing facilities that perform network analysis on the relationships between tagged persons.

It is necessary to perform a numerical analysis of the people who appear together in photographs taken during the family's official events. The authors therefore constructed a digital cultural heritage system to analyze relationships within a family using photographs. The techniques used in the research are:

  • Iconographic analysis using authoritative information
  • A photograph annotator
  • Family network analysis results


Figure 2: Annotation display

Finally, there is another technique, Reverse Image Lookup (RIL), usually used to identify unlicensed reuse of commercial photography, which can help assess the impact of digitized content. In order to establish where digitized images are reused on other webpages, the authors assessed the methods currently available for applying RIL, establishing how useful it can be to the cultural and heritage sectors.

RIL technologies are those which allow you to track and trace image reuse online. The main commercial service, TinEye, available since 2008, finds ‘exact and altered copies of the image you submit, including those that have been cropped, colour adjusted, resized, heavily edited or slightly rotated’ (TinEye, n.d). Since 2007, Google Image Search has also provided a free service which can find similar images across the Internet.
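TinEye's and Google's matching algorithms are proprietary, but the underlying idea of recognizing altered copies of an image can be illustrated with perceptual hashing using the Pillow and imagehash libraries; the file names and the threshold below are placeholders.

    from PIL import Image
    import imagehash

    # Placeholder files: an original digitized image and a suspected reuse found online.
    original = imagehash.phash(Image.open("original_painting.jpg"))
    candidate = imagehash.phash(Image.open("found_on_the_web.jpg"))

    # Perceptual hashes of cropped, resized or recoloured copies stay close together;
    # subtracting two hashes gives their Hamming distance.
    distance = original - candidate
    print("hash distance:", distance)
    if distance <= 10:  # threshold chosen arbitrarily for this sketch
        print("likely an altered copy of the same image")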

References

Not Exactly Prima Facie: Understanding the Representation of the Human Through the Analysis of Faces in World Painting

http://dh2013.unl.edu/abstracts/ab-206.html

Reverse Image Lookup, Paintings, Digitisation, Reuse

http://dh2013.unl.edu/abstracts/ab-243.html

Research to Clarify the Interrelationships Between Family Members Through the Analysis of Family Photographs

http://dh2013.unl.edu/abstracts/ab-332.html

Exploring history, from small-community sourcing to crowdsourcing


One of the main trends in the digital humanities is the digitization of historical materials such as letter exchanges, medieval books and manuscripts. Western European history is full of archives that contain very interesting information on socio-political contexts. Unfortunately, these registers have been hidden from the masses throughout history, a problem that has recently been challenged by the rapid development of digitization technologies and the internet. Historical archives usually constitute a huge amount of data that cannot be processed by individual researchers. Communities of people can therefore interact to enrich the metadata (through ideas or contributions, for example), a phenomenon often referred to as crowdsourcing.

Crowdsourcing is the practice of obtaining needed services, ideas, or content by soliciting contributions from a large group of people, and especially from an online community, rather than from traditional employees or suppliers [1].

This post reviews three projects that vary in the amount of external input they rely on, and proposes a general pathway for digitizing historical data.

Different challenges, different solutions

To better understand how metadata can be generated from manuscripts and how people can interact to enrich it, a few examples are taken from the DH2013 conference. They provide insight into how to digitize manuscripts effectively according to the type of manuscript.

For the Medici Archive Project (MAP) [2], the translation of more than four million letters exchanged across Europe from and to the Medici family was achieved by a small interacting community of scholars with high levels of expertise in palaeography and historical training. Different languages (Italian, German, Dutch, Latin, etc.) were used over the period from 1537 to 1743, requiring different kinds of expertise to translate the letters into English. The model developed during this project was therefore based on community-sourcing, where academically accredited scholars could interact and share data on a forum. A small-scale test showed the ability of such a small community to enrich a digitized historical database.

The main objective of the Comédie-Française Registers Project (CFRP) [3] was to use the Comédie-Française's repertoire to analyze socio-political trends in France between 1680 and 1793. The authors argue that analyzing the theatre's income over time can reveal trends in French culture as well as the political situation of a given period; analysis of the metadata showed, for instance, that people were less inclined to go to the theatre after a king's death. Other studies with varying parameters show promise for further scholarly analysis, where new studies could build on this project's data interpretation. The data, which are simple to digitize since they consist of numbers only, required no external crowd input to build the database, yet the project still offers powerful tools that can later be used for crowdsourcing.
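The CFRP data model is not described in this post; as a minimal sketch of the kind of numerical analysis mentioned here, aggregating box-office receipts over time, here is a pandas example with invented records.

    import pandas as pd

    # Invented daily box-office records; the real registers span 1680-1793.
    records = pd.DataFrame(
        {
            "date": pd.to_datetime(
                ["1714-01-03", "1714-05-12", "1715-02-20", "1715-09-10", "1716-03-15"]
            ),
            "receipts_livres": [412, 530, 298, 150, 390],
        }
    )

    # Aggregate receipts by year to look for dips, for instance around a king's death
    # (Louis XIV died in September 1715).
    yearly = records.groupby(records["date"].dt.year)["receipts_livres"].sum()
    print(yearly)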

Thus, rather than crowdsourcing, MAP’s approach is one of community-sourcing, creating a hierarchy of levels of contributors

Finally, a comparative Kalendar was yet another example of digitization. The challenge for this research was to study different versions of a similar manuscript, the Book of Hours [4]. The main goal was to establish differences between versions based on the spatio-temporal context of their writing, using a distributed environment. Different repositories and tools could interact to propose a comparative Kalendar, where user-generated data (transcriptions, commentary notes) can be added and shared between users, leading to a "dynamically growing resource". This platform is thus a typical example of large-scale crowdsourcing.

Crowdsourcing, small-community sourcing or none of them?

This comparison shows that historical data can be digitized, but that the means of carrying out the work, as well as the output, can vary. The metadata generated from manuscripts can be obtained by crowdsourcing, by small-community sourcing, or by neither. For translation-related tasks, small-community sourcing is preferred: it allows a degree of control over the data being produced as well as closer exchange between participants who are familiar with each other. When digitized data have to be analyzed and interpreted for comparative purposes, the amount of external input can vary drastically. Relating data has to go hand in hand with interpreting it, which can be achieved either by modern technologies or by a large interacting community. If the data are simple to digitize and easy to compare (tickets sold per day, for instance), crowdsourcing is not necessarily a good option; instead, tools can be implemented to compare the data, on top of which other platforms or interactive studies can be built. When the data are hard to compare with standard algorithms (images, local dialects, etc.), crowdsourcing is preferred: users can annotate or add metadata to the original comparative database, and their contributions stack up to create a dynamic platform. This produces a large amount of data, at the cost of the reliability of the information being generated.

To conclude, we have proposed a comparative study of digitization cases from historical archives. Different solutions exist depending on the type of data being produced. Generally speaking, crowdsourcing is a powerful tool in the digital humanities and has proven well suited to metadata enrichment from historical archives.

References

[1] http://www.merriam-webster.com/dictionary/crowdsourcing

[2] Opening Aladdin’s cave or Pandora’s box? The challenges of crowdsourcing the Medici Archives. Allori, Lorenzo; Kaborycha, Lisa. http://dh2013.unl.edu/abstracts/ab-312.html

[3] Visualizing Centuries: Data Visualization and the Comédie-Française Registers Project. Lipshin, Jason; Fendt, Kurt; Ravel, Jeffrey; Zhang, Jia. http://dh2013.unl.edu/abstracts/ab-458.html

[4] A Comparative Kalendar: Building a Research Tool for Medieval Books of Hours from Distributed Resources. Albritton, Benjamin; Sanderson, Robert; Ginther, James; Bradshaw, Shannon; Foys, Martin. http://dh2013.unl.edu/abstracts/ab-422.html
