Last week we discussed archival history and textual analysis. We looked at the preservation of books that were centuries old, which was extraordinary to say the least. The class discussed how such tools would be of use to historians of the future, as the art of preservation creeps further from its humidity-controlled basement roots into the new age of digital preservation. We were fortunate enough to be exposed to both forms of preservation, and it is up to us to weigh the benefits and flaws of each.

Web resources such as “Quantitative Analysis of Culture Using Millions of Digitized Books” proved to be another useful tool for gathering digital data, since the project behind it gives us access to a large corpus of digitized books; roughly 4% of all books ever printed. Information like this is available to so many students, not just in the faculty of history, yet few students are ever exposed to it. For example, I was not aware that the University of Waterloo had such a distinguished and renowned archive of books dating back hundreds of years; books with a vast array of information about their time periods, all at our disposal. Primary sources lie almost literally under our noses, and we were unaware of it.

Another important tool we discussed in class was Mining the Dispatch, a project created by Robert K. Nelson. The project is unique in that it groups words and phrases that frequently appear together into topics. It is often compared to Google’s Ngram Viewer, which likewise works over an enormous body of digitized text, though the Ngram Viewer charts how often words and phrases appear over time rather than modeling topics. As the project describes itself: “It uses as its evidence nearly the full run of the Richmond Daily Dispatch from the eve of Lincoln’s election in November 1860 to the evacuation of the city in April 1865. It uses as its principle methodology topic modeling, a computational, probabilistic technique to uncover categories and discover patterns in and among texts”.

http://dsl.richmond.edu/dispatch/
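
To make the idea of topic modeling a little more concrete, here is a minimal Python sketch of the technique using scikit-learn’s LatentDirichletAllocation. This is my own toy example on invented newspaper-style snippets, not the actual pipeline behind Mining the Dispatch:

# Toy topic-modeling example: fit LDA on a few newspaper-style
# snippets and print the top words in each discovered "topic".
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "runaway slave reward subscriber apprehend deliver",
    "reward runaway negro subscriber jail deliver",
    "troops regiment battle soldiers enemy wounded",
    "army battle general troops killed wounded soldiers",
    "flour corn wheat market prices bacon sold",
    "market prices flour bacon corn barrels sold",
]

# Turn the documents into a word-count matrix.
vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(docs)

# Ask LDA to find 3 topics, i.e. clusters of words that tend to co-occur.
lda = LatentDirichletAllocation(n_components=3, random_state=0)
lda.fit(counts)

# Print the four most heavily weighted words in each topic.
words = vectorizer.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top = weights.argsort()[::-1][:4]
    print(f"Topic {i}: " + ", ".join(words[j] for j in top))

On a real corpus the input would be thousands of articles and the topics far noisier, but the principle is the same: words that keep turning up together get pulled into the same topic.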

What was most intriguing to me about last week was our discussion of archives. Reading “Archives in Context and as Context” by Kate Theimer gave me a broader understanding of what “archives” really are and how they should be viewed within the digital humanities. The gist of the article is that the archival community’s definition of “archives” should not be set aside by the digital community, and Theimer responds to the views of other scholars such as Kenneth Price: “Therefore, it is important to note that the formal definition of “archives” used in the archival community cited here recognizes no differences for electronic records, born digital material, or materials presented on the web. Price’s definition, put forward for a digital humanities audience, may be correct in that community of practice, but it should come as no surprise to digital humanists that archivists have concerns about that definition”. This past week has brought to our attention a great deal of knowledge and many tools at our disposal, all of which will help shape the world of digital history for future generations. I am curious to see what we will discuss in the coming weeks about programming in the humanities and the Python language.
