Elena M. Friot


Why Every Grad Student Should Take a Seminar in Digital History

We have been charged with submitting, for all the world to see, a critique of our digital history seminar.  We have further been instructed to be honest in our critique, and not be “nice” simply for the sake of making people feel good.  So, here goes.

I started the semester fairly enthusiastically – I really wanted to take a course in digital history, and I was excited to see something besides standard history seminars in the course offerings.  Plus, a friend and I had snooped around the Internet and perused the course syllabus before enrolling.  We knew what to expect (so we thought) and were intrigued.  I did balk a bit at being coerced into using Twitter and setting up a blog.  Prior to taking this class, I had been entirely anti-social media, and I have a Facebook account mostly so I can check in on my family now and then.  I rarely, if ever, post anything of my own.  Twitter struck me as a trendy, superficial everyone-is-doing-it-so-you-should-too fad.  But, bravely onward!  Blogging seemed a waste of time – time I could spend instead reading, doing research, or writing – but I liked the idea of blogging comments rather than sending an email or writing a response paper.  Once I got past the course requirements, which, in all honesty, appeared fairly light in comparison to other seminars (beware the shiny exterior – I probably did more work for this class than any other), I felt ready to tackle the world of digital history.  After a semester of frustration, elation, and at times utter confusion, I’ve concluded that every grad student should take a course in digital humanities.  The top five reasons are…

1.  This course will change the way you think.  The brain game of figuring out digital tools forced me to ask new questions of my topic, and treat my research as data.  I had to alter how I thought about the information I gathered, and manipulate (not distort) it in so many ways to use the tools appropriately that I sought out more and more information, looked in additional places for untapped sources, and pondered more effectively the meaning of the historical stuff I had in front of me.

2.  If you want to be a historian in the 21st century, you need to learn how to use Twitter.  Twitter certainly had some haters, and I more than once bemoaned the lack of conversation in #dh2068.  Still, I got into Twitter quite a bit, and I remain into it, though I have unintentionally tapered off my use over the last week. (Have no fear – I plan to resume Twittering after the chaos of finals week is over). Twitter is no fun if it is not used to develop a conversation, and some people assume that Twitter confuses people’s meanings and isn’t suitable for sharing big ideas.  My philosophy? Don’t knock it until you’ve tried it.  Multiply your 140 characters by an infinite number of Tweets, and you can have as long a conversation as you want.  I found that my best experiences with Twitter were the result of commenting, re-tweeting, and contributing to conversations (even with people I don’t know personally) on a regular basis.  Like the DH world in general, Twitter requires collaboration and community, and despite the millions of conversations, it can indeed be a lonely place without them.

3.  Blogging is a good way to practice your writing.  I’m not going to lie – I got into blogging, and plan to continue throughout my academic career.  Though we were encouraged to blog for class, I found myself blogging about my research and other topics along the way.  Because blogging was a requirement, I think some classmates were annoyed by the additional alerts they received when I updated my blog, and they suggested that we could perhaps use one blog for class and another for personal use.  My philosophy is this: if you don’t want to read my entry, delete the email or cancel the alerts.  We were encouraged to push our boundaries and take risks in this course, so I did.

4.  Taking risks pays off.  Because the possibilities of digital tools intrigued me so much, I dove into a research project and didn’t look back.  While I have not had the time to devote to a full-scale research paper, I have uncovered a multitude of sources and compiled a series of questions to guide further research next semester, and am on my way to a more well-defined dissertation topic.  Beyond that, I have figured out how to use digital tools and feel fairly confident in sharing my still-limited know-how with other DH newbies, and am working on a tutorial to submit to The Programming Historian.  I am well-versed enough in the challenges and possibilities of DH to sustain a reasonable conversation with others, and am looking to continue building my DH skills well after the seminar is over.

5.  You get comfortable with failure.  We started out the semester with a reading about productive failure.  I sat in front of my computer staring and clicking with little to no progress.  I made version upon version of visualizations with no visible change.  I attempted Gephi and just about had a panic attack when my screen changed and I couldn’t figure out how to get my pretty network back. (Never mind the repeated attempts it took to get the network in the first place.)  Historians don’t like failure – we’re not used to it, because failure to us is a personal inadequacy.  We can dismiss failure in the sciences because experimentation is the name of the game, and the net gain of the successes usually outweighs the far more numerous failures.  The more comfortable with “failing” I get, the more productive I am as a researcher and a writer, because I take the risk and believe that the potential for success overrides my desire to avoid failure.

Those are the warm fuzzy things.  There are certainly more pros to our DH seminar, but I’ll save them for my final project.

Here are the not-so-warm-fuzzy-things:

1. Grading and evaluation were not quite clear.  I know I earn a certain percentage for my blog posts, my use of Twitter, my leadership of discussion, and my overall engagement in discussion.  I certainly could have asked for a grade estimate at any time, but I feel that the lack of regular reporting on grades contributes to some of the requirements not being taken seriously by all, which impacts the ability of the entire class to engage critically with each other.

2. While I love the freedom of doing a series of blog posts for my final project, I do think that the freedom we have been given means that the objectives are not as clear.  A standard expectation for final projects is a sort of equalizer.  But, we have raised this question more than once in our class.  How do we evaluate digital work? Does time spent count for anything, or is it the final product that matters more? We all take varying amounts of time to write a 25 page research paper, but in the end, we all produce a 25 page paper.  We can count the pages.  How do we do this with digital projects? How do you evaluate risk? Creativity? Effort? Usefulness?

3. We were a small class, and the use of social media means that we all see the work we are doing for the class.  This is more personal than it is a reflection of others, but I was frustrated when I saw that others had not done the required work, or engaged critically with the content of the course in the way I thought was expected.  You would think that with social media as our main form of “work” and communication, public visibility would inspire us to adhere clearly to the requirements (we are in grad school, after all) but does it have the opposite effect?  Does a digital forum somehow relieve us of strict deadlines and requirements?

4. Again, we were a small class.  We were each supposed to lead a discussion, and we were all supposed to be in charge of a reading every week, if there were enough to go around.  Some people did a lot.  Some people did not.  If we need to take responsibility for discussion and for readings, we need to get this sorted at the get-go, and at the latest the week prior.  Leaving it to Twitter didn’t work all the time.

Despite some of the frustrations with the mechanics of the class, I learned more than I anticipated and look forward to learning more in the future.  I blog and I tweet – no small achievement, I assure you.  I am more comfortable with making my work transparent, and significantly more prepared to share my work in its early stages rather than in its final form.  And, at every possible opportunity, I will dissuade colleagues from slapping up a PowerPoint and calling it digital history.

Scholarly Scholarship and the Perils of Peer Review in the Digital Age

Have you ever thought about, written, or seen a word so many times that it starts to look funny and you question whether or not it is even a word? That’s how I feel about “scholar” after this evening’s provocative discussion on the quality control process of peer review and its application to digital projects.

Our debate grew rather heated as we engaged with the following ideas – scholarly vs. scholarship, knowledge, use value, and sustainability, among others.  Perhaps the most controversial part of the discussion was our exchange on the differences between scholarly work and academic scholarship.  Is there a difference? Do we value one contribution more than the other? Do scholarly contributions count as much as scholarship?  Our point of departure was a series of short readings about guidelines for evaluating digital scholarship, particularly as a contribution in consideration for hire, promotion, or tenure.  We (I think) decided that none of the guidelines are entirely satisfactory.

The MLA Guidelines propose a sort of “go me!” approach and offer no real methods for critiquing the products and processes of DH.  The work takes a backseat to the scholar’s ability to self-promote, and the message is fairly unambiguous: Advocate, and ye shall receive (a job, tenure, a snazzy CV, props for the digital humanities).  James Smithies takes us further and classifies digital projects into six categories.  Slightly put off by Category 6 and worried that our own attempts at DH might be “rarely seen, and generally politely ignored,” we were on board with the fact that any digital work contributes to the field as a whole, regardless of its category. [Just a side note: Most of us in #dh2068 are fairly confident that we will at least be up for consideration for Category 5 by the end of the semester.]  The University of Nebraska provides a series of guidelines for the techies in all of us, but the suggestions might only make sense to Category 1 and 2 digital scholars, and are the DH equivalent of the Chicago Manual of Style.  Todd Presner gives us a set of evaluative criteria that we found much more helpful, most probably because they replicate our standards of review for written texts – intellectual rigor, authorship, multidimensional applicability, sustainability, and scholarly contribution.

We saw the peer review process in action when we read William G. Thomas III’s process piece on a digital article.  The article, “The Differences Slavery Made: A Close Analysis of Two American Communities”, was born digital and went through several revisions before finding a home on the University of Virginia server.  Two things struck me about the process he and Edward L. Ayers went through to construct the article.  First, the process of review was almost identical to the process currently used by print journals and publishing houses.  Second, though the pair sought to produce a unique reading experience in digital media, the peer review process resulted in the construction of a digital project that bears an uncanny resemblance to traditional print articles.  This article was composed (I feel “written” doesn’t work too well when talking about a digital project – is it writing? construction? creation? production?) in the early 2000s and was therefore fairly exceptional for the time.  Even today I find it a great resource, especially when preparing for comps (just check out the historiography section and you’ll see what I mean!)  But, I can’t help but feel that reviewers tended to blame the form for the inadequacies they perceived in the content.  That is, they assumed it was the structure of the digital media that undermined the strength and clarity of the argument.  Reviewers balked at the presentation as too “gimmicky” and invoked some of the same arguments regarding authorial control that we see in Fitzpatrick’s Planned Obsolescence.  This perception is perhaps not a failure on the part of Thomas and Ayers, but a failure on the part of reviewers to do what Presner suggests – engage with the project on its own terms, respect the medium in which it is published, and privilege form and content equally.

Thomas and Ayers claim that the project was an “attempt to translate the fundamental components of professional scholarship – evidence, engagement with prior scholarship, and a scholarly argument – into forms that take advantage of the possibilities of electronic media.”  [insert lively class discussion here]

Can we, when employing and deploying digital tools in the service of humanities endeavors, simply translate traditional scholarly expectations from one form to the other? I think that’s perhaps the whole point of trying to develop a set of standards that evaluates digital projects on their own terms – that applying the traditional standards of review doesn’t work.  When applied to Thomas and Ayers’s project, the traditional standards altered the final form such that it became a journal article on a computer screen, with links instead of appendices.

So we come to the contentious question of the night: What is scholarship? We got a bit prickly in class, but found that there might be a difference between “scholarly work” and “scholarship.”  We thought that an argument is an essential feature of scholarship, but what kind of argument? If I put together a digital archive or a collection of primary sources and offer them to the public in a way that these sources were not previously accessible, have I produced a work of scholarship? Or, just something scholarly? I could say that there is an argument embedded in my work – I chose certain sources, I arranged them to tell a story, and I included information garnered from sweaty archival efforts.  Am I less a scholar than someone who publishes a transcribed diary?  Future scholars will use my work to produce their own – who’s to say that these efforts shouldn’t be rewarded with a job, a promotion, or tenure?

The heart of this debate, snarking and verbal jabbing aside, seemed to be the intellectual weight of process versus that of product.  What do we, as future producers of knowledge, value most? The end result or how we got there? If the monograph is the accepted standard, or the journal article a signifier of progress and prestige in the field, then the answer is clearly product.  Digital humanities scholars, though, put great stock in the process and methods – so much so that experimentation, risk, and potential failure (and the willingness to confess these!) seem to be the hallmarks of good digital scholarship.

Developing a rigorous review process that satisfies DHers and paper professors alike calls not just for separate standards for digital work, but for a universal set of guidelines that accounts for the myriad ways in which scholars might present their work, and for the changing definition of scholarship that the digital age requires.

Going with the Flow

This week’s reading list was flooded with readings about digital workflows.  The idea of the digital workflow, it seems, is first to digitize anything and everything that comes across your research gaze.  (note to world: I really want an IrisPen!)  I have tried to do this over the last year or so, simply because I find the piles of paper distracting and prone to disorganization, mouse infestations, and mis-locations due to military moves.  But I have no organized approach, and I need one.

Turkel suggests using DevonThink as a way to organize research, and after reading another post on it I pooh-poohed the idea because I don’t have a Mac.  I need discernible buttons on my mouse, and thus far every time I have used a Mac I have done something unfortunate to my work.  However, as I delve deeper into the world of DH I’m becoming slightly more convinced that a Mac might be more effective at handling the kind of work that I need to do.  I haven’t won the lottery yet, but when I do, a Mac might be on my list.  I was saved from my disappointment that the DH world was excluding me from these tools by a Twitter conversation in which I discovered that I could indeed run OS X on my computer.  Yay! So I did some reading about DevonThink and think that its best feature (aside from all the concordances and groupings, text searching and database-making) is the possibility it offers for writing in short bursts.  My biggest problem when doing research is that I follow this process:

Collect – Amass – Amass even more – Make big pile – Sort into small piles – Fluster at the piles – Try to write from the piles – Fluster at product of piles

Using a program like DevonThink may help prevent this, because as I collect research (for example, from a few frustrating hours in front of the microfilm machine) I’m forced to make sense of it right away and write a bit on what I’ve found, rather than wait until I think I’ve found everything.  Ultimately this process seems more productive on both practical and intellectual levels, and dare I say, more intuitive?

In tandem with readings on digital workflows, we did some reading on the pedagogy of DH.  We got to choose our own readings, and while I initially started with Debates in the Digital Humanities, edited by Stephen Brier, I migrated to an article in DHQ titled “Digital Pedagogy Unplugged”.  This article discusses some “digital” methods applied by DHers in their classrooms.  While we were supposed to think about the pedagogy of DH, I was intrigued by the application of digital thinking to more analog-based classrooms.  For example, a professor projected a theoretical text, paragraph by paragraph, on a screen and had the class read the text together in a “reimagin[ation] [of] real-time information processing in a very old fashioned way.”  The challenge for digital humanists is to teach with a digital pedagogy, not just with digital tools.  So, stuffing a PowerPoint in front of the class, or making snazzy graphics for key terms might be helpful but doesn’t get the class thinking about the material with their digital brains.

So, how does this help me think about my work in the classroom and my own research? A few ideas:

1. Do some “analog text-mining” with primary sources from our documents text.

2. Read especially challenging passages as a class so that everyone “is on the same page at the same time.”

3. Project my own passages on a screen.  Sometimes bigger helps you see better.

4. Digitize everything.  I do this with my students – everything they produce in class as groups gets scanned or photographed and shared on DropBox, so they can access the communal brain whenever they need to.
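For contrast with the “analog text-mining” exercise above, here is what the digital version might look like – a minimal word-frequency sketch in Python, in the spirit of a Programming Historian tutorial. The sample passage and the `word_frequencies` helper are invented for illustration, not taken from any actual course materials:

```python
from collections import Counter
import re

def word_frequencies(text, top_n=10):
    """Return the top_n most common words in a passage of text."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words).most_common(top_n)

# A made-up passage, standing in for a scanned primary source.
sample = (
    "The town read the news together, and the town wrote letters; "
    "letters home, letters overseas, the town kept them."
)
print(word_frequencies(sample, top_n=3))
# → [('the', 4), ('town', 3), ('letters', 3)]
```

The point of the classroom version is to do this counting and sorting by hand, so that students feel what the tool is actually doing before a computer does it for them.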

I certainly have more, but have already exceeded my word limit. The digital world does indeed make me more productive!

When technology fails you, not the other way around

Our digital history class grapples with our insufficient store of technological know-how, and frequently bemoans our collective lack of programming skills. Yet, I find these missing pieces easier to deal with than the actual failure, or insufficiency, of the technology itself – after all, they are problems that can be fixed. I can learn to program, code, or use mapping software. Maybe I could deal with computers not working the way they’re supposed to, or scanning software freezing up, but I don’t think the library staff would appreciate me giving them a good whack.

I took a couple rolls of microfilm to the library basement and anticipated that, after a few solid hours, I would have a nice collection of newspaper articles from the Appleton Press. The Press is a weekly still published in Appleton, Minnesota, and its issues from the 1940s were supposed to tell me about the life of the town during World War II, how residents communicated with their overseas soldiers, and how propaganda and advertising were mediated through a rural press.

I sat at a computer microfilm set-up, and got the microfilm on the machine just fine. According to the somewhat unhelpful user’s guide, the image was supposed to pop up on the screen. Uh, do I need to press a button? Does it miraculously appear? If it doesn’t, is it me or the system? When I finally figured out the scanning software (half of which didn’t actually work because it wasn’t installed) I thought that it was me. Then, I noticed that it didn’t matter what I did – nothing on the screen came into focus. What?!? Where is the high-def image I was promised? I started thinking it wasn’t me, but the computer.

So I did what every tech genius does – I switched computers.

The next one worked better. Images were clear, I could view the whole page (almost) on the screen, but I was still baffled that in order to save an image, I had to navigate it in an ever-so-small pop-up screen that entirely defeats the purpose of microfilm. “Text must be in focus when you save the image.” Fantastic. I can’t READ the text because the viewer is so small. Tiny fuzz became big fuzz. The slightest jostling unfocused the image and ruined my scan.

I surmounted these obstacles and managed to amass quite a collection of named images, which I was about to send to my flash drive to preserve for posterity. DISASTER. The entire system froze, and wouldn’t do a darned thing. Ctrl+alt+del is NOT an option on university computers. I managed to get the program open again (with the mal-affected program still not working in the background) and it mercifully recovered my images. Yes!

No.

The program renamed my carefully dated documents – “Recovered 1”, “Recovered 2”, and so on. No problem – I’ll check out the bad screen and re-type the file names.

No.

All the files have been re-ordered, so they are not in the same order they are on the bad screen. Imagine trying to match postage-stamp-sized miniatures of full-size documents to each other so that a hot-tempered historian doesn’t come after you and accuse you of plagiarism or some equally career-ending nightmare because you cited something wrong.

I stepped away from the technology and left it to stew in its own juices. Here’s a plug for historians getting involved in Digital Humanities – if we had our way, microfilm viewing would NOT be painful. First off, all microfilms would be converted to digital forms. That would save everyone from debilitating nightmares. Then, we would be able to “flip” through the pages, use a touch screen to select, crop, and print or save whatever it is you wanted. This technology must exist somewhere. I understand that small towns don’t have funds to digitize their resources, but this process was like historian repellent. I suppose this does offer some support for the actual visit to the archives, where you can indeed flip through, touch, and sniff, if you like, the documents you’re using, but I left the library more frustrated than jubilant at my magnificent findings.

And magnificent, they were. At least in MY opinion. After reading Streets of Honor, by Erling Kolke, I had a sense of the town as a small, rural community where everybody knew everybody else. I perused the World War II Letters blog, and came across names I recognized in the newspaper – including an obituary for the father, which shocked me, as I have not gotten that far in the letters. I found stories about the Appleton boys overseas, and read about their letters back home. Robert P. Miller, dentist turned lieutenant colonel turned mayor, has turned up – he speaks about his overseas experiences to the Kiwanis Club and the American Legion. So, the pieces are coming together as I learn more about this small community and their experience of World War II, and what is becoming apparent is that it was this experience of war that in part contributed to the way in which they chose to honor (and continue to honor) their fallen soldiers.

The lesson here is that sometimes technology doesn’t work, and sometimes it’s not user friendly and eats your data. In this case, I wished I had the real newspapers right in front of me – microfilm viewing is still archaic in light of the other technologies that are available – but I did get something out of my library nightmare, and that made the pain (mostly) worth it.

De(mist)ifying the Rump of History

I have now read John Lewis Gaddis’ The Landscape of History four times.  Every time I read it I find out something new, or think about the work of history through a different framework.  Reading it this time, in the context of digital history, I have had to revisit previous class discussions on the use of narrative, the tools at historians’ disposal, and the importance of data as an historical source.  Gaddis perches on the argument that history is about representation, and so we get this sense that our writing of history is similar to the task of the cartographer – we choose our level of detail, decide the scale, and benefit from the birds-eye view that the gift of hindsight bestows upon us.  Gaddis’ topographical explanation of history seems to lend itself to DH.  My own dabbling has suggested that DH enables better analysis at both the macro- and micro-levels and really alters the way I view my own subject.  “Zooming in and out” assists me in my own work, but DH extends that ability to my readers [users?!].  According to Gaddis, we start with structures and then dig up the processes that produced them; accordingly, if we use DH effectively, we are truly making everyone their own historians by giving them a “structure” (our conclusions, however representational they may be) and giving them access to the sources we used to arrive at that structure.

While Gaddis’ discussion of The Wanderer above the Sea of Fog is sort of fun, I don’t really find it an accurate model for what we do as historians.  The rump is pointed at the past or the future – it can’t be pointed at both.  If we, as historians, contemplate only one or the other, we elide the possibilities that a consideration of both proffers.  Gaddis’ romantic conclusion that the Wanderer is facing either a past or future in which “wisdom, maturity, the love of life and a life of love…lie” is unsatisfying.  Our view of the future informs our treatment of the past for good or bad, just as much as our inability to be totally objective (much to Novick’s dismay) can taint, but more hopefully enhance, our interpretations and conclusions.  And just a note – Gaddis’ painting might only show part of the story – what was in the scene that the painter chose to leave out? Maybe the rump is surrounded by mist.

The singular factor that makes history so exciting is the human, or, Gaddis’ self-motivated molecule.  We are motivated to action by innumerable factors, sometimes simple, but usually complex.  Historians untangle these complexities and quite frequently complicate the simplistic.  The benefit of DH is that it can help us do both – complicate what appears to be a simple story, and demystify the intricate.  Next up, how many variables are too many?

The Hedgehog or the Fox

Unusually perturbed whines one evening brought me to the pitch-black backyard, where I found my beagle worrying a large dirt patch around a prickly-looking ball.  Usually small mammals don’t stand a chance, but this hedgehog’s spiny defenses were (seemingly) impenetrable.

This is the picture that came to mind when I read the explanation of the hedgefox in Digital_Humanities.  Pretend that my dog was a fox, and you can visualize the possibilities of the “hedgefox.”  For every problem created by the hedgehog, she would have worried another patch. Scholars are trained in increasingly narrow specializations to the point of expert mastery – such devotion to a topic or theme requires years of work and sometimes blinders to interesting but otherwise distracting subjects.  The hedgefox of the digital age seems to solve this problem, as working in the Digital Humanities requires both the wily cunning of the fox, who is the master of many things, combined with the single-minded doggedness of the hedgehog.  Do we all need to be hedgefoxes, or is it equally productive to build a group of both hedgehogs and foxes, forgetting the whole food chain thing?

Practicing what they preach, Burdick, Drucker, Lunenfeld, Presner, and Schnapp argue for a collaborative humanities “discipline” guided by the possibilities of digital tools.  What they really seem to suggest is that we (I mean the public AND academic we) are living in a post-disciplinary world, in which the “humanities” require cross-curricular cooperation, infinite sharing of resources and knowledge, and a re-visioning of the ways we manufacture, distribute, and access intellectual property.  As an aspiring professional historian, I find frustrating the suggestion that our studious isolation and greedy grasp on archival discoveries and subsequent interpretation are outdated – I want to feel that what I am doing NOW will matter in the future, and that I will be able to lay claim to it as mine.  The authors point out that the issue of legitimacy is an important one – who can publish, and whose credentials matter?

Perhaps this crisis of ownership and intellectual validity will create a more critical public, so that by our participation in this digital culture we also encourage our readers to read analytically, question the arguments, and acquire knowledge from multiple sources.  Maybe professors won’t have to introduce the research paper by way of “Do not use __________ as your only source” because the cultural climate will expose students to so much more exciting, authoritative, and well-documented media that they wouldn’t even think of it.