Category Archives: Course Readings

Report from the Eng. Dept. – First-Year Comp Exam

Kathleen Fitzpatrick’s book, Planned Obsolescence, and the class discussion with her last Monday have recently become relevant to my own path through academia. Over the last week and the holiday weekend, I was asked by the English Dept. for my input on their proposals to remodel the first-year comprehensive exams.

You all may know about these in some way, but let me first briefly describe how the exam works, especially at CUNY / the GC. This is the exam that all PhD candidates in English must take before moving on to the next stage of their program. Not every department has one, but almost every PhD program in English seems to. I can’t speak to too many programs, but I do know that at Harvard, for example, they have a “Comprehensive Exam” they dub the “100-book” exam: you must read and know a 100-book canon (gag me) like the back of your hand, and then go into a timed session with 3-4 faculty and spit out all your knowledge.

When I looked at the GC program, I was glad to see that it used a different model, one which didn’t seem to favor any particular canon. That said, it is still a full-day, eight-hour, timed “exam,” in which you speed-respond to given essay prompts on an “empty” (brainless) computer.

After reports from students about the uselessness of this exam for measuring their skills as thinkers / writers / teachers, or for preparing them for “advanced study” (not to mention the fact that it penalizes students with learning disabilities or students for whom English is a second language), the Department has recently decided to try to change the model.

(Don’t worry, I’m going to get back to Kathleen’s points soon).

The model now under review is a “Portfolio” in place of a test. It would consist of: one “conference paper,” one “review essay,” and one teaching syllabus.

As someone who has tested as “learning disabled,” I was certainly happy to hear that we were moving away from the timed exam.

And yet, looking back at Kathleen’s arguments made me re-think how “great” the Portfolio model really would be. As a poet, I’m interested in creative + critical teaching and practice… in building new “forms.” I’ve never written a review essay, and I’ve never attended an academic conference. I always worried that my lack of desire to do so would prevent me from getting my degree. But maybe I’m right: as Kathleen prescribes, we should be focusing more on the “process” of research, rather than the finished “product” (the review / conference papers). Maybe those are obsolete forms – forms that work towards the obsolete academic dissertation – which in turn work toward the obsolete academic book. Or am I just screaming in my head, “Don’t make me write a conference paper! I’m just a poet! Get me out of academia now!”

I have two answers to these questions. The first is: great, I finally have some smart argumentative backing (from Kathleen’s book, and our DH discussions all semester) to encourage my program to move away from the purely academic model of scholarship that is merely required, rather than wanted or needed. The second is: rather than wasting my time worrying that “pure academia” would come to get me, I should believe that I can actually interrogate these forms to create the type of work I want to do and see.

If we are given the Portfolio model, I have options, not limits. I can write, let’s say, an open-access review essay. I can work collaboratively with other thinkers, perhaps even non-academic thinkers, online. I can write a conference paper both “about” and “demonstrating” joint creative and critical practice, and thereby question the form of the “paper” itself. I can certainly be grateful that I don’t have to spend all summer sweating about “failing” a biased timed exam, and that I didn’t go to Harvard. Most importantly, I can consider whether, by fixing the broken parts of a broken machine (rather than throwing them all away out of frustration, fear, and anxiety), the machine might eventually start running well again; running somewhere new.

Teaching and Learning with Blogs

Kathleen Fitzpatrick’s emphasis on the importance of blogs in the maintenance, creation, and development of critical thought and academic communities led me to consider the function of “blogs” in academic teaching, particularly in first-year writing.

I’ve taught writing courses and seminars using a WordPress blog for 3 or 4 semesters now (funny that I can’t really remember) – – – and have always struggled to get my students to use it. I’ve even struggled to get them to join it. Part of the issue was, clearly, the fact that I didn’t really know how to use these blogs myself – – – at least not “optimally.” (I’m hoping to attend a WordPress workshop before teaching again next year!)

But another part of the problem seems to run deeper – even, as Kevin pointed out on Monday – in our own DH Praxis class.

Along with student resistance to engaging with new (or really, unknown and thus intimidating) technical skills, the problem seems to be linked to the fear of exposing oneself online (as discussed last night). Exposing oneself in writing (which we are taught must be perfect, or precious), exposing oneself in permanence (rather than aloud, with no recording), and exposing oneself in front of peers and teacher(s), who might pass judgement for all sorts of reasons (this post is too long (I know), this post is too academic or too casual, this post is too short, this post is offensive, this post is irrelevant etc.)…

I myself have struggled to post on this blog, and this of course feeds my interest in the matter. Why? Perhaps it’s because, when I asked my own students to “post on the blog before every class,” it led to a very difficult classroom situation. We all ended up repeating the same ideas over and over again. Because of this experience, I may have some illogical fear of being somehow forced to repeat myself, or to choose between ideas expressed “in class” and “on the blog.”

I admit that I have, at times, withheld a thought in class, deeming it “better for the blog.” I decide that I need more time to think it out; that I can express it better in writing. I’ll take copious notes, then go home with every intention of posting my thoughts. But then when I type it out, I get in over my head. Is the comment still relevant, has it become too heavy, long, or intricate in writing… too “developed”? Not blog-worthy. Turns out that if you “hide” a thought in order to work it out alone, expressing it can become a far more difficult task. I think this speaks to Kathleen’s ideas of being transparent rather than hidden, thinking and writing “in real time” rather than in time… delays.

I wonder how we can make classrooms – and academic communities – work both “in person” and “online.” How do you teach effectively both in person and with a blog? Matt & Kevin’s suggestion – to post on this blog only 4 times, on subjects that are not often addressed in class discussions – is a far better model than the ones I have used in my own classes. I’m definitely going to try to take this strategy to my writing classes. I’ve addressed the classroom community with “real time” “draft workshops” for each student’s paper, but I’d love to create an online community for the students to communicate about undiscussed topics, too. Perhaps the “draft workshop” can even go online. I see some connections here.

And as for the (serious) issue of self-consciousness in “public,” in writing, or “online”: that’s probably just a matter of getting used to the blog form. I still have far to go as both a student and a teacher – – –

On reading well, once again

I really enjoyed this week’s readings: Kathleen Fitzpatrick’s Planned Obsolescence: Publishing, Technology, and the Future of the Academy and select essays from Hacking the Academy: New Approaches to Scholarship and Teaching from Digital Humanities, an edited collection by Dan Cohen and Tom Scheinfeldt. For me, the readings really made sense. What do I mean by that? Well, I think I got what DH is! It only took me a semester, but it finally happened.

If I had to name a common theme for the week, it would be “Journals as Curators.” I like the metaphor, for each gallery space needs a curator, and journals can play this role now that scholarship is taking a digital turn. There is urgency to digitize the work humanists do. And this does not mean uploading a PDF of your article to an online journal. It means uploading your work to an open and free journal in a format that allows for interaction among readers, reviewers, and authors. This way, the article becomes a work in progress that improves as new perspectives are considered: arguments strengthened, the total body of knowledge made healthier. Why would anyone object!? (But then again, would I really like my BA thesis to be a continuous work in progress after I submitted it to my adviser? I don’t think so.)

Michael O’Malley’s “Reading and Writing” was memorable because of the author’s humor. O’Malley’s stylistic choices make the tough love he’s giving humanists as easy to swallow as gummy-bear vitamins. He points to the disconnect between the way we are taught to read and the way we are taught to write. As readers, we are trained to read more in less time, to acquire the skill of finding the main argument from a fraction of the book. Writing, however, is an art form we must perfect, turning out draft after draft.

Dan Cohen & Roy Rosenzweig argue in the introduction to Digital History: A Guide to Gathering, Preserving, and Presenting the Past on the Web that our reading habits are disrupted once the content moves online. There are no pages to flip and, to me, it is much harder to assess a reading on the screen than on a printout, for example. I recall authors’ arguments by their geographical position on the page, which is impossible when scrolling down an endless page.

In writing, on the other hand, we must take things to the next level. Things that can be said in layperson’s language are translated into jargon, making the arguments inaccessible. Is it for building an air of credibility? Or, as John Unsworth claims in “The Crisis of Audience and the Open-Access Solution,” is humanities scholarship intentionally obscure? Are some things impossible to say without using words in their third or fifth dictionary sense?

And how do we heal the diametrical split in our approaches to reading and writing?

Deformance / Hypertext Project

This is a sort of two-pronged post, addressing Matt’s question towards the end of last class, re: how the readings / class discussions are helping me think more about my data (or final) project.

I’m really interested in the ideas and examples of “deformance” (in Jerome McGann’s definition = interpretation + performance) that have come up recently, especially and most recently in Lev Manovich & Kevin’s digital work. I suppose I think of “deformance” as a way of turning art into new art… the purpose of which goes beyond just “playing around” and being creative (a good purpose in itself): as Kevin pointed out, it lets you ask questions of the “data” (the art, or the world in which it was produced) that you wouldn’t have known to ask before. Disordering the work of art (text, photo, or film) in order to change its questions, its answers, its “rules.” I have also been interested in the way that digital “deformance” tends to produce “aesthetically pleasing” results – Kevin’s and Lev’s work simply looks good, and I’d love it if one of my projects in this course (i.e., a project fully executed) could aspire to that type of artistic attention (which seems to derive from direct intention + skill + a level of pure play or “accident”).

Along these lines, it is now my intention to do a “deformance” project that is focused on my own writing / creative process. That is, rather than trying to uncover and work with the huge and somewhat impossibly impenetrable “data set” I previously proposed (Appropriation in Contemporary Poetry), I would like to either:

  • 1 – Make a digital hypertext edition of my book manuscript (Babette, published in print this month), adding one or more layers of text to discover more information about the language on the page. This may include anecdotes, links, or perhaps even other “poems” that seem to enrich, deconstruct, or disorder the present text. Thus the “data set” would be the original text (+ the new text?). I would like this hypertext edition to move the reader away from the “search” (for meaning) and towards the “browse” function, revealing both writing and reading as dynamic, non-linear, and layered, with interconnected information and experiences. On that note, a final goal would be to open the text to “community, relationship, and play” (Stephen Ramsay) by allowing “users” to add their own interpretations, experiences, links, etc. (though I understand this might be beyond the scope of this project).

OR

  • 2 – Create a digital hypertext edition of my three published manuscripts (Babette, Parades, and Latronic Strag) and do a data visualization of the neologisms I’ve used in these works. The “data set” would thus be these neologistic words, about which I could ask starting questions such as: how often do they appear in each book? How much do they sound like one another? How closely are they “related” to each other (by the computer’s definition), and how closely are they “related” to “real” words? What words do they associate with in my mind (or the computer’s, or in the minds of other readers)? What “real” language do they sound like, and is there some sort of neologistic conversation going on between the words, phrases, poems, and manuscripts? (A rough sketch of the first two questions follows this list.) Again, the aim would be to use the language as data to “browse” for new questions about the text, rather than “search” for answers, and one ultimate goal would be to have the project allow “users” to add in their own experience of these words (creating more data).
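To make those first two questions concrete, here is a minimal Python sketch of how the counting and similarity measures might start. It assumes the three books exist as plain-text files and that the neologisms have been listed by hand; all file names and the word list below are invented placeholders, not my actual data.

```python
import re
from collections import Counter
from difflib import SequenceMatcher

# Hypothetical inputs: plain-text files of the three books and a
# hand-compiled list of neologisms (file names and words are invented).
BOOKS = {
    "Babette": "babette.txt",
    "Parades": "parades.txt",
    "Latronic Strag": "latronic_strag.txt",
}
NEOLOGISMS = ["strag", "latronic", "babettian"]  # placeholder list

def tokenize(path):
    """Lowercase word tokens from a plain-text file."""
    with open(path, encoding="utf-8") as f:
        return re.findall(r"[a-z']+", f.read().lower())

# Question 1: how often does each neologism appear in each book?
for title, path in BOOKS.items():
    counts = Counter(tokenize(path))
    print(title, {w: counts[w] for w in NEOLOGISMS})

# Question 2: how much do the neologisms resemble one another?
# (Character-level similarity is a crude stand-in for "sound"; a real
# attempt might use a phonetic algorithm such as Metaphone.)
for i, a in enumerate(NEOLOGISMS):
    for b in NEOLOGISMS[i + 1:]:
        print(a, b, round(SequenceMatcher(None, a, b).ratio(), 2))
```

Even this toy version makes the “browse” spirit visible: the numbers aren’t answers so much as prompts for new questions.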

Allowing others to add reactions, data, or personal experience is one way for me to get away from the fear that this would be a “vanity project” (in which the data in the set is simply my own data). Another way would be to see this project as a starting point for hypertext-ing or disordering other texts, texts that are not my own. Perhaps I see this project as one that might move me closer to that more “research”-like or scholarly question of how language is appropriated or repurposed in contemporary poetry.

As for creating a “digital edition” of one (or more) of my books, I found a tool called Ediarum on the DIRT site, which claims to help authors “transcribe, encode, and edit” manuscripts.

As for the second (and I’d imagine, more fun and elaborate) task of “hypertexting” the book(s), I had to do a little more research to see what’s out there, and where it’s coming from. What “kind” of hypertext am I looking to produce? Based on the Wikipedia definitions of “forms of hypertexts,” I’d surely like to create something that is “networked,” i.e. “an interconnected system of nodes with no dominant axis of orientation… no designated beginning or designated ending.” And, if I wanted to be able to add that user interaction, I’d want something “layered”: a structure with two layers of linked pages in which readers could insert data of their own.
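As a thought experiment, a “networked” hypertext can be modeled as nothing more than nodes and links, with any node serving as an entry point. This toy Python sketch (node names and contents are invented) is only meant to make the Wikipedia definition concrete, not to stand in for any of the tools below:

```python
# Toy model of a "networked" hypertext: nodes link to one another with
# no designated beginning or ending. Node names are invented examples.
hypertext = {
    "poem-1": {"text": "…", "links": ["anecdote-a", "poem-2"]},
    "poem-2": {"text": "…", "links": ["poem-1"]},
    "anecdote-a": {"text": "…", "links": ["poem-2", "reader-note-1"]},
    "reader-note-1": {"text": "…", "links": []},  # the "layered" part: user-added
}

def browse(node, depth=2, seen=None):
    """Follow links outward from any entry point: browsing, not searching."""
    seen = seen or set()
    if node in seen or depth < 0:
        return
    seen.add(node)
    print(node)
    for nxt in hypertext[node]["links"]:
        browse(nxt, depth - 1, seen)

browse("anecdote-a")  # any node can be the start
```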

Searching for tools to create networked / layered hypertext led me to two options on DIRT: Mozilla Thimble and TiddlyWiki. (It also led me to investigate what software is or has been available for hypertext, starting with Ted Nelson’s Project Xanadu and ending, it seems, with the popular (and expensive, at $300) program from Eastgate called Storyspace, neither of which I think will be very helpful.)

I’d love any thoughts on which project (1 or 2) seems more interesting, appropriate, or feasible for this project… I’m going to make an appointment with the Digital Fellows to get their advice (and guidance on the tools).

Thanks!

– Sara


Sculpting in Time: Andrey Tarkovsky’s Individual Shot

“History is still not Time; nor is evolution. They are both consequences. Time is a state: the flame in which there lives the salamander of the human soul.” – Andrey Tarkovsky, from Sculpting in Time


As the semester has progressed, I’ve found, each week, that my sense of what interests and excites me about the current DH landscape is becoming richer, more honed, and more focused. Even as I marvel at, and have great respect and admiration for, the large-scale digital analysis going on in the realm of social media scraping and big-data crunching, I keep finding my way back toward the idea of “distortion” and deformance as a research method and outcome in DH. Jotting down notes that might help me find my way toward a generative approach to DH scholarship, I’ve pulled a surprising combination of books from my shelf for inspiration: John Berger’s And Our Faces, My Heart, Brief as Photographs, In Praise of Shadows by Junichiro Tanizaki, Joseph Campbell’s The Inner Reaches of Outer Space, and On Weathering: The Life of Buildings in Time by Mohsen Mostafavi, to name a few. My instincts keep drifting toward the aesthetic, and remembering a point Matt made in class about some DH practitioners creating imperfect 3D-printed objects as teaching tools, I scoured the internet for DH studies in materiality.


Still taken from Andrey Tarkovsky’s film Nostalghia

In mulling over the DH landscape gradually examined in our readings and class discussions, I’ve found, in a way, that I’ve been chasing myself. In a course focused less on subject matter and more on methodology and approach, I’m forced to burrow down into what really motivates me in my learning. This week, I was blown away by the work that Kevin is doing with ImageJ. I’m excited to the point of jumping the gun on this blog post when I think about the potential projects that might come about from building and theorizing around the ImageJ software. Today, I downloaded the ImageJ package for Mac OS X (Java app); I’ll begin experimenting with it in the coming weeks. For my initial data project, I would like to analyze a bundle of still frames from a film by Andrey Tarkovsky (The Mirror, or The Sacrifice, or potentially Stalker). Alternatively, and somewhat thematically, given the flame-of-time image in the epigraph to this blog post, I would love to analyze the nearly 10-minute sequence in Nostalghia where the film’s protagonist walks the length of a long fountain, all the while shielding the flame of a candle he cradles inside his overcoat to keep it from going out. I hope for this data exercise to be an entryway into a more focused thesis for a larger project (not necessarily of a similar vein, but definitely derivative of what I learn in the process). I’m currently reading Tarkovsky’s cinema-theory monograph, Sculpting in Time. What I hope to learn from this initial data project, which analyzes an iconic yet not widely viewed director, is modest: in addition to creating an outcome that can visualize the “sculpting” in time of Tarkovsky’s films, I hope to get a sense of his sculpting of time by using the ImageJ Stacks menu.

I’ve always loved Tarkovsky’s films, even though, at times, I find them difficult to watch. They’re ghostly, beautiful, and most often, mysterious. Tarkovsky’s rhythmic examinations of nature and landscape of all types and scales, with slow, drawn-out single shots that seem to extend far longer than their actual temporal length, have always contrasted with the contentious, even dangerous political climate of the Soviet Russia in which they were created. I hope to take on this small-scale project as a way to delve deeper into a subject that stimulates me. And from there, I will turn my gaze toward the DH community at large to try to locate gaps in the collective methodological toolbox, or places from which I can propose and launch a meaningful contribution on a larger scale.

Next step: capture and organize a bundle of still frames, and get my feet wet using ImageJ Stacks.
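Since I haven’t used ImageJ yet, here is only a rough sketch of how I might capture that bundle of stills with Python and ffmpeg (assuming ffmpeg is installed; the file name and timecodes below are placeholders, not the actual boundaries of the sequence). ImageJ can then load the numbered PNGs as a stack via File > Import > Image Sequence.

```python
import os
import subprocess

os.makedirs("frames", exist_ok=True)

# Extract one still per second from the candle sequence. Placeholders:
# the source file name and the start time / duration of the sequence.
subprocess.run([
    "ffmpeg",
    "-ss", "01:58:00",       # assumed start of the sequence
    "-t", "600",             # roughly ten minutes
    "-i", "nostalghia.mp4",  # placeholder file name
    "-vf", "fps=1",          # one frame per second -> ~600 stills
    "frames/frame%04d.png",
], check=True)
```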

Data Project: Reading Transnationalism and Mapping “In the Country”

Last week, we discussed “thick mapping” in class using the Todd Presner readings from HyperCities: Thick Mapping in the Digital Humanities, segueing briefly into the topic of cultural production and power within transnational and postcolonial studies (Presner 52). I am interested in what the investigation of cultural layers in a novel can reveal about the narrative, or, in the case of my possible data set, In the Country: Stories by Mia Alvar, a shared narrative among a collection of short stories, each dealing specifically with transnational Filipino characters, their unique circumstances, and the historical contexts surrounding these narratives.

In the Country contains stories of Filipinos in the Philippines, the U.S., and the Middle East, some characters traveling across the world and coming back. For many Overseas Filipino Workers (OFWs), the expectation when working abroad is that you will return home permanently upon the end of a work contract or retirement. But the reality is that many Filipinos become citizens of and start families in the countries that they migrate to, sending home remittances or money transfers and only returning to the Philippines when it is affordable. The creation of communities and identities within the vast Filipino diaspora is a historical narrative worth examining and has been a driving force behind my research.

For my data set project, I hope to begin by looking at two or more chapters from In the Country and comparing themes and structures using Python and/or MALLET. The transnational aspect of these short stories, which take place in locations that span the globe, adds another possible layer of spatial analysis that could be explored using a mapping tool such as Neatline. My current task is creating the data set – if I need to convert it, I could possibly use Calibre.
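As a starting point, here is a hedged sketch of what the MALLET pass might look like, driven from Python. It assumes MALLET is installed with its bin/mallet script at the path shown, and that each story or chapter has been saved as its own .txt file in a chapters/ folder; all paths and the topic count are placeholders, not settled choices.

```python
import subprocess

# Step 1: import the chapter files into MALLET's binary format,
# keeping word order and dropping common stopwords.
subprocess.run([
    "bin/mallet", "import-dir",
    "--input", "chapters",            # one .txt file per story/chapter
    "--output", "chapters.mallet",
    "--keep-sequence", "--remove-stopwords",
], check=True)

# Step 2: train a topic model and write out human-readable summaries.
subprocess.run([
    "bin/mallet", "train-topics",
    "--input", "chapters.mallet",
    "--num-topics", "10",                     # a starting guess
    "--output-topic-keys", "topic-keys.txt",  # top words per topic
    "--output-doc-topics", "doc-topics.txt",  # topic mix per chapter
], check=True)
```

Comparing the rows of doc-topics.txt across chapters would be one crude way to see whether the transnational stories share thematic structure.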

On reading

Several things stood out for me in Stephen Ramsay’s essay “The Hermeneutics of Screwing Around; or What You Do with a Million Books.” The most significant was the question of where the anxiety to read everything comes from. Ramsay says it started in the 15th century, around the time the Gutenberg press was introduced to Europe. Since then, many philosophers have agonized over the ever-growing number of books they could not possibly read. Referencing Margaret Cohen and what she calls “the great unread,” Ramsay pokes fun at the way we talk about the literary canon and its supposed inclusivity and representation of the field: “But in the end, arguments from the standpoint of popularity satisfy neither the canoniclast nor the historian. The dark fear is that no one can really say what is ‘representative’ because no one has any basis for making such a claim.”

Ramsay proposes different options. He quotes Martin Mueller and his suggestion to “stop reading” once you’ve identified the location of the book in the network of transactions “that involve a reader, his interlocutors, and a ‘collective library’ of things one knows or is supposed to know.” Responding to Mueller’s point, I jotted down in bright blue pen, “but don’t you miss nuance!?” On my second reading of Ramsay’s essay, I noted, “What is the point of reading: is it just to locate the book in the network of transactions and talk about it to others, or is it to learn and enjoy?” Is the intricate detail of a human story of any interest to us when we read only up to the point of locating the book in its network of transactions? For example, once we learn that the lead character Sally has favorable chances of hooking up with her object of affection Mary, should we just stop reading?

Another option is to read books from compilations such as “Top 100 Novels of All Time.” And although the feasibility of a canon is questionable, many do follow these lists as a way to combat the anxiety of missing out. So much so that NPR ran a story titled “You Can’t Possibly Read It All, So Stop Trying,” in which the guest, Linda Holmes, recommended strategies and coping techniques. But I have a question: whom do we regard as the authority over what makes it into the canon? In class, a colleague brought up that editors do the sorting for us when they accept or refuse manuscripts. But who are the editors, and what are their standpoints and biases? Following the network associations, are the manuscripts closest to the culturally dominant network of transactions more likely to become books?

Yet another option is Franco Moretti’s approach: the simultaneous reading of thousands of novels, assisted by computer technology. Except that there won’t be reading per se, but counting, graphing, and mapping. Since we cannot read even a fraction of all the books out there, why not analyze them and see what stands out? Matthew Jockers also came up with a way to identify “six, or possibly seven, archetypal plot shapes.” Although his methods were questioned in public forums, his laborious attempt at condensing thousands of novels into six or seven plotlines demonstrates the strength of the desire to read it all. I wonder: are these questions being brought up by folks who were around prior to the ubiquity of computers and actually read many books cover to cover, so that now they don’t mind missing out on the nuance?

Then again, why is there so much anxiety over reading it all? The shame of being “caught unaware” of a book’s plot while mingling over wine and hors d’oeuvres? Because if the goal were a shared knowledge base for meaningful participation in the public square, wouldn’t reading the books assigned in K-12 and college be sufficient? Nowadays it seems most of our knowledge comes in the form of visual media or lists of top things curated by BuzzFeed or our Facebook network.

Ramsay eloquently concludes, “Your ethical obligation is neither to read them all nor to pretend that you have read them all….” but to appreciate the process of discovery. Agreed!

Digital Dorothy

As I described in the last class, I’m going to use a data set that is a text.  At first, I wanted to create a “diachronic” map of a particular place—the English Lake District—which is a popular destination for hikers, walkers, photographers, and Romantic literature enthusiasts. This last category also includes a great many Japanese tourists.

My first plan was to create a corpus of 18th- and 19th-century poetry and prose related to the Lake District (read: dead white males), explore the way landscape was treated, map locations mentioned in these texts or create a timeline, and then add excerpts of text along with present-day visual data.

For the present-day component, I was thinking about how to scrape and incorporate data and photos from Flickr and Twitter that were tagged with the names of local landmarks and landscape features of the area.


An image from Mapping the Lakes in Google Earth

Early on, I discovered Mapping the Lakes, a 2007-2008 project (apparently still in pilot phase) at Lancaster University that uses very similar strategies to explore constellations of spatial imagination, creativity, writing, and movement in the very same landscape. From the pilot project:

The ‘Mapping the Lakes’ project website begins to demonstrate how GIS technology can be used to map out writerly movement through space. The site also highlights the critical potentiality of comparative digital cartographies. There is a need, however, to test the imaginative and conceptual possibilities of a literary GIS: there is a need to explore the usefulness of qualitative mappings of literary texts… digital space allows the literary cartographer to highlight the ways in which different writers have, across time, articulated a range of emotional responses to particular locations … we are seeking to explore the cartographical representation of subjective geographies through the creation of ‘mood maps’.

The interactive maps are built on Google Earth; therefore, don’t try to view this in Chrome. You can also use the desktop version of Google Earth. The project is quite instructive in its aims as well as its faults and failures, and the process and outcomes are described on the website. (Actually, the pilot project might be a very good object lesson on mapping creative expression with GIS.)

However, if you’re interested in this kind of mapping, you should take a look at the Lancaster team’s award-winning research poster on their expanded Lakes project.

I wrote to one of the authors to ask her about it (methodology, data set, etc.). She was happy to respond, and was encouraging. Although the methodology is way beyond my technical chops at present, she referred me to a helpful semantic text-tagging resource that they used, which I’m sure will come in handy at some point.

After some floundering around, I defined a data set and project that is challenging but more manageable. It will involve a map and one text: an excerpt of Dorothy Wordsworth’s journals from 1801 to 1803, not long after the second edition of Lyrical Ballads was published and she and her brother moved to the area with their friend Samuel Taylor Coleridge.

The journals are a counterpoint to William Wordsworth’s early poetry, in that she kept them as much for her brother as for herself, recording experiences they had together and personal observations that she knew would inspire him, to provide the raw material for his poems. There is an established, if not extensive, body of scholarship on the subject. She even describes this collaborative process in her journal, although it’s not called collaboration there, and until more recently it wasn’t characterized as such by critics.

To prepare the data set, I downloaded the text file of the most complete edition of her journals from Project Gutenberg, took out everything not related to this time period, and did a lot of “find and replace” work to remove extra spaces, stray characters, and editorial footnotes, standardize some spellings, and expand initials to full names. Following the advisories on the semantic-tagging and corpus-analysis sites, I also saved the file in both ASCII and UTF-8 text formats, with line breaks. (This may or may not prove necessary, depending on the tools I use later on.) I have considered using a concordance tool of some kind (like AntConc) to visualize the connections between the journals and the poems, since I don’t think that has been done. However, this would entail creating a second data set from the book of poems, and it’s a secondary interest.
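For anyone curious, that “find and replace” pass can also be scripted. This is a rough Python sketch of the kind of cleanup involved, not my exact steps; the file names and the specific patterns are examples only.

```python
import re

# Example cleanup pass over the Gutenberg plain text, already trimmed
# to the 1801-1803 entries (file names are invented).
with open("dw_journals_raw.txt", encoding="utf-8") as f:
    text = f.read()

text = re.sub(r"\[\d+\]", "", text)           # strip footnote markers like [12]
text = re.sub(r"[ \t]{2,}", " ", text)        # collapse runs of spaces
text = re.sub(r"\bWm\b\.?", "William", text)  # expand initials (one example)
text = text.replace("to-morrow", "tomorrow")  # standardize a spelling (example)

with open("dw_journals_clean.txt", "w", encoding="utf-8") as f:
    f.write(text)

# An ASCII copy as well, for tools that choke on UTF-8:
with open("dw_journals_clean_ascii.txt", "w", encoding="ascii",
          errors="ignore") as f:
    f.write(text)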

My primary goals are these:

  • I’m hoping this project will confirm or complicate existing assumptions about Dorothy and her journals, which until now—as far as I know—have only been developed through close reading, not visualization.
  • Using this text, I want to map her life in the Lake District during this period – socially, physically, and emotionally. (In her brother’s case, his poetry does a good job of that, and stacks of books have been written about his relationships to other people, women, landscape, time, etc.)
  • I want the map to be interactive to some degree, so that users can trace these different aspects of her life geographically, by clicking on related keywords. Ideally, I would like to include supplementary images—paintings, engravings, and portraits—that were created in the era, to provide a contemporaneous visual component. Including related excerpts of journal or poetic text would also be helpful: it would be a means of mapping her creativity, in a way. A similar map of William Wordsworth‘s creativity exists. It is more extensive but not very user-friendly.

On the cartographical front, I have been considering CartoDB and Mapbox. I also looked at the British Ordnance Survey topographical map of the area, which, like all the Ordnance maps, is now online. The OS website includes a feature similar to Google Maps, whereby you can personalize maps to some degree and connect text and image data. Of course, Google Earth can be used this way too. Mapbox has nice backgrounds, but fewer options. CartoDB is visually pleasing, versatile, and allows for more elegant “pop-ups,” which I could use to include bits of text, images from the time, etc. But it can’t be embedded into a webpage. As they come into focus, the project goals will ultimately determine what I use.

In the meantime, I’m using Voyant to explore the text/data set. It is a great resource for defining the parameters of a more focused project. You can see what I’m working with here. Eventually I will geocode the locations, either by hand or via Google Maps; input location data, temporal data, and data about her social interactions (all in the text) into a CSV file that can be uploaded into a mapping program; and figure out how to connect everything. (Or I will die trying.) I also plan to study the new and improved “Mapping the Lakes” project more carefully, for ideas on how best to present my own, less ambitious project.
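Here is a minimal sketch of what the geocoding step could look like, using the geopy library’s Nominatim geocoder as one option (not necessarily what I’ll settle on; the short place list is a stand-in for the names actually pulled from the journal, and geopy would need to be installed first).

```python
import csv
import time
from geopy.geocoders import Nominatim  # one real option among several

# Stand-in list: the real one would come from the journal text.
places = ["Grasmere", "Rydal", "Ambleside", "Keswick"]

geolocator = Nominatim(user_agent="dorothy-wordsworth-map")

with open("locations.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["place", "lat", "lon"])
    for name in places:
        loc = geolocator.geocode(f"{name}, Cumbria, England")
        if loc:
            writer.writerow([name, loc.latitude, loc.longitude])
        time.sleep(1)  # Nominatim's usage policy asks for ~1 request/second
```

The resulting CSV is the sort of file CartoDB or Mapbox can ingest directly; temporal and social-interaction columns could be added to the same rows later.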

Along the way I’ve encountered some other software that may be useful for those of us who like working with olde bookes: VARD is a spelling-normalization program for historic corpora (also from Lancaster; it requires permission to download, but that is easy to get).

That is all.

Impediments to Digital History

Dr. Stephen Robertson’s essay “The Differences between Digital History and Digital Humanities” engages many thought-provoking points; foremost in my mind are the challenges surrounding access to information. For all of the debates concerning what exactly DH is doing and what unique qualities it brings to the table, if the table is locked away, we’re left standing. That intellectual property, digital or otherwise, is often protected as a commodity is an intuitive reality. Access to both print and digital subscriptions to academic journals, by and large, entails substantial fees. Some of the fully digital tools have locked-down APIs. That being said, the academy has a unique responsibility, even an existential one, to facilitate open access to ideas and information. Instead of going into the reasons why, I wanted to write a quick conversation starter that might point at one of many potential solutions.

Piggybacking off of a recent development in new-media reporting, I can envision a mainline channel into the second-largest repository of intellectual property in the world (although at this point not the most technically agile, a separate problem): the Library of Congress. The StoryCorps audio archive has been preserved at the Library of Congress for years. However, the StoryCorps mobile application is new, and potentially transformative. Although in the past, facilitators from StoryCorps would sit down with interviewers and interviewees to help them record their stories, this was time-consuming, costly, and largely inefficient. The StoryCorps application allows users with smartphones to record and upload interviews that will be digitally preserved at the LoC, instantly, at the push of a button.

My question: how can the Library of Congress, a taxpayer-funded institution, be used from the outset to facilitate open access to new, digitally-born intellectual property? Can we look toward partnerships between a current or yet-to-be-created government entity and privately funded organizations like StoryCorps as a model for guardianship of ideas? What would such a partnership look like? What would we call it?

Response to Impediments to Digital History (forum post)

See Taylor’s forum post

There are some models for open-access peer-reviewed work (which I mentioned in a forum post last week) that, if they became standard practice for humanities publishing, would address some of the issues you bring up. In the sciences, PLOS ONE seems to have achieved the tricky balance of maintaining open access to intellectual property and its status as a forum for sound, “legitimate” research.

But, as Taylor points out, it’s not just about access; it’s about money. What are viable funding models for open-access publishing? PubMed is a publicly funded (NIH) clearinghouse for research in the health sciences. But it’s highly unlikely that public funding would sustain open-access publication in the humanities: 1) public money is scarce (even for the sciences these days); 2) public money isn’t necessarily managed or spent well, and spending decisions are often highly idiosyncratic, depending on who’s making them (exemplified by the Library of Congress controversy); 3) unlike scientific research, the humanities don’t have the promise (at least in theory) of a “final product” that can be marketed for profit. Their only product, other than scholarship and scholarly engagement, is experiential: this requires interactive public engagement, which requires that the public is interested, which requires that the public is aware of its existence. And that is the only outcome justifying public (or much private) funding in the humanities these days. Put another way, how does a Kickstarter campaign to digitize an archive of crumbling, century-old Haitian newspapers with immense value to Francophone historians compete with a campaign to support an independent documentary feature film?

Possibly the LoC could connect with, or help to establish partnerships with, organizations similar to PLOS ONE. There are existing open-access humanities and multidisciplinary networks established in Europe, like the Directory of Open Access Journals, a non-profit that survives through corporate sponsorship and membership fees. Matt mentioned the UK’s Open Library of Humanities. It is also an independent non-profit, and presumably has both government and private sponsors.

It’s not really a “crowdsourced” publication model like those I mentioned above, but the Smithsonian Folkways project is an interesting example. (The Smithsonian acquired Folkways Records after Moses Asch died so that this material would be preserved.) It operates as a commercial-private venture and receives no public funding. It’s not really a forum for hard-core academic scholarship, but it has a wealth of artifacts and information on ethnomusicology and music history. A good deal of its offerings are fee-based, but it also offers free access to playlists, podcasts, and teaching tools, some free downloads, and lots of information on American and world folk music. Its collection and archive are open to researchers; if these alone were made freely accessible off-site, they would be an invaluable public resource. In certain ways, it looks like a large-scale model of the one used by the American Social History Project at the GC.