Author Archives: Taylor Dietrich

Sculpting in Time: Andrey Tarkovsky's Individual Shot

“History is still not Time; nor is evolution. They are both consequences. Time is a state: the flame in which there lives the salamander of the human soul.” – Andrey Tarkovsky, from Sculpting in Time


As the semester has progressed, I've found, each week, that my sense of what interests and excites me about the current DH landscape is becoming richer, more honed, and more focused. Even as I marvel at, and have great respect and admiration for, the large-scale digital analysis going on in the realm of social media scraping and big data crunching, I keep finding my way back toward the idea of "distortion" and deformance as a research method and outcome in DH. Jotting down notes that might help me find my way toward a generative approach to DH scholarship, I've pulled a surprising combination of books from my shelf for inspiration: John Berger's And Our Faces, My Heart, Brief as Photographs, In Praise of Shadows by Junichiro Tanizaki, Joseph Campbell's The Inner Reaches of Outer Space, and On Weathering: The Life of Buildings in Time by Mohsen Mostafavi, to name a few. My instincts keep drifting toward the aesthetic, and remembering a point Matt made in class about some DH practitioners creating imperfect 3D-printed objects as teaching tools, I scoured the internet for DH studies in materiality.


Still taken from Andrey Tarkovsky's film Nostalghia

In mulling over the DH landscape gradually examined in our readings and class discussions, I've found, in a way, that I've been chasing myself. In a course focused less on subject matter and more on methodology and approach, I'm forced to burrow down into what really motivates me in my learning. This week, I was blown away by the work that Kevin is doing with ImageJ. I'm excited to the point of jumping the gun on this blog post when I think about the potential projects that might come about from building and theorizing around the ImageJ software. Today, I downloaded the ImageJ package for the Mac OS X Java app, and I'll begin experimenting with it in the coming weeks. For my initial data project, I would like to analyze a bundle of still frames from a film by Andrey Tarkovsky (The Mirror, The Sacrifice, or potentially Stalker). Alternatively, and somewhat thematically, as the flame of time indicates in the epigraph to this blog post, I would love to analyze the nearly ten-minute sequence in Nostalghia where the film's protagonist walks the length of a long fountain, all the while shielding the flame of a candle he cradles inside his overcoat to prevent it from going out. I hope for this data exercise to be an entryway into a more focused thesis for a larger project (not necessarily in a similar vein, but definitely derivative of what I learn in the process). I'm currently reading Tarkovsky's cinema theory monograph, Sculpting in Time. What I hope to learn from this initial data project, which analyzes the films of an iconic yet not widely viewed director, is modest. In addition to creating an outcome that can visualize the "sculpting" in time of Tarkovsky's films, I hope to get a sense of his sculpting of time by utilizing the ImageJ Stacks menu.
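As a first sanity check on what a Stacks-based analysis might involve, here is a minimal sketch in plain Python (not ImageJ itself) of the mean z-projection that ImageJ performs under Image > Stacks > Z Project. The tiny hard-coded frames are placeholders standing in for real stills captured from the candle sequence:

```python
from statistics import mean

# Toy "stack" of three 2x2 grayscale frames (0-255 pixel values),
# standing in for still frames captured from the film. Real frames
# would be loaded with an imaging library; ImageJ performs the same
# operation via Image > Stacks > Z Project (Average Intensity).
frames = [
    [[10, 200], [30, 40]],
    [[20, 210], [50, 60]],
    [[30, 220], [70, 80]],
]

def z_project_mean(stack):
    """Average each pixel position across all frames in the stack."""
    height, width = len(stack[0]), len(stack[0][0])
    return [
        [mean(frame[y][x] for frame in stack) for x in range(width)]
        for y in range(height)
    ]

projection = z_project_mean(frames)
```

Bright, stable regions (like a candle flame held steady) survive the averaging, while transient motion blurs away, which is one way to visualize "sculpting" across a shot's duration.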

I’ve always loved Tarkovsky’s films, even though, at times, I find them difficult to watch. They’re ghostly, beautiful, and most often, mysterious. His rhythmic examinations of nature and landscape of all types and scales, rendered in slow, drawn-out single shots that seem to extend far longer than their actual temporal length, have always contrasted with the contentious, even dangerous political climate that existed in Soviet Russia during the time in which they were created. I hope to take on this small-scale project as a way to delve deeper into a subject that stimulates me. And from there, I will turn my gaze toward the DH community at large to try and locate gaps in the collective methodological toolbox, or places from which I can propose and launch a meaningful contribution on a larger scale.

Next step, capture and organize a bundle of still frames. And get my feet wet using ImageJ Stacks.

You Are Listening To New York: Reflections on Open APIs

The Digital Fellows workshops here at the GC have far exceeded my expectations of what a two-hour seminar can be. There’s only so much technical material that can be absorbed in such a small window of time. That being said, the real strength of these workshops comes from the capable Digital Fellows leading the discussions, and the superb, thorough documentation they provide.

Out of the workshops I’ve attended thus far (Server Architecture, Introduction to Webscraping, etc.), I’ve found the Lexicon to be the most useful, as it touched, very briefly, on a range of DH tools and approaches. In fact, it was so successful in communicating an overview of the emerging field that it has thrown my dataset/final project planning for a loop (more on that in another blog post).

One fairly important aspect of DH project development glossed over during the Lexicon was the importance of open APIs. I wanted to share a project that uses open APIs to wonderful effect. The “You Are Listening To” project uses open APIs to curate an immersive user experience centered around a mashup of ambient music and real-time transmissions of police scanner and airwave communications from cities around the world. Check out this link for You Are Listening to New York.

What I like so much about this site is its simplicity. It’s an elegant digital curation of various streaming media. When you load the page, a JavaScript file pulls in an audio stream that provides the police radio feed. It also pulls up a SoundCloud playlist that has been screened by the site’s creator, Eric Eberhardt, to ensure that it only incorporates ambient, dreamy soundscapes that contrast with and complement the police scanner audio. Finally, it loads the page’s background image (of the user’s chosen city), which is pulled from Flickr’s API. This is all legal, free, and only possible because each of these companies made an effort to provide access to its site through simple web APIs.
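To make the pattern concrete, here is a rough sketch of how a page like this might assemble one of its API requests. The Flickr REST endpoint and the flickr.photos.search method are real, but the key and tags below are placeholders, and this is an illustration in Python rather than the site's actual JavaScript:

```python
from urllib.parse import urlencode

# Flickr's public REST endpoint; all methods hang off this one URL.
FLICKR_ENDPOINT = "https://api.flickr.com/services/rest/"

def flickr_search_url(api_key, tags):
    """Build a flickr.photos.search request URL for the given tags."""
    params = {
        "method": "flickr.photos.search",  # real Flickr API method
        "api_key": api_key,                # placeholder; register an app for a real key
        "tags": tags,
        "format": "json",
        "nojsoncallback": 1,               # return plain JSON, not JSONP
    }
    return FLICKR_ENDPOINT + "?" + urlencode(params)

url = flickr_search_url("YOUR_API_KEY", "new york,skyline")
```

Fetching that URL returns JSON metadata for matching photos, which a page can then render as a background; the SoundCloud and scanner feeds follow the same request-and-combine pattern.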

There’s also a ton of additional metrics in the website’s “i” info dropdown. It looks like it’s accessing Twitter and Reddit feeds, a geotracking tool to provide metrics about and for listeners, some Google reference info, and various news trackers.

Have a look!





Hypergraphy as a Garden of Forking Paths

In zeroing in on a specific dataset to begin with, building up toward a more fully conceived project for next spring, I’ve found it necessary to first demarcate my chosen subject matter. To work backwards, so to speak.

The prefix “hyper” refers to multiplicity, abundance, and heterogeneity. A hypertext is more than a written text, a hypermedium is more than a single medium. – Preface to HyperCities

Hypergraphy, sometimes called Hypergraphics or Metagraphics: a method of mapping and graphic creation used in the mid-20th century by various Surrealist movements. The approach shares some similarities with asemic writing, a wordless, open semantic form of writing (“asemic” literally means “having no specific semantic content”). Some forms of calligraphy (think stylized Japanese ink brush work) share a similar function, whereby the non-specificity leaves space for the reader to fill in, interpret, and deduce meaning. The viewer is suspended in a state somewhere between reading and looking. Traditionally, true asemic writing only takes place when the creator of the asemic work cannot read their own writing.



Jorge Luis Borges was an Argentine short-story writer, essayist, poet, translator, and librarian. A key figure in Spanish-language literature, he is sometimes thought of as one of the founders of magical realism. He notably went blind in the 1950s, decades before his death, and in his blindness he continued to dictate new works (mostly poetry) and give lectures. Themes in his work include books, imaginary libraries, the art of memory, the search for wisdom, mythological and metaphorical labyrinths, dreams, and the concepts of time and eternity. One of his stories, “The Library of Babel”, centers around a library containing every possible 410-page text. Another, “The Garden of Forking Paths”, presents the idea of forking paths through networks of time, none of which is the same, all of which are equal. Borges returns, time and again, to the recurring image of “a labyrinth that folds back upon itself in infinite regression” so we “become aware of all the possible choices we might make.”[88]

The forking paths have branches to represent these choices that ultimately lead to different endings.

Borges is also known for the philosophical term the “Borgesian conundrum”. From Wikipedia:

The philosophical term “Borgesian conundrum” is named after him and has been defined as the ontological question of “whether the writer writes the story, or it writes him.”[89] The original concept put forward by Borges is in Kafka and His Precursors—after reviewing works that were written before Kafka’s, Borges wrote:

If I am not mistaken, the heterogeneous pieces I have enumerated resemble Kafka; if I am not mistaken, not all of them resemble each other. The second fact is the more significant. In each of these texts we find Kafka’s idiosyncrasy to a greater or lesser degree, but if Kafka had never written a line, we would not perceive this quality; in other words, it would not exist. The poem “Fears and Scruples” by Browning foretells Kafka’s work, but our reading of Kafka perceptibly sharpens and deflects our reading of the poem. Browning did not read it as we do now. In the critics’ vocabulary, the word ‘precursor’ is indispensable, but it should be cleansed of all connotation of polemics or rivalry. The fact is that every writer creates his own precursors. His work modifies our conception of the past, as it will modify the future.

I’m circling around two or three different project ideas:

  1. Close Reading/Qualitative Analysis: Hypertextualized Borges poems/short stories, with an emphasis on works created during his period of blindness, re-imagined as a garden of forking paths. Break down the works into levels of constituent parts. Create an engine to re-assemble them based on a methodological algorithm informed by his ideas surrounding non-linearity, and the morphology of his oeuvre.
    1.5 *Potential Visualization Component: Hypergraphy Engine (simulated blindness) that interacts with the hypertextualized artifacts from 1.0.
  2. Distant Reading/Quantitative Analysis: Topics as “forms of discourse” in Borges and his precursors (potential candidates: Cervantes, Kafka, Schopenhauer, Quevedo, Gracián, Pascal, Coleridge, Poe).
  3. …..(Running out of time, will continue this post tonight).








Impediments to Digital History

Dr. Stephen Robertson’s essay, “The Differences between Digital History and Digital Humanities,” engages many thought-provoking points, foremost in my mind the challenges surrounding access to information. For all of the debates concerning what exactly it is that DH is doing, and what unique qualities it brings to the table, if the table is locked away, we’re left standing outside. That intellectual property, digital or otherwise, is often protected as a commodity is an intuitive reality. Access to both print and digital subscriptions to academic journals, by and large, entails substantial fees. Some fully digital tools have locked-down APIs. That being said, the academy has a unique responsibility, even an existential one, to facilitate open access to ideas and information. Instead of going into the reasons why, I wanted to write a quick conversation starter that might point at one of many potential solutions.

Piggy-backing off of a recent development in New Media reporting, I can envision a mainline channel into the second-largest repository of intellectual property in the world (although at this point, not the most technically agile, which is a separate problem): the Library of Congress. The StoryCorps audio archive has been preserved at the Library of Congress for years. The StoryCorps mobile application, however, is new, and potentially transformative. In the past, facilitators from StoryCorps would sit down with interviewers and interviewees to help them record their stories, a process that was time-consuming, costly, and largely inefficient. The StoryCorps application allows users with smartphones to record and upload interviews that will be digitally preserved at the LoC, instantly, at the push of a button.

My question: How can the Library of Congress, a taxpayer-funded institution, be used from the outset to facilitate open access to new, digitally-born intellectual property? Can we look toward partnerships between a current or yet-to-be-created government entity and privately funded organizations like StoryCorps as a model for guardianship of ideas? What would such a partnership look like? What would we call it?

Ways that Humanists Think About Data – An alternative text for in-class discussion

Up to this point, I’ve enjoyed our in-class discussions. Typically, I leave with an unfocused, impending fatigue that transforms, during my subway ride home, into a grounded awareness of the gaps in my thinking about DH theory, of the questions I have more generally about how DH fits into the larger context of humanistic inquiry in the academy, and of how I see myself finding my place in the field.

Last week I left running through potential ideas for my data project, wishing I had articulated the desire (in an effort to create a lexicon) for a more specific discussion about terms related to actual DH projects. I found myself trying to anticipate the unique ways in which humanities scholars think about data. Datasets and maps, generally, are obviously representations of a more complex, dynamic, ambiguous world. How have DH practitioners found inspiration in this reality, and what potential solutions and tools already exist? How can the gap between the “real” and the represented be used fruitfully? How can uninterpreted data result in new ways of seeing?

After reading Stephen Ramsay’s “Programming with Humanists: Reflections on Raising an Army of Hack-Scholars in the Digital Humanities,” I found myself setting aside time to research what exactly goes into “word frequency generators” and “poetry deformers.” He mentions a list of tools for analyzing text corpora (tf-idf analyzers, basic document classifiers, sentence complexity tools, etc.), as well as natural language processing tools, as potential programs that could be built during a computer science introduction focused on humanities computing. Hashing out a basic explanation of what these programs do, and potentially a bit about how they do it, would contribute an additional, fruitful dimension to our praxis seminar discussions. I have a sense that learning more about what tools exist would go a long way in helping me zero in on a meaningful dataset.
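In that spirit, here is a bare-bones sketch of two of the tools Ramsay names: a word frequency counter and a tf-idf scorer, using only the Python standard library. The toy corpus is my own invention, and a real version would handle tokenization, stemming, and stop words:

```python
import math
from collections import Counter

def word_frequencies(text):
    """Count how often each lowercased word appears in a text."""
    return Counter(text.lower().split())

def tf_idf(word, doc, corpus):
    """Score a word's importance to one document within a corpus.

    tf  = the word's share of all words in the document
    idf = log of (corpus size / documents containing the word),
          smoothed by +1 so a ubiquitous word scores near zero
    """
    counts = word_frequencies(doc)
    tf = counts[word] / sum(counts.values())
    docs_with_word = sum(1 for d in corpus if word in d.lower().split())
    idf = math.log(len(corpus) / (1 + docs_with_word))
    return tf * idf

corpus = [
    "the salamander of the human soul",
    "the garden of forking paths",
    "a library containing every possible text",
]
freqs = word_frequencies(corpus[0])
score = tf_idf("salamander", corpus[0], corpus)
```

A distinctive word like "salamander" scores above zero, while "the", which appears in most documents, scores at or near zero, which is exactly the intuition behind using tf-idf to surface what makes one text in a corpus distinctive.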

**As an aside, since I bet not everyone will have had a chance to read this particular article, I should mention that I also really appreciated Ramsay’s extensive list of supplemental reading materials, some of which I have read (Martin Heidegger’s The Question Concerning Technology), and others that I would love to spend some time with, like The Work of Art in the Age of Mechanical Reproduction, for example.**

During my research I came across an excellent blog post by Miriam Posner titled Humanities Data: A Necessary Contradiction, in which she engages some of the questions that have been preoccupying me in light of having to choose my dataset. In her blog post she provides a transcript of a talk she gave at the Harvard Purdue data symposium this past summer. Her talk focused on the unique ways that humanists think about data versus, say, scientists or social scientists, and the implications of these differences for librarianship and data curation. I’ll list a couple of salient quotes and a link to her post. If you have some time, check it out!

“It requires some real soul-searching about what we think data actually is and its relationship to reality itself; where is it completely inadequate, and what about the world can be broken into pieces and turned into structured data? I think that’s why digital humanities is so challenging and fun, because you’re always holding in your head this tension between the power of computation and the inadequacy of data to truly represent reality.”


“So it’s quantitative evidence that seems to show something, but it’s the scholar’s knowledge of the surrounding debates and historiography that give this data any meaning. It requires a lot of interpretive work.”