Data set project

I finally have a plan for my data set project, and if all goes well with testing, I will have created something informative. My data set will focus on the homeless street population in NYC, using a CartoDB map. I want to tell the story of homeless people and of how many live on the street. I believe maps will help illustrate how widespread the dilemma of homelessness is, as well as show the various parts of the city where homeless people live. Later, for my final project, I will use my Commons webpage to show how many women, men, and children are living in shelters, in conjunction with the homeless population that lives on the street.

The reason I chose homelessness is that many people do not realize how easy it is to become homeless; they take their own stability for granted and judge people for being homeless without really knowing how or why it happens. I hope this will at least make people think about how homelessness has become society’s problem and not just the individual’s.

Juana

Teaching and Learning with Blogs

Kathleen Fitzpatrick’s emphasis on the importance of blogs in the maintenance, creation, and development of critical thought and academic communities led me to consider the function of “blogs” in academic teaching – particularly in first-year writing.

I’ve taught writing courses and seminars using a WordPress blog for 3 or 4 semesters now (funny that I can’t really remember) – – – and have always struggled to get my students to use it. I’ve even struggled to get them to join it. Part of the issue was, clearly, the fact that I didn’t really know how to use these blogs myself – – – at least not “optimally.” (I’m hoping to attend a WordPress workshop before teaching again next year!)

But another part of the problem seems to run deeper – appearing even, as Kevin pointed out on Monday, in our own DH Praxis class.

Along with student resistance to engaging with new (or really, unknown and thus intimidating) technical skills, the problem seems to be linked to the fear of exposing oneself online (as discussed last night). Exposing oneself in writing (which we are taught must be perfect, or precious), exposing oneself in permanence (rather than aloud, with no recording), and exposing oneself in front of peers and teacher(s), who might pass judgement for all sorts of reasons (this post is too long (I know), this post is too academic or too casual, this post is too short, this post is offensive, this post is irrelevant etc.)…

I myself have struggled to post on this blog, and this of course feeds my interest in the matter. Why? Perhaps it’s because, when I asked my own students to “post on the blog before every class,” it led to a very difficult classroom situation. We all ended up repeating the same ideas over and over again. Because of this experience, I may have some illogical fear of being somehow forced to repeat myself, or to choose between ideas expressed “in class” and “on the blog.”

I admit that I have, at times, withheld a thought in class, deeming it “better for the blog.” I decide that I need more time to think it out; that I can express it better in writing. I’ll take copious notes, then go home with every intention of posting my thoughts. But then when I type it out, I get in over my head. Is the comment still relevant, has it become too heavy, long, or intricate in writing… too “developed”? Not blog-worthy. Turns out that if you “hide” a thought in order to work it out alone, expressing it can become a far more difficult task. I think this speaks to Kathleen’s ideas of being transparent rather than hidden, thinking and writing “in real time” rather than in time… delays.

I wonder how we can make classrooms – and academic communities – work both “in person” and “online.” How do you teach effectively both in person and with a blog? Matt & Kevin’s suggestion – to post on this blog only four times, on subjects that are not often addressed in class discussions – is a far better model than the ones I have used in my own classes. I’m definitely going to try to take this strategy to my writing classes. I’ve addressed the classroom community with “real time” “draft workshops” for each student’s paper, but I’d love to create an online community for the students to communicate about undiscussed topics, too. Perhaps the “draft workshop” can even go online. I see some connections here.

And as for the (serious) issue of self-consciousness in “public,” in writing, or “online” – that’s probably just a matter of getting used to the blog form. I still have far to go as both a student and a teacher – – –

WordPress Workshop

Last Monday night I went to my second WordPress workshop. I have been going to a lot of workshops for this class, and by far I have found WordPress and Lexicon the most informative. Monday night was part II, and since I had already created a webpage on the Commons, this workshop reinforced what I learned in part I. We learned how to customize a menu and create widgets, and the fact that I can create my own theme page is awesome. I was also really interested in learning about plugins, since I had no idea what they were about; now I know that plugins are really essential for any website. I am looking forward to using these tools for my final project.

Everyone, have a Happy Thanksgiving….

Juana

WordPress II Workshop

I went to the WordPress II Workshop last night. The workshop helped me refresh some of the things I’ve learned about WordPress in the past. Among the things we went over were categories, custom menus, pages vs. posts, widgets, plugins, and CSS. I thought the CSS aspect was really interesting, and I’m planning to attend the Advanced WordPress Workshop next Monday to learn more about it.

Happy Thanksgiving everyone!

-Maple

Workshops: What I learned, and how…

… it helped me this semester:

I learned what I didn’t want to know.

Which is valuable! Here’s a quick run-down, for anyone who’s interested:

First, I went to “Scraping Social Media,” on October 19th, which was taught by a very energetic and helpful woman named Michelle Johnson-McSweeney. The workshop moved quickly, but I was able to keep up, especially by sneaking questions to the excellent lab partner next to me (JoJo). After learning about the various interests, reasons, and concerns involved in gathering data from the likes of Twitter and Facebook, we moved on to actually “scraping” those sites – which worked for the most part, and felt quite satisfying. There were, of course, some issues, and these became more apparent towards the end of the workshop. The main disappointment I remember was that I couldn’t “scrape” Twitter on a Mac… at that point I hadn’t been considering doing a project exclusively on CUNY computers. Nevertheless, the workshop was encouraging enough to lead me to think of projects for which I could use this tool. This led me to my first (overwhelming) data set proposal: scrape the web for data regarding a controversy in Best American Poetry. Unfortunately, as soon as I went down that rabbit hole, I ended up composing a project that was totally unmanageable, about “Appropriation” in contemporary poetry. It was way too big. So I moved on to something else:

I had a book come out November 1st, and I thought, why not just use my own poems? This was a moment of anxiety for me – I felt that I could “thick map” my book, create a hypertext version of it, disclose more information and “be transparent,” perhaps take some responsibility for my own “appropriations.”

So, the next workshop I attended was “Text Encoding,” taught by the ever-wonderful Mary Catherine Kinniburgh. I was pretty excited to learn about “code,” excited about the prospect that I might one day learn to “code,” and excited overall to lock down some acronyms at the start, such as HTML, TEI (the focus of this workshop), and XML. However, as the workshop progressed, I naturally started wondering whether I was up-to-speed enough to be there – or rather, whether my “hypertext” project idea would actually benefit from TEI. If HTML stood for “hypertext mark-up language,” wasn’t that what I needed to learn first? The TEI projects we looked at were Shakespeare plays and some Latin / Greek texts, and it was great to learn more about the “backbone” of how text is encoded, with plenty of examples and explanations.
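
To give a flavor of what that “backbone” looks like, a speech from a play marked up in TEI comes out something like this (my own toy example, not one from the workshop):

<sp who="#hamlet">
  <speaker>Hamlet</speaker>
  <l>To be, or not to be, that is the question:</l>
</sp>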

But even more than realizing that HTML was what I would probably need for my hypertext project, I realized once again that hypertexting my book of poems wasn’t really a “data set.” I went back to my idea of “deformance” (interpretation + performance). I wanted to try to learn something about the language in my poems, and to simultaneously make “art from art.” I regretted that I had forgotten to register for the “Data Visualization” workshop a week earlier, before it filled up.

So, although my path through these workshops may have felt like a bunch of (gentle) dead-ends, I do think that they helped me arrive at a project, albeit late in the game. I’d imagine that if I had gone into the semester knowing more of the digital terms (why did I have to miss the “DH Lexicon” workshop! And why was it so late in the semester, too?), I might have been able to learn tools that would actually help me conceive of a project and start conducting it more quickly.

There’s a kind suggestion here: have more workshops early on that might help students get grounded without prior knowledge of DH and digital tools. That said, I did learn a lot from each workshop, even if it wasn’t what I “wanted” to learn. And there’s a lesson in that: I should have gone to more workshops, or at least done better research on my own before just “following my gut.”

Reflections On Volumetric Cinema and Digital Surrealism

I meant to post this last week, but it got away from me. Re: Kevin’s work: looking at his sums of Disney films reminded me of Jason Salavon’s work with old master portraits, where for four artists he averages the bulk of their work and thereby “reveals the hidden norm lurking within” (Met Online). Also, and perhaps more tangentially, Kevin’s 3D stacks had me thinking of films as sculptures that are carved further and further into as time and the story progress. This reminded me of some of the work of Alberto Giacometti, who made these very existential sculptures that are very thin and appear to have been carved almost to nothing. I am told, and I don’t know if it is true or apocryphal, that these sculptures came out of Giacometti being so traumatized post-WWII that he would carve compulsively, often until his work had completely turned to dust. Left only to his own devices, even the sculptures that survive would have been completely ground down. (A bunch of his stuff is up at MoMA. I’m thinking of Tall Figure III, Man Pointing, and Standing Woman.) Anyway, with this story and Kevin’s work in mind, it’s interesting to think of the movie viewer’s gaze as compulsively carving into the film. In that case, the progression of time in a film is a measure of the observer’s destruction, before which the unfolding of plot becomes almost incidental.

On reading well, once again

I really enjoyed this week’s readings: Kathleen Fitzpatrick’s Planned Obsolescence: Publishing, Technology, and the Future of the Academy and select essays from Hacking the Academy: New Approaches to Scholarship and Teaching from Digital Humanities, an edited collection by Dan Cohen and Tom Scheinfeldt. For me, the readings really made sense. What do I mean by that? Well, I think I got what DH is! It only took me a semester, but it finally happened.

If I had to name a common theme for the week, it would be “journals as curators.” I like the metaphor, for each gallery space needs a curator, and journals can play this role now that scholarship is taking a digital turn. There is urgency to digitize the work humanists do. And this does not mean uploading a PDF of your article to an online journal; it means uploading your work to an open and free journal in a format that allows for interaction between readers, reviewers, and authors. This way, the article becomes a continual work in progress that improves as new perspectives are considered: arguments strengthened, the total body of knowledge made healthier. Why would anyone object!? (But then again, would I really like my BA thesis to be a continuous work in progress after I submitted it to my adviser? I don’t think so.)

Michael O’Malley’s “Reading and Writing” was memorable because of the author’s humor. O’Malley’s stylistic choices make the tough love he’s giving humanists as easy to swallow as gummy bear vitamins. He points to the disconnect between the way we are taught to read and the way we are taught to write. As readers, we are trained to read more in less time, to acquire the skill of finding the main argument after reading only a fraction of the book. Writing, however, is treated as an art form that we must perfect, turning out draft after draft.

Dan Cohen & Roy Rosenzweig argue in the introduction to Digital History: A Guide to Gathering, Preserving, and Presenting the Past on the Web that our reading habits are disrupted when content moves online. There are no pages to flip, and, to me, it is much harder to assess a reading on the screen than on a printout, for example. I recall authors’ arguments by their geographical position on the page, which is impossible when scrolling down an endless page.

In writing, on the other hand, we must take things to the next level. Things that could be said in layperson’s language are translated into jargon, making the arguments inaccessible. Is it to build an air of credibility? Or, as John Unsworth claims in “The Crisis of Audience and the Open-Access Solution,” is humanities scholarship intentionally obscure? Are some things impossible to say without using words in their third or fifth listed dictionary meaning?

And how do we heal the diametrical split in our approaches to reading and writing?

Unix/Linux Command Reference Workshop

Okay, so I was originally supposed to go to the Text Encoding workshop this past Tuesday, but there was a mixup with the rooms, so I ended up in the Command Reference workshop. Although this really won’t be of any use to me in my future projects (or maybe it will, I won’t speak too soon), it was good to learn about this entire program on my computer (Terminal) that’s basically a command center for everything that goes on in it. I was told that through Terminal, I can use commands that go above and beyond the basic actions you are prompted to do on a Mac. For example, if I wanted to clear out some files that weren’t being cleared the traditional route, I could go into Terminal, put in a specific file command, and they would be gone. Also, you can make “directories” within Terminal, and files created there can then be exported as PDFs and/or HTML. It was really interesting, but definitely something you have to work with consistently, as the commands and codes seem to be endless. I’ll be looking forward to attending another workshop where we work on the command screens.
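
For anyone curious, a few of the basic commands went roughly like this (the file and folder names here are just placeholders I’ve made up):

pwd                    # print the path of the directory you’re currently in
ls                     # list the files in that directory
mkdir project-notes    # make a new directory
cd project-notes       # move into it
rm stubborn-file.txt   # delete a file that won’t go the traditional route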


Scarlett

Apigee – Fashion Studies Dataset project #stansmith

Instagram —> #stansmith —> Apigee to create a collection of data —> pictures and hashtags —> mapping or story-line

Scarlett and I decided to deepen fashion studies through the DH tools we learned during the semester. Since we are studying fashion, we noticed two different aspects that we would like to develop, both of which are necessary in our field. The first is visualization; the second is connectivity. For the first, we decided to start from Instagram: among all the social media we have today, Instagram is built on hashtags and images. Fashion is a visual thing, and without pictures it could not grow. The second aspect is connectivity: through a specific detail, like Stan Smith shoes, we can trace the visual story of fashion as expressed in that one detail – marketing strategies, levels of interest all over the world, connecting people with the same passions and interests, etc. It may not be appealing to everyone, but we think fashion has this power of connection. This is why we consider it a tool with the power of social mobility.

We started with Apigee, “the leading provider of API technology and services for enterprises and developers. Hundreds of companies including Walgreens, Bechtel, eBay, Pearson, and Gilt Group as well as tens of thousands of developers use Apigee to simplify the delivery, management and analysis of APIs and apps” (http://apigee.com/docs/api-services/content/what-apigee-edge). Our classmates in the Fashion Studies track suggested this program to us because it would help with a good but simple data project; so let’s see how it works.

Basically, we wanted to collect images, find posts tagged #stansmith, and gather locations all over the world, to see what relationship exists between the world and the shoes. I guess this could also be a good project for keeping track of marketing movements across the entire fashion world, and with all kinds of items, not only one specific shoe.

We typed https://apigee.com/console into the address bar, and on the page that opened we chose “Instagram” in the API column. The next step is to select OAuth2 under the “Authentication” column, because to interact with data through Apigee it is necessary to authorize your Instagram account.

At this point you will have three options (Query, Template, and Headers). We chose “Template” (for Instagram), and in the “tag name*” slot we typed our tag, “stansmith”. Right after this step the authentication is complete, authorizing Apigee to use your social media account.

It is then necessary to select an API method; the one we needed was the second choice under the “Tags” section of the list.

We only had to click “Send” and the response came back. Instagram paginates its results, so the data we got were divided into pages: copying the pagination URL from the response and pasting it into a new address bar brought up another raw-looking data page with the next page of results.
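
As far as we can tell, the console is doing something like the request below behind the scenes. This is only a sketch, assuming Apigee is wrapping Instagram’s (since-deprecated) v1 tag endpoint, with ACCESS_TOKEN standing in for the token obtained during the OAuth2 step:

curl "https://api.instagram.com/v1/tags/stansmith/media/recent?access_token=$ACCESS_TOKEN"
# The JSON that comes back includes a "pagination" object;
# its "next_url" value is the address we pasted into the bar
# to see the next page of results.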

Our friend told us that the process was almost complete, but that the last step was to download “JSONView” (it doesn’t work with Safari, so we used Chrome) to see the data in an organized form. This step makes it much easier to pick out the images, profile pictures, usernames, etc., and it is also where we found the numbers for “created_time”. This part is very important, because converting these numbers from Unix epoch time to GMT is necessary for the visualization of the images.
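
Inside each post, the fields we cared about are shaped roughly like this (a trimmed, made-up example of the structure, not a real post):

{
  "created_time": "1448323200",
  "user": { "username": "example_user" },
  "images": {
    "thumbnail": { "url": "..." },
    "low_resolution": { "url": "..." },
    "standard_resolution": { "url": "..." }
  }
}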

With Epoch Converter we were able to convert everything, and the result is a list of data in which every “attribution” is a post. We collapsed the posts and got the chance to look at the posted pictures in different resolutions!
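
The same conversion also works from the Terminal, for anyone who would rather skip the website. The timestamp below is just the made-up example from above; note that the flag differs between macOS and Linux:

date -u -r 1448323200      # BSD/macOS: prints Mon Nov 23 00:00:00 UTC 2015
# date -u -d @1448323200   # GNU/Linux equivalent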

For the presentation we’ll provide a PowerPoint with step-by-step images of the process of reaching this data, which we will probably use for a map or a timeline of the item.


Thanks for your time,


Nico and Scarlett

Unix/Linux Command Reference Workshop

I went to the wrong workshop on Tuesday: instead of going to Text Encoding, I ended up at Unix/Linux Command Reference. Neither is specifically connected to what I will do for my data set project, but I had never been to a workshop before, and I found it really interesting.
Basically, we had this paper in our hands, and we started playing with commands that can create, delete, and modify files on our own computers. We started by opening “Terminal” from the Spotlight search bar, and a few weird codes appeared on the screen.

I can’t say I understood everything I was doing, but the guys were really nice to us – they understood we didn’t know anything about coding or directories – and they taught us a few tricks for working with directories from the keyboard. For example: “pwd” prints the working directory, “mkdir” creates new folders, etc.

The most useful thing I learned is that with GNU nano you can write a script – for example, one that downloads pictures and converts them from .jpeg to .pdf.
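
I didn’t save the exact script, but it went roughly like this (a sketch from memory: the URL is made up, and I believe the convert command comes from ImageMagick, which has to be installed separately):

#!/bin/sh
# download a picture, then convert it from .jpeg to .pdf
curl -o picture.jpg "https://example.com/picture.jpg"
convert picture.jpg picture.pdf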

This link provides a guide to the Unix/Linux commands:

https://ubuntudanmark.dk/filer/fwunixref.pdf


Thank you all,


Nico