My Facebook metadata as landscape.

This semester, I’ve focused my attention on creative ways of interpreting and visualizing my personal Facebook data.

I’m interested in exploring the concept of “digital dualism” – the habit of viewing the online and offline as largely distinct (source). We are actively constructing our identities whether behind a screen or in person. As Nathan Jurgenson writes, “Any zero-sum ‘on’ and ‘offline’ digital dualism betrays the reality of devices and bodies working together, always intersecting and overlapping, to construct, maintain, and destroy intimacy, pleasure, and other social bonds.”

The exact location where I made a Facebook update.

With this project, I wanted to try re-inserting the digital world into the physical world. I decided to locate specific actions I took on Facebook within a physical geography and landscape.

It’s very easy to download your Facebook metadata from the website – all you have to do is follow these directions. In my data archive, I found a record of every major administrative change I’ve made to my Facebook account since I created it in 2006, including password changes, account deactivations, and profile picture updates. This information interested me because, from Facebook’s perspective, these activities were in all likelihood the most important decisions I had ever made as a Facebook user.

I rearranged that data into a simple JSON file:
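Roughly, each entry pairs an action with its timestamp and IP address. The field names and values below are illustrative placeholders rather than the real export:

```json
[
  {
    "action": "Changed password",
    "datetime": "2009-03-14 22:41:00",
    "ip_address": "xxx.xxx.xxx.xxx"
  },
  {
    "action": "Deactivated account",
    "datetime": "2011-08-02 09:15:00",
    "ip_address": "xxx.xxx.xxx.xxx"
  }
]
```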

I decided to explore the IP address metadata associated with each action. I wanted to know more about the physical locations where I had made these decisions about my Facebook account, since I obviously didn’t remember where I was or what I was doing when I made these changes.

I wrote a Python script (see code here) that performs several actions for each item in the JSON file (a rough sketch follows the list):

(1) Takes the IP address and finds the corresponding geolocation, including latitude & longitude & city/state;

(2) Feeds the latitude/longitude into Google Maps’ Street View and downloads 10 images, rotating the camera heading 5 degrees between frames;

(3) Adds a caption to each image specifying the Facebook activity, the exact date/time, and the city/state; and

(4) Merges the 10 images into a gif.
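Here is a rough sketch of that pipeline, not the original script: it assumes the ip-api.com geolocation service, the Google Street View Static API (which needs an API key), Pillow for the captions, and imageio for the gif, and it reads the hypothetical facebook_activity.json file shown above.

```python
import json
import requests
import imageio
from PIL import Image, ImageDraw

STREETVIEW_URL = "https://maps.googleapis.com/maps/api/streetview"
API_KEY = "YOUR_STREET_VIEW_KEY"  # placeholder

def geolocate(ip):
    """Look up latitude, longitude, and city/state for an IP address."""
    r = requests.get(f"http://ip-api.com/json/{ip}").json()
    return r["lat"], r["lon"], f"{r['city']}, {r.get('regionName', '')}"

def streetview_frames(lat, lon, n=10, step=5):
    """Download n Street View images, rotating the heading by `step` degrees."""
    paths = []
    for i in range(n):
        params = {"size": "640x400", "location": f"{lat},{lon}",
                  "heading": i * step, "key": API_KEY}
        image_bytes = requests.get(STREETVIEW_URL, params=params).content
        path = f"frame_{i:02d}.jpg"
        with open(path, "wb") as f:
            f.write(image_bytes)
        paths.append(path)
    return paths

def caption(path, text):
    """Draw the activity / date / place caption onto one frame."""
    im = Image.open(path).convert("RGB")
    ImageDraw.Draw(im).text((10, 10), text, fill="white")
    im.save(path)

def make_gif(paths, out):
    """Merge the captioned frames into a looping gif."""
    imageio.mimsave(out, [imageio.imread(p) for p in paths], duration=0.15)

with open("facebook_activity.json") as f:
    items = json.load(f)

for item in items:
    lat, lon, place = geolocate(item["ip_address"])
    frames = streetview_frames(lat, lon)
    for p in frames:
        caption(p, f"{item['action']} | {item['datetime']} | {place}")
    make_gif(frames, out=f"{item['action']}.gif")
```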

The result was two dozen weird undulating gifs of Google Street View locations, which you can check out on the project website.

After doing all that work, however, I wasn’t satisfied with the output. If the goal was to re-insert my digital data trail into a physical space, this form hadn’t yet realized it. I decided to take the project in a different, more spatially minded direction.

I wrote another Python script that programmatically takes each IP address, looks up its latitude/longitude on Google Maps, switches to the 3D view, records a short video of the three-dimensional landscape, and then exports the frames of that video as images.
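A simplified sketch of that second script is below. The URL parameters for Google Maps’ tilted 3D view are guesses, and the screen recording itself is treated as a given (capture.mov); only the frame export is shown.

```python
import os
import subprocess
from selenium import webdriver

lat, lon = 40.693, -73.985  # a hypothetical geolocated IP

driver = webdriver.Chrome()
# Satellite/3D view via URL parameters (the distance/tilt values are guesses).
driver.get(f"https://www.google.com/maps/@{lat},{lon},300a,35y,45t/data=!3m1!1e3")

# ... record the screen for a few seconds with a tool of your choice ...

# Export the recorded video's frames as stills for photogrammetry.
os.makedirs("frames", exist_ok=True)
subprocess.run([
    "ffmpeg", "-i", "capture.mov",
    "-vf", "fps=5",              # a handful of frames per second is plenty
    "frames/frame_%04d.png",
])
driver.quit()
```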

Programmatically screen recording Google Maps’ 3D landscape.

Using the photogrammetry software PhotoScan, I created a 3D mesh and texture from the video frames. Then I mocked up the Facebook app on an iPhone, displaying the specific Facebook activity associated with that location and IP address. Finally, I pulled the landscape .obj into Unity along with the iPhone image and produced some strange, fantastical 3D landscapes:

Pulling the 3D mesh into Unity and inserting the Facebook metadata into the landscape.

This week, we reviewed the useful tools ffmpeg and ImageMagick for manipulating images and videos found online. I decided to start working with the trailer for Akira Kurosawa’s 1985 film Ran (Japanese for “chaos”). Ran is a period tragedy that combines the Shakespearean tragedy King Lear with legends of the daimyō Mōri Motonari.

The trailer is filled with beautiful, carefully framed shots. I wanted to see if there was a way to automatically detect the cuts and chop the trailer up into its individual shots/scenes. It turns out there is no simple off-the-shelf solution, so I cobbled together my own bash script to do it.
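For reference, the same idea can be sketched in Python, leaning on ffmpeg’s scene-change detection filter; the 0.4 threshold is a guess that would need tuning per trailer.

```python
import re
import subprocess

TRAILER = "ran_trailer.mp4"

# 1. Ask ffmpeg to log every frame whose scene-change score exceeds the threshold.
probe = subprocess.run(
    ["ffmpeg", "-i", TRAILER, "-vf", "select='gt(scene,0.4)',showinfo",
     "-f", "null", "-"],
    stderr=subprocess.PIPE, text=True)
cuts = [float(t) for t in re.findall(r"pts_time:([\d.]+)", probe.stderr)]

# 2. Cut the trailer into one clip per detected shot (re-encoding for accuracy).
bounds = [0.0] + cuts + [None]
for i, (start, end) in enumerate(zip(bounds, bounds[1:])):
    cmd = ["ffmpeg", "-y", "-i", TRAILER, "-ss", str(start)]
    if end is not None:
        cmd += ["-to", str(end)]
    cmd += [f"shot_{i:03d}.mp4"]
    subprocess.run(cmd)
```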

Once I had chopped up the trailer, I decided to export one image from each scene for analysis. I did so by writing a script that saves the first frame from each video.
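That step can be as simple as asking ffmpeg for a single frame per clip (a sketch, assuming the shot_*.mp4 naming from above):

```python
import glob
import subprocess

for clip in sorted(glob.glob("shot_*.mp4")):
    still = clip.replace(".mp4", ".png")
    # -vframes 1 writes only the first decoded frame of the clip.
    subprocess.run(["ffmpeg", "-y", "-i", clip, "-vframes", "1", still])
```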

I then used selenium to programmatically upload those images to a reverse image search powered by an image classifier trained on Wikipedia data. The classifier was built by Douwe Osinga, and the site can be accessed here. It’s described this way: “A set of images representing the categories in Wikidata is indexed using Google’s Inception model. For each image we extract the top layer’s vector. The uploaded image is treated the same – displayed are the images that are closest.” You can read more detailed notes about training the data in Douwe’s blog post.
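A minimal sketch of the selenium step is below. The URL and the file-input selector are hypothetical (the real page would need to be inspected), and Chrome stands in for whatever driver the actual script used.

```python
import glob
import os
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
for still in sorted(glob.glob("shot_*.png")):
    driver.get("https://example.org/wikidata-image-search")  # placeholder URL
    upload = driver.find_element(By.CSS_SELECTOR, "input[type=file]")  # assumed upload field
    upload.send_keys(os.path.abspath(still))
    # ... wait for the nearest-neighbor results to render ...
    driver.save_screenshot(still.replace(".png", "_matches.png"))
driver.quit()
```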

I ended up with hundreds of ‘visually similar’ images, organized according to the shots in the trailer. I combined them into a side-by-side comparison video, where you can see some of the images the classifier deemed ‘visually similar.’ Check out the full video for Kurosawa’s Ran:

I then decided to repeat the entire process for the trailer to Dario Argento’s classic horror film Suspiria:

Find my full GitHub repository here.

I use Google to search for answers to questions I don’t want to ask a human being. While most of my searches are done out of necessity (“how to use git no deep shit”) or urgency (“ruby’s nyc address”), I also turn to Google to answer questions I’m too embarrassed to ask my friends. Our Google searches therefore reveal a side of us that we may not want shared with the public.

I decided to make a website exploring how YouTubers attempted to answer some of the questions I asked Google in 2014. See the site here.

I started by downloading my entire Google search history, spanning the years 2013-2017. The zip file contains some ugly JSONs, so I used Python to generate lists of searches organized by year, then programmatically cleaned up the lists to weed out Google Maps & Flights searches. The cleaned-up list for 2013, for instance, was just a flat series of query strings.
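A sketch of that cleanup step might look like this. The field names reflect the Takeout search-history export as I remember it from that era (quarterly JSON files of “event” objects), and the Maps/Flights filter shown is a crude stand-in for whatever rule the real script used:

```python
import glob
import json
from datetime import datetime

searches_by_year = {}
for path in glob.glob("Searches/*.json"):            # files from the Takeout zip
    with open(path) as f:
        data = json.load(f)
    for event in data.get("event", []):
        query = event["query"]["query_text"]
        usec = int(event["query"]["id"][0]["timestamp_usec"])
        year = datetime.fromtimestamp(usec / 1e6).year
        # Crude filter: drop Maps and Flights lookups, keep real web searches.
        if query.lower().startswith(("maps ", "flights ")):
            continue
        searches_by_year.setdefault(year, []).append(query)

for year, queries in sorted(searches_by_year.items()):
    with open(f"searches_{year}.json", "w") as f:
        json.dump(queries, f, indent=2)
```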

Next, I filtered the Google searches down to the instances that included the words “how to.” I wanted to get a snapshot of what I was trying to learn from the internet in that particular year. Examples from 2014 included “how to bounce back” and “how to get over a breakup.”

Then I wrote a Python script that takes that array of Google searches and programmatically searches for each one on YouTube, downloading whatever video tutorial comes up as the first result. I used selenium + webdriver + PhantomJS to browse and scrape the videos for me. You can see my full code here.
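A condensed sketch of those two steps is below. It filters the cleaned search list down to “how to” queries and then grabs the first YouTube result for each one with youtube-dl’s ytsearch feature, which is a shortcut standing in for the selenium + PhantomJS scraping the actual script does:

```python
import json
import youtube_dl

with open("searches_2014.json") as f:   # the cleaned list from the previous step
    searches = json.load(f)

how_tos = [q for q in searches if "how to" in q.lower()]

ydl_opts = {"outtmpl": "tutorials/%(title)s.%(ext)s", "noplaylist": True}
with youtube_dl.YoutubeDL(ydl_opts) as ydl:
    for query in how_tos:
        # "ytsearch1:<query>" downloads only the top search result.
        ydl.download([f"ytsearch1:{query}"])
```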

When I started this project, I knew I wanted to explore the culture of YouTube tutorials using my own searches as a starting point. I wanted to know how different online communities were attempting to answer and work through my questions.

What I found interesting was the way my questions were interpreted. A simple question like “how to bounce back” resulted in a trampoline how-to video. A question about “how to get over a breakup” resulted in a post-breakup makeup tutorial (side note: I had no idea there is a huge subculture of makeup tutorials on YouTube, complete with its own norms and signifiers). If I had searched on Reddit or WebMD, for instance, the results would similarly have been a product of that community’s language.

I studied Arabic and Middle East politics for my undergraduate degree and lived in Jerusalem and Cairo. Lately I’ve been following the steady rise of Islamophobia in the United States with concern. Recent events – namely the ban on Muslims entering the U.S. under Trump – have made me think about ways I can work to combat widespread ignorance of and misconceptions about Islamic culture.

So I made a simple Twitter bot called Islamic Art Bot.

The Metropolitan Museum of Art has an extensive Islamic art collection. I was inspired by Darius Kazemi’s Museum Bot, which tweets out a random item from the Met’s archive of over 444,000 items. I decided to tweak some of the code and also scrape quotes and sayings from a handful of well-known Muslim poets and writers. The result is a bot that tweets out words and images every hour.
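Stripped down, the hourly tweet loop looks roughly like this (the real bot adapts Kazemi’s code; the data files and credentials here are placeholders):

```python
import json
import random
import time

import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

with open("quotes.json") as f:        # scraped quotes and sayings
    quotes = json.load(f)
with open("artworks.json") as f:      # scraped Met Islamic Art items
    artworks = json.load(f)

while True:
    if random.random() < 0.5:
        api.update_status(random.choice(quotes)["text"][:280])
    else:
        item = random.choice(artworks)
        api.update_with_media(item["image_path"], status=item["title"][:280])
    time.sleep(60 * 60)  # once an hour
```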

I want to continue adding content for the bot to tweet. Next, I want to find an archive with examples of Islamic architecture. I also want to add more writers, especially contemporary writers.

You can find the bot’s code here on GitHub. For the web scrapers, I used the Python library BeautifulSoup. My web scraper code can be found here.
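The scrapers follow the standard BeautifulSoup pattern, something like this (the URL, the blockquote selector, and the choice of Rumi are all hypothetical):

```python
import json
import requests
from bs4 import BeautifulSoup

html = requests.get("https://example.org/rumi-quotes").text   # placeholder source
soup = BeautifulSoup(html, "html.parser")

quotes = [{"author": "Rumi", "text": q.get_text(strip=True)}
          for q in soup.select("blockquote")]                 # assumed markup

with open("quotes.json", "w") as f:
    json.dump(quotes, f, ensure_ascii=False, indent=2)
```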

Like most of the people I’ve talked to this week, I’m overwhelmed by both the scale and the velocity with which the Trump/Bannon administration has undermined basic constitutional rights in its first week in office. Furthermore, the administration sends messages that are simply false (“The ban isn’t a Muslim ban,” “The ban doesn’t affect U.S. green card holders,” “Protesters are being organized and funded by CAIR”). Part of the issue is that the Trump administration provided no guidance to the Department of Homeland Security on how the Executive Order was to be enforced, leaving such decisions to the vagaries of local law enforcement.

In recent days, it’s become clearer that such pronouncements have Steve Bannon’s fingerprints all over them. A white nationalist with isolationist impulses, Bannon spent years disseminating his views as the head of Breitbart. As an organization, Breitbart generates false, unverified stories aimed at stoking fear among white nationalists.

I decided to scrape the headlines from Breitbart’s homepage and run them through a Markov chain to generate newer, even faker headlines. If the original headlines were dubious, the new ones are even more suspect. Here’s a sample:

You can find all my code here.
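For reference, here is a rough sketch of the scrape-and-Markov step (markovify stands in for whatever chain implementation the repo actually uses, and the headline selector is a guess):

```python
import requests
import markovify
from bs4 import BeautifulSoup

html = requests.get("https://www.breitbart.com/").text
soup = BeautifulSoup(html, "html.parser")

# Pull headline text off the homepage (the selector is a guess).
headlines = [h.get_text(strip=True) for h in soup.select("h2 a")]

# Build a Markov model over the headlines and generate new, faker ones.
model = markovify.NewlineText("\n".join(headlines))
for _ in range(10):
    fake = model.make_sentence(tries=100)
    if fake:
        print(fake)
```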

I also found a video online of Bannon lecturing in front of (no joke) a painting that includes the Bill of Rights, an American flag, and the Liberty Bell. So I threw in real Breitbart headlines I’d scraped and made this: