YouTube Tutorials: How YouTubers tried to answer my Google searches.

I use Google to search for answers to questions I don’t want to ask a human being. While most of my searches are done out of necessity (“how to use git no deep shit”) or urgency (“ruby’s nyc address”), I also turn to Google to answer questions I’m too embarrassed to ask my friends. Our Google searches therefore reveal a side of us that we may not want shared with the public.

I decided to make a website exploring how YouTubers attempted to answer some of the questions I asked Google in 2014. See the site here.

I started by downloading my entire Google search history, spanning the years 2013–2017. The zip file contains some ugly JSON, so I used Python to generate lists of searches organized by year. Then I programmatically cleaned up the lists to weed out Google Maps & Flights searches. This was the result for 2013, for instance:

Next, I filtered the Google searches down to all the instances that included the words “how to.” I wanted to get a snapshot of what I was trying to learn from the internet in that particular year. Some examples from 2014:
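
A rough sketch of those cleanup and filtering steps (the Takeout field names and the Maps/Flights filter below are approximations, not the exact code):

    import json
    from pathlib import Path

    searches = []
    # The Takeout export is a folder of quarterly JSON files; the key names
    # below match the format I downloaded and may differ in newer exports.
    for path in Path("Searches").glob("*.json"):
        data = json.loads(path.read_text())
        for event in data.get("event", []):
            searches.append(event["query"]["query_text"])

    # Weed out Maps & Flights queries (placeholder rule) and keep only the "how to"s.
    cleaned = [q for q in searches if not q.lower().startswith(("maps", "flights"))]
    how_tos = [q for q in cleaned if "how to" in q.lower()]
    print(len(how_tos), how_tos[:5])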

Then I wrote a Python script that takes that array of Google searches and programmatically searches for them on YouTube, downloading whatever video tutorial is the first result. I used Selenium WebDriver with PhantomJS to browse and scrape the videos for me. You can see my full code here.
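
Boiled down, the scraping step looks something like this (the CSS selector is a guess at YouTube’s markup at the time, and PhantomJS only works with older Selenium releases; the linked repo has the real details):

    from selenium import webdriver

    # Headless browser for scraping; PhantomJS requires an older Selenium release.
    driver = webdriver.PhantomJS()

    def first_youtube_result(query):
        driver.get("https://www.youtube.com/results?search_query=" + query.replace(" ", "+"))
        # Grab the first video link on the results page (selector is an assumption).
        link = driver.find_element_by_css_selector("a[href*='/watch']")
        return link.get_attribute("href")

    for search in ["how to bounce back", "how to get over a breakup"]:
        print(search, "->", first_youtube_result(search))
        # The full script then hands each URL to a downloader to save the video.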

When I started this project, I knew I wanted to explore the culture of YouTube tutorials using my own searches as a starting point. I wanted to know how different online communities were attempting to answer and work through my questions.

What I found interesting was how my questions were interpreted. A simple question like “how to bounce back” resulted in a trampoline how-to video. A question about “how to get over a breakup” resulted in a post-breakup makeup tutorial (side note: I had no idea that there is a huge subculture of makeup tutorials on YouTube, complete with its own norms and signifiers). If I had searched on Reddit or WebMD, for instance, the results would similarly have been a product of the language of that online community.

Islamic Art Bot

I studied Arabic and Middle East politics for my undergraduate degree and lived in Jerusalem and Cairo. Lately I’ve been following the steady rise of Islamophobia in the United States with concern. Recent events – namely the ban on Muslims entering the U.S. under Trump – have made me think about the ways I can work to combat widespread ignorance of and misconceptions about Islamic culture.

So I made a simple Twitter bot called Islamic Art Bot.

The Metropolitan Museum of Art has an extensive Islamic art collection, with over 444,000 items in its Islamic Art archive. I was inspired by Darius Kazemi’s Museum Bot, which tweets out a random item from the Met’s entire archive. I decided to tweak some of the code and then scrape quotes and sayings from a handful of well-known Muslim poets and writers. The result is a bot that every hour tweets out some words and images.
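
The bot itself adapts Kazemi’s code, but the hourly-tweet pattern is simple to sketch in Python with tweepy (the keys, archive file, and record fields below are placeholders):

    import json
    import random
    import time

    import tweepy

    # Placeholder credentials.
    auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
    auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
    api = tweepy.API(auth)

    # Placeholder archive: each record is either a quote or an image with a title.
    with open("islamic_art_archive.json") as f:
        items = json.load(f)

    while True:
        item = random.choice(items)
        if "image" in item:
            api.update_with_media(item["image"], status=item["title"])
        else:
            api.update_status(item["quote"])
        time.sleep(60 * 60)  # once an hour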

I want to continue adding content for the bot to tweet. Next, I want to find an archive with examples of Islamic architecture. I also want to add more writers, especially contemporary writers.

You can find the bot’s code here on GitHub. For the web scrapers, I used a Python library called BeautifulSoup. My web scraper code can be found here.
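
The scrapers follow the usual requests + BeautifulSoup pattern. Here’s the general shape (the URL and the tag being searched for are stand-ins, since each source page’s markup is different):

    import requests
    from bs4 import BeautifulSoup

    def scrape_quotes(url, tag="blockquote"):
        """Pull the text of every matching tag from a page of poems or quotes."""
        html = requests.get(url, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        return [el.get_text(" ", strip=True) for el in soup.find_all(tag)]

    # Stand-in URL; the real scrapers target specific pages for each writer.
    quotes = scrape_quotes("https://example.com/rumi-poems")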

Detourning Breitbart: An experiment in web scraping.

Like most of the people I’ve talked to this week, I’m overwhelmed by both the scale and the velocity with which the Trump/Bannon administration has undermined basic constitutional rights within its first week in office. Furthermore, the administration sends messages that are simply false (“The ban isn’t a Muslim ban”, “The ban doesn’t affect U.S. green card holders”, “Protesters are being organized and funded by CAIR”). Part of the issue is that the Trump administration provided no guidance to the Department of Homeland Security as to how the Executive Order was to be enforced, leaving such decisions to the vagaries of local law enforcement.

In recent days, it’s become clearer that such pronouncements have Steve Bannon’s fingerprints all over them. A white nationalist with isolationist impulses, Bannon spent years disseminating his views as executive chairman of Breitbart. As an organization, Breitbart generates false, unverified stories aimed at stoking fear among white nationalists.

I decided to scrape the headlines from Breitbart’s homepage and run them through a Markov chain to generate newer, even faker headlines. If the original headlines were dubious, the new ones are even more suspect. Here’s a sample:

You can find all my code here.
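
The core of the script is only a few lines. Something like this (the headline selector is a guess at Breitbart’s markup, and I’m using the markovify library here as shorthand for the Markov-chain step):

    import requests
    import markovify
    from bs4 import BeautifulSoup

    # Scrape headline text from the homepage (the CSS selector is an assumption
    # and will likely need adjusting to the site's actual markup).
    html = requests.get("http://www.breitbart.com").text
    soup = BeautifulSoup(html, "html.parser")
    headlines = [h.get_text(strip=True) for h in soup.select("h2 a")]

    # Build a Markov model over the headlines and spit out new, faker ones.
    model = markovify.NewlineText("\n".join(headlines))
    for _ in range(10):
        print(model.make_sentence(tries=100))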

I also found a video online of Bannon lecturing in front of (no joke) a painting that includes the Bill of Rights, an American flag, and the Liberty Bell. So I threw in real Breitbart headlines I’d scraped and made this:

Thesis statement & research framework.

Thesis Questions

How are our online behaviors being interpreted and understood by machine learning algorithms? How do we adjust our behavior when we know it’s being surveilled and categorized? To what extent do we come to see and identify ourselves through the ‘eyes’ of the algorithm?

Description

With my project, I want to take several different approaches to addressing the same set of questions.

First, I plan to do user research in the shape of individual anecdotes and broader surveys. I want to understand how prediction or recommendation engines, powered by increasingly accurate machine learning algorithms, are shaping our behaviors online. More importantly, I want to gain insights into how these mechanisms make us feel when we encounter them. I plan to send out an initial survey next week that gets to the heart of some of these questions.

Second, I intend to build a tool that gives users greater visibility into how algorithms are constructing a portrait of them based on their online behavior. What advertisements did they click? Who are their friends? What did they last purchase? I’m still not sure what form the tool itself will take but I plan to continue researching and referencing the work done by other researchers and activists.

Third, I want to gather up all my findings – both qualitative and quantitative – and present them in an engaging, exploratory way. I will likely write a research paper summarizing what I’ve discovered, but I also want to make that research accessible and educational to the average internet user.

Research Approach

First, the literature. I’ve started reading a number of books and academic articles that are relevant to this topic. Wendy Chun’s books Programmed Visions and Updating to Remain the Same have already been central to my research. I also plan to read Alexander Galloway’s Protocol, Patrick Hebron’s Learning Machines, and Cathy O’Neil’s Weapons of Math Destruction. I’m making my way through Microsoft Research’s summary of academic articles related to critical algorithm studies. One article that’s been helpful in understanding user anecdotes has been Taina Bucher’s “The algorithmic imaginary: exploring the ordinary effects of Facebook algorithms,” which takes an ethnographic approach to understanding how users interact with algorithms online.

Second, the user research. I’m going to conduct my own user research in the form of surveys and the collection of individual anecdotes. I want to pinpoint specific interactions that users find particularly unnerving, creepy, benign, or invisible. I also want to understand how knowing that their news feed is filtered affects the way they interact with the platform.

Third, the experts. I want to get in touch with several researchers and artists who are already making strides in this field. I’m planning to reach out this week.

Personal Statement

Many people remember the day they first logged online or the day they got their first Gmail account. I remember the exact day Facebook introduced its News Feed, a feature that allowed users to see what their friends were talking about on the platform. I remember going to high school that day and talking with my friends about the strangeness of it all, the experience of seeing what other people were commenting on and liking. And yet within days we had accepted and embraced the changes to the platform.

Since that day, Facebook has rolled out a number of changes to its platform, many of which we don’t notice or see because they are minor tweaks to the algorithm that dictates what information we see and what information is rendered invisible. More recently, machine learning tools have thrown a whole new set of problems into the mix, as such algorithms become increasingly nebulous and less transparent. I’m interested in understanding how algorithms – not just on Facebook, but on every platform – make us feel when we notice them. I also want to understand how users adjust their behavior in dialogue with such algorithms.

Much of my work at ITP has been focused on data privacy, surveillance culture, and the blurring of public and private spaces. I intend my thesis to be a continuation of past research and projects.

yamammy: A short VR documentary.


yamammy is a virtual reality documentary that explores how displaced refugees navigate a world in which they possess disparate, overlapping ethnic, religious, and regional identities.

In this project, Yamammy, a former refugee from Sierra Leone, describes her feelings about some major life transitions – moving from Sierra Leone to Guinea to New York to a small town in Idaho – and how her sense of home and identity has evolved. With this doc, we wanted to create a fragmentary digital portrait of Yamammy using the images and stories she describes from memory.

In class, we presented a rough cut of the 360 video version of the documentary (still a work in progress, so publishing date TBD).

For the ITP Winter Show, we built the VR version of the documentary, which allows audience members to explore some of the images and stories that Yamammy describes. See the teaser:

We wanted to give the user some autonomy, but not too much. There is some movement, but we wanted the user to focus on the audio. Ultimately, we wanted Yamammy’s voice to dictate the pace and the rhythm of the entire experience.

The concept.

When Ruta and I first discussed this project, we both agreed that we wanted the story itself to determine the technology that we would use. We were both interested in exploring the idea of locative memory: memories that are bound to location or spatial dimensions.

It wasn’t until I had an initial conversation with Yamammy that the documentary began to take shape. We realized that many of the experiences she described were disjointed. Often she would remember tiny specific details while glossing over other parts of her life. Memory itself functions in this way: it’s fragmentary, it’s unreliable, and we tend to remember the stories that match patterns.

Photogrammetry as an aesthetic seemed to capture the feeling we wanted our documentary to evoke. Tiny details come into focus while others are distorted, twisted. It’s an imperfect process, just like human recall.

The audio.

I initially did a pre-interview with Yamammy to gauge how she might feel about telling some of her stories. It went well, so Ruta and I moved forward with the full interview, which we conducted over Skype. We then combed through the two hours of audio and edited it down to a handful of distinctive stories.

The visuals.

We started finding footage online based on Yamammy’s stories. We wanted to capture some of the landscapes she described in an abstract way and so drone footage became a useful creative tool for us.

We used ffmpeg to generate individual frames from the footage and pulled those images into Photoscan, where we created points, then point clouds, then a mesh. After exporting the model from Photoscan, we cleaned it up and smoothed it out in Meshmixer, exporting it as an .obj with a .jpg texture.
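
The frame extraction itself is a single ffmpeg call. Wrapped in Python, it looks roughly like this (the frame rate is a judgment call that depends on how much overlap Photoscan needs):

    import subprocess
    from pathlib import Path

    def extract_frames(video, out_dir, fps=2):
        """Dump still frames from a video so Photoscan can align them."""
        Path(out_dir).mkdir(parents=True, exist_ok=True)
        subprocess.run(
            ["ffmpeg", "-i", str(video),
             "-vf", f"fps={fps}",
             str(Path(out_dir) / "frame_%04d.jpg")],
            check=True,
        )

    extract_frames("drone_footage.mp4", "frames/")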


We then brought the objects into Unity, where we arranged them into various “scenes” for each audio story we had edited.


For the 360 video, we used a Unity plugin called VR Panorama, which allows you to animate a camera that flies through a scene. We edited those videos together with the audio in Adobe Premiere to create the 360 video we presented in class.

For the VR experience, we worked with the Samsung GearVR. We went back into Unity and animated the camera to inch slowly through each scene. We added the audio story to each scene. We also wrote a C# script that would trigger the next scene when the previous one had ended.

The feedback.

We showcased the VR experience in December 2016 at the ITP Winter Show. It functioned as a round of user testing of sorts, since Ruta and I still felt the documentary was in a rough place. We received some really good feedback about the experience, which included:

  • Tightening up the stories to reduce the overall length.
  • Replacing some of the visuals; the landscapes worked best.
  • Emphasizing the collage effect/aesthetic.
  • Rethinking the Samsung GearVR, since the user experience isn’t great (the trackpad is too sensitive and the headset doesn’t fit everyone’s face).
  • Adding music to emphasize certain emotions.

We’re planning to continue working on the VR film and hopefully will submit it to some film festivals this spring.

Project update: Statelessness and identity.

Remembrance of things past is not necessarily the remembrance of things as they were.
– Marcel Proust

Farewells can be shattering, but returns are surely worse. Solid flesh can never live up to the bright shadow cast by its absence. Time and distance blur the edges; then suddenly the beloved has arrived, and it’s noon with its merciless light, and every spot and pore and wrinkle and bristle stands clear.
― Margaret Atwood

the concept

statelessness and identity is a short 360 documentary using photogrammetry that seeks to understand how displaced individuals navigate a world in which they possess overlapping ethnic, religious, or national identities while lacking a legal identity.

the narrative

Last week I interviewed my friend Yamammy, a refugee from Sierra Leone who was resettled in the U.S. in 2001, during the civil war. We talked about the process of getting here, how her idea of home and family has shifted over the years, and what it was like adjusting to a new identity and culture. Ruta and I are scheduling time to re-interview Yamammy and record the audio for the documentary.

Some of the questions we plan to ask her during the interview:

  • How long have you been here? Where do you come from? Did you move with your family?
  • Tell me about the moment you found out that you would be coming to the U.S.
  • When you first moved here, how did you feel?
  • To what degree did you connect with the culture here? Tell me about a time you felt connected. What makes you feel at home here?
  • Tell me about a time you felt disconnected.
  • Do you keep in contact with family or friends from home anymore?
  • What was the most difficult part of the process?
  • What do you wish you could do differently if you had to do it all again?

the aesthetic

Initially we had a lot of different ideas for what we wanted the documentary to look like. The primary goal was that the documentary be something that was accessible in the browser, whether on Facebook or YouTube. At first we talked about patching together different 360 images from Google Street View with layered photogrammetric models. After we met with Ziv and Julia, however, they recommended we check out Shining360, a project that uses individual frames from The Shining to create a 3D video using photogrammetry.

I like how tactile the experience is. I was immediately struck by the aesthetic of the video – it’s fragmentary and unpolished, just like human memory. We decided right away that we wanted to use the same method to re-create, from various video sources, the landscapes Yamammy describes in her stories.

This week, we’ve planned to meet with Rebecca Lieberman to review the process of generating frames from a video, and then I’m going to teach Ruta how to use photogrammetry to generate a scene.

Project progress: F(x) = x – c + b.

Since I wrote extensively about the user journey and narrative last week, I wanted to review some of the technical work I’ve been doing this week as I’ve attempted to get my deep learning framework (a tool for generating stories from images) up and running.

I started by following the installation & compilation steps outlined here for neural storyteller. The process makes use of skip-thought vectors, word embeddings, conditional neural language models, and style transfer. First, I installed dependencies, including NumPy, SciPy, Lasagne, Theano, and all their dependencies. Once I finish setting up the framework, I’ll be able to do the following:

  • Train a recurrent neural network (RNN) decoder on a genre of text (in this case, mystery novels). Each passage from the novel is mapped to a skip-thought vector. The RNN is conditioned on the skip-thought vector and aims to generate the story that it has encoded.
  • While that’s happening, train a visual-semantic embedding between Microsoft’s COCO images and captions. In this model, captions and images are mapped to a common vector space. After training, I can embed new images and retrieve captions.
  • After getting the models & the vectors, I’ll apply a mapping that shifts an image-caption vector into a “book style” vector, which I can then feed to the RNN decoder to get the story.
  • The three vectors are as follows: an image caption x, a “caption style” vector c, and a “book style” vector b. The style-shift function F therefore looks like this: F(x) = x – c + b. In short, it’s like saying “keep the idea of the caption, but swap the caption style for a story style.” This is essentially the style-transfer formula I will be using in my project. Here, c is obtained from the skip-thought vectors of the Microsoft COCO training captions, and b is obtained from the skip-thought vectors of mystery-novel passages (a toy sketch of this vector arithmetic follows this list).
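
As a toy illustration of that style shift, with random vectors standing in for real skip-thought embeddings:

    import numpy as np

    rng = np.random.default_rng(0)
    dim = 4800  # combine-skip skip-thought vectors are 4800-dimensional

    # Stand-ins: in the real pipeline these come from the skip-thought encoder.
    x = rng.normal(size=dim)  # embedding of the image's generated caption
    c = rng.normal(size=dim)  # mean embedding of COCO captions ("caption style")
    b = rng.normal(size=dim)  # mean embedding of mystery-novel passages ("book style")

    shifted = x - c + b  # F(x): keep the caption's content, shift toward book style
    # `shifted` is what gets handed to the RNN decoder to generate the story text.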

So far I’ve successfully set up the frameworks for skip-thought vectors (pre-trained on romance novels) & Microsoft’s COCO vectors. Now, I’m in the middle of installing and compiling Caffe, a deep learning framework for captioning images. I feel like I’ve hit a bit of a wall in the compilation process. I’ve run these commands specified in the Makefile, which have succeeded:

    make clean
    make all
    make runtest
    make pycaffe

When I try to import caffe, however, I get an error stating that the module caffe doesn’t exist, which suggests something went wrong in the build process or in how Python is finding the module. I’ve been troubleshooting the build for over a week now. I’ve met with several teachers, adjuncts, and students to troubleshoot. Today, I finally decided to run Caffe inside a Docker container running Ubuntu (an approach that came highly recommended from a number of other students). I’m optimistic that Docker will help control some of the Python dependency/version issues I keep running into.
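
In the meantime, one sanity check worth running is whether Python can even see the pycaffe bindings, which live inside the Caffe source tree rather than in site-packages (the path below is just an example of where a clone might live):

    import sys

    # `make pycaffe` builds the bindings in $CAFFE_ROOT/python; it does not
    # install them into site-packages, so that directory must be on the path.
    sys.path.insert(0, "/home/me/caffe/python")  # example path, adjust to your clone

    import caffe  # if this still fails, the build itself probably didn't finish cleanly
    print(caffe.__file__)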


When I haven’t been working on the machine learning component of my project, I’ve started working on the website (running server-side, using node + express + gulp) where the game will live. I’ll be using this jQuery plugin that mimics the look and feel of a Terminal window.

Project update: Text-based game powered by machine learning.

For my project, I plan to build an interactive, text-based narrative where the text and the plot are generated through machine learning methods. At each stage in the narrative, the user will be prompted to choose the next step in the story.

The content of the game will be driven by a machine learning tool that takes image files and generates sequential stories from the images.

Here’s the storyboard / user flow for the game:


In terms of the technical details, I need to train the model on my own data set drawn from a specific genre of literature (horror? detective stories? thrillers? choose-your-own-adventure books) using the neural storyteller tool. Neural storyteller makes use of several different deep learning frameworks and tools, including skip-thoughts, Caffe, Theano, NumPy, and scikit. Here’s an overview of how the text in the game will be generated:


Here is the tentative schedule for the work:

Week 1: Nov. 2 – 8

  • Get the example encoder/trainer/models up and running (2-3 days).
  • Start training the same program on my own genre of literature (2-3 days).
  • Start building the website where the game will live (2 hrs).

Week 2: Nov. 9 – 15

  • After getting the machine learning framework working, start thinking about ways to structure the generative stories into the narrative arc (2-3 days).
  • Start building the front end of the game – upload buttons, submit forms. (1 day).

Week 3: Nov. 16 – 29

  • Start establishing the rules of game play & build the decision tree (2 days).
  • Continue building the website and tweaking the narrative (2 days).

Week 4: Nov. 30 – Dec. 7

  • User testing. Keep revising the game. Get feedback.

Statelessness and identity: A virtual reality experience.

Background research

When Leal’s father was born in Lebanon, her grandfather never registered his birth. Leal herself was born in Lebanon but is undocumented because her father wasn’t able to register the birth of any of his children. Lebanon is one of 27 countries that practice discriminatory nationality laws, according to a 2014 UNHCR report. Lebanon’s nationality law allows only Lebanese fathers to confer their nationality to their children in all circumstances; women cannot pass citizenship on to their children.

Leal is considered “stateless” according to the UN definition: a stateless person is someone who is not considered a national by any country under the operation of its law. There are a number of reasons individuals become stateless – perhaps they were displaced after a conflict, or they belong to an ethnic group that a nation never recognized as citizens.

“To be stateless is like you don’t exist, you simply don’t exist,” Leal says. “You live in a parallel world with no proof of your identity.”

Today there are at least 10 million people worldwide who are stateless. Because such individuals are denied a legal identity, they aren’t afforded the same basic human rights we enjoy. They are often denied access to housing, education, marriage certificates, health care, and job opportunities, and they pass their stateless status on to their children. Most of these individuals lack a nationality through no fault of their own.

“Invisible is the word most commonly used to describe what it is like to be without a nationality,” says Mr. Grandi, the United Nations High Commissioner for Refugees. “For stateless children and youth, being ‘invisible’ can mean missing out on educational opportunities, being marginalized in the playground, being ignored by healthcare providers, being overlooked when it comes to employment opportunities, and being silenced if they question the status quo.”

Why it matters

Statelessness is a product of a world that is increasingly defined by political boundaries and identities. It’s also a product of loose migration laws in regions like Europe, where migrants leave home and then find that they cannot return. Individuals who are stateless often have a difficult time seeking asylum in other countries even though the loss of legal identity isn’t their fault.

Another theme that emerged from our research was the role that discrimination plays in producing stateless populations. As mentioned above, many countries discriminate against women in their nationality laws. There are also a number of groups denied citizenship due to ethnic or religious discrimination. For instance, over 1 million Rohingya Muslims in Myanmar do not have citizenship because of religious discrimination against Muslims. In the Dominican Republic, new laws have stripped up to 200,000 Haitian immigrants and their descendants of Dominican citizenship – even going so far as to deport thousands of people.

Ruta and I are interested in understanding how stateless individuals navigate a world in which they might possess overlapping ethnic/religious/national identities but lack a legal identity. The feeling these individuals describe most often is invisibility. With this project, we’re aiming to give voice to an individual (or group of individuals) unclaimed by a state government.

Concept

The project is a virtual reality film that explores narratives of statelessness and displacement. The idea is to find a compelling personal story and pair it with stunning visuals that match the content of the story. We want the narrative itself to drive the tenor and the mood of the film.

In terms of audience participation, Ruta and I felt more drawn to a heavily curated experience in which the audience hears the audio of a story and explores places that appear in the narrative. We think the best user experience will be one in which the narrative is somewhat more controlled rather than exploratory. We want to maintain an emotional, personal tone to the piece.

Audio will obviously play a huge role in the realization of this project, so we want to make sure the recording we use is itself a character in the film.

Next steps

Right now Ruta and I are reaching out to people we know who are experts in the field of human rights law, refugees, and migration. We’re hoping to make connections with people who are advocating for stateless individuals and find the right story or individual to drive this piece.

I’ve reached out to the following friends/experts:

  • Devon C. – Works for the UNHCR to resettle refugees in the Middle East and has been an advocate for refugees in the current Syrian refugee crisis (see her recent Foreign Policy piece).
  • Thelma Y. – Worked as an activist in Myanmar.
  • Estee W. – A student at UPenn law studying discriminatory employment laws in Arab countries. Studied women’s employment laws in Jordan on a Fulbright scholarship.