This week, I decided to experiment with a few different ideas and technologies in order to further develop my thesis project. Here are some of the projects/experiments I worked on:

Experiment #1: Chrome Extension (for Facebook).

A Chrome extension that swaps all the pictures in your Facebook feed with the logos of the advertisers that currently have your contact information.

See the code repo here.

I started by downloading my entire Facebook archive (do it yourself). Inside the archive, I found a list of all the advertisers who have my contact information – more than 200 entities in total. The list shocked me, especially because a number of the entries were data collection companies and politicians running for office.

I took that list of advertisers and decided to scrape Google Images to download all their logos.
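For reference, here's a rough sketch of the kind of logo scraper I used, written with requests and BeautifulSoup. Google's image-search markup changes often, so the parsing step here is an assumption rather than a faithful copy of my actual code (and the advertisers.txt file is just the exported list, one name per line):

```python
# Sketch: download one logo thumbnail per advertiser from Google Images.
# Assumes advertisers.txt holds one name per line; Google's markup changes
# often, so treat the parsing step as illustrative rather than definitive.
import os
import requests
from bs4 import BeautifulSoup

HEADERS = {"User-Agent": "Mozilla/5.0"}
os.makedirs("logos", exist_ok=True)

with open("advertisers.txt") as f:
    advertisers = [line.strip() for line in f if line.strip()]

for name in advertisers:
    params = {"q": name + " logo", "tbm": "isch"}  # tbm=isch -> image search
    html = requests.get("https://www.google.com/search",
                        params=params, headers=HEADERS).text
    soup = BeautifulSoup(html, "html.parser")
    thumbs = [img.get("src") for img in soup.find_all("img")
              if img.get("src", "").startswith("http")]
    if not thumbs:
        continue
    img_data = requests.get(thumbs[0]).content  # grab the first thumbnail
    with open(os.path.join("logos", name.replace(" ", "_") + ".jpg"), "wb") as out:
        out.write(img_data)
```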

Then I shifted gears and built a Google Chrome extension that swaps all the images on Facebook with any images of your choosing. My version picks a random image from the folder of advertiser logos every time the page reloads.

Personally, I found the advertisements from various senators and politicians to be the most intrusive and unwanted.

Experiment #2: Facemash (for Facebook and LinkedIn).

A Python script that scrapes tagged images from Facebook and LinkedIn, and then identifies & overlays the faces using OpenCV.

See the code repo here.

I wrote several different Python scripts. One scrapes all the Facebook images you (or a friend) are tagged in. Once you have those images, you can run another script to identify the faces and overlay them on top of each other.

I used a few different Python packages and models, including OpenCV for face detection/warping and dlib for overlapping the images. Read detailed instructions here (many thanks to Leon for his helpful workshop).
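For a sense of the pipeline, here's a minimal sketch that detects faces with an OpenCV Haar cascade and averages the crops with NumPy. It skips the dlib landmark alignment that the real script relies on, so treat it as an illustration of the idea rather than the method from the workshop:

```python
# Minimal sketch: detect one face per image with OpenCV's Haar cascade,
# crop/resize the faces, and average them into a single "facemash" image.
# (No dlib landmark alignment here, so results are blurrier than the real script.)
import glob
import cv2
import numpy as np

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

faces = []
for path in glob.glob("tagged_photos/*.jpg"):
    img = cv2.imread(path)
    if img is None:
        continue
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(boxes) == 0:
        continue
    x, y, w, h = boxes[0]                          # take the first detected face
    face = cv2.resize(img[y:y + h, x:x + w], (256, 256))
    faces.append(face.astype(np.float32))

if faces:
    average = np.mean(faces, axis=0).astype(np.uint8)
    cv2.imwrite("facemash.jpg", average)
```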

Here are some examples for me and my sisters:

I wrote another Python script that scrapes all the profile pictures from your LinkedIn connections. I was able to scrape the first 40 connections and then ran those images through the facemash script. This is what my average LinkedIn connection looks like:
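The LinkedIn script is more fragile than the Facebook one. Roughly, it looks like the sketch below; the CSS selector is an assumption about LinkedIn's markup and will likely need updating, and you have to log in by hand when the browser window opens.

```python
# Sketch: collect profile-picture URLs from your LinkedIn connections page with
# Selenium (pre-Selenium-4 find_elements_by_* API). The selector is an
# assumption about LinkedIn's markup; log in manually when the window opens.
import os
import time
import requests
from selenium import webdriver

os.makedirs("linkedin_faces", exist_ok=True)

driver = webdriver.Chrome()
driver.get("https://www.linkedin.com/login")
time.sleep(60)  # log in by hand, then let the script continue

driver.get("https://www.linkedin.com/mynetwork/invite-connect/connections/")
time.sleep(5)
for _ in range(3):  # scroll a few times to load more connections
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    time.sleep(2)

imgs = driver.find_elements_by_css_selector("img.presence-entity__image")
for i, img in enumerate(imgs[:40]):  # first 40 connections
    url = img.get_attribute("src")
    if url and url.startswith("http"):
        with open("linkedin_faces/face_%02d.jpg" % i, "wb") as out:
            out.write(requests.get(url).content)

driver.quit()
```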

Experiment #3: Aristotle Search (for Twitter).

I’ve been thinking a lot lately about how we search for and filter information online, and the ways in which Twitter and Google, for instance, make decisions for you about what’s most relevant. What if you wanted the ability to filter Twitter results according to a different set of criteria?

Inspired by Ted Hunt’s Socratic Search, I built a sister search engine called Aristotle Search that filters Twitter results according to Aristotle’s three modes of persuasion: logos (appeal to logic), pathos (appeal to emotion), and ethos (appeal to character and credibility).
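The real ranking logic is more involved, but as a toy illustration of what "filtering by appeal" can mean, here's a keyword-based scorer that buckets tweets into logos, pathos, and ethos. The word lists are illustrative assumptions, not the ones Aristotle Search actually uses:

```python
# Toy sketch: bucket tweets into logos / pathos / ethos with keyword heuristics.
# The word lists are illustrative assumptions, not the project's real scoring model.
import re

APPEALS = {
    "logos":  {"because", "therefore", "data", "evidence", "study", "percent", "statistics"},
    "pathos": {"love", "hate", "fear", "outrage", "heartbreaking", "amazing", "tears"},
    "ethos":  {"expert", "professor", "official", "verified", "experience", "trust"},
}

def score_tweet(text):
    """Return a dict of appeal scores for a single tweet."""
    words = set(re.findall(r"\w+", text.lower()))
    return {appeal: len(words & keywords) for appeal, keywords in APPEALS.items()}

def filter_by_appeal(tweets, appeal):
    """Sort tweets by how strongly they register on the chosen appeal."""
    return sorted(tweets, key=lambda t: score_tweet(t)[appeal], reverse=True)

tweets = [
    "A new study shows unemployment fell 3 percent because of the policy.",
    "This is heartbreaking. I can't believe they would do this!",
    "As a professor with 20 years of experience, I can say this is wrong.",
]
print(filter_by_appeal(tweets, "logos")[0])
```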

The search engine is meant to be an exercise in speculative design that allows us to think about how a redesign of social platforms would change how we approach and engage with them. What if you approached Facebook with the intention to strengthen your relationship with family or reconnect with high school friends? What if you approached Google with a desire to challenge your own assumptions or seek clarity? (see: Socratic Search)


Clement Valla, The Universal Texture, 2012

I spoke with Taina Bucher and Surya Mattu today, who gave me excellent advice and direction on my thesis project.

Conversation #1: Be clear about your audience. 

In our conversation, Taina drew a distinction between her own work and the work of other researchers seeking to gauge digital literacy and algorithmic awareness (“do people know that they’re not seeing everything?”). Those researchers want to make the algorithm visible to its users. Taina, on the other hand, is more interested in users’ beliefs and expectations about how the algorithmic system should perform. Her research aims to understand not only how people believe the Facebook algorithm functions, but also their normative conceptions of what algorithms should do.

Of course, it’s difficult to measure feelings or beliefs, so I’ve found myself asking: How can I observe user beliefs about how a platform should or does perform? How can I design an interaction that creates or observes those experiences? Many researchers take a qualitative approach – talk to study participants, interview them – but what will be my creative rupture?

Sterling Crispin, Data Masks, 2014

Conversation #2: Make it real, not hypothetical.

The conversation with Surya was really productive – he immediately understood what I was trying to achieve with this project and pointed me to three or four projects aiming for a similar effect, including the Chrome Extension alter, which allows you to scroll through a screenshot of someone else’s News Feed. We talked about his work on the Black Box Chrome Extension, which he described as his attempt to poke at the few aspects of the Facebook algorithm that are public but not immediately visible.

One piece of advice Surya gave me was to focus on real data rather than the hypothetical. Sometimes the simplest intervention yields the most effective response. For instance, I told him about the research Taina Bucher has done collecting anecdotes from people who had had strange run-ins with the Facebook algorithm. He felt that her approach was powerful precisely because it focuses on the stories and images of real people rather than on theory. He recommended that I start experimenting and see what resonates most with users.

Alter, Sena Partel, 2016

 

After talking to Taina and Surya today, I feel like I’m ready to move forward and experiment with Facebook, including downloading my own Facebook archive, building a Chrome Extension, building a 3D model of my face from tagged pictures, scraping Reddit discussions about how the Facebook algorithm works, and using personal data in an unexpected way.

I often think about Jenny Odell’s observation about her own work: rather than take a new, unfamiliar technology and make something boring with it, she takes a familiar technology and renders it unfamiliar. I want to do the same with my thesis project.

This past week I experienced some setbacks. I completed my thesis statement, and the feedback I got was that the scope might be too ambitious and broad for a ten-week project. Moving forward, I will start building the tool and incorporate small-scale user research into the design process.

I reached out to expert developers and academics to talk about my thesis idea. Here is who I’ve made contact with:

  • danah boyd – head of Data & Society, principal researcher at Microsoft Research, adjunct professor at NYU
  • Tarleton L. Gillespie – principal researcher at Microsoft Research, adjunct professor at Cornell University
  • Surya Mattu – fellow at Data & Society, developer of Black Box tool
  • Hang Do Thi Duc – designer, developer of data selfie
  • Taina Bucher – professor at University of Copenhagen

So far I’ve spoken to Hang, who started developing her project data selfie while getting her MFA at Parsons. She had a lot of good advice, namely to think big but stay realistic about what needs to be completed for the MVP. She also raised some ethical considerations, including thinking about how personal data will be saved (her project saves everything locally and only uses a server to make API calls). She liked the idea of building a Chrome Extension that gives you recommendations and reminders about how the platform is collecting or tracking your personal data.

I have plans to Skype with Surya and Taina on Wednesday. Hopefully they can help me narrow my focus.

While I feel less pressure now that I’ve narrowed the scope, I’m not sure that the kind of tool I’m envisioning will answer my thesis question: How does the way algorithms see us change the way we see ourselves?

Today I began digging into the code of existing Chrome Extensions, such as Disconnect and show-facebook-computer-vision-tags, tools that boost awareness about how the Facebook algorithm is operating. This week, I’m planning to make a simple extension that manipulates the Facebook experience.

 

This week, we were asked to create a Joseph Cornell-style shadow box as an exercise in visualizing our thesis project. I decided to use the assignment as an opportunity to test out an idea I had about user experience design, memory, and Facebook.

I wanted to use the analogy of a city to understand the process by which users make sense of opaque processes like algorithms. Cities, like algorithms, are massive and hard to wrap our heads around.

According to one article I read, “City planner Kevin Lynch developed design principles for urban design by asking city dwellers to sketch maps of their environments from memory. In so doing, he learned what features of a city are more or less memorable in support of a ‘cognitive image.’ Based on an assumption that easily ‘imaged’ cities make for better cities, he then moved to develop design recommendations for urban planners.”

I decided to apply this exercise of drawing from memory to the Facebook algorithm. I asked 6 friends to draw their Facebook News Feed from memory (without looking at their Facebook page). Here were the results:

Some observations:

First, I noticed that 4 people drew the browser version of Facebook and 2 drew the Facebook mobile app. I hadn’t specified a version in my directions, so it was interesting to see which one they jumped to first.

Second, I noticed that there were certain UI features people tended to remember more often. Here’s what was most visible/memorable:

  • Upper right bar (notifications/menu/home) (6/6)
  • Friend updates (6/6)
  • FB logo (5/6)
  • Advertisements (4/6)
  • Events (4/6)
  • Trending news (4/6)
  • FB chat (3/6)
  • Comments/likes (3/6)
  • Status prompt, “What’s on your mind?” (3/6)
  • Search bar (2/6)
  • Friend live updates (2/6)
  • Sponsored posts (1/6)
  • Birthdays (1/6)
  • Left sidebar options (1/6)

Here’s what wasn’t visible/memorable:

  • Lower right hand Search (0/6)
  • Upper right hand question mark/help (0/6)
  • Create a Post & Photo/Video Album (0/6)
  • Photo/Video & Feeling/Activity prompts (0/6)
  • Public vs Private sharing option (0/6)
  • Left sidebar Shortcuts & Explore & Create options (0/6)
  • Stats & info about pages for which they’re admin (0/6)

It makes sense that what we remember most is, first, the overall architecture of the site and, second, the features we engage with most often. Most of my participants remembered the notifications, their friends’ updates, news, events, and the right-hand advertisements.

The following is a first draft of a literature review for my thesis project, which will look at how algorithms online shape user behavior and how user beliefs about the platform recursively shape the algorithm.

Algorithms as biopower

Foucault reminds us that power is not static, nor does it emanate from a center of origin; rather, power exists in an enmeshed network. In other words, power is not applied to individuals—it passes through them.

The digital era of online advertising has ushered in a new type of data collection aimed at maximizing profits by serving up advertisements based on modular, elastic categories. In the past, consumers were categorized based on demographic and geographic data available in the census. As marketers moved online over the past two decades, however, they were able to use data from search queries to build user profiles on top of these basic categories. The subsequent construction of “databases of intentions” helps marketers understand general trends in social wants and needs and consequently influence purchase decisions (Cheney-Lippold, 2011).

Through use patterns online, an individual may be categorized based on her gender, her race, her age, her consumption patterns, her location, her peers, and any number of relevant groupings. Online users are categorized through “a process of continual interaction with, and modification of, the categories through which biopolitics works” (Cheney-Lippold, 2011). Medical services and health-related advertisements might be served to that individual based on that categorization process, meaning that those who are categorized as Hispanic, for instance, might not experience the same advertisements and opportunities as those categorized as Caucasian.

In order to govern populations according to Foucault’s prescription for social control, biopower requires dynamic, modular categories that have the ability to adapt to the dynamic nature of human populations. In this system, the personal identity of the individuals matters less than the categorical profile of the collective body. Cheney-Lippold argues that soft biopower works by “allowing for a modularity of meaning that is always productive—in that it constantly creates new information—and always following and surveilling its subjects to ensure its user data are effective” (2011).


Foucault argues that surveillance exerts a homogenizing, “normalizing” force on individuals who are being monitored. When algorithms are employed in systems of selective surveillance, the personal identity of an individual matters less than the categorical profile of the group as a whole. It is this “normalizing” effect that I am most interested in understanding on the individual level.

Algorithms as interface

In recent years, researchers in the social sciences have worked to understand how Facebook users engage with the News Feed algorithm, which dictates what content they see in their feed. Many researchers have studied the degree to which people become aware of such algorithms, how people make sense of and construct beliefs about them, and how an awareness of algorithms affects people’s use of social platforms.

Much research has been done on the question of ‘algorithm awareness’ – the extent to which people are aware that “our daily digital life is full of algorithmically selected content.” Eslami et al. (2014) raise several questions, including: How aware do users need to be of the algorithms at work in their daily internet use? How visible should computational processes be to users of a final product?

To answer the first question, several studies have attempted to gauge how aware Facebook users are of the algorithm. In one study of Facebook users, Eslami et al. (2015) found that the majority were not aware their News Feed had been filtered and curated. The authors created a tool, FeedVis, that allowed users to see how their News Feed was being sorted. Many of the study participants disclosed that they had previously made inferences about their personal relationships based on the algorithm’s output and were shocked to learn that the output was not a reflection of those relationships. The authors suggest that designers think about ways to give users more autonomy and control over their News Feed without revealing the proprietary workings of the algorithm itself.

A different study by Rader and Gray (2015) concluded that the majority of Facebook users were, in fact, aware that they were not seeing every post from their friends. The authors were interested in understanding how user beliefs about the Facebook news feed – accurate or not – shape the way they interact with the platform. “Even if an algorithm’s behavior is an invisible part of the system’s infrastructure,” they write, “users can still form beliefs about how the system works based on their interactions with it, and these beliefs guide their behavior.” Furthermore, such user beliefs about how the system works “are an important component of a feedback loop that can cause systems to behave in unexpected or undesirable ways.” They argue that we need more use cases where user and algorithm goals are in conflict as part of the design process. They also suggest that designers rethink their approach to making the mechanisms of the algorithm seamless or invisible—for instance, leaving clues within the interface that indicate how the system is working.

Martin Berg’s research attempts to track the ways in which personalized social feeds are shaped by the experienced relationship between the self and others (2014). He conducted a study in which participants wrote daily self-reflexive diaries about their own Facebook use. The study found that participants expressed a certain insecurity or strangeness in seeing their social boundaries collapse on Facebook. Berg argues that the algorithm acts at once as an architecture, a social space, and a social intermediary: Facebook posts function as a social meeting point for friends. Furthermore, the “harvesting [of] personal and interactional data on Facebook” forms the basis of a “virtual data-double” in which the self is “broken into distinct data flows.” His research supports the idea that the user both shapes and is shaped by the Facebook algorithm.

Building on the concept of algorithmic awareness, social scientist Taina Bucher seeks to map out the emotions and moods of the spaces in which people and algorithms meet. She develops the notion of “the algorithmic imaginary,” ways of thinking about what algorithms are, what they should be, and how they function (2017). Since such ways of thinking ultimately mold the algorithm itself, she argues that it is crucial that we understand how algorithms make people feel if we want to understand their social power. In a recent study, she examines personal stories about the Facebook algorithm through tweets and interviews with regular users of the platform. In her own words, she looks at “people’s personal algorithm stories – stories about situations and disparate scenes that draw algorithms and people together.” (2017). By taking an episodic, qualitative approach, Bucher constructs a picture of the disparate emotions generated by interactions with algorithms.

References:

Agamben, G. (1998) Homo Sacer: Sovereign Power and Bare Life. Stanford: Stanford University Press.

Agamben, G. (2005) State of Exception. Chicago: The University of Chicago Press.

Berg, M. (2014) ‘Participatory trouble: Towards an understanding of algorithmic structures on Facebook’, Cyberpsychology: Journal of Psychosocial Research on Cyberspace, 8(3), article 2.

Bucher, T. (2017), ‘The algorithmic imaginary: exploring the ordinary affects of Facebook algorithms’, Information, Communication & Society, 20:1, 30-44.

Bucher, T. (2012), ‘Want to be on the top? Algorithmic power and the threat of invisibility on Facebook’, New Media & Society 14(7): 1164-1180.

Cheney-Lippold, J. (2011) ‘A New Algorithmic Identity: Soft Biopolitics and the Modulation of Control’, Theory, Culture & Society 28(6): 164-181.

Eslami, M., Rickman, A., Vaccaro, K., Aleyasen, A., Vuong, A., Karahalios, K., Hamilton, K., Sandvig, C. (2015) ‘“I always assumed that I wasn’t really that close to [her]”: Reasoning about invisible algorithms in news feeds’, CHI 2015, ACM Press.

Eslami, M., Hamilton, K., Sandvig, C., Karahalios, K. (2014) ‘A Path to Understanding the Effects of Algorithmic Awareness’, CHI 2014, ACM Press.

Foucault, M. (1977) Discipline and Punish: The Birth of a Prison. London: Penguin.

Foucault, M. (1990) The History of Sexuality: The Will to Knowledge. London: Penguin.

Foucault, M. (2003) Society Must Be Defended: Lectures at the Collège de France, 1975-1976. New York: Picador.

Hier, S. (2003) ‘Probing the Surveillant Assemblage: On the Dialectics of Surveillance Practices as Processes of Social Control’, Surveillance & Society 1(3): 399-411.

Monahan, T. (2010) Surveillance in the Time of Insecurity. New Jersey: Rutgers University Press.

Rader, E. & Gray, R. (2015) ‘Understanding User Beliefs About Algorithmic Curation in the Facebook News Feed’, CHI 2015, Crossings: 173-182.

Rader, E. (2016) ‘Examining User Surprise as a Symptom of Algorithmic Filtering’, International Journal of Human-Computer Studies.

Schmitt, C. (1922) Political Theology: Four Chapters on the Concept of Sovereignty. Chicago: University of Chicago Press.

I use Google to search for answers to questions I don’t want to ask a human being. While most of my searches are done out of necessity (“how to use git no deep shit”) or urgency (“ruby’s nyc address”), I also turn to Google to answer questions I’m too embarrassed to ask my friends. Our Google searches therefore reveal a side of us that we may not want shared with the public.

I decided to make a website exploring how YouTubers attempted to answer some of the questions I asked Google in 2014. See the site here.

I started by downloading my entire Google search history, spanning the years 2013-2017. The zip file contains some ugly JSON files, so I used Python to generate lists of searches organized by year. Then I programmatically cleaned up the lists to weed out Google Maps & Flights searches. This was the result for 2013, for instance:

Next, I filtered the Google searches down to all the instances that included the words “how to.” I wanted to get a snapshot of what I was trying to learn from the internet in that particular year. Some examples from 2014:
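Roughly, the cleaning and filtering steps looked like the sketch below. The file layout and key names (event, query_text, timestamp_usec) match the Takeout export I received at the time; newer archives are laid out differently, so treat them as assumptions.

```python
# Sketch: pull search queries out of the Takeout JSON files, bucket them by year,
# drop Maps/Flights noise, and keep only the "how to" searches.
# Key names follow the export I had; newer Takeout archives use a different layout.
import glob
import json
from collections import defaultdict
from datetime import datetime

searches_by_year = defaultdict(list)

for path in glob.glob("Searches/*.json"):
    with open(path) as f:
        data = json.load(f)
    for event in data.get("event", []):
        query = event["query"]
        text = query["query_text"]
        usec = int(query["id"][0]["timestamp_usec"])
        year = datetime.fromtimestamp(usec / 1e6).year
        searches_by_year[year].append(text)

def clean(queries):
    """Drop Maps/Flights-style queries that aren't really questions."""
    noise = ("maps", "directions to", "flights to", "flight from")
    return [q for q in queries if not any(n in q.lower() for n in noise)]

how_tos_2014 = [q for q in clean(searches_by_year[2014]) if "how to" in q.lower()]
print(len(how_tos_2014), "how-to searches in 2014")
```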

Then I wrote a Python script that takes that array of Google searches and programmatically searches for them on YouTube, downloading whatever video tutorial comes up as the first result. I used Selenium WebDriver with PhantomJS to browse and scrape the videos for me. You can see my full code here.
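Here's a condensed sketch of the search-and-grab step. It uses the older Selenium API that still ships a PhantomJS driver, the CSS selector for YouTube result links is an assumption about the page markup, and the download is handed off to youtube-dl as a stand-in for what my full script does:

```python
# Sketch: search YouTube for each "how to" query with headless PhantomJS,
# take the first result's URL, and hand it to youtube-dl to download.
# Uses the pre-Selenium-4 API (webdriver.PhantomJS, find_element_by_*).
import subprocess
from urllib.parse import quote_plus
from selenium import webdriver

driver = webdriver.PhantomJS()

def first_video_url(query):
    """Return the URL of the first YouTube result for a search query."""
    driver.get("https://www.youtube.com/results?search_query=" + quote_plus(query))
    link = driver.find_element_by_css_selector("a[href*='/watch?v=']")  # first watch link
    return link.get_attribute("href")

queries = ["how to bounce back", "how to get over a breakup"]  # sample 2014 searches
for q in queries:
    subprocess.call(["youtube-dl", "-o", "videos/%(title)s.%(ext)s", first_video_url(q)])

driver.quit()
```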

When I started this project, I knew I wanted to explore the culture of YouTube tutorials using my own searches as a starting point. I wanted to know how different online communities were attempting to answer and work through my questions.

What I found interesting was the way my questions were interpreted. A simple question like “how to bounce back” resulted in a trampoline how-to video. A question about “how to get over a breakup” resulted in a post-breakup makeup tutorial (side note: I had no idea that there is a huge subculture of makeup tutorials on YouTube, complete with its own norms and signifiers). If I had searched on Reddit or WebMD, for instance, the results would similarly have been a product of the language of that online community.

I studied Arabic and Middle East politics for my undergraduate degree and lived in Jerusalem and Cairo. Lately I’ve been following the steady rise of Islamophobia in the United States with concern. Recent events – namely the ban on Muslims entering the U.S. under Trump – have made me think about ways I can work to combat widespread ignorance about, and prejudice toward, Islamic culture.

So I made a simple Twitter bot called Islamic Art Bot.

The Metropolitan Museum of Art has an extensive Islamic art collection, with over 444,000 items in its Islamic Art archive. I was inspired by Darius Kazemi’s Museum Bot, which tweets out a random item from the Met’s entire archive. I decided to tweak some of the code and then scrape quotes and sayings from a handful of well-known Muslim poets and writers. The result is a bot that tweets out words and images every hour.
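Stripped down, the tweeting loop looks something like this sketch. It assumes the quotes and images have already been scraped to disk and that Twitter credentials live in environment variables; the file names here are placeholders, not the ones in the repo.

```python
# Sketch: every hour, tweet a random scraped quote with a random scraped image.
# Assumes ./images/ and quotes.txt were produced by the scrapers; credentials
# come from environment variables. Uses tweepy's older update_with_media call.
import glob
import os
import random
import time
import tweepy

auth = tweepy.OAuthHandler(os.environ["TW_KEY"], os.environ["TW_SECRET"])
auth.set_access_token(os.environ["TW_TOKEN"], os.environ["TW_TOKEN_SECRET"])
api = tweepy.API(auth)

with open("quotes.txt") as f:
    quotes = [line.strip() for line in f if line.strip()]
images = glob.glob("images/*.jpg")

while True:
    quote = random.choice(quotes)[:140]        # stay under the old character limit
    image = random.choice(images)
    api.update_with_media(image, status=quote)
    time.sleep(60 * 60)                        # wait an hour
```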

I want to continue adding content for the bot to tweet. Next, I want to find an archive with examples of Islamic architecture. I also want to add more writers, especially contemporary writers.

You can find the bot’s code over here at GitHub. For the web scrapers, I used a Python library called BeautifulSoup. My web scraper code can be found here.

Like most of the people I’ve talked to this week, I’m overwhelmed by both the scale and the velocity with which the Trump/Bannon administration has undermined basic constitutional rights within its first week in office. Furthermore, the administration sends messages that are simply false (“The ban isn’t a Muslim ban”, “The ban doesn’t affect U.S. green card holders”, “Protesters are being organized and funded by CAIR”). Part of the issue is that the Trump administration provided no guidance to the Department of Homeland Security on how the Executive Order was to be enforced, leaving such decisions to the vagaries of local law enforcement.

In recent days, it’s become clear that such pronouncements have Steve Bannon’s fingerprints all over them. A white nationalist with isolationist impulses, Bannon had been disseminating his views as the executive chairman of Breitbart. As an organization, Breitbart generates false, unverified stories aimed at stoking fear among white nationalists.

I decided to scrape the headlines from Breitbart’s homepage and run them through a Markov chain to generate newer, even faker headlines. If the original headlines were dubious, the new ones are even more suspect. Here’s a sample:

You can find all my code here.
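For the curious, here's a compact sketch of the scrape-and-remix step. The headline selector is an assumption about Breitbart's markup, and the markovify library stands in for whatever Markov-chain code the repo actually uses:

```python
# Sketch: scrape homepage headlines, train a small Markov model on them,
# and generate new fake headlines. The "h2 a" selector is an assumption
# about the page markup; markovify stands in for a hand-rolled Markov chain.
import requests
from bs4 import BeautifulSoup
import markovify

html = requests.get("http://www.breitbart.com/",
                    headers={"User-Agent": "Mozilla/5.0"}).text
soup = BeautifulSoup(html, "html.parser")
headlines = [a.get_text(strip=True) for a in soup.select("h2 a")
             if a.get_text(strip=True)]

if headlines:
    # One headline per line, so NewlineText treats each as its own "sentence".
    model = markovify.NewlineText("\n".join(headlines), state_size=1)
    for _ in range(10):
        fake = model.make_short_sentence(140)
        if fake:
            print(fake)
```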

I also found a video online of Bannon lecturing in front of (no joke) a painting that includes the Bill of Rights, an American flag, and the Liberty Bell. So I threw in real Breitbart headlines I’d scraped and made this:

THESIS QUESTIONS

How are our online behaviors being interpreted and understood by machine learning algorithms? How do we adjust our behavior when we know it’s being surveilled and categorized? To what extent do we come to see and identify ourselves through the ‘eyes’ of the algorithm? How do users adjust their online behavior in response to algorithms?

Description

With my project, I want to take several different approaches to addressing the same set of questions.

First, I plan to do user research in the form of individual anecdotes and broader surveys. I want to understand how prediction and recommendation engines, powered by increasingly accurate machine learning algorithms, are shaping our behaviors online. More importantly, I want to gain insight into how these mechanisms make us feel when we encounter them. I plan to send out an initial survey next week that gets to the heart of some of these questions.

Second, I intend to build a tool that gives users greater visibility into how algorithms are constructing a portrait of them based on their online behavior. What advertisements did they click? Who are their friends? What did they last purchase? I’m still not sure what form the tool itself will take but I plan to continue researching and referencing the work done by other researchers and activists.

Third, I want to gather up all my findings – both qualitative and quantitative – and present them in an engaging, exploratory way. I will likely write a research paper summarizing what I’ve discovered, but I also want to make that research accessible and educational to the average internet user.

Research Approach

First, the literature. I’ve started reading a number of books and academic articles that are relevant to this topic. Wendy Chun’s books Programmed Visions and Updating to Remain the Same have already been central to my research. I also plan to read Alexander Galloway’s Protocol, Patrick Hebron’s Learning Machines, and Cathy O’Neil’s Weapons of Math Destruction. I’m making my way through Microsoft Research’s summary of academic articles related to critical algorithm studies. One article that’s been helpful in understanding user anecdotes is Taina Bucher’s “The algorithmic imaginary: exploring the ordinary affects of Facebook algorithms,” which takes an ethnographic approach to understanding how users interact with algorithms online.

Second, the user research. I’m going to conduct my own user research in the form of surveys and the collection of individual anecdotes. I want to pinpoint specific interactions that users find particularly unnerving, creepy, benign, or invisible. I also want to understand how knowing that their news feed is filtered affects the way they interact with the platform.

Third, the experts. I want to get in touch with several researchers and artists who are already making strides in this field. I’m planning to reach out this week.

Personal Statement

Many people remember the day they first logged online or the day they got their first Gmail account. I remember the exact day Facebook introduced its News Feed, a feature that allowed users to see what their friends were talking about on the platform. I remember going to high school that day and talking with my friends about the strangeness of it all, the experience of seeing what other people were commenting on and liking. And yet within days we had accepted and embraced the changes to the platform.

Since that day, Facebook has rolled out a number of changes to its platform, many of which we don’t notice because they are minor tweaks to the algorithm that dictates what information we see and what information is rendered invisible. More recently, machine learning tools have thrown a whole new set of problems into the mix, as such algorithms become increasingly nebulous and less transparent. I’m interested in understanding how algorithms – not just on Facebook, but on every platform – make us feel when we notice them. I also want to understand how users adjust their behavior in dialogue with such algorithms.

Much of my work at ITP has been focused on data privacy, surveillance culture, and the blurring of public and private spaces. I intend my thesis to be a continuation of past research and projects.