Here’s my final 10-minute presentation I gave at ITP, summarizing the work I’d done during the semester on my thesis:
Yesterday I had the opportunity to user test my thesis project as it exists in its current state at the Quick & Dirty show. Since I’ve been doing some disparate experiments, I decided to show two of the pieces in an attempt to get feedback on what works, what feels compelling, and how the projects might be better synthesized.
First, I showed a web application I built that uses IBM Watson’s Personality Insights API (i.e. psychometrics) to make assumptions about who you are as a person. The user logs into Facebook in the application and then a dashboard appears that shows them their predicted psychological makeup and purchasing habits. I tried to take a satirical/speculative approach, suggesting what psychometrics could look like in the future.
Second, I showed the work I had done on generating 3D facial models from 2D images. The idea is that after a user logs into Facebook, the application will automatically produce a 3D model of their face just from their Facebook photos. Earlier in the day, I had 3D printed a face, so for the show I projected the isomap facial image on top of the 3D model to lend the 3D experiment more tactility.
People responded really well to the visual aspect of the project and expressed a desire to see more of a connection between this visual and the psychometric web app.
Overall, the feedback was extremely useful. The common theme was a desire for a stronger framing of the project. How do I want the audience to feel as an end result? What kind of approach or tone should I be taking?
The past few weeks have allowed me to think deeply about what I want to get out of my thesis project and what form this project will take. I wrote last week of the idea of the “manufactured self” – a self that has been constructed socially by external sources of power.
I stumbled on Alexandru Dragulescu’s thesis paper Data Portraits: Aesthetics and Algorithms, which outlines his creative practice for data portraiture. He describes “the concept of data portraits as a means for evoking our data bodies” and showcases his “data portraiture techniques that are re-purposed in the context of one’s social network.”
With my project, I will attempt to create a portrait of each participant based only on his or her Facebook data. I want to use facial recognition models (C++ and Python), 3D modeling (Three.js, Blender), the Facebook Graph API, and IBM Watson’s Natural Language Processing and Personality Insights APIs.
After much experimentation, I have an overall idea of what the user flow will look like. There will be an online web application + a physical component. Here’s the flow:
(1) User logs into web application (with Facebook Oauth)
(2) Real-time analysis of personality + generate 3D facial model
(3) The 3D object is manipulated/distorted based on the personality insights (?)
(4) At the show, users will be able to take home a physical artifact of their data portrait (thermal print of the 3D model? An .obj? A list of personality insights?)
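The steps above can be sketched as a simple pipeline. Every function name below is a hypothetical placeholder for a real component, not an implementation:

```python
# Hypothetical sketch of the planned user flow (1)-(4).
# All function bodies are stand-ins for the real components.

def authenticate(fb_oauth_token):
    """(1) Log the user in via Facebook OAuth and fetch their data."""
    return {"posts": ["..."], "photos": ["..."]}

def analyze_personality(posts):
    """(2a) Send text to a personality API; returns trait percentiles."""
    return {"openness": 0.8, "conscientiousness": 0.4}

def build_face_model(photos):
    """(2b) Fit a 3D facial model from the user's tagged photos."""
    return {"vertices": [], "texture": None}

def distort_model(model, traits):
    """(3) Warp the mesh according to the personality insights,
    e.g. scaled by the dominant trait."""
    model["distortion"] = max(traits.values())
    return model

def make_artifact(model):
    """(4) Export a takeaway artifact: an .obj, a print, a list."""
    return "portrait.obj"

def data_portrait(token):
    data = authenticate(token)
    traits = analyze_personality(data["posts"])
    model = distort_model(build_face_model(data["photos"]), traits)
    return make_artifact(model)
```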
This week, I used C++ and Python to get this library up and running, which allows you to create a 3D model of a face from a 2D image. I spent a significant amount of time trying to install the library, generate the landmark points, and run the analysis on my own images. Here’s what that process looked like:
I also got access to a few of IBM Watson’s APIs via the Python SDK. Specifically, I’m looking at the Personality Insights API, which analyzes a body of text (your Facebook likes, your Facebook posts, etc.). I ran the analysis on my own Facebook data and added the information from the generated JSON file to the website I built.
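The API returns the profile as JSON. Here is a minimal sketch of pulling the top-level Big Five percentiles out of that response; the field names follow the Personality Insights v3 response format, and the sample profile below is invented:

```python
import json

# Invented sample shaped like a Personality Insights v3 profile.
sample_profile = json.dumps({
    "personality": [
        {"trait_id": "big5_openness", "name": "Openness", "percentile": 0.91},
        {"trait_id": "big5_conscientiousness", "name": "Conscientiousness",
         "percentile": 0.34},
    ]
})

def big_five(profile_json):
    """Return {trait name: percentile} for the top-level Big Five traits."""
    profile = json.loads(profile_json)
    return {t["name"]: t["percentile"] for t in profile["personality"]}

print(big_five(sample_profile))
```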
You can see an example of what that analysis looked like on my own Facebook data:
I also decided to test my 2D to 3D model on an earlier image I had created of my composite face based on every Facebook photo I’ve been tagged in.
Last week I gave my midterm presentation and received some great feedback and suggestions. What resonated most with me was Sam’s point about the monetization, commodification, and production of the self that occurs on Facebook. How can I incorporate that more fully into my thesis project?
I’m still iterating on a few different ideas, but eager to find the final form that my project will take, whether it’s one fully-developed web application or several different experimental applications.
I found some visual inspiration that has fueled the project I’m working on this week.
Share Lab has been investigating ‘The Facebook Algorithmic Factory’ with the intention “to map and visualize a complex and invisible process hidden behind a black box.” The result is an exploration of the main segments of the process: Data Collection (“Immaterial Labour and Data harvesting”), Storage and Algorithmic Processing (“Human Data Banks and Algorithmic Labour”), and Targeting (“Qualified lives on discount”).
I was struck by not only the depth of research into Facebook’s policies and practices but also the beautiful (static) data visualizations produced as a way to clarify the research.
These data visualizations are simple but powerful. They left me thinking: How do I make this complex web personal? How do I communicate the ways in which this process immediately affects every Facebook user? Can I use the Facebook API to build a graphic that takes the user’s personal information (likes, friends, advertisements) and displays it in an interactive web-based application?
I want to make use of a lot of the good research done by Share Lab as well as my own research to build an interactive web application that helps users see how their personal data is collected, stored, and used in order to manufacture a self, or a “consumer profile.” I was struck by what Nancy said about Facebook manufacturing a self and I think this would be a good conceptual starting point.
Right now I’m starting to build the web application using the Facebook Graph API, Facebook CLI, and a D3 clustering algorithm. I’m starting by building a web application that collects information about user_likes clustered according to category.
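As a sketch of that first step, grouping a likes response by category into a D3-friendly structure might look like the following. The sample response is invented, though its shape follows the Graph API’s `user_likes` edge:

```python
from collections import defaultdict

# Invented sample shaped like one page of a Graph API /me/likes response.
likes_response = {
    "data": [
        {"name": "Radiohead", "category": "Musician/Band"},
        {"name": "The New Yorker", "category": "Media/News Company"},
        {"name": "Björk", "category": "Musician/Band"},
    ]
}

def cluster_by_category(response):
    """Group likes by category into a list of {category, children} nodes,
    a shape that D3 hierarchy/cluster layouts can consume directly."""
    clusters = defaultdict(list)
    for like in response["data"]:
        clusters[like["category"]].append(like["name"])
    return [{"category": c, "children": names} for c, names in clusters.items()]

print(cluster_by_category(likes_response))
```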
This week, I decided to experiment with a few different ideas and technologies in order to further develop my thesis project. Here are some of the projects/experiments I worked on:
Experiment #1: Chrome Extension (for Facebook).
A Chrome extension that swaps all the pictures in your Facebook feed with the logos of the advertisers that currently have your contact information.
I started by downloading my entire Facebook archive (do it yourself). I found a list of all the advertisers who have my contact information from Facebook – more than 200 entities in total. This information shocked me, especially because a number of them were data collection companies and politicians running for office.
I took that list of advertisers and decided to scrape Google Images to download all their logos.
Then I shifted gears and built a Google Chrome extension that swaps all the images on Facebook for any images of your choosing. I wrote code that picks a random image from the folder of advertisers every time the page reloads.
Personally, I found the advertisements from various senators and politicians to be the most intrusive and unwanted.
Experiment #2: Facemash (for Facebook and LinkedIn).
A Python script that scrapes tagged images from Facebook and LinkedIn, and then identifies & overlays the faces using OpenCV.
I wrote several different Python scripts. One scrapes all the Facebook images you (or your friend) are tagged in. Once you have those images, you can run another script to identify the face and then overlay the faces on top of each other.
I used a few different Python packages and models, including OpenCV for facial recognition/warping and dlib for overlapping the images. Read detailed instructions here (many thanks to Leon for his helpful workshop).
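The overlay step itself reduces to a pixel-wise mean once each photo has been cropped to the detected face and resized to a common shape. A minimal sketch of just that step (detection and alignment omitted; the arrays below stand in for already-aligned face crops):

```python
import numpy as np

def composite(faces):
    """Average a list of same-shaped uint8 face images into one image.
    Accumulate in float to avoid uint8 overflow, then convert back."""
    stack = np.stack([f.astype(np.float64) for f in faces])
    return stack.mean(axis=0).astype(np.uint8)

# Two fake 2x2 grayscale "faces" standing in for aligned crops.
a = np.full((2, 2), 100, dtype=np.uint8)
b = np.full((2, 2), 200, dtype=np.uint8)
print(composite([a, b]))  # every pixel is 150
```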
Here are some examples for me and my sisters:
I wrote another Python script that scrapes all the profile pictures from your LinkedIn connections. I was able to scrape the first 40 connections and then ran those images through the facemash script. This is what my average LinkedIn connection looks like:
Experiment #3: Aristotle Search (for Twitter).
I’ve been thinking a lot lately about how we search for and filter information online, and the ways in which Twitter and Google, for instance, make decisions for you about what’s most relevant. What if you wanted the ability to filter Twitter results according to a different set of criteria?
Inspired by Ted Hunt’s Socratic Search, I built a sister search engine called Aristotle Search that filters Twitter results according to Aristotle’s criteria for persuasive argument: logos (appeal to logic), pathos (appeal to emotion), and ethos (appeal to ethics).
The search engine is meant to be an exercise in speculative design that allows us to think about how a redesign of social platforms would change how we approach and engage with them. What if you approached Facebook with the intention to strengthen your relationship with family or reconnect with high school friends? What if you approached Google with a desire to challenge your own assumptions or seek clarity? (see: Socratic Search)
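Aristotle Search’s actual scoring method isn’t described here, but one way such a filter could be approximated is with simple keyword heuristics; the cue lists below are purely illustrative:

```python
# Illustrative heuristic: score a tweet against hand-picked cue words
# for each of Aristotle's three appeals and return the best match.
CUES = {
    "logos": {"because", "therefore", "data", "evidence", "study"},
    "pathos": {"love", "fear", "outrage", "heartbreaking"},
    "ethos": {"expert", "professor", "official", "trusted", "credentials"},
}

def classify(tweet):
    """Return the appeal (logos/pathos/ethos) whose cues best match."""
    words = tweet.lower().split()
    scores = {
        appeal: sum(w.strip(".,!?") in cues for w in words)
        for appeal, cues in CUES.items()
    }
    return max(scores, key=scores.get)

print(classify("A new study shows the data supports this, therefore..."))
```

A real version would need something far richer than word lists (e.g., a trained text classifier), but the filtering idea is the same: re-rank results by a criterion the platform doesn’t offer.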
Clement Valla, The Universal Texture, 2012
I spoke with Taina Bucher and Surya Mattu today, who gave me excellent advice and direction on my thesis project.
Conversation #1: Be clear about your audience.
In our conversation, Taina drew a distinction between the work that she has done and the work done by other researchers seeking to gauge digital literacy and algorithmic awareness (“do people know that they’re not seeing everything?”). They want to make the algorithm visible to its users. Taina, on the other hand, is more interested in user beliefs or expectations of how the algorithmic system should perform. Her research aims to understand not only how people believe the Facebook algorithm functions, but also their normative conceptions about what algorithms should do.
Of course, it’s difficult to measure feelings or beliefs, so I’ve found myself asking: How can I observe user beliefs about how a platform should or does perform? How can I create an interaction that creates or observes those experiences? Many researchers take a qualitative approach – talk to study participants, interview them – but what will be my creative rupture?
Sterling Crispin, Data Masks, 2014
Conversation #2: Make it real, not hypothetical.
The conversation with Surya was really productive – he immediately understood what I was trying to achieve with this project and gave me 3-4 references of projects that had tried to achieve a similar effect, including the Chrome Extension alter, which allows you to scroll through a screenshot of someone else’s News Feed. We talked about his work on the Black Box Chrome Extension, which he said was his attempt to try to poke at the few aspects of the Facebook algorithm that are public but not immediately visible.
One piece of advice Surya gave me was to focus on real data rather than the hypothetical. Sometimes the simplest intervention yields the most effective response. For instance, I told him about the research Taina Bucher has done collecting anecdotes from people who have had strange run-ins with the Facebook algorithm. He felt that she took a very powerful approach by focusing on the stories and images of real people rather than theorizing. He recommended that I start experimenting and see what resonates most with users.
After talking to Taina and Surya today, I feel like I’m ready to move forward and experiment with Facebook, including downloading my own Facebook archive, building a Chrome Extension, building a 3D model of my face from tagged pictures, scraping Reddit discussions about how the Facebook algorithm works, and using personal data in an unexpected way.
I often think about Jenny Odell’s observation about her own work: rather than take a new, unfamiliar technology and make something boring with it, she takes a familiar technology and renders it unfamiliar. I want to do the same with my thesis project.
This past week I experienced some setbacks. I completed my thesis statement, and the feedback I got was that the scope might be too ambitious and broad for a ten-week project. Moving forward, I will start building the tool and incorporate small-scale user research into the design process.
I reached out to expert developers and academics to talk about my thesis idea. Here is who I’ve made contact with:
- danah boyd – head of Data & Society, principal researcher at Microsoft Research, adjunct professor at NYU
- Tarleton L. Gillespie – principal researcher at Microsoft Research, adjunct professor at Cornell University
- Surya Mattu – fellow at Data & Society, developer of the Black Box tool
- Hang Do Thi Duc – designer, developer of data selfie
- Taina Bucher – professor at University of Copenhagen
So far I’ve spoken to Hang, who started developing her project data selfie while getting her MFA at Parsons. She had a lot of good advice, namely to think big but stay realistic about what needs to be completed for the MVP. She also raised some ethical considerations, including thinking about how personal data will be saved (her project saves everything locally and only uses a server to make API calls). She likes the idea of building a Chrome Extension that essentially gives you recommendations and reminders of how the platform is collecting or tracking your personal data.
I have plans to Skype with Surya and Taina on Wednesday. Hopefully they can help me narrow my focus.
While I feel less pressure now that I’ve narrowed the scope, I’m not sure that the kind of tool I’m envisioning will answer my thesis question: How does the way algorithms see us change the way we see ourselves?
Today I began digging into the code of existing Chrome Extensions, such as Disconnect and show-facebook-computer-vision-tags, tools that boost awareness about how the Facebook algorithm is operating. This week, I’m planning to make a simple extension that manipulates the Facebook experience.
This week, we were asked to create a Joseph Cornell-style shadow box as an exercise in visualizing our thesis project. I decided to use the assignment as an opportunity to test out an idea I had about user experience design, memory, and Facebook.
I wanted to use the analogy of a city to understand the process by which users make sense of opaque processes like algorithms. Cities, like algorithms, are massive and hard to wrap our heads around.
According to one article I read, “City planner Kevin Lynch developed design principles for urban design by asking city dwellers to sketch maps of their environments from memory. In so doing, he learned what features of a city are more or less memorable in support of a ‘cognitive image.’ Based on an assumption that easily ‘imaged’ cities make for better cities, he then moved to develop design recommendations for urban planners.”
I decided to apply this exercise of drawing from memory to the Facebook algorithm. I asked 6 friends to draw their Facebook News Feed from memory (without looking at their Facebook page). Here were the results:
First, I noticed that 4 people drew the browser version of Facebook and 2 people drew the Facebook mobile app. I hadn’t specified which in my directions, and it was interesting to see which version they jumped to first.
Second, I noticed that there were certain UI features people tended to remember more often. Here’s what was most visible/memorable:
- Upper right bar (notifications/menu/home) (6/6)
- Friend updates (6/6)
- FB logo (5/6)
- Advertisements (4/6)
- Events (4/6)
- Trending news (4/6)
- FB chat (3/6)
- Comments/likes (3/6)
- Status prompt, “What’s on your mind?” (3/6)
- Search bar (2/6)
- Friend live updates (2/6)
- Sponsored posts (1/6)
- Birthdays (1/6)
- Left sidebar options (1/6)
Here’s what wasn’t visible/memorable:
- Lower right hand Search (0/6)
- Upper right hand question mark/help (0/6)
- Create a Post & Photo/Video Album (0/6)
- Photo/Video & Feeling/Activity prompts (0/6)
- Public vs Private sharing option (0/6)
- Left sidebar Shortcuts & Explore & Create options (0/6)
- Stats & info about pages for which they’re admin (0/6)
It makes sense that what we remember most is, first, the overall architecture of the site and, second, the features we engage with most often. Most of my participants remembered the notifications, their friends’ updates, news, events, and the right-hand advertisements.
The following is a first draft of a literature review for my thesis project, which will look at how algorithms online shape user behavior and how user beliefs about the platform recursively shape the algorithm.
Algorithms as biopower
Foucault reminds us that power is not static, nor does it emanate from a center of origin; rather, power exists in an enmeshed network. In other words, power is not applied to individuals—it passes through them.
The digital era of online advertising has ushered in a new type of data collection aimed at maximizing profits by serving up advertisements based on modular, elastic categories. In the past, consumers were categorized based on demographic and geographic data available in the census. As marketers moved online over the past two decades, however, they were able to use data from search queries to build user profiles on top of these basic categories. The subsequent construction of “databases of intentions” helps marketers understand general trends in social wants and needs and consequently influence purchase decisions (Cheney-Lippold, 2011).
Through use-patterns online, an individual may be categorized based on her gender, her race, her age, her consumption patterns, her location, her peers, and any number of relevant groupings. Online users are categorized through “a process of continual interaction with, and modification of, the categories through which biopolitics works” (Cheney-Lippold, 2011). Medical services and health-related advertisements might be served to that individual based on that categorization process, meaning that those who are categorized as Hispanic, for instance, might not experience the same advertisements and opportunities as those categorized as Caucasian.
In order to govern populations according to Foucault’s prescription for social control, biopower requires dynamic, modular categories that have the ability to adapt to the dynamic nature of human populations. In this system, the personal identity of the individuals matters less than the categorical profile of the collective body. Cheney-Lippold argues that soft biopower works by “allowing for a modularity of meaning that is always productive—in that it constantly creates new information—and always following and surveilling its subjects to ensure its user data are effective” (2011).
Foucault argues that surveillance exerts a homogenizing, “normalizing” force on individuals who are being monitored. When algorithms are employed in systems of selective surveillance, the personal identity of an individual matters less than the categorical profile of the group as a whole. It is this “normalizing” effect that I am most interested in understanding on the individual level.
Algorithms as interface
In recent years, researchers in the social sciences have worked to understand how Facebook users engage with the News Feed algorithm, which dictates what content they see in their News Feed. Many researchers have studied the degree to which people become aware of such algorithms, how people make sense of and construct beliefs about these algorithms, and how an awareness of algorithms affects people’s use of social platforms.
Much research has been done on the question of ‘algorithm awareness’ – the extent to which people are aware that “our daily digital life is full of algorithmically selected content.” Eslami et al. (2014) raise several questions, including: How aware do users need to be of the algorithms at work in their daily internet use? How visible should computational processes be to users of a final product?
To answer the first question, several studies have attempted to gauge how aware Facebook users are of the algorithm. In one study of Facebook users, Eslami et al. (2015) found that the majority were not aware that their News Feed had been filtered and curated. The authors created a tool, FeedVis, that allowed users to see visually how their News Feed was being sorted. Many of the study participants disclosed that they had previously made inferences about their personal relationships based on the algorithm’s output and were shocked to learn that the output was not a reflection of those relationships. The authors suggest that designers think about ways to give users more autonomy and control over their News Feed without revealing the proprietary details of the algorithm itself.
A different study by Rader and Gray (2015) concluded that the majority of Facebook users were, in fact, aware that they were not seeing every post from their friends. The authors were interested in understanding how user beliefs about the Facebook news feed – accurate or not – shape the way they interact with the platform. “Even if an algorithm’s behavior is an invisible part of the system’s infrastructure,” they write, “users can still form beliefs about how the system works based on their interactions with it, and these beliefs guide their behavior.” Furthermore, such user beliefs about how the system works “are an important component of a feedback loop that can cause systems to behave in unexpected or undesirable ways.” They argue that we need more use cases where user and algorithm goals are in conflict as part of the design process. They also suggest that designers rethink their approach to making the mechanisms of the algorithm seamless or invisible—for instance, leaving clues within the interface that indicate how the system is working.
Martin Berg’s research attempts to track the ways in which personalized social feeds are shaped by the experienced relationship between the self and others (2014). He conducted a study in which participants wrote daily self-reflexive diaries about their own Facebook use. The study found that participants expressed a certain insecurity or strangeness in seeing their social boundaries collapse on Facebook. Berg argues that the algorithm acts as an architecture, a social space, and a social intermediary. Facebook posts function as a social meeting point for friends. Furthermore, the “harvesting [of] personal and interactional data” on Facebook forms the basis of a “virtual data-double” in which the self is “broken into distinct data flows.” His research supports the idea that the user is both shaped by and shapes the Facebook algorithm.
Building on the concept of algorithmic awareness, social scientist Taina Bucher seeks to map out the emotions and moods of the spaces in which people and algorithms meet. She develops the notion of “the algorithmic imaginary,” ways of thinking about what algorithms are, what they should be, and how they function (2017). Since such ways of thinking ultimately mold the algorithm itself, she argues that it is crucial that we understand how algorithms make people feel if we want to understand their social power. In a recent study, she examines personal stories about the Facebook algorithm through tweets and interviews with regular users of the platform. In her own words, she looks at “people’s personal algorithm stories – stories about situations and disparate scenes that draw algorithms and people together” (2017). By taking an episodic, qualitative approach, Bucher constructs a picture of the disparate emotions generated by interactions with algorithms.
Agamben, G. (1998) Homo Sacer: Sovereign Power and Bare Life. Stanford: Stanford University Press.
Agamben, G. (2005) State of Exception. Chicago: The University of Chicago Press.
Berg, M. (2014) ‘Participatory trouble: Towards an understanding of algorithmic structures on Facebook’, Cyberpsychology: Journal of Psychosocial Research on Cyberspace, 8(3), article 2.
Bucher, T. (2017), ‘The algorithmic imaginary: exploring the ordinary affects of Facebook algorithms’, Information, Communication & Society, 20:1, 30-44.
Bucher, T. (2012), ‘Want to be on the top? Algorithmic power and the threat of invisibility on Facebook’, New Media & Society 14(7): 1164-1180.
Cheney-Lippold, J. (2011) ‘A New Algorithmic Identity: Soft Biopolitics and the Modulation of Control’, Theory, Culture & Society 28(6): 164-181.
Eslami, M., Rickman, A., Vaccaro, K., Aleyasen, A., Vuong, A., Karahalios, K., Hamilton, K., Sandvig, C. (2015) ‘“I always assumed that I wasn’t really that close to [her]”: Reasoning about invisible algorithms in the news feed’, CHI 2015, ACM Press.
Eslami, M., Hamilton, K., Sandvig, C., Karahalios, K. (2014) ‘A Path to Understanding the Effects of Algorithmic Awareness’, CHI 2014, ACM Press.
Foucault, M. (1977) Discipline and Punish: The Birth of a Prison. London: Penguin.
Foucault, M. (1990) The History of Sexuality: The Will to Knowledge. London: Penguin.
Foucault, M. (2003) Society Must Be Defended: Lectures at the Collège de France, 1975-1976. New York: Picador.
Hier, S. (2003) ‘Probing the Surveillant Assemblage: On the Dialectics of Surveillance Practices as Processes of Social Control’, Surveillance & Society 1(3): 399-411.
Monahan, T. (2010) Surveillance in the Time of Insecurity. New Jersey: Rutgers University Press.
Rader, E. & Gray, R. (2015) ‘Understanding User Beliefs About Algorithmic Curation in the Facebook News Feed’, CHI 2015, Crossings: 173-182.
Rader, E. (2016) ‘Examining User Surprise as a Symptom of Algorithmic Filtering’, International Journal of Human-Computer Studies.
Schmitt, C. (1922) Political Theology: Four Chapters on the Concept of Sovereignty. Chicago: University of Chicago Press.
How are our online behaviors being interpreted and understood by machine learning algorithms? How do we adjust our behavior when we know it’s being surveilled and categorized? To what extent do we come to see and identify ourselves through the ‘eyes’ of the algorithm? How do users adjust their online behavior in response to algorithms?
With my project, I want to take several different approaches to addressing the same set of questions.
First, I plan to do user research in the form of individual anecdotes and broader surveys. I want to understand how prediction or recommendation engines, powered by increasingly accurate machine learning algorithms, are shaping our behaviors online. More importantly, I want to gain insights into how these mechanisms make us feel when we encounter them. I plan to send out an initial survey next week that gets to the heart of some of these questions.
Second, I intend to build a tool that gives users greater visibility into how algorithms are constructing a portrait of them based on their online behavior. What advertisements did they click? Who are their friends? What did they last purchase? I’m still not sure what form the tool itself will take but I plan to continue researching and referencing the work done by other researchers and activists.
Third, I want to gather up all my findings – both qualitative and quantitative – and present them in an engaging, exploratory way. I will likely write a research paper summarizing what I’ve discovered, but I also want to make that research accessible and educational to the average internet user.
First, the literature. I’ve started reading a number of books and academic articles that are relevant to this topic. Wendy Chun’s books Programmed Visions and Updating to Remain the Same have already been central to my research. I also plan to read Alexander Galloway’s Protocol, Patrick Hebron’s Learning Machines, and Cathy O’Neil’s Weapons of Math Destruction. I’m making my way through Microsoft Research’s summary of academic articles related to critical algorithm studies. One article that’s been helpful in understanding user anecdotes has been Taina Bucher’s “The algorithmic imaginary: exploring the ordinary affects of Facebook algorithms,” which takes an ethnographic approach to understanding how users interact with algorithms online.
Second, the user research. I’m going to conduct my own user research in the form of surveys and the collection of individual anecdotes. I want to pinpoint specific interactions that users find particularly unnerving, creepy, benign, or invisible. I also want to understand how a knowledge that their news feed is filtered affects the way they interact with the platform.
Third, the experts. I want to get in touch with several researchers and artists who are already making strides in this field. I’m planning to reach out this week.
Many people remember the day they first logged online or the day they got their first gmail account. I remember the exact day Facebook introduced its News Feed, a feature that allowed users to see what their friends were talking about on the platform. I remember going to high school that day and talking with my friends about the strangeness of it all, the experience of seeing what other people were commenting on and liking. And yet within days we had accepted and embraced the changes to the platform.
Since that day, Facebook has rolled out a number of changes to its platform, many of which we don’t notice or see because they are minor tweaks to the algorithm that dictates what information we see and what information is rendered invisible. Most recently, machine learning tools have thrown a whole new set of problems into the mix, as such algorithms become increasingly nebulous and less transparent. I’m interested in understanding how algorithms – not just on Facebook, but on every platform – make us feel when we notice them. I also want to understand how users adjust their behavior in dialogue with such algorithms.
Much of my work at ITP has been focused on data privacy, surveillance culture, and the blurring of public and private spaces. I intend my thesis to be a continuation of past research and projects.