Logo design by Katrina Ricks Peterson

algogossip is an exploratory project I’m currently developing, which was recently tested out at the 2022 Internet Yami-Ichi.

Last month I called up my friend, a full-time content creator. I’ve been reading a lot about the tactics people use to increase the chances of their posts being seen on social media, so I asked: Where did she go for advice and tips about how to post?

When you consider how many people rely on algorithmic visibility on social media for financial stability, questions about how social media algorithms work become absolutely critical. According to a recent SignalFire report, roughly 2 million people work as full-time, professional content creators on platforms like YouTube, Instagram, and Twitch. These platforms do not provide detailed information about how their algorithms work, for fear of being gamed. But at the same time, when changes to the algorithm are rolled out without warning, content creators are left scrambling to make sense of them.

Background

Over the past few years, I’ve been thinking about the stories people tell each other about their everyday interactions with social media algorithms. As part of my grad school thesis research in 2016, I asked people to share their stories about Facebook’s ad targeting: What kinds of strange experiences did they have with targeted ads? During my time as a Ford-Mozilla fellow in 2017, I collaborated with Coding Rights to explore this question further: What experiences were women having on the platform? What constitutes algorithmic harm? We talked to women who had had unsettling or confusing experiences with Facebook’s targeted ads. Most recently at Mozilla, I’m leading qualitative research that aims to get at the heart of people’s experiences with YouTube’s user control mechanisms: Do they feel like they have meaningful control over the system? How do they change their behavior in an attempt to exert control?

My research has been informed by scholars writing about how people engage with social media algorithms. Taina Bucher has written extensively about the “algorithmic imaginary” (Bucher 2017): the ways of thinking about what algorithms are, what they should be, and how they function that animate people’s encounters with them. She argues that the stories people offer up about algorithms are important because they have real social impact. Michael Ann DeVito and others argue that the folk theories people hold about social media algorithms serve as frames through which we can understand their reactions to change (DeVito et al. 2017). By looking seriously at the complaints people make about algorithms, they say, we can better understand the nature of “expectation violations.”

A paper about how users exercise control over social media algorithms (Burrell et al. 2019) says that the complaints people make about social media algorithms are important feedback signals. Citing Sara Ahmed’s writing on complaint as a feminist tactic, the authors write that “the act of complaint itself can be a way for people to record their grievances and build solidarity in the face of limited recognition by those with organizational power.” (Ahmed has since published a book titled Complaint! that looks at how complaints are made and what they can do, specifically through a Black feminist and feminist of color lens.)

My thinking on this project has been most shaped and inspired by Sophie Bishop’s excellent scholarship on the concept of “algorithmic gossip,” a term she defines as “communally and socially informed theories and strategies” about social media algorithms that people share with one another in order to boost financial stability and visibility on social media platforms (Bishop 2019). She says that “gossip is productive” and that it is an “important and under-studied form of knowledge production.”

Gossip, especially in its association with women, has historically been looked down upon and treated as trivial, intimate, and dangerous. It’s also a tactic that’s wielded in situations where a power asymmetry exists, and most often it’s wielded by marginalized groups. I think about the whisper networks at universities or at companies that have warned newcomers about problematic individuals, or have allowed people to quickly share important knowledge. Most importantly, gossip serves to subvert power: In the absence of good, accurate information about how a system works, people rely on one another to make sense collectively. 

The project

Back to the question I asked my friend: Where did she go for advice and tips about how to post? She told me that she was part of a group text with other friends who were content creators, where they shared tips and offered support. Many of them subscribe to industry newsletters or work with agencies that advise them on how and what to post. Others seek out internet forums for answers to their questions.

I started looking into some internet forums where these conversations take place. There are a number of subreddits dedicated to answering people’s questions about how the TikTok algorithm works, advice for boosting visibility on Instagram, avoiding/appealing shadowbans, and similar topics. The posts in these forums range from the didactic (“Post at least 6 times a day. Upload history like 2 of those. HASHTAGS VERY important.”) to the supportive (“Why don’t you try some challenges ? Like challenge people to do ex: 5 pushups everyday and have your own hashtag.”). There is a real sense of camaraderie, with posts expressing frustration (“I have been banned for more than a month now and they are not reviewing my appeal”) and affirmation (“Yeah this has been happening to my videos as well.”). I decided to use these posts as a starting point from which I could explore further.

Coding the project

After setting up Reddit API credentials, I scraped posts and their comments from these subreddits using PRAW (the Python Reddit API Wrapper), filtered by specific flairs (e.g. “Algorithm Question / Shadowbanned”). I imported a Python module written by Prakhar Rathi, then wrote a script that scraped the posts and comments and saved the dataset as a CSV. Finally, I combined the data and converted it into a JSON file.
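In rough strokes, the scraping step looked like this. The credentials, subreddit, and flair below are placeholders rather than the exact ones I used:

```python
import csv
import praw  # Python Reddit API Wrapper

# Placeholder credentials and subreddit/flair names, for illustration only.
reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="algogossip scraper",
)

rows = []
for submission in reddit.subreddit("Instagram").new(limit=500):
    # Keep only posts tagged with the flair we care about.
    if submission.link_flair_text != "Algorithm Question / Shadowbanned":
        continue
    submission.comments.replace_more(limit=0)  # flatten "load more comments"
    for comment in submission.comments.list():
        rows.append([submission.title, submission.selftext, comment.body])

with open("algogossip.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["post_title", "post_body", "comment"])
    writer.writerows(rows)
```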

I thought about what I wanted to do with this dataset of ‘algorithmic gossip’, especially in an art gallery setting. I considered curating a selection of the data in a book or zine. I also considered building an ML model to generate new advice from this dataset.

I thought about some of the previous exploration of voice technology I had done with my collective tendernet, and considered the ways we think of gossip as spoken: it has an aural quality. I got really excited about the creative potential for a voice interaction – could you call a phone number and get a voice message? Pick up an object, put it to your ear, and a message plays? Together with tendernet collaborator Zoe Bachman, I brainstormed some ideas and we agreed on an aesthetic: y2k tech girlie. Another collaborator, Katrina Peterson, took the aesthetic concept and iterated on some cute logo designs.

Ahead of the 2022 Internet Yami-Ichi, I decided to build a web-based piece (see the prototype here) that employs text-to-speech. I built the website in JavaScript, making use of the p5.js and p5.speech.js libraries. Each time you click the page, a new piece of ‘algorithmic gossip’ appears and is read out loud in an unnatural-sounding robotic voice. I experimented with different qualities for the voice, including a “whisper” (it sounded terrifying).

Testing it out

Testing out the piece at the Internet Yami-Ichi was a lot of fun. The energy of the event was very much a cross between an art book fair, a bazaar, and an art gallery. I talked to lots of people who came through about the concept behind the project and got some great ideas. I was also inspired by hearing more about Angie Waller’s work with Unknown Unknowns; she is also working with comments and pictures scraped from the internet.

I want to continue refining and exploring this dataset through different creative explorations. 

References

Sara Ahmed. 2021. Complaint! Duke University Press, Durham.

Sara Ahmed. 2018. Refusal, Resignation and Complaint. feministkilljoys. Retrieved May 7, 2022 from https://feministkilljoys.com/2018/06/28/refusal-resignation-and-complaint/

Sophie Bishop. 2019. Managing visibility on YouTube through algorithmic gossip. New Media & Society 21, 11–12 (November 2019), 2589–2606. DOI:https://doi.org/10.1177/1461444819854731

Sophie Bishop. 2020. Algorithmic Experts: Selling Algorithmic Lore on YouTube. Social Media + Society 6, 1 (January 2020), 205630511989732. DOI:https://doi.org/10.1177/2056305119897323

Taina Bucher. 2017. The algorithmic imaginary: exploring the ordinary affects of Facebook algorithms. Information, Communication & Society 20, 1 (January 2017), 30–44. DOI:https://doi.org/10.1080/1369118X.2016.1154086

Michael A. DeVito, Darren Gergle, and Jeremy Birnholtz. 2017. “Algorithms ruin everything”: #RIPTwitter, Folk Theories, and Resistance to Algorithmic Change in Social Media. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17), Association for Computing Machinery, New York, NY, USA, 3163–3174. DOI:https://doi.org/10.1145/3025453.3025659

Michael Ann DeVito. 2021. Adaptive Folk Theorization as a Path to Algorithmic Literacy on Changing Platforms. Proc. ACM Hum.-Comput. Interact. 5, CSCW2 (October 2021), 1–38. DOI:https://doi.org/10.1145/3476080

A few months ago I bought a beautiful hand-woven object off the internet. The object measures 5.5 inches by 5.5 inches and consists of dozens of thin threads tightly woven through small beads, stretched across a square resin frame.

When it first arrived, I enjoyed challenging friends by showing them the object without any context and asking them to identify what it is. One group of friends thought that it was some kind of weaving device. “Do you weave with it?” one friend asked. “Small loom for patching clothes,” guessed another. At first another friend thought it was a loom, but upon closer examination he noticed that the tiny threads woven throughout it are, in fact, thin wire filaments. “Does it carry an electric charge?” he asked.

Dimensions: 14 cm x 14 cm. Memory capacity: 4096 bits. Ferrite field: 64×64.

He was right. The object is what is known as a “ferromagnetic core memory,” an antiquated form of computer memory. As I started researching the origin of the object, I learned more about how the histories of computation, memory, textile production, and labor are intertwined.

Core memory was first developed in the 1950s and was the most common type of random-access computer memory until 1975. Random-access memory (RAM) is computer memory that can be read or written at any time, regardless of when it was saved. Core memory works as follows: Wires are tightly laced through small ferrite rings (known as cores). Ferrite is used because it holds a magnetic polarity once magnetized. Electric currents sent through the wires create magnetic fields, and depending on the direction of the field, each core is polarized positively or negatively. Those two opposing polarities correspond to 1 or 0: a single bit.
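To make the mechanics concrete, here is a toy simulation of a coincident-current core plane. It is a deliberate simplification (real planes use half-currents on X and Y drive lines plus sense and inhibit wires), but it captures two essential behaviors: only the core at the addressed intersection changes, and reading is destructive.

```python
# Toy model of a 64x64 coincident-current core plane (False = 0, True = 1).
# Real planes use half-currents on X and Y drive lines plus sense and
# inhibit wires; this sketch only mimics the resulting logic.

class CorePlane:
    def __init__(self, size=64):
        self.cores = [[False] * size for _ in range(size)]

    def write(self, x, y, bit):
        # Only the core where the energized X and Y wires cross receives
        # full current, so only that core's polarity is set.
        self.cores[y][x] = bit

    def read(self, x, y):
        # Reading drives the addressed core toward 0. If it held a 1, the
        # polarity flip induces a pulse on the sense wire: that pulse is
        # the bit. The controller then rewrites the 1 (destructive read).
        bit = self.cores[y][x]
        self.cores[y][x] = False
        if bit:
            self.write(x, y, True)
        return bit

plane = CorePlane()
plane.write(3, 12, True)
print(plane.read(3, 12))  # True, and the rewrite cycle preserved it
```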

Weaving core memory

In the United States, the lightweight quality of hand-woven core rope memory, “a technique of physically weaving software into high-density storage,” powered the early Apollo Guidance Computer that guided the Apollo missions to the Moon. The history of core rope memory has been well documented: highly skilled weavers and craft workers, most of whom were women, worked in a Raytheon factory in Waltham, Massachusetts to weave the core rope memory. There was a gendered aspect to this labor: The core rope was referred to as LOL memory (“Little Old Lady” memory). Journalists, engineers, and even a manager at Raytheon allegedly described this work as requiring no thinking and no skill.

The software for each flight was managed by a “rope mother” (who was usually male), although Margaret Hamilton, who is best remembered for overseeing the development of the Apollo software, served as rope mother for Luminary, the lunar module’s flight software.

Unnamed woman described as a “space age needleworker” in a Raytheon press kit (image: http://static.righto.com/images/agc-rope/rope-threader.jpg). Sources: Science News; Raytheon CN-4-20C / Smithsonian Institution WEB15435-2016.

In their paper “Making Core Memory: Design Inquiry into Gendered Legacies of Engineering and Craftwork,” Daniela Rosner and others explore how the high-status, male labor of building computers was powered by low-status craftwork carried out largely by women (specifically, women of color). According to Rosner, the work performed at Raytheon was described as “tender loving care” by the man who oversaw the Apollo Guidance Computer’s hardware.

Lisa Nakamura interrogates these ideas in her paper “Indigenous Circuits: Navajo Women and the Racialization of Early Electronic Manufacture,” which looks at the Indigenous women who built integrated circuits for the Apollo Guidance Computer. From 1965 to 1975, the Silicon Valley company Fairchild Semiconductor ran a circuit manufacturing plant on Navajo land in New Mexico, where it employed Navajo women. Nakamura demonstrates how racialized notions of labor shaped how value was conferred on the engineering/craftsmanship work those Navajo women carried out. According to Nakamura, the work the Navajo women did was described as “affective labor, or a ‘labor of love.’”

The critical contributions of these craftworkers – both the women weaving core memory in MA and the women building integrated circuits in NM – were systematically undervalued and largely erased from computing history until recently. Gendered and racialized notions about what is considered “real” tech work persist today.

The Saratov-2 computer

The ferrite core memory plate I own is a relic of Soviet computing history. The Saratov-2 microcomputer my core memory plate came from appears to have been uncovered in the ruins of a fire. Russian urban explorer Ralph Mirebs describes his discovery in a 2020 blog post, “Cemetery of Soviet Computers.” The Saratov-2 is apparently rare enough that no surviving examples, or even photos, of the “legendary machine” were known until the author came across the ruins. (He declined to share the location.)

Photos of the ruined Saratov-2 machines (https://i0.wp.com/rusue.com/wp-content/uploads/2019/01/2.jpg and https://i0.wp.com/rusue.com/wp-content/uploads/2019/01/3.jpg). Source: Ralph Mirebs

The Saratov-2 was a clone of the US minicomputer PDP-8/M. Cloning US computers was common practice at the time: In the 1970s, the USSR began getting its hands on PDP minicomputers with the intent of copying them. The PDP-8 was allegedly acquired by the USSR from a sunken US submarine and then reverse engineered by the Central Research Institute of Measuring Equipment (ЦНИИИА, or TsNIIIA) in the city of Saratov. At least, that’s what the Etsy posting says; a blog post written by another computer hobbyist investigates this claim further and can’t confirm it.

What made the Saratov-2 unique was that it didn’t have a microprocessor. Instead, it was broken down into its individual components, which sat in drawers. The ferrite core memory cube, the microcomputer’s RAM, was located in one such drawer.

What about the Saratov-2 core memory plate I own? What handiwork and labor did it require? It’s difficult to say who hand wove the core memory that powered these early microcomputers, since information about them is scant.

Source: Ralph Mirebs

According to Ralph Mirebs (site in Russian), the decimal number on the core memory plate I own begins with KhSHM, which suggests the plate was manufactured at TsNIIIA in Saratov during the 1970s. TsNIIIA, founded in 1958, specialized in the manufacture of electronic devices, including magnetic materials and integrated circuits. It is where the Saratov-2 was developed.

I was able to track down the location of TsNIIIA: the cluster of buildings is located at the intersection of Moskovskaya and Radishcheva streets in Saratov, Saratov Oblast, Russia.

[Photos: the TsNIIIA buildings in 1965 and as they currently appear on Google Maps.]

TsNIIIA closed in 1991, and a joint-stock company of the same name took its place. In 2017, the owners tried and failed to sell the 32 buildings. In 2021, it was announced that the buildings would be turned into a “technocenter.” It’s unclear what they are currently used for.

So, who were the artisans who worked in a TsNIIIA building to weave core memory for the Saratov-2 microcomputers? I’m really not sure. If anyone has more information, I’d love to learn more.

References

Nakamura, Lisa. “Indigenous Circuits: Navajo Women and the Racialization of Early Electronic Manufacture.” American Quarterly, vol. 66, no. 4, 2014, pp. 919–41. DOI.org (Crossref), https://doi.org/10.1353/aq.2014.0070.

Rankin, Joy Lisi. “Core memory weavers and Navajo women made the Apollo missions possible.” Science News. https://www.sciencenews.org/article/core-memory-weavers-navajo-apollo-raytheon-computer-nasa. Accessed 21 Apr. 2022.

Rosner, Daniela. Critical Fabulations: Reworking the Methods and Margins of Design. The MIT Press, 2018.

Rosner, Daniela K., et al. “Making Core Memory: Design Inquiry into Gendered Legacies of Engineering and Craftwork.” Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, ACM, 2018, pp. 1–13. DOI.org (Crossref), https://doi.org/10.1145/3173574.3174105.

Shorey, Samantha, and Daniela Rosner. “A Voice of Process: Re-Presencing the Gendered Labor of Apollo Innovation.” communication +1, vol. 7, no. 2, Mar. 2019, https://doi.org/10.7275/yen8-qn18.

Shirriff, Ken. “Software Woven into Wire: Core Rope and the Apollo Guidance Computer.” http://www.righto.com/2019/07/software-woven-into-wire-core-rope-and.html. Accessed 14 Jan. 2022.

Currently, I’m in a class on Algorithmic Poetry, in which we’ll experiment with integrating machine learning algorithms into our writing and creative practice.

Last week, we were prompted to think about how we would describe our own creative practice, especially when it comes to our coding or writing. Over the past few years, I’ve been reflecting on my “style” of making, which has always been scattered and emergent but also intensely relational. I’m driven by curiosity about a question or an idea, and often the art objects or code I produce are byproducts of the research process. I create through mapping and learning and studying, with various experimental outputs.

I recently read Natalie Loveless’ manifesto-book, A Manifesto for Research-Creation: How to Make Art at the End of the World, which I felt not only validates this practice-centered type of investigation, but also positions it as a fundamentally feminist mode of research that is focused on experimentation.

Of Donna Haraway’s book The Companion Species Manifesto, Loveless writes that it “implicitly argues that it is in allowing ourselves to be drawn by our loves, our intensive and extensive curiosities, attentive to what and whom we are driven to explore, and examining the complex web of relations that we inherit thereby, that we might inhabit research questions ethically” (27). In other words, the questions are never answered. They are always in the process of unfolding.

“A research-creational approach insists that it is to our deepest, doggiest, most curious loves that we are beholden, and that it is love – eros – that must drive our research questions as well as our methodological toolkits…. A multiplicity of responsive practices structured by situated (emergent, erotic, driven) accountability” (28).

In addition, Loveless reminds us that we must cultivate the erotic as our guide in our knowledge-making practices, a reference to Audre Lorde’s essay “Uses of the Erotic: The Erotic as Power.” When we are attuned and attentive to the things that bring us pleasure and joy, we are positioned to do our best research and work.

I’ve been reflecting on this “style” of curiosity-driven research and experimentation as I’ve revisited some of my past work writing code that generates text.

I’m still not quite sure what the output of some of my experimentation over the next month will look like, but I have had a renewed interest in textile design – specifically, producing objects on my knitting machine. Using punch cards, I’d love to translate some of the text generation from this class into physical, knitted textiles. You can see more of my ideas & references in this are.na channel.

Brian Eno, Oblique Strategies, 1974
  1. What kinds of forms and practices emerge when we turn away from the new and attend to the persistent, unsettled, and non-digital?
  2. What tensions might these forms and practices create with our typical practices of attribution and impact?
  3. How does sidelining the technological new allow us to pay attention to things in a different manner?

These three questions are at the heart of a 2018 paper “From HCI to HCI-Amusement: Strategies for Engaging what New Technology Makes Old,” in which two HCI practitioners resist the formal logic and structure of design workshops and instead take inspiration from the Fluxus movement to develop a set of “HCI-amusements.”

In the 1960s and 1970s, Fluxus emerged as an interdisciplinary creative practice in which artists, composers, designers, and poets engaged in experimental art that emphasized the process (research, archive, iterative “critical making”) rather than a finished output. Fluxus was characterized as a shared posture and language towards making, rather than an art movement. It was also decidedly “anti-art” in that artists strove to eliminate the boundaries between “art” and “non-art” spaces by integrating an iterative creative practice into everyday life, using everyday objects. The result was a set of art objects that were radically accessible.

In a parallel effort towards “critical making,” UC Berkeley offers a class aimed at getting students to think about the role of discomfort in design (see the paper “Uncomfortable Interactions” for a theoretical overview). Similarly, the project “Disobedient Objects” is a cookbook of sorts for subverting the utility of various objects, and serves as a conceptual starting point for thinking about “making the familiar unfamiliar.”

I’m thinking about these questions now:

  1. How do we attend to the non-digital in order to sensitize ourselves to new forms and processes?
  2. Given that human-centered and “persuasive” design are tools that have been co-opted by capitalism, what tactics can we use to subvert HCI? How do we inject friction, noise, slowness, and discomfort into design interactions?
  3. How do we design interfaces that are uncomfortable and subversive?
  4. What new design patterns might emerge?

Yesterday I had the opportunity to user test my thesis project as it exists in its current state at the Quick & Dirty show. Since I’ve been doing some disparate experiments, I decided to show two of the pieces in an attempt to get feedback on what works, what feels compelling, and how the projects might be better synthesized.

First, I showed a web application I built that uses IBM Watson’s Personality Insights API (i.e. psychometrics) to make assumptions about who you are as a person. The user logs into Facebook in the application and then a dashboard appears that shows them their predicted psychological makeup and purchasing habits. I tried to take a satirical/speculative approach, suggesting what psychometrics could look like in the future.

Second, I showed the work I had done on generating 3D facial models from 2D images. The idea is that after a user logs into Facebook, the application will automatically produce a 3D model of their face just from their Facebook photos. Earlier in the day, I had 3D printed a face, so for the show I projected the isomap facial image on top of the 3D model to lend the 3D experiment more tactility.

People responded really well to the visual aspect of the project and expressed a desire to see more of a connection between this visual and the psychometric web app.

Overall the feedback was so useful. I felt as if the common theme was a desire for a stronger framing of the project. How do I want the audience to feel as an end result? What kind of approach or tone should I be taking?

My Facebook metadata as landscape.

This semester, I’ve focused my attention on creative ways of interpreting and visualizing my personal Facebook data.

I’m interested in exploring the concept of “digital dualism” – the habit of viewing the online and offline as largely distinct (source). We are actively constructing our identities whether behind a screen or in person. As Nathan Jurgenson writes, “Any zero-sum “on” and “offline” digital dualism betrays the reality of devices and bodies working together, always intersecting and overlapping, to construct, maintain, and destroy intimacy, pleasure, and other social bonds.”

The exact location where I made a Facebook update.

With this project, I wanted to try re-inserting the digital world into the physical world. I decided to locate specific actions I took on Facebook within a physical geography and landscape.

It’s very easy to download your Facebook metadata from the website – all you have to do is follow these directions. In my data archive, I found information about every major administrative change I’ve made to my Facebook account since I created the account in 2006, including changes to my password, deactivating my account, changing my profile picture, etc. This information was interesting to me because from Facebook’s perspective, these activities were in all likelihood the most important decisions I had ever made as a Facebook user.

I rearranged that data into a simple JSON file:
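The entries below are invented placeholders, but they follow the same shape as the real file:

```json
[
  {
    "activity": "Changed password",
    "datetime": "2009-03-14 21:02:11",
    "ip_address": "198.51.100.23"
  },
  {
    "activity": "Deactivated account",
    "datetime": "2013-08-02 09:45:37",
    "ip_address": "203.0.113.7"
  }
]
```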

I decided to explore the IP Address metadata associated with each action. I wanted to know more about the physical location where I had made these decisions concerning my Facebook account, since I obviously didn’t remember where I was or what I was doing when I had made these changes.

I wrote a Python script (see code here) that performs several different actions for each item in the JSON file (a condensed sketch of the approach follows the list):

(1) Takes the IP address and finds the corresponding geolocation, including latitude & longitude & city/state;

(2) Feeds the latitude/longitude into Google Maps’ Street View and downloads 10 images, each rotated 5 degrees from the last;

(3) Adds a caption to each image specifying the Facebook activity, the exact date/time, and the city/state; and

(4) Merges the 10 images into a gif.
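Condensed, the script might look something like this. The geolocation service, API key, and file names here are stand-ins for whatever the real script (linked above) uses:

```python
import io
import json
import requests
from PIL import Image, ImageDraw

API_KEY = "YOUR_STREETVIEW_KEY"  # placeholder

def geolocate(ip):
    # ip-api.com is one free IP geolocation service; any similar
    # service returning lat/lon and city/state would work here.
    geo = requests.get(f"http://ip-api.com/json/{ip}").json()
    return geo["lat"], geo["lon"], f"{geo['city']}, {geo['region']}"

def make_gif(item):
    lat, lon, place = geolocate(item["ip_address"])
    frames = []
    for i in range(10):  # ten frames, rotating 5 degrees each time
        params = {"size": "640x400", "location": f"{lat},{lon}",
                  "heading": i * 5, "key": API_KEY}
        r = requests.get("https://maps.googleapis.com/maps/api/streetview",
                         params=params)
        frame = Image.open(io.BytesIO(r.content)).convert("RGB")
        # Caption: the Facebook activity, date/time, and city/state.
        caption = f"{item['activity']} | {item['datetime']} | {place}"
        ImageDraw.Draw(frame).text((10, 10), caption, fill="white")
        frames.append(frame)
    name = item["datetime"].replace(":", "-").replace(" ", "_")
    frames[0].save(f"{name}.gif", save_all=True,
                   append_images=frames[1:], duration=200, loop=0)

for item in json.load(open("facebook_activity.json")):
    make_gif(item)
```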

The result was two dozen weird undulating gifs of Google Street View locations, which you can check out on the project website.

After doing all that work, however, I didn’t feel satisfied with the output. If the goal was to find a way to re-insert my digital data trail into a physical space, I felt that the goal hadn’t yet been realized in this form. I decided to take the project into a different, more spatially-minded direction.

I wrote another Python script that programmatically takes the IP address and searches for the latitude/longitude on Google Maps, clicks the 3D setting, records a short video of the three-dimensional landscape, and then exports the frames of that video into images.
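Reconstructed as a sketch, the approach looks roughly like this. I grab screenshots directly rather than recording a video and exporting its frames, which lands in the same place; the Maps URL parameters for the 3D/satellite view are an assumption and may need adjusting:

```python
import time
from selenium import webdriver

def capture_3d_frames(lat, lon, n_frames=30, out_prefix="frame"):
    driver = webdriver.Chrome()
    # Satellite/3D view via URL parameters (my reconstruction; the
    # altitude and tilt syntax may need tweaking).
    url = f"https://www.google.com/maps/@{lat},{lon},300a,35y,45t/data=!3m1!1e3"
    driver.get(url)
    time.sleep(10)  # let the tiles and 3D geometry load
    for i in range(n_frames):
        driver.save_screenshot(f"{out_prefix}_{i:03d}.png")
        time.sleep(0.2)
    driver.quit()

capture_3d_frames(40.7128, -74.0060)
```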

Programmatically screen recording Google Maps’ 3D landscape.

Using the photogrammetry software Photoscan, I created a 3D mesh and texture from the video frames. Then, I made a quick mockup of the Facebook app on an iPhone displaying the specific Facebook activity associated with that location & IP address. Finally, I pulled the landscape .obj into Unity with the iPhone image and produced some strange, fantastical 3D landscapes:

Pulling the 3D mesh into Unity and inserting the Facebook metadata into the landscape.

The past few weeks have allowed me to think deeply about what I want to get out of my thesis project and what form this project will take. I wrote last week of the idea of the “manufactured self” – a self that has been constructed socially by external sources of power.

I stumbled on Alexandru Dragulescu’s thesis paper Data Portraits: Aesthetics and Algorithms, which outlines his creative practice for data portraiture. He describes “the concept of data portraits as a means for evoking our data bodies” and showcases his “data portraiture techniques that are re-purposed in the context of one’s social network.”

With my project, I will attempt to create a portrait of each participant based only on his or her Facebook data. I want to use facial recognition models (C++ and Python), 3D modeling (Three.js, Blender), the Facebook Graph API, and IBM Watson’s Natural Language Processing and Personality Insights APIs.

 

Visit the website: http://rebecca-ricks.com/manufactured-self/becca.html

After much experimentation, I have an overall idea of what the user flow will look like. There will be an online web application + a physical component. Here’s the flow:

(1) User logs into web application (with Facebook Oauth)

(2) Real-time analysis of personality + generate 3D facial model

(3) The 3D object is manipulated/distorted based on the personality insights (?)

(4) At the show, users will be able to take home a physical artifact of their data portrait (thermal print of the 3D model? An .obj? A list of personality insights?)

This week, I used C++ and Python to get this library up and running; it allows you to create a 3D model of a face from a 2D image. I spent a significant amount of time installing the library, generating the landmark points, and running the analysis on my own images. Here’s what that process looked like:

Generating the landmark points based on a photo of my face.
Generating the isomorphic texture that will be applied to the 3D model.
The 3D mesh model that the texture is applied to.
The final output displayed in the browser using three.js.
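The landmark-detection step, for example, looks roughly like this if you do it with dlib. That’s an assumption on my part: dlib is one common tool for the job, not necessarily what the library above uses internally.

```python
import dlib

detector = dlib.get_frontal_face_detector()
# The 68-point predictor model is a separate download from dlib.net.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

img = dlib.load_rgb_image("face.jpg")
for face in detector(img):
    shape = predictor(img, face)
    # 68 (x, y) landmark points outlining eyes, nose, mouth, and jawline;
    # these are what get mapped onto the 3D morphable model.
    points = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
    print(points[:5])
```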

I also got access to a few of IBM Watson’s APIs via the Python SDK. Specifically, I’m looking at the Personality Insights API, which analyzes a body of text (your Facebook likes, your Facebook posts, etc.). I ran the analysis on my own Facebook data and added the information to the website I built from the JSON file that was generated.
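The call itself is short. This is a sketch against the Python SDK; the credentials are placeholders, and parameter names may differ between SDK versions:

```python
import json
from watson_developer_cloud import PersonalityInsightsV3

personality_insights = PersonalityInsightsV3(
    version="2017-10-13",
    username="YOUR_USERNAME",   # placeholder credentials
    password="YOUR_PASSWORD",
)

# Concatenate Facebook posts into one body of text for analysis.
with open("facebook_posts.txt") as f:
    text = f.read()

profile = personality_insights.profile(
    text,
    content_type="text/plain",
    raw_scores=True,
    consumption_preferences=True,
)

# Save the Big Five traits, needs, values, and consumption preferences
# so the website can render them from this JSON file.
with open("profile.json", "w") as f:
    json.dump(profile, f, indent=2)
```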

You can see an example of what that analysis looked like on my own Facebook data:

http://rebecca-ricks.com/manufactured-self/becca.html

I also decided to test my 2D to 3D model on an earlier image I had created of my composite face based on every Facebook photo I’ve been tagged in.

http://rebecca-ricks.com/manufactured-self/facemash.html

Last week I presented my midterm presentation and received some great feedback and suggestions. I resonated most with what Sam said about the monetization, commodification, and production of the self that occurs on Facebook. How can I incorporate that more fully into my thesis project?

I’m still iterating on a few different ideas, but eager to find the final form that my project will take, whether it’s one fully-developed web application or several different experimental applications.

I found some visual inspiration that has fueled the project I’m working on this week.

Source: https://labs.rs/en/

Share Lab has been investigating ‘The Facebook Algorithmic Factory’ with the intention “to map and visualize a complex and invisible process hidden behind a black box.” The result is an exploration of three main segments of the process: Data Collection (“Immaterial Labour and Data harvesting“), Storage and Algorithmic processing (“Human Data Banks and Algorithmic Labour“), and Targeting (“Qualified lives on discount“).

I was struck by not only the depth of research into Facebook’s policies and practices but also the beautiful (static) data visualizations produced as a way to clarify the research.

Source: https://labs.rs/en/
Source: https://labs.rs/en/

These data visualizations are simple but powerful. They left me thinking: How do I make this complex web personal? How do I communicate the ways in which this process immediately affects every Facebook user? Can I make use of the Facebook API to build a graphic that takes the user’s personal information (likes, friends, advertisements) and displays it in an interactive web-based application?

I want to make use of a lot of the good research done by Share Lab as well as my own research to build an interactive web application that helps users see how their personal data is collected, stored, and used in order to manufacture a self, or a “consumer profile.” I was struck by what Nancy said about Facebook manufacturing a self and I think this would be a good conceptual starting point.

Right now I’m starting to build the web application using the Facebook Graph API, Facebook CLI, and a D3 clustering algorithm. I’m starting by building a web application that collects information about user_likes clustered according to category.
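Here’s a first sketch of the data-collection side in Python, using the facebook-sdk package (the access token is a placeholder; the D3 clustering then consumes the grouped JSON in the browser):

```python
import json
from collections import defaultdict

import facebook  # the facebook-sdk package

graph = facebook.GraphAPI(access_token="YOUR_USER_TOKEN")  # placeholder

# Fetch the user's likes; each entry carries a page name and category.
likes = graph.get_connections(id="me", connection_name="likes",
                              fields="name,category")

clusters = defaultdict(list)
for page in likes["data"]:
    clusters[page.get("category", "Uncategorized")].append(page["name"])

# D3 reads this grouped JSON to draw the category clusters in the browser.
with open("likes_by_category.json", "w") as f:
    json.dump(clusters, f, indent=2)
```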

This week, we reviewed two useful tools, ffmpeg and ImageMagick, for manipulating images and videos found online. I decided to start working with the trailer for Akira Kurosawa’s 1985 film Ran (Japanese for “chaos”). Ran is a period tragedy that combines the Shakespearean tragedy King Lear with legends of the daimyō Mōri Motonari.

The trailer is filled with beautiful, carefully framed shots. I wanted to see if there was a way to automatically detect and chop up the trailer into its individual shots/scenes. It turns out there is no simple solution to that problem, so I cobbled together my own bash script to do so.

Once I had chopped up the trailer, I decided to export one image from each scene for analysis. I did so by writing a script that saves the first frame from each video.
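My original was a bash script, but the same idea sketched in Python around ffmpeg’s scene-change filter looks like this (the 0.4 threshold is a guess you’d tune per trailer):

```python
import re
import subprocess

TRAILER = "ran_trailer.mp4"

# Pass 1: ask ffmpeg to flag frames whose scene-change score exceeds
# a threshold, and parse their timestamps out of showinfo's log.
probe = subprocess.run(
    ["ffmpeg", "-i", TRAILER, "-vf",
     "select='gt(scene,0.4)',showinfo", "-an", "-f", "null", "-"],
    stderr=subprocess.PIPE, text=True)
cuts = [0.0] + [float(t) for t in
                re.findall(r"pts_time:([\d.]+)", probe.stderr)]

# Pass 2: slice the trailer at each cut, and save the first frame
# of every shot for the reverse-image-search step.
for i, start in enumerate(cuts):
    end = cuts[i + 1] if i + 1 < len(cuts) else None
    cmd = ["ffmpeg", "-y", "-i", TRAILER, "-ss", str(start)]
    if end is not None:
        cmd += ["-to", str(end)]
    subprocess.run(cmd + ["-c", "copy", f"shot_{i:03d}.mp4"])
    subprocess.run(["ffmpeg", "-y", "-i", TRAILER, "-ss", str(start),
                    "-frames:v", "1", f"shot_{i:03d}.png"])
```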

I then used selenium to programmatically upload those images to a reverse image search powered by an image classifier trained on Wikipedia data. The classifier was trained by Douwe Osinga, and the site can be accessed here. It’s described this way: “A set of images representing the categories in Wikidata is indexed using Google’s Inception model. For each image we extract the top layer’s vector. The uploaded image is treated the same – displayed are the images that are closest.” You can read more detailed notes about training the data in Douwe’s blog post.
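The selenium automation boils down to finding the file input and sending it a path. The search URL below is a placeholder, and the real selector depends on the upload form’s markup:

```python
import glob
import os
import time

from selenium import webdriver
from selenium.webdriver.common.by import By

SEARCH_URL = "https://example.com/wiki-image-search"  # placeholder URL

driver = webdriver.Chrome()
for path in sorted(glob.glob("shot_*.png")):
    driver.get(SEARCH_URL)
    # Send the local file path straight to the file <input> element,
    # which sidesteps the OS file-picker dialog entirely.
    upload = driver.find_element(By.CSS_SELECTOR, "input[type=file]")
    upload.send_keys(os.path.abspath(path))
    time.sleep(5)  # crude wait for the similarity results to render
    driver.save_screenshot(path.replace(".png", "_results.png"))
driver.quit()
```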

I ended up with hundreds of ‘visually similar’ images, organized according to the shots in the trailer. I combined them into a side-by-side comparison video, where you can see some of the images the classifier deemed ‘visually similar.’ Check out the full video for Kurosawa’s Ran:

I then decided to repeat the entire process for the trailer to Dario Argento’s classic horror film Suspiria:

Find my full GitHub repository here.
