This week Shir and I did some field research, speaking with several engineers, scientists, and software developers about the viability of some of the ideas we had for our anti-surveillance biometric kit.

We first spoke with Nasir Memon, a professor at the NYU Tandon School of Engineering who specializes in biometrics. He had some ideas for the kit, including some kind of wearable (a hat?) holding infrared LEDs that would shield the face from facial recognition while remaining imperceptible to the human eye. At his suggestion, we then spoke with three NYU engineering students about the viability of this idea and got some real feedback (some of it positive, some of it pointing to further challenges).

We talked to Eric Rosenthal, a scientist and professor at ITP, about some of the work he’s done with IR lights and biometric identity verification while at Disney. Shir also spoke to Lior Ben-Kereth, a partner at the facial recognition company acquired by Facebook.

We decided to move forward with the infrared LED wearable idea, but first we needed to confirm that a range of different kinds of cameras do indeed pick up infrared light. We wired up a cluster of IR LEDs and pointed them at our iPhone camera (testing the native camera app, FaceTime, and Snapchat) and at a range of IP surveillance cameras – including three different kinds that are in use at ITP.

You can see the result of our test below:

This semester I’ve been interrogating the concept of “algorithmic gaze” vis-a-vis available computer vision and machine learning tools. Specifically, I’m interested in how such algorithms describe and categorize images of people.

For this week’s assignment, we were to build a Twitter bot using JavaScript (and other tools we found useful – Node, RiTa, Clarifai, etc.) that generates text on a regular schedule. I’ve already built a couple of Twitter bots in the past using Python, including BYU Honor Code, UT Cities, and Song of Trump, but I had never built one in JavaScript. For this project, I immediately knew I wanted to experiment with building a Twitter bot that uses image recognition to describe what it sees in an image.

To do so, I used the Clarifai API to access an already-trained neural net that generates a list of keywords from an input image. Any Twitter user can tweet an image at my Twitter bot and receive a reply that includes a description of what’s in the photo, along with a magick prediction for their future (hence the name, crystal gazing).
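Under the hood, that step is small. Here’s a minimal sketch of the keyword lookup, assuming the (2016-era) Clarifai v2 JavaScript client and its general model; the API key handling and example URL are placeholders:

```javascript
// Minimal sketch: fetch concept keywords for an image URL using the
// Clarifai v2 JS client (assumed here; check your client version's
// docs for the exact response shape).
const Clarifai = require('clarifai');

const app = new Clarifai.App({ apiKey: process.env.CLARIFAI_API_KEY });

function getKeywords(imageUrl) {
  return app.models.predict(Clarifai.GENERAL_MODEL, imageUrl)
    .then(res => res.outputs[0].data.concepts.map(c => c.name));
}

// e.g. getKeywords('https://example.com/teapot.jpg') -> ['teapot', 'cup', ...]
```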


After pulling an array of keywords from the image using Clarifai, I then used Tracery to construct a grammar that included a waiting message, a collection of insights into the image using those keywords, and a pithy life prediction.
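To give a flavor of that step, here’s a sketch of how the keywords might slot into a Tracery grammar. The rule text here is made up for illustration; the bot’s actual phrasing lives in the repo:

```javascript
// Sketch: splice the Clarifai keywords into a Tracery grammar.
// Rule text is illustrative, not the bot's actual copy.
const tracery = require('tracery-grammar');

function buildReply(keywords) {
  const grammar = tracery.createGrammar({
    keyword: keywords, // expands to a random keyword pulled from the image
    insight: ['I sense #keyword# in your photo.', 'The #keyword# reveals much.'],
    prediction: ['Good fortune awaits you.', 'A great change is coming.'],
    origin: ['#insight# #prediction#'],
  });
  grammar.addModifiers(tracery.baseEngModifiers);
  return grammar.flatten('#origin#');
}

// buildReply(['teapot', 'ceramic', 'tea']) -> "I sense tea in your photo. ..."
```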


I actually haven’t deployed the bot to a server just yet because I’m still ironing out some issues in the code – namely, asynchronous callbacks that fire functions out of order – but you can see how the bot works by checking it out on Twitter or GitHub.
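For what it’s worth, one common fix for that kind of ordering bug is to chain the steps as promises, so the tweet only fires once the keywords have come back. A sketch, reusing the hypothetical getKeywords/buildReply helpers from above (postReply is a stand-in for the Twitter API call):

```javascript
// Chain the steps so each waits for the previous one to finish.
// postReply() is a hypothetical stand-in for the Twitter API call.
function handleMention(tweet) {
  return getKeywords(tweet.imageUrl)
    .then(buildReply)
    .then(reply => postReply(tweet, reply))
    .catch(err => console.error('bot error:', err));
}
```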

You can see the Twitter bot here and find the full code here in my GitHub repo. I also built a version of the bot for the browser, which you can play around with here.

As mentioned last week, I’m exploring the idea of the algorithmic gaze vis-a-vis computer vision and machine learning tools. Specifically, I’m interested in how such algorithms describe and categorize images of people. I’d like to focus primarily on the human body as subject, starting with the traditional form of the portrait. What does a computer vision algorithm see when it looks at a human body? How does it categorize and identify parts of the body? When does the algorithm break? How are human assumptions baked into the way the computer sees us?

As mentioned last week, I’m interested in exploring the gaze as mediated through the computer. Lacan first introduced the concept of the gaze into Western philosophy, suggesting that a human’s subjectivity is determined by being observed, causing the person to experience themselves as an object that is seen. Lacan (and later Foucault) argues that we enjoy being subjectivized by the gaze of someone else: “Man, in effect, knows how to play with the mask as that beyond which there is the gaze. The screen is here the locus of mediation.”

The following ideas are variations on this theme, exploring the different capabilities of computer vision.

PROJECT IDEA #1: GENERATIVE TEXT BASED ON IMAGE INPUT.

At its simplest, my project could be a poetic exploration of text produced by machine learning algorithms when it processes an image. This week I started working with several different tools for computer vision and image processing using machine learning. I’ve been checking out some Python tools, including SimpleCV and scikit. I also tested out the Clarifai API in JavaScript.

In the example below, I’ve taken the array of keywords generated by the Clarifai API and arranged them into sentences to give the description some rhythm.

Check out the live prototype here.

I used Clarifai’s image tagging endpoint to generate an array of keywords for each input image, then included the top 5 keywords in a simple description.
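The assembly step is simple. A sketch, assuming the concepts arrive sorted by confidence (as Clarifai’s responses are):

```javascript
// Fold the top five concepts into one simple descriptive sentence.
function describe(concepts) {
  const top = concepts.slice(0, 5).map(c => c.name);
  return `I see ${top.slice(0, -1).join(', ')}, and ${top[top.length - 1]}.`;
}

// describe([{name:'portrait'},{name:'woman'},{name:'veil'},{name:'face'},{name:'art'}])
// -> "I see portrait, woman, veil, face, and art."
```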


You can find my code over here on GitHub.

PROJECT IDEA #2: IMAGE PAIRINGS OR CLUSTERS.

In the first project idea, I’m exploring which words an algorithm might use to describe a photo of a person. With this next idea, I’d be seeking to understand how a computer algorithm might categorize those images based on similarity. The user would input/access a large body of images and then the program would generate a cluster of related images or image pairs. Ideally the algorithm would take into account object recognition, facial recognition, composition, and context.

I was very much inspired by the work done in Tate’s most recent project Recognition, a machine learning program that pairs photojournalism with British paintings from the Tate collection based on similarity, and outputs something like this:

The result is a stunning side-by-side comparison of two images you might never have paired together. It’s the result of what happens when a neural net curates an art exhibition – not terribly far off from what a human curator might do. I’d love to riff on this idea, perhaps using the NYPL’s photo archive of portraits.

Another project that has been inspiring me lately was this clustering algorithm created by Mario Klingemann that groups together similar items:

I would love to come up with a way to categorize similar images according to content, style, and facial information – and then generate a beautiful cluster or grid of images grouped by those categories.
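As a starting point, if each image can first be reduced to a feature vector (keyword counts, a neural-net embedding, or face descriptors; the vectorization step is an assumption here, not tied to any specific library), pairing becomes a nearest-neighbor search. A minimal sketch:

```javascript
// Sketch: pair each image with its most similar neighbor by cosine
// similarity over feature vectors. How the vectors are produced
// (tags, embeddings, facial features) is left open.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

function nearestPairs(vectors) {
  return vectors.map((v, i) => {
    let best = null;
    vectors.forEach((w, j) => {
      if (i === j) return;
      const score = cosine(v, w);
      if (!best || score > best.score) best = { pair: [i, j], score };
    });
    return best;
  });
}
```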

PROJECT IDEA #3: DISTORTED IMAGES.

As a variation on the first project idea, I’d like to explore the object recognition capabilities of popular computer vision libraries by taking a portrait of a person and slowly, frame by frame, distorting the image until it’s no longer recognized by the algorithm. The idea here is to test the limits of what computers can see and identify.
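The test itself could be a simple loop. A sketch, where distort() and classify() are hypothetical stand-ins (distort() might add blur or noise at a given strength; classify() would wrap a vision API call and resolve to whether a face is still detected):

```javascript
// Ramp up the distortion until the classifier loses the face, and
// record the step where it breaks. distort() and classify() are
// hypothetical stand-ins for an image filter and a vision API call.
function findBreakpoint(image, maxSteps) {
  function attempt(step) {
    if (step > maxSteps) return Promise.resolve(null); // never broke
    return classify(distort(image, step / maxSteps)).then(recognized =>
      recognized ? attempt(step + 1) : step // first frame the algorithm loses it
    );
  }
  return attempt(0);
}
```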

I’m taking my cues from the project Flower, in which the artist distorted stock images of flowers and ran them through Google’s Cloud Vision API to see how far they could morph a picture while still keeping it recognizable as a flower by computer vision algorithms. It’s essentially a way to determine the algorithm’s recognizable range of distortion (as well as a human’s).

I’m interested in testing the boundaries of such algorithms and seeing where their breakpoints are when it comes to the human face.*

*After writing this post, I found an art installation, Unseen Portraits, that did what I’m describing – distorting images of faces in order to challenge face recognition algorithms. I definitely want to continue investigating this idea.

PROJECT IDEA #4: INTERPRETING BODY GESTURES IN PAINTINGS.

Finally, I want to return to the idea I started with last week, which focused on the interpretation of individual human body parts. When a computer looks at an ear, a knee, a toenail, what does it see? How does it describe bodies?

Last week, I started researching hand gestures in Italian Renaissance paintings because I was interested in knowing whether a computer vision algorithm trained on hand gestures would be able to interpret hand signals from art. I thought that if traditional gestural machine learning tools proved unhelpful, it would be an amazing exercise to train a neural net on the hand signals found in religious paintings.


This week, Shir and I have been discussing the biometric anti-surveillance kit that we will be building for our midterm project.

What is biometric data? 

Biometrics are the measurable, distinct characteristics used to verify the identity of individuals, including groups that are under surveillance. Biometric data includes fingerprints, DNA, facial features, retina scans, palm veins, hand geometry, iris patterns, voice, and gait.

Problem area

Biometric data is extremely sensitive. If your data is compromised, it’s not replaceable (unlike a password). The widespread collection of personal biometric data raises questions about the sharing of such data between government agencies and private companies. Many of us use Apple Touch ID on a daily basis without thinking about the fact that our devices now store a snapshot of our fingerprints. In addition, biometric data is most often collected by the state from populations that are already vulnerable, including criminals, travelers, and immigrants.

Proposed project

We intend to put together a biometric resistance kit, a toolkit of wearable objects aimed at masking and altering a user’s personal biometric identity. The aim of the project is to prototype non-intrusive objects that anybody can wear to protect their biometric identity in public spaces.

Contents of the kit

We had several ideas of what the kit could contain.


Relevant projects

We researched what had been done in the past and found several other artists and engineers experimenting with anti-surveillance materials.


Identity is a project by Mian-Wei that uses a band-aid made of silicone and fibers to trick Apple Touch ID into thinking it’s seeing a real fingerprint. The solution is simple and effective, something we would like to achieve with our project.

Biononymous Guide is a series of DIY guides for masking your biometric identity (specifically, DNA and fitness trackers). We loved the format of the website – the mix of printed materials, physical objects, and how-to videos matches the kind of kit we’re hoping to build.


Adam Harvey is an artist whose anti-surveillance work includes Stealth Wear, a line of Islamic-inspired clothing to shield against drone attacks & thermal cameras, and CV Dazzle, a makeup guide that beats facial recognition algorithms. We loved how he tried to work with styles that people would actually want to wear. This will likely be a major concern in the development of our project.

Field research and interviews

We’ve spoken with a few experts in the field of biometrics, surveillance, and facial recognition algorithms and are planning to continue these conversations.

First, we spoke to Nasir Memon, a computer science professor and biometrics expert at NYU’s Tandon School of Engineering. He had some ideas for the kit, including a hat with infrared lights that would beat facial recognition algorithms. We also spoke to NYU engineering students who are researching surveillance, machine learning, and biometrics; they gave us additional technical guidance about which parts of our kit would be most viable. We have also set up meetings with artist Adam Harvey and NYU professor Eric Rosenthal to discuss our project idea.

User personas

Shir and I have had many conversations about who this kit would serve. We realized that for the time being, there is no reasonable way to combat the fingerprinting required of immigrants, foreign visitors, criminals, or people who must scan their fingerprints for work. Instead, we realized that this kit would be best suited for people who are worried about privacy and surveillance in public spaces. Here are several example users we created:


Terrapattern, Golan Levin

For this week’s assignment, we were to reframe or revisit our project idea through a scientific lens. Since computer vision — characterized by image analysis, recognition, and interpretation — is itself considered a scientific discipline, I struggled to find a new scientific framework through which to re-articulate my project.

Because my project is so deeply rooted in computer vision and optics, I’m interested in exploring the idea of “algorithmic gaze” as the means by which computers categorize and label bodies according to specific (and flawed) modalities of power.

Donna Haraway’s concept of the “scientific gaze” has very much influenced my research. In her paper “Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective,” Haraway tears apart traditional ideas of scientific objectivity, including the idea of the subject as a passive, single point of empirical knowledge and the scientific gaze as objective observer. Instead, she advocates for situated knowledge, in which subjects are recognized as complex and the scientific gaze is dissolved into a network of imperfect/contested observations. In this new framework, objects and observers are far from passive, exercising control over the scientific process.

Haraway relies on the metaphor of vision, the all-seeing eye of Western science. She describes the scientific gaze as a kind of “god trick,” a move that positions science as the omniscient observer. The metaphor of optics, vision, and gaze will be central to the development of my project. I’m interested in exploring how the “algorithmic gaze” mediates and shapes the information we receive.

Sandro Botticelli (Florentine, 1446–1510), Portrait of a Youth, c. 1482/1485, tempera on poplar panel, Andrew W. Mellon Collection 1937.1.19

My first test used ConvNetJS, a JS library built by Andrej Karpathy that uses neural networks to paint a new version of an input image. I used a detail from the painting above and ran it through the neural network. Here’s an example of the process.
[Screenshots: successive stages of the neural network’s painting process]
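For reference, here’s a rough sketch of the idea behind that demo, as I understand the ConvNetJS API: a small network learns a mapping from pixel coordinates to colors, and “painting” is just querying that mapping for every pixel. The layer sizes and trainer settings below are illustrative, not the demo’s exact values:

```javascript
// Learn (x, y) -> (r, g, b) by regression, following ConvNetJS's
// regression example; sizes and hyperparameters are illustrative.
var layer_defs = [
  { type: 'input', out_sx: 1, out_sy: 1, out_depth: 2 }, // pixel (x, y)
  { type: 'fc', num_neurons: 20, activation: 'relu' },
  { type: 'fc', num_neurons: 20, activation: 'relu' },
  { type: 'regression', num_neurons: 3 },                // pixel (r, g, b)
];
var net = new convnetjs.Net();
net.makeLayers(layer_defs);
var trainer = new convnetjs.SGDTrainer(net, { learning_rate: 0.01, momentum: 0.9, batch_size: 5 });

// One training step: show the net a pixel's coordinates and its color.
function trainOnPixel(x, y, rgb) { // rgb values scaled to 0..1
  var input = new convnetjs.Vol(1, 1, 2);
  input.w[0] = x;
  input.w[1] = y;
  trainer.train(input, rgb);
}
```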

Problem framework.

For my midterm project, I’d like to address the ethics and implications of widespread biometric data collection.

Biometric identifiers are defined as measurable, distinctive characteristics that are used to label or describe individuals. They’re commonly used by governments and private organizations to verify the identity of an individual or group, including groups that are under surveillance. Physiological identifiers include fingerprints, DNA, facial features, retina scans, palm veins, hand geometry, and iris patterns. Behavioral identifiers measure patterns like voice and gait.

Here’s the breakdown of identification accuracy based on biometric input:

[Chart: identification accuracy by biometric input]

The earliest record of fingerprint cataloguing dates back to 1891. Biometrics arguably originated with “identificatory systems of criminal activity” as part of a larger system to categorize and label criminal populations. “The biometric system is the absolute political weapon of our era” and a form of “soft control,” writes Nitzan Lebovic. Under the post-9/11 expansion of the Patriot Act, biometric systems have expanded from the state to the private market, blurring the lines between public and private control.

While biometric data is seen as more accurate and therefore more reliable for identifying an individual, it is also not replaceable. If your password were somehow compromised, for instance, you could simply change it. You can’t replace your fingerprint or change other physical characteristics.

Italian theorist Giorgio Agamben experienced the implications of “bio-political tattooing” firsthand in 2004 when he was told that in order to obtain a U.S. visa to teach a course at New York University he would have to submit to fingerprinting procedures. In a piece published in Le Monde, Agamben explains why he refused to comply, arguing that the electronic filing of fingerprints and retina prints required by the U.S. government is a way for the state to register and identify naked life. According to Agamben, biometric data collection operates as a form of disciplinary power.

Audience.

This issue potentially affects everyone, so our audience is very broad. Biometric data is most often collected from populations that are already vulnerable, including criminals, the poor, and immigrants. Corporations put a monetary value on biometric data, and yet individuals rarely think of data collection as an intrusion.

The goal of this project is to foster an awareness of the implications and ethics of biometric data collection.

Ideas for the project and user journey.

Concept #1: A physical installation that gives the user personalized information based on a biometric input.

Concept #2: A speculative VR experience with advertisements tailored to the user’s biometric data.

Concept #3: A kit of wearable objects aimed at masking and altering a user’s personal biometric identity.


http://motherboard.vice.com/read/i-replaced-my-fingerprints-with-prosthetics-to-avoid-surveillance

Concept #4: Collect (non-identity-compromising) biometric data from various participants and sell the data on eBay in order to gauge its monetary value.


http://thecreatorsproject.vice.com/blog/this-artist-turned-herself-into-a-corporation-to-sell-her-data



She would not say of any one in the world that they were this or were that. She felt very young; at the same time unspeakably aged. She sliced like a knife through everything; at the same time was outside, looking on. She had a perpetual sense, as she watched the taxi cabs, of being out, far out to the sea and alone; she always had the feeling that it was very, very dangerous to live even one day.
– Virginia Woolf, Mrs. Dalloway

For this week’s assignment, we were asked to redesign a narrative experience according to the agile human-centric design principles we discussed last week.

For my source material, I drew from the themes and text of Virginia Woolf’s Mrs Dalloway, a 1925 novel written in a stream-of-consciousness style that sketches a portrait of the life of one woman, Clarissa Dalloway, over the course of a single day.

In the first chapter of the book, Clarissa walks the streets around London running errands in preparation for a party she is throwing that night. When I reread the book, I was struck by the ways in which the novel sharpens our attention to details of time and space, especially the specificity of London during Clarissa’s walks. Time is a significant theme in the novel, with clocks ringing the hour and signs of aging and death made hypervisible in the text. So much of the narration in the novel occurs inside the head of the protagonist, with special attention paid to her surroundings.

With this project, I wanted to explore creating a film that employs this stream of consciousness narrative style while physically putting you in the shoes of the protagonist. I chose to reimagine Mrs Dalloway as an immersive VR/360 experience in order to explore this narrative style not only in text, but also in film.

The idea behind the project was to film myself walking in New York using 360 video, paired with a voice over narration of the opening chapter of the novel. I made slight changes to the text in order to accommodate the sharp departure in setting (from 1925 London to 2016 New York). Much of the narration in the novel is observational — Clarissa sees a woman in a taxi cab, she arrives at the park, she looks in shop windows — and I wanted to replicate those moments in the film as much as possible.

Check out the initial prototype of my idea.

YouTube:


My audience for this project could be anyone, really. Because it’s a 360 video, the user has full control over what he or she is looking at during the film. Just like London, New York is replete with observational details; I wanted the audience to experience that same sensory overload in my project.

Vimeo:

For this week’s assignment, I performed a traceroute on three sites I visit regularly, from each of the places where I usually connect to the internet. A traceroute is a computer network diagnostic tool that displays the route packets take across an IP network; the route is recorded as the packets are received from each successive host (remote node).
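The tool ships with most operating systems: on macOS and Linux the command is traceroute, while Windows calls it tracert.

```
$ traceroute bit.ly    # macOS / Linux
> tracert bit.ly       # Windows
```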

To start, I examined the route packets take to reach the servers that host bit.ly, a website popular for generating short URLs. I had noticed a few years ago that .ly is the internet country code domain for Libya, which seemed unusual to me. When I investigated further, I read this article, which lays out the implications of adopting a domain name associated with a country with an authoritarian or unstable government.

I traced the packets’ route, which revealed that the packets jumped from U.S. servers to a Swiss server in Zurich, and back to the U.S. See the journey here:

Project proposal.

I intend to use this class to explore generative text as a new poetic form, culminating in the production of some kind of physical or digital artifact.

Over the coming semester, I will conduct a series of text-based experiments using deep learning methods such as Recurrent Neural Networks (RNNs) for sequence learning and Convolutional Neural Networks (CNNs) for classifying images and text. I’ll also use Python (w/ Flask), JavaScript (w/ Node), and Natural Language Processing (NLP) libraries in both of those programming languages. The goal behind these experiments is to teach myself different ways of training a computer program on text to generate something new.

I’m still not sure what form the final artifact will take, whether it’s a physical book, an installation, an interactive web-based tool, a chatbot, a mobile app, or otherwise. My hope is that the form will eventually emerge through my experimentation.

Some major questions I still have about this work deal with the audience response. What can I build that will elicit an emotional response? Will people understand the intent of this project? How will they connect with it if they aren’t writers/readers/theorists?

Here’s the project map I sketched out during our class activity:



Next steps.

Since I’m still unfamiliar with some of the tools I’d like to use, for the first few weeks I intend to teach myself the basics of deep learning. I plan on using resources from Gene Kogan’s course Machine Learning for Artists, Patrick Hebron’s course Learning Machines, and Andrej Karpathy’s amazing work on RNNs. I’m going to build a week-to-week schedule to lend some structure to my experimentation. I’m taking another JavaScript-based generative text class right now, so my experiments might align with that class as well.

Resources & inspiration (an ongoing list I will update).


Listen: you are not yourself, you are crowds of others, you are as leaky a vessel as was ever made, you have spent vast amounts of your life as someone else, as people who died long ago, as people who never lived, as strangers you never met. The usual I we are given has…none of the porousness of our every waking moment, the loose threads, the strange dreams, the forgettings and misrememberings, the portions of a life lived through others’ stories, the incoherence and inconsistency, the pantheon of dei ex machina and the companionability of ghosts. There are other ways of telling.

― Rebecca Solnit, The Faraway Nearby

One of the goals of VR documentaries, suggested by the filmmakers behind Collisions and Clouds over Sidra, is to give the audience a sense of ‘presence.’ Immersive experiences in VR allow the participant to feel that he or she is physically in the same location as the camera. The technical constraints of VR filmmaking demand slow, deliberate camera movements. Many VR films have a structured narrative while still giving the audience the agency to decide what they will look at during the film. To paraphrase one reviewer of Clouds over Sidra, what moviemaker in the past would include 20 seconds of ceiling shots, looking up at the top of the tent?

I include these observations about VR film because I’d like to highlight the ways in which Russian filmmaker Andrei Tarkovsky uses similar techniques in many of his films to produce an immersive, experiential cinema experience.

During his career, Tarkovsky directed only seven feature films between 1962 and 1986, including Solaris, Stalker, Andrei Rublev, and Mirror. He was a pioneer in the field of cinema, experimenting with new narrative techniques and theories. Many of Tarkovsky’s films are characterized by extremely long takes, slow camera pans, and very few cuts. He developed a theory of cinema called “sculpting in time,” in which he explored how film can twist and alter the audience’s experience of time. Unedited footage and lengthy sequences were used to heighten that feeling of time passing.

I watched Solaris for the first time in college and remember being shocked by a long, drawn-out driving sequence within the first ten minutes of the film. The scene is nearly five minutes long.

Tarkovsky writes: “If the regular length of a shot is increased, one becomes bored, but if you keep on making it longer, it piques your interest, and if you make it even longer, a new quality emerges, a special intensity of attention.”

Tarkovsky also employed common motifs of running water, clouds, and reflections in his films. Many understood his preoccupation with reflective surfaces to mirror his own interest in self-reflection and introspection. Of Tarkovsky, Ingmar Bergman said: “Tarkovsky for me is the greatest (director), the one who invented a new language, true to the nature of film, as it captures life as a reflection, life as a dream.”

On a personal level, I’ve learned from Tarkovsky that compelling films do not need to have a strict narrative, nor follow time limits or other cinematic constraints. Tarkovsky’s films were so powerful because they pushed the audience into a state of heightened attention.

Crafters of immersive/experiential films often need to make similar decisions about timing, camera movement, and narrative in order to tell the most compelling story possible. The VR film Collisions, for instance, mixed beautiful, wide landscapes with meandering narration in very lengthy shots. With my work in this class, I’m interested in exploring the kinds of experiential, non-linear narrative that Tarkovsky’s films often embody.

Here’s the presentation I gave in class:

Andrei Tarkovsky from Rebecca Ricks
