Cast magicks on internet trolls.

Screen Shot 2015-11-08 at 4.42.36 PM

Lately I’ve noticed something strange: Conversations about technology often locate themselves in the realm of the magical or the supernatural.

The sci-fi genre is replete with descriptions of machines that use language linked to animism, magic, witchcraft, the occult, and ghosts. In William Gibson’s 1984 sci-fi novel Neuromancer, the protagonist Case describes the posthuman body as “data made flesh,” a reference to Christian ontology and Jesus’ divine personhood.

These types of metaphors reaffirm one of the central assumptions at the core of the sci-fi genre: Breathing life into a machine is not far off from breathing life into a human body.

Similarly, the language used by today’s Silicon Valley tech kingpins reveals patterns in their thinking that link artificial intelligence to animism. “With artificial intelligence we’re summoning the demon,” remarked Tesla CEO Elon Musk at a 2014 MIT symposium. “You know those stories where there’s a guy with the pentagram, and the holy water, and he’s sure he can control the demon? Doesn’t work out.”

In a wonderful blog post entitled “Living with our Daemons,” Ingrid Burrington reminds us that Musk’s invocation of the supernatural is actually standard fare in the digital age. We’ve been living with so-called ghosts on the internet for a long time: Software wizards walk us through installations, apps work “like magic,” and emails bounce back into our inboxes from the mysterious MAILER-DAEMON. Evidently the tech world loves a good ghost story.

With that in mind, I decided to make a funny little game in p5.js in which users are prompted to “cast spells” on their internet enemies. Our assignment was to use some external media source in the sketch.

Check out my game here. 

Screen Shot 2015-11-08 at 4.36.43 PM

Screen Shot 2015-11-08 at 4.37.11 PM

The code for the game was fairly simple: I used DOM elements to create buttons and capture video from a webcam.
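
A minimal sketch of those two ingredients might look like the following. This is my own illustration of the approach, not the actual game code — the spell names and button labels are placeholders:

```javascript
var capture;
var spells = ['Expelliarmus', 'Hex of a Thousand Typos', 'Mute Forever'];

function setup() {
  noCanvas();
  // p5.dom: capture video from the webcam so the spell-caster sees themselves
  capture = createCapture(VIDEO);
  // one DOM button per spell
  for (var i = 0; i < spells.length; i++) {
    makeSpellButton(spells[i]);
  }
}

function makeSpellButton(name) {
  var b = createButton(name);
  b.mousePressed(function() {
    castSpell(name);
  });
}

function castSpell(name) {
  console.log('You cast ' + name + ' on your internet enemy!');
}
```

The only moving parts are createCapture() for the webcam feed and createButton() with a mousePressed callback for each spell.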

Here’s the full code:

I’m satisfied with the way the project turned out. Here’s a video of someone (me) interacting with the interface:

The pcomp final: A few ideas.

I’ve been brainstorming several ideas for what I would like to build as my final project in pcomp. Here are the ideas that are catching me:

  1. The water harp. I want to continue exploring what is possible with the harp I’ve already begun building for my midterm. I’d like to try out different organic materials including water, but perhaps also wind.
  2. Feeling a heartbeat. It would be beautiful to bring two strangers together and compare their heart rates as they interact with one another. My first thought was to build a physical heart that expands and contracts at the same rate as someone’s pulse, but now I’m considering other mediums.
  3. Biomorphic music. Something connecting the body with music.
  4. Sutures/strings. Something bringing strangers together with suturing.

First p5.js sketch using the NYT API.

Screen Shot 2015-10-27 at 1.01.55 PM

This week’s assignment was to create a sketch that employs an external data source. I had done this in one of my previous assignments, where I pulled data from a CSV file to visualize the changing water levels in Lake Powell.

For this week’s project, I decided to work with the New York Times’ API to pull the NYT’s weekly Best Sellers list. I wanted to create a simple search so that users could see what the most popular books were on their birthday. Unfortunately, the API only provides data going back to 2008, but I decided to finish the project anyway.

See the final sketch here.

Before getting too deep into the project, I wanted to make sure the NYT’s API was easy to use and amply documented. Unlike the Goodreads API (which I’d spent a few hours playing around with), the NYT API is pretty intuitive. It has a Best Sellers API that you can use once you’ve obtained the appropriate API key.

The URL that gets called each time a user searches is this:

http://api.nytimes.com/svc/books/v2/lists/overview.json?published_date=2012-11-18&api-key=YOUR_API_KEY

Before writing any code, I had to construct the URL so that the input, which is a date (2012-11-18), gets wedged into the middle of the URL.

Screen Shot 2015-10-27 at 1.25.02 PM
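
That string assembly can be sketched as a small helper. The function name is my own, and the key is a placeholder:

```javascript
// Build the Best Sellers request URL around the user's date.
// 'YOUR_API_KEY' stands in for a real NYT API key.
var base = 'http://api.nytimes.com/svc/books/v2/lists/overview.json?published_date=';
var apiKey = '&api-key=YOUR_API_KEY';

function buildRequestURL(date) {
  // date is a string like '2012-11-18'
  return base + date + apiKey;
}
```

Calling buildRequestURL('2012-11-18') returns the full request URL with the date wedged into the middle.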

Then, in the function setup(), I created a button and a search bar. I also called a new function, returnData(), which pulls the data as soon as the mouse is pressed.

Screen Shot 2015-10-27 at 1.26.08 PM
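
In p5 terms, that setup might look roughly like this. The element names, positions, and key are my own guesses, not the original code:

```javascript
var input, button;

function setup() {
  noCanvas();
  // search bar where the user types a date like 2012-11-18
  input = createInput('2012-11-18');
  // button that kicks off the API request when pressed
  button = createButton('search');
  button.mousePressed(returnData);
}

function returnData() {
  // Wedge the typed date into the middle of the request URL,
  // then fetch the JSON; gotData runs once the data has loaded.
  var dataString = 'http://api.nytimes.com/svc/books/v2/lists/overview.json' +
    '?published_date=' + input.value() + '&api-key=YOUR_API_KEY';
  loadJSON(dataString, gotData);
}
```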

The function returnData() constructs the URL as dataString and loads the JSON file. JSON, or JavaScript Object Notation, is a data format that maps directly onto JavaScript objects and arrays. The loadJSON() function takes two parameters: the URL (dataString) and a callback function (gotData) that tells the sketch what to do once it has the data.

Screen Shot 2015-10-27 at 1.38.10 PM

Finally, the function gotData() is defined. Figuring out how to get the right data from the JSON file was tricky. The JSON file provides a series of objects and arrays nested in each other. There’s a lot of information to work with for each book: The title, the author, the publish date, an image of the cover, the price, the ISBN, the publisher, the contributor, the list, etc.

I decided I just wanted my function to pull pictures of the front cover of each book. To do so, I first created an empty array and pushed the URL for each cover image into it. I printed the array to make sure it worked!

Next, I needed to use the p5.dom library in order to get the appropriate images from the URLs. I was introduced to the function createImg(), which creates an image element from the URL passed in as its parameter.

Screen Shot 2015-10-27 at 1.41.11 PM
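
The extract-then-display step can be sketched like this. The nested response shape here (results.lists[].books[].book_image) is my best recollection of the Best Sellers overview payload, not a verified schema:

```javascript
// Pull every cover-image URL out of the nested JSON, then hand
// each one to p5's createImg().
function getCoverURLs(data) {
  var covers = [];
  var lists = data.results.lists;
  for (var i = 0; i < lists.length; i++) {
    var books = lists[i].books;
    for (var j = 0; j < books.length; j++) {
      covers.push(books[j].book_image);
    }
  }
  return covers;
}

function gotData(data) {
  var covers = getCoverURLs(data);
  for (var k = 0; k < covers.length; k++) {
    createImg(covers[k]); // p5.dom: makes an <img> element per cover
  }
}
```

Because every list is walked, the same title can appear more than once — which is exactly the duplicate-covers issue described below.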

Screen Shot 2015-10-27 at 1.02.15 PM

That’s it! I got the search working. There were several lingering issues with the sketch that I didn’t have enough time to resolve, namely:

  1. There are duplicate book covers. Because I didn’t specify which Best Sellers list to display, it’s displaying all of them at once. As such, categories like hardcover_fiction and ebook_fiction are going to have repeats.
  2. The book covers aren’t wrapping. The books appear in a straight line because we added ‘inline-block’ to the display style, but the books do not wrap in order to fit within the canvas.
  3. The dates only go back to 2008. This is the only data the NYT API provided.
  4. The input is awkward. Entering in a date with the format YYYY-MM-DD is unwieldy. I would need to create three dropdowns or inputs so that users could enter the date information more easily.

See my full code here.

Midterm project: The (water) harp.

IMG_2465

The aspirational version of my water harp.

The project I proposed last week was ambitious to say the least. In my project proposal, I stated that I wanted to build an entire interaction around the tactile experience of running one’s fingers through a stream of water.

In reality, there were a lot of obstacles I hadn’t anticipated, and I realized that the project I thought I’d be building required a longer time frame for testing out ideas. I still love the concept, but I will need to keep testing before the project moves forward.

Here’s what I built:

The (water) harp. from Rebecca Ricks on Vimeo.

That being said, I think I built something pretty cool, even if it was only one piece of what I’d planned to build.

The initial plan.

After I nailed down the concept, I talked to Pedro about the different kinds of sensors that were available to me. We discussed several options: photosensors, lasers, etc. Since I was really looking to build a series of simple switches, he suggested I keep things simple by using what is called an end switch. I decided that I wanted the water to fall on 10 switches; as the participant interacted with the water, it would trigger different sounds.

IMG_2473

Step one: Fold the plexiglass into a shape that would create a wall of falling water.

I sketched out a few different ideas for the shape of the plexiglass. Ultimately I decided it would make the most sense to build a waterfall that would stand on its own on a tabletop surface. Bending the plexiglass with the plastic heater was a laborious process, but I was able to get it into a shape that I liked.

IMG_2410

IMG_2413

Step two: Test out the waterfall with different configurations.

The initial plan was to set up a system whereby the water drips straight off the plastic into a container, gets pumped back up to the top, and drips out of a pipe with holes drilled in it. I set up the components – piping, pump, acrylic – and started testing the water.

IMG_2437

The result of my experimentation was extremely frustrating. There were so many factors I had failed to consider when I’d decided to work with water. First of all, the water made a huge mess, which I hadn’t anticipated. More importantly, water has an affinity for plastic and acrylic, so I wasn’t getting the consistent, blanket-like waterfall shape I’d planned on working with.

It seemed like everyone on the floor had ideas about hydrodynamics and water pressure. I tested out different materials for making a lip for the acrylic, but nothing seemed to even out the stream.

Step three: Build the hardware components.

After three days of testing the waterfall, I decided to shift gears and begin building the actual switches and the circuits that would connect to the Arduino.

I laser cut some acrylic “keys” that would serve as an extension of the end switches, which the waterfall would be hitting. I also laser cut a board with ten holes to fit the switches. I soldered the switches to wires that led to the breadboard, which connected the 10 switches to digital pins 3-11.

IMG_2448

The wires were connected correctly, but I knew I would need to figure out a way to protect the hardware from getting wet. That would prove to be a really important issue if I ever got the waterfall working.

I did like the feeling of pushing on the keys. You can push them in a wave pattern, mimicking the feeling of water falling on them. It felt sufficiently tactile, and since I was in a time crunch, I decided to adjust my concept slightly to account for the fact that I still hadn’t figured out the best way to make the water fall evenly.

IMG_2461

Step four: Write the code and add the sounds in p5.js.

I tossed around a few different ideas for the types of sounds I wanted to play. I thought about playing funny noises, spooky noises, water noises, human voices, and various tones, but the piece of music I kept returning to was Richard Wagner’s Vorspiel (overture) from Das Rheingold, the first opera in his Ring Cycle.

The opening of the opera is a realization of emergence, of becoming as process. Wagner was obsessed with origin stories and with stripping stories down to their mythic core. Unlike Beethoven’s chaos, Wagner’s music begins with a monotonous E flat, building into more and more complex figurations of the E flat major chord, meant to mimic the motion of the Rhine, the river that runs through Germany. The piece lasts 136 bars and approximately four minutes.

There is something very watery about the piece of music. In his book Decoding Wagner, Thomas May writes: “The swirling textures of sound readily transmit the idea of water rushing and complement the music’s quickening into life.”

I chopped up the overture into 10 distinct “parts” that would correspond to the 10 keys. The result would be a layering of sounds as you run your hands over the keys.

IMG_2458

The photobooth: An interactive film using p5.dom.

In class on Thursday, we were introduced to the powerful p5.dom library. According to the p5 reference, the library allows you to interact with HTML5 objects, including video, audio, text, and your webcam.

I was immediately interested in taking a first pass at an interactive film in which the user could click a button to jump to another clip. I knew that I wanted to make some kind of supercut using p5.dom.

Here’s an unfinished, unpolished version of my sketch. I’m still working on it.

I was inspired by the Bob Dylan music video for “Like a Rolling Stone” in which users could “channel surf” as different individuals sing the lyrics to his song. I also was thinking a lot about video artist Christian Marclay’s art installation The Clock, a 24-hour montage of hundreds of film clips that make real-time references to the time of day. The video clips are all tied by one thing: The presence of a clock and/or time. The result is an eerie, fragmentary portrait of what one day looks like in the movies.

clock2_2353636b

I also wanted to access the webcam in some way. I’m taking my cues from Paul Ford’s insanely well-written and lengthy Bloomberg piece “What is Code,” which accesses your webcam and automatically prints a PDF certificate of completion with your picture when you have completed the 38,000-word article.

With that in mind, I wanted to combine both ideas and build a photobooth. You can switch between disparate clips of characters using a traditional photo booth in different movies by clicking the button “span time.” You can press “play” to start the film or “pause” to stop it:

movieButton = createButton('play');
movieButton.position(700, 500);
movieButton.mousePressed(toggleVid);
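
The toggle itself can be sketched like this. Apart from movieButton, the variable names are my own, not the sketch’s actual code:

```javascript
var vid;              // the p5 video element, created elsewhere with createVideo()
var movieButton;      // the play/pause button, created elsewhere with createButton()
var playing = false;

function toggleVid() {
  // Flip between playing and paused, and relabel the button to match.
  if (playing) {
    vid.pause();
    movieButton.html('play');
  } else {
    vid.play();
    movieButton.html('pause');
  }
  playing = !playing;
}
```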

Pcomp midterm proposal: The water harp.

Photograph by Eric Rose.

In keeping with the general tenor of my physical computing projects, I will continue to look at creative ways to provoke interactions with water.

I want a lot of my future projects to be an exploration of cymatics – a subset of modal vibrational phenomena in which a surface is vibrated and different patterns emerge in some kind of medium (paste, liquid, water, etc). Cymatics is essentially a process by which soundwaves are made visible. I like the idea of measuring a person’s heart rate and then visualizing that vibration pattern in a liquid, for instance.

The more I thought about this midterm project, though, the more I was struck by the delightful feeling of running one’s fingers through a steady stream of water. I want to build the entire interaction around that tactile experience.

So here’s my proposal: I plan to build a water harp. This is the initial sketch of the project:

IMG_2389

The harp will consist of a rounded plexiglass board that water flows over, creating a waterfall effect. The water will hit a series of 8 sensors (either moisture sensors, photosensors, or another conductive material). There will be a water pump that pumps up the water and brings it back to the top.

Each sensor will be paired with a sound of a different frequency that will play from the computer using p5.js. I’m still trying to decide what kind of sound will be best suited to this project. It could be a series of different noises triggered by each sensor (such as rainfall, thunder, rivers, etc). I was also thinking a lot about using human voices singing at different pitches that would then harmonize with each other.

When the participant runs his/her hand through the waterfall, it will create gaps in the water, triggering different sensors. Overall, I want the experience to be as tactile and delightful as possible.

Haikus with Donald Trump.

Screen Shot 2015-10-14 at 2.07.44 AM

I have a confession to make: I couldn’t figure out how to get this project up and running with a potentiometer in time for class on Wednesday. The process was littered with tiny successes and failures, and here’s just a bit of what I learned.

The assignment this week was to review the labs we did in class and figure out a creative way to get the Arduino to communicate with p5.js using serial communication. For instance, we could push a button and display text, twist a potentiometer and create an animation, touch a pressure sensor and play music, etc.

I’ve been talking a lot lately with friends at ITP about generative text and computational poetry. I loved the idea of creating some kind of random haiku generator based on the transcripts of Donald Trump’s speeches.

I figured out pretty quickly that I would need to learn a lot more about Python in order to write a program that could analyze large bodies of text. I did learn, though, that there are Python libraries that can count the number of syllables in each word, which will come in handy when I actually decide to build that version of the project.

So I decided to hack the project anyway. I searched online and realized that a few other people had been paying attention to Donald Trump’s campaign speeches and had written down examples of unintentional haikus he’d said in public.

I pulled 15 haikus from a Washington Post article and loaded them into a spreadsheet, with each line of a haiku in its own column. I then loaded each column into its own array and used a shared index, p, so that whenever you clicked the mouse, the sketch would randomly cycle through the haikus. I was able to display a haiku by asking for line1[p], line2[p], and line3[p] whenever the mouse was clicked.
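
The lookup can be sketched like this. The haiku text below is placeholder filler, not the actual scraped lines:

```javascript
// Each spreadsheet column becomes its own array; a shared index p
// picks one line from each array so the three lines stay together.
var line1 = ['First line of haiku A', 'First line of haiku B'];
var line2 = ['Second line of haiku A', 'Second line of haiku B'];
var line3 = ['Third line of haiku A', 'Third line of haiku B'];
var p = 0;

function mouseClicked() {
  // p5 calls this on every click: jump to a random haiku.
  p = Math.floor(Math.random() * line1.length);
}

function currentHaiku() {
  return line1[p] + '\n' + line2[p] + '\n' + line3[p];
}
```

Because all three arrays share the same index, the lines of a single haiku can never get mixed with the lines of another.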

Success! I created a version of the haiku generator in p5 before adding the physical input we’d learned about in the labs. Check out the Donald Trump haiku generator here.

Screen Shot 2015-10-14 at 2.10.04 AM

Next, I added in some code to enable the serial communication between the Arduino (already loaded with some simple commands) and the computer’s serial port.

Screen Shot 2015-10-14 at 2.12.48 AM

I mapped the pot values from 0-1023 to 0-15, since there were only 15 haikus to cycle through.
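
That scaling step can be sketched with a helper that mirrors p5’s map() signature (the names here are my own, not the lab code). One wrinkle worth noting: map() returns a float, so the value needs flooring before it can be used as an array index, and with 15 haikus the valid indices are actually 0-14:

```javascript
// Rescale a value from one range to another (same math as p5's map()).
function mapRange(value, inMin, inMax, outMin, outMax) {
  return outMin + (outMax - outMin) * (value - inMin) / (inMax - inMin);
}

// Turn a raw pot reading (0-1023) into a haiku index (0-15 as in the
// post). Flooring makes it usable as an array index.
function potToIndex(newData) {
  return Math.floor(mapRange(newData, 0, 1023, 0, 15));
}
```

Setting p = potToIndex(newData) wherever the serial data arrives is one hedged guess at wiring the pot value to the haiku index described below.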

Then I added in the lines of code we had reviewed in our labs. There was a lot going on to enable serial communication and map the correct values, but I felt like I understood what each line of code was doing. I used print(newData) and was able to see the values change from 0-15 as I twisted the pot!

Except there was one problem: I couldn’t figure out how to tell the sketch that the newData values being received from the serial port should set the variable p I mentioned above.

I tried writing a new function getHaiku(newData) that would display each haiku based on the pot number but it didn’t work. I felt like I tried everything, from creating new variables to setting p = newData, but I could not figure out how to get the pot value to control the animation.

Here’s my full code in case anyone has an idea of what I’m doing wrong:

Water elevation in Lake Powell.

Photograph by Michael Melford, National Geographic Creative.

I’ve been living in Utah for the last six years, give or take, and my friends and I have spent a lot of time exploring southern Utah national and state parks.

One of the most iconic bodies of water in the region is Lake Powell, a reservoir on the Colorado River that straddles both Utah and Arizona. Lake Powell is best known for its orange-red Navajo Sandstone canyons, clear streams, diverse wildlife, arches, natural bridges, and dozens of Native American archeological sites.

Since its creation in 1963, Lake Powell has become a major destination, attracting two million visitors annually. You can see why we love spending time there:

Photograph by my friend Kelsie Moore.

Photograph by my friend Kelsie Moore.

Lake Powell is the second-largest man-made reservoir in the U.S., storing 24,322,000 acre-feet of water when completely full. The lake acts as a water storage facility for the Upper Basin States (Colorado, Utah, Wyoming, and New Mexico), but it must also provide a specified annual flow to the Lower Basin States (Arizona, Nevada, and California).

Recent drought has caused the lake to shrink so much, however, that what once was the end of the San Juan River has become a ten-foot waterfall, according to National Geographic. As of 2014, reservoir capacities in Lake Powell were at 51% and the nearby Lake Mead was at 39%.

Drought has really reshaped the Colorado River region. According to the U.S. Drought Monitor, 11 of the past 14 years have been drought years in the southwest region, ranging from “severe” to “extreme” to “exceptional” depending on the year. You can see how drastically the landscape has changed over the past decade by taking a look at this series of natural-color images taken by the Landsat satellites.

This week in ICM, we’re learning how to use objects and arrays in JavaScript. I wanted to produce a simple data visualization displaying historical data about the water elevation in Lake Powell since its creation in the 1960s. I also knew that I wanted to use some kind of organic sound in the visualization, exploring the p5.sound library.

See the final visualization here.

Screen Shot 2015-10-08 at 11.59.52 AM

I found a database online that contained the information I needed, and I created a CSV file with the year in one column and the elevation values in another.

At first, I envisioned an animated visualization that snaked across the screen and split into fractals as you cycled through each year in the database. I liked the idea of having the design mimic the structure of the Colorado River. Here was my initial sketch:

FullSizeRender

I started playing around with the code and was able to produce an array of values from the CSV file. For instance, I created an array, elevation[], that pulled the water elevation value for a given year. I wrote some code that allowed me to cycle through the years:

Screen Shot 2015-10-08 at 12.01.54 PM

Screen Shot 2015-10-08 at 12.02.06 PM

Screen Shot 2015-10-08 at 12.02.23 PM
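
The CSV-to-arrays step can be sketched in plain JavaScript. In the actual sketch, p5’s table/file loading would read the file; the parsing below is my own illustration, and the sample rows in the test are made-up numbers, not real elevations:

```javascript
// Parse "year,elevation" rows into two parallel arrays, so that
// years[i] and elevation[i] always describe the same year.
function parseElevations(csvText) {
  var years = [];
  var elevation = [];
  var rows = csvText.trim().split('\n');
  for (var i = 0; i < rows.length; i++) {
    var cols = rows[i].split(',');
    years.push(parseInt(cols[0], 10));
    elevation.push(parseFloat(cols[1]));
  }
  return { years: years, elevation: elevation };
}
```

Cycling through the years is then just walking an index across the two arrays in step.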

After getting the years to cycle chronologically, I made an animation of a white line moving across the screen. For each new year, I wanted to draw a bar extending from the white line that helped visualize how the water levels were changing from year to year.

I created a function Bar() and gave it some parameters for drawing each of the bars.

Screen Shot 2015-10-08 at 12.06.52 PM

Screen Shot 2015-10-08 at 12.06.59 PM

Screen Shot 2015-10-08 at 12.07.31 PM

After defining the function, I started the animation by calling bar.display() with the specified parameters inside function draw(). The bars were now objects.
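
A Bar object along those lines might look like this. The parameter names are my own guesses, not the original sketch:

```javascript
// Each bar knows where it sits and how tall it should be.
// display() uses p5's rect(), so it only runs inside the sketch.
function Bar(x, y, w, h) {
  this.x = x;   // horizontal position along the timeline
  this.y = y;   // the white baseline the bars extend from
  this.w = w;   // bar width
  this.h = h;   // bar height, derived from that year's elevation

  this.display = function() {
    rect(this.x, this.y, this.w, -this.h); // draw upward from the line
  };
}
```

Inside draw(), calling something like new Bar(x, baseline, 4, scaledElevation).display() would add one bar per year.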

Next, I wanted to add sound to the visualization. I thought about a few different organic sounds: rainfall, rivers flowing, thunder, etc. In the end, I found a field recording of a thunderstorm in southern Utah and I immediately fell in love with the sound.

Every time a new year started, I introduced a 20-second clip of the sound so that over time you can hear the rolling thunder. I added some brown noise to sit underneath the sound file and some oscillation effects.

Screen Shot 2015-10-08 at 12.16.06 PM

When a new year starts, a new sound file plays, layering over the last sound. When the visualization finishes, the sound disconnects.

Screen Shot 2015-10-08 at 12.16.14 PM

Screen Shot 2015-10-08 at 12.16.21 PM

Here’s a video of the visualization:

Overall, I liked how this sketch turned out, but I had some major problems with this visualization.

First off, I think the data I obtained (water elevation by year) told a much less dramatic story than I had expected. I realized as I was doing research for this blog post that during droughts, it’s not the reservoir’s water level that declines most dramatically; rather, it’s the outflow of water that is reduced significantly. If I were to do this project again, I would spend more time researching the data set I wanted to use.

Second, I really didn’t love the simple animated graph I produced. Yes, it told the story in a straightforward way, but I really wanted to produce a fractal/river shape that was more visually compelling than just straight lines. I couldn’t figure out how to do it in time so I might try doing it for a future project.

I think that adding the sounds made this visualization much more interesting, and I want to keep exploring the p5.sound library in future sketches.

Junk food game redux.

Screen Shot 2015-10-01 at 1.34.33 PM

This week’s assignment was to improve upon a previous project by cleaning up and rewriting some of its code. I was excited to spend more time on the junk food game I built with Jamie last week, because there were still a number of things I thought I could improve.

Here’s the updated version of our game.

Before I’d done much work revising the code, Jamie told me she’d (amazingly!) already moved some things around and cleaned up a lot of the code we’d written last week. I added some cosmetic changes to the game, including a change to the animation that occurs when the mouth touches food.

Overall, here are some of the major things that were fixed in the new code:

  • One of the reasons the code from the original game was so long was that for each piece of food, we’d had to write its behavior out separately. In the updated version, Jamie defined a new function so that each piece of food shares a consistent set of parameters. The new function looked like this:

function Food(f, s, p, ip) {

For each piece of food, f is the image to load, s is the speed of the fall, p is the points awarded if it hits the mouth (either +10 or -10), and ip flags whether you hit the poop. The function was defined at the end of the code, after function setup() and function draw():

Screen Shot 2015-10-01 at 1.10.24 PM

In function setup() we’d already established what each of these parameters would be for each piece of food:

Screen Shot 2015-10-01 at 1.10.06 PM

That helped simplify the code considerably, since we could just write a general formula for the entire game rather than for each piece of food individually.
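
Fleshed out just a little, such a Food constructor might look like the following. The body here is my own guess at the pattern, not Jamie’s actual code, and the canvas width is an assumption:

```javascript
// f = image, s = fall speed, p = points when caught (+10 or -10),
// ip = whether this piece is the poop.
function Food(f, s, p, ip) {
  this.img = f;
  this.speed = s;
  this.points = p;
  this.isPoop = ip;
  this.x = Math.random() * 600; // random horizontal start (600 = assumed canvas width)
  this.y = 0;                   // start at the top of the canvas

  this.fall = function() {
    this.y += this.speed;       // drop down the canvas each frame
  };
}
```

One general formula like this replaces a separate block of positioning code per food item, which is exactly why the rewrite got so much shorter.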

  • In the older version of the game, there was a slight jitter when you clicked the initial button. The jitter was fixed by changing the function from ‘mouseIsPressed’ to ‘mousePressed.’ This seemed to fix the problem altogether.
  • Last week, we struggled to change the final “end” screens so that the screen cleared once the game finished. When you win or when you eat the poop, there are two different screens that are displayed. Here’s what the screen looked like last week:

Screen Shot 2015-10-01 at 12.59.41 PM

We updated the code to fix the error (with some added vulgarity because this is just for fun anyway):

Screen Shot 2015-10-01 at 1.06.50 PM

Overall, I think we made some good improvements to the code without changing the structure of the game. I’ve got to give Jamie a lot of the credit for simplifying the code and giving a much stronger architecture to the game we had built. I really want to keep improving it!

Raw footage from our film project.

For our video assignment, our group decided that the piece we had initially planned to film might prove too difficult to complete in the timeframe we had, so we chose to scrap the idea altogether and start again from the beginning.

Katie suggested that instead of doing a documentary-style piece we should focus on a simple narrative piece inspired by the Nico song “These Days.” It’s a melancholy song about a woman whose lover has left her and the ways in which that loss colors her daily life as she walks around the city. We were interested in exploring the ways in which the environment in which we live comes to mirror our internal emotions. I’ve discussed with friends the ways in which New York City has this remarkable ability to reflect your feelings back at you so that you are constantly being confronted by yourself.

With that in mind, we mapped out our storyboard and shot list. We knew that there were some locations we were set on filming: Coney Island, a coffee shop, a corner grocery store, etc. We also knew that new ideas would reveal themselves to us along the way and so we started the shoot with an open mind.

I should add that Naoki and Katie decided I should play the part of the young woman in the film (gahhhh). Here are some highlights from the footage:

Screen Shot 2015-09-29 at 8.53.33 PM

lol_im_a_hod_dog_catch_up_me

cafe

woder_wheel

Overall, I think Naoki, Katie, and I got fairly adept at adjusting the settings on the camera; by the end of the day we could change the aperture and ISO in a hurry. I felt much more confident that I could use the camera well, and we got some really good shots, especially at Coney Island.

It was difficult to capture all the scenes we wanted in one day so we are planning to record some additional footage this week. We have a good sense of the narrative structure, but one goal for this week is to nail down, shot by shot, what the film will look like.