Haikus with Donald Trump.

Screen Shot 2015-10-14 at 2.07.44 AM

I have a confession to make: I couldn’t figure out how to get this project up and running with a potentiometer in time for class on Wednesday. Still, the process was littered with tiny successes and failures, and here’s a bit of what I learned.

The assignment this week was to review the labs we did in class and figure out a creative way to get the Arduino to communicate with p5.js using serial communication. For instance, we could push a button and display text, twist a potentiometer and create an animation, touch a pressure sensor and play music, etc.

I’ve been talking a lot lately with friends at ITP about generative text and computational poetry. I loved the idea of creating some kind of random haiku generator based on the transcripts of Donald Trump’s speeches.

I figured out pretty quickly that I would need to learn a lot more about Python in order to write a program that would analyze large bodies of text. I did learn, though, that there are Python libraries that can count the number of syllables in each word, which will come in handy when I actually decide to build that version of the project.

So I decided to hack the project anyway. I searched online and found that a few other people had been paying attention to Donald Trump’s campaign speeches and had written down examples of unintentional haikus he’d delivered in public.

I pulled 15 haikus from a Washington Post article and loaded them into a spreadsheet, with each line of a haiku in its own column. I then loaded each column into its own array and used a shared index variable, p, to pick which haiku to show. Whenever you clicked the mouse, p changed to a random index and the sketch displayed line1[p], line2[p], and line3[p].

Success! I created a version of the haiku generator in p5 before adding the physical input we’d learned about in the labs. Check out the Donald Trump haiku generator here.
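Stripped down, the mouse-driven version looked something like this (the array names match my sketch, but the haiku text here is placeholder rather than Trump’s actual lines):

```javascript
// Minimal sketch of the click-to-shuffle haiku display.
// The real arrays held 15 haikus pulled from the Washington Post piece.
var line1 = ['placeholder first line', 'another first line'];
var line2 = ['placeholder second line', 'another second line'];
var line3 = ['placeholder third line', 'another third line'];
var p = 0; // index of the haiku currently on screen

function setup() {
  createCanvas(600, 400);
  textSize(24);
}

function draw() {
  background(255);
  fill(0);
  text(line1[p], 50, 120);
  text(line2[p], 50, 160);
  text(line3[p], 50, 200);
}

function mousePressed() {
  // pick a new random haiku on every click
  p = floor(random(line1.length));
}
```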

Screen Shot 2015-10-14 at 2.10.04 AM

Next, I added in some code to enable the serial communication between the Arduino (already loaded with some simple commands) and the computer’s serial port.

Screen Shot 2015-10-14 at 2.12.48 AM

I mapped the pot values from 0-1023 to 0-15, since there were only 15 haikus to cycle through.

Then I added the lines of code we had reviewed in our labs. There was a lot going on to enable serial communication and map the correct values, but I felt like I understood what each line of code was doing. I used print(newData) and was able to watch the values change from 0 to 15 as I twisted the pot!

Except there was one problem: I couldn’t figure out how to tell the sketch that the newData values being received from the serial port should be assigned to the variable p I mentioned above.

I tried writing a new function, getHaiku(newData), that would display each haiku based on the pot number, but it didn’t work. I felt like I tried everything, from creating new variables to setting p = newData, but I could not get the pot value to control the animation.
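For whatever it’s worth, one approach I want to try next is doing the assignment inside serialEvent() itself. This is only a sketch, assuming the p5.serialport library from the labs, a made-up port name, and the line arrays from the earlier version:

```javascript
// Sketch only: assumes p5.serialport from the labs and that line1/line2/line3
// are filled from the spreadsheet as in the mouse-driven version above.
var serial;                             // p5.SerialPort object
var p = 0;                              // haiku index, now driven by the pot
var line1 = [], line2 = [], line3 = []; // filled with the haiku lines elsewhere

function setup() {
  createCanvas(600, 400);
  serial = new p5.SerialPort();
  serial.open('/dev/cu.usbmodem1421'); // hypothetical port name
  serial.on('data', serialEvent);      // call serialEvent() whenever data arrives
}

function serialEvent() {
  var inString = serial.readLine();    // the Arduino prints one number per line
  if (inString.length > 0 && line1.length > 0) {
    var newData = Number(inString);
    // keep the index inside the bounds of the haiku arrays
    p = constrain(floor(newData), 0, line1.length - 1);
  }
}

function draw() {
  background(255);
  fill(0);
  if (line1.length === 0) return; // nothing to show until the haikus are loaded
  text(line1[p], 50, 120);
  text(line2[p], 50, 160);
  text(line3[p], 50, 200);
}
```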

Here’s my full code in case anyone has an idea of what I’m doing wrong:


Water elevation in Lake Powell.

Photograph by Michael Melford, National Geographic Creative

I’ve been living in Utah for the last six years, give or take, and my friends and I have spent a lot of time exploring southern Utah national and state parks.

One of the most iconic bodies of water in the region is Lake Powell, a reservoir on the Colorado River that straddles both Utah and Arizona. Lake Powell is best known for its orange-red Navajo Sandstone canyons, clear streams, diverse wildlife, arches, natural bridges, and dozens of Native American archeological sites.

Since its creation in 1963, Lake Powell has become a major destination, attracting two million visitors annually. You can see why we love spending time there:

Photograph by my friend Kelsie Moore.

Photograph by my friend Kelsie Moore.

Lake Powell is the second-largest man-made reservoir in the U.S., storing 24,322,000 acre feet of water when completely full. The lake acts as a water storage facility for the Upper Basin States (Colorado, Utah, Wyoming, and New Mexico) but it must also provide a specified annual flow to the Lower Basin States (Arizona, Nevada, and California).

Recent drought has caused the lake to shrink so much, however, that what was once the end of the San Juan River has become a ten-foot waterfall, according to National Geographic. As of 2014, Lake Powell was at 51% of capacity and nearby Lake Mead was at 39%.

Drought has really reshaped the Colorado River region. According to the U.S. Drought Monitor, 11 of the past 14 years have been drought years in the southwest region, ranging from “severe” to “extreme” to “exceptional” depending on the year. You can see how drastically the landscape has changed over the past decade by taking a look at this series of natural-color images taken by the Landsat series of satellites.

This week in ICM, we’re learning how to use objects and arrays in JavaScript. I wanted to produce a simple data visualization that displayed historical data about the water elevation in Lake Powell since its creation in the 1960s. I also knew that I wanted to use some kind of organic sound in the visualization, so I explored the p5.sound library.

See the final visualization here.

Screen Shot 2015-10-08 at 11.59.52 AM

I found a database online that contained the information I needed and created a CSV file with the year in one column and the elevation values in another.

At first, I envisioned an animated visualization that snaked across the screen and split into fractals as you cycled through each year in the database. I liked the idea of having the design mimic the structure of the Colorado River. Here was my initial sketch:

FullSizeRender

I started playing around with the code and was able to produce an array of values from the CSV file. For instance, I created an array elevation[] that pulled the water elevation value for a given year. I wrote some code that allowed me to cycle through the years:

Screen Shot 2015-10-08 at 12.01.54 PM
Screen Shot 2015-10-08 at 12.02.06 PM
Screen Shot 2015-10-08 at 12.02.23 PM
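In text form, the gist of loading the data was something like this (the file and column names here are approximations, not necessarily what I used):

```javascript
// Rough sketch of pulling the CSV into parallel arrays with loadTable().
var table;
var years = [];
var elevation = [];

function preload() {
  // 'header' tells p5 that the first row of the CSV holds column names
  table = loadTable('lakepowell.csv', 'csv', 'header');
}

function setup() {
  createCanvas(800, 400);
  for (var i = 0; i < table.getRowCount(); i++) {
    years.push(table.getNum(i, 'year'));          // e.g. 1963, 1964, ...
    elevation.push(table.getNum(i, 'elevation')); // water elevation for that year
  }
}
```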

After getting the years to cycle chronologically, I made an animation of a white line moving across the screen. For each new year, I wanted to draw a bar extending from the white line that helped visualize how the water levels were changing from year to year.

I created a constructor function, Bar(), and gave it some parameters for drawing each of the bars.

Screen Shot 2015-10-08 at 12.06.52 PM
Screen Shot 2015-10-08 at 12.06.59 PM
Screen Shot 2015-10-08 at 12.07.31 PM

After defining the function, I started the animation by calling bar.display() with the specified parameters inside function draw(). Each bar was now an object.
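The shape of that object was roughly the following (the parameter names are mine, and the real version also positioned each bar along the moving white line):

```javascript
// Simplified sketch of the Bar object used in the visualization.
function Bar(x, y, w, h, col) {
  this.x = x;     // horizontal position along the timeline
  this.y = y;     // the baseline (the moving white line)
  this.w = w;     // width of one year's bar
  this.h = h;     // height, derived from that year's elevation value
  this.col = col; // fill color

  this.display = function() {
    noStroke();
    fill(this.col);
    rect(this.x, this.y - this.h, this.w, this.h); // bar grows upward from the baseline
  };
}

// inside draw(), something like:
//   var h = map(elevation[i], minElevation, maxElevation, 0, 200);
//   var bar = new Bar(xPos, baselineY, 10, h, 255);
//   bar.display();
```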

Next, I wanted to add sound to the visualization. I thought about a few different organic sounds: rainfall, rivers flowing, thunder, etc. In the end, I found a field recording of a thunderstorm in southern Utah and I immediately fell in love with the sound.

Every time a new year started, I introduced a 20-second clip of the sound so that over time you could hear the rolling thunder. I added some brown noise to sit underneath the sound file, along with some oscillation effects.

Screen Shot 2015-10-08 at 12.16.06 PM

When a new year starts, a new sound file plays, layering over the last sound. When the visualization finishes, the sound disconnects.
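The sound layer boils down to something like this (a sketch assuming the p5.sound library; the file name stands in for the Utah thunderstorm recording, the helper names are mine, and the oscillation effects are left out):

```javascript
// Sketch of the layered sound: thunderstorm clips stack up year after year,
// with quiet brown noise underneath.
var thunder;
var noise;

function preload() {
  thunder = loadSound('thunderstorm.mp3'); // placeholder file name
}

function setup() {
  createCanvas(800, 400);
  noise = new p5.Noise('brown');
  noise.start();
  noise.amp(0.05); // keep the noise bed quiet under the recording
}

// called each time the visualization moves on to a new year
function startYearSound() {
  thunder.play(); // in the default 'sustain' mode, each play() overlaps the last
}

// called when the visualization reaches the final year
function endSound() {
  noise.stop();
  thunder.stop();
  thunder.disconnect(); // detach the sound from the output
}
```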

Screen Shot 2015-10-08 at 12.16.14 PM
Screen Shot 2015-10-08 at 12.16.21 PM

Here’s a video of the visualization:

Overall, I liked how this sketch turned out, but I had some major problems with this visualization.

First off, I think the data I obtained (water elevation values by year) told a much less dramatic story than I had expected. I realized as I was doing research for this blog post that during droughts, it isn’t necessarily the reservoir’s elevation that declines most dramatically; instead, the outflow of water is reduced significantly. If I were to do this project again, I would spend more time researching the data set I wanted to use.

Second, I really didn’t love the simple animated graph I produced. Yes, it told the story in a straightforward way, but I really wanted to produce a fractal/river shape that was more visually compelling than just straight lines. I couldn’t figure out how to do it in time so I might try doing it for a future project.

I think that adding the sounds made this visualization much more interesting and I want to keep exploring the p5.sound library for future sketches.


Junk food game redux.

Screen Shot 2015-10-01 at 1.34.33 PM

This week’s assignment was to improve upon a previous project by cleaning up and rewriting some of its code. I was excited to spend more time working on the junk food game I built with Jamie last week because there were still a number of things I thought I could improve.

Here’s the updated version of our game.

Before I’d gotten far revising the code, Jamie told me she had (amazingly!) already moved some things around and cleaned up a lot of what we’d written last week. I added some cosmetic changes to the game, including a change in the animation that occurs when the mouth touches food.

Overall, here are some of the major things that were fixed in the new code:


  • One of the reasons the code from the original game was so long was that we’d had to write a separate block of instructions for each piece of food. In the updated version, Jamie defined a new function so that each piece of food had a consistent set of parameters. The new function looked like this:

function Food(f, s, p, ip) {

For each piece of food, f is the image to load, s is the speed of the fall, p is the points awarded if it hits the mouth (either +10 or -10), and ip flags whether it’s the poop. The function was defined at the end of the code, after function setup() and function draw():

Screen Shot 2015-10-01 at 1.10.24 PM

In function setup() we’d already established what each of these parameters would be for each piece of food:

Screen Shot 2015-10-01 at 1.10.06 PM

That simplified the code considerably, since we could write one general formula for the entire game rather than separate code for each piece of food (a simplified sketch of this pattern appears after this list).

  • In the older version of the game, there was a slight jitter when you clicked the initial button. The jitter was fixed by switching from the mouseIsPressed variable to the mousePressed() function, which seemed to solve the problem altogether.
  • Last week, we struggled to change the final “end” screens so that the screen cleared once the game finished. When you win or when you eat the poop, there are two different screens that are displayed. Here’s what the screen looked like last week:

Screen Shot 2015-10-01 at 12.59.41 PM

We updated the code to fix the error (with some added vulgarity because this is just for fun anyway):

Screen Shot 2015-10-01 at 1.06.50 PM
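To make the earlier bullet about Food() concrete, here is a simplified sketch of that pattern; the bodies are stand-ins rather than Jamie’s actual code, and the poop/lose logic is only hinted at:

```javascript
// Simplified stand-in for the refactored Food() constructor.
function Food(f, s, p, ip) {
  this.img = f;     // preloaded image for this piece of food
  this.speed = s;   // how fast it falls
  this.points = p;  // +10 for healthy food, -10 for junk
  this.isPoop = ip; // true only for the poop emoji (instant loss)
  this.x = random(width);
  this.y = 0;

  this.update = function() {
    this.y += this.speed;
    if (this.y > height) { // fell off the bottom: respawn at the top
      this.y = 0;
      this.x = random(width);
    }
  };

  this.display = function() {
    image(this.img, this.x, this.y);
  };
}

// in setup(), each piece of food then becomes a single line:
//   burger = new Food(burgerImg, 3, -10, false);
//   apple  = new Food(appleImg, 2, 10, false);
//   poop   = new Food(poopImg, 4, 0, true);
```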

Overall, I think we made some good improvements to the code without changing the structure of the game. I’ve got to give Jamie a lot of the credit for simplifying the code and giving a much stronger architecture to the game we had built. I really want to keep improving the game!

Raw footage from our film project.

For our video assignment, our group decided that the piece we had initially planned to film might prove too difficult to complete in the timeframe we had. We chose to scrap the idea altogether and start again from the beginning.

Katie suggested that instead of doing a documentary-style piece we should focus on a simple narrative piece inspired by the Nico song “These Days.” It’s a melancholy song about a woman whose lover has left her and the ways in which that loss colors her daily life as she walks around the city. We were interested in exploring the ways in which the environment in which we live comes to mirror our internal emotions. I’ve discussed with friends the ways in which New York City has this remarkable ability to reflect your feelings back at you so that you are constantly being confronted by yourself.

With that in mind, we mapped out our storyboard and shot list. We knew that there were some locations we were set on filming: Coney Island, a coffee shop, a corner grocery store, etc. We also knew that new ideas would reveal themselves to us along the way and so we started the shoot with an open mind.

I should add that Naoki and Katie decided I should play the part of the young woman in the film (gahhhh). Here are some highlights from the footage:

Screen Shot 2015-09-29 at 8.53.33 PM

lol_im_a_hod_dog_catch_up_me

cafe

woder_wheel

Overall, I think that Naoki, Katie, and I got fairly adept at adjusting the settings on the camera. We were able to change the aperture and ISO in a hurry, and by the end of the day I felt much more confident that I could use the camera well and that we had gotten some really good shots, especially at Coney Island.

It was difficult to capture all the scenes we wanted in one day so we are planning to record some additional footage this week. We have a good sense of the narrative structure, but one goal for this week is to nail down, shot by shot, what the film will look like.

Interactive tech in the NY subway system.

10

Last week I was having a conversation with a classmate about how New York City seems to have failed to integrate technology into its transportation system compared to other cities. I spent a few weeks visiting my parents in Tokyo this past summer and became very well acquainted with Tokyo’s metro system and the ways in which technology aids travelers. For instance, passengers use a universal “Pasmo” card that scans instantly and makes for quick transactions. Tokyo’s metro stations all have detailed instructions about how to navigate the city by train. There’s also free WiFi everywhere.

For this reason, I was thrilled to see the installation of new “On the Go” interactive wayfinding kiosks on many New York subway platforms. The kiosks were designed by Control Group, which partnered with the MTA to improve the city’s transportation information.

I decided to observe passengers interacting with the kiosks at the West 4th Street subway stop in Greenwich Village. In terms of interface, the kiosks have a giant touch screen that displays neighborhood maps, visual directions based on real-time train information, and countdowns to train arrivals.

Before observing the kiosks, I assumed that most passengers would use them to see when the next train would arrive. From my observations, however, most passengers didn’t use the screens at all; those who did appeared to be tourists or visitors from other countries. I noticed that the kiosks were used most often when trains were running late or on an alternate route and passengers were seeking additional information.

The interface at the kiosks is extremely visual rather than text-heavy, which is helpful for people who might not be familiar with English. I didn’t notice people experiencing difficulties using the kiosks, although the information about train arrival times was not always accurate, and people sometimes appeared frustrated by that.

On average, most people used the kiosks for about 10 seconds to check the status of a particular train. Here’s a video the MTA made to promote the new installations:

Overall, I thought the kiosks were a brilliant move on the part of the MTA. It’s clear that the New York subway system needed more transparency in terms of information availability and ease of use. According to Chris Crawford, interaction is a cyclic process between two actors, human or otherwise, who alternately listen, think, and speak, so that physical interaction becomes a kind of two-way conversation. By that definition, the kiosks are truly interactive.


Measuring cumulative rainfall with a rain gauge.

IMG_2249

I’ve spent about six years living in Utah, where the climate is arid and drought is a constant concern. According to the U.S. Drought Monitor, about a quarter of the state continues to experience severe (D2) level drought. The region often doesn’t receive the rainfall it needs to keep its reservoirs at capacity.

In keeping with my interest in humans’ relationship with their physical environment and ecological processes, I decided that I wanted my next project to collect information about precipitation.

Using a rain gauge that my friend Joao Costa had lent me, I was able to measure the accumulation of rainfall over time.
IMG_2231

A rain gauge is a self-emptying tipping bucket that collects and dispenses water. It allows you to display daily and accumulated rainfall as well as the rate of rainfall. The gauge essentially acts as a switch, making contact when a specified amount of water enters the bucket.

Rain collects at the top of the bucket, where a funnel channels the precipitation into a small seesaw-like container. After 0.011 inches (0.2794 mm) of rain has fallen, the lever tips and dumps the collected water, and an electrical pulse is sent to the Arduino, which counts the tips using an interrupt input.

IMG_2233

Once I decided that I wanted to measure precipitation using the rain gauge, I did some research into the particular gauge I was using. According to its datasheet, the gauge connects through an adaptor, and only the two center conductors of the cable are used. I hooked up the rain bucket like so:

IMG_2245

The thing that is really powerful about the rain gauge is that it can measure the cumulative rainfall over a period of time. I decided that I wanted to connect the LCD screen to the Arduino as an output in order to display the amount of precipitation.

Connecting the LCD screen was very difficult because of the number of wires that needed to be connected to the screen (eight!). The LCD screen also required that I set up a potentiometer to control the brightness of the screen, so I added that to the breadboard.

IMG_2250

Once the setup was complete, I wrote the Arduino program that would display the precipitation information I wanted. I figured out how to set up the LCD screen to display text on a single line.

Next, I needed to figure out how to display the actual amount of precipitation that had fallen into the bucket. To do so, I created a variable “rainTipperCounter.” Every time the see-saw in the gauge filled with water and tipped, the count went up by one.

Screen Shot 2015-09-29 at 7.49.16 PM

I knew that each time the count increased, 0.011 inches of rainfall had collected in the rain gauge. I programmed the LCD to display the rainTipperCounter, multiplied by 0.011, so that the actual amount of accumulated precipitation was displayed.

Screen Shot 2015-09-29 at 7.51.36 PM

IMG_2247

And just like that, I’d set up a simple rain gauge that tracked cumulative precipitation. It wasn’t raining outside today so I had to test the switch by pouring water into the gauge. Here’s how the final product turned out:

Measuring cumulative rainfall with a rain gauge from Rebecca Ricks on Vimeo.

Junk food heaven. Or, my first game in p5.

Screen Shot 2015-09-24 at 1.49.33 PM

This week’s assignment for computational media was to build something that uses (1) a button or slider and (2) an algorithm for animation.

I immediately knew that however this project played out, I wanted it to involve both junk food and the Beach Boys.

Play the game here!

I was assigned to work with Jamie Ruddy. Jamie and I decided within the first 30 seconds of our conversation that we wanted to build a simple game using interactive elements and animation. Because both of us were drawn to the playful Japanese emojis that we use when we text, we designed a simple game in which the player moves a tongue that catches various pieces of food: hamburgers, apples, eggplant, etc. The player gains points for eating healthy food and loses points for eating junk food.

Here’s the initial sketch I drew:

IMG_2135

There were a lot of steps to build this game. First, we had to create a welcome screen that changed to the game screen once you clicked a button.

Screen Shot 2015-09-24 at 1.32.29 PM
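The screen switch boils down to a pattern like this (a simplified sketch, not our exact code):

```javascript
// Simplified sketch of switching from a welcome screen to the game screen.
var gameStarted = false;

function setup() {
  createCanvas(600, 400);
  textAlign(CENTER);
}

function draw() {
  if (!gameStarted) {
    background(200);
    text('Click to start', width / 2, height / 2); // stand-in welcome screen
  } else {
    background(255);
    // ...the falling food, tongue, and scoring get drawn here...
  }
}

function mousePressed() {
  gameStarted = true; // flip to the game screen on the first click
}
```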

Then, we had to create an assets library of PNG images to load into the game. Each piece of food would drop in from the top of the screen: the x-coordinate was random and the y-coordinate started at 0 and increased each frame.

Screen Shot 2015-09-24 at 1.57.28 PM

Then came the most difficult step in the entire process. In order to animate the emojis and make them interact with one another, we had to create an object. This was a first for me, so I spent a lot of time reviewing the p5 reference library’s explanation of how to create an object.

Within the object, we created a function Food() that displayed each item of food, made it drop at a different rate than the other food, and then checked to see whether it had touched the tongue. It also checked whether the food had hit the bottom of the screen (y = height); if so, the food started again at the top of the screen (y = 0).

From there, we needed to make sure the score increased or decreased based on which object the tongue touched. For instance, the apple added 10 points and the hamburger subtracted 10. Get to 50 points and you win.

Screen Shot 2015-09-24 at 1.23.28 PM

And for a bonus surprise: Eat the poop emoji and you just straight up lose the entire game (because, gross).

Screen Shot 2015-09-24 at 1.22.29 PM

To make the game more fun, I added the Beach Boys’ “Kokomo” to the sketch. It was simple to call the mp3 in the library by using the preload() function.

Screen Shot 2015-09-24 at 1.15.51 PM
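In text form, the pattern is roughly the following (the file name is a placeholder):

```javascript
// Loading and looping the soundtrack with the p5.sound library.
var song;

function preload() {
  song = loadSound('kokomo.mp3'); // placeholder file name
}

function setup() {
  createCanvas(600, 400);
  song.loop(); // keep the Beach Boys playing for the whole game
}
```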

Success! There’s a lot more I’d like to do with this game. For instance, I’d like there to be a location-based animation whenever the food hits the tongue  (right now the screen flashes white).

Here’s a video of how the game works:


Homelessness in Greenpoint: Initial storyboard.

The homeless population has always had a persistent presence in New York, but in the past several years homelessness in the city has skyrocketed. According to recent reports, the number of homeless people in city shelters has risen from 53,000 to 60,000 during Mayor Bill de Blasio’s term.

According to de Blasio, people are increasingly losing their permanent housing due to a perfect storm of plateauing incomes and a hot real estate market reaching into new neighborhoods, like Bushwick or Greenpoint, where landlords once accepted homeless families and their subsidies. Simply put, many families can’t keep up and end up in the shelter system.

With that in mind, our group is interested in examining how the homelessness crisis is playing out in one particular street corner in Greenpoint, Brooklyn: the intersection of Milton and Manhattan.

There are two churches on the street, one of which sponsors a daily food pantry for the neighborhood’s homeless population. Alongside the lines of homeless people, however, are the vestiges of gentrification: designer coffee shops, a CVS, boutiques, and a bank. This is just one of the battlegrounds where New Yorkers are increasingly losing their housing and entering the shelter system.

Here is the initial storyboard for the 3-5 minute film we will make:


68B90310-BF72-4855-AC85-30BD19FEE632

E33EB94B-E294-4BB1-B5D7-2ECC025748F6

BE5E117C-8FDB-4931-B0D7-29BB8168A2F2

Sonifying ice melting.

IMG_2108

“Art to me is our expression of being in love with (and fearing for) our world — our efforts to capture and predict the patterns, colours, movement we see around us,” says environmental strategist Dekila Chungyalpa.

Those words were written to coincide with a massive art installation piece Ice Watch by Icelandic artist Olafur Eliasson. For the piece, Eliasson obtained huge chunks of Arctic ice and installed them in front of Copenhagen’s city hall, where they slowly melted, a powerful reminder to the public of the reality of climate change.

I’ve been thinking a lot lately about ways in which we can sonify many of the natural geological processes that are simply not audible to human ears: the sound of water levels gradually dropping, the sound of tectonic plates sliding, the sounds of topography and mountain ranges, the sounds of glaciers melting, for instance. These objects have their own internal auditory patterns and acoustics.

Eliasson’s installation piece, along with Paul Kos’ “The Sound of Ice Melting” inspired my analog assignment this week. I wanted to generate a sound from the process of ice chunks melting.

To do so, I first purchased a simple rain/moisture sensor that would function as the analog input in the circuit. The sensor is essentially a variable resistor, like a potentiometer: the amount of resistance changes based on the amount of water/moisture present.

IMG_2099

IMG_2100
The board that senses the presence of moisture.

Using the laser cutter (first time, yeah!) I cut a piece of acrylic to mount the sensor on. I covered some of the wire connections with hot glue to make sure the water wouldn’t interfere with the electricity.

I connected the sensor to the Arduino board via analog pin #A0 and then connected the Piezo as a digital output from the digital pin #3.

IMG_2112

After wiring up the board, I needed to test the sensor to see what range of signal values I would be dealing with. This is the code that printed those values, which ranged from 0 to 1023.

Screen Shot 2015-09-22 at 11.03.16 PM

Eventually, I knew that I wanted the sounds emitted by the dripping ice to create a sense of urgency or anxiety. To do so, I needed to change the code so that the tone sped up.

After determining the range of values available, I decided to write some additional code that would change the delay between tones based on how much water was present. When there was only a tiny droplet, the Piezo would buzz at a slow rhythm. As more water dripped onto the sensor, the rhythm would speed up.

Screen Shot 2015-09-22 at 11.03.30 PM

Finally, I froze water in different sized chunks to create the ice. I put the chunks in fish netting and dangled them above the sensor, letting them melt at room temperature. Gradually the sounds sped up as the ice melted more quickly.

output_ioVLd3

Here is the final product! 

The sound of melting ice. from Rebecca Ricks on Vimeo.

Full code below.

#define rainSensor A0
#define buzzer 3

void setup() { //analog input from rainSensor, digital output from buzzer
Serial.begin(9600);
//pinMode(rainSensor, INPUT);
pinMode(buzzer, OUTPUT);

}

void loop() {
  int sensorValue = analogRead(rainSensor); // read the analog input from pin A0
  Serial.println(sensorValue);
  delay(100);

  // write the digital output; sensor values range from 0 to 1023
  if (sensorValue == 0) {
    tone(buzzer, 440); // continuous tone when the sensor reads 0
  } else if (sensorValue < 300) {
    tone(buzzer, 440);
    delay(20);
    noTone(buzzer);
    delay(20);
  } else if (sensorValue < 600) {
    tone(buzzer, 440);
    delay(50);
    noTone(buzzer);
    delay(50);
  } else if (sensorValue < 900) {
    tone(buzzer, 440);
    delay(100);
    noTone(buzzer);
    delay(100);
  } else {
    tone(buzzer, 440);
    delay(200);
    noTone(buzzer);
    delay(200);
  }
}

Overall, I had hoped to do a lot more with this project. For instance, I would have liked to change the setup so that the melting ice interacted with the sensor in a more interesting way (beyond just dangling above it).

Learning to animate in p5.

Screen Shot 2015-09-17 at 1.22.58 AM

Last week we drew static drawings from primitive shapes. This week we were assigned the task of animating our sketches in p5 using a set of new functions, including frameRate(), rotate(), and translate(), and new variables like mouseX and mouseY.

To complete the assignment, our sketch needed to contain: (1) One element that changed over time independent of the mouse; (2) One element controlled by the mouse; and (3) One element that’s different every time you run the sketch.

I’ve been a cyclist for a few years now. In Salt Lake City, where I lived for the past two years, I loved biking up into the canyon in the late afternoon before sunset. That experience was the inspiration behind my sketch this week, in which a bike rides into the mountains.

See my final sketch here.

Here was my initial sketch:

IMG_2029
I started planning out the sketch in Adobe Illustrator to get a better sense of the composition of the drawing.

mountain

My plan was to make the sun and its rays rotate in a circle independent of the mouse. That actually proved a lot more difficult than I’d anticipated. The action required me to use a new set of functions, including push(), translate(), rotate(), and pop(). Here are the lines of code I wrote that drew the sun:

push();
translate(350,100);
rotate (rads);

for (var d = 0; d < 10; d ++){
noStroke();
fill(206+d,188+d,122+d);
ellipse(0,0,175,175);

stroke(206+d,188+d,122+d);
strokeWeight(3);

for (var i = 0; i < 36; i ++) {
line(0,0,x,y);
rotate(PI/20);
}
}
pop();

rads = rads + 1.57;

I started layering on primitive shapes for the mountains and bicycle using the beginShape() function we learned in class. I decided to make the bicycle a different color every time you load the page by declaring var r, var g, and var b and then defining them in setup():

r = 230;
g = random(100,200);
b = random(100,200);

Then the most difficult task lay ahead: Getting the bicycle to move at a variable speed controlled by the mouse. I tried a lot of different things, including making an object out of the bicycle using some new tricks we learned in Javascript, but ultimately this is the code that really worked out:

var speedX = 1;

var speedY = 1;

//bicycle

stroke(r,g,b,255);
strokeWeight(5);
noFill();

ellipse(225+speedX,520-speedY,100,100);
ellipse(400+speedX,490-speedY,100,100); 
ellipse(225+speedX,520-speedY,10,10); 
ellipse(400+speedX,490-speedY,10,10); 
quad(225+speedX,520-speedY,305+speedX,505-speedY,350+speedX,435-speedY,265+speedX,450-speedY); 
line(260+speedX,445-speedY,305+speedX,505-speedY); 
line(400+speedX,490-speedY,352+speedX,435-speedY); 
ellipse(305+speedX,505-speedY,30,30);
quad(250+speedX,450-speedY,245+speedX,440-speedY,275+speedX,440-speedY,275+speedX,445-speedY);

speedX+=map(mouseX,0,width,1,5); 
speedY+=0.15;

What’s happening above is that I’ve created two new variables, speedX and speedY, that determine the bike’s position relative to its starting point. The last two lines control how quickly that position changes each frame: map() converts mouseX into an increment between 1 and 5, so when mouseX is high (i.e. the mouse is on the far right of the sketch), the bike moves more quickly.

Here’s a video of the animation.

Full code below.

var fr = 30;
var x = 1200;
var y = 0;
var rads = 0; // declare variable rads, angle at which sun will rotate
var speedX = 1; //declare variable speedX
var speedY = 1; //declare variable speedY
var r;
var g;
var b;
function setup() {
createCanvas(1100,600);
background(252,240,232);
r = 230;
g = random(100,200);
b = random(100,200);

}

function draw() {

//draw sun and rays and make them rotate.
push();
translate(350,100);
rotate (rads);

for (var d = 0; d < 10; d ++){
noStroke();
fill(206+d,188+d,122+d);
ellipse(0,0,175,175);

stroke(206+d,188+d,122+d);
strokeWeight(3);

for (var i = 0; i < 36; i ++) {
line(0,0,x,y);
rotate(PI/20);

}
}
pop ();

rads = rads + 1.57;

//mountains

//mountains layer one
stroke(136,167,173);
fill(136,167,173);
beginShape();
vertex(0,600);
vertex(0,400);
vertex(200,300);
vertex(300,350);
vertex(400,250);
vertex(500,325);
vertex(600,100);
vertex(750,200);
vertex(875,60);
vertex(1000,150);
vertex(1100,100);
vertex(1100,600);
endShape();

//mountains layer two
stroke(92,109,120,100);
fill(92,109,120,100);
beginShape();
vertex(0,600);
vertex(0,400);
vertex(275,375);
vertex(350,400);
vertex(425,375);
vertex(575,375);
vertex(800,200);
vertex(900,300);
vertex(1100,250);
vertex(1100,600);
endShape();

//mountains layer three
stroke(92,109,112,200);
fill(92,109,112,200);
beginShape();
vertex(0,600);
vertex(0,550);
vertex(500,400);
vertex(575,425);
vertex(600,400);
vertex(800,400);
vertex(875,300);
vertex(925,375);
vertex(1100,300);
vertex(1100,600);
endShape();

//mountains layer four
stroke(213,207,225,25);
fill(213,207,225,25);
triangle(0,600,1100,425,1100,600);

//bicycle

stroke(r,g,b,255);
strokeWeight(5);
noFill();

ellipse(225+speedX,520-speedY,100,100); //bike wheel
ellipse(400+speedX,490-speedY,100,100); //bike wheel
ellipse(225+speedX,520-speedY,10,10); //inner bike wheel
ellipse(400+speedX,490-speedY,10,10); //inner bike wheel
quad(225+speedX,520-speedY,305+speedX,505-speedY,350+speedX,435-speedY,265+speedX,450-speedY); //frame
line(260+speedX,445-speedY,305+speedX,505-speedY); //frame
line(400+speedX,490-speedY,352+speedX,435-speedY); //frame
ellipse(305+speedX,505-speedY,30,30); //frame
quad(250+speedX,450-speedY,245+speedX,440-speedY,275+speedX,440-speedY,275+speedX,445-speedY);

speedX+=map(mouseX,0,width,1,5); //x coordinate of the mouse determine speed of the bike
speedY+=0.15;
}