Internet freedom in the Arab world: An interactive map.

Screen Shot 2015-12-18 at 2.48.58 AM

For my final ICM project, I created an interactive map tracking individuals in the Arab world who had been detained, prosecuted, or harassed by their governments in 2015 because of their online activity.

Check out the map here.*

Data sources: Committee to Protect Journalists’ 2015 report on jailed journalists | Global Voices’ Digital Citizen project

When I initially proposed this project, I planned to limit the scope of my data to imprisoned journalists. As I did more research, however, I realized that journalists weren’t the only citizens being actively censored by their governments for the things they said on the internet. Activists, outspoken citizens, bloggers, and gay people are receiving lengthy prison sentences for expressing themselves online. In many cases, individuals are being arbitrarily detained without any clear accusations or charges.

THE DATA

The data for this project came from several sources. First, I combed through the data collected by the Committee to Protect Journalists in its 2015 report on jailed journalists. Second, I consulted Global Voices’ outstanding project Digital Citizen, a biweekly review of human rights in the Arab world. For every individual, I found at least one other piece of journalism online confirming the incident. The result was a long list of individuals who had been detained, prosecuted, physically harassed, or killed by their governments between December 2014 and December 2015.

OVERALL TRENDS

Here’s what is most troubling: The number of individuals being targeted for their online behavior in the Arab world is increasing. According to Freedom House’s 2015 Freedom on the Net Report, in the past year there was a spike in public floggings of liberal bloggers, life sentences for online critics, and beheadings of internet-based journalists in the Middle East.

The report states that in the past year “penalties for online expression reached a new level of severity as both authorities and criminal groups made public examples of internet users who opposed their agenda.” In Egypt, for instance, two journalists received life sentences for their online coverage of a violent government crackdown on a Muslim Brotherhood protest.

arab-map

THE LEGAL CLIMATE

With this project, I wanted to explore the factors driving the rise in imprisonments and detainments for online behavior. Specifically, I was interested in how the legal climate and attitudes toward the internet in each of these countries contribute to the problem.

The adoption of sweeping cybersecurity and anti-terrorism laws in 2015 has been cited as one of the major causes of increased imprisonments. This year, Mauritania proposed two draft laws on cybercrime and the information economy that punish “insults” against the government with up to seven years in jail. Tunisia passed a counter-terrorism law that arbitrarily restricts freedom of expression. Sudan passed a new freedom of information act that legalizes government censorship. Egypt passed a number of cybercrime and anti-terrorism laws that criminalize broadly defined online offenses, allowing the government to crack down on human rights activists. The Jordanian government broadened its legal definition of “terrorism” to include critics who “disturb its relations with a foreign state.” Kuwait adopted a controversial anti-terrorism law. Other countries in the region continue to enforce their existing cybercrime and anti-terror laws, including the U.A.E., which has been known to impose the death penalty for defamation charges.

A quick look at the data suggests that these were the charges most often brought against individuals:

Untitled-1

I plan to continue investigating this issue in order to better understand why there has been an uptick in human rights abuses against journalists, internet users, bloggers, and activists.

A WORD ABOUT THE DATA

*A major limitation of this data set: It is impossible to have a complete picture of human rights abuses right now. We do not yet have access to information about every detained or imprisoned citizen in the Middle East.

For countries in which there is no rule of law (e.g., Libya and Syria), access to information about killings and detainments is limited. In addition, it’s incredibly difficult to get an accurate read on human rights violations in Israel/Palestine, so data from that region has been temporarily omitted from this map.

I will continue to add individuals to the map as the media reports on human rights abuses that occurred in the past year.

ICM Project Proposal: Mapping jailed journalists.

arab-world

For my final ICM project, I intend to design an interactive map that flags countries where freedom of speech is under attack. Taking data from the year 2015, I will show where journalists were imprisoned around the world.

Last year, the Committee to Protect Journalists (CPJ) published a comprehensive list of journalists who were imprisoned around the world in 2014 and their perceived offense against their country’s government.

Here is a test map of the MENA region that I began designing a few weeks ago. It’s based on data from Global Voices’ Digital Citizen biweekly newsletter about journalists who are imprisoned for the things they post online.

The broader question for me is about how the internet is being used in the Arab world. To no small degree, the spike in internet use, along with broadcast tools like Periscope and Twitter, has empowered activists across the region to organize into collectives to fight abuses of power. The protests in Cairo’s Tahrir Square during the Arab Spring, for instance, were organized and promoted on social networks like Twitter and Facebook.

In a very optimistic 2005 academic paper, “The Internet in the Arab World: Playground for Political Liberalization,” Albrecht Hofheinz suggests that the internet will expand the possibilities of what can be said in public spaces and usher in a new era of liberalization in Middle East countries. While we have witnessed major strides toward greater transparency and democratization in the region thanks to the internet, there is still a long way to go.

Most shockingly, in recent years the internet has been wielded as a tool for authoritarian regimes to discipline those individuals who are doing the very critical work of reporting human rights abuses as they are occurring. Not only is censorship at an all-time high in many of these countries, but many governments are seeking to pass new cybersecurity laws that would sanction the arrest of journalists speaking out against the government in online spaces.

With my project, I hope not only to visualize where these abuses are occurring but also to give them a name and a face. I would also like to explore the legal statutes and cybersecurity laws governing internet use in the Middle East. Are these actions sanctioned by the laws? Are lawmakers paying attention to the internet? How will the relationship between the internet and the Arab world evolve in the coming years?

There are a few existing projects I will look to for inspiration.

Cast magicks on internet trolls.

Screen Shot 2015-11-08 at 4.42.36 PM

Lately I’ve noticed something strange: Conversations about technology often locate themselves in the realm of the magical or the supernatural.

The sci-fi genre is replete with descriptions of machines couched in language linked to animism, magic, witchcraft, the occult, and ghosts. In William Gibson’s 1984 sci-fi novel Neuromancer, the protagonist Case describes the posthuman body as “data made flesh,” a reference to Christian ontology and Jesus’ divine personhood.

These types of metaphors reaffirm one of the central assumptions at the core of the sci-fi genre: Breathing life into a machine is not far off from breathing life into a human body.

Similarly, the language used by today’s Silicon Valley tech kingpins reveals patterns in their thinking that link artificial intelligence to animism. “With artificial intelligence we’re summoning the demon,” remarked Tesla CEO Elon Musk at a 2014 MIT symposium. “You know those stories where there’s a guy with the pentagram, and the holy water, and he’s sure he can control the demon? Doesn’t work out.”

In a wonderful blog post entitled “Living with our Daemons,” Ingrid Burrington reminds us that Musk’s invocation of the supernatural is actually standard fare in the digital age. We’ve been living with so-called ghosts on the internet for a long time: wizards walk us through software installations, apps work “like magic,” and emails bounce back into our inbox from the mysterious MAILER-DAEMON. Evidently the tech world loves a good ghost story.

With that in mind, I decided to make a funny little game in p5.js in which users are prompted to “cast spells” on their internet enemies. Our assignment was to use some external media source in the sketch.

Check out my game here. 

Screen Shot 2015-11-08 at 4.36.43 PM

Screen Shot 2015-11-08 at 4.37.11 PM

The code for the game was fairly simple: I used DOM elements to create buttons and capture video from the webcam.

Here’s the full code:

I’m satisfied with the way the project turned out. Here’s a video of someone (me) interacting with the interface:

First p5.js sketch using the NYT API.

Screen Shot 2015-10-27 at 1.01.55 PM

This week’s assignment was to create a sketch that employs an external data source. I had done this in a previous assignment, where I pulled data from a CSV file to visualize the changing water levels in Lake Powell.

For this week’s project, I decided to work with the New York Times’ API to pull the NYT’s weekly Best Sellers lists. I wanted to create a simple search so that users could see what the most popular books were on their birthday. Unfortunately, the API only provides data going back to 2008, but I decided to finish the project anyway.

See the final sketch here.

Before getting too deep into the project, I decided to make sure the NYT’s API was easy to use with ample documentation. Unlike the Goodreads API (which I’d spent a few hours playing around with), the NYT API is pretty intuitive and easy to use. It has a Best Sellers API that you can use after you’ve obtained the appropriate API key.

The URL that gets called each time a user searches is this:

http://api.nytimes.com/svc/books/v2/lists/overview.json?published_date=2012-11-18&api-key=f175980bb4d8913503354046d03a662b:4:56111762
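Constructing that request in code is just string concatenation around the date the user enters. A minimal sketch (the API key below is a placeholder, not a real one):

```javascript
// Build the NYT Best Sellers overview URL for a given date (YYYY-MM-DD).
// The API key is a placeholder; substitute your own.
function buildOverviewUrl(date, apiKey) {
  return 'http://api.nytimes.com/svc/books/v2/lists/overview.json' +
    '?published_date=' + date +
    '&api-key=' + apiKey;
}

// For example, the request for November 18, 2012:
var dataString = buildOverviewUrl('2012-11-18', 'YOUR-API-KEY');
```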

Before writing any code, I had to construct the URL so that the input, which is a date (2012-11-18), gets wedged into the middle of the URL.

Screen Shot 2015-10-27 at 1.25.02 PM

Then in the function setup(), I created a button and a search bar. I also called a new function, returnData(), which pulls the data as soon as the mouse is pressed.

Screen Shot 2015-10-27 at 1.26.08 PM

The function returnData() constructs the URL as dataString and loads the JSON file. JSON, or JavaScript Object Notation, is a data format that parses neatly into JavaScript objects and arrays. The loadJSON() function takes two parameters: the URL (dataString) and a callback (gotData) that tells the sketch what to draw once you have the data.

Screen Shot 2015-10-27 at 1.38.10 PM

Finally, the function gotData() is defined. Figuring out how to get the right data from the JSON file was tricky. The JSON file provides a series of objects and arrays nested in each other. There’s a lot of information to work with for each book: The title, the author, the publish date, an image of the cover, the price, the ISBN, the publisher, the contributor, the list, etc.

I decided I just wanted my function to pull pictures of the front covers of each book. To do so, I had to first create an empty array and push the URLs for the image into the new array. I printed the array to make sure it worked!
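Flattening that nesting takes two loops: one over the lists in the response, and one over the books in each list. A sketch, assuming the response holds the cover URL at results.lists[].books[].book_image (which is how I read the JSON):

```javascript
// Collect every book-cover URL from the Best Sellers overview response.
// The shape results.lists[].books[].book_image is assumed here.
function collectCovers(data) {
  var covers = [];
  var lists = data.results.lists;
  for (var i = 0; i < lists.length; i++) {
    var books = lists[i].books;
    for (var j = 0; j < books.length; j++) {
      covers.push(books[j].book_image);
    }
  }
  return covers;
}
```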

Next, I needed to use the p5.dom library in order to get the appropriate images from the URLs. I was introduced to the function createImg(), which creates an image element from the URL passed as its parameter.

Screen Shot 2015-10-27 at 1.41.11 PM

Screen Shot 2015-10-27 at 1.02.15 PM

That’s it! I got the search working. There were several lingering issues with the sketch that I didn’t have enough time to resolve, namely:

  1. There are duplicate book covers. Because I didn’t specify which Best Seller list to display, the sketch displays all of them at once. As such, categories like hardcover_fiction and ebook_fiction will have repeats.
  2. The book covers aren’t wrapping. The books appear in a straight line because we added ‘inline-block’ to the display style, but they don’t wrap to fit within the canvas.
  3. The dates only go back to 2008. That’s the earliest data the NYT API provides.
  4. The input is awkward. Entering a date in the format YYYY-MM-DD is unwieldy. I would need to create three dropdowns or inputs so that users could enter the date more easily.
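Issue 1, the duplicates, could be filtered before display by tracking which cover URLs have already been seen. A minimal sketch:

```javascript
// Drop repeated cover URLs while preserving first-seen order.
function dedupe(urls) {
  var seen = {};
  var unique = [];
  for (var i = 0; i < urls.length; i++) {
    if (!seen[urls[i]]) {
      seen[urls[i]] = true;
      unique.push(urls[i]);
    }
  }
  return unique;
}
```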

See my full code here.

The photobooth: An interactive film using p5.dom.

In class on Thursday, we were introduced to the powerful dom library in p5. According to the p5 reference article about p5.dom, the library allows you to interact with HTML5 objects, including video, audio, text, and your webcam.

I was immediately interested in trying a first pass at making an interactive film in which the user could click a button to jump to another clip. I knew that I wanted to make some kind of supercut using p5.dom.

Here’s an unfinished, unpolished version of my sketch. I’m still working on it.

I was inspired by the interactive Bob Dylan music video for “Like a Rolling Stone,” in which users can “channel surf” as different individuals sing the lyrics to his song. I was also thinking a lot about video artist Christian Marclay’s art installation The Clock, a 24-hour montage of hundreds of film clips that make real-time references to the time of day. The clips are all tied together by one thing: the presence of a clock and/or the time. The result is an eerie, fragmentary portrait of what one day looks like in the movies.

clock2_2353636b

I also wanted to access the webcam in some way. I’m taking my cues from Paul Ford’s insanely well-written and lengthy Bloomberg piece “What Is Code,” which accesses your webcam and automatically prints a PDF certificate of completion with your picture when you have completed the 38,000-word article.

With that in mind, I wanted to combine both ideas and build a photobooth. You can switch between disparate clips of characters using a traditional photo booth in different movies by clicking the button “span time.” You can press “play” or “pause” to start or stop the film:

movieButton = createButton('play');
movieButton.position(700, 500);
movieButton.mousePressed(toggleVid);
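Under the hood, “span time” only needs to step through an array of clips and wrap around at the end; the button’s callback swaps in whatever clip this function returns. A sketch of that logic (the file names are hypothetical):

```javascript
// Hypothetical list of photobooth clips from different movies.
var clips = ['amelie.mp4', 'bigfish.mp4', 'photobooth3.mp4'];
var current = 0;

// Advance to the next clip, wrapping back to the first after the last.
function nextClip() {
  current = (current + 1) % clips.length;
  return clips[current];
}
```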

 

 

 

Water elevation in Lake Powell.

Photograph by Michael Melford, National Geographic Creative

I’ve been living in Utah for the last six years, give or take, and my friends and I have spent a lot of time exploring southern Utah national and state parks.

One of the most iconic bodies of water in the region is Lake Powell, a reservoir on the Colorado River that straddles both Utah and Arizona. Lake Powell is best known for its orange-red Navajo Sandstone canyons, clear streams, diverse wildlife, arches, natural bridges, and dozens of Native American archeological sites.

Since its 1963 creation, Lake Powell has become a major destination for the two million visitors it attracts annually. You can see why we love spending time there:

Photograph by my friend Kelsie Moore.

Photograph by my friend Kelsie Moore.

Lake Powell is the second-largest man-made reservoir in the U.S., storing 24,322,000 acre-feet of water when completely full. The lake acts as a water storage facility for the Upper Basin States (Colorado, Utah, Wyoming, and New Mexico), but it must also provide a specified annual flow to the Lower Basin States (Arizona, Nevada, and California).

Recent drought has caused the lake to shrink so much, however, that what was once the end of the San Juan River has become a ten-foot waterfall, according to National Geographic. As of 2014, Lake Powell was at 51% of capacity and nearby Lake Mead at 39%.

Drought has really reshaped the Colorado River region. According to the U.S. Drought Monitor, 11 of the past 14 years have been drought years in the southwest region, ranging from “severe” to “extreme” to “exceptional” depending on the year. You can see how drastically the landscape has changed over the past decade by taking a look at this series of natural-color images taken by the Landsat satellites.

This week in ICM, we’re learning how to use objects and arrays in JavaScript. I wanted to produce a simple data visualization that displayed historical data about the water elevation in Lake Powell since its creation in the 1960s. I also knew that I wanted to use some kind of organic sound in the visualization, exploring the p5.sound library.

See the final visualization here.

Screen Shot 2015-10-08 at 11.59.52 AM

I found a database online that contained the information I needed, and I created a CSV file with the years in one column and the elevation values in another.

At first, I envisioned an animated visualization that snaked across the screen and split into fractals as you cycled through each year in the database. I liked the idea of having the design mimic the structure of the Colorado River. Here was my initial sketch:

FullSizeRender

I started playing around with the code and was able to produce arrays of values from the CSV file. For instance, I created an array “elevation[]” that held the water elevation value for each year. I wrote some code that allowed me to cycle through the years:
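Stripped of the p5 drawing code, the cycling boils down to two parallel arrays and an index that wraps after the last row. A sketch of the idea (the values below are illustrative, not the real data set):

```javascript
// Parallel arrays as loaded from the CSV (illustrative values only).
var years = [1964, 1965, 1966];
var elevation = [3385, 3443, 3473];
var index = 0;

// Return the next { year, elevation } pair, wrapping after the last row.
function nextYear() {
  var row = { year: years[index], elevation: elevation[index] };
  index = (index + 1) % years.length;
  return row;
}
```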

Screen Shot 2015-10-08 at 12.01.54 PM

Screen Shot 2015-10-08 at 12.02.06 PM

Screen Shot 2015-10-08 at 12.02.23 PM

After getting the years to cycle chronologically, I made an animation of a white line moving across the screen. For each new year, I wanted to draw a bar extending from the white line that helped visualize how the water levels were changing from year to year.

I created a function Bar() and gave it some parameters for drawing each of the bars.

Screen Shot 2015-10-08 at 12.06.52 PM

Screen Shot 2015-10-08 at 12.06.59 PM

Screen Shot 2015-10-08 at 12.07.31 PM

After defining the function, I started the animation by calling bar.display() with the specified parameters inside function draw(). Each bar was now an object.
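Stripped to its essentials, each Bar just stores a position and a height derived from its elevation value. A p5-free sketch of the idea (the baseline and scale factor here are arbitrary choices, not the sketch’s real constants):

```javascript
// A bar whose height is scaled from a water-elevation reading.
// The 3300-ft baseline and 0.5 scale are arbitrary, for illustration.
function Bar(x, elevation) {
  this.x = x;
  this.h = (elevation - 3300) * 0.5;
}

// In the real sketch display() would call rect(); here it reports geometry.
Bar.prototype.display = function () {
  return { x: this.x, height: this.h };
};
```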

Next, I wanted to add sound to the visualization. I thought about a few different organic sounds: rainfall, rivers flowing, thunder, etc. In the end, I found a field recording of a thunderstorm in southern Utah and I immediately fell in love with the sound.

Every time a new year started, I introduced a 20-second clip of the sound so that over time you can hear the rolling thunder. I added some brown noise to sit underneath the sound file and some oscillation effects.

Screen Shot 2015-10-08 at 12.16.06 PM

When a new year starts, a new sound file plays, layering over the last sound. When the visualization finishes, the sound disconnects.

Screen Shot 2015-10-08 at 12.16.14 PM

Screen Shot 2015-10-08 at 12.16.21 PM

Here’s a video of the visualization:

Overall, I liked how this sketch turned out, but I had some major problems with this visualization.

First off, I think the data I obtained (water elevation values by year) told a much less dramatic story than I had expected. I realized as I was doing research for this blog post that during droughts, it’s not the reservoir levels that experience the most dramatic decline; rather, the outflow of water is reduced significantly. If I were to do this project again, I would spend more time researching the data set beforehand.

Second, I really didn’t love the simple animated graph I produced. Yes, it told the story in a straightforward way, but I really wanted to produce a fractal/river shape that was more visually compelling than just straight lines. I couldn’t figure out how to do it in time so I might try doing it for a future project.

I think that adding the sounds made this visualization much more interesting, and I want to keep exploring the p5.sound library in future sketches.

Junk food game redux.

Screen Shot 2015-10-01 at 1.34.33 PM

This week’s assignment was to improve upon a previous project by cleaning up and rewriting some of its code. I was excited to spend more time on the junk food game I built with Jamie last week, because there were still a number of things I thought I could improve.

Here’s the updated version of our game.

Before I’d done too much work on revising the code, Jamie told me she’d done some work and had (amazingly!) moved some things around and cleaned up a lot of the code we’d written last week. I added some cosmetic changes to the game, including a change in the animation that occurs when the mouth touches food.

Overall, here are some of the major things that were fixed in the new code:

 

  • One of the reasons the code for the original game was so long is that we’d had to write separate movement logic for each piece of food. In the updated version, Jamie defined a new function that gives every piece of food a consistent set of parameters. The new function looked like this:

function Food(f, s, p, ip) {

For each piece of food, f is the image to load, s is the speed of its fall, p is the points scored if it hits the mouth (either +10 or -10), and ip flags whether you hit the poop. The function was defined at the end of the code, after function setup() and function draw():

Screen Shot 2015-10-01 at 1.10.24 PM

In function setup() we’d already established what each of these parameters would be for each piece of food:

Screen Shot 2015-10-01 at 1.10.06 PM

That helped simplify the code considerably, since we could write one general formula for the entire game rather than separate code for each piece of food.
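In plain JavaScript, that general formula looks roughly like this (the parameter names follow the f/s/p/ip convention above; the falling logic is a sketch, not our exact code):

```javascript
// One falling food item: f = image (a name here), s = fall speed,
// p = points if it hits the mouth (+10 or -10), ip = whether it's the poop.
function Food(f, s, p, ip) {
  this.img = f;
  this.speed = s;
  this.points = p;
  this.isPoop = ip;
  this.y = 0; // start at the top of the screen
}

// Move the food down one frame; restart at the top past the bottom edge.
Food.prototype.fall = function (height) {
  this.y += this.speed;
  if (this.y > height) this.y = 0;
};
```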

  • In the older version of the game, there was a slight jitter when you clicked the initial button. The jitter was fixed by switching from the mouseIsPressed variable to the mousePressed() function, which seemed to fix the problem altogether.
  • Last week, we struggled to change the final “end” screens so that the screen cleared once the game finished. When you win or when you eat the poop, there are two different screens that are displayed. Here’s what the screen looked like last week:

Screen Shot 2015-10-01 at 12.59.41 PM

We updated the code to fix the error (with some added vulgarity because this is just for fun anyway):

Screen Shot 2015-10-01 at 1.06.50 PM

I thought we made some good improvements to the code without changing the overall structure of the game. I’ve got to give Jamie a lot of the credit for simplifying the code and giving the game a much stronger architecture. I really want to keep improving it!

Junk food heaven. Or, my first game in p5.

Screen Shot 2015-09-24 at 1.49.33 PM

This week’s assignment for computational media was to build something that uses (1) a button or slider and (2) an algorithm for animation.

I immediately knew that however this project played out, I wanted it to involve both junk food and the Beach Boys.

Play the game here!

I was assigned to work with Jamie Ruddy. Jamie and I decided within the first 30 seconds of our conversation that we wanted to build a simple game using interactive elements and animation. Because both of us were drawn to the playful Japanese emojis that we use when we text, we designed a simple game in which the player moves a tongue that catches various pieces of food: hamburgers, apples, eggplant, etc. The player gains points for eating healthy food and loses points for eating junk food.

Here’s the initial sketch I drew:

IMG_2135

There were a lot of steps to build this game. First, we had to create a welcome screen that changed to the game screen once you clicked a button.

Screen Shot 2015-09-24 at 1.32.29 PM

Then, we had to create an assets library of PNG images to load into the game. Each piece of food would drop into the screen at a random x-coordinate, with a y-coordinate that started at 0 and increased each frame.

Screen Shot 2015-09-24 at 1.57.28 PM

Then came the most difficult step in the entire process. In order to animate the emojis and make them interact with one another, we had to create an object. This was a first for me, so I spent a lot of time reviewing the p5 reference library’s explanation of how to create an object.

Within the object, we created a function Food() that displayed each item of food, made it drop at a different rate than the other food, and checked whether it had touched the tongue. It also checked whether the food had hit the bottom of the screen (y = height); if so, the food restarted at the top of the screen (y = 0).

From there, we needed to ensure the score increased or decreased based on which object the tongue touched. For instance, the apple adds 10 points and the hamburger subtracts 10. Get to 50 points and you win.
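The scoring reduces to a small state update: add the food’s point value on contact, win at 50, lose instantly on the poop. A sketch of that logic:

```javascript
// Apply one caught item to the game state and return the new state.
// food.points is +10 or -10; food.isPoop ends the game immediately.
function applyCatch(state, food) {
  if (food.isPoop) {
    return { score: state.score, status: 'lost' };
  }
  var score = state.score + food.points;
  return { score: score, status: score >= 50 ? 'won' : 'playing' };
}
```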

Screen Shot 2015-09-24 at 1.23.28 PM

And for a bonus surprise: Eat the poop emoji and you just straight up lose the entire game (because, gross).

Screen Shot 2015-09-24 at 1.22.29 PM

To make the game more fun, I added the Beach Boys’ “Kokomo” to the sketch. It was simple to call the mp3 in the library by using the preload() function.

Screen Shot 2015-09-24 at 1.15.51 PM

Success! There’s a lot more I’d like to do with this game. For instance, I’d like a location-based animation whenever the food hits the tongue (right now the whole screen flashes white).

Here’s a video of how the game works:

 

 

Learning to animate in p5.

Screen Shot 2015-09-17 at 1.22.58 AM

Last week we drew static drawings from primitive shapes. This week we were assigned the task of animating our sketches in p5 using a set of new functions, including frameRate(), rotate(), and translate(), and new variables like mouseX and mouseY.

To complete the assignment, our sketch needed to contain: (1) One element that changed over time independent of the mouse; (2) One element controlled by the mouse; and (3) One element that’s different every time you run the sketch.

I’ve been a cyclist for a few years now. In Salt Lake City, where I lived for the past two years, I loved biking up into the canyon in the late afternoon before sunset. That experience was the inspiration behind my sketch this week, in which a bike rides into the mountains.

See my final sketch here.

Here was my initial sketch:

IMG_2029
I started planning out the sketch in Adobe Illustrator to get a better sense of the composition of the drawing.

mountain

My plan was to make the sun and its rays rotate in a circle independent of the mouse. That actually proved a lot more difficult than I’d anticipated. The action required me to use a new set of functions, including push(), translate(), rotate(), and pop(). Here are the lines of code that drew the sun:

push();
translate(350, 100);
rotate(rads);

for (var d = 0; d < 10; d++) {
  noStroke();
  fill(206 + d, 188 + d, 122 + d);
  ellipse(0, 0, 175, 175);

  stroke(206 + d, 188 + d, 122 + d);
  strokeWeight(3);

  for (var i = 0; i < 36; i++) {
    line(0, 0, x, y);
    rotate(PI / 20);
  }
}
pop();

rads = rads + 1.57;

I started layering on primitive shapes for the mountains and bicycle using the beginShape() function we learned in class. I decided to make the bicycle a different color every time you load the page by declaring var r, var g, and var b and then defining them in setup():

r = 230;
g = random(100,200);
b = random(100,200);

Then the most difficult task lay ahead: getting the bicycle to move at a variable speed controlled by the mouse. I tried a lot of different things, including making an object out of the bicycle using some new tricks we learned in JavaScript, but ultimately this is the code that worked out:

var speedX = 1;

var speedY = 1;

//bicycle

stroke(r,g,b,255);
strokeWeight(5);
noFill();

ellipse(225+speedX,520-speedY,100,100);
ellipse(400+speedX,490-speedY,100,100); 
ellipse(225+speedX,520-speedY,10,10); 
ellipse(400+speedX,490-speedY,10,10); 
quad(225+speedX,520-speedY,305+speedX,505-speedY,350+speedX,435-speedY,265+speedX,450-speedY); 
line(260+speedX,445-speedY,305+speedX,505-speedY); 
line(400+speedX,490-speedY,352+speedX,435-speedY); 
ellipse(305+speedX,505-speedY,30,30);
quad(250+speedX,450-speedY,245+speedX,440-speedY,275+speedX,440-speedY,275+speedX,445-speedY);

speedX+=map(mouseX,0,width,1,5); 
speedY+=0.15;

What’s happening above: I’ve created two new variables, speedX and speedY, that determine the velocity of the bike in relation to the mouse. The last two lines establish the range of speeds. When mouseX is high (i.e., the mouse is on the far right of the sketch), the bike moves more quickly.
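map() is doing the heavy lifting here: it linearly rescales mouseX from the range [0, width] to [1, 5]. The formula behind p5's map() is simple to write out:

```javascript
// Rescale n from [start1, stop1] to [start2, stop2], as p5's map() does.
function mapRange(n, start1, stop1, start2, stop2) {
  return start2 + (stop2 - start2) * ((n - start1) / (stop1 - start1));
}

// With the mouse at the far right of an 1100px-wide canvas, the bike gains
// 5px of speed per frame; at the far left, only 1px.
```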

Here’s a video of the animation.

Full code below.

var fr = 30;
var x = 1200;
var y = 0;
var rads = 0; // declare variable rads, angle at which sun will rotate
var speedX = 1; //declare variable speedX
var speedY = 1; //declare variable speedY
var r;
var g;
var b;
function setup() {
createCanvas(1100,600);
background(252,240,232);
r = 230;
g = random(100,200);
b = random(100,200);

}

function draw() {

//draw sun and rays and make them rotate.
push();
translate(350,100);
rotate(rads);

for (var d = 0; d < 10; d ++){
noStroke();
fill(206+d,188+d,122+d);
ellipse(0,0,175,175);

stroke(206+d,188+d,122+d);
strokeWeight(3);

for (var i = 0; i < 36; i ++) {
line(0,0,x,y);
rotate(PI/20);

}
}
pop();

rads = rads + 1.57;

//mountains

//mountains layer one
stroke(136,167,173);
fill(136,167,173);
beginShape();
vertex(0,600);
vertex(0,400);
vertex(200,300);
vertex(300,350);
vertex(400,250);
vertex(500,325);
vertex(600,100);
vertex(750,200);
vertex(875,60);
vertex(1000,150);
vertex(1100,100);
vertex(1100,600);
endShape();

//mountains layer two
stroke(92,109,120,100);
fill(92,109,120,100);
beginShape();
vertex(0,600);
vertex(0,400);
vertex(275,375);
vertex(350,400);
vertex(425,375);
vertex(575,375);
vertex(800,200);
vertex(900,300);
vertex(1100,250);
vertex(1100,600);
endShape();

//mountains layer three
stroke(92,109,112,200);
fill(92,109,112,200);
beginShape();
vertex(0,600);
vertex(0,550);
vertex(500,400);
vertex(575,425);
vertex(600,400);
vertex(800,400);
vertex(875,300);
vertex(925,375);
vertex(1100,300);
vertex(1100,600);
endShape();

//mountains layer four
stroke(213,207,225,25);
fill(213,207,225,25);
triangle(0,600,1100,425,1100,600);

//bicycle

stroke(r,g,b,255);
strokeWeight(5);
noFill();

ellipse(225+speedX,520-speedY,100,100); //bike wheel
ellipse(400+speedX,490-speedY,100,100); //bike wheel
ellipse(225+speedX,520-speedY,10,10); //inner bike wheel
ellipse(400+speedX,490-speedY,10,10); //inner bike wheel
quad(225+speedX,520-speedY,305+speedX,505-speedY,350+speedX,435-speedY,265+speedX,450-speedY); //frame
line(260+speedX,445-speedY,305+speedX,505-speedY); //frame
line(400+speedX,490-speedY,352+speedX,435-speedY); //frame
ellipse(305+speedX,505-speedY,30,30); //frame
quad(250+speedX,450-speedY,245+speedX,440-speedY,275+speedX,440-speedY,275+speedX,445-speedY);

speedX+=map(mouseX,0,width,1,5); //x coordinate of the mouse determine speed of the bike
speedY+=0.15;
}

 

Portrait of a classmate with p5.js.

This week’s project was an exercise in patience because, to paraphrase the poet Mary Ruefle, I’m just a handmaiden with a broken urn when it comes to writing code. I felt like I learned a lot, though, and I’m slowly developing a process for these assignments.

In our first class, we were introduced to the basic functions in p5.js, function setup() and function draw(). Our assignment this week was to draw a portrait of one of our classmates using only primitive shapes in p5.js (translation: no complex shapes, no animation, no interactive elements).

Here’s a link to the final portrait.

My process was simple: (1) Sketch out the drawing in Illustrator; (2) Create a quick outline in p5.js; and (3) Code like hell.

Before writing a single line of code, I found it helpful to make a sketch in Illustrator so I had a better sense of what kinds of shapes I'd be drawing. For color, I used the site Coolors to generate RGB values matching the scheme I envisioned.
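One thing I'd do differently: store the Coolors palette as named constants instead of retyping RGB triples everywhere. A small sketch of that idea, using values that appear in my portrait code below (the constant names are my own after-the-fact labels, not part of the original):

```javascript
// Palette values from the portrait code; the names are illustrative.
const SHIRT_TEAL = [33, 131, 128];
const SKIN_TAN   = [230, 202, 171];
const HAIR_BROWN = [161, 131, 87];

// In p5.js, a helper like this keeps stroke and fill in sync,
// since the portrait always sets them to the same color:
function setColor(rgb) {
  stroke(rgb[0], rgb[1], rgb[2]);
  fill(rgb[0], rgb[1], rgb[2]);
}
```

Then `setColor(SKIN_TAN);` reads much better than a bare `fill(230,202,171);` scattered through the sketch.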

Here’s my initial sketch and the outline of shapes I planned to use:

Sketch

Sketch---shapes

From there, I started coding.

Overall, I wanted the portrait to be symmetrical, so I did a lot of math in my head to calculate the x and y coordinates of each shape I drew. Just about all the shapes were various combinations of quad(), rect(), ellipse(), triangle(), and line(). There was a lot of trial and error, since I was pretty much guessing at where different points would fall on the canvas.
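A trick that would have saved some of that mental math (a sketch of the idea, not something in my actual code): reflect each x coordinate across the face's vertical center line instead of recomputing it by hand.

```javascript
// Reflect an x coordinate across a vertical axis.
// The portrait's face is centered near x = 250.
function mirrorX(x, axis) {
  return 2 * axis - x;
}

// e.g. the left glasses lens sits at x = 217;
// its mirror lands at x = 283, close to the
// right lens I hand-placed at x = 284.
```

With a helper like this, every right-side point is derived from its left-side twin, and nudging one side automatically nudges the other.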

That proved to be a massive headache since I was constantly trying to remember which line of code corresponded to which shape in the drawing and which coordinate corresponded to which point on the shape. It’s no big deal when you have three shapes, but when you are drawing 30+ shapes it can get messy.

I found it helpful to organize my lines of code with comment subheadings: //face, //hair, //shirt, and so on. In the future I think I'll create a much more detailed system for labeling shapes in my code.

I quickly realized that the portrait I’d sketched out at the beginning of the project was going to be an ambitious undertaking with the limited tools that were available to me in p5.js (since we could only use functions that drew primitive shapes). It took me longer than I’d anticipated, but I found that I was able to add a lot of detail to the portrait despite these limitations.

Here’s a screenshot of the final portrait:

Portrait screenshot

Overall, the project was an excellent exercise in learning the basics of p5. Here’s my full code:

function setup() {
createCanvas(1000,1000);
background(132,220,207);
}

function draw() {

//shirt
stroke(33,131,128);
fill(33,131,128);
quad(50,450,80,350,420,350,450,450);
quad(80,350,190,300,310,300,420,350);
stroke(130,150,133);
fill(130,150,133);
triangle(190,300,290,400,220,395);
triangle(310,300,290,400,330,350);
stroke(60,132,131);
fill(60,132,131);
ellipse(232,382,10,10);
stroke(97,61,193);
fill(97,61,193);
quad(120,330,160,312,190,450,155,450);
quad(400,337,350,317,375,450,400,450);

//face
stroke(230,202,171);
fill(230,202,171);
triangle(190,300,310,300,290,400);
quad(190,300,200,270,300,270,310,300);
quad(180,230,320,230,280,300,220,300);
rect(180,160,140,70);
quad(180,160,320,160,300,90,200,90);

//hair
stroke(161,131,87);
fill(161,131,87);
quad(180,230,320,230,280,300,220,300);
triangle(180,230,180,190,199,230);
triangle(320,230,320,190,301,230);
quad(170,185,180,185,200,130,177,115);
quad(200,130,177,115,210,65,267,55);
quad(200,130,267,55,290,90,270,90);
quad(267,55,270,90,300,110,320,100);
triangle(270,90,250,110,260,90);
quad(300,110,320,100,330,140,310,150);
quad(310,150,330,140,320,185,315,185);
triangle(300,105,299,140,315,170);
triangle(200,130,196,155,180,175);
rect(207,149,23,4);
triangle(207,149,207,153,199,160);
rect(271,149,23,4);
triangle(294,149,294,153,303,160);

//face #2
stroke(230,202,171);
fill(230,202,171);
ellipse(175,185,26,50);
ellipse(325,185,26,50);
triangle(200,229,240,229,220,250);
triangle(300,229,260,229,280,250);
ellipse(250,260,33,11);

//facial features.
stroke(133,138,227,100);
strokeWeight(1.2);
fill(222,255,240);
quad(226,168,210,168,203,177,233,177);
quad(274,168,290,168,297,177,267,177);
stroke(86,88,87,190);
fill(86,88,87,190);
ellipse(223,172,8,7);
ellipse(287,172,8,7);
stroke(222,242,200);
fill(222,242,200);
ellipse(225,170,3,2);
ellipse(289,170,3,2);
stroke(133,138,227,0);
fill(133,138,227,110);
quad(203,177,233,177,226,181,210,181);
quad(297,177,267,177,274,181,290,181);
quad(243,230,259,230,267,223,235,223);
quad(267,223,235,223,242,218,260,218);
stroke(133,138,227,100);
strokeWeight(2.2);
line(258,217,258,163);
line(228,181,210,200);
line(272,181,290,200);
line(290,310,280,370);
line(230,313,260,350);

//hands
stroke(230,202,171);
fill(230,202,171);
rect(136,410,68,13);
rect(138,425,70,13);
rect(136,437,70,13);

//hair #2
stroke(161,131,87);
fill(161,131,87);
triangle(186,185,189,163,175,160);
triangle(314,185,311,163,325,160);
triangle(175,140,175,123,165,145);
triangle(210,67,190,79,210,75);
triangle(330,138,338,143,325,120);

//glasses
stroke(71,44,27,230);
strokeWeight(2.7);
noFill();
ellipse(217,175,44,24);
ellipse(284,175,44,24);
line(239,175,262,175);
line(195,175,182,170);
line(306,175,318,170);

}