In class on Thursday, we were introduced to the powerful p5.dom library. According to the p5 reference, the library allows you to interact with HTML5 objects, including video, audio, text, and your webcam.

I was immediately interested in making a first pass at an interactive film in which the user could click a button to jump to another clip. I knew that I wanted to make some kind of supercut using p5.dom.

Here’s an unfinished, unpolished version of my sketch. I’m still working on it.

I was inspired by the Bob Dylan music video for “Like a Rolling Stone,” in which users could “channel surf” as different individuals sing the lyrics to the song. I was also thinking a lot about video artist Christian Marclay’s installation The Clock, a 24-hour montage of hundreds of film clips that make real-time references to the time of day. The clips are all tied together by one thing: the presence of a clock and/or the time. The result is an eerie, fragmentary portrait of what one day looks like in the movies.


I also wanted to access the webcam in some way. I’m taking my cues from Paul Ford’s insanely well-written and lengthy Bloomberg piece “What Is Code,” which accesses your webcam and automatically prints a PDF certificate of completion with your picture when you finish the 38,000-word article.

With that in mind, I wanted to combine both ideas and build a photo booth. You can switch between disparate clips of characters using photo booths in different movies by clicking the “span time” button. You can also press the “play”/“pause” button to start or stop the film:

movieButton = createButton('play');
movieButton.position(700, 500);
movieButton.mousePressed(toggleVid);
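
Here’s a rough sketch of how the rest of that interaction could be wired up with p5.dom. The clip filenames, the “span time” button position, and the nextClip() helper are placeholders, not the finished sketch:

// Rough sketch of the photobooth interaction (clip filenames are placeholders).
let clips = ['clip1.mp4', 'clip2.mp4', 'clip3.mp4'];
let currentClip = 0;
let vid;
let movieButton;
let playing = false;

function setup() {
  noCanvas();
  vid = createVideo(clips[currentClip]); // p5.dom video element
  vid.size(640, 480);

  let spanButton = createButton('span time');
  spanButton.position(700, 450);
  spanButton.mousePressed(nextClip);

  movieButton = createButton('play');
  movieButton.position(700, 500);
  movieButton.mousePressed(toggleVid);
}

// Jump to the next photo booth clip in the list.
function nextClip() {
  currentClip = (currentClip + 1) % clips.length;
  vid.remove(); // throw away the old video element
  vid = createVideo(clips[currentClip]);
  vid.size(640, 480);
  if (playing) vid.loop();
}

// Toggle between playing and pausing the current clip.
function toggleVid() {
  if (playing) {
    vid.pause();
  } else {
    vid.loop();
  }
  playing = !playing;
}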

Photograph by Eric Rose.

In keeping with the general tenor of my physical computing projects, I will continue to look at creative ways to provoke interactions with water.

I want a lot of my future projects to explore cymatics – a subset of modal vibrational phenomena in which a surface is vibrated and different patterns emerge in some kind of medium (paste, liquid, etc.). Cymatics is essentially a process by which sound waves are made visible. I like the idea of measuring a person’s heart rate and then visualizing that vibration pattern in a liquid, for instance.

The more I thought about this midterm project, though, the more I was struck by the delightful feeling of running one’s fingers through a steady stream of water. I want to build the entire interaction around that tactile experience.

So here’s my proposal: I plan to build a water harp. This is the initial sketch of the project:


The harp will consist of a rounded plexiglass board that water flows over, creating a waterfall effect. The water will hit a series of eight sensors (either moisture sensors, photosensors, or another conductive material). A water pump will carry the water back up to the top.

Each sensor will be paired with a sound of a different frequency that will play from the computer using p5.js. I’m still trying to decide what kind of sound will be best suited to this project. It could be a series of different noises triggered by each sensor (rainfall, thunder, rivers, etc.). I was also thinking a lot about using human voices singing at different pitches that would then harmonize with each other.

When the participant runs his/her hand through the waterfall, it will create gaps in the water, triggering different sensors. Overall, I want the experience to be as tactile and delightful as possible.
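
On the p5.js side, a minimal sketch of the sensor-to-sound mapping might look something like this. The pitches, the threshold, and the assumption that the eight readings arrive as an array (say, over serial from an Arduino) are all placeholders:

// Eight oscillators, one per sensor; the pitches are placeholder choices.
let pitches = [220, 247, 262, 294, 330, 349, 392, 440];
let oscillators = [];
let threshold = 500; // assumed cutoff between "water flowing" and "gap"

function setup() {
  createCanvas(400, 400);
  for (let i = 0; i < pitches.length; i++) {
    let osc = new p5.Oscillator('sine');
    osc.freq(pitches[i]);
    osc.amp(0); // start silent
    osc.start();
    oscillators.push(osc);
  }
}

// Call this whenever a fresh set of eight sensor readings arrives.
function updateHarp(sensorValues) {
  for (let i = 0; i < oscillators.length; i++) {
    if (sensorValues[i] < threshold) {
      // A hand blocking this stream drops the reading, so fade this note in.
      oscillators[i].amp(0.3, 0.1);
    } else {
      oscillators[i].amp(0, 0.3);
    }
  }
}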

Photograph by Michael Melford, National Geographic Creative.

I’ve been living in Utah for the last six years, give or take, and my friends and I have spent a lot of time exploring southern Utah national and state parks.

One of the most iconic bodies of water in the region is Lake Powell, a reservoir on the Colorado River that straddles both Utah and Arizona. Lake Powell is best known for its orange-red Navajo Sandstone canyons, clear streams, diverse wildlife, arches, natural bridges, and dozens of Native American archeological sites.

Since its creation in 1963, Lake Powell has become a major destination, attracting two million visitors annually. You can see why we love spending time there:

Photograph by my friend Kelsie Moore.

Photograph by my friend Kelsie Moore.

Lake Powell is the second-largest man-made reservoir in the U.S., storing 24,322,000 acre feet of water when completely full. The lake acts as a water storage facility for the Upper Basin States (Colorado, Utah, Wyoming, and New Mexico) but it must also provide a specified annual flow to the Lower Basin States (Arizona, Nevada, and California).

Recent drought has caused the lake to shrink so much, however, that what once was the end of the San Juan River has become a ten-foot waterfall, according to National Geographic. As of 2014, reservoir capacities in Lake Powell were at 51% and the nearby Lake Mead was at 39%.

Drought has really reshaped the Colorado River region. According to the U.S. Drought Monitor, 11 of the past 14 years have been drought years in the southwest region, ranging from “severe” to “extreme” to “exceptional” depending on the year. You can see how drastically the landscape has changed over the past decade by taking a look at this series of natural-color images captured by the Landsat satellites.

This week in ICM, we’re learning how to use objects and arrays in JavaScript. I wanted to produce a simple data visualization that displayed historical data about the water elevation in Lake Powell since its creation in the 1960s. I also knew that I wanted to use some kind of organic sound in the visualization, exploring the p5.sound library.

See the final visualization here.


I found a database online that contained the information I needed, and I created a CSV file with the year in one column and the elevation values in another.

At first, I envisioned an animated visualization that snaked across the screen and split into fractals as you cycled through each year in the database. I liked the idea of having the design mimic the structure of the Colorado River. Here was my initial sketch:


I started playing around with the code and was able to produce an array of values from the CSV file. For instance, I created an array elevation[] that pulled the water elevation value for a given year. I wrote some code that allowed me to cycle through the years.

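A minimal sketch of that loading-and-cycling logic, assuming a two-column CSV (the file name “powell.csv” and the column headers here are placeholders), looks something like this:

// Load the Lake Powell data and step through it one year at a time.
let table;
let years = [];
let elevation = [];
let currentYear = 0;

function preload() {
  table = loadTable('powell.csv', 'csv', 'header');
}

function setup() {
  createCanvas(800, 400);
  frameRate(2); // advance a couple of years per second
  for (let r = 0; r < table.getRowCount(); r++) {
    years.push(table.getNum(r, 'year'));
    elevation.push(table.getNum(r, 'elevation'));
  }
}

function draw() {
  background(0);
  fill(255);
  text(years[currentYear] + ': ' + elevation[currentYear] + ' ft', 20, 30);
  currentYear = (currentYear + 1) % years.length;
}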

After getting the years to cycle chronologically, I made an animation of a white line moving across the screen. For each new year, I wanted to draw a bar extending from the white line that helped visualize how the water levels were changing from year to year.

I created a Bar() function and gave it some parameters for drawing each of the bars.


After defining the function, I started the animation by calling bar.display() with the specified parameters inside the draw() function. The bars were now objects.
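
A rough sketch of that Bar object, with placeholder parameters for position and scale, might look like this:

// Placeholder version of the Bar object used to draw one year's elevation.
function Bar(x, elev, minElev, maxElev) {
  this.x = x;
  // Map the elevation value to a bar height in pixels.
  this.h = map(elev, minElev, maxElev, 0, 200);

  this.display = function() {
    stroke(255);
    line(this.x, height / 2, this.x, height / 2 - this.h);
  };
}

// Inside draw(), for the current year (the elevation range here is a guess):
// let bar = new Bar(xPos, elevation[currentYear], 3350, 3700);
// bar.display();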

Next, I wanted to add sound to the visualization. I thought about a few different organic sounds: rainfall, rivers flowing, thunder, etc. In the end, I found a field recording of a thunderstorm in southern Utah and I immediately fell in love with the sound.

Every time a new year started, I introduced a 20-second clip of the sound so that over time you can hear the rolling thunder. I added some brown noise to sit underneath the sound file and some oscillation effects.


When a new year starts, a new sound file plays, layering over the last sound. When the visualization finishes, the sound disconnects.
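
A minimal sketch of that layering, assuming the field recording is saved as “thunder.mp3” (the file name and amplitude values are placeholders), might look like this:

// Layer a new copy of the thunder recording at the start of each year,
// over a quiet bed of brown noise.
let thunder;
let brownNoise;

function preload() {
  thunder = loadSound('thunder.mp3');
}

function setup() {
  brownNoise = new p5.Noise('brown');
  brownNoise.amp(0.1);
  brownNoise.start();
}

// Called whenever the animation reaches a new year.
function startYear() {
  thunder.play(); // each call layers over the copies still playing
}

// Called when the visualization reaches the final year.
function endVisualization() {
  thunder.stop();
  brownNoise.stop();
}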

Here’s a video of the visualization:

Overall, I liked how this sketch turned out, but I had some major problems with this visualization.

First off, I think the data I obtained (water elevation values by year) told a much less dramatic story than I had expected. As I was doing research for this blog post, I realized that during droughts it isn’t the reservoir’s elevation that experiences the most dramatic decline; rather, the outflow of water is reduced significantly. If I were to do this project again, I would spend more time researching the data set I wanted to use.

Second, I really didn’t love the simple animated graph I produced. Yes, it told the story in a straightforward way, but I really wanted to produce a fractal/river shape that was more visually compelling than just straight lines. I couldn’t figure out how to do it in time so I might try doing it for a future project.

I think that adding the sounds made this visualization much more interesting, and I want to keep exploring the p5.sound library for future sketches.



I’ve spent about six years living in Utah, where the climate is arid and drought is a constant concern. According to the U.S. Drought Monitor, about a quarter of the state continues to experience severe (D2) level drought. The region often doesn’t receive the rainfall it needs to keep its reservoirs at capacity.

In keeping with my interest in humans’ relationship with their physical environment and ecological processes, I decided that I wanted my next project to collect information about precipitation.

Using a rain gauge that my friend Joao Costa had lent me, I was able to measure the accumulation of rainfall over time.

A rain gauge is a self-emptying tipping bucket that collects and dispenses water. It allows you to display daily and accumulated rainfall as well as the rate of rainfall. The gauge essentially acts as a switch, making contact when a specified amount of water enters the bucket.

Rain collects at the top of the bucket, where a funnel channels the precipitation into a small seesaw-like container. After 0.011 inches (0.2794 mm) of rain has fallen, the lever tips and dumps the collected water. Each tip sends an electrical signal back to the Arduino, where a digital counter records it as an interrupt input.


Once I decided that I wanted to measure precipitation using the rain gauge, I did some research into the particular gauge I was using. According to its datasheet, the rain gauge connects to an adaptor, and only the two center conductors are used. I hooked up the rain bucket like so:


What is really powerful about the rain gauge is that it can measure cumulative rainfall over a period of time. I decided to connect an LCD screen to the Arduino as an output in order to display the amount of precipitation.

Connecting the LCD screen was very difficult because of the number of wires that needed to be connected to the screen (eight!). The LCD screen also required that I set up a potentiometer to control the brightness of the screen, so I added that to the breadboard.


Once the setup was complete, I wrote the Arduino program that would display the precipitation information I wanted. I figured out how to set up the LCD screen to display text on a single line.

Next, I needed to figure out how to display the actual amount of precipitation that had fallen into the bucket. To do so, I created a variable “rainTipperCounter.” Every time the see-saw in the gauge filled with water and tipped, the count went up by one.


I knew that each time the count increased, 0.011 inches of rainfall had collected in the rain gauge. I programmed the LCD to display the rainTipperCounter, multiplied by 0.011, so that the actual amount of accumulated precipitation was displayed.
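
A minimal sketch of that counting-and-display logic (the pin numbers and LCD wiring here are placeholders, not the exact hookup) looks something like this:

// Count bucket tips with an interrupt and show cumulative rainfall on the LCD.
#include <LiquidCrystal.h>

LiquidCrystal lcd(12, 11, 7, 6, 5, 4); // placeholder LCD wiring
const byte rainPin = 2;                // interrupt-capable pin on the Uno

volatile unsigned long rainTipperCounter = 0;

void bucketTipped() {
  rainTipperCounter++; // one tip = 0.011 inches of rain
}

void setup() {
  lcd.begin(16, 2); // 16x2 character display
  lcd.print("Rainfall (in):");
  pinMode(rainPin, INPUT_PULLUP); // the gauge simply closes a switch
  attachInterrupt(digitalPinToInterrupt(rainPin), bucketTipped, FALLING);
}

void loop() {
  float inches = rainTipperCounter * 0.011; // convert tips to inches
  lcd.setCursor(0, 1); // second line of the display
  lcd.print(inches, 3); // show three decimal places
  delay(500);
}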



And just like that, I’d set up a simple rain gauge that tracked cumulative precipitation. It wasn’t raining outside today so I had to test the switch by pouring water into the gauge. Here’s how the final product turned out:

Measuring cumulative rainfall with a rain gauge from Rebecca Ricks on Vimeo.


“Art to me is our expression of being in love with (and fearing for) our world — our efforts to capture and predict the patterns, colours, movement we see around us,” says environmental strategist Dekila Chungyalpa.

Those words were written to coincide with a massive art installation piece Ice Watch by Icelandic artist Olafur Eliasson. For the piece, Eliasson obtained huge chunks of Arctic ice and installed them in front of Copenhagen’s city hall, where they slowly melted, a powerful reminder to the public of the reality of climate change.

I’ve been thinking a lot lately about ways in which we can sonify many of the natural geological processes that are simply not audible to human ears: the sound of water levels gradually dropping, the sound of tectonic plates sliding, the sounds of topography and mountain ranges, the sounds of glaciers melting, for instance. These objects have their own internal auditory patterns and acoustics.

Eliasson’s installation, along with Paul Kos’s “The Sound of Ice Melting,” inspired my analog assignment this week. I wanted to generate sound from the process of ice chunks melting.

To do so, I first purchased a simple rain/moisture sensor that would function as the analog input in the circuit. The sensor is essentially a variable resistor: the amount of resistance varies based on how much water or moisture is present.


The board that senses the presence of moisture.

Using the laser cutter (first time, yeah!) I cut a piece of acrylic to mount the sensor on. I had to melt hot glue onto some of the wires to make sure the water wouldn’t interfere with the electrical connections.

I connected the sensor to the Arduino board via analog pin #A0 and then connected the Piezo as a digital output on digital pin #3.


After wiring up the board, I needed to test the sensor to see what range of signal values I would be dealing with. This is the code that printed those values, which ranged from 0 to 1023.


Eventually, I knew that I wanted the sounds emitted by the dripping ice to create a sense of urgency or anxiety. To do so, I needed to change the code so that the tone sped up.

After determining the range of values available, I decided to write some additional code that would change the delay between tones based on how much water was present. When there was only a tiny droplet, the Piezo would buzz at a slow rhythm. As more water dripped onto the sensor, the rhythm would speed up.


Finally, I froze water in different sized chunks to create the ice. I put the chunks in fish netting and dangled them above the sensor, letting them melt at room temperature. Gradually the sounds sped up as the ice melted more quickly.


Here is the final product! 

The sound of melting ice. from Rebecca Ricks on Vimeo.

Full code below.

#define rainSensor A0
#define buzzer 3

void setup() { // analog input from rainSensor, digital output to the buzzer
  Serial.begin(9600);
  //pinMode(rainSensor, INPUT); // not needed for analogRead()
  pinMode(buzzer, OUTPUT);
}

void loop() {
  int sensorValue = analogRead(rainSensor); // read the analog input from pin A0
  Serial.println(sensorValue);
  delay(100);

  // Map the amount of moisture (values from 0 to 1023) to the buzzer's rhythm.
  if (sensorValue == 0) {
    tone(buzzer, 440);
  } else if (sensorValue > 0 && sensorValue < 300) {
    tone(buzzer, 440);
    delay(20);
    noTone(buzzer);
    delay(20);
  } else if (sensorValue > 300 && sensorValue < 600) {
    tone(buzzer, 440);
    delay(50);
    noTone(buzzer);
    delay(50);
  } else if (sensorValue > 600 && sensorValue < 900) {
    tone(buzzer, 440);
    delay(100);
    noTone(buzzer);
    delay(100);
  } else if (sensorValue > 900 && sensorValue < 1023) {
    tone(buzzer, 440);
    delay(200);
    noTone(buzzer);
    delay(200);
  }
}

Overall, I think I had hoped to do a lot more with this project. For instance, I would have liked to change the setup so that the melting ice interacted with the sensor in a more interesting way (beyond just dangling above it).


This week, we learned how to build a simple circuit using the Arduino Uno device. We set up a circuit in which a push button functioned as the switch, causing an LED light to turn on and off. Simple enough.

For homework this week, we were challenged to design our own switch. There were so many possibilities that I spent a good amount of time thinking about what kind of human interaction I wanted to trigger the switch. There are endless ways that we interact with our environment: poking, tapping, blowing, touching, and blinking, for instance.

Ultimately I decided that it would be interesting if the interaction were watering a plant. The action would complete the circuit, producing an outcome that would act as an alarm that the plant had received adequate water. In other words, I wanted to create a system where a plant starts yelling at you when you’ve overwatered it.


At first, I imagined setting up some type of pulley system in which the weight of the water pulled the pot down to the ground and connected a piece of metal on the bottom of the pot with a piece of metal on the ground.


I quickly realized that a much simpler solution was available. Since water is conductive (to some degree – it does have resistance), I decided that I would put the plant on a plate and let the water trickle out of the bottom of the pot. The ends of the wires would sit on the plate, and when a puddle formed, it would touch both wires and complete the circuit.



It took some trial and error, but I was able to complete a basic circuit using the plant watering action. One obstacle was that I had to dissolve some salt into the water in order for the water to be conductive enough to allow the electrical current to flow through the puddle (that’s probably a gardening 101 no-no). Another obstacle was that the wires kept oxidizing, which meant I had to keep snipping off the tops for each trial.

I jumped ahead in the lesson and learned how to set up a circuit that communicated with the Arduino software so that there was a digital signal sent back to a Piezo. To do so, I connected the wires so that the digitalRead() function would get its input from pin #7 and then the output would come out of pin #3, which was connected to the Piezo.


The result was that the Piezo played some sounds. I wrote some commands that played different tones on the Piezo. I really wanted to play the song “Psycho Killer” by the Talking Heads, so I wrote some code that produced a series of tones that corresponded to the chorus of the song.

The tone() function takes the pin number and a frequency in hertz, which corresponds to a musical note. The delay() function creates the pauses between the notes.

Here’s what the full code looked like:

int sound;

void setup() {
  pinMode(7, INPUT);
  pinMode(3, OUTPUT);
}

void loop() {
  sound = digitalRead(7);
  if (sound == 1) {
    tone(3, 440);
    delay(500);
    tone(3, 440);
    delay(500);
    tone(3, 440);
    delay(500);
    tone(3, 440);
    delay(1000);
    tone(3, 413);
    delay(500);
    tone(3, 413);
    delay(500);
    tone(3, 413);
    delay(1000);

    tone(3, 390);
    delay(500);
    tone(3, 440);
    delay(500);
    tone(3, 390);
    delay(500);
    tone(3, 520);
    delay(1000);
    tone(3, 420);
    delay(100);
    tone(3, 390);
    delay(500);
    tone(3, 420);
    delay(500);
    tone(3, 390);
    delay(500);
    tone(3, 520);
    delay(1000);
    tone(3, 390);
    delay(500);

  } else {
    noTone(3);
  }
}

Honestly, I wasn’t entirely successful at playing the Talking Heads song because it never actually ran the way I imagined it would, but I did succeed in getting the Piezo to play a really funny sound when the circuit was completed.

Here is the final product:

Watering a plant with Arduino from Rebecca Ricks on Vimeo.


The interior of a Baptist church on Fulton Avenue.

I moved to Bed-Stuy, Brooklyn three weeks ago, knowing very little about the religious landscape of my neighborhood. On my first night, I realized that there is a synagogue on my block in which the congregation is composed entirely of Ethiopian Jews. Three blocks away is an unusual Egyptian temple, where a small sect of black Muslims who call themselves the Nuwaubian Nation (a group that developed concurrently with the Nation of Islam) continue to worship. On my second night, I heard the call to prayer sounding while I was at dinner a few blocks away. Religion is part of the culture, and it’s what holds the neighborhood together.

With that in mind, I was interested in exploring how the different religious communities in Bed-Stuy meet, worship, interact, sing, and pray. I decided to call these liminal spaces religious ecotones. In ecology, an ecotone is a transition zone between two biomes. In other words, it is the space in which two communities meet and integrate.

In our audio recording, Katie and I sought to capture what worship sounds like across religions in Bed-Stuy. Each congregation has its own form of “mass” – a meeting in which the members pray and listen to their religious leaders speak. We captured some sounds from a Catholic church, a mosque, a Baptist church, and a Jewish friend during their different periods of worship.

The result was a patched-together narrative of what worship sounds like. Here’s the outcome:

Interestingly, I found that this project proved to be a good complement to this week’s reading, the short sci-fi piece “The Machine Stops” by E.M. Forster. The story explores a post-apocalyptic world in which humans are dependent on technology for all their needs, including social interaction. People in this world have a huge network of friends, many of whom they haven’t met. The story effectively predicts the rise of online culture and social media, as our generation increasingly flees traditional centers of community (such as religious spaces) and creates new, different communities online.

Fatima in the women’s section of a mosque on Fulton Avenue.

Churches, mosques, synagogues, and meditation classes are spaces in which individuals have in-person social interactions. They are also gathering places for religious communities in which intergenerational conversations can take place. I think our society needs both types of meeting spaces – those that occur online and those that are located within our communities.

Painting by Kyle Jorgensen. From the Bootleg Bart art exhibit, in which local Salt Lake artists produced art based on the Simpsons character.

As a society, we are enamored with wrapping our heads around the creative process. In Distrust That Particular Flavor, William Gibson suggests that each of us is developing our own personal microculture: the accumulation of every book we’ve read, every conversation we’ve had, every movie we’ve seen, every piece of art we’ve seen, and every song to which we’ve listened. Design legend Paula Scher echoes Gibson’s thoughts in an interview in which she describes the creative process as a “slot machine” that remixes the collection of experiences you’ve amassed in your life. Combinatorial creativity hinges on our ability to actively cultivate this private microculture.

If we all agree that influence is vital to the creative process, then why do so many people resist a “remix” culture?

In a Harper’s article entitled “The Ecstasy of Influence” (h/t Harold Bloom), writer Jonathan Lethem tackles the complicated subject of plagiarism in art. While acknowledging the role that copyright law plays in a market economy, Lethem falls into the same camp as Gibson and Scher, advocating the type of “open source” culture in literature and art that already exists in jazz and blues music, for instance.

We are ultimately shaped by the writers and artists who came before us, argues Lethem. When some of the notes or coloring written by others leak out into our creations, it’s a normal part of the creative process. He puts it so eloquently:

“Finding one’s voice isn’t just an emptying and purifying oneself of the words of others but an adopting and embracing of filiations, communities, and discourses. Inspiration could be called inhaling the memory of an act never experienced. Invention, it must be humbly admitted, does not consist in creating out of void but out of chaos.”

One of the major ideas underpinning postmodernism is that nothing is truly original. Everything is parody or imitation of something else. Probing this idea further, Lethem argues that most everything that’s ever been written or said is plagiarism. We receive everything secondhand, drawn from hundreds and hundreds of other sources that are imperceptible to us.

On an anecdotal level, my first introduction to some of the best pop culture and film was through parodies that appeared on The Simpsons (who can forget the brilliant Kubrick love letter ‘The Shinning’?!) My first brush with classic literature was watching a dog play Cyrano de Bergerac, Faust, and Tom Sawyer on the television show Wishbone. When I’m working on a visual project, my first impulse is to flood my mind with poetry, art, and photography.

So again I’ll ask the question: Why resist an “open source” culture?

Art by Alexa Hall. From the Bootleg Bart art exhibit.

The ethics of creative license can get murky. Artist Joy Garnett shares an interesting anecdote in which she was accused of copyright infringement for modeling a painting on a documentary photograph without giving attribution. The photographer in question – Susan Meiselas – offers up her side of the story. The man in the photograph was a Sandinista rebel hurling a bomb at the Somoza national guard in 1979. Meiselas says that she doesn’t object to reappropriation of the image, but she does object to the degree to which duplication has decontextualized the photograph from its original meaning.

“History is working against context,” she says. “We owe this debt of specificity not just to one another but to our subjects.”

Although combinatorial creativity is inherent to art, artists enter a gray, murky area when they choose to riff off of others’ work. When it comes to telling stories from cultures that differ from our own, context matters.

My undergraduate studies were focused on a specific geographic region and culture – the Middle East and Arab culture – and so I’m sensitive to the ways in which Western cultures have chosen to represent (and misrepresent!) conflict in the Middle East. Often when dramatic images from the news circulate, they are ripped from their context and therefore lose their meaning. For instance, this week the Internet reacted to shocking pictures of a Kurdish refugee child whose body had washed ashore in Turkey. While many articles sought to educate the public about the Syrian refugee crisis, the images themselves converted the boy into an emblem, a victim of broad conflict in the Middle East.

So where does that leave us? I’m not certain. Like Lethem, I’ve been influenced by writers and artists in my own creative work and I think that as a society we should continue to cultivate a “remix” culture. On the other hand, I want the art that is being created to add to the conversation, not detract from it.

Robert Irwin at Pace Gallery. “Red Drawing, White Drawing, Black Painting.” 2009.

How do we define physical interaction?

According to Chris Crawford, interaction is a cyclic process that requires two actors, human or otherwise, who alternately listen, think, and speak. In this way, physical interaction can be considered a kind of two-way conversation. Interactivity, he argues, is a deliberate behavior that evolved in animals millions of years ago because it helped them better retain information about their environment. Crawford is quick to clarify that not every encounter is interactive, however. For instance, reading a book is participatory, not interactive, because the communication is one-sided.

While this definition might seem fairly straightforward, many designers contend that it is too narrow. In his piece “A Brief Rant on the Future of Interaction Design,” Bret Victor suggests that many interaction designers are too quick to abandon the tactile in favor of the visual. Victor argues that our encounters with everyday objects are interactive because those objects offer us physical feedback. Holding a cup of water, for example, is a tactile form of physical interaction that gives us more information than tapping an app on an iPhone.

I tend to align myself more closely with Victor’s definition of interactivity – after all, we are constantly receiving physical feedback from our environment that engages all of our senses, not just vision. I don’t think designers should be so quick to dismiss these other kinds of encounters as forms of physical interaction.

What is good physical interaction?

Regardless of how you choose to define interactivity, good physical interaction needs to be intuitive, engaging the various “languages” that humans already use to interact with the world, whether that be auditory, visual, or kinesthetic. A good interactive designer aims to minimize the memory load and the amount of mental effort humans need to put into the interaction.

In his book, Crawford argues that interactivity engages the mind more powerfully than any other form of expression. Research seems to back this assertion: studies suggest that interactive classroom environments help students better retain information.

What about non-interactive digital technology?

Most digital technology is to some degree interactive. However, I agree with Crawford that there are many experiences we have branded as “interactive” that aren’t really two-sided interactions. Reading a Kindle might require the user to swipe, but it’s otherwise pretty non-interactive. Watching a movie in a movie theater or on Netflix doesn’t require much interactivity.

In the future, we’ll witness the emergence of new forms of technology that turn what was once a non-interactive experience (for instance, watching a movie) into an interactive, participatory experience. Several films at Sundance last year explored this new form of storytelling, including “The Source (Evolving)” and “I Love Your Work.” I think we’ll continue to see artists deconstruct the interactive/non-interactive binary.
