Process Post 1: Making Artwork with ml5

March 1st, 2022

Background

This year, I have made several interactive works using Processing and p5.js. Now I will try something new: creating an artwork that uses machine learning.

More specifically, I will use the ml5.js library for p5.js (or plain JavaScript).

So what is machine learning?

Arthur Samuel described it as a "field of study that gives computers the ability to learn without being explicitly programmed" (from his 1959 work on machine learning and checkers).

The three steps involved in Machine Learning are:

  1. Collect data
  2. Train the model (using the collected data)
  3. Deploy the model

It is also possible to use a pre-trained model, in which case only step 3 is required.


My Concept

My work consists of a projected display informed by the movement of viewers in the gallery. Using a live feed from a webcam, my code will extract key points from viewers' bodies and create an abstracted representation of their bodies that strips away recognizable features.

As viewers move in the gallery, their bodies will leave a fading trail where they have been. If a viewer stands in the exact spot where someone else has been, their abstracted body will glow for as long as they occupy that spot. 

There are more ideas that I wanted to incorporate, but I think they are beyond the scope of this project and I won't have time to implement them. I had thought of writing the code so that it would store the body trails of all viewers and randomly release some of them after those viewers have left. When new viewers become part of the display, they would see the movements of previous visitors. When a viewer stands in the same spot as someone who visited at a different time, their bodies would glow and a sound would be emitted.

I intended this as an exploration of what it means to share a space. I think sharing is not limited to a single moment in time, and that we can share an experience with those who are from another time. This interaction would let viewers reflect on what it means to share and what it means to be separated from others by time.

It would be nice if I could incorporate an alternate or simplified version of this concept for my project that is due in 5 days. 

Inspiration

As I discussed my concept with my Programming instructor, Paul Robert, he showed me the work of Craig Fahner. In particular, the web-based interactive piece titled Eigengrau, which tracks and draws the outlines of users' eyes. When the users' eyes are closed, the work emits an audio frequency.  


Craig Fahner's artist statement about Eigengrau.




 

I tried to figure out how Fahner made his artwork and found that he used a combination of tools. I only plan to use p5js in combination with ml5 so my project will be made quite differently from Fahner's.
 


Work in Process

Finding a Helpful Reference on ml5

I looked through the ml5 website and found that PoseNet is the best pre-trained model for my concept. I tried others such as BodyPix but they didn't work as planned. 





I watched some tutorials from Daniel Shiffman's YouTube channel "The Coding Train." 

This is the first one I watched:


Creating Custom Code

After watching the video and talking with Paul, I got started on the first version of my project. Paul and I worked with the PoseNet sample and modified it by hiding the video feed so that only the mapped skeleton shows.
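
As a sketch of how that works (the helper name below is my own, not from the sample): ml5's PoseNet hands back poses whose keypoints each carry a part name, a confidence score, and a position, so hiding the video just means drawing those points instead of the frame.

```javascript
// Hypothetical helper: collect the keypoints of a PoseNet pose that are
// confident enough to draw. Each ml5 PoseNet keypoint has a part name,
// a score between 0 and 1, and a position { x, y }.
function visibleKeypoints(pose, minScore = 0.2) {
  return pose.keypoints
    .filter((kp) => kp.score >= minScore)
    .map((kp) => ({ part: kp.part, x: kp.position.x, y: kp.position.y }));
}
```

In the p5.js draw() loop, each returned point would be drawn with something like ellipse(x, y, 10), while the capture itself stays hidden via video.hide().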

Here are some of the other modifications completed this week: 

  • The background opacity was set to 5 to make the skeleton leave a trail
  • The keypoints on main joints of the skeleton were replaced with assigned numbers from 0 to 16
  • Keypoint #15 from the skeleton was used to generate a trail of green circles 
  • The circles were made to change from green to yellow when they overlap previous circles in the trail
  • The trail was restricted so that we can only see the 100 most recent circles from the trail. Older circles get deleted.
  • Each new skeleton drawing is assigned a random colour (red, magenta, or blue) to create differentiation.
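
The trail bookkeeping from the list above can be sketched in plain JavaScript (function names are hypothetical, and the p5.js drawing calls are omitted):

```javascript
const MAX_TRAIL = 100;
const trail = []; // most recent circle positions, oldest first

// Add a circle for keypoint #15 and drop the oldest once we exceed 100.
function addToTrail(x, y) {
  trail.push({ x, y });
  if (trail.length > MAX_TRAIL) trail.shift();
}

// Called before addToTrail: a new circle turns yellow when it overlaps any
// earlier circle (two circles of diameter d overlap when their centres are
// closer than d apart).
function circleColour(x, y, d = 20) {
  const overlaps = trail.some((c) => Math.hypot(c.x - x, c.y - y) < d);
  return overlaps ? 'yellow' : 'green';
}
```
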
One of the challenges I faced was flipping the webcam video so that it acted as a mirror. I think it is important for the viewer to see their movements reflected (not inverted) back at them. 

There were three different methods I tried to flip the webcam:

  • Replacing every x value with width - x
  • Using filter(INVERT)
  • Scaling the webcam footage by (-1, 1)

The third option, scaling, worked the best. filter(INVERT) actually inverts the colours of each pixel to create a negative image rather than mirroring it.
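
For reference, here is the scaling approach as it would look inside a p5.js sketch, alongside the per-point arithmetic of the first method (the p5.js calls are shown as comments since they only run in a sketch):

```javascript
// Mirroring via scaling: draw everything inside a flipped coordinate system.
// In the p5.js draw() function this looks like:
//   push();
//   translate(width, 0); // move the origin to the right edge
//   scale(-1, 1);        // flip horizontally
//   ...draw the video / skeleton here...
//   pop();

// Equivalent per-point maths, for mirroring individual keypoints instead:
function mirrorX(x, width) {
  return width - x;
}
```
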

Now that this is ready to go, I have to manipulate the code to create a display that looks artistic and not generic like the default.

Introducing Artistic Elements into the Code

I watched a Coding Train video about particle systems to learn how to incorporate a sparkle effect into my code. In the video, Shiffman shows William Reeves's 1983 paper, which describes particle systems as "a technique for modeling a class of fuzzy objects."

 
Something interesting I learned from the video is that the term "particle system" was coined for a custom visual effect made for Star Trek II: The Wrath of Khan.

I also learned that, in p5.js, a particle system is basically a class of circles with random velocities and lifespans.
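
A minimal version of such a particle, in the spirit of the video rather than its exact code:

```javascript
class Particle {
  constructor(x, y) {
    this.x = x;
    this.y = y;
    // Random velocity between -1 and 1 on each axis.
    this.vx = Math.random() * 2 - 1;
    this.vy = Math.random() * 2 - 1;
    this.lifespan = 255; // fades a little each frame
  }
  update() {
    this.x += this.vx;
    this.y += this.vy;
    this.lifespan -= 5;
  }
  isDead() {
    return this.lifespan <= 0;
  }
  // In p5.js, a show() method would draw an ellipse at (x, y)
  // with alpha set to this.lifespan, so the particle fades out.
}
```
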

I recreated the code by following along to the video. Here's a screenshot of what that looked like:



For my project, I want the key points in the skeleton to emit particles. In the video I followed, Shiffman explains how to emit particles from a single fixed point, which is helpful but requires me to do some more research. 

Fortunately, Shiffman made another video about coding multiple particle emitters!


In this second video, Shiffman nests the particle class within an emitter class, which turns out to be the key to having multiple sources of particles. Each new emitter is created through the user's mouse click.
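
A bare-bones sketch of that nesting (not Shiffman's exact code, and the drawing calls are left out):

```javascript
class Particle {
  // Minimal stand-in for the particle class from the previous video.
  constructor(x, y) {
    this.x = x;
    this.y = y;
    this.lifespan = 100;
  }
  update() { this.lifespan -= 2; }
  isDead() { return this.lifespan <= 0; }
}

class Emitter {
  constructor(x, y) {
    this.x = x;
    this.y = y;
    this.particles = []; // each emitter owns its own particles
  }
  emit(n = 1) {
    for (let i = 0; i < n; i++) {
      this.particles.push(new Particle(this.x, this.y));
    }
  }
  update() {
    for (const p of this.particles) p.update();
    // Prune dead particles so the array does not grow forever.
    this.particles = this.particles.filter((p) => !p.isDead());
  }
}
// On each mouse click, the sketch would push a new
// Emitter(mouseX, mouseY) into an array of emitters.
```
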

My main goal right now is to figure out how to create particle emitters that originate from the PoseNet skeleton's keypoints. 

However, as I was looking for answers, I came across another video by Shiffman about Particle Systems with Inheritance. It essentially shows how to create a particle emitter that produces differently shaped particles.
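
The inheritance idea can be sketched like this: a square subclass along the lines of the video's Confetti class inherits all the motion logic and only overrides the drawing. The shape strings below stand in for the real p5.js drawing calls.

```javascript
class Particle {
  constructor(x, y) {
    this.x = x;
    this.y = y;
    this.lifespan = 255;
  }
  update() { this.lifespan -= 5; }
  show() { return 'circle'; } // in p5.js: ellipse(this.x, this.y, 8)
}

class Confetti extends Particle {
  // Inherits the constructor and update(); only the drawing changes.
  show() { return 'square'; } // in p5.js: square(this.x, this.y, 8)
}
```
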


This gave me the idea of creating a particle emitter that looks more custom made and unique.

I ended up creating this fireball effect by modifying one of Shiffman's examples. Instead of displaying the particles as ellipses, they are displayed as glowing points: I loaded an image and used it as a texture for each particle. When they accumulate (with the ADD blendMode), the particles unite to create a fireball effect. Here's a screenshot of my modified display:





The fireball position changes with the mouse location.
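
The glow comes from how blendMode(ADD) composites overlapping pixels: each colour channel is summed and clamped at 255, so stacked copies of a dim glow texture brighten toward white-hot. A tiny model of that arithmetic (the function is mine, for illustration):

```javascript
// How blendMode(ADD) brightens overlapping particles: each colour channel
// of two overlapping pixels is summed and clamped to 255.
function addBlend(a, b) {
  return a.map((chan, i) => Math.min(255, chan + b[i]));
}

// Two dim orange glows stacked become a brighter, hotter colour:
// addBlend([120, 60, 10], [120, 60, 10]) → [240, 120, 20]
```
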

The next step was to incorporate this fireball into my modified PoseNet code. Luckily, it was fairly straightforward. I created a copy of the FireBall file and pasted sections of my PoseNet code into it.

I was able to fix the fireball to the center of my face by replacing the emitter's x and y coordinates with the PoseNet nose position.
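
In code, that replacement amounts to the following (the helper name is mine; in ml5's PoseNet the nose is keypoint 0):

```javascript
// Pin the emitter to the nose each frame: PoseNet lists the nose
// as keypoint index 0 in its 17-keypoint skeleton.
function pinEmitterToNose(emitter, pose) {
  const nose = pose.keypoints[0];
  emitter.x = nose.position.x;
  emitter.y = nose.position.y;
}
```
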




 
The effect still works when I wear a face mask and my nose is covered. This is excellent news because all the viewers in the gallery will be wearing masks due to restrictions.

The next step is to find out how to put particle emitters on multiple key points of the skeleton, and on multiple skeletons as well!
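
One possible starting point, which I have not built yet (the helper is hypothetical): collect an emitter position for every confident keypoint of every detected pose. ml5's PoseNet returns an array of results, each of which wraps a pose object.

```javascript
// Build an emitter position for every tracked keypoint of every detected
// pose, skipping low-confidence points. ml5's PoseNet results arrive as
// an array of { pose, skeleton } objects.
function emitterPositions(poses, minScore = 0.2) {
  const positions = [];
  for (const { pose } of poses) {
    for (const kp of pose.keypoints) {
      if (kp.score >= minScore) {
        positions.push({ x: kp.position.x, y: kp.position.y });
      }
    }
  }
  return positions;
}
// Each returned position could then drive its own Emitter instance.
```
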


In the process, I also had to learn how to make the size of the fireballs change depending on the user's proximity to the webcam.

At this point, I could manually adjust the size of the fireball to any value. In the two previous images, the size was 48 points. By adjusting it to 100 points, it became a filter that made my face glow.

You can also see that the lines were drawn only when the entire body part was within the frame. That's why the arm on the right had lines and the left arm did not.

Paul suggested that I "use variable 'd' on the emitter.show() function ... which calls the particle.show() function (pass d to that as well) ... which draws your image to the size you request (e.g. d * 4) ... instead of getSize()." This helped a lot and made the emitter size change with user proximity to the webcam.
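
One way to obtain such a d (an assumption on my part, not something Paul specified) is the apparent distance between the two eye keypoints, which grows as the viewer approaches the webcam:

```javascript
// Proximity-based sizing (sketch): the apparent distance between the two
// eye keypoints grows as the viewer approaches the webcam, so it can act
// as the 'd' that gets passed down through emitter.show() to particle.show().
function emitterSize(pose, factor = 4) {
  const leftEye = pose.keypoints[1].position;  // PoseNet index 1 = leftEye
  const rightEye = pose.keypoints[2].position; // PoseNet index 2 = rightEye
  const d = Math.hypot(leftEye.x - rightEye.x, leftEye.y - rightEye.y);
  return d * factor; // e.g. the glow image is drawn at d * 4
}
```
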

Regarding my code, here are my latest questions:
  • How do I apply an emitter to multiple skeletons/poses (multiple people)?
  • How do I apply emitters to various body parts?
  • How can I toggle my preview to fullscreen?
  • My mouse is hidden when I hover over the preview, how can I enable it again?
  • Can I make the lines of the skeleton look like they're glowing?
  • Can I reduce the jitter movement of the skeleton?
  • Can I simplify my code to make it run faster?

Finding the Right Gallery & Display Method for My Project

Of the three media arts galleries, I'm thinking that the Space In-Between gallery is the best option for my project. 

It might be convenient because I could place my webcam where the Xbox currently is. I saw a computer in that gallery as well and am hoping to use it to run my p5.js file. 


I am planning to use my personal webcam, the Logitech C920e, which records video in 1080p.

It uses a USB cable to connect to a computer. Although it is not shown in the image, the cable is integrated into the webcam and cannot be removed.

I will probably have to buy or rent a long adapter cable to make the webcam reach the computer in the gallery. A potential risk with a long cable is that it could cause the video to lag, making the experience stop feeling real-time.


My second option is to display my work on a rented monitor or TV screen from Audio Visual Services. The advantage to this option is that I could rest the webcam on the top of the screen and avoid the long adapter cable. 

I reached out to Peter Redecopp, the Media Arts Technician at AUArts, to ask for any tips he might have to make this project work.


In summary, my main questions are:

  • How do I use a webcam in the Media Arts galleries? 
  • Will a long adapter cable for my webcam cause lag? 
  • How can I modify the display to fit my aesthetic and concept?
  • Can I use the computer to play audio on the speakers?


In the meantime, I will continue working on my code!










