Process Post 1: Making Artwork with ml5
March 1st, 2022
Background
This year, I have made several interactive works using Processing and p5.js. Now I will try something new: creating an artwork that uses machine learning.
More specifically, I will use the ml5.js library for p5.js (or plain JavaScript). Arthur Samuel described machine learning as a "field of study that gives computers the ability to learn without being explicitly programmed" (Some Studies in Machine Learning Using the Game of Checkers, 1959).
The three steps involved in Machine Learning are:
- Collect data
- Train the model (using the collected data)
- Deploy the model
It is also possible to use a pre-trained model, in which case only step 3 is required.
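Using a pre-trained model really is that short. A minimal sketch of step 3 with ml5's PoseNet model (based on the ml5 0.x API; the confidence threshold of 0.2 is my own choice) might look like this:

```javascript
// Minimal p5.js + ml5.js sketch that deploys the pre-trained PoseNet model.
// p5 global mode: the library calls setup() and draw() for us in the browser.
let video;
let poses = [];

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide(); // we draw keypoints ourselves instead of showing the raw feed

  // Load the pre-trained model -- no data collection or training needed
  const poseNet = ml5.poseNet(video, () => console.log('model ready'));
  poseNet.on('pose', results => { poses = results; });
}

function draw() {
  background(0);
  for (const result of poses) {
    for (const kp of result.pose.keypoints) {
      // Only draw keypoints the model is reasonably confident about
      if (kp.score > 0.2) circle(kp.position.x, kp.position.y, 10);
    }
  }
}
```

The `'pose'` event fires continuously, so `poses` always holds the most recent detections when `draw()` runs.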
My Concept
My work consists of a projected display that is being informed by the movement of viewers in the gallery. With an active feed from a webcam, my code will extract key points from viewers' bodies and create an abstracted representation of their bodies that strips away recognizable features.
Inspiration
Craig Fahner's artist statement about Eigengrau.
I tried to figure out how Fahner made his artwork and found that he used a combination of tools. I only plan to use p5.js in combination with ml5.js, so my project will be made quite differently from Fahner's.
Work in Process
Finding a Helpful Reference on ml5
I looked through the ml5 website and found that PoseNet is the pre-trained model best suited to my concept. I tried others, such as BodyPix, but they didn't work as planned.
I watched some tutorials from Daniel Shiffman's YouTube channel "The Coding Train."
This is the first one I watched:
Creating Custom Code
After watching the video and talking with Paul, I was able to get started on the first version of my project. Paul and I worked with the PoseNet sample and modified it by hiding the video feed so that only the mapped skeleton is shown.
Here are some of the other modifications completed this week:
- Set the background opacity to 5 so the skeleton leaves a trail
- Replaced the keypoints on the skeleton's main joints with their assigned numbers, 0 to 16
- Used keypoint #15 from the skeleton to generate a trail of green circles
- Made the circles change from green to yellow when they overlap previous circles in the trail
- Restricted the trail to the 100 most recent circles; older circles get deleted
- Assigned each new skeleton drawing a random colour (red, magenta, or blue) to create differentiation
- Replaced all "x" values with width - x
- Applied filter(INVERT) to invert the colours
- Scaled the webcam footage by (-1, 1)
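A couple of the modifications above boil down to small, self-contained functions. This is my own sketch of how the 100-circle cap and the green-to-yellow overlap check could work, not the project's exact code (the helper names `addToTrail` and `overlapsAny` are mine; the p5 drawing calls are kept inside `drawTrail`, which only runs in the browser):

```javascript
// Keep only the `maxLen` most recent circles: push the new one,
// drop the oldest when the trail gets too long.
function addToTrail(trail, x, y, maxLen = 100) {
  trail.push({ x, y });
  if (trail.length > maxLen) trail.shift(); // delete the oldest circle
  return trail;
}

// True if (x, y) lands within `d` pixels of any earlier circle in the trail.
function overlapsAny(trail, x, y, d = 12) {
  return trail.some(c => Math.hypot(c.x - x, c.y - y) < d);
}

// Inside draw(): keypoint #15 feeds the trail each frame, and x is
// mirrored with width - x so the display behaves like a mirror.
function drawTrail(trail) {
  for (const c of trail) {
    // yellow where a circle overlaps earlier ones, green otherwise
    if (overlapsAny(trail.slice(0, trail.indexOf(c)), c.x, c.y)) {
      fill(255, 255, 0);
    } else {
      fill(0, 255, 0);
    }
    circle(width - c.x, c.y, 12);
  }
}
```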
For my project, I want the key points in the skeleton to emit particles. In the video I followed, Shiffman explains how to emit particles from a single fixed point, which is helpful but requires me to do some more research.
Fortunately, Shiffman made another video about coding multiple particle emitters!
In this second video, Shiffman nests the particle class within an emitter class, so each emitter manages its own collection of particles, and a new emitter is created on each mouse click. That nesting is what makes multiple sources of particles possible.
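The pattern, as I understand it, looks roughly like this. It is a sketch rather than Shiffman's exact code, with the drawing call reduced to a comment so the particle logic stands on its own:

```javascript
// Each Emitter owns its own array of Particles, so many emitters can
// run at once -- one per mouse click, or one per skeleton keypoint.
class Particle {
  constructor(x, y) {
    this.x = x;
    this.y = y;
    this.vx = Math.random() * 2 - 1; // small random horizontal drift
    this.vy = Math.random() * -2;    // float upward
    this.lifespan = 255;             // fades out over time
  }
  update() {
    this.x += this.vx;
    this.y += this.vy;
    this.lifespan -= 5;
  }
  isDead() {
    return this.lifespan <= 0;
  }
  show() {
    // p5-only: fill(255, this.lifespan); circle(this.x, this.y, 8);
  }
}

class Emitter {
  constructor(x, y) {
    this.x = x;
    this.y = y;
    this.particles = [];
  }
  emit(n = 1) {
    for (let i = 0; i < n; i++) this.particles.push(new Particle(this.x, this.y));
  }
  update() {
    for (const p of this.particles) p.update();
    // prune dead particles so the array (and frame time) stays bounded
    this.particles = this.particles.filter(p => !p.isDead());
  }
}

// In the sketch, mousePressed() would push a new Emitter into an array,
// and draw() would call emit(), update(), and show() on each one.
```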
Shiffman also has a video called Particle Systems with Inheritance. It essentially shows how to create a particle emitter that produces differently shaped particles.
This gave me the idea of creating a particle emitter that looks more custom made and unique.
At this point, I could manually set the size of the fireball to any value. In the two previous images, the size was 48 points. When I increased it to 100 points, the emitter became a filter that made my face glow.
You can also see that the lines were drawn only when the entire body part was within the frame; that's why the arm on the right had lines while the left arm did not.
Paul suggested that I "use variable 'd' on the emitter.show() function ... which calls the particle.show() function (pass d to that as well) ... which draws your image to the size you request (e.g. d * 4) ... instead of getSize()." This helped a lot and made the emitter size change with user proximity to the webcam.
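My reading of Paul's suggestion: compute a distance `d` (I'm assuming it is a distance between two keypoints, as a proxy for how close the viewer is to the webcam) and pass it through both `show()` calls instead of relying on a fixed `getSize()`. A hedged sketch; the helper name `keypointDistance` and the shoulder-to-shoulder choice are mine:

```javascript
// Hypothetical proximity proxy: the closer the viewer, the farther apart
// their shoulders appear in pixels, so this distance grows with proximity.
function keypointDistance(a, b) {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

class Particle {
  constructor(x, y) {
    this.x = x;
    this.y = y;
  }
  // d flows in from Emitter.show(); the image is drawn at d * 4,
  // per Paul's suggestion.
  show(d) {
    // p5-only: image(fireball, this.x, this.y, d * 4, d * 4);
    return d * 4; // returned so the sizing rule is visible outside p5
  }
}

class Emitter {
  constructor(x, y) {
    this.particles = [new Particle(x, y)];
  }
  // Pass d straight through to every particle instead of calling getSize().
  show(d) {
    return this.particles.map(p => p.show(d));
  }
}
```

With this wiring, one measurement per frame drives the size of every particle an emitter draws.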
Regarding my code, here are my latest questions:
- How do I apply an emitter to multiple skeletons/poses (multiple people)?
- How do I apply emitters to various body parts?
- How can I toggle my preview to fullscreen?
- My mouse cursor is hidden when I hover over the preview; how can I enable it again?
- Can I make the lines of the skeleton look like they're glowing?
- Can I reduce the jitter movement of the skeleton?
- Can I simplify my code to make it run faster?
Finding the Right Gallery & Display Method for My Project
Of the three media arts galleries, I'm thinking that the Space In-Between gallery is the best option for my project.
It might be convenient because I could place my webcam where the Xbox currently is. I saw a computer in that gallery as well and am hoping to use it to run my p5.js file.
I am planning to use my personal webcam, the Logitech C920e, which records video in 1080p.
It uses a USB cable to connect to a computer. Although it is not shown in the image, the cable is integrated into the webcam and cannot be removed.
I will probably have to buy or rent a long adapter cable so the webcam can reach the computer in the gallery. A potential risk with a long cable is that it could make the video lag, so the experience would no longer feel real-time.
My second option is to display my work on a rented monitor or TV screen from Audio Visual Services. The advantage to this option is that I could rest the webcam on the top of the screen and avoid the long adapter cable.
I reached out to Peter Redecopp, the Media Arts Technician at AUArts, to ask for any tips he might have to make this project work.
In summary, my main questions are:
- How do I use a webcam in the Media Arts galleries?
- Will a long adapter cable for my webcam cause lag?
- How can I modify the display to fit my aesthetic and concept?
- Can I use the computer to play audio on the speakers?