V Motion Project – Part II: The Visuals

In this post I’ll explain how the visuals came together for the V Motion Project. For an overview of the Kinect controlled instrument and how it works, see Part I – The Instrument. You can watch the music video here.

The climax of the project was a performance in downtown Auckland, with speaker stacks blaring and a set of giant projectors shooting visuals at a 30 metre by 12 metre wall in front of the motion artist. Three different film crews were there shooting the event for a TV commercial, a music video, and a live TV feed.

The projected visuals needed to be both spectacular and informative. It was important that anyone watching the show could instantly see that Josh was creating the music with his movements, not just dancing along to the song. This meant we needed to visually explain what he was doing at all times, how the song was being built up bit by bit, and how each motion was affecting the music. We also needed to put on an exciting show with visual fireworks to match the arc of the song.


A shot from the performance


Matt’s early concept sketch

Design and motion graphic maestros Matt von Trott and Jonny Kofoed of Assembly led the design effort. This early concept sketch shows the main elements of the visuals. The motion artist’s digital avatar, a giant ‘Green Man’, is center stage surrounded by a circular interface. Around him a landscape grows and swells as the track progresses. The motion artist stands directly in front of the wall, so when viewed from behind his silhouette is sharply defined against the glowing backdrop.

I wrote the software to run the visuals in C++ with OpenFrameworks. For the performance, it was running on a Mac Pro with 16 GB of RAM and an NVIDIA video card with 1 GB of VRAM and three outputs. The main visual output for the wall was 1920 x 768, a secondary output for the other building was 432 x 768, and a third video-out displayed a UI so I could monitor the machine and tweak calibrations. The A/V techs at Spyglass furnished the projectors, generators, and speakers for the performance. The main visuals ran to their Vista Spyder, which split the display across two 20K projectors and took care of blending the overlap.

Feeding the Machine

As I described in my previous post, the instrument runs off two separate computers: one in charge of the audio side of things, the other the visuals. The first challenge was to get as much realtime data as we could out of the audio machine and into the visuals machine, so we could show what was happening with every aspect of the instrument. We did this by sending JSON data over UDP on every frame. The audio system tells the video system where the player’s skeleton joints are in space, which set of audio controls they are manipulating, and the current state of those controls. The visuals system also takes an audio line-out from the external sound card and processes the sound spectrum. The visuals system is in charge of the “air keyboard”, so it already knows the state of the keyboard and which keys are down.
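Here’s a minimal sketch of the receiving end of that plumbing, using ofxUDPManager from the ofxNetwork addon and the community ofxJSON addon. The port and message fields shown are illustrative, not the exact schema we used:

// inside ofApp
ofxUDPManager udp;
ofxJSONElement state;

void ofApp::setup(){
    udp.Create();
    udp.Bind(11999);          // port is illustrative
    udp.SetNonBlocking(true); // never stall the render loop waiting on a packet
}

void ofApp::update(){
    char buffer[65535];
    int bytes = udp.Receive(buffer, sizeof(buffer));
    if( bytes > 0 ){
        // one JSON packet per frame from the audio machine
        if( state.parse(std::string(buffer, bytes)) ){
            // e.g. which control set is active and its current value
            std::string control = state["control"].asString();
            float value = state["value"].asFloat();
            // ...update the interface layer from these
        }
    }
}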

Layers within Layers

To help us understand how the visuals were going to come together, I organized the realtime rendering to use distinct layers stacked up and blended together for the final image, just like a Photoshop or After Effects file. This way we could divvy up the layers to different people and update the system with their latest work as we went along. The illustration below shows the 4 layers of the visuals; the background ‘landscape’, the machine, the Green Man, and the interface elements.

To combine the layers, Matt and Jonny wanted to use the equivalent of Screen blending in Photoshop. This gives a great look to the vector-monitor style of the visuals (like the old Battlezone or Star Wars arcade machines) since the lines get brighter where they overlap. It also turned out to be a great technique for increasing the speed of realtime rendering. Screen blending works in a similar way to additive blending, where black has no effect on the output. This means we didn’t need an alpha channel anywhere; we just rendered graphic elements against a black background. By getting rid of the alpha channel, we saved 8 bits per pixel which, when you’re rendering 2 million+ pixels per frame, really adds up.

// enable screen blending
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_COLOR);

// draw stuff..

// disable blending
glDisable(GL_BLEND);
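Putting the layers and the blend mode together, each layer can be rendered into its own offscreen FBO against black and then screen-blended into the final frame. A minimal sketch of the idea (the FBO names are mine, not the production code):

// one offscreen buffer per layer, each allocated at setup and rendered against black
ofFbo landscape, machine, greenMan, interfaceLayer;
std::vector<ofFbo*> layers = { &landscape, &machine, &greenMan, &interfaceLayer };

void drawComposite(){
    ofBackground(0);
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_COLOR); // screen blend, as above
    for( auto* layer : layers ){
        layer->draw(0, 0); // overlapping lines get brighter
    }
    glDisable(GL_BLEND);
}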

The Landscape

Matt and the team spent weeks crafting a series of epic environments and effects that correspond to the build-up of the song. Because we knew the footage of the performance needed to match the radio edit of the song for the music video to work, we were able to lock down the exact timing of the musical transitions and Matt used these to orchestrate the visuals of the landscape. The final render took over 3 hours on Assembly’s massive render wall.

In the visuals software, the landscape layer contains a large video that’s precisely synchronized to the music. A big challenge was starting the video at the exact moment that Josh “kicks in” the track. We do this by sending a signal from Paul’s music system to my visuals machine over UDP. The playback needed to be fast and stable, so I used QTKit to play the video. It’s 64-bit, hardware accelerated, and runs in a separate thread, so it did the job fantastically, playing the 1.9GB Motion JPEG QuickTime movie at a steady 60fps.
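A rough sketch of that trigger, using the stock ofVideoPlayer API for brevity (the production build drove QTKit directly, and the port and “start” message here are made up):

ofVideoPlayer landscapeVideo;
ofxUDPManager cueSocket;
bool started = false;

void ofApp::setup(){
    landscapeVideo.load("landscape.mov"); // the big Motion JPEG movie
    landscapeVideo.setLoopState(OF_LOOP_NONE);
    cueSocket.Create();
    cueSocket.Bind(12000);       // illustrative port
    cueSocket.SetNonBlocking(true);
}

void ofApp::update(){
    char msg[64];
    int n = cueSocket.Receive(msg, sizeof(msg));
    // the audio machine sends a cue the instant the track kicks in
    if( !started && n > 0 && std::string(msg, n) == "start" ){
        landscapeVideo.play();
        started = true;
    }
    landscapeVideo.update();
}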

The Green Man

For the motion artist’s on-screen representation, we wanted a glitchy, triangulated look that matched the rest of the visuals. Creating the effect turned out to be fairly simple thanks to the silhouette that you can easily get out of the Kinect’s depth information. You simply ignore all the pixels of depth information beyond a set distance from the camera (4 metres in our case). If you don’t have any obstructions around the subject, you can get a very clean silhouette of the person.
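A minimal sketch of that threshold, assuming a raw depth buffer in millimetres (the exact accessor depends on your Kinect wrapper, e.g. ofxKinect):

// build a binary silhouette mask from the Kinect depth buffer
void buildSilhouette(const unsigned short* rawDepth, int w, int h, ofPixels& mask){
    mask.allocate(w, h, OF_PIXELS_GRAY);
    const unsigned short maxDepth = 4000; // ignore everything beyond 4 metres
    for( int i = 0; i < w * h; i++ ){
        unsigned short d = rawDepth[i];
        // zero means "no reading" on the Kinect, so drop those pixels as well
        mask[i] = (d > 0 && d < maxDepth) ? 255 : 0;
    }
}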


Left: Early style frame for Green Man, Right: final Green Man

To create the triangle effect, I just run the silhouette through a few stages of image processing. I use OpenCV to find the contours, then simplify the outline by taking every 50th point. I run this series of points through a Delaunay triangulation, then do one more check against the original depth information to decide what color to fill each triangle: triangles are lighter the nearer they are to the camera, and also lighter from top to bottom. This makes it feel like the figure is being lit from above, giving it a subtle sense of form.
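In outline, the pipeline looks something like this. ofxCvContourFinder is the OpenCV wrapper that ships with openFrameworks; ofxDelaunay is a community addon whose exact API may differ from what’s shown, and depthAt() is a hypothetical depth lookup standing in for a read of the original depth buffer:

ofxCvGrayscaleImage maskImage;     // holds the binary silhouette from above
ofxCvContourFinder contourFinder;

void drawGreenMan(int w, int h){
    // find the single largest blob, no holes (min area of 500 is illustrative)
    contourFinder.findContours(maskImage, 500, w * h, 1, false);
    if( contourFinder.blobs.empty() ) return;

    // simplify the outline by keeping every 50th contour point
    ofxDelaunay delaunay;
    const std::vector<ofPoint>& pts = contourFinder.blobs[0].pts;
    for( size_t i = 0; i < pts.size(); i += 50 ){
        delaunay.addPoint(pts[i]);
    }
    delaunay.triangulate();

    ofMesh& mesh = delaunay.triangleMesh;
    for( size_t i = 0; i + 2 < mesh.getNumVertices(); i += 3 ){
        ofPoint c = (mesh.getVertex(i) + mesh.getVertex(i + 1) + mesh.getVertex(i + 2)) / 3.0f;
        // lighter when nearer the camera, and lighter towards the top
        float nearness = 1.0f - depthAt(c.x, c.y) / 4000.0f; // depthAt() is hypothetical
        float topLight = 1.0f - c.y / float(h);
        ofSetColor(255 * ofClamp(0.5f * nearness + 0.5f * topLight, 0, 1));
        ofFill();
        ofDrawTriangle(mesh.getVertex(i), mesh.getVertex(i + 1), mesh.getVertex(i + 2));
    }
}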

The User Interface

The user interface of the instrument has a huge job to do. It needs to very clearly show how the instrument works and what Josh is doing to create the music. Jonny and the guys spent a lot of time trying different ideas and looks for the interface. It was important to us that nothing about it was fake, and that every element had a purpose.

The visuals for the interface are created by a combination of pre-rendered transitions, live elements drawn with code, and sprite animations triggered by the motion artist’s actions. For instance, when Josh hits a key on the keyboard, I draw a “note” on the score that’s ticking away above him as well as fire off a sprite animation from the key that he hit. If you watch the video closely, when he’s finished creating a sequence of notes, it gets sucked into a little reel-to-reel recorder in the bottom left which then loops it back.
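The sprite mechanism itself is simple: when a key-hit event arrives, spawn a short frame-sequence animation at that key’s position and let it play out. Roughly, with the structure assumed rather than taken from the production code:

struct SpriteAnim {
    ofPoint pos;
    float startTime;
};

std::vector<ofImage> spriteFrames;   // pre-rendered frame sequence
std::vector<SpriteAnim> liveSprites;
const float spriteFps = 30.0f;

void onKeyHit(const ofPoint& keyPos){
    liveSprites.push_back({ keyPos, ofGetElapsedTimef() });
}

void drawSprites(){
    for( auto it = liveSprites.begin(); it != liveSprites.end(); ){
        int frame = (ofGetElapsedTimef() - it->startTime) * spriteFps;
        if( frame >= (int)spriteFrames.size() ){
            it = liveSprites.erase(it);  // animation finished
        } else {
            spriteFrames[frame].draw(it->pos.x, it->pos.y);
            ++it;
        }
    }
}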

Each different tool that Josh uses to create the music needs to be visualized in a slightly different way, to help explain what he’s doing.

The keyboard keys change color when hit, and the notes he’s played are drawn in an arc across the interface. The second control is nicknamed “Dough”, because the motion artist uses it to knead and shape the sound: the ball (and sound) grows when his hands are wide apart, and shrinks when they’re close together. The rotation of the ball affects the sound as well. When he’s controlling the LFO (that distinctive dubstep “wobble” effect), we draw yellow arrows moving at the same frequency as the audio oscillation that he pulls up and squashes down.
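Keeping the arrows locked to the audible wobble just means driving their motion from the same LFO rate the audio machine reports. A sketch, where drawArrow() is a hypothetical helper:

float lfoPhase = 0;

// 'lfoRate' arrives each frame in the JSON state from the audio machine, in Hz
void updateLFO(float lfoRate){
    // advance phase by rate * elapsed time so the speed tracks the wobble exactly
    lfoPhase += lfoRate * ofGetLastFrameTime();
}

void drawLFOArrows(float centerX, float baseY){
    // arrows bob up and down at the oscillation frequency
    float offset = 40 * sinf(lfoPhase * TWO_PI);
    drawArrow(centerX, baseY + offset); // hypothetical helper
}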

The sprites that come out of each keyboard key have a distinct look. This video shows how each animation was designed to match the sound of its sample.

And here’s a bit of code that I used all over the place for drawing the live gauges, MIDI tracks, keyboard keys, and other elements into the circular interface. Fight the tyranny of right angles!

void visualsWindowListener::circleStroke( int center_x, int center_y, int radius, int strokeWidth, float startAngle, float endAngle, int resolution ){
    // scale the segment count to the portion of the full circle being drawn
    float resolutionOfArc = ((endAngle - startAngle) / (2 * PI)) * resolution;
    float radiansPerSegment = (endAngle - startAngle) / resolutionOfArc;
    float angle = 0;

    ofBeginShape();

    // trace the outer edge of the arc from startAngle to endAngle
    for( int i = 0; i < resolutionOfArc; i++ ){
        angle = startAngle + i * radiansPerSegment;
        ofVertex(center_x + radius * cos(angle), center_y - radius * sin(angle));
    }
    ofVertex(center_x + radius * cos(endAngle), center_y - radius * sin(endAngle));

    // step in by the stroke width and trace the inner edge back to startAngle
    int radius2 = radius - strokeWidth;

    ofVertex(center_x + radius2 * cos(endAngle), center_y - radius2 * sin(endAngle));

    for( int i = resolutionOfArc - 1; i >= 0; i-- ){
        angle = startAngle + i * radiansPerSegment;
        ofVertex(center_x + radius2 * cos(angle), center_y - radius2 * sin(angle));
    }

    ofEndShape(true);
}
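For example, a three-quarter ring gauge, 10 pixels thick, centered on screen (angles in radians):

circleStroke(ofGetWidth() / 2, ofGetHeight() / 2, 200, 10, 0, 1.5 * PI, 100);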

What’s next?

This technology is amazing, and it feels like the tip of the iceberg. I’d love to see v2.0: multiple musicians, flexible loop editing, realtime VJing. And as new cameras come out with higher frame rates and higher resolutions, the responsiveness and power will get better and better. It’s going to be fun!

[Update 7/20: For more ‘next level’ Kinect misuse, check out the interactive music video we just released for Neil Finn’s (of Crowded House fame) latest band. You watch and control the video in 3D as it plays in your web browser! Some behind the scenes details here.]
