Artist/designer and technologist Josh Nimoy and VFX artist Matt Motal are rock stars. At Motion Theory, Nimoy, a regular contractor since 2005, and Motal, an in-house artist at sister company 1.1 VFX, didn’t work side by side, but both were part of the post-production team for the hit Black Eyed Peas music video, “Boom Boom Pow.”
The video is visually stunning due in part to the complexity of the post. “A programmer is generating things that Maya and Adobe After Effects cannot do,” explains Nimoy, who previously worked on the Nike “One” commercial, where “generative diagrams and graphics swirl around and hover over the people’s heads.” (The spot won a Type Directors Club award.) “We find that these things are dealing with large amounts of data, custom particle behavior, physics simulation, footage and image analysis, 3D model processing, randomness and chaos in magnitude, and everything in between.” Motal, a master at Autodesk Flame, elaborates that “the coders [are] creating elements for the compers to integrate into the shot, but there’s a lot of overlap and back and forth.”
Rather than adjusting small details one by one, Nimoy’s toolset includes “slider bars, key controls, and data files. A big part of this is that when you are writing programs to generate an outcome — even if the outcome looks as though someone could have done it by hand — the art direction process was totally different. We are dealing with a greater diversity of creative options by writing software-art, and we are choosing from these options much more rapidly. It’s both exotic and efficient.”
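Nimoy’s point about art-directing through exposed parameters rather than hand-tweaking can be sketched in miniature. The parameter names and file format below are hypothetical illustrations, not Motion Theory’s actual tools; the idea is that one editable data file regenerates a whole family of creative options.

```python
import json
import random

# Hypothetical parameter file a director could edit between reviews;
# none of these names come from the actual production tools.
params_file = """
{"seed": 7, "count": 5, "spread": 40.0, "drift": 2.5}
"""

def generate(params):
    """Produce one deterministic 'take' of a generative layout."""
    rng = random.Random(params["seed"])
    points = []
    x, y = 0.0, 0.0
    for _ in range(params["count"]):
        # Each point wanders by a director-controlled spread and drift.
        x += rng.uniform(-params["spread"], params["spread"]) + params["drift"]
        y += rng.uniform(-params["spread"], params["spread"])
        points.append((round(x, 2), round(y, 2)))
    return points

params = json.loads(params_file)
take_a = generate(params)

# Changing one number in the data file yields a new option
# without touching the code.
params["seed"] = 8
take_b = generate(params)

print(take_a != take_b)  # two distinct takes from the same program
```

Because the seed is just another parameter, every “take” is reproducible: the art director can ask for option seven again a week later and get the identical result.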
How did you interpret the programming needed from the original treatment? How early in the production were you involved in the video’s visual style?
Josh Nimoy I was there fairly early, maybe coming in right after they collected desired imagery from the web and did basic concept. I worked on some of the aesthetic research — a lot of which actually did not make it into the final cut. We were doing a lot of footage-to-particle systems. Keith was deforming 3D models based on music input and doing camera-vision processing to track shapes as they moved. We had a lot of options to choose from in the end. That’s usually the case with these projects. You create a lot of options.
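The footage-to-particle systems Nimoy mentions can be illustrated without any footage at all: seed a particle system from the bright pixels of a frame, then run simple physics on the result. The threshold, gravity value, and tiny hand-written “frame” below are all stand-in assumptions, not the production code.

```python
# A tiny frame of brightness values standing in for a video frame
# (real footage would come from a decoded image buffer).
frame = [
    [0.1, 0.9, 0.2],
    [0.8, 0.1, 0.7],
    [0.1, 0.1, 0.9],
]

THRESHOLD = 0.5   # illustrative: spawn only from bright pixels
GRAVITY = 0.05    # illustrative per-step downward pull

def spawn_particles(frame, threshold=THRESHOLD):
    """Emit one particle per pixel brighter than the threshold."""
    return [
        {"x": float(col), "y": float(row), "vx": 0.0, "vy": 0.0}
        for row, line in enumerate(frame)
        for col, value in enumerate(line)
        if value > threshold
    ]

def step(particles, gravity=GRAVITY):
    """Advance the toy physics one simulation step."""
    for p in particles:
        p["vy"] += gravity
        p["x"] += p["vx"]
        p["y"] += p["vy"]

particles = spawn_particles(frame)
step(particles)
print(len(particles))  # one particle per bright pixel -> 4
```

In a real pipeline the same loop runs per video frame, and the spawn rule can key off edges, motion, or music analysis instead of raw brightness.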
Matt Motal The Flame needs are usually determined by a consensus from our directors, producers, art directors, and Flame department. We try to use the department for the strengths the system offers: onlining the edit, building and maintaining the conform, breakouts, real-time viewing, color, high-end comping, finishing, client sessions, and layoffs.
What challenges arose? Any happy accidents?
MM When you’re developing a high-end video like “Boom Boom Pow,” there are lots of great ideas coming from a lot of brilliant people. You have to be able to adapt the edit and individual shots to keep pace with the latest vision. There are a lot of changes that happen all the way up to the last minute.
Anytime you’re pushing the envelope on new design and VFX, there are going to be looks and techniques that you stumble upon that end up being successful.
JN The biggest challenge is looking at the initial design ideas and trying to adapt them or take them further using programming. Another challenge is having a kind of foresight and wisdom about what the art direction is going to ask for, when they see your basic application. Like what new features will they request? Sometimes a modification is not so easy to add in given the way you originally coded something. Other times, I can just change a number or two and hit recompile.
Some of the visuals in the video appear tied to colors and edges, while others overlay with live-action movement and CG renderings. Can you describe some of your specific contributions? How much is generated from code versus frame-by-frame animation?
JN That’s computer vision. The specific library we used is called OpenCV. This was all generated from code, and no rotoscoping was done. That’s code that Keith Pasko wrote, not me. However, I do know about computer vision, as I am the author of JMyron, another CV library for Processing.
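The tracking Pasko built used OpenCV on real footage; as a dependency-free illustration of the underlying idea, the sketch below thresholds two tiny synthetic frames and follows a bright shape’s centroid between them. Every value here is a made-up stand-in for what a vision library computes at full resolution.

```python
def centroid(frame, threshold=0.5):
    """Centroid of all pixels brighter than the threshold --
    the crudest possible 'where is the shape?' detector."""
    xs, ys = [], []
    for row, line in enumerate(frame):
        for col, value in enumerate(line):
            if value > threshold:
                xs.append(col)
                ys.append(row)
    n = len(xs)
    return (sum(xs) / n, sum(ys) / n) if n else None

# Two synthetic frames with a bright 2x2 'shape' moving one pixel right.
frame_a = [
    [0.9, 0.9, 0.0, 0.0],
    [0.9, 0.9, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
]
frame_b = [
    [0.0, 0.9, 0.9, 0.0],
    [0.0, 0.9, 0.9, 0.0],
    [0.0, 0.0, 0.0, 0.0],
]

a = centroid(frame_a)   # (0.5, 0.5)
b = centroid(frame_b)   # (1.5, 0.5)
motion = (b[0] - a[0], b[1] - a[1])
print(motion)           # the shape moved one pixel to the right
```

Feeding that per-frame motion into generated graphics is what lets code-driven elements stick to performers with no hand rotoscoping.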
MM My contribution on this video was a little different than my typical role. Usually, my major role is comping shots, but for “Boom Boom Pow,” I spent most of my time onlining and breaking out all the shots, updating the edit via EDLs from our Final Cut editor, building and maintaining the master conform, and running daily review sessions. With something like 300-plus shots that needed updating every day, it warranted having an artist dedicated full-time. I did pitch in to comp a few shots, but my focus was the conform.
Sometimes VFX artists use their own code or open source actions to speed up or hack the application. Did you use any open source code or is it all custom for the job?
MM With Flame, our tricks aren’t necessarily custom code; they’re more along the lines of different techniques for the different tools. The Flame artist community comprises a small group of artists who each have really amazing techniques you’ll never find in a book or on the internet. It’s one reason I look forward to freelancers coming in, because I always learn powerful new ways to work. For example, Danny Yoon and I use certain techniques from Chris Moore so much that we give them names like “The Moore Track” or “The Moore Deform.”
JN All of my jobs incorporate open source from my past projects. Oftentimes, the only proprietary part of the code is the stuff at the highest level that glues it all together to make the final app. The rest of it is a bunch of “engines” or “toolkits” that I wrote in the past or that were provided by other people. Three big toolkits we’ve used across several production companies have been Processing, jttoolkit (my own C++ rapid devkit), and OpenFrameworks. Lately I’ve also been incorporating wxWidgets to provide a lot more GUI than normal software art tends to have.
How did you work with your team, Keith Pasko and Ryan Alexander?
JN Keith Pasko, Ryan Alexander, and I, along with people beyond that project, are part of a network of colleagues (“Ooh_Shiny”) which communicates every day about general techniques (without discussing specifics about what’s under NDA). By hiring one or a few of us, the client is buying into this “scene.” The coders on the same project are always sharing code, techniques, advice, and personal/political counseling, but for the most part, we each have our own relationships with a shared art director, or a shared group of art directors.
MM Danny Yoon, Chris Moore, and Rob Winfield are all great guys and amazing artists. It’s really important to be organized because we overlap on so many things that, by the time a shot or task gets completed, a few different artists have worked on it. You have to be flexible to changes so it’s imperative to be on the same page and be able to prioritize effectively. Similar to coding, you need to leave your work in a way that’s easy for the next artist to pick up and continue.
What was your favorite part of the project, be it in the production process or final product?
MM My favorite part would have to be getting to see the entire video progress as one unit through the project. Usually, I only see my individual shots, and only see the spot as a whole near the end when it’s time to do the finishing. With this project, I was able to watch the video evolve every day.
JN My favorite part of the project was listening to the music and enjoying really getting to know it — also knowing I was contributing to mainstream pop music culture made me very happy.
Where do you think music video and live event visuals can go next?
MM Since music videos have, for the most part, moved from MTV to the web, I think interactivity is the next step. Rather than watching the same video over and over, I think there could be dynamic elements that customize the viewing experience. Think about if you were watching a video and your Facebook photo or latest Google search was utilized in some way to provide a more personal/interactive experience. There’s an almost unlimited supply of data to be mined and utilized by code.
JN People are beginning to introduce interactive elements both in immediate space and at home through networks. I think we’re going to see a wider diversity of VJing styles: stuff that responds to the music better and stuff that does more computer vision. Personally, I think music videos should spend bigger budgets than they currently do — people should be hiring Industrial Light & Magic, Pixar, and DreamWorks to make very serious and important pieces of motion picture for music videos. And that is the case with a select few — but I don’t think the industry needs all that. All one seems to need for a music video these days is bitches and bling. And a YouTube account. I just think there’s so much more opportunity for creative expression and we’re not taking it.