August 26, 2011

Making-of: “The Lacquerer”

Hi, I’m Ollie “cosmic” Borgardts, and I am the creator of the animated short “The Lacquerer”. I’m going to do my best to make this an interesting and entertaining read, but I should probably say up front that I will go into detail on how the movie was made. I’ll also touch upon the creative process and my personal experiences from making this production. At the bottom of this post I have attached a 30-minute walkthrough video in which I explain some of the techniques I used.

Behind “The Lacquerer”

The whole production took about 2 months (full time) to complete, and with such a long production time I feel I have plenty of tips about the production process and the ideas and techniques that went into the piece. The short was released at the Revision Demoscene Party 2011 where it won the Animation Competition.

The whole project started two months before the competition was to be held, when a good friend of mine, Pro from the demo group Nuance, told me about a promotional campaign for a 3D package named Messiah Studio at a very good price. I have been a faithful Maya-user for many years, but when I checked out the package, some of the features it offered were very interesting. Even though I had never used it before, I promptly ordered a license in the belief that I could learn what I needed to know in good time before the competition deadline got too close.


I really wanted to finish a cool production for Revision, so with hindsight being 20-20, I probably picked a bad time to start learning a new 3D package. I proceeded to spend quite some time just watching various tutorials online, to get to know the application and the differences from Maya. One trick I learned was to watch tutorials at double or triple speed to save some time (no joke, it’s what I did and it was very effective).

A word on why I decided to switch 3D packages, because I do simply love Maya. I use an older version (6) which I bought several years ago, and I’ve been using it for so long that I know the program inside and out and can therefore work quite efficiently in it. However, I do have some issues with it.

When creating digital characters I would begin in ZBrush and then bring the model into Maya for rigging. To bind the bones to the geometry, Maya requires you to go through a process called “weight painting”, which tells the skeleton how much influence each single bone has on the bound geometry. There are scripts that partly automate the process, but to be honest: most of them suck, and doing it manually sucks even more.

There are most likely newer versions of Maya that have new features or techniques for rigging and skinning, but for me, taking the step over to Messiah really helped me out. One example: in Maya, if I want to change something after the bone structure has been set up and bound to the mesh, I have to unbind all of it and repeat the process on the new or changed mesh.

For an artist, this workflow kills productivity. I want to be free from the very beginning: the planning stage, the modeling, texturing and animation, right up to the final export or the encoding process. I want to be able to make changes at any point in that toolchain. Period.

Another example: if I changed the character from a humanoid biped to, say, a 3-legged beast with 10 arms, I’d have to repeat the whole process. Again, this doesn’t work for me. This is one of the areas where Messiah shines. It has a very clever way of separating the binding/rigging stage from the animation. If I want to add more bones, I can do so, and the animation and other parts remain untouched. Awesome.

Thoughts on creativity (and art in general)

I was now in a place where I knew how to use Messiah — which is exactly the time that the creativity black hole hit me. I’d like to share some thoughts about creativity in general and about the approach I use when realizing my ideas.

My tools of choice are Maya (or now: Messiah), After Effects, Photoshop and ZBrush. Not necessarily in that order.

I believe that there are really no “new things”. Pretty much everything has been done before in some way. Even so, I look at other artists and what they’re doing, and I’m definitely learning something new every single day, and that’s a good thing. However, if there’s one thing I know, it’s that creativity can’t be forced when you are on a deadline.

I want to surprise the people at the event with my production, but how could I surprise them if I can’t even surprise myself? For each project I start, my personal goal is always to surprise myself in some way — to create something I’d never thought of before it’s actually finished.

So how can this be achieved? Well, I know what I’m able to do. I know what I’m too unskilled or too lazy for. I know my tools. I know my skills and my preferences when it comes to art. I know these things inside and out, so how do I “trick” myself? Well, one approach is to combine existing things. Take, for example, three existing things and combine them in some random fashion. It doesn’t matter which way you do it. The point is to not be logical: simply negate the facts.

Just to give you a little example of the process that works well for me:

Look out your window at the scenery outside. Then go: “Why do trees have their roots planted in the ground? What if it was upside down?” Argument one: defined.

Continue with the simple assertion: “What would happen if the tree was planted upside down?” Well, for one, it wouldn’t be able to access any water. So the consequence of this could be that the roots would need to stretch to the clouds to access water. Argument two: defined.

But what about the birds that usually sit on the soft branches? They don’t want to sit on the ugly, muddy roots. No, they want to sit on the branches and leaves as before. Therefore the result could be that they hang upside down instead, reaching for the branches. Argument three: defined.

..and boom — there’s your scene. The rest is just a matter of technical completion, however you choose to proceed.

The image to the right isn’t exceptional, but I hope you get the idea of how everything in an artistic world could be created this way. Just let things happen. You don’t want to control everything or be too logical here. That simply doesn’t work — or, to be more precise, it simply doesn’t work for me. I know a thing or two about the technical stuff, but in the creation process of something artistic, a logical approach just disturbs the art.

However, when the creative process is completed, you can come back and go on working on it in a professional, logical and disciplined way.

When perfectionism and ego get in the way

At an early stage of making “The Lacquerer” I just wanted to use ZBrush, model something and then animate it. I asked a friend of mine, Marc Ewald (a musician from the Netherlands), if he’d like to do a soundscape for it (just like he did for “Julie”, my last short movie experiment, for Breakpoint 2009). I believe he’s a genius at what he does, and so the brief I gave him went something like this:

Please think of a concept for a short film (but don’t tell me what it is) and then create a soundscape for it. No identifiable sounds like bird chirps, car horns and such. Just make something abstract that will fit with loosely defined contrasting terms such as “fast”, “slow”, “large”, “small”, “oppressive”, “liberating”. Again: don’t tell me the meaning of it. I plan to let the story unfold itself while I’m making it.

The reason for this was of course to have no preferences or preconceived notions when starting up. I could listen to the sounds as a small child would and just let the imagination kick in. What would I think of those sounds — what would they look like? Then I could concentrate on creating something visual around a sound that had itself been created completely disconnected from any visuals.

What I got from Marc was very good. Bombastic, epic, almost score-like in nature. I could picture it in a game such as “Metal Gear Solid”. It was absolutely fantastic and I was speechless. My problems started when I tried to visualize it. Suddenly my creativity was .. gone.

I had vivid pictures in my mind, such as huge boss-fights in epic PlayStation 3-games. The sound was great, but I kept thinking: “How on earth am I going to be able to match this, visually? How can I live up to the expectations the sound is setting the scene for?”

I thought about all those things for a couple of days, and then, as it does, a happy accident occurred.

I stumbled upon some audio samples I had lying around on my hard drive. They sounded interesting, so I chopped them up randomly into a three-minute collage. I also, for some reason, started modelling an old guy in ZBrush.

At this point I was slightly tired of the whole thing, but stuck to it anyway. Working on something was better than thinking about it. I believed that a solution would present itself, as it usually does.

I threw some bones onto the pretty sloppily created mesh and started animating it in a random way. No plot — just trying to forget about how I would handle the miserable situation I was in. I had no idea what to create, and I felt really bad for Marc, who had created a great piece that simply wasn’t what I needed at the moment. I really wanted to use it, but it was dawning on me that it wasn’t what I was looking for this time.

I rendered out some seconds of crap and brought it into After Effects where I started playing with color correction and other generic things. I just wanted to keep working, waiting for the solution — the idea — the special technique.

This is when I more or less randomly started playing with a feature in After Effects called..

Time Remapping

Time Remapping squeezes and stretches your video footage by changing the timing and playback rate. You can slow things down or speed them up, and it does the same with the sound — just like a record on a turntable. It’s a very basic feature, pretty much the first thing you learn when you start using After Effects. However, I had never thought about using it in a creative way. It was too simple, far away from the “epic effects” you see in the movies. It’s totally overused, boring and just too simple.
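To make the idea concrete, here is a minimal Python sketch of what a linear Time Remap curve does under the hood. The keyframe values are made up for illustration, and a real host like After Effects also blends between frames rather than snapping to the nearest one:

```python
import numpy as np

def time_remap(source_frames, curve, out_len):
    """Resample a clip along a time-remap curve.

    curve: list of (output_time, source_time) keyframes, in frames.
    Linear interpolation between keyframes, like a linear Time Remap
    curve; steep segments play fast, flat segments freeze the frame.
    """
    out_t = np.arange(out_len)
    keys_out = [k[0] for k in curve]
    keys_src = [k[1] for k in curve]
    # For each output frame, find which source time to show.
    src_t = np.interp(out_t, keys_out, keys_src)
    # Nearest-frame sampling for simplicity.
    idx = np.clip(np.round(src_t).astype(int), 0, len(source_frames) - 1)
    return [source_frames[i] for i in idx]

clip = list(range(100))  # stand-in for 100 frames of footage
# Play frames 0-50 over 25 output frames (double speed),
# freeze for 15 frames, then play 50-99 slowly.
curve = [(0, 0), (25, 50), (40, 50), (75, 99)]
remapped = time_remap(clip, curve, 76)
```

Squeezing the curve this way is exactly what produces the glitchy, turntable-like feel, since the audio is resampled along the same curve.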

In fact it’s so simple that I thought everyone using After Effects was using it. This is where ego kicked in.. “I wouldn’t use that — not me — the best digital artist in the universe — the one that only uses super-complex techniques — not even people at ILM do what I do..”

What utter bullshit! Ego struck again! The little bastard in my head took over and controlled me in a very destructive way that led to total stagnation — and I didn’t even realize it at the time. Instead of having fun with the new software and the digital tools, playing around with them to create something cool, my ego — the perfectionism and the logical approach — ruined everything.

This cheap, simple trick, found by accident, ended up as the concept for the whole short. All the content I was to create in the coming four weeks would be held together by this very basic tool. I decided to write it down so as not to lose it: “I’m going to create a very detailed creature with the best skills I have at this time. I’m going to bring it into Messiah and animate it in perfect sync to the audio, and then ‘destroy’ it afterwards with simple Time Remapping.”

Then another problem presented itself: I still wanted Marc to do the soundscape. He has studied composition after all, and he’s by far my musician of choice to work with. But he needed some clips of the concept to create the audio, and that was a problem because none really existed at this point. My solution ended up being that I would animate the short with the glitchy audio I had already created, and he would create a new, better sound for it. Remixing it, in a way, but keeping the same major sync points.

Unfortunately, that never happened.

I was working non-stop on the animation right up until the last day before Revision began, and I couldn’t expect Marc to create everything needed in only 12 hours. I’m quite sad about that, but I didn’t want to put him in the same position I was in earlier — what if he wasn’t inspired by what I had created? Lesson learned: next time I’m going to involve him at an earlier stage, promise.

Giving life to the creature (blendshape creation with 3D layers)

I spent some time modelling the figure, texturing it and creating the blendshapes for it. All of this was done inside of ZBrush. I love ZBrush, it has never let me down. In other applications you can very easily end up stuck in a one-way street somewhere down the pipeline or the production process, but with ZBrush I have never had this happen. At any point there is a way to backtrack and get back to what you need to change.

So what are blendshapes? You can think of them as layers in Photoshop, except in 3D space. You create different “versions” of your model on layers, and with a slider you can blend between these to create things like facial expressions.

These are then exported with the original mesh and connected inside of the 3D application. Every major 3D package has these — Blender, 3ds Max, Maya etc. — though the lingo might differ (“morph targets”, for example). The imported blendshapes are invisibly connected to your original model and can seamlessly be blended into each other, which in turn can be keyframed and animated.
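As a rough sketch (not how any particular package implements it internally), the blendshape math is just the base mesh plus weighted per-vertex deltas — the slider is the weight:

```python
import numpy as np

# Base mesh: a handful of vertices (x, y, z).
base = np.array([[0.0, 0.0, 0.0],
                 [1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0]])

# Two "layers" / morph targets: same topology, moved vertices.
smile = base + np.array([[0.0, 0.2, 0.0],
                         [0.0, 0.2, 0.0],
                         [0.0, 0.0, 0.0]])
frown = base + np.array([[0.0, -0.2, 0.0],
                         [0.0, -0.2, 0.0],
                         [0.0, 0.0, 0.0]])

def blend(base, targets, weights):
    """Classic blendshape formula: base plus the weighted deltas."""
    out = base.copy()
    for tgt, w in zip(targets, weights):
        out += w * (tgt - base)
    return out

# Slider at 0.5 on "smile", 0 on "frown": a halfway expression.
halfway = blend(base, [smile, frown], [0.5, 0.0])
```

Keyframing the weights over time is all the animation of facial expressions amounts to.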

Principles of complex effects (or: “Hollywood has time and money”)

One look I wanted to create for the figure was that of a character coated with something like clay or mud. If you’ve seen any of the X-Men movies then you probably know the character “Mystique” (if not: this is what that looks like). Her skin morphs in a pretty crazy way. It’s not just blended, though; it looks like there’s more going on under the surface, like tiny, greebly, flaky stuff. It looks really hard to do, but it is — once again — very simple.

I couldn’t find the technique described in any tutorial, book or 3D forum online, so I couldn’t get it exactly right. My effect isn’t as elaborate or polished as the one I mentioned, but I think it worked out well, and I’m going to show you the basic principles behind it.

The basic idea is to create two morph targets — one untouched (a clean slate, if you will) and one for the transformed shape (a mask, or one with scales, or whatever you like). Step one is to take these two models and blend them using influence modifier objects like a sphere, a cube or any shape you can think of. That gets you started, and you have great control over the look of the animation, but the desired “greeble look” isn’t there yet. So step two is to add complexity where the blending areas of the structure switch from one shape to another. For this I created a third map that distorts the model in a very overdone way. Very random and crazy.

At that blending point this third map is basically used to “destroy” the geometry in a small rim. This gave me that “wrapping around” effect you see when the figure is coated. I could have used an animated map as well to enhance the effect even more (and I think this is how they create the skin effect on “Mystique”, as mentioned above), but I was satisfied with the way it looked.
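Messiah’s influence objects are of course more sophisticated, but the principle can be sketched like this: a sphere drives a smooth 0-to-1 blend weight per vertex, and extra noise is injected only in the transition rim. All names and numbers here are my own illustration, not taken from the actual scene files:

```python
import numpy as np

rng = np.random.default_rng(0)

def coat_blend(verts_clean, verts_coated, sphere_center, sphere_radius,
               rim_width=0.3, rim_strength=0.2):
    """Blend two morph targets with a sphere as influence object,
    adding noisy displacement only in the transition rim."""
    d = np.linalg.norm(verts_clean - sphere_center, axis=1)
    # Weight: 1 inside the sphere (fully coated), 0 far outside,
    # a smooth ramp across the rim in between.
    w = np.clip((sphere_radius + rim_width - d) / rim_width, 0.0, 1.0)
    out = verts_clean + w[:, None] * (verts_coated - verts_clean)
    # "Destroy" the geometry only where the blend is in transition:
    # this factor peaks at w = 0.5 and is zero at w = 0 and w = 1.
    rim = w * (1.0 - w) * 4.0
    noise = rng.standard_normal(verts_clean.shape)
    return out + rim_strength * rim[:, None] * noise
```

Animating the sphere through the body then sweeps the coating (and its noisy rim) across the character.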

Another, simpler, technique would have been to render out two versions of the animation and then simply blend them in post-production. However, this won’t give you correct shadows and the compositing job would have been at least triple the work in the end. Real geometry is nearly always best, if you can get it right.

The devil is in the Multi Displacement Map layering (aka “details”)

As you might know, the models you get from ZBrush can have a very high poly count. On an average computer with 4 GB of RAM you can easily make models containing 100 million polygons or more.

ZBrush can handle this because internally it’s not using polygons but something they call pixols: pixels that carry depth information (related in spirit to voxels, but 2.5D rather than true 3D). It’s a clever fake that lets you work comfortably inside of ZBrush without lag, but once you actually export the model, it is converted into real polygons. The most common approach to maintaining a high detail level is to create a displacement map or normal map and place it onto a lower-poly version of the model.

Displacement maps are nothing more than grayscale images (colored ones for normal maps) that create the illusion of the same high level of detail. A little clarification: with normal maps it really is just an illusion, but with displacement maps the high poly count is actually created during the rendering process.
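In sketch form, a single displacement map just pushes each vertex along its normal by the sampled grayscale value, with 0.5 conventionally meaning “no change”. The sampling here is simplified to nearest-texel:

```python
import numpy as np

def displace(verts, normals, disp_map, uvs, amount=1.0, midlevel=0.5):
    """Push each vertex along its normal by the grayscale value
    sampled from the displacement map (values in 0..1)."""
    h, w = disp_map.shape
    # Nearest-texel sampling of the map at each vertex's UV.
    px = np.clip((uvs[:, 0] * (w - 1)).astype(int), 0, w - 1)
    py = np.clip((uvs[:, 1] * (h - 1)).astype(int), 0, h - 1)
    height = disp_map[py, px] - midlevel
    return verts + amount * height[:, None] * normals
```

A renderer does this per micro-polygon at render time, which is why the heavy geometry never has to exist in the scene file.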

I took these maps and added them to the blending process in Messiah with the same influence blend modifiers to bring details into the model and color space as well. However, a single displacement map didn’t yield the result I was looking for, so I had to research the technique of using so-called “multi displacement maps”.

Imagine a base model, like a sphere with 100 polygons. You can create a displacement map that deforms this sphere into a useful shape, like the basic shape of a human head. With only one displacement map, though, you can only pull “outwards and inwards” to create extrusions and indentations. The side faces of each newly created extrusion can no longer be affected by the same displacement map, and that’s a shame.

If you extrude a shape with a single displacement map, you get some sort of “cubic” form, and there is no information left in the map to displace those side faces again directly, because they are only created during the rendering process. With this in mind, a single displacement map wasn’t really that useful. You need another map that provides that information, and the good thing is that you can stack tons of maps onto each other and get really deep details out of it. At the time I started this project I didn’t know how to do this, but as it turns out, it was surprisingly simple (as in: just the click of a button). You can watch the process in the making-of video at the bottom of this post.
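ZBrush hides this behind a button, but the underlying idea can be sketched as: apply one displacement layer, recompute the normals on the new geometry, then apply the next layer, so later layers can push detail out of surfaces the earlier layers created. Real multi-displacement also subdivides the mesh between layers; the per-vertex heights below stand in for sampled maps:

```python
import numpy as np

def vertex_normals(verts, faces):
    """Area-weighted per-vertex normals from triangle faces."""
    n = np.zeros_like(verts)
    for a, b, c in faces:
        fn = np.cross(verts[b] - verts[a], verts[c] - verts[a])
        n[a] += fn
        n[b] += fn
        n[c] += fn
    lengths = np.linalg.norm(n, axis=1, keepdims=True)
    return n / np.where(lengths == 0, 1, lengths)

def displace_stack(verts, faces, height_layers, amount=1.0):
    """Apply several displacement layers in sequence, recomputing
    normals between layers so each new layer can displace the
    surfaces the previous layer produced."""
    v = verts.copy()
    for heights in height_layers:
        normals = vertex_normals(v, faces)
        v = v + amount * heights[:, None] * normals
    return v
```

Because the normals are refreshed between layers, the second map really does see the “side faces” the first one created, which is exactly what a single map cannot do.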

Compositing is everything (or: “How to make crap look good”)

With the model, morph targets and textures done, it was time to get it to look good. While I’m proficient in creating 3D models, it’d be a lie to say this is my favourite part of the process. I see it more as a necessary evil, because creating things in virtual 3D space eventually brings you to the point where it starts to suck. Or to put it differently: you can never expect to finish your projects in your 3D programme. You need post-processing. Even if your lighting setup is perfect, your animation looks good and the models are perfect — the true power comes when you step out of 3D space and enter post-processing land.

When I started toying with 3D modelling and animation almost 15 years ago, I thought the way the professionals worked was to set up a scene, model and animate it, set up the lights and click the render button. In reality, no movie is made this way, because it’s simply not practical, nor does it yield good enough results. Think about it in terms of layers: what if the base rendering takes 15 hours to finish? What are you going to do if the shadows are too dark or the colors are slightly off? You can’t just render it all again — that’s a very bad workflow.

The correct (and only) way to do it is to composite different layers on top of one another, with the base rendering being the lowest layer in your chain. Everything is done in layers, and combining them (following some simple principles and a few standard rules) is what nails the look and feel of a movie. Once you know these rules you are also capable of breaking them, which is what I usually do. All my pieces were at some point at a stage where I just played around in Photoshop to find a look or a feel, but the basic workflow and the principles remained the same.
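A toy version of that layer stack, with the base render at the bottom. The pass names and blend choices are my own simplification, not a recipe from the film:

```python
import numpy as np

def composite(beauty, shadow, specular, shadow_density=0.7):
    """Combine render passes in layers: darken the beauty pass by
    the shadow pass, then add the specular highlights on top."""
    out = beauty * (1.0 - shadow_density * shadow)  # multiply-style shadow
    out = out + specular                            # additive highlights
    return np.clip(out, 0.0, 1.0)
```

The point of the layered workflow is visible in the signature: if the shadows come out too dark, you change `shadow_density` and re-composite in seconds instead of re-rendering for 15 hours.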

Warning — soapbox moment! It’s really about the basics. Turning on a plugin isn’t just lame, it’s counter-productive, because it means you’ll never know what’s truly going on under the hood, and you’ll therefore be unable to control it well. Some plugins can make your life easier when you are in a hurry, but they are no substitute for a really good set of basic skills. Do your homework. Buy books on anatomy and lighting. Knowing these basic principles will make your work better.

Oh, and a top tip: always render to still image sequences. If you suffer a crash or a power loss while rendering to a movie file, all is lost, whereas with a still image sequence you can just set your renderer to continue where it left off.

Shading the character

The best looking renders, in my opinion, are the ones I get directly out of ZBrush. ZBrush is fast and uses lighting baked into so-called MatCap materials. The only problem with ZBrush is that it has no real 3D space. Again, these are only pixels with depth information, so you can’t “go around them” (or rather, “walk inside of a model”), because you can’t rotate 2D. In ZBrush it’s actually 2.5D information you’re dealing with.

To make skin look real you need a feature called subsurface scattering. Think of holding your hand in front of a strong light source at night: you can see the light through your fingers. This is due to the scattering of light inside your skin. I really wanted this look in the movie, so to approximate it I set up a simple shading network inside Messiah to get a rim shader.
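The rim shader itself boils down to one formula: brighten the surface where it turns away from the camera, which is where scattered light would show through. A sketch of that facing-ratio trick (the exponent is an arbitrary artistic choice, not a value from my scene):

```python
import numpy as np

def rim_term(normal, view_dir, power=3.0):
    """Fake a subsurface-style glow: brightest where the surface
    faces away from the camera (facing ratio near zero)."""
    n = normal / np.linalg.norm(normal)
    v = view_dir / np.linalg.norm(view_dir)
    facing = np.clip(np.dot(n, v), 0.0, 1.0)
    return (1.0 - facing) ** power
```

The result is typically multiplied by a skin-tone color and added on top of the diffuse shading.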

For stills I get everything out of the programme: depth pass, diffuse pass, SSS, specularity and so on. The trick lies in using these elements to make it look real in the compositing stage, in Photoshop (if you’re working on a still image) or After Effects (for movies).

The key things are: color, perspective, depth and hue.

I’m going to show you a simple way of getting a simple, really ugly and random scene to look realistic in some way by just following a few simple steps. Mind you, this technique is for still images only. You can see the end result in this image to the right, and for the how-to, watch the making-of video at the bottom of this post where I go through each step of the creation of this image.

Getting the look down to a sensible render time

I wanted to use the so-called MatCap materials for the shading of the skin. MatCaps handle shading in a very clever and fast way, and they look really cool, so I wanted to use them inside of Messiah. In newer versions of Maya or in Blender the setup for such a shading network is pretty easy, but not in version 6 of Maya, and in Messiah I had no clue how to do it. Luckily there is a nice and friendly Messiah community from which I got help and tips on this. One of them even set up a full shading network for me, but sadly I had no time to implement it due to the looming deadline. In my next production I’m going to use these techniques.

However, I still want to show you how MatCaps can be used inside of Blender. The workflow can most likely be applied to every other major 3D package.

In ZBrush, just render out a sphere with the desired material applied to it. Make sure the document width and height are set to the same value and that the sphere is perfectly placed in the center. Oh, and turn off perspective in ZBrush — using the orthogonal view will give you the same result. Export that texture and map it to the color channel using the normals. Make sure that your shader is set to shadeless, because the light is already baked into the texture, and the painted color information from the actual model is simply added on top of that. This way, you’ll get the exact same look as if you were still in ZBrush. That’s it.
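Under the hood the “map it using the normals” step is a two-line lookup: the view-space normal’s x and y pick a texel on the rendered sphere image. A sketch:

```python
import numpy as np

def matcap_uv(view_space_normal):
    """A matcap is just a lit sphere image; sample it with the
    view-space normal's x/y remapped from [-1, 1] to [0, 1]."""
    n = view_space_normal / np.linalg.norm(view_space_normal)
    return n[0] * 0.5 + 0.5, n[1] * 0.5 + 0.5
```

Because the lookup depends only on the normal relative to the camera, the lighting is effectively glued to the view, which is why it is so fast and why the shader must stay shadeless.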

With everything rendered out, it was time to have fun! I used Time Remapping here and there in After Effects and was fascinated by the sense of randomness and the oppressive mood it caused. The character was animated precisely to the sound, but with the added effect of the sound also being stretched and squeezed wherever I used Time Remapping. I really liked how everything was coming together at this point.

One thing I haven’t touched upon is the glass panel in the movie. It was done entirely in post: just a simple 2D track of the hands touching the glass, and then a few dirt textures on a null object.

In terms of the story, my first idea was to have the character break the glass when he hits it, but I found it more creepy and scary if he stayed stuck behind the wall of glass.

The “Making-of” video and scene files

If you’re interested in the scene files for this movie, you can download them (project files & head), play with them and explore how they were done. I have also created a 30-minute “Making-of” video which touches on many of the techniques described above.

I want to say thank you for your time and your interest in how all this was done. Of course, not every single aspect and every problem that occurred during the creation process could be described here. Even if I had tried to cover them all to warn you before embarking on your own projects, other problems would still occur. The devil is in the details.

It was really, really hard work to create this movie. Pain and fun, and I learned a lot by going through the process. If they happen to read this, I would like to offer an apology to my friends and family, who were really forgiving during my mood swings. :) Also, thanks to the people sharing their comments and critique about the movie so far — I hope you will take the time to watch it and comment below as well.

So, stay tuned, keep creating your own productions, don’t let discipline and perfectionism ruin your creativity — just don’t take life too seriously and have fun (yes, I know — this applies to me as well :)

About the author, Ollie "Cosmic" Borgardts

I am a graphic artist and audio engineer. I consider myself highly experienced in the use of Maya, ZBrush and Photoshop. I was born in 1975 in a small town in Bavaria, southern Germany (that’s where the milk and beer come from). I work with 3D modeling, visualisation and animation. I also do concept art, model creation, jewelry and sculpting via rapid prototyping.

1 Comment

    psonice
    Aug 31 2011

    Good article Ollie. I used to do a lot of 3d (lightwave going back to the amiga, later with maya) but stopped around 10 years back, so this was a really interesting read for me.

    Small question (I’m always curious about this stuff :) on displacement maps: are they still simple greyscale maps that just extrude outwards from the surface? Or can you extrude in any direction? The little pic you gave as an example of combining stuff in photoshop to get decent rendering looks like a classic displacement map, but then some of the spikes are “lying down”, which you couldn’t do with the traditional method.

    Another small question: what format does zbrush output these displacement maps in? I’m writing something that does cool stuff with displacement (but realtime of course :), maybe zbrush would be useful. Are the zbrush maps you used in the project files?
