Breakdown & behind the scenes – “King of Pain”


Hello friends. As promised earlier in June,
here’s the video about that one other video that I made. Well, as usual, I’ll be explaining
the thought & design processes behind everything, I’ll be doing shot breakdowns, and, hopefully,
this time, I’m gonna try to make my video more entertaining and more condensed than
my previous ones. You can make it even denser yourself by using the playback speed option!
Let’s get into it! Before we jump into the thick of it, a quick
word on why I made this in the first place. It was a few things coming together. First,
back during The International 2015, the two finalists for the yearly Arcana vote were
Queen of Pain, and Zeus. The latter won. Second, at the time, I was also listening to a band
I hadn’t paid attention to since I was 4: The Police… back then, in elementary school,
we’d have this sort of games fair at the end of every year (called “Kermesse” in French),
and there were prizes to win; I won an audio tape of the Synchronicity album by being really
good at fishing for plastic ducks. Third, I saw this old drawing by a friend… and
all of this kinda clicked together and I started getting lots of ideas really fast. Character design is a tricky thing. In the
case of Dota, while the game features, in my opinion, much better designs than its competitors,
it compromised some of them for fanservice purposes. And so, in setting out to create the
male version of Queen of Pain, I figured it should be, in the end, as clean of a gender-swap
as possible; much like the Queen is this attractive succubus that’s pandering to guys, the King
should be pandering to women, and bi/gay guys as well. After showing the drawing that sparked the
idea to a dozen-or-so people representative of this target audience, what became very
clear was that it wasn’t attractive; it came off as more of a parody, a joke, than anything
else. What came back the most was “it’s too drag”. In hindsight, it was quite obvious:
he shouldn’t have the exact same clothes as his counterpart. And so, the design was iterated on, again and again,
until the final result. We settled upon a Magic Mike kind of stripper body, not too
muscular; guys often think that a hundred pounds of muscle mass is what is sexy to most
women, but it’s really not. Well, no one, especially me, a guy, can speak for all women,
of course, but my conclusion after this empirically-driven design process is that “fanservice for women”
leans a lot more towards the “Edward Cullen” end of the scale; nice, sharp facial features,
deep eyes, a toned body (not necessarily “muscular”), an Apollo’s belt, and of course, a defined
butt with shape. The body only does half the work, though; what makes someone attractive
is how they behave, and so that other half would be defined later, in animation. I’ve seen a lot of guys assume that the male
and female characters of Dota are equally sexualized, equating the gratuitous cleavage
on most female heroes with the fact that heroes like Axe, Beastmaster, etc. are shirtless
hunks of muscle. But it’s a staggeringly huge false equivalency. Really, if you want to
make fanservice for women, it’s not hard… all you’ve got to do is ask them. Involving
them in the process is even better, of course; it’s Stephanie Everett, who most Dota players
familiar with the workshop know as “Anuxi”, who modeled the King. Overall, the design was treading quite a few
thin lines; being a parody VS. being something badass that can stand on its own, being fanservicey
enough VS. having substance (not that the two are mutually exclusive, but they tend to
repel each other), and having an actually-sexualized guy VS. not getting too many of the latent
homophobic reactions that tend to surface when you have sexualized male designs. There’s
a lot more I could say on the subject, but I’m not sure how to phrase it, and
I don’t think it’s this video’s place to linger on it for too long. We could have gone a bit further and made
him more scantily clad, especially for the pants; and we could have given him some more
muscular thighs instead of going towards lean, in order to maximize sex appeal; but when
all’s said and done, I think we settled in a place that’s reasonably close to the ideal
goal. And in their current form, the pants allow this gem to truly shine. I love this
gem. I really do. You can make so many puns with it. “They’re the family jewels”. “Diamonds
are a girl’s best friend”. And, of course, “hard as a rock”. And, I’ve gotta say, I have
really polished the hell out of this gem. Two custom shader masks and a custom cube
map just for this bad boy right here. It’s the equivalent of whatever the hell these
two things are on Queen of Pain’s belt. The King’s head is loosely based on Sting.
Who is Sting? Well, you’ve heard him in the video, because he’s The Police’s lead singer!
And, to be more specific, it’s loosely based on how Sting appears in the music video for
“Synchronicity II”. And I don’t mean just his facial features, but his hair too. Except
it’s encompassed by horns now. And black. But still, the aim was to have a loose resemblance,
a bit of a connecting thread of sorts. Side-by-side you can KINDA tell, and that’s the point!
Now, something else that’s more vague, but hopefully you see what I’m getting at: the
Queen’s face is unmistakably feminine, but it also has more masculine features than the
other women of the game. Likewise, I wanted the King’s face to blur the lines a bit, and
then make up for it with… make-up. More on that in a minute. It’s a part that I find
interesting; both characters are the gender & sex binary taken to an extreme, EXCEPT the
face. By the way, we also referenced Sting for the body; it was based on his appearance
in David Lynch’s 1984 adaptation of Dune. Last thing to mention here: they have the
same eyes. I remember Anuxi tried different ones at some point but it looked really odd,
so I reverted to the original ones at like, 90%. Eyes in games are notoriously tricky;
remember the Mass Effect: Andromeda issues? You need to have the glint, these reflections
in the eye. However, Dota 2 has no eye shader, unlike other Source games. In those games, it was
dedicated eye shaders that were in charge of going all-out on the eyes, adding reflection maps,
bumps for the cornea, ambient occlusion, and glint. The glint is very powerful. Eyes are
shiny. And so, thankfully, even though Dota 2 has none of the eye shader fanciness, all
you really need is to paint the glint straight into the eyes. Observe what happens if I take
it out… creepy, right? And even though the glint moves with the eyes, which it shouldn’t,
and even though it’s straight-up mirrored across both eyes, which it really shouldn’t
be… it doesn’t matter, because that’s enough for our brains to not see their eyes
as creepy lifeless things. Akasha (that’s the Queen’s name, by the way)
has these things on her face. Make-up? Tattoos? Face paint? Whatever it is, the King needed
those too. But we had to come up with something original and which felt distinctly masculine,
while also highlighting his features, artificially “sharpening” them much like how make-up could.
Christian Gramnaes (whom you might know as “ChiZ”, the artist behind many great sets)
designed a bunch of tattoos. I picked a mix of designs 3 and 8, with a couple of small tweaks.
I liked the result so much that I asked him to design tattoos for his chest as well! The
reason for this being, even though he has this sort of mini-leather jacket top thing,
his chest is… more empty. Well, that’s kind of an awkward phrasing, but you know what
I mean. The chest tattoos serve as a way to bridge the gap a little. Through shader masks,
I also added some subtle glint to them. Now, onto the arms… his shoulder pads are
pretty much a verbatim copy of hers. And, I’m not sure what she’s supposed to have on
her forearms; could it be latex gloves? Something vaguely fetishy? Either way, the more masculine
version of that ended up being hand wraps. They end in this way to mirror how her
whatever-they-are things tear apart here. The wings… are the same. Wings are tricky
to make, rig, and animate; so why bother making new ones when hers fit him just as well? Doing
this allowed me to re-use, mix, and match QoP’s wing animations in some places. His
are actually scaled up a bit, to reflect the fact that he’s taller than her by about a
head. Unfortunately, he doesn’t have the weird bony back thing that the Queen has… I forgot
about that until it was way too late. So we just added those flesh bumps
under the bone parts. Facial animation is still tricky; not much
has changed since my first short, “Enigma’s Exasperation”. While there’s a DMX plugin
currently being developed for 3ds Max, we’re still unable to write facial animation rules
metadata into our models. So it was best to keep it simple, otherwise I’d have to start
tracking a lot of different animation channels, including for right and left parts of the face.
So I omitted that left and right separation for a bunch of the mouth shapes, and, in general,
kept the set of shapes narrowed down. While tech artists, these days, do all sorts
of really fancy things in order to set up facial capabilities as fast and as accurately
as possible, I did it all by hand. It was a bit tedious, because you want to get
every shape as close to right as you can. Any “error” that sneaks in will
be compounded when many shapes enter the playing field at once, which, believe me, they will.
I had to be especially careful with regard to the tattoos; any deformation must fade
away from its “main area” very gracefully, especially around them. In summary, for the design process, the priority
was to reason from the original, final Queen of Pain, and then try to get as close as possible
to the exact male equivalent that could have been produced. I would say that was the core
principle: if he was official, what would he look like? Of course, I don’t wish to pretend
for a second that our art is up to the quality of what Valve staff can do, but it obviously
doesn’t hurt to, in a way, try to “think like them”. In fact, with a couple of exceptions, he was made
and set up the same way as a regular hero model! However, with all that polish put into the
King, his counterpart also needed some attention. I’ve pointed out before that the rigging can
be sub-par on some older Dota heroes, especially those that were released during the initial
“catching up with Warcraft 3 DotA” period of the game. Unfortunately, Queen of Pain
is among those affected. For example, her fingers are very incorrectly weighted. But
there’s also… well, let me go on a tangent here, for those among you
who may not know this already. Remember what I said earlier, how Dota doesn’t
have an eye shader? Hero meshes also don’t have eyes… in the conventional sense. It’s
very much an optimization thing; after all, it’s a top-down game, so there’s not much
point in fully modeling out the tiny things, right? And it’s a decision that does make
sense, as far as the top-down gameplay camera angle is concerned. I personally disagree
with it, because, you only need the fully modeled eyes in the LOD0, and the gameplay
uses the lower-detail LOD1 anyway, so they really could have stuck a couple of half-spheres
in the LOD0. It’s not that many triangles. With eyes topologized the way they are, they
can’t be moved around without distortion. But there’s another decision that compounds
this problem. You see, facial morphs are automatically generated, with, as far as I know, fancy internal
Maya tools that were created by Bay Raitt. It’s a time-saving part of Dota’s character
creation pipeline which, again, makes sense, but was sometimes not cared for enough. So
the morph shapes can behave oddly on occasion. The most recent heroes DO have fully modeled
eyes… but things can still look kind of weird. Anyway, to get back to Queen of Pain: her
eyes do NOT work. At all. Thankfully, I was struck by an idea which, quite frankly, I’m
embarrassed I didn’t come up with sooner. The idea is to duplicate the eyes, and then
move them a little bit outwards. Then, I create an outer loop that will be the equivalent
of the rest of the “sphere” of the eyes. It doesn’t really matter what UV space is associated
with that outer loop of geometry, but it’s a good idea to use whatever padding Valve artists
left. And last, the eyes will be rigged to three bones: the head, and one new bone for
each eye. Eyes with bones are pretty useful because you can drive them with aim constraints;
while the same can be achieved with morph targets, it’s a much fancier process and
one that’s outside the scope of SFM2. Using aim constraints in SFM2 has some limits,
though. It offers nowhere near the same amount of control that you’d get in dedicated 3D
animation software, it can be a little off, the aim center is based on the rotation at
the time you hooked the constraint, and the up vector is considered to be the same as
the target, instead of the parent of the bone that is constrained. In layman’s terms, that
means rolling the head results in, uh, interesting results.
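If you want to picture why that up vector matters, here’s a rough, generic look-at sketch in Python (my own illustration of the underlying math, with assumed axis conventions; this is not the actual SFM2 constraint code):

    import numpy as np

    def aim_basis(eye_pos, target_pos, up_hint):
        # Build a rotation whose forward (Z) axis points from the eye to the target.
        fwd = target_pos - eye_pos
        fwd = fwd / np.linalg.norm(fwd)
        right = np.cross(up_hint, fwd)
        right = right / np.linalg.norm(right)
        up = np.cross(fwd, right)
        return np.column_stack((right, up, fwd))  # columns = local X, Y, Z axes

    # Ideally up_hint comes from the head (the eye's parent), so the eye rolls
    # along with it; a constraint that takes its up axis from the target instead
    # keeps the eye "level" even when the head rolls, hence the odd results.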
Although it’s a bit troublesome, there’s a way to “filter” the offending axis out, by using a second copy of the eyes, locking
it to the first, then unlocking those transforms as a way to “bake” the final constrained data
onto itself. Then the graph editor can be used to zero out the data on one axis. It’s not a perfect solution, and it’s definitely
a gross hack, but, much like how simple painted glint on the eyes makes a world of difference,
being able to have actual eye motion is an enormous upgrade, even if it’s sometimes a
bit off. To further prove my point, check out the difference it makes on the intro sequence. You want me? Come to think of it,
where’s the pleasure in that? The Queen is dead. Long live the King! Unfortunately, there was nothing that I could
do, at the time, about her hands. I discovered a solution while making my TI7 short film,
but I’ll talk about that in the behind-the-scenes video for THAT one. It’s time for the shot-by-shot breakdown!
I’ll be, well, breaking down every shot into its individual components in SFM, so you can
get a good look at the nitty-gritty sleight of hand tricks that are happening. I’ll also
be showing you every shot from alternative camera angles so you can see how beautifully
everything falls apart when you do that. And of course, I’ll be providing commentary as
the video goes on. My original plan had the video split in three kind of “parts”, the
introduction, the hallway, and then the fight. Then again, my original plan also had the
video be a minute and twenty seconds long, so…! Well, this is the first section: the introduction.
I wanted to pull off a bait-and-switch with the King. This is also why the video was initially
called “what if Queen of Pain won the Arcana vote”; I wanted to trick people into thinking
it was going to be about her. Unfortunately, that didn’t really work out, because people
thought I was seriously making a case that the King should *be* the Arcana. Anyway! My
regular partner-in-crime, Alexandra Kern, whose work you have already seen in my previous
shorts, came back and knocked it out of the park more than ever. I knew I wanted to have
the Queen talk directly to the viewer, from a throne room, and after a lot of research
and discussion to narrow down my goals, we started iterating on a concept. Then she did
some color keys, and I settled on the sunset-ish orange one, in order to contrast with her
blue skin. It also offered a much easier source of lighting to manage than torch fire, which
has to be omnidirectional, and that’s very tricky to make in SFM. The dialogue was spliced from existing lines,
and it was motion-captured. Yep, I acted that out in the real world. Well, I’m not too happy
with how some of the arm motion came out, but all in all, it’s serviceable. Something
I forgot to account for was that my chair is not as wide as a throne… and I couldn’t
adjust the animation too much, so the throne’s armrests had to be adjusted to meet me half-way,
almost literally speaking. Then, as you’d expect, the mocap was heavily animated over, by hand. This environment set is made of the same pieces
as the hallway later on, except for the bit that stands out the most; the pillars and
arched ceilings. The idea is that this place is some sort of dungeon that, perhaps, brutish
mobs took over as a stronghold… maybe with Wraith King as their leader. A thing about this movie
is that I really wanted to push my limits, and go ham on what you could call “worldbuilding”,
so none of this is a typical Dota scenario, and it does not take place in the Dota map
or anywhere near it. So all environments in the video are custom-made, but some are more
than others; the throne room and the hallway are 100% original assets. The models are all
very high-poly; they’re pretty much just decimated out of ZBrush, and not retopologized. Not
only because, it’s a movie, so performance doesn’t matter (to an extent), but also because
having all this detailed high-poly geometry allows the low-key grazing angle lighting
to actually get caught on the edges of all this stone… which looks pretty nice! Unlike my previous shorts, I did not use sky
domes in any way, but only cards, because they’re far easier for Alexandra to make.
This makes things a little bit less flexible on my end, but it’s a worthwhile tradeoff.
In this case, I wanted to fit the background mountains in a particular way, get the largest
peak aligned with the throne, so I used the pillar over there as a way to hide the seam
between the card and its flipped copy. I am doing this sort of trick in a LOT of instances,
and if you keep an eye out, you’ll keep seeing me putting conveniently-placed props that
reach to the top of the frame. Here’s the problem though… for lighting
reasons I’ve mentioned before and will mention again later, the far-clipping plane of the
camera, that is to say, the maximum distance the camera can “see”, the FarZ value, needs
to be as short as possible. Therefore, the cards are really sticking to the cliff’s edge
of this room. So you can tell it’s a backdrop. How to solve this? By making the cards move
the exact same as the camera. In a way, it’s akin to the motion matching that old-school
special effects artists had to do. But here, it’s much simpler; I create a copy of
the camera, and, without touching its motion data, I move it so that its first frame matches
the location of the root of the card set. (The second card, the copy on the left, is
locked to the root of the first one, therefore they move as a set.) Then that root is locked
to the movement of the copy of the camera, and there we go, it looks just like a regular,
infinitely-far-away backdrop.
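In case the locking dance is hard to follow, here’s a minimal sketch of the idea in Python (plain matrix math with made-up inputs; in practice this is done with SFM’s lock feature, not a script):

    import numpy as np

    def lock_backdrop_to_camera(cam_frames, card_start):
        # cam_frames: one 4x4 camera world matrix per frame; card_start: the card
        # root's world matrix on the first frame. The offset between the two is
        # captured once, then re-applied every frame, so the backdrop keeps the
        # same position relative to the camera and reads as infinitely far away.
        offset = np.linalg.inv(cam_frames[0]) @ card_start
        return [cam @ offset for cam in cam_frames]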
Obviously, all this motion transfer has to be done once the motion in question is final. Let’s talk about the lighting setup in this
first shot. As every set for this movie is custom-made, tweaking the global light settings
from scratch was a necessary step. Here, the light itself isn’t used, but the ambient and
specularity are. Every other light is an SFM spotlight. Now for a quick reminder: it’s usually better
to use the global light source for everything whenever possible, because it’s the only one
that has true control over how the heroes are shaded. However, the shadow map will wrap
itself around everything the camera can potentially see, so its accuracy drops off near-exponentially
with distance covered. SFM lights are also much more flexible: you don’t have to go into
the Hammer Map Editor to edit their properties, and they can also be area lights; that is
to say, you can give a light a disc-based radius, so that it doesn’t occupy a single, infinitely
small point in space, giving you “true” soft shadows. The biggest thing to know here is that I decouple
the light that casts shadows and the light that casts volumetrics. The reason for this is,
I want the volumetrics to be nice and sharp, but the shadows should be reasonably
softened with distance. This can’t happen if I don’t separate the two. Those two lights are very far away from the
scene, and have a very low field of view, 1 degree vertically and a bit more horizontally.
I want to ensure that they don’t cover the “outside” of the scene too much, and really maximize the resolution
of the shadow maps that way. Then I have a few more lights, carefully placed
and tweaked to fake radiosity, that is to say, the way light bounces off the ground
and onto the rest of the room. And then, a few more lights to highlight Queen of Pain’s
silhouette. Hey, you see those dust motes? They make a comeback in greater numbers later
on. MUCH greater numbers. I hope you’re ready for that. By the way, the lettering on these
banners is legible… can you figure out what it says? Last thing about this first shot, and also
a bit of an SFM lesson / reminder while I’m at it. Here, at the beginning, you see an
exposure change as the camera adapts to the brightness of the sunset flooding the room.
Whereas this could be done directly in Source 1 SFM, this is all done in post here, because the Dota 2
rendering pipeline is not built with HDR in mind. If something is pure white, in Source 1 SFM,
there is still lighting data that exists beyond that peak white point, to really represent
just how bright it is. While it may not be directly displayed to your screen, it’s still stored
on the back-end. And before I can explain the difference with the Dota 2 SFM, let me explain
what progressive refinement samples are. In order to render motion blur, depth of field,
and area lights (the ones I mentioned earlier), for every frame, SFM re-renders it slightly
differently a certain amount of times; those are the samples. If you have 4 motion blur
samples, and a frame at 1.00 seconds, it will render it internally four times at times that
cover the shutter speed of the camera, and then blend all of those together. Of course, only 4 samples for motion blur
is very coarse in terms of temporal resolution, and for this video, I went with the maximum
of 256 in almost all cases; only in a few shots did I drop to the next step down, 128.
Depth of field works by moving the camera in a disc-shaped area, and, as you might expect,
it’s the same for area lights. The distribution is uniform, otherwise the blurring would be
biased. You can see this process in action here; I’ve created a very powerful, but very
focused light, and given it a large radius. This allows us to see how it gets repositioned
across refinement samples. Refinement samples are also used to do jitter-based anti-aliasing
as well as smoothing out ambient occlusion.
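For the curious, that “uniform over a disc” distribution is the classic square-root trick; here’s a tiny Python sketch of it (my own illustration, not SFM’s internal code):

    import math, random

    def sample_disc(radius):
        # Taking the square root of the random radius term keeps the point density
        # even across the disc; without it, samples bunch up in the center and the
        # resulting blur is biased.
        r = radius * math.sqrt(random.random())
        theta = 2.0 * math.pi * random.random()
        return (r * math.cos(theta), r * math.sin(theta))

    # e.g. one camera (depth of field) or light (area shadow) offset per sample:
    offsets = [sample_disc(0.5) for _ in range(256)]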
Now, let’s go back to explaining HDR and the lack thereof. Here’s the Source 1 SFM. I’m lighting Queen of Pain with a spotlight which,
in real-time, is right onto her and strong enough to clip her into pure white. However,
it has a large radius, so only one or two refinement samples will have the light shining
onto her. Once refined, she’s darker, and the light is not clipping anymore. If you
try the same thing in the Dota 2 SFM, though, she will appear washed out. Let’s assume we have 4 samples for the purposes
of explaining this. All the samples are, in a way, blended together and “averaged”; they’re
not additive to each other. So if we have 4 samples, they will contribute 25% each to
the final refined image. Now, remember: in Source 1, we may be seeing Akasha clipped
out on our screen, but internally, lighting data exists beyond the pure white point of
our screen. So when we “reduce” each sample to 25% of its “original luminance”, the data
is not clipped. In Dota 2, there is no high-dynamic range rendering, so there isn’t anything beyond
the pure white on our screen. What I’ve just said is not 100% technically
accurate, but for most intents and purposes, this means that there are certain things that
will never “look as good” as they would in the Source 1 SFM. If you remember how I showcased
the distribution of samples earlier, with that tiny but extremely focused light, well,
that’s one such case. The amount of “visual energy” per sample is effectively capped
in the Dota 2 SFM, so our circle will always be quite dim. In Source 1, though, it can
be anything your heart desires. Here, I’m creating two of those example circles, with
the right one having 20 times the intensity of the other. They both show up as pure white
on our screen, but the proper amount of luminance exists internally, so once refined, the difference
does show up. In the Dota 2 SFM, to reiterate, sample data goes as far as what’s represented
on your screen, so both circles look the same.
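Here’s the two-circles comparison reduced to numbers, as a little Python sketch (a toy model of the averaging, assuming 4 samples and a light that only lands on 1 of them):

    def refine(samples, hdr):
        # Average the per-sample luminance; in the SDR case, each sample is clipped
        # to display white (1.0) before it gets blended in.
        vals = samples if hdr else [min(s, 1.0) for s in samples]
        return sum(vals) / len(vals)

    dim_circle    = [1.0, 0.0, 0.0, 0.0]   # clips to white in its one sample
    bright_circle = [20.0, 0.0, 0.0, 0.0]  # 20x the intensity, same white on screen

    print(refine(dim_circle, hdr=True),  refine(bright_circle, hdr=True))   # 0.25 vs 5.0
    print(refine(dim_circle, hdr=False), refine(bright_circle, hdr=False))  # 0.25 vs 0.25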
And practically speaking, here are examples where this limitation has affected my lighting work and resulted in images that are partially
washed out. With careful planning and balancing of your
lighting, however, you can make the most out of this standard dynamic range. Color grading
also helps with this limitation. In the next shots, the lighting setup changes
a bit. I do use the global light, and you can tell because the shadows across her face
and in the background tend to jitter a bit. There is a way to mitigate this, and I do
use it for a couple shots later down the line. However, I didn’t consider it to be worth
the effort here. I’ll mention a little trick I’ve applied in all of my sets now; see, you
don’t have control over the global light in SFM. If you want to tweak its properties,
it all happens in the Hammer Map Editor. All you can do from SFM is toggle the shadows
on and off, ramp the global lighting’s intensity from 0% to 100% (all of it: light, ambient,
and specular), as well as transition from day to night, where 0% is night and 100% is
day. Day and night are just two sets of values that are arbitrarily named; and so I exploit
this to control only the global light; my “night” in this map is just the “day”, but
with the light off. The first shot was set to “night”, and the others to “day”. The volumetric light here moves much closer
and only encompasses what’s visible to the camera; this is, of course, totally breaking
the rules, but it allows me to have the camera clearly pass by the parts where it’s occluded,
so the effect stands out a little more instead of looking just like some sort of broad haze.
When her wings block out the sunlight, it becomes much more clear. It was a careful
balancing act to tweak the effect so that it wouldn’t be too strong, and would blend
with the background without revealing that the volumetric was way closer to the camera
than it should be. In hindsight, I do think I might have weakened it a little bit too
much for the cool part of it to be visible, but at least it’s still there and it’s subtle,
so it’s not that bad of an outcome. A fun detail is that her wings are translucent
and the light and shadows received by the back show up on the other side. I’m actually
not entirely sure how they set that up, as I don’t believe her in-game model has one-sided
geometry, but I could be wrong. I am still doing one of my favourite tricks
for wide-angle shots; after rendering a wide-angle field of view, you only need to distort it
back using optics compensation. This really helps make the render feel a lot less gamey,
and even a bit more “movie”-like. At first, there wasn’t any transition to the
next bit. Then I figured, since I have her raise her arms ceremoniously, that’s a great
excuse for a motion cut. This is the second section: the “four scenes”.
Except, really, it’s three, because the fourth one is the hallway section. The lyrics of
the song describe four things. Because the last one is “a skeleton choking on a crust
of bread”, I knew it was gonna be them killing Skeleton Ki—I mean, Wraith King. And so,
the other three ones might as well be assassinations. That’s probably what they’d do, that’s a fun
couple activity, right? My original idea was to fade out all the lights in the throne room
and have some sort of static set-pieces fade in and out from a black void… think something
like the Dishonored endings, but going one by one. However, that proved too cumbersome,
and the idea of building three mini-sets instead was much more appealing.
Let’s get to them in order. “There’s a king on a throne with his eyes
torn out”. Since Wraith King is the fourth target, I had a choice between Monkey King
and Sand King. But Sand King has like 16 eyes, depending on the cosmetics you’re wearing,
and he doesn’t exactly strike me as the type to be sitting on a throne. This set was an
exercise in wrangling the water shader of Dota 2. There are inconsistencies between
the game and SFM, there are inconsistencies between the Hero shader and the GlobalLitSimple
shader which is used by the world… it’s all a bit messy in an intangible way. I’m
not exaggerating when I say it’s intangible. Look! Things shift around based on what I’m
doing in this scene. Even the color of this rock has an influence?! Water is horrible,
it’s awful, and much like real life, it’s best if you stay away from it. You see this thing above the pedestal, out
of frame? It’s there to block the global light’s shadow
on that specific area. Because the global light needs to be
on for water to work properly. Also, water completely ignores the
existence of SFM lights, just like how the Source engine ignores the sanity and
psychological well-being of its users. Speaking of lights, the way the “caustic”
effect is achieved on those volumetrics is with a gobo (more on that later if you don’t
know what they are), and then simply rotating the light. While it’s the Queen holding the dagger, it’s
the King holding the head. The idea is to slowly introduce him in a sneaky way, but
still hidden until the proper reveal. He’s in the other two environments as well, as
well as in the background of some of the rapid-fire hallway shots. For this shot, however, I used
a… simplified version of him because things were already pretty messy
behind there as they were. “There’s a blind man looking for a shadow
of doubt”. Blind man, who’s got no eyes? Faceless Void. He’s from Claszureme, a dimension out
of time, so I figured, maybe I could have these two shots set on a kind of other planet
that could be in Claszureme… but on second thought, I do have to say, there are two issues
with this premise: 1) how would QoP and KoP get there, lore-wise, and 2) as much as I
like how it came out, it does have vibes of a low-budget 1960s Star Trek TOS version of
an alien planet, really… though, to be fair, now that I remember, I had just gotten into
Star Trek at the time and was binging the original series, so it was likely an influence.
If every shot so far hasn’t given you a clue as to how much I abuse volumetric lighting
in this video, this set should definitely give you strong hints now. I experimented
with getting water in there, and even exploiting the water fog as height-based fog to make
it look like there was heavy gas accumulating in pockets in the low parts of this craggy
terrain. Unfortunately, none of that worked out, because I needed the global light to
be out. I iterated on the second shot quite a few times until I decided it would just
be a lot better if I stopped trying to do something super fancy with the camera, and
also kept Faceless Void alive, to make things a bit more interesting than panning over a
completely lifeless scene. “There’s a rich man sleeping in a golden bed”.
My initial idea was to have this very rich-looking bedroom, but it would take too much time to
model and iterate on everything. So I went the way of Scrooge McDuck. I modeled a single
coin, then created a few thousand copies of it, then physically simulated that in three
different ways. The scene was then populated with these three different stacks of coins;
they were duplicated, rotated, scaled, all over the place. It’s a little silly in terms
of performance; up to 46 million triangles end up going through the pipeline. Anyway… rich man, golden bed, it obviously
calls for Alchemist, since he generates so much gold. And beneath his treasure is his
final resting place. So it’s his bed. Or his tomb. Euphemisms! See, that part works really
well, I wish I’d found something like that for the “blind man” bit. Man, though, I wish I could linger on this
a little longer. I think it came out looking really nice.
Of course, I say that today, but in 5 years I’ll probably want to reach
into a time machine and slap myself as I say those words. Because continuously improving
as an artist is, in my opinion, a lot about constantly looking at your work from all angles
and thinking “what could I have done better, in what ways, how…” etc. Anyway, you get
the idea. What makes it satisfying to have it somewhat nailed down is that gold can be
pretty hard to shade, and I think it came out nice, not just on the coins but the atmosphere
itself… the little shades of rose gold in the air… all that. Rose gold is a color
with lots of associations in my head… because I used to play the trumpet, and my trumpet
was rose gold. It was shiny. And nice. I haven’t played it for 10 years. But yeah. Speaking of lighting, there is an absurd number
of lights going on in the second shot. Take a look. This giant underground vault is not
lit much; it’s a cubemap that does
the heavy lifting on the coins. See what’s on the coins? This compass rose
symbol. I tried to have nothing on the coins at first, but it looked awful, so I figured
I should at least try and add some detail. Why a compass rose? It’s a nod to a game show
from my childhood which has resonated through a lot of my work indirectly, as it shaped
my tastes in music and my desire for… places that go beyond, just, this feeling that there’s
something out there, waiting to be explored. It’s complicated to explain, but I remember
talking about this feeling in general in a previous behind-the-scenes video.
Hopefully you see what I’m getting at. This is the third section: the hallway.
A lot of challenges in this part. But, also, a lot of dustmotes. Honestly, it’s hard to
know where to begin, but since there’s so many dustmotes, that’s a good place to start. But even for that,
I’m not quite sure where to begin. You’d think dustmotes are simple,
right? Well no, they’re not. Just like in real life, dust is troublesome,
especially if you’re allergic. The particles aren’t lit; they are self-illuminated.
Therefore dustmotes shouldn’t stray outside the light shafts. The self-illumination is
constant, but the shaft is somewhat of a gradient. The difference can look weird, but it’s not
technically inaccurate; consider that a dustmote is a big object, but we “see” the light shafts
purely because the light would be hitting billions of very very tiny things. It’s the
same principle as concerts using smoke machines because they want their spotlights to create
beams; here, the hallway is filled with not only dustmotes, but… the air is foggy…
or whatever. Or maybe it’s not. Artistic license and all that. Half the reason the dustmotes are here is
to create some sick bokeh, but this requires them to be big. If they’re too small, they
will not create sick bokeh. If they’re too large, they will stand out too much and not
look like dust motes when in focus. There was a careful balancing act of size as well
as tweaking some very useful particle parameters that can clamp maximum screen-based size.
There was also additional cheating with copies of the particle system with different sizes
set, and those were used based on distance, camera aperture, etc. Okay, let’s switch to talking about animation
for a while. Here, you see Wraith King choking as hard as I do when playing as him against
Anti-Mage. You’ll note that King of Pain is still hidden, and that he’s the one holding
him in a chokehold. I’m not very proud of this animation, it was quite hard to make
him look like he was struggling while not moving too much, and this whole ordeal ends
in a snapping motion that is not a neck snap, yet it’s a snap, and that doesn’t make sense.
But I guess it doesn’t matter that much. I am, however, much happier with the right side
of the frame, with Queen of Pain being subtly lit by the green glow, as well as her acting
in general, the “oh, you’re disappointed?” look followed by the sadistic face, then the
smug look. This gives me an opportunity to mention, that canonically, Wraith King has
been trying to seduce her for a while, and he has — if you’ll forgive the pun — tried
to bone her since the days when he was still known as Skeleton King. The rest of the animation in this sequence
are walk cycles. I don’t have much to say about the King’s, I just tried to make it
look determined and confident; as for the Queen’s, it was inspired a little bit by fashion
catwalk. Her boots would be very uncomfortable for a human, but hey,
she’s probably got hooves in there. Last thing… jiggle physics. If you’ll allow me to be pedantic for a moment,
this is often a misnomer, as very little physics are involved in these systems, in the sense
of the physics engine that may be used by the game engine. Anyway, bouncing, jiggling,
etc., that’s fine; after all, these do have weight, perhaps more than you’d expect. So
there has to be some amount of secondary motion; being completely stiff doesn’t look right.
Likewise, there shouldn’t be too much bounce, otherwise you end up with the Dead or Alive
series. She only has one bone for both sides, but that’s not an issue. Some parts were animated
by hand, the rest was using this 3ds Max controller setup where the chest bone is not only using
the procedural jiggle, but its rotation is also constrained to an invisible box ahead
of the chest, whose position jiggles with different parameters. This results in different
positional and rotational animation.
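If you’re wondering what that boils down to, here’s a bare-bones version of the idea in Python (a generic spring/damper follower with made-up parameters, not my actual 3ds Max controller setup):

    import numpy as np

    def jiggle(targets, stiffness=0.25, damping=0.85):
        # targets: list of np.array positions, one per frame.
        # A simple spring/damper follower: the output lags behind and overshoots
        # the target positions, which is all the "jiggle" really is.
        pos = targets[0].copy()
        vel = np.zeros(3)
        out = []
        for target in targets:
            vel = vel * damping + (target - pos) * stiffness
            pos = pos + vel
            out.append(pos.copy())
        return out

    # One pass like this drives the chest bone's position; a second pass with
    # different stiffness/damping drives the invisible box ahead of the chest,
    # and the bone's rotation is aim-constrained at that box, which is what gives
    # the positional and rotational motion different characters.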
The idea with the next shots was to get them in rhythm with the music, switch to lots of different angles, most being fanservice-ish,
and progressively sneak in shots that are in fact, not the Queen, but the King instead,
up until the reveal point. It horrendously breaks the 180° rule, I know, but it also
allows me to really go wild; my favourite shot is this wide-angle one from below. At
130 degrees, it might be the widest I’ve made yet. What I like about it is that it
frames her entire body while also showing a huge portion of the scenery even though
it’s a very wide 2.37:1 aspect ratio; it also highlights her features in a nice way without
being a stereotypical fanservice shot. I will say, though, it would have looked way kinkier
had the video been in the 16:9 ratio!
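To put numbers on that, here’s the usual horizontal-to-vertical FOV conversion in Python (assuming the 130 degrees is the horizontal field of view):

    import math

    def vertical_fov(horizontal_fov_deg, aspect):
        h = math.radians(horizontal_fov_deg)
        return math.degrees(2.0 * math.atan(math.tan(h / 2.0) / aspect))

    print(vertical_fov(130, 2.37))     # ~84 degrees of vertical coverage at 2.37:1
    print(vertical_fov(130, 16 / 9))   # ~101 degrees at 16:9 - much more vertical coverage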
My second favourite shot is this one… it has this movie-like quality, atmosphere, look to the image, and it would make a great “dialogue
while walking” kind of thing. Now, speaking of lighting, let’s go back over
all those shots and break it down. There are a lot of shots to look at, so while this is
getting started, allow me to toot my own horn for a minute. I have to say that I’m pretty
damn proud of the lighting in this part. Remember, in this branch of the engine, there’s no HDR
rendering, there’s no proper handling of overbrightening, there’s no baked lighting, and all the important
shading settings on characters are global. With all that considered, I’m really glad
that this is the final result. However, in my opinion, that’s not even the best lighting
I managed to wrangle out of the Dota 2 SFM so far; that distinction goes to the classroom
in “What does a hero truly fear?”; I think it’s in that set that I’ve truly managed to
fake global illumination and radiosity to a point where it looks convincing. You have
no idea how satisfying that was, and still is. Anyway, here’s how things are set up:
each window has its volumetric shaft, then its actual source of light, then another light
to properly light up the edges of the window, then a fill from the bottom wall back towards
the window to emulate the bounce. However, that bounce was, most of the time, severely
toned down, or tweaked to only illuminate the environment and leave the characters alone.
The characters had a rimlight each, as well as what I call the “bottom bounce”; a light
that is exactly under them, and with a large radius so that it will basically do this [show
me with flashlight illuminating bottom of my face]. You might think, why bother with
shadows and radius when all I need is a subtle source from below? Well, using shadows is
the only way to ensure that the light will only hit their bottom-facing parts; without
shadows, the light will fully wrap around the character, in a half-Lambert kind of way.
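For reference, here’s the classic half-Lambert curve next to standard Lambert, as a tiny Python sketch (I’m assuming the character shading behaves roughly like this; treat it as an illustration):

    def lambert(n_dot_l):
        return max(n_dot_l, 0.0)        # anything facing away from the light gets nothing

    def half_lambert(n_dot_l):
        w = n_dot_l * 0.5 + 0.5         # remap [-1, 1] to [0, 1]
        return w * w                    # Valve's half-Lambert falloff

    # A surface exactly side-on to the light (n_dot_l = 0):
    print(lambert(0.0), half_lambert(0.0))   # 0.0 vs 0.25 - the light "wraps around"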
At first, the reveal shot was like this, but the feedback that I got was that it felt too distant, and one member in my test audience
didn’t even realize what was going on until the close-up on his face. While I liked the
contrast in that shot and how I thought it showed how they were, well, each other’s counterpart,
it didn’t work. So I brightened up things a bunch, and reframed him to be closer and
more towards the center. That eliminated the confusion. The following shot’s goal was really
to “crawl up his body”, and that worked really well, especially the smirk; the feedback on
that was unanimously good. However, I cut short the following shot of them side-by-side,
and replaced two-thirds of its time with another shot that crawls up his body, though it’s
a different enough perspective to not be redundant. It also helps highlight the gem a bit. I would
like to once again extend my special thanks to the women who helped me refine the first
shots of the King to make them as appealing and attractive as possible. The last shot with the door came out the best
of this whole section… man, I wish I managed to reach that level of “movie-like” treatment,
and, as I’ve said just before, that sort of, y’know, aesthetic, everywhere. You might have
noticed, the door crack is not centered. I tried that, but, it turns out, visual weight
and symmetry are concepts that often diverge. In fact, as I’m recording this script, one
example that caught the internet’s attention is the new Google logo breaking a few rules
for the sake of visual weight. I think what makes it really nice
is a combination of three things… 1) The rimlights look really good, for once,
they almost look like they’re out of the Source 1 SFM, which is a miracle. 2) The dustmotes
are there in layers, and even though there’s, well, so many of them overall, they work nicely
here, they don’t dirty up the image, and kind of help read it by giving it depth; the way
they disappear into a nice bokeh as they fly towards the camera is satisfying. 3) The lighting
that leaks through is not uniform in a lot of ways, and its hue shifts with intensity.
In my opinion, subtle hue shifting in lighting gives a nice feel. I love that. One of the things behind that hue shifting
is also how some light shafts flood my scenes in rays. It was accomplished with the use
of custom gobo textures. A “gobo” is, in a way, a mask for the light source. You might
also think of it as a texture. In SFM1, it was possible to overlay noise on volumetric
lights, on top of the gobo. In fact, it was even animated! It emulated the effect of smokey
air quite well. This feature is missing from SFM2. I made my own noise texture, and added
a bit of “chromatic aberration” to it. This is done by shifting the color channels a couple
pixels away from each other. Controlling the sharpness of that noise, and therefore the
rays, becomes a very easy process. Remember what I told you way earlier; light radius
has a large effect on volumetric lights. Just adding a little bit of it “blurs” the gobo
enough for the noise to vanish away very fast.
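Making a texture like that is a five-minute job; here’s roughly how the channel-shifting part could go in Python with NumPy and Pillow (the file name and the noise itself are placeholders, not my actual texture):

    import numpy as np
    from PIL import Image

    size = 512
    noise = (np.random.default_rng(0).random((size, size)) * 255).astype(np.uint8)

    shift = 2   # "a couple pixels", as described above
    r = np.roll(noise, -shift, axis=1)
    g = noise
    b = np.roll(noise,  shift, axis=1)

    Image.merge("RGB", [Image.fromarray(c) for c in (r, g, b)]).save("gobo_noise.png")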
Anyway, one last thing to bring up here with regards to lighting. Remember when I was explaining progressive refinement to you earlier? Well,
you see, at maximum quality, you have 1024 depth of field samples, 256 motion blur samples,
and however many are used for the subpixel jitter anti-aliasing. If all of those had
to be rendered “on their own”, you’d need to apply the 256 motion blur samples to each
depth of field sample. So you’d have to render 262 THOUSAND samples overall. Thankfully,
SFM doesn’t do this. Samples are shared across operations. However, this creates weird edge
cases. While I tried to minimize them, one of them appears in the background right here.
When you have an area light and depth of field at the same time, it is possible for the two
to cancel each other out. Don’t ask me how, all I know is that it happens. This results
in out-of-focus shadows appearing to be way sharper than their surroundings. One way to
work around it is to tweak the radius of the area light, but that can be problematic, as
you might want the large radius to get very smooth and soft shadows in the foreground.
There’s no One True Solution™ to this. This is the fourth section: outside the dungeon. Unlike any previous shots I’d made in the
Dota 2 SFM before, this one covers a vast amount of distance, especially vertical. And
having the backdrop elements be fully in sync with the camera had the feel of an old FPS
map, where you can too easily tell how different the skybox and the 3D level are. So instead,
the backdrops only move about 90% like the camera, in order to have a bit of “parallax”.
That really helped, but not enough to my liking, because we were still dealing with entire
backdrops, not many separate elements like in my TI6 short film, “Lanaya is mine”. Tree
cards were the solution. And likewise, they match the motion of the camera, but not entirely.
The further away something is meant to be, the more it follows the camera; the closer, the less it does.
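It’s the same trick as the throne room backdrop, just with a partial follow factor; something like this (a sketch with an arbitrary 90% value, positions being simple 3D vectors):

    import numpy as np

    def backdrop_position(card_start, cam_start, cam_pos, follow=0.9):
        # Move the backdrop by a fraction of the camera's displacement.
        # follow = 1.0 reads as infinitely far away (the full lock from earlier);
        # lower values leave some parallax, so the element reads as closer.
        return np.asarray(card_start) + follow * (np.asarray(cam_pos) - np.asarray(cam_start))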
Here, only having volumetric lights as “fog” wasn’t enough. The trees over here aren’t lit by the setting sun; they’re in the shade
of the mountain. However, a certain level of light is bouncing off of the scenery and
onto the trees; that’s ambient light. Ambient light has a directionality to it that’s very
diffuse, but patches of fog tend to be high frequency detail. Think of it this way…
clouds really just are big thick patches of fog, but in the sky. So I needed fog over
there, in the shade, to give a sense of depth between the 3D scenery and the 2D trees. Volumetric
lights wouldn’t work, because they’re additive, and can’t effectively convey high-frequency
detail over an area. Remember? Volumetric light + detail + radius = all detail gets
lost. The solution was particle-based fog. Much better. These subtle atmospheric effects
really are a huge chunk of the atmosphere you can give to 3D environments. Just good
old simplistic planar fog alone conveys distance and scale extremely well. In particular, I
want to point out that Nintendo is a master of atmospheric particle effects, especially
in the Zelda series. Fog is not something uniform, it very much has texture and variation, and the way light interacts with it
is also variable to a degree. All those cliff rocks are the same model from
the first act of Siltbreaker. It’s a very versatile model. The dungeon itself goes quite
far back as well… the throne room would be over there in the back. The last shot of this section has a sweeping
vista towards the next location. I do believe I could have probably blended 2D and 3D better
here, but regardless, I’m happy with it. This is the fifth section: the meadow. In
the distance, you can still see the dungeon they came from. While it’s minor, I love that
kind of visual continuity, being able to see both places from each other. The lighting here is very simple: global light
+ two volumetrics. Much like before, they’re far away so that the “beams” are reasonably
parallel. The second one is there to create a few more rays on the right side of the frame. Now let’s talk about this map. You might remember
me talking about how having uneven ground in the Dota map is impossible because it’s
all made with a tile-based system. If I wanted to have non-flat terrain in the Dota map,
I’d have to remake it from scratch. Which I will probably end up doing some day, if
fate pushes me towards it. Well, none of the maps in this video use this tile system, only
the good old tools. In order to place all this vegetation, I had to use the Asset Sprayer.
It’s a very nice tool where you define a list of models, and all the different ways in which
they can be placed; do they follow the direction of the terrain, what are the bounds of the
random scales and rotations they will be created with, etc. However, because it’s not abstracted
behind a tile-based system, every single piece of vegetation is a full-fledged prop entity.
And you know me, I’m a very reasonable person, so I only have, uh, a very small amount, you
know, just 16,105 of them. When entity counts get that large, the process of spraying new
entities progressively slows down to a horrendous crawl. It’s bad.
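Conceptually, a sprayer pass boils down to something like this (a toy Python sketch with made-up fields; the real tool handles terrain alignment, density, undo, and so on):

    import random

    def scatter(terrain_points, models, scale_bounds=(0.8, 1.3)):
        props = []
        for position in terrain_points:
            props.append({
                "model": random.choice(models),     # one of the listed models
                "origin": position,                 # a sampled point on the terrain
                "yaw": random.uniform(0.0, 360.0),  # random rotation
                "scale": random.uniform(*scale_bounds),
            })
        return props  # every entry ends up as its own prop entity, hence the count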
You might be thinking, wait a second, Max, why isn’t your grass the really nice grass that we have in the Dota map nowadays?
Well, I tried to figure out a way to bring that system over to “regular” displacement-based
terrain, but it seems that the grass is completely tied to tiles. Maybe I’ll figure out a way
one day, or maybe, if a miracle happens, Valve will document the system. The substitute is
all those grass models and bushes. They’re not set up to shake with the wind, but well.
At least they’re there. They make up half of the overall vegetation prop count. This is where I re-use the particle fog effect
from earlier. Again, I can’t stress enough how little subtle touches like this really
help. The lighting setup is the same, the lights were just moved a bit to make the beams
longer and more defined. So here, he actually goes along with the song,
right? As if he were its singer. I mean, his face is loosely based on Sting’s, so he would
probably have the same voice as well. And the song is about him, in the first person,
so it’s as if he were the “author” of the song. This is part of the reason I went ahead
with this project in the first place; the lyrics were flexible enough to be interpreted
in a whole bunch of ways, and while the song was originally written to be about, I believe,
Sting’s divorce, it fits a story in a fantasy universe just as well. “It’s the same old
thing as yesterday”, something’s happening again, maybe they’ve been here before,
but they forgot something? Ok, now, in this transition to the next section,
I have to introduce the thing I mentioned earlier: multi-pass rendering. This is done
to get around the low resolution and accuracy of the global light’s shadow map when the
camera’s FarZ plane is too far away. The trick is quite simple: the shot is rendered as usual,
with the full distance camera. Then, I duplicate the camera, and on that copy, I set the FarZ
distance to be low enough so that the shadows won’t look like garbage. And thankfully, when the
background is empty, Source Filmmaker can output the “void” as a grayscale alpha channel.
However, there are some limitations with this technique; the color of the “void” is still
there in the image; motion blur or depth of field will exhibit gray edges when blending
the second pass on top of the first one. I looked into this issue, and this is probably
something to do with pre-multiplied alpha, or maybe the lack thereof. Either way, this
is 95% solved by choking the matte. In summary, if you’re familiar with the concept of cascaded
shadow maps in games, this is kinda like doing the same thing manually, and without a nice
soft transition between the two. In fact, another issue to watch out for when doing
this is the SSAO; it shows up at the edge of the FarZ plane. However, it’s easy enough
to get around that; you only need to tweak the ambient occlusion settings so that the
“seam” will be small enough to be choked out, or not show up significantly
in the first place.
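The compositing side of it looks roughly like this in Python with NumPy (a sketch of the layering and the matte choke, not my actual post setup):

    import numpy as np

    def choke(alpha, px=2):
        # Erode the matte by a few pixels to hide the gray fringes that motion blur
        # and depth of field leave around the "void" areas of the near pass.
        out = alpha.copy()
        for _ in range(px):
            out = np.minimum.reduce([
                out,
                np.roll(out, 1, axis=0), np.roll(out, -1, axis=0),
                np.roll(out, 1, axis=1), np.roll(out, -1, axis=1),
            ])
        return out

    def composite(far_rgb, near_rgb, near_alpha, choke_px=2):
        # Layer the short-FarZ pass over the full-distance pass using its alpha.
        a = choke(near_alpha, choke_px)[..., None]   # broadcast over color channels
        return near_rgb * a + far_rgb * (1.0 - a)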
This is the sixth section: the meadow, but in the past. Like I’ve said, I’m pretty much just going along with the song and trying
to make something that fits it. “It’s the same old thing as yesterday”, yesterday. The
past. Long ago. You know. This building’s a special card; I topologized
the drawing in order to give it depth. In a way, it’s the reverse of the usual modeling
workflow; instead of making a model from concept art, I… made a model from concept art? [pause]
Ok hang on. What I mean is, where the 3D model dictated what the original art would
be rendered as, here it’s the opposite…? Yeah, I’m not sure how to describe that. Whatever.
The reason we did it that way is because… I’ve had this idea in my head for a while
and I’ve always wanted to try it. Also, it was way faster than actually modeling and
texturing a building. Unfortunately, one thing I didn’t realize until too late is that the
building should have been drawn without perspective, in an orthographic projection; that’s because the only
perspective you want applied to the drawing is the one that happens once it’s 3D. But
if you already “draw it in 3D”, then the perspective happens twice and it kinda looks off. Anyway,
it’s on screen for less than two seconds and it immediately explodes so WHATEVER. If you’re gonna ask me to describe what’s
happening in terms of narrative… the idea is that there’s this temple of Warlock-like
priests hidden in this valley, and they do demon summoning as their Sunday hobby, but
this time the invocation goes wrong, because they wanted a big-ass demon to chain to their will,
but they got more than what they bargained for. The invocation left an evil rune or marking
or whatever in the ruins of that place, and that’s what the King was looking at a couple of shots ago. To further highlight that it’s a different time,
there are slight changes to the environment; the ground uses a different texture set.
I wish I could have done something a bit more obvious like having the present meadow clearly
have orange, autumn trees, vs. the spring/summer trees of the past, but unfortunately, sometimes
you’ve got to make do with what you’ve got. The cracks going up the building are a pretty
simple trick; using the same mesh, this time, the magical glowing cracks that signal impending
doom are drawn onto the existing drawing. Then they’re exported separately as a transparent
texture, which is applied to a copy of the building card mesh. This copy is slightly
angled away from the camera and pulled towards it across a few frames, so that it looks like
the cracks are going from the ground towards the top, instead of appearing all at once.
It’s the cheapest way to do it, and it’s all in-engine, so I don’t have to do it in post,
which, in hindsight, I could have, but perfect is the enemy of done. The exploding rubble is a more-than-reasonable
amount of copies of various particle systems that usually happen when the Dire ancient
explodes as you lose 25 MMR. Both building cards are scaled down to nearly 0% and disappear
under the terrain, a bunch of lights appear, etc.; it’s all very messy. A tricky part was
getting enough rubble to appear so that it would look like a reasonably plausible explosion.
Unfortunately, a lot of effects in Dota, including this one, assume that they will be landing
on completely flat ground, so there was a bunch of scaling, rotating, and animating
of all these sets of rocks so that they wouldn’t land in mid-air despite starting high up there. Some of these rubble explosions have an element
of randomness to them, too, and I got lucky that, instead of having rocks throwing themselves
at the camera to clip into it, I had these cover the lower half as I refocused and zoomed,
just as I hoped. The global wind properties are also animated
to obscene extremes to have the trees get “pushed back” by the shock wave of the explosion,
but it’s not very noticeable. From another angle, though, it’s a neat effect. Now let’s talk about this awful weather we’re
having today. Moderately strong winds, heavy rain, it’s not a deadly storm, but I wouldn’t
take a stroll outside. Ok, how does that work in SFM? Obviously, particle effects,
but there’s something else. The global light has the ability to control
not only the amount of specular that comes back from materials, but also the broadness
of the reflection. This is a global control, so if it’s set, all world elements take on
those properties, including props. This is how everything looks wet and soaked. I based my rain on the existing rainy weather particles.
I greatly increased the amount of pretty much everything in there, and tweaked the sub-systems
to get the rain to look nicer from all these non-top-down perspectives. Now, if we pull
back away from the camera, the magic trick is revealed; there is only rain around the
camera. Just… a lot of it. Because having the rain everywhere would absolutely murder
the Source engine. Thankfully, because the lifetime of a single rain drop trail is very
short, it’s ok for the system to be parented to the camera, even if it moves a bit fast.
For example, this wouldn’t be possible with snow, because while a single rain trail has
a lifespan that averages half a second, a snowflake would need a lifespan of at least
5 seconds, as snowflakes fall slowly and need plenty of time to fade in and out without it being
visually obvious.
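To put rough numbers on that reasoning (the emission rate and camera speed below are invented; only the lifetimes come from what I just said), the steady-state particle count is roughly emission rate times lifetime, and a camera-parented emitter only “trails behind” a moving camera for about one lifetime, assuming the particles simulate in world space once spawned:

```python
# Back-of-the-envelope particle budget for a camera-parented weather system.
# Only the lifetimes come from the text; the rate and speed are hypothetical.

def live_particles(emission_rate_per_s, lifetime_s):
    # at steady state, particles alive = rate * lifetime
    return emission_rate_per_s * lifetime_s

RAIN_LIFETIME = 0.5   # seconds (average rain trail lifespan)
SNOW_LIFETIME = 5.0   # seconds (minimum snowflake lifespan)
RATE = 2000           # hypothetical particles emitted per second
CAMERA_SPEED = 300.0  # hypothetical camera speed, units per second

print("live rain particles:", live_particles(RATE, RAIN_LIFETIME))  # 1000.0
print("live snow particles:", live_particles(RATE, SNOW_LIFETIME))  # 10000.0

# Already-spawned particles stay where they were born, so a fast-moving
# camera outruns them by at most speed * lifetime before they die:
print("rain lag behind camera:", CAMERA_SPEED * RAIN_LIFETIME, "units")  # 150.0
print("snow lag behind camera:", CAMERA_SPEED * SNOW_LIFETIME, "units")  # 1500.0
```

Ten times the live particles and ten times the visible lag is why the same trick falls apart with snow.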
The atmospheric fog effect from earlier makes a comeback in a modified form to emphasize wind, and also to represent the “misty” part of the rain. I wish I could have had scrolling droplet
overlays on the characters, but back then, I didn’t know how to override materials for
existing models. Even then, I think that kind of detail may be outside the scope of the
Dota hero shader without some heavy compromises, or really weird tricks. That said, the particle
droplets are pretty nice as they are. They’re based on the stock particle system that
is applied to characters under rainy weather, but tweaked heavily; I’ve also got like 4
different versions of it because each character needs different properties, max counts, scaling,
and emission rates. Even though particles are supposed to be able to be spawned directly
on models, it turns out that either this is a broken feature, or I’ve got to do a weird
summoning ritual with virgin goats under a new moon to get it to work. Instead, they’re
spawned along hitboxes. Thankfully, Queen of Pain has the proper hitboxes set up. I set them up manually on the King,
and as accurately as I could, since it can be a little tricky
to properly encompass all these non-boxy shapes with boxes.
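If you’re wondering what “spawned along hitboxes” boils down to, it’s basically this (a generic Python sketch with invented boxes, not the actual particle emitter): pick one of the character’s hitboxes, then pick a random point inside it:

```python
import random

# Invented hitboxes, as (min corner, max corner) in each bone's local space;
# real ones come from the model and also carry the bone's world transform,
# which is skipped here to keep the sketch short.
HITBOXES = {
    "head":  ((-4.0, -4.0, -4.0), (4.0, 4.0, 4.0)),
    "chest": ((-8.0, -6.0, -10.0), (8.0, 6.0, 10.0)),
    "wing":  ((-2.0, -20.0, -15.0), (2.0, 20.0, 15.0)),
}

def random_point_in_box(box):
    (x0, y0, z0), (x1, y1, z1) = box
    return (random.uniform(x0, x1),
            random.uniform(y0, y1),
            random.uniform(z0, z1))

def emit_droplets(count):
    """Pick a hitbox for each droplet, then a random point inside it."""
    names = list(HITBOXES)
    return [(name, random_point_in_box(HITBOXES[name]))
            for name in random.choices(names, k=count)]

for name, pos in emit_droplets(5):
    print(name, tuple(round(c, 1) for c in pos))
```

The better the boxes hug the mesh, the less the droplets float in mid-air or sink inside the body.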
The Hellsworn Golem, though, had… these. If I wanted to fix this, I would have had to reimport it from scratch. Sylei, the
creator of that Warlock set, saved the day and provided me with the source files! On
top of being able to create the proper hitboxes, the texture resolution was greatly increased,
and the material settings were enhanced to have nicer metal,
as she had originally intended. This is also the only scene (besides the “blind
man” set) where I use the game’s own sky domes. However, they’re affected by fog, which I
am also heavily using here, but color grading allowed me to pull detail back out of the flattened
gray tones. A cool thing about those sky domes is that you get to rotate them around
manually! I do that to vaguely simulate the fast-moving clouds of stormy weather. I simultaneously hate and love this shot.
I hate the first half and love the second. The way he starts to fly is awkward, and when
he gets punched, it’s too cartoony. But the Queen flying in looks super badass. I… I
don’t want to look at this shot anymore. Anyway… I’ve had people ask me whether this
was a reference to Shadow of the Colossus. It wasn’t! At least, not deliberately. It’s
possible that it was a subconscious influence, as many things are… but I’ve never played
Shadow of the Colossus. Or ICO. I did, however, play and really enjoy The Last Guardian…
and I want to play Shadow of the Colossus whenever its next remake comes out. Technical trivia time. What’s happening here
is that the golem is scaled up, then Akasha is locked to its rootTransform, so she inherits
its scale. So I then have to scale her down. Then the camera is locked to her rootTransform…
but something internally doesn’t like that, and manipulating transforms becomes a little weird.
And because of this, I have to resort to something which is usually a last resort: splitting
a shot into two. Because you see, that knife has to switch hierarchies in the middle of
the shot, but doing it the usual way is pretty much impossible due to the scaling shenanigans
that happen twice before reaching the knife down the hierarchy. A cool thing about SFM is that, when you modify
the transform hierarchy by locking or unlocking a transform to another, the animation will
not change. However, scale is, with the exception of One Weird Trick™ (which I can’t remember
at the moment because it’s vague and counter-intuitive), always inherited from the parent. Given the
weird scaling things that are happening in this hierarchy, I’m sure you can imagine where
this is going. So the hierarchy switch is “performed” across a cut, and I manually matched
the scale and the rotation of the knife as closely as I could across the switch. Thankfully,
the particle systems played nice across the cut.
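For the curious, the “matching” I did by hand across the cut is really just this bit of transform math: the knife’s world transform shouldn’t change, so its new local transform is the inverse of its new parent’s world transform times its old world transform. Here’s a minimal numpy sketch with made-up scales and offsets (and no rotation, to keep it short); it’s the principle, not the actual scene data:

```python
import numpy as np

def trs(scale, translate):
    """Tiny helper: uniform scale + translation as a 4x4 matrix (no rotation)."""
    m = np.eye(4)
    m[:3, :3] *= scale
    m[:3, 3] = translate
    return m

# Made-up hierarchy mirroring the shot: the golem is scaled up, Akasha is
# locked to its rootTransform (so she inherits the scale), then compensated.
golem_world  = trs(3.0, (0.0, 0.0, 0.0))
akasha_local = trs(1.0 / 3.0, (0.0, 0.0, 2.0))
akasha_world = golem_world @ akasha_local

# The knife starts under Akasha (hierarchy direction invented for the example)...
knife_local_old = trs(1.0, (0.5, 0.0, 0.0))
knife_world     = akasha_world @ knife_local_old

# ...and must end up under the golem without visibly moving across the cut:
knife_local_new = np.linalg.inv(golem_world) @ knife_world

# Same world transform from either hierarchy, so nothing pops on screen.
assert np.allclose(golem_world @ knife_local_new, knife_world)
print(np.round(knife_local_new, 3))
```

SFM handles the position and rotation part of this for you when you re-lock a transform; the inherited scale is the part that doesn’t come along for free, which is what I had to eyeball across the cut.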
When she drops back down, a happy accident occurs; all the droplets underneath her make it look like she is splashing a lot of water
upon landing. It’s not something I did, it just looks that way. It also happened when
the King was running towards the Golem. This part where she lifts her companion off
the ground is something I wanted to have since the early days of this idea floating around
my head. In terms of stereotypes, it’s another gender-swap. In fact, this is kinda the trend
of the video; at first, you think it’s about the Queen, then it turns out it’s about the
King, but then, it’s about her again. This is another way to give her more depth than
“well, I guess she seduces people and kills them in their sleep afterwards…”, like,
nah, she can probably kick their ass too. I gotta say though, that scene was a royal
pain to animate (no pun intended). A pair of wings is bad enough as it is, but a second pair
that has to look floppy without intersecting with the ground or either character… This is the… seventh section? It’s just
the ending, in the present again. Technically it’s still the fifth because
it’s in the same file, but whatever. Remember how I keep saying that I’m really
bad at estimating how long my ideas are gonna take? Well, I actually almost outlasted the song,
to the point that it’s a little awkward that you’re still not supposed to be hearing what
they’re saying, as the song is fading out. AH WELL. What’s done is done. Now, I didn’t touch on this subject until
now because I believe that this is very much a “show, don’t tell” kind of thing, and that
what’s portrayed in the video should speak for itself, but hey, I’m also here to talk
about as many things related to this video as possible. The thing in question is, how
do these characters relate to each other? Well, they are definitely not related, but
I like to think they’d be literal soulmates; for demons, hell, demons that do what they
do, I think there’d be a certain beauty in them being very loyal and loving to each other.
Either way, I wanted to bring some humanity to them, and not just treat them as sexual
objects, especially Queen of Pain. Now, before we wrap things up, I’m gonna quickly
bring up every little thing that I didn’t manage to work into a previous section, as
well as additional questions that were sent to me for this video. And actually, it turns
out the first question deserves its own section! The question in question for this section
is “how long did this take you?”, so I figured it’d be fun to try and lay out a project timeline
that’s as complete as possible. To answer the question directly, you could say two years, since I got the idea around The International 2015, but of course, that’s not an accurate answer.
Progress was really on-and-off, intermittent, irregular, for a long time, and I only worked
on it full-time during the last couple of weeks. I have an extensive collection of WIP
screenshots, files, and chat logs which happen to have time stamps! However, this will not be
100% exhaustive as I’m going by the traces I left. The King’s model started being worked on in
November of 2015. Here are screenshots from between December 1st and 6th. Then, the following
February, he was looking like this. The face tattoos were concepted on the 15th.
I started rigging him at that time. The intro mocap was recorded on March 26th. Then nothing happened
for a long time. Things picked up again in the middle of October, when I got around
to finishing the King’s rigging. A few days later, I started looking into how I’d make
the facial animation shapes. I had considered bones for a while, due to the lack of blending
rules, but went with morph shapes in the end. Before that could start, teeth had to be modeled
by Anuxi, and that happened in early November. Then nothing happened for a while. This is
what the model looked like at that point, and how it would stay until early November of
2016, when the tattoos were refined a bit and the chest ones added. In the next few
days, I started actually making the first facial shapes, and also doing the first engine
imports. I also started hooking up the wings and doing the weird arcane crap that comes
with rescaling an already rigged mesh. Towards the end of the month, I started doing the
more troublesome part of facial animation: the mouth. Then, nothing until late February of 2017.
I imported the mocap data that I had acted out a year prior, for some basic engine previsualization.
You can see, here, how it progressed, as I (re)animated over the mocap. In the middle
of March, I did the eyes hack for Akasha. Hey, you know who
this screenshot reminds me of? Dril. And in the last few days of the month, a lot of things happened: the facial
shapes were finalized, the throne room set was blocked out,
and I started working on walk cycles. First couple days of April, I started working
on the second section and its set. Then nothing much for a while,
then I got started on the “blind man” set. On April 18th, this was how things were looking. May comes around, I get started on the hallway section. May 7th, this is the current draft. Then not much happens until the end of the
month, which is when Anuxi started working on the environment art. Now, the pace really
picks up, so I’m going to narrate it a bit more like a list. June 1st: environment art is in. I start lighting
it. I also look into exploiting water fog in the “blind man” set, but it doesn’t work out.
I go back to the intro section and animate Akasha’s eyes. Besides polish, that was one
of the last things to do animation-wise. The environment art is still missing there. June 2nd: experimentations with giving the
tattoos different shader parameters. June 3rd: animating the struggle a bit more,
lighting it as well. June 5th: experiments with color grading on
the intro section. I try a couple more ideas in the “blind man” scene before settling on
the final one. Also starting to light the entire hallway with a defined set of lights
for each window. 43 seconds are content-complete, that is to say, the vast majority of the work
is done in all steps of the process: animation, lighting, color grading, etc. “Content-complete”
is a bit of a misnomer; after all, remember, “art is never finished, only abandoned”! June 6th: working on the hallway section’s
rapid fire shots. This is the day when I polished the gem. While tweaking the shader, this really
cool-looking accident happened. June 7th: the final environment art for the
throne room is arriving. I start tweaking the lighting and color grading to adapt for it. June 8th: still tweaking lighting and color
grading for the intro. June 9th: besides a few minor color grading
tweaks pertaining to saturation of certain ranges, the intro reaches final state. June 10th: I reuse the throne room’s ground
tile in the hallway, scaled down and flattened. All the hallway shots are in. June 11th: I notice that, when rendering at
4K, SFM2 doesn’t smooth ambient occlusion at all. I look into mitigating the issue in
After Effects. 60 seconds are content-complete. I start building the set for the exterior
of the dungeon. June 12th: Most of the work is done on that section. Here’s the draft render; notice the lack of tree cards in the first shot, as well
as the unfinished vista card in the second. June 13th: I rework two shots from the King’s
reveal moment after receiving more feedback. I also use the opportunity to plump up his
behind a little more. June 14th: I rework his texture to highlight
his apollo’s belt more. I also get started on the meadow set. June 15th: preliminary render of all present-time
meadow shots. You can see my first idea for the transition into the past kinda coming
together; I was thinking about having a close-up on his face as lighting shifted by, then when
the camera pulled away it’d be the past. However, after realizing it was not a good idea, I
ended up doing it the way it is now. June 17th: testing the rain in the meadow’s past. June 18th: replaced the transition shot with
a new one. It becomes clear I’ll need to figure out a new way to tackle
the inaccuracies of shadows. Still not sure what the monster they’re gonna fight should be! Then I see the awesome
Hellsworn Golem again and I realize it’s the one. June 19th: meadow explosion previz. June 20th: 1 minute and 25 seconds are content-complete.
Messing around with a cool sweeping shot idea. I discovered that a certain combination of
parameters on the Hero shader prevents fog from being applied to the material, and while
I didn’t use this newfound trick in this video, I did use it in my TI7 short film. June 21st: making this cool threatening shot
of the golem. Then realizing that it would make a lot more sense if he was showing off
some cool magical powers. Then also realizing that it would be a lot better if he faced
left, because when the King walks towards him, he is facing right. That way, they’d
face each other, and it would look more confrontational. The shot was already done, so…
I flipped it in editing! June 22nd: animating the bit where the Queen
picks up her boy off the ground. June 23rd: same. June 24th: making the ending section. June 25th: finishing the golem fighting shots.
And releasing! You might have noticed, towards the very end,
information got more sparse. When making a little bit of progress every so often for
so long, it makes sense to share it, screenshot it, make draft renders, all traces that I
could go by to establish a timeline. But when things were at their busiest, that didn’t
happen. What I do remember, however, is that the two pieces that were missing until the
last moment were this shot, as well as the Queen up on the golem’s face. Anyway, I hope you found that timeline interesting. Do you do storyboards? I can’t draw and I picture everything in my
mind, so I don’t have much use for them. Storyboards are great and sometimes necessary for team
collaboration, but I worked alone; the teams that I get together to help me on my movies
are there for custom assets. Besides, when it comes to work that is derivative like this,
and also takes place inside a game engine, you could argue storyboards aren’t as useful
as they’d be in an environment where everything is made from scratch. You know what I mean? How would King of Pain sound when he screams? So yeah, in summary, I listened to a song,
I thought the lyrics were cool and could fit a fantasy universe such as Dota’s, I wanted
to bring a bit of personality to these characters, I wanted to make something a little subversive,
and I liked the challenge of seeing a character through to completion, from the very beginning
to the very end. I hope you enjoyed this labor of love, and if you didn’t, I still hope you
enjoyed this breakdown video. If you have any questions that I didn’t cover here, please
feel free to ask in the comments; I’ll do my best to answer! See you next time!

27 thoughts on “Breakdown & behind the scenes – “King of Pain””

  1. Hey folks! Much like last year, I fed my huge transcript to YouTube in order to create subtitles! My accent being what it is, don't hesitate to turn them on 🙂

    Also, don't forget to read the description; there's a table of contents with time codes in there, if you wanna jump straight to something that interests you.

  2. SFM2 is looking so freaking amazing but learning all the new things it has to offer and the new model format, maps etc… ugh…

  3. the amount of time you invest in this short movie…….and there are those who spitting trash words to you…….

  4. just finished this video. these are always my favorite and I love seeing your personal process.

    out of curiosity how much of what you do (not just in this film, but in general) is mo-cap when it comes to animation for humanoid characters? I love what it can do, especially in the right hands but the loss of some important classical animation properties always makes me sad when I see it in some AAA properties

  5. I'm wondering how to make the same background. You can add any picture there and how to shove it into sfm. (Google translator, sorry for the "super" English)

  6. By the way, I've updated my "open-sourced assets" website for the first time in forever; among some models that I've used in my Dota shorts, I've also made King of Pain available, as well as the two CAT rigs I made for him and his counterpart. http://source.maxofs2d.net/

  7. Will you do a breakdown of " What a hero truly fear" ? as a 3d artist, these breakdown videos sure enlightened me a lot! thanks for such crazy effort!

  8. That's one among golden voices the broadcasting firms are looking for mate. It's about time for you to step up to the public light
