Inside Animation: The Process

As I descend further into the depths of animation Hell, I thought it would be good to expose my animation process to the world. At this stage of the project, I’ve completed half of the cutscenes in the game. I’m able to tackle one cutscene every week. I have a good sense of the workflow and I’m getting “in-the-zone”, so I feel ready to talk about the steps I take to bring these scenes to life.

Long-time readers of this blog (all three of you) may remember the first time I did this. Last year, when I was still animating the Demo, I wrote two articles about my process. Here are the links to those old articles: part one and part two. Enough has changed since then that it warrants a new piece.

( NOTE: I won’t go into detail about how I animate the characters’ faces, since last week’s blog was a deep dive into that system. Feel free to check that out. )

Now, how do we make the cutscenes in Where Shadows Slumber? To make this process more concrete, I’m going to focus on a relatively innocuous cutscene that I just finished animating this week, called Beach.

 

Cutscene-Writing.jpg

Step 1: Write A Story

You may not have expected this to be the first step on our journey, but it is! Writing is the most important step, by far. Before any work begins on a cutscene, Jack and I have to agree on the game’s story and how it will be told to the player during the game. This process took place a long time ago, at the beginning of 2017, when we locked ourselves in a room and did not emerge until the game’s narrative was pinned down.

Cutscene-Script-Beach

Before the character had a real name, “Grongus” served as a funny placeholder.

The original script was revised twice (officially), with some extra cuts happening unofficially in conversations between the two of us. Our original idea was to have a lot of cutscenes – I think 15 or 16 in total – across the entire game. Since there are 8 Worlds, we wanted an intro cutscene for each World and a “finale” cutscene after the player completed all the Levels in that World. I agreed to this not only because I love animation, but also because I vastly underestimated the scope of the work.

However, we eventually decided to eliminate a lot of the intro scenes. They weren’t really necessary, and it was jarring for players to watch two cutscenes in a row (a finale for one World, and an intro to the next) when they really wanted to get back to the gameplay. We only kept intro scenes for moments where the Player would be genuinely confused without them. The best example is the cutscene called Escape which takes place very early in the game. Prior to this cutscene, Obe is captured by human-like animals in a finale cutscene, and his Lantern is taken. In the very next Level, he’s freely walking around a volcanic prison with his Lantern in hand. Without a cutscene like Escape, players might wonder what happened to the animals, how the Lantern returned, and why Obe is not still in some kind of cell.

Cutscene-Script-River.JPG

Not only was this intro cut from the game, but this World’s puzzles don’t even operate the way we indicate in this script. This is why it’s good to leave cutscenes for the very end of the project!

Therefore, the Beach scene is a bit of a relic as far as cutscenes go. It’s one of just two intro cutscenes left in the game, taking place at the beginning of World 3, the Aqueduct. I felt it was important to show the transition between the River World and the Aqueduct World because they are quite different, and the River finale doesn’t hint at the Aqueduct in the slightest.

Here’s the short version: The scripting process is important, because if we can’t agree on whether or not a cutscene should be in the game, I can’t go forward and spend 40 hours creating it!

 

Cutscene-Sketch-Header.jpg

Step 2: Sketch the Scene

Execution begins with sketching the scene with pen and paper. There is a long gap between the writing process and the actual execution of a cutscene. For reference, I began this cutscene one week ago, on May 22nd, 2018, but the story was written in January of 2017. That’s over a year! As I mentioned above, one reason for this is that puzzles are more important to the game than cutscenes are. Puzzles get top priority! Also, since edits to the script happen sporadically as the game evolves and our scope shrinks, it’s good to sit on the script for a while. That’s why I’m doing cutscenes last.

There’s one more good reason, though! Since cutscenes happen after the game’s art has been completed, the sketching process is a lot easier. Most of the game’s artwork is done using a modular set of puzzle-piece 3D models that can be arranged along a grid to form pathways, bridges and obstacles. I’ve also created a bunch of materials for each World. That means when it’s time to lay out how a cutscene is going to look, I have a wealth of building blocks to work with. Really, all I need to do is draw a few pictures to determine the camera’s position, and I’m good to go.

Cutscene-Sketch

When I sketch a scene, I’m trying to make it look just like the puzzles. My goal for cutscenes is that you never even feel like you’ve left the game. The camera is in the same position and rarely moves, just like the game. I use the same models, colors, camera effects, and even some ambient audio, to keep that feeling of similarity. So when I draw a picture of the scene, I’m trying to get everything in one shot. I need it to work in portrait mode on an iPhone, with room for superfluous art on the sides that only iPad users can see.

That’s why, for Beach, I composed the shot with the outlet pipe near the top of the frame. I know Obe is going to wake up, walk to it, and climb in. Arranging the scene this way avoids a messy camera transition, and lets us focus on the stillness of the moment.

With a good picture to work from, we’re ready to set things up in Unity 3D.

 

BurnedLaptop

Step 3: The Unity Smoke Test

You were probably expecting Step 3 to be “model everything in the scene” or “begin animation” – but I don’t dive into that right away. I’ve gotten into the habit of doing a smoke test whenever possible, before beginning a large amount of work. This is an old phrase from hardware engineering – later adopted by programmers – that refers to plugging in a machine and seeing if it starts to smoke or catch fire. It’s also known as a sanity test.

As the picture above indicates, the modern version of a smoke test is when I douse my computer in gasoline, light it on fire, change my name and move to Mexico. (Wait, that’s not a smoke test. That’s Operation: Secret Grongus. Whoops! Jack, please remind me to delete this paragraph before I hit Publish)

The modern version of a smoke test is when you intentionally do placeholder work just so you can test it and see if something is going to function correctly. After all, if it doesn’t work now, it won’t magically work later. It’s especially important when making a transition from one tool (3DSMax) to another (Unity). The game’s cutscenes will be animated in 3DSMax, but they’ll be viewed by the player in a build of the game generated by Unity. We need to make sure that pipeline works before we dedicate 40 hours of work to something.

First, I create a scene in 3DSMax to work with. I import (3DSMax calls it “merging”) Obe’s character model, and the models of any other characters that are in the cutscene, into the file. I also merge in a few models that I know I will need. For example, in Beach, I know I need to use my ladder pieces so Obe can climb into the pipe.

Cutscene-SmokeTest.JPG

Will Obe, his Lantern, and my modular ladder make it into Unity properly?

I give Obe some basic placeholder animations. Really, it’s just a few frames that will all be deleted later. I make Obe wave his hands, move in a T-pose, do jumping jacks, or something silly. My goal is to make sure the animations are properly translating over to Unity. I do a similar process for objects in the scene and other characters. Obe is animated separately from them because I’m using the same Unity prefab that is used in the real game. This adds another step, but it’s worth it in case there are crucial last-minute changes to his prefab. Along with that, there are a lot of little things to do – Animation Controllers for each FBX file, setup in the scene, camera positioning, light adjustments, and much more. Anything could go wrong, so I’d rather find out before I’ve done a few grueling hours of animation.

Cutscene-SmokeTest-2.JPG

Every cutscene needs its own Unity scene, FBX files for Obe and the rest of the cutscene, and Animation Controllers for Obe and the rest of the cutscene.

I have a small checklist I run down:

  • Can I animate Obe?
  • Can I animate his Lantern separately from him?
  • Can I animate his Lantern if he’s holding it and it follows his hand?
  • Are Obe’s hands, feet, and pelvis “Linked to World”?
  • Can I animate other characters?
  • Can I animate other characters holding objects?
  • Can I animate objects on their own?
  • Do other characters require their body parts to be “Linked to World”?
  • Is there a light? Is that light attached to the Lantern?
  • Is the Lantern flickering properly?
  • Does the camera need to be re-positioned, or zoomed in?
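
On the Unity side, none of these checks need fancy tooling. A throwaway script that plays the placeholder state and logs what the Animator is doing is enough to prove the pipeline end to end. A minimal sketch (the state name is just my silly placeholder):

```csharp
using UnityEngine;

// Throwaway smoke-test script: play the placeholder clip and log whether
// the imported animation is actually running. Deleted once the test passes.
[RequireComponent(typeof(Animator))]
public class AnimationSmokeTest : MonoBehaviour
{
    Animator animator;

    void Start()
    {
        animator = GetComponent<Animator>();
        animator.Play("JumpingJacks"); // hypothetical placeholder state name
    }

    void Update()
    {
        AnimatorStateInfo info = animator.GetCurrentAnimatorStateInfo(0);
        Debug.Log("In placeholder state: " + info.IsName("JumpingJacks")
                  + ", progress: " + info.normalizedTime);
    }
}
```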

When I’m confident that Obe’s animations and the animations of everything else in the cutscene are working well, I can begin modeling the scene in earnest. Now I’ve made sure there won’t be any surprises during the next step.

 

Cutscene-Model-3DSMax.JPG

Step 4: Model Static Objects

We’re ready to bring my ink sketch to life by creating the scene in 3DSMax. This is done by using modular building blocks wherever possible, and by creating new 3D models where necessary. Beach is a bit of a hybrid in this regard. The ladder, for example, is the same model and material used throughout the game whenever Obe climbs a ladder during a puzzle. The water is the same rig we use during Levels, albeit with a special material. But other objects, like the sandy beach, the wall, and the outlet pipe, are unique to this scene. I gave up the strictly modular approach a little while ago, and I think the game is better for it. (Above, the scene in 3DSMax. Below, the same scene in Unity.)

Cutscene-Model-Unity

Now that the models are in place, and nothing is going to change, I can go forward with confidence. I place Obe in an initial pose that matches the terrain, and begin animating the scene by hand.

 

Cutscene-Animation.JPG

Step 5: Keyframe Animation

Recently, when I was at PAX East 2018, someone asked me if the cutscenes in our game were animated using motion capture technology. I took this as a compliment, because I think most people assume motion capture animations are an indicator of high quality. Thanks, random person!

For those unfamiliar with motion capture, think of the character Gollum in Peter Jackson’s Lord of the Rings trilogy. Gollum was animated in 3D, but not by hand – rather, the actor Andy Serkis dressed up in a silly motion capture suit and performed the role himself. Later, computer imagery was placed on top of the scene using data captured from his performance. This technology has also been used to great effect in the Uncharted series of games. As it grows in popularity, there are boundless examples to use. I can’t name them all!

However, that is not how animations are created for Where Shadows Slumber.

Motion capture is the proper tool to use when your resulting animation is intended to be life-like, gritty, and serious. Characters like Joel and Ellie from The Last of Us work well because they are intended to be portraits of real people, so it makes sense to have actors play them. Motion capture also requires a financial investment that usually only AAA studios can afford. If you’re using motion capture, you’re paying actors, renting a large studio space to perform in, buying high-speed cameras, and buying (or building) software to bring the performance from the stage into the virtual world. We don’t have the resources for that, and I don’t want to work that way anyway!

Cutscene-Animation-Frames.png

By setting keyframes at 600 (Obe takes a step) and 605 (Obe slips a bit in the uneven sand), the in-between frames (601, 602, 603, 604) are filled in by the software.

Animation for Where Shadows Slumber is done the old-fashioned way – by mouse-and-click keyframe setting. I’m fairly certain Pixar does this as well, albeit with more complex tools than the 3DSMax Animation Timeline. If you’ve ever seen a documentary on how Walt Disney’s studio created those first frames of Mickey Mouse by hand on cel sheets, you get the idea – the lead animator sets a pose at one point in time, and then a different pose at a later point. His subordinates fill in the gaps, and the result is the illusion of motion.

I don’t have any subordinates, so 3DSMax fills in the gaps for me. Sometimes I work with it, and sometimes I have to fight it because it filled in the gaps wrong. You need a lot of keyframes, and each frame lasts just a fraction of a second (1/30th of a second, in our game). That means an hour of work may get you just 3 quick seconds of animation. The process is painstaking, and it is easily the longest part of creating a cutscene. Beach, a relatively simple 50-second cutscene, required 7.5 hours of animation to complete. The previous cutscene, Wolf, a very involved fight scene that lasts 100 seconds, required 48.5 hours of animation!
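
If you’re curious what “filling in the gaps” actually means, here’s a tiny sketch of the simplest version of what any animation tool does between two keys – plain linear interpolation. (3DSMax actually uses adjustable curves, and these pose values are made up for illustration.)

```csharp
using UnityEngine;

// Illustration only (not Autodesk's code): the "gap filling" between two
// keyframes is, at its simplest, linear interpolation between the poses.
public static class KeyframeDemo
{
    // Keyed poses: frame 600 (Obe takes a step), frame 605 (he slips a bit).
    static readonly Vector3 poseAt600 = new Vector3(0.0f, 0.0f, 0.0f);
    static readonly Vector3 poseAt605 = new Vector3(0.1f, -0.02f, 0.0f);

    // The animator never touches frames 601-604; the software computes them.
    public static Vector3 PoseAtFrame(int frame)
    {
        float t = Mathf.InverseLerp(600f, 605f, frame); // 601 -> 0.2 ... 604 -> 0.8
        return Vector3.Lerp(poseAt600, poseAt605, t);
    }
}
```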

 

Cutscene-Footprints

Step 6: Special Effects

We’re not even close to done yet. Animating the characters in a scene is not enough to bring it to life! Every cutscene needs some kind of special effects, whether it’s footprints in the sand or the drip-drip-drip of a leaky pipe. This never takes as long as actual animation, but it can still be a painstaking process. For example, in the Wolf scene I mentioned above, every time an object fell into the water I had to trigger a particle burst to make it seem like the objects were splashing. That was as fun as it sounds!

To achieve my special effects, I wrote a script called Cutscene Manager. It fires off effects based on the elapsed time of the animation, and I save it only for things I can’t animate by hand. Here are two examples to show you the difference:

Example 1: Footprints in the sand

These footprints can be animated by hand, so I don’t need to use my script. Notice how they appear after Obe touches his feet to the ground – what’s happening here? Well, they are actually just hiding under the sand! I triggered their animations using keyframes, just like anything else in the scene. Above, you can see one that I have selected that is still burrowed under the ground, waiting to rise up.

Example 2: Obe’s Lantern light grows, and then shrinks

We use the solid color black a lot in this game. It represents total darkness, which makes it handy for scene transitions. Every Level and cutscene begins with the world in total darkness, and then a light grows somewhere and the animation begins. I think this helps focus the attention of the player, and it makes transitions less jarring. However, since Lights are a Unity component, their Range values can’t be animated in 3DS Max. 3DS Max has no idea they even exist! Instead, my Cutscene Manager script knows to change the Range of a specific Light at a specific speed at a specific point in the animation. It may seem like a crude solution, but it’s the best we came up with. At the end of the scene, the Light gets another trigger to shrink down to zero – pitch black.
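
If you’re wondering what such a trigger looks like, here’s a stripped-down sketch – not our actual Cutscene Manager, which juggles many kinds of effects, but the same core idea: watch the cutscene clock, and once the trigger time arrives, push the Light’s Range toward a target value.

```csharp
using UnityEngine;

// Stripped-down sketch of a cutscene effect trigger (not the real script).
// At triggerTime seconds into the scene, grow the Light's Range toward
// targetRange at growSpeed units per second.
public class LightRangeTrigger : MonoBehaviour
{
    public Light lanternLight;      // the Unity Light on the Lantern
    public float triggerTime = 1f;  // seconds into the cutscene
    public float targetRange = 8f;  // final Range value
    public float growSpeed = 4f;    // Range units per second

    float clock;

    void Update()
    {
        clock += Time.deltaTime;
        if (clock < triggerTime) return;

        lanternLight.range = Mathf.MoveTowards(
            lanternLight.range, targetRange, growSpeed * Time.deltaTime);
    }
}
```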

You can see why special effects have to come after principal animation. So many of these things require specific timing! If the underlying animation changes, they’d have to change, too. It’s better just to wait.

 

Step 7: Recording for Alba and Noah

Recording the cutscene is my final step, although the cutscene is not done yet. Using OBS, I record my screen with the animation playing. I mute the sound in the game, and I talk during the cutscene to tell our audio engineers what is happening. Some things are obvious, and I don’t need to say them (e.g. he’s walking in sand, which sounds like the sound of someone walking in sand). Other times, a noise comes from off-screen and has no visual representation. Without my direction, Alba and Noah couldn’t possibly guess what is happening in the scene. My recording uses the exact time-frame the cutscene will have in the game, which means they can “score” this video as if it were a short film. From the work they’ve done so far on earlier cutscenes, I can tell the cutscene audio is going to be incredible.

I briefly flirted with the idea of using a high-quality recording of the cutscene in the game, instead of having people view the cutscenes in real-time. However, I don’t trust Unity’s ability to play videos across multiple iOS devices and countless Android platforms. I also wanted to avoid including 10 large MP4 files in the game’s databanks, for fear it would clog up the game. The last reason is that our final cutscene transitions seamlessly into the credits, which need to be translated into multiple languages. This would result in 15 different movie files! I prefer to have that done on the fly using Jack’s JSON file setup.
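
I won’t document Jack’s actual setup here (that’s his story to tell), but as a rough illustration of the approach: a serializable class plus Unity’s JsonUtility is enough to pull translated credit lines out of a text file. The structure below is entirely hypothetical:

```csharp
using UnityEngine;

// Rough illustration only -- not Jack's actual setup. A serializable
// class mirrors the JSON structure, and JsonUtility does the parsing.
[System.Serializable]
public class CreditsData
{
    public string language;
    public string[] lines;
}

public class CreditsLoader : MonoBehaviour
{
    public TextAsset creditsJson; // e.g. a hypothetical credits_en.json

    void Start()
    {
        CreditsData data = JsonUtility.FromJson<CreditsData>(creditsJson.text);
        foreach (string line in data.lines)
            Debug.Log(line); // in the real game, these would feed the credits
    }
}
```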

Once Alba and Noah score the cutscene, I’ll put that file into the game and the audio will play in-sync with the animation, all in real time! Players can pause the cutscene from a top menu, go to the level select screen, skip the cutscene, or resume the animation seamlessly.

 

I don’t exactly know how Alba and Noah score these cutscenes, so I’ll leave that for another blog post. I invite them to share their knowledge with you, dear readers, whenever they feel the desire to do so. (Maybe I’ll interview them about it?)

That’s all for now. I need to go back to the animation mines and make more cutscenes… I’ll see you back here next Tuesday for the June State of the Art. Don’t miss it!

 

= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =

We hope you enjoyed this look at the cutscene animation process. You can find out more about our game at WhereShadowsSlumber.com, ask us on Twitter (@GameRevenant), Facebook, itch.io, or Twitch, and feel free to email us directly at contact@GameRevenant.com.

Frank DiCola is the founder of Game Revenant and the artist for Where Shadows Slumber.

Inside Animation: Face Morphing

When I was showing off Where Shadows Slumber earlier this year at MAGFest 2018, one of my fellow game developers gave me a stellar compliment. As he watched the game’s second cutscene, he said “these animations are so evocative.” What he meant was that the animation was conveying a large amount of emotional detail even though the characters never speak a word. This is especially impressive considering the cutscenes don’t even have sound effects yet!

Sometimes, we only remember the one negative comment we get in a sea of compliments. But for once, a positive remark stuck with me. Evocative. If there’s one thing I can do as the animator for this game, it is to ensure that the player feels a range of emotions when they watch the game’s story unfold. But how can this be accomplished when our character is so small on the screen? More practically, how is this actually achieved using 3D modeling software and the Unity 3D engine?

This blog post is a quick glimpse at how I set up the facial animation rigs for the characters in Where Shadows Slumber.

 

3Ds

First: The Old and Stupid Way

Before I show you how I animate the faces in the current build of the game, I should show you the first way I tried it back when we were creating a Demo of the game. The old Obe model, shown above, had a perfect sphere for a head. In the image above, it’s grey. Then, I put in two snowman eyeballs as flat discs (they look teal in the image above) and a mouth plane that wrapped around his ball-head (obscured above). So far, so bad – nothing can be animated here! These objects are static. His face won’t look evocative at all.

My answer was to create little patches of skin that could be moved around to simulate facial animation. Though they look peach in this image above, they would blend in 100% with his skin tone thanks to Jack’s shader. My philosophy was simple – if the skin slabs were out of the way, his eyes were open. If they blocked his eyes partially, that was a facial expression. In the image above, near the bottom-right, you can see that Obe’s unsuspecting opponent has his skin slabs set to angry because they partially block his eyes in a slanted direction. By moving the slabs around in time with the animation, facial expressions were simulated.

This was supposed to be a “quick and dirty” way of doing facial animation, but it ended up being a “takes forever and looks terrible” way of doing facial animation. I’ll never return to an amateur system like this! The silliest part is that 3DS Max has a system perfectly set up for preset facial animations called Morpher.

 

HeadAnimations

The Morpher Method

By spending more time modeling Obe’s head, I was able to create a flexible skull with some textures mapped onto it (black for features, white for skin) and preset animations with Morpher. This skull can be tuned to different emotions, and even combinations of emotions. Above, you can see how Obe can express a range of poses: angry, devastated, confused, joyous, blissful. Now that you’ve seen the final product above, here’s how to set up your own:

Morph-Base

Step 1: Model the base head

Spend some time crafting a base head for your character. Note that you’ll be unable to edit it once you begin Morphing, so take your time. Create flexible eyes, a mouth, a nose and ears (if your character has those) and be sure to add enough loops so they can move around later without looking jagged. This time, I gave Obe detached cartoon eyebrows so I could be more ambitious with his facial expressions.

Morph-Poses.JPG

Step 2: Duplicate the head as a Copy (not an Instance) and pose it

Now you must copy the base head and move it somewhere else in the scene. (I like to make a Game of Thrones style wall of faces.) Edit the vertices on this model into an extreme pose, such as furious anger or deep sadness. This pose will be what “100%” of this emotion looks like. Note that the vertices from the base head are going to move (morph, if you prefer) into the new positions you give them here, as well as every point in-between. Pay close attention to the topology of your model when you choose new positions for these verts, and your animations will look smooth. Above, you can see I do mouth poses and eye poses separately, so a wide open mouth (agape) can exist separately or simultaneously with wide open eyes and raised eyebrows (shock).

Morph-Combo

Step 3: Connect your pose to the base head in the Morpher modifier

The base head will have the Morpher modifier on it. None of the others need it. From the base head, you can use Pick Object From Scene to slot in certain poses as animation sliders. Then, using the arrows shown next to the poses, you can “morph” these targets from 0 to 100. 0 is going to look like your base head – 100 is going to look like 100% of the pose. If you combine two poses, as I did above, you may get weird results. But in this case, shocked eyes and a mouth agape work well together.
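
A nice bonus is that these Morph Targets survive the trip into Unity: exporting the head as an FBX carries the Morpher channels across as blend shapes on a SkinnedMeshRenderer, on the same 0-to-100 scale. Here’s a minimal sketch of driving them from a script (the channel indices and names are hypothetical – check your own import):

```csharp
using UnityEngine;

// Minimal sketch: FBX morph targets import as blend shapes, so the
// Morpher's 0-100 sliders can be driven from a SkinnedMeshRenderer.
[RequireComponent(typeof(SkinnedMeshRenderer))]
public class FaceController : MonoBehaviour
{
    SkinnedMeshRenderer head;

    void Start()
    {
        head = GetComponent<SkinnedMeshRenderer>();

        // Log every imported channel so you can find each pose's index.
        Mesh mesh = head.sharedMesh;
        for (int i = 0; i < mesh.blendShapeCount; i++)
            Debug.Log(i + ": " + mesh.GetBlendShapeName(i));
    }

    // Combine two poses, e.g. shocked eyes plus a mouth agape.
    public void SetShock(float amount) // 0-100, like the Morpher sliders
    {
        head.SetBlendShapeWeight(0, amount); // hypothetical index for "shock"
        head.SetBlendShapeWeight(1, amount); // hypothetical index for "agape"
    }
}
```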

Morph-Gallery.JPG

Step 4: Repeat Steps 2 and 3 for every face pose you’ll need for this character

I made separate poses for Obe’s mouth (left of center) and his eyes (right of center). The yellow shape in the center is his base head. I tried to make every emotion I’d need, as well as building blocks like “shut R” for the right eye being closed. One thing I didn’t need was detailed mouth animation for talking, since he never says anything in a real human language. He just wails in terror a lot. But if you were doing this for a regular animated film, you’d want a whole set of mouth shapes for the various sounds we make (Chuh! Puh! Quah! Teh!). I’m happy I didn’t need that, because I hate doing those.

Morph-Swag.JPG

Step 5: Animate in a Scene when it’s all ready

This massive setup time bears fruit once you begin animating. Having a flexible facial animation system is remarkable. I love this system so much, and I never have to worry about whether Obe is expressing the emotion I want. Everything is correct and his face is super easy to read, even at a distance. Here, he’s giving an “…OK” kind of look as he escapes prison early in the game’s story. Though this look is not programmed in directly, it’s a combination of four Morph Targets: left eye closed, right eye closed, mouth closed, and “serious.” That’s the beauty of working with Morpher!

 

If you’re building your own facial animation system, be warned that it’s a lot of work. However, it will pay off in the end. Good luck making your animations evocative! Feel free to ask me any questions in the comments, over email, or on Twitter. I’m always eager to help. Happy blending, everyone!

 

= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =

We hope you enjoyed this technical look at the systems behind the game’s artwork. You can find out more about our game at WhereShadowsSlumber.com, ask us on Twitter (@GameRevenant), Facebook, itch.io, or Twitch, and feel free to email us directly at contact@GameRevenant.com.

Frank DiCola is the founder of Game Revenant and the artist for Where Shadows Slumber.

Unity’s Performance Debugging Tools

Last week I discussed some of the basics of how rendering works in Unity. As I mentioned, all of that was setup for this week’s blog post. Since I’m working on rendering optimization now, I figured it would be a great time to go over the debugging tools Unity provides in order to aid rendering performance. Online resources can be a little scarcer for rendering than they are for other aspects of coding, so hopefully anyone who’s working on their own game might glean some useful information from this post. And even if you’re not working on anything right now, I hope you follow along and maybe learn a bit!

Unity is a nice little game engine, and, as such, it does a lot of the work for you. For the most part, when making a game, you don’t have to worry about the nitty-gritty stuff like rendering. When building for mobile, however (especially when you have specific graphics/lighting customization), you might have to descend into shader-land. Fortunately, Unity provides a few tools that can help you to deal with optimizing your rendering pipeline.

 

4-24-Profiler.JPG

Profiler

The first step in fixing rendering performance issues is to know about them. The best way to do that is with the Profiler window (Window -> Profiler). While you’re running your game, the Profiler keeps track of a lot of incredibly useful information, like how long each frame takes to render, split up by category. For instance, the Profiler will tell me that a frame took 60 milliseconds to run, 40 of which were due to rendering, 15 to script execution, etc. This is the first place you should check when trying to improve performance – there’s no point in optimizing your rendering if it’s actually your scripts that are running slowly!

profiler

So much information!

For the purposes of rendering, there’s an entire Profiler section! The Rendering Profiler keeps track of the number of batches, SetPass calls, triangles, and vertices in each frame. Looking here for inconsistencies, spikes, and just high numbers in general is a good way to get an idea of why your game is taking so long to render. The Profiler also has a lot of other info that’s useful for diagnosing and debugging performance problems. I really recommend profiling your game and thoroughly looking through the results to get as much information about how your game is running as possible.
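
One more tip: if the Profiler says your scripts are the slow part, you can wrap suspect code in custom samples so it shows up by name in the CPU view. A quick sketch (DoExpensiveWork is a stand-in for whatever you suspect is slow):

```csharp
using UnityEngine;
using UnityEngine.Profiling;

// Custom Profiler samples: the named block appears in the CPU view,
// so you can measure exactly how long your own code takes each frame.
public class ProfiledBehaviour : MonoBehaviour
{
    void Update()
    {
        Profiler.BeginSample("MyExpensiveWork");
        DoExpensiveWork(); // stand-in for the code you're investigating
        Profiler.EndSample();
    }

    void DoExpensiveWork()
    {
        // placeholder for the real logic
    }
}
```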

 

android_debug_bridge

Android Debug Bridge

While profiling in the editor is pretty useful, it doesn’t tell us much – of course our game will be fast on a great big computer, but how does it run on a crappy phone?

This is where ADB, or the Android Debug Bridge, comes in. ADB allows your computer to communicate with your Android phone about all sorts of stuff. Specifically (for our use cases), it allows you to profile your game while it’s running on a device. If you plug your phone into your computer, build the game directly to your phone, and open the profiler, you should see some results. This is the information we want, because it tells a much truer story about how your game runs on a phone.

Where Shadows Slumber, for instance, runs at ~200 fps in the Unity editor. When I plug my phone (the Google Pixel 2) into the profiler, I get a framerate of ~60 fps. This is pretty good, so I know our game can run on newer devices. However, when I plug in my old phone (a broken HTC One M8), I get closer to ~12 fps. Looking at the profiler during this run will give me much more useful information about what I should fix, since this is the device where performance is actually suffering. If you’re making any big decisions or changes based on profiler results, make sure those results come from your actual targeted device, and not just from the editor.

ADB usually comes with the Android SDK – if you have the Android SDK set up with Unity (which allows you to build to Android devices), then you should be able to use ADB with the profiler pretty painlessly.

I should also mention that there might be an equivalent tool for iOS debugging, but, as I do all of my development on a Windows machine, and all of my testing on an Android phone, I wouldn’t know what it is. Sorry!

 

4-24-Header

Frame Debugger

The next most important tool for rendering performance is the Frame Debugger (Window -> Frame Debugger). While the Profiler tells us a lot about what’s happening during rendering as a whole, it still treats the rendering process as a black box, not letting us see what’s actually happening. This is where the Frame Debugger comes in – it allows us to see, step by step, exactly what the GPU is doing to render our scene.

As I mentioned last week, the GPU renders the scene through a bunch of draw calls. The Frame Debugger allows us to see what each of those draw calls is drawing. This lets us determine which materials/shaders are causing the most draw calls, which is one of the biggest contributors to rendering lag. It also provides a bunch of information about each draw call, such as the properties passed to the shader or geometry details. Most importantly, it tells you why each draw call wasn’t batched with the previous one.

frame debugger

All of this happens in a single frame

Batching is Unity’s first defense against rendering lag, so it makes sense to batch as much stuff into a single draw call as possible. Because rendering is such a complex process, there are a lot of reasons why draw calls can’t be batched together – certain rendering components simply can’t be batched, meshes with too many vertices or negative scaling can’t be batched, etc. The frame debugger will tell you why each draw call isn’t batched with the previous one, so you can determine if there are any changes you can make that might reduce the number of draw calls, thereby improving rendering performance.

For example, in Where Shadows Slumber, we re-use meshes in certain places. Sometimes, if we need a “mirrored” look, we’ll reuse a mesh and set its scale to -1. This was before we really looked into rendering performance, and, unfortunately, it causes problems – a mesh with negative scaling can’t be batched with a mesh with positive scaling, so this ends up creating multiple draw calls. Rather than setting the scale of the object to -1, we now simply import a new, mirrored mesh and update the object, allowing these draw calls to be batched and improving performance.
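
If you want to hunt for the same problem in your own scenes, a quick diagnostic script can flag every negatively scaled renderer. A rough sketch:

```csharp
using UnityEngine;

// Rough diagnostic: warn about every renderer whose world scale is
// negative on any axis, since those meshes can't batch with
// positively scaled copies of the same mesh.
public class NegativeScaleFinder : MonoBehaviour
{
    void Start()
    {
        foreach (MeshRenderer r in FindObjectsOfType<MeshRenderer>())
        {
            Vector3 s = r.transform.lossyScale;
            if (s.x < 0f || s.y < 0f || s.z < 0f)
                Debug.LogWarning("Negative scale: " + r.name, r);
        }
    }
}
```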

 

4-24-Stats.JPG

 

Stats

That’s it for the heavy-hitters; between the Profiler, Frame Debugger, and ADB, you should be able to get a pretty good idea of what’s going on in render-land. Unfortunately, digging through them can take a while – sometimes you just want a quick indicator of what’s going on in your scene. Enter the Stats window.

The Stats window (click “Stats” in the Game View) is a small overlay in the game view which gives you a quick rundown of various rendering indicators in real time. It’s not as in-depth, but it gives a much quicker picture of performance.

stats

That’s a lot of batches!

While it sounds like the stats window doesn’t add much – after all, the Profiler can give you the same information – I’ve found it to be very useful. The Profiler is probably better when you’re actively debugging rendering performance, but the stats window allows you to notice places where rendering performance might take a hit, even when you’re doing other things.

When I’m testing some other part of the game on my computer, I’m not going to notice any rendering lag, because my computer is so much more powerful than a phone. I’m also not going to be looking at the Profiler or Frame Debugger, because I’m not worrying about rendering at the moment. However, if I have the stats window open and I notice that the number of draw calls is in the hundreds, then I know something is going on. At that point I can get out the Profiler and see what’s happening – but I wouldn’t even have known there was anything amiss if it weren’t for the stats window.

 

4-24-SceneView.JPG

Scene View Draw Mode

As we get further and further down the list, we’re moving from “debugging all-star” to “it’s useful, but you probably won’t use it much”. Scene View Draw Modes fall into this category, but they’re still good to know about. You can access different Scene View Draw Modes by clicking the drop-down menu at the top left of the Scene View window.

The Scene View in Unity is one of the main windows that you use to make your game – it shows everything in the scene, allowing you to move around through the scene and select, move, rotate, scale, etc., any game objects. Usually the Scene View just displays the objects exactly as they would be displayed in the game. However, it has a bunch of other modes, and some of them are actually pretty useful. The two that I find the most useful when considering rendering concerns are listed below, although they’re all worth checking out:

Shaded Wireframe: This is my default draw mode, as it looks pretty similar to the normal shaded mode. The difference is that it also shows all of the triangles and vertices that you’re drawing. This is useful because certain shader operations are performed once for every vertex. Decreasing the number of vertices in your scene can give you a bit of a performance boost, and the shaded wireframe draw mode helps you see when you might have too many vertices.

3-4toomanytris

The shaded wireframe shows that there are too many polys.

Overdraw: This mode draws each object as a single transparent color. This makes it very easy to see when multiple objects are being drawn in the same spot on the screen. Since the GPU has to draw every pixel of each object (even if that pixel will be overwritten later), it ends up wasting some calculations. Areas that are very bright will waste even more calculations. Switching to this draw mode every so often lets you know if there are any places where you might want to remove some meshes.

 

161004-worst-hacks-history-feaure

The Internet!

It should pretty much go without saying, but one of your best resources for debugging performance is the internet. Unfortunately, when it comes to rendering in Unity, the information out there is pretty scarce. Unlike with normal imperative coding, where you can simply Google “how to pathfinding” and get 30 implementations, you have to work a bit harder for rendering stuff. I find it’s best to do what you can and only resort to the internet with very specific questions. That said, there is still a lot of helpful information out there. You just have to know going in that only one of every three Stack Overflow questions makes any sense, and only one of every four Unity forum threads is using the most recent APIs. It’s like “Googling: Nightmare Mode”!

For anyone reading this post who is actually working on rendering stuff – I’m very, very sorry. I hope that this post and the tools I discussed help to shed at least a little bit of light in the dark underworld that is shader-land, and I hope you can achieve your rendering goals and make it back to the mortal realm before your soul is forever lost.

For everyone else who hasn’t done any rendering stuff, I hope you learned a bit, and that maybe I inspired you to get involved with some rendering code! It’s really not that bad, I promise!

 

= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =

If you didn’t already have a working knowledge of rendering, I hope this post helped! If you do know about rendering stuff, I hope you don’t hate me too much for my imprecision! You can always find out more about our game at WhereShadowsSlumber.com, find us on Twitter (@GameRevenant), Facebook, itch.io, or Twitch, join the Game Revenant Discord, and feel free to email us directly with any questions or feedback at contact@GameRevenant.com.

Jack Kelly is the head developer and designer for Where Shadows Slumber.

Rendering in Unity

As you probably know, Where Shadows Slumber is starting to ramp up toward a release this summer. It’s an exciting, terrifying time. We can’t wait to share the entirety of what we’ve been working on with the world, but there’s also a daunting amount of stuff to do, and not much time to do it.

If you’ve played any of the recent beta builds, hopefully you like what you’re seeing in terms of design, functionality, polish, art, and sound. Unfortunately, if you’ve played the beta on anything other than a high-end device, you’ve probably noticed something that you don’t like: lag.

Lag is annoying. Lag is something that can take a great game and ruin it. It doesn’t matter that your level design is perfect, your models are beautiful, and your music is entrancing if it only runs at 10 frames per second. If that’s the case, nobody is going to enjoy playing it. And, regrettably, that happens to be the case for Where Shadows Slumber.

LikeButta

Like butta’!

So, one of my biggest tasks before we release is to optimize the game, making it run faster and allowing us to have higher frame rates. The area with the most opportunity for improvement is rendering. A game consists of a lot of logic – Obe’s location, things changing in shadow, etc. – but rendering is the process of actually drawing the scene onto the pixels of your screen.

Earlier this week, I started a post about the different tools you can use to help optimize your rendering performance. It seemed like a good idea, since that’s exactly what I was doing. However, I realized that if you don’t know how rendering works in the first place, most of it is complete gibberish. So I’m gonna leave that post for next week, and this week I’ll give a quick introduction to how 3D rendering works in Unity.

Blog-Render.JPG

Rendering

Rendering is the process by which the objects in your game are drawn to the screen. Until it’s rendered, an object in your game is just a collection of data describing it. That data gets translated from a form the game engine understands into a form the GPU can understand. There are a few important concepts to understand here:

  • An object’s mesh describes the shape of the object. It consists of a collection of vertices and triangles.
  • An object’s material is a description of how that object should be drawn. It encapsulates things like colors and shininess.
  • Every material uses a shader. This is the program which calculates exactly what color each pixel should be, based on the information in the mesh and material.
  • World space is the 3D coordinate space in which all of your game objects live.
  • Screen space is a 2D coordinate space that represents the screen to which the game is drawn.

The basics of rendering are pretty easy to understand, at least from a high-level view. The meshes for the objects in your game are translated from world space to screen space, based on the camera that’s doing the rendering. For instance, in Where Shadows Slumber, objects that are further away in the x-axis will be higher up and more to the right when viewed on the screen. Fortunately, we don’t have to mess with this too much – Unity’s cameras do a good job of making this translation.
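
If you ever need to make that translation yourself – say, to pin a UI label over a 3D object – Unity exposes it directly through the camera. A small sketch:

```csharp
using UnityEngine;

// World space -> screen space, the same translation the camera performs
// during rendering. Handy for pinning UI over a 3D object.
public class ScreenSpaceProbe : MonoBehaviour
{
    public Camera cam;       // the camera doing the rendering
    public Transform target; // e.g. a character in the scene

    void Update()
    {
        Vector3 screenPos = cam.WorldToScreenPoint(target.position);
        // screenPos.x and .y are in pixels; .z is the depth from the camera.
        Debug.Log("On screen at: " + screenPos);
    }
}
```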

Once we know where each pixel should be drawn, we need to determine what color that pixel should be – this is where the material and shader come in. Unity provides a whole bunch of information to the shader (position, angle, information about lights in the scene, etc.). The shader uses that information, plus the information from the material, to determine exactly what color the given pixel should be. This happens for every pixel on the screen, resulting in a beautiful picture of exactly what you expect to see.

The GPU

Now that we understand the basics of rendering, let’s take a deeper look into how it actually happens: the GPU.

The GPU, or graphics processing unit, is the part of the computer in charge of calculating the results of our shaders to determine a pixel’s color. Since modern phones have over 2 million pixels, our shader code must be run over 2 million times per frame – all within a fraction of a second.

How does the GPU manage to do so many calculations so quickly? It’s due to the design of the GPU, and can be summed up in one very important sentence: the GPU is good at performing the same operation, a bunch of times, very quickly. The key thing to remember here is that it’s good at performing the same operation; trying to perform different operations is what slows it down.

Specifically, switching from one material to another causes a bit of a hiccup in terms of speed. The properties of the material are passed to the GPU as a set of parameters in what is known as a SetPass call. SetPass calls are one of the first and most important indicators when it comes to optimizing rendering performance, and are often indicative of how quickly or slowly your game will run.

Because SetPass calls take so long, Unity has a strategy for avoiding them called batching. If there are two objects that have the same material, that means they have the same parameters passed to the GPU. This means that those parameters don’t need to be reset in between drawing the two objects. These two objects can be batched, so the GPU will draw them at the same time. Batching is Unity’s first line of defense against rendering slowness.
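
One batching gotcha worth knowing on the scripting side: merely reading a renderer’s material property clones the material for that one object, splitting it into its own SetPass call, while sharedMaterial leaves the shared asset (and the batch) intact. A sketch of the difference:

```csharp
using UnityEngine;

// Batching gotcha: accessing .material clones the material for this one
// renderer (its own SetPass call); .sharedMaterial keeps the shared asset.
public class HighlightExample : MonoBehaviour
{
    public Material highlightMaterial; // a pre-made shared material

    public void Highlight()
    {
        Renderer r = GetComponent<Renderer>();

        // Bad for batching: this line alone creates a unique material copy.
        // r.material.color = Color.red;

        // Batch-friendly: swap to another shared material instead.
        r.sharedMaterial = highlightMaterial;
    }
}
```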

The CPU

While the GPU is the star of the show when it comes to rendering, the CPU, or central processing unit, still does some important stuff that’s worth mentioning (even if it doesn’t have a huge bearing on the optimization steps we’ll be taking). Of course, the CPU is in charge of running your game, which includes all of the non-shader code you’ve written for it, as well as any under-the-hood things Unity is doing, like physics and stuff.

The CPU does a lot of the “set up” for rendering, before the GPU comes in and does the heavy number-crunching. This includes sending specific information to the GPU, including things like the positions of lights, the properties of shadows, and other details about the scene and your project’s rendering config.

One of the more important rendering-related things the CPU does is called culling. Since the CPU knows where your camera is, and where all of your objects are, it can figure out that some objects won’t ever be viewed. The GPU won’t know this, and will still perform calculations for those objects. In order to avoid doing these unnecessary calculations, the CPU will first remove any of the objects that won’t be drawn, so the GPU never even knows about them.
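
You can reproduce the heart of that check yourself with Unity’s geometry utilities. Here’s a sketch of the kind of visibility test the CPU performs (the real pipeline does this and more):

```csharp
using UnityEngine;

// Sketch of the visibility test at the heart of culling: would this
// renderer's bounds intersect the camera's view frustum at all?
public class CullingProbe : MonoBehaviour
{
    public Camera cam;

    void Update()
    {
        Plane[] frustum = GeometryUtility.CalculateFrustumPlanes(cam);
        Renderer r = GetComponent<Renderer>();

        if (!GeometryUtility.TestPlanesAABB(frustum, r.bounds))
            Debug.Log(name + " is outside the frustum and would be culled");
    }
}
```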

Image

All of these Hitlers would be culled by the CPU (image credit: smbc-comics.com)

Since we’re talking about performance, it should be noted that the GPU and the CPU are two different entities. This means that, if your game is experiencing lag, it’s likely due to either the GPU or the CPU, but not both. In this case, improving the performance of the other component won’t actually make your game run any faster, because you’ll still be bottlenecked by the slower process.

So, now that we know a little bit more about how rendering actually happens, maybe we can use that knowledge to improve performance! At least, that’s what I’m hoping. If Where Shadows Slumber never comes out, then you’ll know I’ve failed. Either way, I’ll see you next week for a look into the tools you can use to help you optimize rendering performance in Unity!

 

= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =

If you didn’t already have a working knowledge of rendering, I hope this post helped! If you do know about rendering stuff, I hope you don’t hate me too much for my imprecision! You can always find out more about our game at WhereShadowsSlumber.com, find us on Twitter (@GameRevenant), Facebook, itch.io, or Twitch, join the Game Revenant Discord, and feel free to email us directly with any questions or feedback at contact@GameRevenant.com.

Jack Kelly is the head developer and designer for Where Shadows Slumber.

Where Shadows Slumber at PAX 2018’s Indie Minibooth

I’ve just returned from an exhausting trip to Boston for PAX East, where I had the pleasure of demoing Where Shadows Slumber at the Indie Megabooth. In this blog post, I’ll briefly describe what the application process was like, how the show went, and my thoughts on the whole setup.

 

0 - IMB.jpg

Applying to the Indie Minibooth

Those who have followed our development for the past year may remember that we went to PAX East last year, as part of the Indie Showcase for 2017. It was an honor to be included in that amazing contest! Reading that old blog post is funny, because it shows you just how far we’ve come in the past year. At that time, the cutscene in our demo hadn’t even been animated yet! (Which is completely my fault, lol) It’s amazing to think that now, a year later, the game is nearly complete.

Anyway, we knew we wanted to return to Boston because the crowd at PAX East is huge, but we had a predicament. How do you get an affordable spot at the show? We didn’t want to be relegated to the fringes of the expo hall, which is where they usually place you when you buy a booth on your own. We obviously couldn’t be accepted into the Indie Showcase a second time (although we are totally going to try for PAX West’s competition), so what were our options?

We heard about the Indie Megabooth because of last year’s PAX – they were right near us, and the space was impressive. We decided to apply via their website, and on November 6th, 2017 we submitted our application for their booth at PAX East 2018 and GDC 2018. The application was essentially a pitch for the game, complete with images, video, and a build their judges could play.

Although we were denied for GDC 2018, we got an email on February 1st of this year notifying us that we were accepted and needed to reply as soon as possible. We paid the $1,200 fee toward the end of the month, which covered booth space, shelving, promotion, and electricity at the show. All of this was very secretive, which is why we didn’t mention it on this blog or on social media. They wanted the roll-out to be all in unison, so they told developers not to spill the beans that they had been accepted.

I decided that since the space around the Minibooth was so limited, it wasn’t worth bringing a ton of stuff in my car. Instead, I took the train up to Boston on Friday and began to set up for the show!

 

2 - Setup.jpg

The Setup

The setup for the Minibooth is a vertical kiosk with a table, and a monitor on top. Our setup looked like the image above: just enough room for mobile devices, Where Shadows Slumber pins, and drop cards. The monitor was playing a 10 minute looping video reel I created prior to the show.

20180406_183246.jpg

Here we are on Friday night, setting up for the weekend. Minibooth was created to be a more affordable way to attend events, so it’s set up in kind of a strange way. The Minibooth arcade had 10 games on Thursday and Friday, and then we moved in to take their spot on Friday night so we could take over for the weekend shift.

I don’t know how this is decided, but I do remember choosing our preferred days on the application form. Personally, I think the weekend spot is way better, and I do sort of feel bad for the Thursday/Friday crew. But I guess the logic is that Thursday and Sunday are both slow, and Friday and Saturday are both crazy, so everyone gets one of each. I feel like we got really solid traffic on both days, but Sunday definitely died out at around 3 pm. Hopefully everyone got their money’s worth!

 

20180406_215257.jpg

They Threw Us A Party!

This was a nice perk that I didn’t even expect, but there was an Indie Megabooth mixer just a few blocks from the convention center on Friday night. The timing worked out well, since both Minibooth groups were in town at that point. I still kind of feel like an outsider at these events, so I can’t pretend I did a whole lot of “networking” – still, I appreciate the effort to get a nerd like me out of his shell! There was even free food and an open bar. What more can you ask for? [ ^_^]

 

 

20180408_114251.jpg

Let The Show Begin!

The two days of the Minibooth were exhausting, in a good way. Standing on your feet for 8 hours straight, two days in a row, is not exactly what I’m used to as a nerdy computer artist. But it was for a good purpose! The traffic during these PAX shows is always really consistent. There was never a dull moment, which is exactly what you want. This is probably due to the good reputation of the Indie Megabooth, but it also didn’t hurt that the Megabooth is in the center of the giant convention hall, next to two major walkways. We never felt “out of the way” or like we were in an obscure part of the space.

No one found any errors that we didn’t already encounter at SXSW, since we brought the same build. (The shows were too close together to worry about rebuilding.) I also made a point not to ask for feedback, and instead pitched the demo, our beta, and this blog. It’s good to know going into a show what you’re looking to get out of it. This one was purely about promotion.

20180408_170101.jpg

(Chris put a grip tape line down between our booths because the crowd was out of control!)

These shelves were super useful, because the customers couldn’t see them and they made good use of the limited space. I might buy some for Game Revenant to use during future shows. Typically when we go to conventions, Jack is the Charger Master and we’re constantly rotating a few devices between a few limited charging stations. (At SXSW, we actually used the MacBook as just a power brick LOL) I was nervous about handling this show on my own at first. However, having power provided for us – along with my power strip and these shelves – made it a breeze! The devices were always topped off and no one had to be turned away.

 

20180408_191406.jpg

It’s Over!

Overall, the Indie Minibooth seemed like a great investment of time and money, and I highly recommend it. (I even recommended it to other developers while I was at the show!) The caveat is that it will cost you a non-trivial amount of money to secure the Minibooth spot and get a hotel, so plan accordingly. If you want your indie game to succeed, you need to take a financial risk like this eventually.

If you found out about this blog because you met me at the Indie Minibooth, welcome! Take a journey backward through time and check out all of our other posts. We’ve been posting a blog every week for over a year, so if you’re curious about anything related to this game, chances are good that we’ve covered it in-depth already. It also goes without saying that official announcements about the game’s release date will be posted to this feed, so be sure to smash that follow button if you have a WordPress account.

Hope to see you all next year at PAX 2019!

 

= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =

Thanks for reading this blog! Stay tuned for more updates and announcements related to Where Shadows Slumber. You can find out more about our game at WhereShadowsSlumber.com, ask us on Twitter (@GameRevenant), Facebook, itch.io, or Twitch, and feel free to email us directly at contact@GameRevenant.com.

Frank DiCola is the founder of Game Revenant and the artist for Where Shadows Slumber.