This is a repost from a featured Gamasutra article which was also in German print and web magazine makinggames.de
Gamasutra link
Making Games link

Cooking Up Gumbo

written by Kevin Maloney and Sheridan Thirsk of Harebrained Schemes

Our most recent release, Shadowrun: Dragonfall – Director’s Cut (DF-DC), features an in-house AI system that we developed called “Gumbo.” In this article, we will detail how Gumbo came about, what our goals for the system were, and how, in the end, it largely accomplished them. We hope that our takeaways from this project will be helpful for your own turn-based (or real-time) AI endeavors.

First, a little background. After a Kickstarter campaign that ended successfully in April 2012 with $1.8 million in funding, Shadowrun Returns shipped on July 25, 2013. Seven months later we released a major DLC expansion entitled Shadowrun: Dragonfall, and on September 18, 2014 we released Shadowrun: Dragonfall – Director’s Cut.

The Dragonfall DLC began as a stretch goal in our Kickstarter campaign. However, by the time it went out the door, it had grown into a full expansion that improved on many aspects of the core game.

Both the team and the community responded positively to the improvements in Dragonfall. Still, the team felt that a stronger version of Dragonfall was possible. Thus, we embarked on creating Dragonfall – Director’s Cut. Free to all owners of the DLC, DF-DC was not only an improved version of Dragonfall, it was also now standalone, so new fans could jump right in to our latest and greatest.

When we came up with a list of areas that we wanted to invest in for DF-DC, smarter AI was high up on the list (especially given that combat was also being overhauled). Both Shadowrun Returns and Shadowrun: Dragonfall used AI derived from a branching tree of XML edited in Unity. They also used custom code to read data and perform actions in the game. While this system succeeded in giving a variety of enemies basic behaviors, we felt that there was room for improvement on several fronts. With the existing system, any tuning or unique behaviors that designers wanted required engineering support, followed by the lengthy process of rebuilding our assets and code. This made iteration very slow and hid the AI agent’s decision-making process from the designers.



In game theory and economics, the term utility describes how well something satisfies an agent’s needs and wants. In our games, we want to maximize an agent’s utility in three ways: reducing the amount of damage taken, assisting allies, and maximizing the amount of damage dealt to enemies. Often these goals require a sequence of actions, each with potential random elements and failure points. In an intelligent, dynamic world, the AI agents should still make decisions which have high utility.
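The article doesn’t spell out Gumbo’s actual scoring math, so here is a hypothetical sketch of the idea: score each candidate action along the three axes above (damage avoided, allies assisted, damage dealt) and pick the maximum. All names and weights are illustrative, not Gumbo’s implementation.

```python
# Hypothetical utility scoring: weigh each candidate action on the three
# axes described above, then pick the highest-scoring action.

def utility(action, weights=(1.0, 1.0, 1.0)):
    """Score = weighted sum of damage avoided, ally help, and damage dealt."""
    w_avoid, w_assist, w_damage = weights
    return (w_avoid * action["damage_avoided"]
            + w_assist * action["ally_healing"]
            + w_damage * action["expected_damage"])

def choose_action(candidates):
    # Maximize utility over all candidate actions.
    return max(candidates, key=utility)

actions = [
    {"name": "take_cover",  "damage_avoided": 8, "ally_healing": 0, "expected_damage": 0},
    {"name": "heal_ally",   "damage_avoided": 0, "ally_healing": 6, "expected_damage": 0},
    {"name": "fire_weapon", "damage_avoided": 0, "ally_healing": 0, "expected_damage": 9},
]
print(choose_action(actions)["name"])  # fire_weapon scores highest here
```

A real system would also need to score multi-step action sequences and discount them by their chance of failure, as the paragraph above notes.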

These were our primary development goals: Easy, Safe, Expandable, and Fast.


LibSDL is a library for accessing common computer systems, and it works on almost every platform, including the Raspberry Pi.

SDL2 supports features like hot-swapping an Xbox 360 controller, whereas SDL1 can go years without an update.
My Raspbian image is based on Debian Wheezy https://packages.debian.org/wheezy/. Wheezy doesn’t offer SDL2 via apt-get install, but it is available in the newer Jessie https://packages.debian.org/jessie/

So we do it the manual way: get the source and compile everything. If there is a critical feature or bugfix later, we can use the update instructions below.

Downloading SDL



Here is a brief guide for a very basic Bluetooth and Samba setup.

Get it up and running

  1. Download the Raspbian image https://www.raspberrypi.org/downloads/
  2. Write the Raspbian image to the SD card
  3. Connect network, Bluetooth, HDMI, and lastly power (which will power on the device)
  4. Use my router to find the IP of the RPi
  5. Download PuTTY and connect to that IP http://en.wikipedia.org/wiki/PuTTY
  6. Connect using the default user/password: pi / raspberry


Configure basic things



I bought a Raspberry Pi 2 with the intention of writing a simple OpenGL game and controlling it with a joystick or arcade stick.

The process involves setting up basic tools, then setting up SDL, a library for input, video, networking, and rendering.

I bought:

  • Raspberry Pi 2
  • USB Bluetooth dongle
  • 32GB Class 10 microSD card

I intend to buy a USB Wi-Fi dongle, but wired is okay for now.

My goal is to write a game in OpenGL ES and SDL2 using C++.

I will have some step by step processes in later articles.

Let’s go!




Shadowrun was a successful Kickstarter project with a great amount of crowd support and funding. The team had a working prototype but needed to add the features that would make it a high-quality 3D isometric tactics game. There was also a huge effort to create a full-featured editor and trigger scripting language, as well as a branching conversation tool. They brought me on once the funding money came through. My focus was on client gameplay, since I had experience there and the designs in place lacked the flexibility we would need going forward.

My first task was to create a Turn Director. This was the manager that kept track of the active team and which character (actor) was currently selected. The human player can cycle at will between the actors they wish to control, while the AI generally follows a specific order. I used a ready-rating system to sort all potential actors: some conditions would increase the ready rating, and some would decrease it. As the project moved forward, the concepts of a turn and of turn priority became very muddy and had to handle exceptions to the standard turn order. By the time I was finished, AI actors could take turns simultaneously, and they controlled their own lock condition if there was an event they were waiting on before carrying on. The system also handled multiple dimensions, like the Matrix or a dream sequence.
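The ready-rating idea can be sketched as a simple scoring sort. This is an illustrative reconstruction, not the shipped code: the field names and modifier values are assumptions.

```python
# Illustrative sketch of a "ready rating": conditions raise or lower an
# actor's score, and the highest-rated actor acts next.

def ready_rating(actor):
    rating = actor["base_initiative"]
    if actor["is_player_selected"]:
        rating += 100       # the player's selected actor jumps the queue
    if actor["is_locked"]:
        rating -= 1000      # waiting on an event; hold this actor back
    return rating

def next_actor(actors):
    # Turn order is just a sort by rating; taking the max gives the next turn.
    return max(actors, key=ready_rating)

team = [
    {"name": "mage",    "base_initiative": 12, "is_player_selected": False, "is_locked": False},
    {"name": "samurai", "base_initiative": 9,  "is_player_selected": True,  "is_locked": False},
    {"name": "drone",   "base_initiative": 15, "is_player_selected": False, "is_locked": True},
]
print(next_actor(team)["name"])  # samurai: 9 + 100 beats the others
```

Exceptions to standard turn order (simultaneous AI turns, event locks) then become just more terms feeding the rating, rather than special cases in the scheduler.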

Next I set up many of the classes and creatures, which had different roles and behaviors. Some examples include a rigger, who controls any number of drones with remote-control gadgets, and a shaman, who can create a spirit from environment objects like a water fountain. The life cycle of an actor had many steps before the “ready” state and many steps after the “death” state. Signals were sent to other internal and gameplay systems, and we used a mixture of a string event messenger and a larger data-payload event messenger built on structs and C# generics. The editor trigger logic was set up to listen only to generally broadcast string events, but for specific entities, a keyed event messenger could subscribe and respond to specific things with more context data.
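The two messenger styles can be sketched together: broadcast string events for general listeners like editor triggers, and keyed subscriptions that carry a payload for a specific entity. This is a minimal illustrative sketch in Python (the shipped code was C#); the class and method names are made up.

```python
# Minimal sketch of the two event-messenger styles: broadcast string events
# plus keyed (event, entity) subscriptions with a data payload.

from collections import defaultdict

class Messenger:
    def __init__(self):
        self.broadcast_subs = defaultdict(list)  # event name -> handlers
        self.keyed_subs = defaultdict(list)      # (event, entity_id) -> handlers

    def on(self, event, handler, entity_id=None):
        if entity_id is None:
            self.broadcast_subs[event].append(handler)
        else:
            self.keyed_subs[(event, entity_id)].append(handler)

    def fire(self, event, entity_id=None, payload=None):
        for h in self.broadcast_subs[event]:     # general listeners always hear it
            h(payload)
        if entity_id is not None:                # keyed listeners get entity context
            for h in self.keyed_subs[(event, entity_id)]:
                h(payload)

log = []
bus = Messenger()
bus.on("actor_died", lambda p: log.append(("any", p)))
bus.on("actor_died", lambda p: log.append(("guard_7", p)), entity_id="guard_7")
bus.fire("actor_died", entity_id="guard_7", payload={"killer": "player"})
print(log)  # both the broadcast handler and the keyed handler ran
```

The split keeps the common case (editor triggers listening for named events) cheap, while specific entities can still react with full context data.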

Another major system I wrote was the 1×1 and 2×2 pathfinding algorithm. The collision was written in such a way that a 2×2 actor did not need to compute the nearest wall edge: a tile adjacent to a wall on its south or east side was marked with a “fuzzy” collision bit. A 1×1 actor can path through it, but a 2×2 actor cannot, because it would overlap the wall. This system also included cover data and more exceptions due to gameplay rule-breaking. Using a highly optimized A* algorithm, we were able to handle pathing and repathing a flock of teammates all at once. They used a system to claim their final destination tile and to “soft” claim the tile in front of them. Other actors would prefer to avoid anything with a soft claim, to prevent mid-motion stacking, and could never path to a tile claimed by someone else. Later on we broke this system too, with gameplay exceptions that allowed “shoving” and let friendly actors path through stationary actors where enemy actors could not. Also, because some maps had upward of 20,000 tiles, the A* algorithm had to truncate intelligently, as well as store its data in a compressed form for low-memory considerations.
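The fuzzy-collision trick can be shown in a few lines. This is a hedged sketch of the rule as described, not the game's actual collision code: tile flags and the grid layout are invented for illustration.

```python
# Sketch of the "fuzzy collision bit": a tile flagged fuzzy (next to a wall on
# its south or east side) is passable for a 1x1 actor but blocks a 2x2 actor,
# whose footprint would overlap the wall.

WALL, OPEN, FUZZY = 0, 1, 2

def passable(grid, x, y, size):
    """True if an actor with a size x size footprint can stand at (x, y)."""
    for dy in range(size):
        for dx in range(size):
            tile = grid[y + dy][x + dx]
            if tile == WALL:
                return False
            if tile == FUZZY and size > 1:  # large actors can't use fuzzy tiles
                return False
    return True

grid = [
    [OPEN, OPEN, FUZZY, WALL],
    [OPEN, OPEN, FUZZY, WALL],
]
print(passable(grid, 1, 0, 1))  # True: a 1x1 fits on an open tile
print(passable(grid, 1, 0, 2))  # False: the 2x2 footprint touches a fuzzy tile
```

Precomputing the bit this way means the A* inner loop only checks per-tile flags, never geometry, which matters on maps with tens of thousands of tiles.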

As the project moved forward and the main systems were established, I added gameplay actions that designers thought would make the levels more fun. One obvious feature was destructible barrels and walls. I also added wandering behavior for a city area filled with idle citizens. We also had an enemy type that turned into a spirit when it died and was immune to physical damage; after two turns it would return to its corpse and start punching again. It turned out to be a headache for players, but it gave variety to the combat experiences.

Lastly, we ported the game to Android and did massive performance optimizations. I was tasked with rewriting some fog and water shaders, as well as refactoring any slow algorithms to perform well on an iPad mini and comparable Android devices. In this process I used techniques like queuing tasks, caching common data, preloading and unloading assets, and allowing the game to function with partially loaded assets.

We also added Steam Workshop and Steam achievement integration, which I helped implement on the game-client side. I made all the debug tools that gave designers under-the-hood game information. I also snuck in some cheat codes, like big-head mode and paintball-colored bullets!

See Shadowrun on Steam http://store.steampowered.com/app/234650/



My work on Halo 4 was quite limited because my role was to support the UI lead on a short-term contract.

Primarily I worked on one or two simple screens and the bottom update ticker. It was not easy to implement other people’s screens in code I was not familiar with, but it was made easier because they were very similar to the previous iteration of Halo. The update ticker involved a few more departments: information from networking updates and updates from the friends list. I tried to follow instructions from multiple voices, but I wasn’t confident that the result was a polished experience. Going forward I would try harder to understand how to achieve the end result, but as a contractor I did not really know my coworkers or the proper protocols.

After this phase was finished and we approached the ship date, I was finding rare bugs and fixing them. A very popular game has many users who hammer on all the buttons and menus in every possible combination. The UI needs guaranteed results and transitions that cannot fail and that handle interrupts gracefully. I was comfortable with this process because I had shipped a number of games before this and knew that it takes time and attention to detail to iron out the last bugs. Unfortunately, this meant many long hours for many weeks in a row.

Once the bugs in my area were resolved, I moved on to testing the game overall. I used some game modes that let the user create a level however they wished. In one ski-jump level, I caused a physics glitch when part of the vehicle moved upward into the ceiling at extremely high speed and crashed the game. I had a few other people come help me investigate how to fix it, but the bug was very rare and I could not reproduce it reliably. I have dealt with such physics problems before, and I wasn’t surprised that smashing complicated geometry at high speed had the potential to fail. I don’t think much happened with this bug, because the debug tools could not give enough information on a hard crash. I know consoles are more difficult than PCs when it comes to understanding crashes, but this reinforced the need for debug information in all situations.



For my second Global Game Jam, I teamed up with a current coworker and a former coworker I am still friends with.

We used Skype to resolve some design challenges, and it was not ideal, but luckily the decisions we each made independently worked for the others because we were flexible, easygoing developers.

Gameplay and design:
Leaderless is a traditional platformer in the same style and world as Mario. We did not want to use Mario as a character, however, and only refer to him as the one who killed our leader. The main character was a turtle with some interesting abilities that were added as he progressed.

The core idea was that a turtle needed a job after the Koopa leader died, and he had to buy back his shell, then wings and magic attacks. We also wanted an element where he has to build his home and feed his struggling family, similar to the war-torn border officer in Papers, Please. This made the coin collecting of a typical Mario game more purposeful and desperate. Some coin sections could not be reached without wings or magic attacks, so there was a bit of backtracking. Because we did not want people to backtrack and re-collect the identical coins from the last attempt, we had to make the coin data and other level data persistent. This was a bit challenging: the first attempt was buggy, and the second attempt worked all right but took a fair amount of time, so we didn’t make enough levels as a result.
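The persistence problem above boils down to remembering which collectibles are gone across level loads. A minimal sketch of one way to do it, with illustrative keys and structure (our actual Unity implementation differed):

```python
# Sketch of persistent collectible state: a coin grabbed on one visit to a
# level stays gone on the next visit.

collected = {}  # level name -> set of collected coin ids; survives level loads

def enter_level(level, coin_ids):
    taken = collected.setdefault(level, set())
    # Only spawn the coins the player hasn't already collected.
    return [c for c in coin_ids if c not in taken]

def collect(level, coin_id):
    collected[level].add(coin_id)

print(enter_level("world1-1", ["c1", "c2", "c3"]))  # ['c1', 'c2', 'c3']
collect("world1-1", "c2")
print(enter_level("world1-1", ["c1", "c2", "c3"]))  # ['c1', 'c3']
```

Serializing that dictionary to disk on save would extend the same idea across play sessions.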

Moving the character was a decent challenge because of struggles with the sides of platforms and jumping up from below a platform. We had moving platforms too, which always cause problems with colliders that are touching. The ability to slide-dash as a turtle shell was fun, and we needed more places to use it. The magic ability was thrown in at the end: it shot a projectile diagonally upward. This required some skill to use and didn’t really pay off, because it was easier to move and jump as most people are accustomed to doing.

Game art and sound:
We also had someone creating art full time, and although he was not an artist by trade, he did a great job; I was able to throw the full-screen backgrounds and the sprite animations together quickly using my previous experience. The style was playful and somber at the same time and fit well with the story elements. We mimicked some sounds and enemy types from Mario, which sped up the process.

User experience and presentation:
We spent a fair amount of time on the visuals outside of gameplay, with full-screen animated storefronts and an overworld map similar to SMB3. At public showings, every story progression point and ability reveal got a great response. Also, because the levels were not very long, putting some downtime between each level gave the game a better pace and more context for what was going on.

This game had our most successful presentation and story. It was fun to tell people the elevator pitch, and they were excited to see what would happen next because we had such a creative angle. The gameplay performed pretty well, but it was quite standard fare.



Five Dojo Simulator 2014

Global Game Jams are different from Ludum Dares in that they are always a group effort and also connected to a local scene.

Our focus was to make a four-player top-down brawler. The themes revolved around ninjas and other martial arts.

We discussed the merits of a side-view brawler like Smash Bros. versus a top-angled view like the original Zelda games. We wanted a skilled throwing projectile, which meant the characters should be spaced out. To create space in a side-view brawler, the level needs to be vertical, and we decided that would be too challenging to create and tune, with things like gravity and one-way platforms. It turned out that we had time to implement AI on the last day, and the top-view AI was much easier to write because agents did not need to traverse up, down, and over obstacles.

The abilities we had were a sword attack, a dagger throw, and a phase dash. The phase dash passed through obstacles and attacks; it also traveled a fixed distance that could not be adjusted once the dash started, which led people to fall off the edge of the map a few times. Sword attacks could reflect thrown daggers and kill enemies. Throwing a dagger had some very long animation frames, which balanced its usefulness by adding a vulnerability period.

Game art:
I was not involved in drawing the art; instead we had people with tablet experience drawing ninjas with fluttering fabrics and slicing sword swings. We spent more than half of our time on the character animations, with two full-time artists creating four characters, each with a sword slash, a knife throw, and a phase dash.
Fluidity was key, and we advanced through sprites with a very manual control process. I started with the built-in Unity animations but quickly grew frustrated with the lack of control. For our attacks, we needed to engage the damage element at a specific frame and then play other frames for longer periods of time. I ended up writing a state machine to control the sprite speeds and report back events like sword_on and sword_off. This is closer to what professionals do with 3D model animations. The difficult part was storing the data; embedding metadata and frame lengths in JSON or other formats has been done many times, but I didn’t have time to investigate that, so it was hard-coded for this game. Hard-coding works for a short period, but it makes tuning and adding animations take longer than they should.

The level art was a challenge because we didn’t know what type of tile sets would work, or what dimensions would show off the space relative to the main character. What we finished with was the first draft, with very basic rocks and solid colors for grass. With more time we could have made it more interesting and made the cliff edges and pits clearer, because they had the most impact on the game. We also had pillars that blocked movement, but we didn’t know what art style they should have.

User experience and presentation:
The biggest flaw of the game was the intro, where the players start moving around behind the title screen. They are permitted to run around and practice abilities before the fight begins, which is a great way to get familiar with the controls. However, it was not clear which human controlled which character, and joining the fight involved walking into the center attack ring. The ring did not really make sense, and we didn’t have clear text or countdowns to show when the game would start, stop, and show the winner.

This game was very fun and had a great showing at the Seattle convention center, where people played the demonstration. The addition of AI at the end helped people at home get an idea of the gameplay if they didn’t have three friends. The challenges with the sprites made me work all night one night, which was not great time management. It was necessary, however, because there would not have been much of a game if the sprites didn’t play as we designed.



Deadshot was an unreleased mobile game with creative mechanics.

I took a job at a very small company that was finishing up a creature-battle game with strategy similar to Pokémon. The next project was open to whatever we felt would sell, visually and with fun, fast gameplay.

Our design was a low-color cover shooter using swipe mechanics and a timing game to fire the gun. The palette was white, black, and red, a color treatment sometimes used in movies like Sin City. We had some quality player models and animations for soldiers traversing behind and over obstacles.

My role was to help make the motion interactions for moving from one barrier to a neighboring barrier. This was a directional swipe that worked quite well. There were power-ups and weapons with different ranges, so moving around was a key element. When the character decided to shoot, they brought up the gun’s sights and looked down moving crosshairs. They could fire by tapping when the crosshair matched their target. A target farther away required more precision, so doing damage became a bit of a minigame instead of relying on precise finger tapping. The game felt like an arcade light-gun game, except we reduced the complexity of aiming because we didn’t have an input device that allowed aiming in that manner. Using timing as the aiming skill was feasible on a touch screen and turned out to be pretty fun once players got better at the crosshair mechanic.
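The distance-scaled timing mechanic can be sketched as a sweeping crosshair with a hit window that narrows with range. Everything here (the triangle-wave sweep, the window formula, the constants) is an illustrative assumption, not the game's tuning.

```python
# Sketch of the timing-as-aiming minigame: the crosshair sweeps back and
# forth, and a tap only hits if it lands inside a window that narrows with
# target distance.

def crosshair_pos(t, period=2.0):
    """Crosshair sweeps 0 -> 1 -> 0 over one period (triangle wave)."""
    phase = (t % period) / period
    return 2 * phase if phase < 0.5 else 2 * (1 - phase)

def is_hit(tap_time, target_pos, distance):
    window = 0.2 / (1 + distance / 10.0)  # farther targets need tighter timing
    return abs(crosshair_pos(tap_time) - target_pos) <= window

print(is_hit(0.5, 0.5, 0))   # crosshair at 0.5, wide window: hit
print(is_hit(0.5, 0.8, 40))  # same tap, distant off-center target: miss
```

Tuning the window curve against playtest data is where a mechanic like this lives or dies; the shape above just shows the structure.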

My next role was to create AI that acted similarly to humans. I had made AI systems before, but I wanted these agents to be scalable and smart. The AIs had different behaviors for whom they would target and the manner in which they would approach their targets. I split their decisions into frequent, low-cost ticks and less frequent, high-cost ticks. Over a hundred agents could regularly take their turn using low-cost tick decisions, but only a couple of agents at a time could use the high-cost ticks. Rapid decisions included adjusting aim when the current target moved, or running backward when too badly damaged. The slower decisions were built around expensive tasks like scanning and pathfinding: some agents scanned for open space on the map, or navigated to a preferred item pickup.
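The two-tier tick split can be sketched as a per-frame budget: every agent gets the cheap tick each frame, while the expensive work is rationed round-robin to a few agents per frame. The class names and budget numbers are illustrative, not the shipped values.

```python
# Sketch of two-tier AI ticks: cheap per-frame decisions for everyone,
# expensive decisions (scanning, pathfinding) budgeted round-robin.

class Agent:
    def __init__(self, name):
        self.name = name
        self.cheap_ticks = 0
        self.expensive_ticks = 0

    def cheap_tick(self):      # e.g. re-aim at a moving target, flee if hurt
        self.cheap_ticks += 1

    def expensive_tick(self):  # e.g. scan for open space, path to a pickup
        self.expensive_ticks += 1

def run_frames(agents, frames, expensive_budget=2):
    cursor = 0
    for _ in range(frames):
        for a in agents:                     # everyone thinks cheaply each frame
            a.cheap_tick()
        for _ in range(expensive_budget):    # only a couple think expensively
            agents[cursor % len(agents)].expensive_tick()
            cursor += 1

squad = [Agent(f"a{i}") for i in range(10)]
run_frames(squad, 5)
print(squad[0].cheap_ticks, squad[0].expensive_ticks)  # 5 cheap, 1 expensive
```

The budget keeps the per-frame cost roughly constant no matter how many agents are alive, which is what makes "over a hundred agents" feasible.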

While it was a 3D game, movement was limited to travel between node points. This simplified both user and AI movement dramatically.

The downfalls revolved around getting players onto servers or playing in local proximity. We also didn’t have a great way to profit from players who were having fun. In the end, the game wasn’t released.



My latest Ludum Dare entry was a third-person action game with humanoid robots. This sounds very ambitious, but I used a few techniques to reduce the work.

Because I challenged myself to go full 3D, and I was comfortable with the built-in Unity character controller and follow camera, I decided to rig up a humanoid and use that as the base of my game. I struggled with making projectiles work and with applying ragdoll-type physics to only part of the body at a time. A headshot would be extra satisfying because everything would crumble in the physics simulation.

The art was a new area for me, and I would like to try more of it. I used existing animations for a human and modeled a robot to fit the exact proportions. I also scrounged up some metal and ground textures that fit the open-sky theme. I had to model the terrain and buildings too, all of it in Blender. It was my first time using Blender, so it was a bit rough, but I got the hang of UV unwrapping and making things symmetrical. I think I need some more tutorials to learn the proper way to manipulate models and textures, but I got something done, so I was happy about that.

The gameplay was strange: instead of attacking by punching or shooting, the robots would detach an arm or leg and throw it at the enemy. Enemy arms or legs that got hit would fall off the robot and cause problems. If the head was hit, the robot would die and respawn later. The arms and legs were limited, so I also added regenerating limbs: you could wait a few seconds and then throw the limb again. I implemented a power-up for rapid regeneration with a fancy star-mode texture change, but it turned out to be unclear to the user.

The limb detachment was weird, and making them robots was the easy choice where something alien or zombie might have worked better. It would have been a bit gross to have humans rip their own arms off, and I’m not sure the theme was so great. In the review section, though, it placed around #100 of 2000 for theme, so it seemed well received.

The controls were a huge challenge because I didn’t want to rely on traditional aiming, strafing, or joystick controls, and instead wanted movement to feel very restrictive and robotic. This was probably a bad idea, because it was the main complaint: people couldn’t get comfortable moving around.

The AI was quite poor: agents just targeted items or an enemy and stopped at a certain attack distance. They moved enough to feel alive, but not enough to feel interesting.

For my next 3D game, I should do some more Blender training first, and maybe make my enemy type more mindless creatures, which get more leniency when they are stupid. That would require the level to be more interesting to compensate.