Finite State Machines and the AI of Half-Life | AI 101

Hi I’m Tommy Thompson and welcome to AI 101:
the series that explores the tools and techniques used in video game development here on AI
and Games. In the first entry of this series I looked at the navigation mesh: a data structure
used in 3D games to enable AI characters to move around an environment, then we followed
this up by exploring behaviour trees – arguably the most common AI technique currently
used in AAA video games for handling character behaviour. In this episode we’re going to explore one
of the most pivotal techniques used in building AI systems in video games for more than twenty
years. The Finite State Machine. While not as pervasive as they once were, finite state
machines are still a great starting point for many an aspiring AI developer and next
to navigation meshes are one of the first AI techniques I still teach to this day for
character control. I’m going to explain what a finite state machine is, how you go about
designing these systems, the innovations and changes made over the past 20 years and return
to the game that defined their popularity: Valve’s classic 1998 shooter, Half-Life.

Alright, so let’s talk some basic theory: a
finite state machine – also referred to as a finite state automaton – is a model commonly
used to simulate simple sequential logic. It’s largely derived from two bodies of work
by George H. Mealy and Professor Edward F. Moore in 1955 and 56 respectively. The system
is a collection of one or more pre-defined states. When modelling AI behaviour in a game,
a state will represent a specific behaviour that a character or other system in the game
should execute: standing idle, attacking the player, moving to a point in
the world, interacting with an object – whatever the designer sees fit. Often this means we’re
handling various aspects of gameplay systems such as animation, sound and decision making
for whatever system the finite state machine is controlling. The state machine will continue
to hold the current state as active until it receives an input that it recognises, after
which point it will then transition to another state within the system. As a designer you
can decide which inputs a state receives are valid for a transition to occur, as well
as which states it will transition to based on this information. I say states plural,
because you can decide to have a state transition to one or more states in the event it reads
a given input. This results in either a deterministic finite state machine, which is where a state
reads an input and can only transition to one other state, or a non-deterministic finite
state machine, meaning that if that input occurs, the system could transition to a number
of different states.

Now the benefit of this approach is that you can define multiple states that interlace into a much more nuanced behaviour.
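As a minimal sketch of the idea – in Python, with purely illustrative state and input names rather than anything from a shipped game – a deterministic FSM is little more than a transition table:

```python
# Minimal deterministic finite state machine: the current state plus a
# transition table mapping (state, input) -> next state.
# State and input names here are illustrative, not from a real game.

class StateMachine:
    def __init__(self, initial, transitions):
        self.state = initial
        self.transitions = transitions  # {(state, input): next_state}

    def handle(self, event):
        # Stay in the current state unless this input is recognised.
        self.state = self.transitions.get((self.state, event), self.state)

guard = StateMachine("idle", {
    ("idle",   "see_player"):  "attack",
    ("attack", "low_health"):  "flee",
    ("attack", "lost_player"): "idle",
    ("flee",   "safe"):        "idle",
})

guard.handle("see_player")
print(guard.state)  # attack
guard.handle("low_health")
print(guard.state)  # flee
```

A non-deterministic variant would instead map a (state, input) pair to a set of candidate states and choose between them at runtime.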
A good example is the AI of games such as Pac-man and Ms. Pac-man, where the state in
which a ghost hunts the player is unique for each enemy type, but at an abstract level
they’re all doing the same thing. But once the player grabs a power-pill, all the ghosts
transition into an evade state which changes how they move through the world. Now the movement
code is largely similar, but it’s trying to avoid the player instead of chasing them and
to help players understand what’s happening, the audio and visual change accordingly. But
once the power-pill fades, each ghost can transition back to chasing the player again.

Finite state machines help create AI that
can not only respond to their own internal memory – given they make decisions about which states
to transition to based on information stored internally – but can also react to events
happening in the world and be versatile to change, often driven by the player. As a result,
for many years finite state machines were the de facto standard for building AI in
games, until arguably the mid-to-late 2000s. Depending on the scale and size of the problem, this
is ideal for your character behaviours and can provide a variety of rich and interesting
gameplay opportunities.

A great example of this can be found in the
Batman: Arkham franchise, which relies on finite state machines to construct the enemy characters
both in combat as well as in stealth segments. This fits well with the overall approach taken
to the enemy designs: they need to react to what’s happening around them and the governing
gameplay systems can send specific inputs to them to dictate what they should be doing
at any given time. For example, the combat system sends input signals to each character
to ensure the combat is both dynamic and challenging: this can mean picking up a
crowbar or gun, moving into an attack position, or taking a shot at the Dark Knight. Meanwhile
in stealth segments, the armed guards will typically patrol, but are often reacting to
changes in the world: their buddies are taken down, they perhaps see the player swinging
past or hear exploding gel taking out a wall. In each case, these send inputs to the system
that dictate what to do in that capacity.

But despite the benefits of having this level
of modular control, there are a number of reasons why the AAA games industry has moved
away from FSMs over the past ten years, with the likes of behaviour trees, popularised by
Halo 2, as well as planning techniques such as F.E.A.R.’s Goal-Oriented Action Planning, challenging
the idea that FSMs are the default approach to take. There are several reasons for this,
but the two big ones are how labour-intensive they are to build and how poorly they
scale as the number of behaviours and transitions increases. The more possibilities
your design has to cater for, the more unique transitions between states need
to be captured, and in turn the more circumstances you need to be able to test, debug and
support.

Now one approach to resolving this is to use
HFSMs – Hierarchical Finite State Machines, which were originally conceived back in 1987.
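The core idea can be sketched as follows – a hypothetical Python illustration of my own, not code from any of the games discussed – where a state can itself contain a nested machine, and inputs the sub-machine doesn’t recognise bubble up to the parent:

```python
# Illustrative hierarchical FSM: a top-level machine whose states can
# themselves contain nested state machines. An input is offered to the
# innermost active machine first; if unrecognised there, it bubbles up.
# All state and input names here are hypothetical.

class Machine:
    def __init__(self, initial, transitions, children=None):
        self.state = initial
        self.transitions = transitions   # {(state, input): next_state}
        self.children = children or {}   # {state: nested Machine}

    def handle(self, event):
        # Let the active sub-machine try the event first.
        child = self.children.get(self.state)
        if child and child.handle(event):
            return True
        key = (self.state, event)
        if key in self.transitions:
            self.state = self.transitions[key]
            return True
        return False

combat = Machine("aim", {("aim", "in_range"): "melee",
                         ("melee", "out_of_range"): "aim"})
soldier = Machine("patrol",
                  {("patrol", "see_player"): "combat",
                   ("combat", "player_dead"): "patrol"},
                  children={"combat": combat})

soldier.handle("see_player")  # top level: patrol -> combat
soldier.handle("in_range")    # handled inside the combat sub-machine
print(soldier.state, combat.state)  # combat melee
```

The design choice here is that the parent only sees inputs its sub-machine declines, which is what keeps each layer of behaviour modular.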
In this instance, you can group states together such that a transition can either go to a
specific state as usual, or to a collection of states that have been built
to transition in specific ways. In essence, you could go so far as to build a state machine
that effectively transitions between more modular state machines, more carefully managing
the operation of specific behaviours and how entire subsets of behaviour move between one
another. Hierarchical FSMs were still being adopted as recently as 2016 as part of the AI toolchain
in id Tech 5, with the likes of MachineGames’ Wolfenstein: The New Order and the reboot
of DOOM still using them to full effect. But it is worth noting that while this still enables
some flexibility, the issues of scalability and complexity are nonetheless compounded
in a hierarchical finite state machine: you’ve essentially just shifted the issue
one level higher in the architecture.

So now that we’ve got the theory out of the way,
let’s chat about the AI of Half-Life. Back in 1998 this AI was pretty groundbreaking and
became highly influential in subsequent years, and I want to talk about why that was the
case. The actual C++ code is publicly accessible and can be found by downloading the Half-Life
SDK – I’ve linked to it in the description if you want to take a look. While quite dated
by current standards, many of the core principles of how state machines were adopted in subsequent
years – including the eventual introduction of planning in games such as F.E.A.R., which
we’ll return to in a future episode – can be seen in this codebase.

All non-player characters in Half-Life are
derived from a common Monster type: every AI character, be it a scientist,
security guard, headcrab, soldier or alien, is ultimately a ‘Monster’ in the C++ polymorphic
hierarchy. The core Monster class defines a state – but it isn’t the state of a finite
state machine; a ‘state’ in the Half-Life codebase reflects how the AI character is
operating at that time. This ranges from idle to alert, prone or even dead. In addition
to this, there are also conditions – which reflect the information that a Monster has
at that point in time – and also a set of sensors that help those conditions to be
updated. I’ll come back to this in a minute given these are important factors in dictating
how and why certain AI actions are executed.

The key part is that each Monster type is
capable of executing a variety of tasks. Each task essentially corresponds to a state of
the finite state machine, and there are over 80 unique tasks available for use. These range
from simple primitive behaviours such as facing a given object or crouching all the way to
walking paths through the game world, finding good points of cover for combat, dodging attacks
and playing sound effects.

Given the polymorphic nature of the code,
the base monster classes handle many of the commonly used tasks such as movement while
tasks that might require unique configuration per type can be handled in each of the individual
monster classes. This means that the security guards and scientists can have unique variations
of the same behaviour that better fit their roles within the game. Meanwhile different
soldier types can handle target acquisition and attacks in their own way.

But this won’t work in a purely
reactive capacity. If there are so many different tasks being executed, the system needs to
know what transitions it can make in the state machine as well as which tasks make sense
to run at a given point in time. So the Half-Life AI becomes more deliberative, meaning
that it needs to work through the finite state machine and transition from state to state
in practical and interesting ways that enable more deliberate and long-term behaviour
to be established. There are two ways that Half Life’s AI supports this, first through
schedules and then through goals.

Schedules glue together tasks in meaningful
ways, often resulting in macros of intelligent behaviour. There are around 40 unique schedules
in the game, with them often glueing together movement, attack, sound and animation actions
into a more cohesive behaviour. One thing that is important here is that tasks can’t
be merged or blended, hence if a character needs to get into cover given the player is
firing on them, you’ll notice that they’ll give up shooting at you in order to retreat
rather than laying down suppressive fire, given attacking you and running are
two distinct tasks in the system. In some cases, an AI character might require multiple
schedules to be executed in sequence in order to achieve an even more long-term behaviour
and that’s where goals come in handy. There are only five of them in the game, but in each
case, when active, a goal dictates that upon completing a given schedule, another one needs to be
selected that will help that goal be realised.

Now outside of goals, there are other ways
a schedule can change, and that’s either upon completion of the current one – where in many
instances the final task listed, TASK_SET_SCHEDULE, tells the system to select a new one
– or given the dynamic nature of the game, something in the world will cause a schedule
to become invalid and the monster needs to select a new one. Each schedule has its own
set of conditions that have to stay true in order for it to not only be selected, but
also continue to operate. Whether it’s ensuring the current schedule is still valid
or selecting a new one, that’s where the ‘state’ and ‘conditions’ I’ve mentioned
previously come in handy. The states are important given that an AI that is dead or incapacitated
is unable to make any decisions – and rightfully so. Meanwhile the conditions – which are how
the AI character sees the world – are updated based upon the execution of the schedule as
well as the new data received from vision, sound and smell sensors.

Look sensors are driven by line of sight for
a given AI within their respective view cones, while sound is based largely on whether a
sound effect should have been heard by another AI given their proximity to the point of
origin. But as mentioned, some of the monsters – notably the aliens – have a sense of smell,
and this is actually the same system as the audio, only with an inaudible sound event
being played in this instance.

The 32 conditions an AI can recognise are
binary in nature – given they’re stored in a 32 bit integer – meaning they’re either
true or false. This is a pretty compact method for storing a variety of information – such
as whether an enemy is visible, whether damage was received or a sound was heard – as well as two special
fields that can be customised by each monster type. Ultimately this data will help each
Monster type to decide whether a new schedule needs to be selected, given it might be that
the current schedule is now invalid or something has happened in the world that dictates it
needs to change its behaviour more drastically.

Whilst over 20 years old, the AI behind Half-Life is still very effective for its needs and is worth exploring for aspiring AI developers.
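If you want a feel for the pattern without reading the SDK, here’s a heavily simplified Python sketch of the schedule/task structure and bit-flag conditions described above – all names and numbers here are my own illustration, not the actual Half-Life code:

```python
# Simplified sketch of Half-Life-style schedules: a schedule is an
# ordered list of tasks plus an interrupt mask, and conditions are bit
# flags packed into a single integer. Names are illustrative only.

# Condition bit flags (the real game packs 32 of these into one int).
COND_SEE_ENEMY    = 1 << 0
COND_HEAR_SOUND   = 1 << 1
COND_LIGHT_DAMAGE = 1 << 2

class Schedule:
    def __init__(self, name, tasks, interrupt_mask):
        self.name = name
        self.tasks = tasks                    # ordered task names
        self.interrupt_mask = interrupt_mask  # conditions that invalidate it

class Monster:
    def __init__(self):
        self.conditions = 0  # bitfield of current COND_* flags
        self.schedule = None
        self.task_index = 0

    def set_conditions(self, flags):
        self.conditions |= flags

    def schedule_valid(self):
        # A schedule is broken when any of its interrupt conditions hold.
        return self.schedule and not (self.conditions & self.schedule.interrupt_mask)

    def run(self):
        if not self.schedule_valid():
            # Pick a new schedule based on current conditions.
            if self.conditions & COND_SEE_ENEMY:
                self.schedule = Schedule("attack",
                                         ["face_enemy", "range_attack"],
                                         COND_LIGHT_DAMAGE)
            else:
                self.schedule = Schedule("idle_stand",
                                         ["stop_moving", "wait"],
                                         COND_SEE_ENEMY | COND_HEAR_SOUND)
            self.task_index = 0
        return self.schedule.name

grunt = Monster()
print(grunt.run())                    # idle_stand (nothing seen yet)
grunt.set_conditions(COND_SEE_ENEMY)
print(grunt.run())                    # attack (idle interrupted by sighting)
```

The real system has far more conditions, schedules and selection logic, but the shape is similar: an interrupt mask checked against a condition bitfield decides when a new schedule is needed.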
Heck I’m pretty sure you can adapt this into your own games and get something that would
still be more than adequate for a small-tier indie project.

I hope this helps everyone out there better
understand the underlying theory of how finite state machines operate, their history in games
and you maybe even learned a thing or two about the games you love along the way. I’ve
listed some other useful resources on state machines in the video description for you
to check out, but also I want to help you make FSMs for building AI in your own games.
So be sure to check out my tutorial channel ‘Table Flip Games’, where you can find a series
of videos showing you how to build a simple AI character using a straightforward finite state machine
implementation in the Unity game engine.

Plus here on AI and Games, be sure to check
out my existing videos on Batman: Arkham Asylum as well as DOOM 2016 to deep dive into how
FSMs are used in AAA titles. With this topic completed, it leads us to deal with more deliberative
behaviour. As we saw in Half-Life, being purely reactive isn’t sufficient; you need your AI
to be able to make more long-term decisions, and this will lead us to a future episode
on AI 101 on automated planning. Specifically, we’ll take a look at the Goal Oriented Action
Planning system, meaning we can revisit the game that started the AI and Games channel
– Monolith’s 2005 horror shooter First Encounter Assault Recon.

Thanks for watching this episode of AI 101
here on AI and Games and don’t forget to like and subscribe for more on the AI of your favourite
games. AI 101, alongside my case study and design dive series, is sponsored by and voted
for by my supporters over on Patreon.com. If you want to join our community and have
a say in future episodes on the show, join the AI and Games patreon using the links on
screen and in the description. Thanks for watching and I’ll see y’all again soon.
