
Finite State Machines in Javascript

I’ve been talking a lot about Behavior Trees (BTs) lately, partially because I’m using them for my PhD. But although BTs provide a powerful and flexible tool to model game agents, this method still has problems.

Suppose you want to model a bunch of sheep (just like in my last Ludum Dare game, “Baa Ram Ewe”). These sheep have simple behaviors: “run from cursor”, “stay near to neighbor sheep”, “don’t collide with neighbor sheep” and “follow velocity and direction of neighbors”. A sheep can also have 4 states: “idle” when it is just eating grass, “obey” when it is being herded by the player (using the mouse), “stopping” between obey and idle, and “fear” when a predator is near. The behaviors are always executing, but they may have different weights for different states of the sheep. For example, when a sheep is “obey”-ing, it tries to stay near other sheep more than when it is eating grass or running scared.

Modeling this as a Behavior Tree is hard because:

  1. BTs don’t really model states well. There is no default mechanism to define or consult which state an agent is in; and
  2. All behaviors are executed each tick, thus this agent wouldn’t exploit the BT’s advantage of constrained execution.

Notice that you can still model these sheep with BTs, but the final model would be a lot more complex than it would be using other, simpler methods.

In previous posts, I also talked about how Behavior Trees have several advantages over Finite State Machines (FSMs). But in cases like this, an FSM is a lot more useful and considerably easier to use than a BT.

Implementation

Like my Behavior Tree implementation, I want to use a single instance of an FSM to control multiple agents, so if a game has 100 creatures using the same behaviors, only a single FSM instance is needed, saving a lot of memory. To do this, each agent must have its own memory, which is used by the FSM and the states to store and retrieve internal information. This memory is also useful to store sensorial information, such as the distance to the nearest obstacles, the last enemy position, etc.

First, consider that all states and machines have a unique id, created using the following function:
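The original listing did not survive in this copy, so below is a minimal sketch of such an id generator (a random, UUID-style string; the exact implementation in the post may differ):

```javascript
// Sketch: generate a random UUID-like string to identify states and machines.
function createUUID() {
  var s = [];
  var hexDigits = '0123456789abcdef';
  for (var i = 0; i < 36; i++) {
    s[i] = hexDigits.substr(Math.floor(Math.random() * 16), 1);
  }
  s[14] = '4';                                                    // version 4
  s[19] = hexDigits.substr((parseInt(s[19], 16) & 0x3) | 0x8, 1); // variant bits
  s[8] = s[13] = s[18] = s[23] = '-';
  return s.join('');
}
```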

and to simplify inheritance, we will use the Class function:
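Again the code block is missing from this copy; a simple prototypal-inheritance helper along these lines would do (a sketch, not necessarily the original):

```javascript
// Sketch: Class(Base) returns a constructor whose prototype inherits from
// Base and which calls `initialize` with the constructor arguments.
function Class(baseClass) {
  var cls = function () {
    this.initialize.apply(this, arguments);
  };

  if (baseClass) {
    var inter = function () {};
    inter.prototype = baseClass.prototype;
    cls.prototype = new inter();
    cls.prototype.constructor = cls;
  }

  if (!cls.prototype.initialize) {
    cls.prototype.initialize = function () {};
  }

  return cls;
}
```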

We will use a Blackboard as memory for our agents. Notice that this is the same blackboard used in my behavior tree implementation.
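For reference, here is a simplified sketch of such a blackboard: a per-agent key/value store with an optional scope id, so several machines (or trees) can share the same memory object without clashing.

```javascript
// Simplified Blackboard sketch (per-agent memory).
var Blackboard = Class();

Blackboard.prototype.initialize = function () {
  this._baseMemory = {};    // global (per-agent) memory
  this._scopeMemory = {};   // memory scoped by machine/tree/node id
};

Blackboard.prototype._getMemory = function (scopeId) {
  if (!scopeId) return this._baseMemory;
  if (!this._scopeMemory[scopeId]) this._scopeMemory[scopeId] = {};
  return this._scopeMemory[scopeId];
};

Blackboard.prototype.set = function (key, value, scopeId) {
  this._getMemory(scopeId)[key] = value;
};

Blackboard.prototype.get = function (key, scopeId) {
  return this._getMemory(scopeId)[key];
};
```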

We will also use a state object that implements the following methods:

  • enter: called by the FSM when a transition occurs and this state becomes the current one;
  • exit: called by the FSM when a transition occurs and this state is no longer the current one; and
  • tick: called by the FSM at every tick of the machine. This method contains the actual behavior code for each state.
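A base state could then look like the sketch below; concrete states subclass it and override these methods. Passing the target and the memory into every method, rather than storing them on the state, is what keeps the state shareable between agents; the exact signature is my assumption, consistent with the FSM methods listed next.

```javascript
// Base State sketch: concrete states subclass this and override the methods.
var State = Class();

State.prototype.initialize = function () {
  this.id = createUUID();
};

State.prototype.enter = function (target, memory) {};
State.prototype.exit  = function (target, memory) {};
State.prototype.tick  = function (target, memory) {};
```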

Our FSM will have the following methods:

  • add(name, state): adds a new state to the FSM, identified by a unique name.
  • get(name): returns the state instance registered in the FSM, given a name.
  • list(): returns the list of state names in the FSM.
  • name(memory): returns the name of the current state. It can be null if there is no current state.
  • to(name, target, memory): performs a transition from the current state to the state with the provided name.
  • tick(target, memory): ticks the FSM, which propagates the tick to the current state.

Notice that some methods must receive the blackboard and the target object as parameters, which can be a little annoying – this is the downside of using a single FSM to control multiple agents – but the cost is small compared to the gain in memory.

The target parameter is usually the agent being controlled, but in practice it can be any kind of object, such as DOM elements, functions or variables.
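Since the original listing is not reproduced here, the sketch below shows one way the machine could be implemented on top of the pieces above. The key idea is that the name of the current state lives in the agent's memory (scoped by the machine id), not in the FSM itself.

```javascript
// FSM sketch: one instance can drive any number of agents, because all
// per-agent data (the current state name) is kept in the agent's memory.
var FSM = Class();

FSM.prototype.initialize = function () {
  this.id = createUUID();
  this._states = {};
};

FSM.prototype.add = function (name, state) {
  state.machine = this;          // convenience back-reference (sketch only)
  this._states[name] = state;
  return this;
};

FSM.prototype.get = function (name) {
  return this._states[name];
};

FSM.prototype.list = function () {
  return Object.keys(this._states);
};

FSM.prototype.name = function (memory) {
  return memory.get('currentState', this.id) || null;
};

FSM.prototype.to = function (name, target, memory) {
  var current = this.name(memory);
  if (current) this._states[current].exit(target, memory);

  memory.set('currentState', name, this.id);
  this._states[name].enter(target, memory);
};

FSM.prototype.tick = function (target, memory) {
  var current = this.name(memory);
  if (current) this._states[current].tick(target, memory);
};
```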

Example

Using a simple Boiding algorithm, we have 3 states: “idle”, “obey” and “stopping”.

Use the mouse to move the white balls:
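The interactive demo is embedded in the original post; as a rough idea of how it could be wired, the hypothetical sketch below defines the three states on top of the classes above (the weight values, the distanceToCursor helper and the sheeps array are illustrative, not the demo's actual code):

```javascript
// Hypothetical wiring of the example: each state mostly changes the weights
// the boiding rules will use for this sheep.
var IdleState = Class(State);
IdleState.prototype.tick = function (target, memory) {
  target.weights = {separation: 1.0, alignment: 0.2, cohesion: 0.2};
  if (target.distanceToCursor() < 150) this.machine.to('obey', target, memory);
};

var ObeyState = Class(State);
ObeyState.prototype.tick = function (target, memory) {
  target.weights = {separation: 1.0, alignment: 0.8, cohesion: 0.8};
  if (target.distanceToCursor() >= 150) this.machine.to('stopping', target, memory);
};

var StoppingState = Class(State);
StoppingState.prototype.enter = function (target, memory) {
  memory.set('stopTime', Date.now(), this.id);
};
StoppingState.prototype.tick = function (target, memory) {
  target.weights = {separation: 1.0, alignment: 0.4, cohesion: 0.4};
  if (Date.now() - memory.get('stopTime', this.id) > 1000) {
    this.machine.to('idle', target, memory);
  }
};

// One machine for all sheep; each sheep only carries its own blackboard.
var machine = new FSM();
machine.add('idle', new IdleState());
machine.add('obey', new ObeyState());
machine.add('stopping', new StoppingState());

var sheeps = [];   // placeholder for the demo's agents
sheeps.forEach(function (sheep) {
  sheep.memory = new Blackboard();
  machine.to('idle', sheep, sheep.memory);
});
```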



Links to Understand The Boiding Algorithm

Ludum Dare 31, the last jam of the year, ended up very well for me. I created “baa ram ewe“, a game where you must herd sheep with the mouse, moving them from a patch of thin grass to a plentiful pasture. The game has good – and solid, for an experimental game made in 48 hours – mechanics and good overall aesthetics; I really liked the final result. I also received a lot of positive feedback, mostly asking me to create a mobile version, which I decided to do.

Baa Ram Ewe uses a boiding algorithm (also known as a flocking algorithm) to move the sheep, to keep them together, and to avoid obstacles and dangerous elements. The boiding algorithm is a method to simulate the collective movement of animals, such as fish, birds, sheep, etc. For example, take a look at the following video, which shows a simulation of buffaloes running:

The algorithm is very simple. All agents in the simulation follow a set of simple rules. These rules define how each agent moves according to its neighboring flock-mates. The interaction between the agents generates an emergent behavior, as you can see in the video.

The rules used in this kind of simulation are really simple. Commonly, all boiding applications include the following ones:

  • Separation: the agent avoids crowding its nearest flock-mates by steering away from them;
  • Alignment: the agent tries to steer toward the average heading of its nearest flock-mates;
  • Cohesion: the agent tries to move toward the average position of its nearest flock-mates.

You can see a visual example of these rules in the figure below (copied from the site of Craig Reynolds, the creator of this algorithm).

[Figure: boiding_rules. (a) separation rule; (b) alignment rule; and (c) cohesion rule. From http://www.red3d.com/cwr/boids/]

In Baa Ram Ewe, I used the algorithms presented by Conrad Parker on his site as a basis. Summarizing, I have a FLOCKING function that moves the sheep according to a set of rules:
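The listing itself is not reproduced in this copy; the sketch below shows the general shape of such a function, loosely following Conrad Parker's pseudocode (rule names, weights and the speed limit are illustrative):

```javascript
// Flocking sketch: each rule returns a velocity contribution for one sheep.
function flocking(sheeps, rules) {
  sheeps.forEach(function (sheep) {
    rules.forEach(function (rule) {
      var v = rule(sheep, sheeps);   // contribution of this rule
      sheep.vx += v.x;
      sheep.vy += v.y;
    });

    // Limit the speed and integrate the position.
    var speed = Math.sqrt(sheep.vx * sheep.vx + sheep.vy * sheep.vy);
    var maxSpeed = 3;
    if (speed > maxSpeed) {
      sheep.vx = (sheep.vx / speed) * maxSpeed;
      sheep.vy = (sheep.vy / speed) * maxSpeed;
    }
    sheep.x += sheep.vx;
    sheep.y += sheep.vy;
  });
}

// Example rule: cohesion, weighted by the sheep's current state weights.
function cohesion(sheep, sheeps) {
  var cx = 0, cy = 0, n = 0;
  sheeps.forEach(function (other) {
    if (other === sheep) return;
    cx += other.x;
    cy += other.y;
    n++;
  });
  if (n === 0) return {x: 0, y: 0};
  var w = sheep.weights.cohesion / 100;   // move a small fraction of the way
  return {x: (cx / n - sheep.x) * w, y: (cy / n - sheep.y) * w};
}
```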

As you can see, you can define any number of rules, but be careful with that! More rules mean more complexity, and you probably won’t be able to generate the behavior you want. Check out Conrad Parker’s site for a nice description of the basic rules.



Behavior3JS, First Release!

After some weeks working on this, I finally released the first version of Behavior3JS, my JavaScript library for behavior trees.

http://behavior3js.guineashots.com

I wrote 2 tutorials this month about implementing a behavior tree from scratch (here and here), whose content is based on this library. Together with the core classes and nodes, I also released an online visual editor, where you can design your behavior tree with custom nodes and export it to the JSON format.

[Figure: b3editor]

Unfortunately, I couldn’t release the visual debugger together with it (I will travel next week, so I preferred to release what I had right now), but this is certainly the priority for the next versions.

Take a look at this project on GitHub: https://github.com/renatopp/behavior3js

I also recommend taking a look at the user guide, so you can get an idea of how this works: https://github.com/renatopp/behavior3js/wiki


An Introduction to Behavior Trees – Part 3

This post is the third and last part of the Introduction to Behavior Trees series. This part explains some implementation details of BTs.

Fast links for other parts and tutorials:

If you’re looking for actual code to use, check out Behavior3JS: https://github.com/renatopp/behavior3js


Decorator Examples

Inverter

Like the NOT operator, the inverter decorator negates the result of its child node, i.e., the SUCCESS state becomes FAILURE, and FAILURE becomes SUCCESS. Notice that the inverter does not change RUNNING or ERROR states, as described in the algorithm below.
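The pseudo-code listing is missing from this copy, so here is a minimal JavaScript sketch. It assumes the four state constants defined in it and that a child node exposes a tick(target, memory) method (this signature is an assumption for illustration, not the Behavior3JS API).

```javascript
// State constants assumed by the decorator sketches in this post.
var SUCCESS = 1, FAILURE = 2, RUNNING = 3, ERROR = 4;

// Inverter: swap SUCCESS and FAILURE; pass RUNNING and ERROR through.
function inverterTick(child, target, memory) {
  var status = child.tick(target, memory);
  if (status === SUCCESS) return FAILURE;
  if (status === FAILURE) return SUCCESS;
  return status;
}
```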

Succeeder

Succeeder is a decorator that always returns SUCCESS, no matter what its child returns. This is especially useful for debugging and testing purposes. The algorithm below shows the pseudo-code for this node.
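Again a sketch, using the same constants and child interface as the inverter above. Taken literally from the description, the succeeder reports SUCCESS even while the child is still running; some implementations prefer to pass RUNNING through.

```javascript
// Succeeder: tick the child for its side effects, but always report SUCCESS.
function succeederTick(child, target, memory) {
  child.tick(target, memory);
  return SUCCESS;
}
```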

Failer

The inverse of the succeeder, this decorator returns FAILURE for any child result, as shown below.
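A matching sketch, with the same assumptions as above:

```javascript
// Failer: tick the child, but always report FAILURE.
function failerTick(child, target, memory) {
  child.tick(target, memory);
  return FAILURE;
}
```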

Repeater

The repeater decorator sends the tick signal to its child every time its child returns a SUCCESS or FAILURE value, or when this decorator receives a tick. Additionally, a maximum number of repetitions can be provided.
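One possible sketch, with the repetition loop contained in a single tick call (a real implementation might instead count repetitions across ticks in the agent's memory); a negative maxLoops means "repeat forever".

```javascript
// Repeater: re-tick the child whenever it finishes, up to maxLoops times.
function repeaterTick(child, target, memory, maxLoops) {
  var i = 0;
  while (maxLoops < 0 || i < maxLoops) {
    var status = child.tick(target, memory);
    if (status === RUNNING || status === ERROR) return status;
    i++;
  }
  return SUCCESS;
}
```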

Repeat Until Fail

This decorator keeps calling its child until the child returns a FAILURE value. When this happens, the decorator returns SUCCESS.
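A sketch under the same assumptions; the child is expected to eventually return FAILURE or RUNNING, otherwise this loops within the tick.

```javascript
// RepeatUntilFail: keep ticking the child until it fails, then succeed.
function repeatUntilFailTick(child, target, memory) {
  while (true) {
    var status = child.tick(target, memory);
    if (status === RUNNING || status === ERROR) return status;
    if (status === FAILURE) return SUCCESS;
    // on SUCCESS, loop and tick the child again
  }
}
```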

Repeat Until Succeed

Similar to the previous one, this decorator calls its child until it returns SUCCESS.

Limiter

This decorator imposes a maximum number of calls its child can have within the whole execution of the Behavior Tree, i.e., after a certain number of calls, its child will never be called again. The limiter pseudo-code is described in the algorithm below.
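A sketch that keeps the per-agent call counter in a blackboard-like memory, scoped by this decorator's id (the get/set interface is an assumption, in line with the memory pools discussed at the end of this post):

```javascript
// Limiter: allow at most maxCalls completed executions of the child.
function limiterTick(child, target, memory, maxCalls, nodeId) {
  var count = memory.get('callCount', nodeId) || 0;
  if (count >= maxCalls) return FAILURE;

  var status = child.tick(target, memory);
  if (status === SUCCESS || status === FAILURE) {
    memory.set('callCount', count + 1, nodeId);
  }
  return status;
}
```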

Max Time

Max Time limits the maximum time its child can be running. If the child does not complete its execution before the maximum time, the child task is terminated and a failure is returned, as shown in the algorithm below.
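A sketch with the same memory assumptions as the limiter; maxTime is in milliseconds and the start time is stored per agent, scoped by the decorator's id.

```javascript
// MaxTime: fail if the child has been running for longer than maxTime ms.
function maxTimeTick(child, target, memory, maxTime, nodeId) {
  if (!memory.get('startTime', nodeId)) {
    memory.set('startTime', Date.now(), nodeId);
  }

  var status = child.tick(target, memory);
  if (Date.now() - memory.get('startTime', nodeId) > maxTime) {
    status = FAILURE;
  }
  if (status !== RUNNING) {
    memory.set('startTime', null, nodeId);   // reset for the next execution
  }
  return status;
}
```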

Prioritizing Behaviors

Considering the composite nodes described in the previous post, the Behavior Tree traversal is performed with a dynamic depth-first algorithm (notice that, except for the Parallel node, all composite nodes run one child at a time, in order from left to right). This procedure allows a definition of priority among behaviors in the tree: the left branch of the tree (starting from the root) contains the high-priority behaviors while the right branch contains the low-priority behaviors.

In order to exploit this, the designer must put important behaviors such as self-preservation and collision avoidance in the left branches and low-priority behaviors such as idle or rest in the right branches. Notice that, depending on the agent, it may be necessary to add an unconditional behavior as the lowest priority in order to keep the tree executing at least that behavior.

Treating Running States

One common question when implementing a Behavior Tree is: what should be done in the next tick after a node returns a running state? There are two answers: starting the graph traversal from the running node, or starting it over from the first node.

The major drawback of starting the tick at the node that returned the running state is that this node can take too much time running an action, thus preventing the execution of more important behaviors. For instance, suppose a robot is performing an action to follow a certain path; halfway there the robot finds a hole, but it cannot avoid it because the tree is stuck running the path-following action.

Therefore, the best option is to always start the tree over; if a more important behavior wants to run, the previously running node is stopped.

Composite Node Extensions

Node*: Remembering Running Nodes

When we start the tree traversal at the root every tick, even after some node returned a running state, we can fall into the following situation. Suppose an agent performing a patrol behavior, as illustrated in the figure below. Now suppose that the agent completed the first action (“go to point A”) at tick 1 and then the second action (“go to point B”) started, returning a running state. At tick 2, the traversal starts from the top and reaches the first action again, which will send the robot back to point A, thus never completing the second action and never even calling the third one.

[Figure: problem_running]

The * extension over the Priority and Sequence nodes overcomes this problem by recording the last child that returned RUNNING. The figure below shows the same example as above, but now using a sequence with the * extension. After the first action (“go to point A”) is completed, the second action (“go to point B”) is executed and returns a RUNNING state. In the next tick, the sequence node does not execute the first action, but jumps directly to the second one.

[Figure: problem_running_2]

This extension is only valid for the Sequence and Priority nodes. The Parallel node does not suffer from the problem described here, because it does not execute its children sequentially, but concurrently.
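As an illustration, a Sequence* node could be sketched as below, reusing the constants and the blackboard-style memory from the decorator sketches above (the scoping scheme is again an assumption):

```javascript
// Sequence*: remember the index of the running child and resume from it.
function memSequenceTick(children, target, memory, nodeId) {
  var start = memory.get('runningChild', nodeId) || 0;

  for (var i = start; i < children.length; i++) {
    var status = children[i].tick(target, memory);

    if (status === RUNNING) {
      memory.set('runningChild', i, nodeId);   // resume here next tick
      return status;
    }
    if (status !== SUCCESS) {                  // FAILURE or ERROR
      memory.set('runningChild', 0, nodeId);
      return status;
    }
  }

  memory.set('runningChild', 0, nodeId);
  return SUCCESS;
}
```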

Node~: Probabilistic Choice

When an agent performs exactly the same sequence of actions in a given situation, the agent may become predictable. The simplest way to avoid this predictability is to use random choices in some nodes. For example, an agent with a grasp behavior can randomly choose the left or right hand to grasp some object.

The ~ extension allows Sequence and Priority nodes to choose their children randomly, instead of selecting them sequentially. A ~ node randomly chooses one of its children to execute, then ignores the already-selected ones and chooses another, until all children have been selected.

[Figure: problem_random]

A common use of this extension is to choose children with an equiprobable distribution, but other distributions can be used by weighting the choices.
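For example, a ~ node could pick its next child with a weighted draw such as the sketch below (weights are arbitrary positive numbers; already-executed children would simply be excluded from the lists):

```javascript
// Weighted random pick among the remaining children.
function weightedChoice(children, weights) {
  var total = weights.reduce(function (a, b) { return a + b; }, 0);
  var r = Math.random() * total;

  for (var i = 0; i < children.length; i++) {
    r -= weights[i];
    if (r <= 0) return children[i];
  }
  return children[children.length - 1];   // numerical safety net
}
```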

Multi-Parenting

Behavior Trees provide the advantage of allowing the reuse of conditions, actions or even subtrees. The reuse of subtrees can be done in two ways: creating a new instance of the branch or using the same branch with multiple parents. Duplicating subtrees has a major drawback in memory usage, because it consumes more memory than necessary to represent the same thing, especially when duplicating large subtrees. On the other hand, multiple parents save memory but hurt the visualization of the tree, decreasing readability.

The solution to this problem is to use a hybrid approach: the duplication of subtrees happens only at the level of control and visualization, while multi-parenting is used at the level of implementation and execution. Notice that the duplication of the subtrees is only virtual and does not apply to the real structure of the tree.

Handling Persistent Data

Autonomous intelligent agents, whether virtual or real, need an internal state to store their beliefs about the world. These beliefs may include information from the perception system (e.g., the last known position of the enemy; the last known positions of allies; etc.) and other computed information (e.g., world position).

Similarly, some actions in a Behavior Tree may need to store information (e.g., total time running the child node; time since the last action; total failures; etc.), but we do not want to store this inside the node, because it would tie the node to a specific agent, therefore forcing the use of a different tree for each agent. Running a different tree for each agent is a waste of resources and may be impractical.

[Figure: problem_memory]

A common approach to fulfill the requirement for persistent data is to use memory pools, which store information about the world and allow behaviors to read and write information on them. Notice that these memory pools are maintained individually for each agent, thus allowing a single Behavior Tree to be shared by hundreds of agents.


An Introduction to Behavior Trees – Part 2

This is the second part of the tutorial on Behavior Trees for games and robotics.

Fast links for other parts and tutorials:

If you’re looking for actual code to use, check out Behavior3JS: https://github.com/renatopp/behavior3js


The Behavior Tree (BT) was born in the game industry and was quickly adopted and adapted by game developers around the world. However, the formalism required by academia was developed later, mainly because of its use in robotics. The lack of documentation and formalism, in conjunction with the fast adaptation of the BT, resulted in inconsistencies (of names, structure and definition) among papers and tutorials, for both robotics and games. We follow the definitions of (Marzinotto, 2014) and (Ogren, 2012) to describe how a Behavior Tree is structured and how it works.

As discussed in the previous post, Behavior Trees provide some improvements over Finite State Machines, with advantages such as:

  • maintainability: transitions in a BT are defined by the structure, not by conditions inside the states. Because of this, nodes can be designed independently of each other; thus, when adding or removing nodes (or even subtrees) in a small part of the tree, it is not necessary to change other parts of the model.
  • scalability: when a BT has many nodes, it can be decomposed into small subtrees, preserving the readability of the graphical model.
  • reusability: due to the independence of nodes in a BT, the subtrees are also independent. This allows the reuse of nodes or subtrees among other trees or projects.
  • goal-oriented: although the nodes of a BT are independent, they are still related through the tree structure of the model. This allows the designer to build specific subtrees for a given goal without losing the flexibility of the model.
  • parallelization: a BT can specify parallel nodes, which run all their children at the same time without losing control of the model execution. This is possible because the parallelization is locally contained within the parallel node.

Despite the name, the Behavior Tree is actually defined as a directed acyclic graph, because a node can have many parents. This structure may not be found in some papers, but it has some advantages in reusability and performance. Multi-parenting is discussed in part 3 of this series.

The model is composed of nodes and edges. For a pair of nodes connected by an edge, the outgoing node is called the parent and the incoming node is the child. There is no limit on how many children a node can have. The child-less nodes are called leaves, while the parent-less node is called the root; the nodes that stand between the root and the leaves can be of two types, composite or decorator nodes. Each subtree defines a different behavior, which can be a simple one (composed of a few nodes) or a complex behavior (composed of a large number of nodes).

The root of the BT generates a signal (called a tick) periodically, following a frequency f. The tick is propagated through the tree branches according to the algorithm defined by each node type. When the tick reaches a leaf, the node performs some computation and returns a state value (SUCCESS, FAILURE, RUNNING or ERROR). Then the returned value is propagated back through the tree according to each node type. The process is completed when a state value is returned to the root. Notice that the tick frequency of the tree is independent of the control-loop frequency of the agent.

Node Types

The node types are divided into 3 categories: composite, decorator and leaf, which are described in detail below. In the node descriptions, I present a graphical example of how to use each node in a real situation and I also present their algorithms. The algorithms may use the Tick function in the statement Tick(child[i]), which triggers the algorithm corresponding to the i-th child node.

Leaf Nodes

The leaf nodes are the primitive building blocks of the behavior tree. These nodes do not have any children, thus they do not propagate the tick signal; instead, they perform some computation and return a state value. There are two types of leaf nodes (conditions and actions), categorized by their responsibility.

Conditions

A condition node checks whether a certain condition has been met or not. In order to accomplish this, the node must have a target variable (e.g., a piece of perception information such as “obstacle distance” or “other agent visibility”, or an internal variable such as “battery level” or “hunger level”) and a criterion on which to base the decision (e.g., “obstacle distance > 100m?” or “battery power < 10%?”). These nodes return SUCCESS if the condition has been met and FAILURE otherwise. Notice that conditions do not return RUNNING nor change values of the system. Graphically, condition nodes are represented by gray ellipses, as shown in the image below.

[Figure: node_condition]

Actions

Action nodes perform computations to change the agent state. The implementation of actions depends on the agent type, e.g., the actions of a robot may involve sending motor signals, sending sounds through speakers or turning on lights, while the actions of an NPC may involve executing animations, performing spatial transformations, playing a sound, etc.

Notice that actions need not be only external (i.e., actions that change the environment as a result of changes to the agent); they can be internal too, e.g., registering logs, saving files, changing internal variables, etc.

An action returns SUCCESS if it could be completed; returns FAILURE if, for any reason, it could not be finished; or returns RUNNING while the action is still executing. The action node is represented as a gray box, as shown in the figure below, which presents 4 different example actions.

[Figure: node_action]

Composite Nodes

A composite node can have one or more children. The node is responsible for propagating the tick signal to its children, respecting some order. A composite node must also decide which of its children's state values to return, and when, in the case of SUCCESS or FAILURE. Notice that when a child returns RUNNING or ERROR, the composite node must return that state immediately. All composite nodes are represented graphically as a white box with a certain symbol inside.

Priority

The priority node (sometimes called selector) ticks its children sequentially until one of them returns SUCCESS, RUNNING or ERROR. If all children return the failure state, the priority also returns FAILURE.

For instance, suppose that a cleaning robot has a behavior to turn itself off, as shown in the figure below. When the robot tries to turn itself off, the first action is performed: the robot tries to get back to its charging dock and turn off all its systems; but if this action fails for some reason (e.g., it could not find the dock), an emergency shutdown is performed.

[Figure: node_selector]
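The post's algorithm listings are missing from this copy; a minimal JavaScript sketch of the priority tick is shown below, assuming each child exposes a tick(target, memory) method and the four state values described later in this post (the exact signature is an assumption for illustration).

```javascript
// State values assumed by the composite-node sketches in this post.
var SUCCESS = 1, FAILURE = 2, RUNNING = 3, ERROR = 4;

// Priority (selector): return the first non-FAILURE result.
function priorityTick(children, target, memory) {
  for (var i = 0; i < children.length; i++) {
    var status = children[i].tick(target, memory);
    if (status !== FAILURE) return status;   // SUCCESS, RUNNING or ERROR
  }
  return FAILURE;                            // every child failed
}
```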

Sequence

The sequence node ticks its children sequentially until one of them returns FAILURE, RUNNING or ERROR. If all children return the success state, the sequence also returns SUCCESS.

The figure below presents an example of the sequence node in a behavior that could be part of a real unmanned aerial vehicle (UAV) or a jet plane in a war game. In this behavior, the agent first verifies whether there is a missile following it; if so, the agent fires flares and then performs an evasive maneuver. Notice that, if there is no missile, there is no reason to fire flares nor to perform an evasive maneuver.

[Figure: node_sequence]
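A matching sketch, mirroring the priority node with the roles of SUCCESS and FAILURE swapped (same constants and assumptions as above):

```javascript
// Sequence: return the first non-SUCCESS result.
function sequenceTick(children, target, memory) {
  for (var i = 0; i < children.length; i++) {
    var status = children[i].tick(target, memory);
    if (status !== SUCCESS) return status;   // FAILURE, RUNNING or ERROR
  }
  return SUCCESS;                            // every child succeeded
}
```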

Parallel

The parallel node ticks all children at the same time, allowing them to work in parallel. This node is a way to use concurrency in Behavior Trees. Notice that, using this node, the parallelization is contained locally, avoiding the loss of control over the execution flow that happens in FSMs.

Parallel nodes return SUCCESS if the number of succeeding children is larger than a local constant S (this constant may be different for each parallel node); return FAILURE if the number of failing children is larger than a local constant F; or return RUNNING otherwise.

As an example of the use of a parallel node, consider an agent that is an intelligent house (real or virtual). The house has a behavior to turn the lights on and play music when it identifies that a human has just entered the room.

[Figure: node_parallel]
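A sketch of the parallel tick under the same assumptions; S and F are the local thresholds described above.

```javascript
// Parallel: tick every child, then compare the counters to the thresholds.
function parallelTick(children, target, memory, S, F) {
  var succeeded = 0, failed = 0;

  for (var i = 0; i < children.length; i++) {
    var status = children[i].tick(target, memory);
    if (status === ERROR) return ERROR;      // propagate errors immediately
    if (status === SUCCESS) succeeded++;
    if (status === FAILURE) failed++;
  }

  if (succeeded > S) return SUCCESS;         // "larger than S", as in the text
  if (failed > F) return FAILURE;
  return RUNNING;
}
```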

Decorator Nodes

Decorators are special nodes that can have only a single child. The goal of the decorator is to change the behavior of its child by manipulating the returned value or changing its ticking frequency. For example, a decorator may invert the result state of its child, similar to the NOT operator, or it can repeat the execution of the child a predefined number of times. The figure below shows an example of the decorator “Repeat 3x”, which will execute the action “ring bell” three times before returning a state value.

[Figure: node_decorator]

There is no default algorithm for decorators; it depends on their purpose. In the next post I will present some common decorators used in games.

State Values

Following the definition proposed in (Champanard, 2012), we consider four different values for the node states:

  • SUCCESS: returned when a criterion has been met by a condition node or an action node has been completed successfully;
  • FAILURE: returned when a criterion has not been met by a condition node or an action node could not finish its execution for any reason;
  • RUNNING: returned when an action node has been initialized but is still waiting for its resolution;
  • ERROR: returned when some unexpected error happens in the tree, probably due to a programming error (e.g., trying to access an undefined variable). Its use depends on the final implementation of the leaf nodes.