This presentation, given by Simon Clavet at GDC 2016, introduces the Motion Matching system Ubisoft developed for For Honor. Motion Matching is a new approach to next-gen locomotion, and it has since been adopted as the locomotion solution in The Last of Us Part II (2020) and Control (2019). I don't know whether Ubisoft uses Motion Matching in the Assassin's Creed series, but I hope it does.
Compared to other GDC presentations on Motion Matching, Ubisoft's is the most basic and easiest to understand, perhaps because it was the first time Motion Matching was presented at GDC. In this article, we will learn about Motion Matching by analyzing Simon Clavet's presentation.
1. About For Honor
For Honor is a hardcore fighting game that demands highly precise gameplay and believable, realistic animation. Since it is a PvP game, the game must respond to player input as quickly as possible in addition to delivering high-quality, realistic animation. To achieve this, Ubisoft opted for a code-driven approach (we will discuss data-driven vs. code-driven later).
2. Before Motion Matching
On the first day, god created the function PlayAnim().
Every time you want to play an animation, you ultimately call this function. This approach is very simple, but it becomes quite ineffective when dealing with complex states. Modern games no longer play animations this way. A hypothetical sketch of this style is shown below.
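To make the problem concrete, here is a minimal, hypothetical sketch of the PlayAnim() style. The `AnimPlayer` type and the clip names are purely illustrative and not taken from any particular engine:

```cpp
#include <cstdio>

// Hypothetical sketch: AnimPlayer and PlayAnim are illustrative names,
// not an actual engine API.
struct AnimPlayer {
    void PlayAnim(const char* clipName, float blendInTime) {
        // A real engine would start crossfading to the clip here.
        std::printf("PlayAnim(%s, blend=%.2fs)\n", clipName, blendInTime);
    }
};

// Gameplay code picks the clip directly; every new state adds another branch.
void UpdateLocomotionAnim(AnimPlayer& player, bool isAttacking, bool wantsToRun) {
    if (isAttacking)      player.PlayAnim("Attack_Light", 0.1f);
    else if (wantsToRun)  player.PlayAnim("Run_Fwd_Loop", 0.2f);
    else                  player.PlayAnim("Idle", 0.3f);
}

int main() {
    AnimPlayer player;
    UpdateLocomotionAnim(player, /*isAttacking=*/false, /*wantsToRun=*/true);
}
```

With a handful of states this is manageable, but every new move, stance, or transition adds another branch, which is exactly the maintenance problem described above.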
As games become more and more complex, players and developers have started to demand higher-quality animation. The old methods are quite ineffective under these complex requirements, so an increasing number of new approaches have been proposed. Among them, the one most familiar to us must be the Finite State Machine (FSM).
Finite state machines have become the mainstream animation technology for this generation of game development, and they indeed save us a lot of time and effort. However, as we move into next-generation games with more complex gameplay, state machines become so complicated that they are difficult to maintain.
The talented programmers of this generation have developed some useful tools to rescue the FSM, such as blend trees and features like linked animation blueprints in Unreal Engine. But it is time to find a new tool for describing complicated animation instead of the FSM.
So what is the problem with the FSM and the tools that extend it?
First of all, it is hard to name a complex animation. The presentation gives an example: would you really name an animation Start-Strafe90-TurnOnSpot45-Stop?
What's more, what happens if a blend tree has 3, 4, or even more parameters? It becomes very hard to arrange all the animations into a single tree; in this situation, we usually create a query table to choose the animation, as sketched below.
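Here is a small, hypothetical sketch of such a query table: the parameters are discretized into buckets and the clip is looked up directly instead of being arranged in one giant tree. The bucket scheme and clip names are my own assumptions for illustration:

```cpp
#include <cstdio>
#include <map>
#include <string>
#include <tuple>

// Hypothetical query table keyed by discretized locomotion parameters.
using AnimKey = std::tuple<int /*speed bucket*/, int /*direction bucket*/, int /*stance*/>;

static const std::map<AnimKey, std::string> kAnimTable = {
    {{0, 0, 0}, "Idle"},
    {{1, 0, 0}, "Walk_Fwd"},
    {{1, 1, 0}, "Walk_Left"},
    {{2, 0, 0}, "Run_Fwd"},
    {{2, 0, 1}, "Run_Fwd_Combat"},
};

std::string QueryAnim(int speedBucket, int directionBucket, int stance) {
    auto it = kAnimTable.find({speedBucket, directionBucket, stance});
    return it != kAnimTable.end() ? it->second : "Idle"; // fallback clip
}

int main() {
    std::printf("%s\n", QueryAnim(2, 0, 1).c_str()); // prints Run_Fwd_Combat
}
```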
Next, is there a unified way to handle common animation requirements like Start-Loop and transitions? For example, a walking animation is generally made up of Start-Loop-End, and transitions between animations are done through blending. It is actually difficult to decide when to move from Start into Loop (I think the speaker means there are too many details the programmer has to handle, such as the player wanting to stop right after Start begins; with a state machine, every one of these situations must be handled explicitly), and the same goes for transitions.
Finally, when an animation finishes, which animation should be chosen next? In an FSM we can predefine it, but what happens if we do not use an FSM? If the player cancels the input while the character is still playing the sprint animation, which animation should be chosen as the stopping animation?
Motion Graphs
A good tool for solving these problems is Motion Graphs. In fact, this tool is not new at all: it is a real-time animation technique invented in 2002 by a group of researchers at the University of Wisconsin for academic research. It is so old that I could only find the slides for it.
Its core idea is to programmatically measure the similarity of every frame across all animation clips (comparing velocity, acceleration, pose, etc.), precompute the transition points where clips can be joined, and generate the transition animations.
When player input arrives, the animation keeps playing until it reaches a predefined transition point and then transitions to the next animation clip. The predefined transition points guarantee that the two clips are similar at the moment of the transition, so the blend looks smooth. However, because of the gap between the player's input and the transition point, there is a delay in control feedback.
As the following image shows, the animation can only change once it reaches a transition point, which is not acceptable for gameplay; a small sketch of this limitation follows.
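This hedged sketch illustrates why Motion Graphs feel unresponsive: transitions are only allowed at precomputed frames where two clips were measured to be similar, so input has to wait for the next allowed jump. All names and numbers are illustrative:

```cpp
#include <cstdio>
#include <vector>

// Illustrative clip with precomputed transition points (seconds).
struct Clip {
    float length = 0.0f;
    std::vector<float> transitionPoints; // times where a jump to another clip is allowed
};

// How long we must keep playing before we may honor new input.
float TimeUntilNextTransition(const Clip& clip, float currentTime) {
    for (float t : clip.transitionPoints)
        if (t >= currentTime)
            return t - currentTime;   // wait until the next allowed jump
    return clip.length - currentTime; // otherwise finish the clip
}

int main() {
    Clip run{1.2f, {0.4f, 0.8f, 1.2f}};
    // The player presses "stop" at t = 0.45s, but the earliest allowed
    // transition is at 0.8s: a 0.35s delay in responsiveness.
    std::printf("delay = %.2fs\n", TimeUntilNextTransition(run, 0.45f));
}
```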
Motion Fields
We are getting closer! Motion Fields are the inspiration for the Motion Matching concept. Motion Fields is a system invented by researchers at the University of Washington for academic use. As they describe it:
We propose a structure called a motion field that finds and uses motion capture data similar to the character‘s current motion at any point. By consulting similar motions to determine which future behaviors are plausible, we ensure that our synthesized animation remains natural: similar, but rarely identical to the motion capture data. This frees the character from simply replaying the motion data, allowing it to move freely through the general vicinity of the data. Furthermore, because there are always multiple motion data to consult, the character constantly has a variety of ways to make quick changes in motion.
Compared to the predefined transition animation points in Motion Graphs, Motion Fields ensure smooth motion and timely control feedback, while the non-repetitiveness of animations makes the motion more natural, thus attracting the attention of the gaming industry.
However, the algorithm of Motion Fields is overly heavy and complex, and its large data volume and memory requirements could not be met by the mainstream consoles of that time, such as XBOX 360 and PS3. Therefore, Motion Fields were not suitable under those conditions. Nevertheless, the smooth and responsive blending effects of Motion Fields showed the gaming industry the potential for future game animations.
The theory behind Motion Fields is very similar to today's Motion Matching.
3. Motion Matching
The core algorithm is very simple: every frame, look at all the mocap data and jump to the best place.
Now let us introduce the concept of Cost: every candidate animation frame gets a cost value. If the candidate frame perfectly matches the current motion situation (pose and velocity) and the piece of motion that follows it perfectly fits the desired direction of travel (future trajectory), then the cost of that frame is 0. In other words, every candidate jumping point has a cost; if the candidate matches the current situation and the motion that follows brings us where we want to go, the cost is zero. A rough sketch of such a cost function is shown below.
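The following is only a hedged sketch of the cost described above, not For Honor's actual implementation: a candidate is scored by how well its pose and velocity match the current character and how well the motion that follows matches the desired future trajectory. The feature choice (foot positions, root velocity, three trajectory samples) and the weights are assumptions:

```cpp
#include <array>
#include <cmath>

struct Vec3 { float x = 0, y = 0, z = 0; };

static float Dist(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Assumed feature set: key joint positions, root velocity, and a few
// future root positions, all expressed in character space.
struct PoseFeatures {
    Vec3 leftFootPos, rightFootPos;
    Vec3 velocity;
    std::array<Vec3, 3> futureTrajectory; // e.g. 0.33s / 0.66s / 1.0s ahead
};

float MatchCost(const PoseFeatures& candidate,
                const PoseFeatures& current,
                float poseWeight = 1.0f,
                float velocityWeight = 1.0f,
                float trajectoryWeight = 1.5f) {
    float cost = 0.0f;
    cost += poseWeight * (Dist(candidate.leftFootPos,  current.leftFootPos) +
                          Dist(candidate.rightFootPos, current.rightFootPos));
    cost += velocityWeight * Dist(candidate.velocity, current.velocity);
    for (std::size_t i = 0; i < current.futureTrajectory.size(); ++i)
        cost += trajectoryWeight * Dist(candidate.futureTrajectory[i],
                                        current.futureTrajectory[i]);
    return cost; // 0 means a perfect match for both pose and desired trajectory
}
```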
We discussed how to calculate the future trajectory in the previous article of this series, but the actual trajectory calculation may be more complicated; the Motion Trajectory plugin in Unreal Engine provides a good example. One simple way to predict the trajectory from player input is sketched below.
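This is a hedged sketch of one common prediction scheme: step the root forward while exponentially blending the current velocity toward the velocity the stick is asking for. The blend constant and sample spacing are assumptions; real implementations (such as the Motion Trajectory plugin) are more involved:

```cpp
#include <cmath>
#include <vector>

struct Vec2 { float x = 0, y = 0; };

// Predict future root positions by simulating a simple velocity blend
// toward the desired velocity. All constants are illustrative.
std::vector<Vec2> PredictTrajectory(Vec2 position, Vec2 velocity,
                                    Vec2 desiredVelocity,
                                    int numSamples = 3,
                                    float sampleInterval = 0.33f,
                                    float responsiveness = 4.0f) {
    std::vector<Vec2> samples;
    const float dt = 1.0f / 60.0f;
    float t = 0.0f;
    for (int i = 0; i < numSamples; ++i) {
        const float target = (i + 1) * sampleInterval;
        while (t < target) {
            // Exponential approach of the velocity toward the desired velocity.
            const float alpha = 1.0f - std::exp(-responsiveness * dt);
            velocity.x += (desiredVelocity.x - velocity.x) * alpha;
            velocity.y += (desiredVelocity.y - velocity.y) * alpha;
            position.x += velocity.x * dt;
            position.y += velocity.y * dt;
            t += dt;
        }
        samples.push_back(position); // one trajectory sample per interval
    }
    return samples;
}
```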
In 2013, Ubisoft developers began experimenting with this approach. They use a set of animations as the input of the algorithm. The core part of the algorithm is as follows:
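As a rough illustration of that core loop, here is a minimal brute-force sketch of the same idea; the flattened feature layout and helper names are my own assumptions, not For Honor's actual code:

```cpp
#include <cstddef>
#include <limits>
#include <vector>

// One candidate jumping point in the mocap database.
struct CandidateFrame {
    int clipIndex = 0;
    float clipTime = 0.0f;
    std::vector<float> features; // pose + velocity + future trajectory, flattened
};

// Squared distance between two flattened feature vectors.
float FeatureDistance(const std::vector<float>& a, const std::vector<float>& b) {
    float sum = 0.0f;
    for (std::size_t i = 0; i < a.size() && i < b.size(); ++i) {
        const float d = a[i] - b[i];
        sum += d * d;
    }
    return sum;
}

// Brute-force search over the whole database; the winner is where we jump.
const CandidateFrame* FindBestFrame(const std::vector<CandidateFrame>& database,
                                    const std::vector<float>& queryFeatures) {
    const CandidateFrame* best = nullptr;
    float bestCost = std::numeric_limits<float>::max();
    for (const CandidateFrame& candidate : database) {
        const float cost = FeatureDistance(candidate.features, queryFeatures);
        if (cost < bestCost) {
            bestCost = cost;
            best = &candidate;
        }
    }
    return best; // then blend/jump to (best->clipIndex, best->clipTime)
}
```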
Here you might notice a problem: what happens if the animation database contains no animation that fits what we need? Obviously, the algorithm will fail, producing an output that does not match what we want. To avoid this, we have to ensure the animation database has large enough coverage of the motions we want to see in gameplay.
Mocap Dance Card
To ensure the animation database completely covers locomotion, developers at Ubisoft invented the Mocap Dance Card. A mocap dance card is a series of templated motions that the motion capture actors are asked to perform, covering everything on the list, including:
1. Walks and Runs.
2. Small repositions.
3. Starts and Stops.
4. Circles (Turns).
5. Plant and turn (foot down to change direction) 45, 90, 135 and 180 degrees.
6. Strafe in a square (forward, left, back, right – contains 90 degree plants).
7. Strafe plants (foot down though not turning) for 180 direction shifts.
Animation Play Rate
What if the character is moving so fast that none of the mocap clips in the animation database can match its speed? Or what if it is moving too slowly? A simple but useful idea is to scale the animation play rate so that the mocap clip matches the speed of the character.
This idea is used by Motion Matching (Pose Search) in Unreal Engine 5: in the details panel of the Motion Matching animation blueprint node, you can find Play Rate Min and Play Rate Max, which implement it. A sketch of the computation follows.
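Here is a hedged sketch of the play-rate idea: scale the clip's playback speed so that its root speed matches the character's, clamped to a min/max range in the spirit of the Play Rate Min / Play Rate Max settings mentioned above. The concrete values are illustrative:

```cpp
#include <algorithm>
#include <cstdio>

// Scale the clip's playback speed to match the character, within limits.
float ComputePlayRate(float characterSpeed, float animRootSpeed,
                      float playRateMin = 0.8f, float playRateMax = 1.25f) {
    if (animRootSpeed <= 0.0f)
        return 1.0f; // stationary clip: leave the rate alone
    const float rate = characterSpeed / animRootSpeed;
    return std::clamp(rate, playRateMin, playRateMax);
}

int main() {
    // Character moves at 4.4 m/s, the selected run clip was captured at 4.0 m/s.
    std::printf("play rate = %.2f\n", ComputePlayRate(4.4f, 4.0f)); // prints 1.10
}
```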
We will introduce the Motion Matching system in Unreal Engine 5, named Pose Search, in the following posts. At the time of writing (May 14, 2023) it is still experimental, which means there is no guarantee the plugin will be supported in future versions (Unity, for instance, has already given up on its own attempt).
Optimization
The presentation did not say much about optimization, but we know the optimizations may include LODs, KD-tree acceleration of the search, shaders, and so on.
Currently, Motion Matching is mainly used for locomotion rather than for gameplay abilities or combat animation in action games.
4. Data-Driven vs Code-Driven
So far, the developers who have used Motion Matching and presented it at GDC all used a code-driven approach in their games (For Honor, Control, The Last of Us Part II), for responsiveness and many other reasons. For Honor is a PvP game, so responsiveness is the most important factor, far more important than realism. In The Last of Us Part II, even though it is a single-player game and the developers did find a way to use a data-driven approach with Motion Matching, they still chose code-driven control for NPC navigation and NPC cinematic scene entry. The same goes for Control. It seems that code-driven will be the mainstream for Motion Matching in the future; the data-driven approach (root motion) is only used in a handful of animations, at least not in the locomotion system.
Obviously, before Motion Matching, the data-driven approach could not deliver responsive control even though the result looked realistic, while the code-driven approach was responsive but not as realistic. Motion Matching, however, makes code-driven locomotion look realistic, so that disadvantage of the code-driven approach disappears.
We will not cover code-driven versus data-driven in detail here, but if you would like to learn more, article 5 in the references will help you, provided you can read Chinese.
Reference