Now we focus on the motion matching implementation developed officially by Unreal Engine. The plugin is named PoseSearch, but its core theory is what we discussed in the last article.

1. Feature Vector

As we discussed before, motion matching requires a Feature Vector, which contains the elements we care about. For example, if we are interested in the rotation and the 2D velocity (ignoring the Z-axis component) of the character's feet, we first create a feature vector containing these elements:

V_{feature} =
\begin{bmatrix}
r_{foot_{lx}} \\
r_{foot_{ly}} \\
r_{foot_{lz}} \\
r_{foot_{rx}} \\
r_{foot_{ry}} \\
r_{foot_{rz}} \\
v_{foot_{lx}} \\
v_{foot_{ly}} \\
v_{foot_{rx}} \\
v_{foot_{ry}} \\
\end{bmatrix}

Then, a classical way to implement motion matching is to build a matrix from every frame in the animation database, taking each frame's feature vector as one column of the matrix. In the following expression, f_1 means frame 1.

M_{animation-database} =
\begin{bmatrix}
V_{f_1} & V_{f_2} & \dots & V_{f_n}
\end{bmatrix} \\ 
=
\begin{bmatrix}
r_{foot_{lx-f_1}} & r_{foot_{lx-f_2}} & \dots & r_{foot_{lx-f_n}}\\
r_{foot_{ly-f_1}} & r_{foot_{ly-f_2}} & \dots & r_{foot_{ly-f_n}}\\
r_{foot_{lz-f_1}} & r_{foot_{lz-f_2}} & \dots & r_{foot_{lz-f_n}}\\
r_{foot_{rx-f_1}} & r_{foot_{rx-f_2}} & \dots & r_{foot_{rx-f_n}}\\
r_{foot_{ry-f_1}} & r_{foot_{ry-f_2}} & \dots & r_{foot_{ry-f_n}}\\
r_{foot_{rz-f_1}} & r_{foot_{rz-f_2}} & \dots & r_{foot_{rz-f_n}}\\
v_{foot_{lx-f_1}} & v_{foot_{lx-f_2}} & \dots & v_{foot_{lx-f_n}}\\
v_{foot_{ly-f_1}} & v_{foot_{ly-f_2}} & \dots & v_{foot_{ly-f_n}}\\
v_{foot_{rx-f_1}} & v_{foot_{rx-f_2}} & \dots & v_{foot_{rx-f_n}}\\
v_{foot_{ry-f_1}} & v_{foot_{ry-f_2}} & \dots & v_{foot_{ry-f_n}}\\
\end{bmatrix}

For each gameplay frame, we build a query vector V_{query} from the current game state and find the column i of the animation database that satisfies:

i = \arg\min_{j \in \{1, 2, \ldots, n\}} \left\| V_{query} - V_{f_j} \right\|

The animation frame corresponding to column i of the matrix is then chosen to be played on this gameplay frame.
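To make this concrete, here is a minimal brute-force sketch of that search in C++. It is not plugin code; the flat row-major storage and the function name are my own choices for illustration.

#include <cstddef>
#include <limits>
#include <vector>

// Minimal brute-force matching sketch (not plugin code). The database stores
// one feature vector per animation frame, flattened into a single float array.
std::size_t FindBestFrame(const std::vector<float>& Database, // n * Cardinality floats
                          const std::vector<float>& Query,    // Cardinality floats
                          std::size_t Cardinality)
{
	const std::size_t NumFrames = Database.size() / Cardinality;
	std::size_t BestFrame = 0;
	float BestCost = std::numeric_limits<float>::max();

	for (std::size_t Frame = 0; Frame < NumFrames; ++Frame)
	{
		// Squared Euclidean distance between the query and this frame's column.
		float Cost = 0.f;
		for (std::size_t i = 0; i < Cardinality; ++i)
		{
			const float Delta = Database[Frame * Cardinality + i] - Query[i];
			Cost += Delta * Delta;
		}
		if (Cost < BestCost)
		{
			BestCost = Cost;
			BestFrame = Frame;
		}
	}
	return BestFrame;
}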

Now back to Unreal. In the PoseSearch plugin, many classes relate to the Feature Vector. The following UML diagram shows the classes involved in building a Feature Vector; not all properties and methods are given in the diagram, we only focus on those relevant to the feature vector.

In the last article, when discussing the motion matching in For Honor, we mentioned that Ubisoft composes the Feature Vector from the Pose and the Trajectory; for gameplay reasons, they also take the position of the weapon into account. Unreal Engine 5 calls each such part of the feature vector a channel.

Each channel has a member called ChannelCardinality. It is of type int32, but it is actually initialized from an enumeration declared in FFeatureVectorHelper:

/** Helper class for extracting and encoding features into a float buffer */
class POSESEARCH_API FFeatureVectorHelper
{
public:
	enum { EncodeQuatCardinality = 6 };
	...

	enum { EncodeVectorCardinality = 3 };
	...

	enum { EncodeVector2DCardinality = 2 };
	...

	enum { EncodeFloatCardinality = 1 };
	...
};

For example, a Velocity2D only uses two float members, so the cardinality of a vector2D is 2. A quaternion requires six float members to describe, so the cardinality of a quaternion is 6 (the EncodeQuat method below shows why six floats, rather than four, are used).

The Encode methods encode the target into a TArrayView. A TArrayView is a fixed-size, non-owning view over a contiguous buffer, similar in spirit to std::span.

// FFeatureVectorHelper
void FFeatureVectorHelper::EncodeQuat(TArrayView<float> Values, int32& DataOffset, const FQuat& Quat)
{
	// unitary quaternions are non Euclidean representation of rotation, so they cannot be used to calculate cost functions within the context of kdtrees, 
	// so we convert them in a matrix, and pick 2 axis (we choose X,Y), skipping the 3rd since correlated to the cross product of the first two (this saves memory and cpu cycles)

	const FMatrix M = Quat.ToMatrix();
	const FVector X = M.GetScaledAxis(EAxis::X);
	const FVector Y = M.GetScaledAxis(EAxis::Y);

	Values[DataOffset + 0] = X.X;
	Values[DataOffset + 1] = X.Y;
	Values[DataOffset + 2] = X.Z;
	Values[DataOffset + 3] = Y.X;
	Values[DataOffset + 4] = Y.Y;
	Values[DataOffset + 5] = Y.Z;

	DataOffset += EncodeQuatCardinality;
}

void FFeatureVectorHelper::EncodeVector2D(TArrayView<float> Values, int32& DataOffset, const FVector2D& Vector2D)
{
	Values[DataOffset + 0] = Vector2D.X;
	Values[DataOffset + 1] = Vector2D.Y;
	DataOffset += EncodeVector2DCardinality;
}
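Because each Encode method advances DataOffset by its own cardinality, several features can be encoded back to back into one buffer. A small usage sketch (the 8-float layout and the sample values are mine, purely for illustration):

// Sketch: encoding one rotation (6 floats) followed by one 2D velocity (2 floats).
float Buffer[8] = {};
TArrayView<float> Values(Buffer, 8);

const FQuat FootRotation = FQuat::Identity; // placeholder sample values
const FVector2D FootVelocity(1.f, 0.f);

int32 DataOffset = 0;
UE::PoseSearch::FFeatureVectorHelper::EncodeQuat(Values, DataOffset, FootRotation);     // DataOffset: 0 -> 6
UE::PoseSearch::FFeatureVectorHelper::EncodeVector2D(Values, DataOffset, FootVelocity); // DataOffset: 6 -> 8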

There is also a special channel, UPoseSearchFeatureChannel_GroupBase. It contains sub channels, and the cardinality of the group is the sum of the cardinalities of all its sub channels. A group channel describes a composed feature, like the pose we mentioned. For example, if we define the pose of foot_l and foot_r as their 2D velocity and location, then the pose channel is a group channel containing a velocity channel and a position channel.
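The group base channel's source is not pasted here, but given the behavior just described, its Finalize() essentially forwards to every sub channel and measures how much the schema grew. A simplified sketch of that shape (assumed from the described behavior, not copied from the plugin):

void UPoseSearchFeatureChannel_GroupBase::Finalize(UPoseSearchSchema* Schema)
{
	// The group starts where the schema currently ends...
	ChannelDataOffset = Schema->SchemaCardinality;

	// ...each sub channel appends its own cardinality to the schema...
	for (const TObjectPtr<UPoseSearchFeatureChannel>& SubChannelPtr : GetSubChannels())
	{
		if (SubChannelPtr)
		{
			SubChannelPtr->Finalize(Schema);
		}
	}

	// ...so the group cardinality is the sum of the sub channel cardinalities.
	ChannelCardinality = Schema->SchemaCardinality - ChannelDataOffset;
}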

In fact, Unreal Engine's pose search does have a pose channel which inherits from the group base channel and uses an enumeration bitmask to decide which features to take into account. Note in the code below that the Rotation flag expands into two Heading channels (axes X and Y), mirroring the two-axis rotation encoding we saw in EncodeQuat.

void UPoseSearchFeatureChannel_Pose::Finalize(UPoseSearchSchema* Schema)
{
	SubChannels.Reset();

	for (int32 ChannelBoneIdx = 0; ChannelBoneIdx != SampledBones.Num(); ++ChannelBoneIdx)
	{
		const FPoseSearchBone& SampledBone = SampledBones[ChannelBoneIdx];
		if (EnumHasAnyFlags(SampledBone.Flags, EPoseSearchBoneFlags::Position))
		{
			UPoseSearchFeatureChannel_Position* Position = NewObject<UPoseSearchFeatureChannel_Position>(this, NAME_None, RF_Transient);
			Position->Bone = SampledBone.Reference;
			Position->Weight = SampledBone.Weight * Weight;
			Position->SampleTimeOffset = 0.f;
			Position->ColorPresetIndex = SampledBone.ColorPresetIndex;
			Position->InputQueryPose = InputQueryPose;
			SubChannels.Add(Position);
		}

		if (EnumHasAnyFlags(SampledBone.Flags, EPoseSearchBoneFlags::Rotation))
		{
			UPoseSearchFeatureChannel_Heading* HeadingX = NewObject<UPoseSearchFeatureChannel_Heading>(this, NAME_None, RF_Transient);
			HeadingX->Bone = SampledBone.Reference;
			HeadingX->Weight = SampledBone.Weight * Weight;
			HeadingX->SampleTimeOffset = 0.f;
			HeadingX->HeadingAxis = EHeadingAxis::X;
			HeadingX->ColorPresetIndex = SampledBone.ColorPresetIndex;
			HeadingX->InputQueryPose = InputQueryPose;
			SubChannels.Add(HeadingX);

			UPoseSearchFeatureChannel_Heading* HeadingY = NewObject<UPoseSearchFeatureChannel_Heading>(this, NAME_None, RF_Transient);
			HeadingY->Bone = SampledBone.Reference;
			HeadingY->Weight = SampledBone.Weight * Weight;
			HeadingY->SampleTimeOffset = 0.f;
			HeadingY->HeadingAxis = EHeadingAxis::Y;
			HeadingY->ColorPresetIndex = SampledBone.ColorPresetIndex;
			HeadingY->InputQueryPose = InputQueryPose;
			SubChannels.Add(HeadingY);
		}

		if (EnumHasAnyFlags(SampledBone.Flags, EPoseSearchBoneFlags::Velocity))
		{
			UPoseSearchFeatureChannel_Velocity* Velocity = NewObject<UPoseSearchFeatureChannel_Velocity>(this, NAME_None, RF_Transient);
			Velocity->Bone = SampledBone.Reference;
			Velocity->Weight = SampledBone.Weight * Weight;
			Velocity->SampleTimeOffset = 0.f;
			Velocity->ColorPresetIndex = SampledBone.ColorPresetIndex;
			Velocity->InputQueryPose = InputQueryPose;
			Velocity->bUseCharacterSpaceVelocities = bUseCharacterSpaceVelocities;
			SubChannels.Add(Velocity);
		}

		if (EnumHasAnyFlags(SampledBone.Flags, EPoseSearchBoneFlags::Phase))
		{
			UPoseSearchFeatureChannel_Phase* Phase = NewObject<UPoseSearchFeatureChannel_Phase>(this, NAME_None, RF_Transient);
			Phase->Bone = SampledBone.Reference;
			Phase->Weight = SampledBone.Weight * Weight;
			Phase->ColorPresetIndex = SampledBone.ColorPresetIndex;
			Phase->InputQueryPose = InputQueryPose;
			SubChannels.Add(Phase);
		}
	}

	Super::Finalize(Schema);
}

All of the channel cardinalities finally compose the cardinality of the schema that contains these channels; to be exact, this happens when Finalize() is called. Take the Position channel as an example: it records its own channel data offset and grows the cardinality of the schema.

void UPoseSearchFeatureChannel_Position::Finalize(UPoseSearchSchema* Schema)
{
	ChannelDataOffset = Schema->SchemaCardinality;
	ChannelCardinality = UE::PoseSearch::FFeatureVectorHelper::EncodeVectorCardinality;
	Schema->SchemaCardinality += ChannelCardinality;

	SchemaBoneIdx = Schema->AddBoneReference(Bone);
}

int32 UPoseSearchSchema::AddBoneReference(const FBoneReference& BoneReference)
{
	return BoneReferences.AddUnique(BoneReference);
}

Each channel has a data offset, which describes the channel's start index within the schema's data. If the data offset of a position channel is 3, the channel's first element is the fourth element of the feature vector:

V_{feature}=
\begin{bmatrix}
.. & (DataOffset = 0) \\
.. & (DataOffset = 1) \\
.. & (DataOffset = 2) \\
x & (DataOffset = 3) \\
y & (DataOffset = 4)\\
z & (DataOffset = 5) \\
.. & (DataOffset = 6) \\
.. & (DataOffset = 7) \\
.. & (DataOffset = 8) \\
\end{bmatrix}
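With the offset and cardinality known, reading one channel's slice back out of a composed feature vector is straightforward. A hypothetical helper (not part of the plugin; it just pairs the two members we saw in Finalize()):

// Hypothetical helper, not plugin code: view one channel's floats inside
// a composed feature vector via its data offset and cardinality.
TConstArrayView<float> GetChannelSlice(TConstArrayView<float> FeatureVector, int32 ChannelDataOffset, int32 ChannelCardinality)
{
	return FeatureVector.Slice(ChannelDataOffset, ChannelCardinality);
}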

During Finalize(), each channel sets its cardinality from the FFeatureVectorHelper constants; later, when building the query, nearly every built-in channel (the group channel being the exception) calls the matching encode method, chosen according to its own cardinality. If a channel maintains a bone reference, it should also call AddBoneReference on the schema, and the schema will initialize the bone reference afterwards.

So when is Finalize() called on the channels? It is called by the schema's own Finalize(). Every operation that changes the schema asset calls the Finalize() method.

...
void UPoseSearchSchema::PreSave(FObjectPreSaveContext ObjectSaveContext)
{
	Finalize();
	Super::PreSave(ObjectSaveContext);
}

void UPoseSearchSchema::PostLoad()
{
	Super::PostLoad();
	Finalize();
}
...

#if WITH_EDITOR
void UPoseSearchSchema::PostEditChangeProperty(FPropertyChangedEvent& PropertyChangedEvent)
{
	Finalize();
	Super::PostEditChangeProperty(PropertyChangedEvent);
}

#endif
This image would be helpful if you are not familiar with the methods PostEditChangeProperty and PreEditChange.

As for the schema's Finalize() method: it clears the bone reference array, calls the Finalize() method of every channel, and then initializes all the bone references collected from the channels.

...
void UPoseSearchSchema::ResolveBoneReferences()
{
	BoneIndicesWithParents.Reset();
	if (Skeleton)
	{
		// Initialize references to obtain bone indices and fill out bone index array
		for (FBoneReference& BoneRef : BoneReferences)
		{
			BoneRef.Initialize(Skeleton);
			if (BoneRef.HasValidSetup())
			{
				BoneIndicesWithParents.Add(BoneRef.BoneIndex);
			}
		}

		// Build separate index array with parent indices guaranteed to be present. Sort for EnsureParentsPresent.
		BoneIndicesWithParents.Sort();
		FAnimationRuntime::EnsureParentsPresent(BoneIndicesWithParents, Skeleton->GetReferenceSkeleton());
	}

	// BoneIndicesWithParents should at least contain the root to support mirroring root motion
	if (BoneIndicesWithParents.IsEmpty())
	{
		BoneIndicesWithParents.Add(0);
	}
}
...
void UPoseSearchSchema::Finalize()
{
	BoneReferences.Reset();

	SchemaCardinality = 0;

	for (const TObjectPtr<UPoseSearchFeatureChannel>& ChannelPtr : Channels)
	{
		if (ChannelPtr)
		{
			ChannelPtr->Finalize(this);
		}
	}

	ResolveBoneReferences();
}

In short, the pose feature channels use the Finalize() method to describe how to create the Feature Vector, and the method is called by the schema. A schema describes how to create a feature vector; actually creating one happens at runtime, to be exact, when building a query.

2. Build Query

Searching and building a query require the cooperation of many classes. We show some of them in the following UML diagram and then dive into them. This section focuses on building the query.

First, the animation node: FAnimNode_MotionMatching calls the method UpdateAssetPlayer every frame. FAnimNode_MotionMatching is the node we use in the animation blueprint graph:

As we can see from the graph node, it has a UPoseSearchSearchableAsset, usually implemented as a UPoseSearchDatabase. In the update method, it calls UpdateMotionMatchingState(), which, despite the PoseSearchLibrary file name, lives in a namespace instead of a blueprint library class.

USTRUCT(BlueprintInternalUseOnly)
struct POSESEARCH_API FAnimNode_MotionMatching : public FAnimNode_AssetPlayerBase
{
	...
	// Collection of animations for motion matching
	UPROPERTY(EditAnywhere, BlueprintReadWrite, Category=Settings, meta=(PinShownByDefault))
	TObjectPtr<const UPoseSearchSearchableAsset> Searchable = nullptr;

	UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = Settings, meta = (PinShownByDefault))
	FGameplayTagContainer ActiveTagsContainer;

	// Motion trajectory samples for pose search queries
	UPROPERTY(EditAnywhere, BlueprintReadWrite, Category=Settings, meta=(PinShownByDefault))
	FTrajectorySampleRange Trajectory;
	...
	...
	virtual void UpdateAssetPlayer(const FAnimationUpdateContext& Context) override;
	...
};
void FAnimNode_MotionMatching::UpdateAssetPlayer(const FAnimationUpdateContext& Context)
{
	DECLARE_SCOPE_HIERARCHICAL_COUNTER_ANIMNODE(UpdateAssetPlayer);

	using namespace UE::PoseSearch;

	....

	// Execute core motion matching algorithm
	UpdateMotionMatchingState(
		Context,
		Searchable,
		&ActiveTagsContainer,
		Trajectory,
		Settings,
		MotionMatchingState,
		bForceInterrupt
	);

	...

UpdateMotionMatchingState() first creates an FSearchContext and then uses it to search.

void UpdateMotionMatchingState(
	const FAnimationUpdateContext& Context,
	const UPoseSearchSearchableAsset* Searchable,
	const FGameplayTagContainer* ActiveTagsContainer,
	const FTrajectorySampleRange& Trajectory,
	const FMotionMatchingSettings& Settings,
	FMotionMatchingState& InOutMotionMatchingState,
	bool bForceInterrupt
)
{
	QUICK_SCOPE_CYCLE_COUNTER(STAT_PoseSearch_Update);

	...

#if UE_POSE_SEARCH_TRACE_ENABLED
	// Record Current Pose Index for Debugger
	const FSearchResult LastResult = InOutMotionMatchingState.CurrentSearchResult;
#endif

	....

	// If we can't advance or enough time has elapsed since the last pose jump then search
	FSearchContext SearchContext;
	if (!bCanAdvance || (InOutMotionMatchingState.ElapsedPoseJumpTime >= Settings.SearchThrottleTime))
	{
		// Build the search context
		SearchContext.ActiveTagsContainer = ActiveTagsContainer;
		SearchContext.Trajectory = &Trajectory;
		SearchContext.OwningComponent = Context.AnimInstanceProxy->GetSkelMeshComponent();
		SearchContext.BoneContainer = &Context.AnimInstanceProxy->GetRequiredBones();
		SearchContext.bIsTracing = IsTracing(Context);
		SearchContext.bForceInterrupt = bForceInterrupt;
		SearchContext.bCanAdvance = bCanAdvance;
		SearchContext.CurrentResult = InOutMotionMatchingState.CurrentSearchResult;
		SearchContext.PoseJumpThresholdTime = Settings.PoseJumpThresholdTime;
		SearchContext.PoseIndicesHistory = &InOutMotionMatchingState.PoseIndicesHistory;

		IPoseHistoryProvider* PoseHistoryProvider = Context.GetMessage<IPoseHistoryProvider>();
		if (PoseHistoryProvider)
		{
			SearchContext.History = &PoseHistoryProvider->GetPoseHistory();
		}

		if (const FPoseSearchIndexAsset* CurrentIndexAsset = InOutMotionMatchingState.CurrentSearchResult.GetSearchIndexAsset())
		{
			SearchContext.QueryMirrorRequest =
				CurrentIndexAsset->bMirrored ?
				EPoseSearchBooleanRequest::TrueValue :
				EPoseSearchBooleanRequest::FalseValue;
		}

		// Search the database for the nearest match to the updated query vector
		FSearchResult SearchResult = Searchable->Search(SearchContext);

		...
}

So the motion matching system does not build the query until the call stack reaches the UPoseSearchSearchableAsset. We take the implementation UPoseSearchDatabase as an example; it is also the official implementation provided by Unreal Engine.

When its Search method is called, the UPoseSearchDatabase first picks the proper search path according to its optimization mode; we take the KD-Tree mode as an example. The internal search method then builds a query containing the Feature Vector, which will shortly be used to search for the best animation frame.

UE::PoseSearch::FSearchResult UPoseSearchDatabase::Search(UE::PoseSearch::FSearchContext& SearchContext) const
{
	using namespace UE::PoseSearch;

	FSearchResult Result;

#if WITH_EDITOR
	if (!FAsyncPoseSearchDatabasesManagement::RequestAsyncBuildIndex(this, ERequestAsyncBuildFlag::ContinueRequest))
	{
		return Result;
	}
#endif

	if (PoseSearchMode == EPoseSearchMode::BruteForce || PoseSearchMode == EPoseSearchMode::PCAKDTree_Compare)
	{
		Result = SearchBruteForce(SearchContext);
	}

	if (PoseSearchMode != EPoseSearchMode::BruteForce)
	{
#if WITH_EDITORONLY_DATA
		FPoseSearchCost BruteForcePoseCost = Result.BruteForcePoseCost;
#endif

		Result = SearchPCAKDTree(SearchContext);

#if WITH_EDITORONLY_DATA
		Result.BruteForcePoseCost = BruteForcePoseCost;
		if (PoseSearchMode == EPoseSearchMode::PCAKDTree_Compare)
		{
			check(Result.BruteForcePoseCost.GetTotalCost() <= Result.PoseCost.GetTotalCost());
		}
#endif
	}
	
	return Result;
}

UE::PoseSearch::FSearchResult UPoseSearchDatabase::SearchPCAKDTree(UE::PoseSearch::FSearchContext& SearchContext) const
{
	QUICK_SCOPE_CYCLE_COUNTER(STAT_PoseSearch_PCA_KNN);
	SCOPE_CYCLE_COUNTER(STAT_PoseSearchPCAKNN);

	using namespace UE::PoseSearch;

	FSearchResult Result;

	const int32 NumDimensions = Schema->SchemaCardinality;
	const FPoseSearchIndex& SearchIndex = GetSearchIndex();

	const uint32 ClampedNumberOfPrincipalComponents = GetNumberOfPrincipalComponents();
	const uint32 ClampedKDTreeQueryNumNeighbors = FMath::Clamp<uint32>(KDTreeQueryNumNeighbors, 1, SearchIndex.NumPoses);

	//stack allocated temporaries
	TArrayView<size_t> ResultIndexes((size_t*)FMemory_Alloca((ClampedKDTreeQueryNumNeighbors + 1) * sizeof(size_t)), ClampedKDTreeQueryNumNeighbors + 1);
	TArrayView<float> ResultDistanceSqr((float*)FMemory_Alloca((ClampedKDTreeQueryNumNeighbors + 1) * sizeof(float)), ClampedKDTreeQueryNumNeighbors + 1);
	RowMajorVectorMap WeightedQueryValues((float*)FMemory_Alloca(NumDimensions * sizeof(float)), 1, NumDimensions);
	RowMajorVectorMap CenteredQueryValues((float*)FMemory_Alloca(NumDimensions * sizeof(float)), 1, NumDimensions);
	RowMajorVectorMap ProjectedQueryValues((float*)FMemory_Alloca(ClampedNumberOfPrincipalComponents * sizeof(float)), 1, ClampedNumberOfPrincipalComponents);

	SearchContext.GetOrBuildQuery(this, Result.ComposedQuery);

	TConstArrayView<float> QueryValues = Result.ComposedQuery.GetValues();

	const bool IsCurrentResultFromThisDatabase = SearchContext.IsCurrentResultFromDatabase(this);

	...
  ...
}

Soon it calls UPoseSearchSchema::BuildQuery, which in turn calls the BuildQuery method of each channel.

We take the position channel as an example. The code is full of boundary checks and other details, but it can be expressed simply and mathematically as:

T_{sample\_local\_pose}[\,] = SampleFromHistory(-sample\_time) \\
T_{sample\_component\_pose}[i] = T_{sample\_local\_pose}[i] \cdot T_{sample\_component\_pose}[parent(i)] \quad \text{for each bone } i \\
V_{position\_feature} = GetLocation(T_{sample\_component\_pose}[bone])
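In plain C++ terms, the accumulation step looks roughly like the sketch below; it assumes bones are sorted so that parents precede children, and it is a simplification, not the plugin's actual code.

// Sketch: compose component-space transforms from sampled local-space
// transforms, assuming ParentIndices[BoneIdx] < BoneIdx (parents first).
void LocalToComponentSpace(TConstArrayView<FTransform> LocalPose, TConstArrayView<int32> ParentIndices, TArray<FTransform>& OutComponentPose)
{
	OutComponentPose.SetNum(LocalPose.Num());
	for (int32 BoneIdx = 0; BoneIdx < LocalPose.Num(); ++BoneIdx)
	{
		const int32 ParentIdx = ParentIndices[BoneIdx];
		OutComponentPose[BoneIdx] = (ParentIdx == INDEX_NONE)
			? LocalPose[BoneIdx]                                // root bone
			: LocalPose[BoneIdx] * OutComponentPose[ParentIdx]; // child * parent
	}
}

// The position feature is then OutComponentPose[SchemaBoneIdx].GetTranslation().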



void UPoseSearchSchema::BuildQuery(UE::PoseSearch::FSearchContext& SearchContext, FPoseSearchFeatureVectorBuilder& InOutQuery) const
{
	QUICK_SCOPE_CYCLE_COUNTER(STAT_PoseSearch_BuildQuery);

	InOutQuery.Init(this);

	for (const TObjectPtr<UPoseSearchFeatureChannel>& ChannelPtr : Channels)
	{
		if (ChannelPtr)
		{
			ChannelPtr->BuildQuery(SearchContext, InOutQuery);
		}
	}
}
void UPoseSearchFeatureChannel_Position::BuildQuery(UE::PoseSearch::FSearchContext& SearchContext, FPoseSearchFeatureVectorBuilder& InOutQuery) const
{
	using namespace UE::PoseSearch;

	check(InOutQuery.GetSchema());
	const bool bIsCurrentResultValid = SearchContext.CurrentResult.IsValid();
	const bool bSkip = InputQueryPose != EInputQueryPose::UseCharacterPose && bIsCurrentResultValid && SearchContext.CurrentResult.Database->Schema == InOutQuery.GetSchema();
	if (bSkip || !SearchContext.History)
	{
		if (bIsCurrentResultValid)
		{
			const float LerpValue = InputQueryPose == EInputQueryPose::UseInterpolatedContinuingPose ? SearchContext.CurrentResult.LerpValue : 0.f;
			int32 DataOffset = ChannelDataOffset;
			FFeatureVectorHelper::EncodeVector(InOutQuery.EditValues(), DataOffset, SearchContext.GetCurrentResultPrevPoseVector(), SearchContext.GetCurrentResultPoseVector(), SearchContext.GetCurrentResultNextPoseVector(), LerpValue);
		}
		// else leave the InOutQuery set to zero since the SearchContext.History is invalid and it'll fail if we continue
	}
	else
	{
		FTransform Transform;
		if (InOutQuery.GetSchema()->BoneReferences[SchemaBoneIdx].HasValidSetup())
		{
			// calculating the Transform in component space for the bone indexed by SchemaBoneIdx
			Transform = SearchContext.TryGetTransformAndCacheResults(SampleTimeOffset, InOutQuery.GetSchema(), SchemaBoneIdx);

			// if the SampleTimeOffset is not zero we calculate the delta root bone between the root at current time (zero) and the root at SampleTimeOffset (in the past)
			// so we can offset the Transform by this amount
			if (!FMath::IsNearlyZero(SampleTimeOffset, UE_KINDA_SMALL_NUMBER))
			{
				const FTransform RootTransform = SearchContext.TryGetTransformAndCacheResults(0.f, InOutQuery.GetSchema(), FSearchContext::SchemaRootBoneIdx);
				const FTransform RootTransformPrev = SearchContext.TryGetTransformAndCacheResults(SampleTimeOffset, InOutQuery.GetSchema(), FSearchContext::SchemaRootBoneIdx);
				Transform = Transform * (RootTransformPrev * RootTransform.Inverse());
			}
		}
		else
		{
			check(SearchContext.Trajectory);
			// @todo: make this call consistent with Transform = SearchContext.TryGetTransformAndCacheResults(SampleTimeOffset, InOutQuery.GetSchema(), FSearchContext::SchemaRootBoneIdx);
			const FTrajectorySample TrajectorySample = SearchContext.Trajectory->GetSampleAtTime(SampleTimeOffset);
			Transform = TrajectorySample.Transform;
		}

		int32 DataOffset = ChannelDataOffset;
		FFeatureVectorHelper::EncodeVector(InOutQuery.EditValues(), DataOffset, Transform.GetTranslation());
	}
}

The following image shows the call stack of BuildQuery.

3. Search

The KD-Tree path is much faster than brute force, but the brute-force path is much easier to understand. The UPoseSearchDatabase gets the result and passes it back to the animation blueprint node.

UE::PoseSearch::FSearchResult UPoseSearchDatabase::SearchBruteForce(UE::PoseSearch::FSearchContext& SearchContext) const
{
	QUICK_SCOPE_CYCLE_COUNTER(STAT_PoseSearch_Brute_Force);
	SCOPE_CYCLE_COUNTER(STAT_PoseSearchBruteForce);
	
	using namespace UE::PoseSearch;
	
	FSearchResult Result;

	const FPoseSearchIndex& SearchIndex = GetSearchIndex();

	SearchContext.GetOrBuildQuery(this, Result.ComposedQuery);
	TConstArrayView<float> QueryValues = Result.ComposedQuery.GetValues();

	const bool IsCurrentResultFromThisDatabase = SearchContext.IsCurrentResultFromDatabase(this);
	if (!SearchContext.bForceInterrupt && IsCurrentResultFromThisDatabase)
	{
		// evaluating the continuing pose only if it hasn't already being evaluated and the related animation can advance
		if (SearchContext.bCanAdvance && !Result.ContinuingPoseCost.IsValid())
		{
			Result.PoseIdx = SearchContext.CurrentResult.PoseIdx;
			Result.PoseCost = SearchIndex.ComparePoses(Result.PoseIdx, SearchContext.QueryMirrorRequest, EPoseComparisonFlags::ContinuingPose, Schema->MirrorMismatchCostBias, QueryValues);
			Result.ContinuingPoseCost = Result.PoseCost;

			if (GetSkipSearchIfPossible())
			{
				SearchContext.UpdateCurrentBestCost(Result.PoseCost);
			}
		}
	}

	// since any PoseCost calculated here is at least SearchIndex.MinCostAddend,
	// there's no point in performing the search if CurrentBestTotalCost is already better than that
	if (SearchContext.GetCurrentBestTotalCost() > SearchIndex.MinCostAddend)
	{
		FNonSelectableIdx NonSelectableIdx;
		PopulateNonSelectableIdx(NonSelectableIdx, SearchContext, this, QueryValues);
		check(Algo::IsSorted(NonSelectableIdx));

		const FPoseFilters PoseFilters(Schema, NonSelectableIdx, SearchIndex.OverallFlags);
		for (int32 PoseIdx = 0; PoseIdx < SearchIndex.NumPoses; ++PoseIdx)
		{
			if (PoseFilters.AreFiltersValid(SearchIndex, QueryValues, PoseIdx, SearchIndex.PoseMetadata[PoseIdx]
#if UE_POSE_SEARCH_TRACE_ENABLED
				, SearchContext, this
#endif
			))
			{
				const FPoseSearchCost PoseCost = SearchIndex.ComparePoses(PoseIdx, SearchContext.QueryMirrorRequest, EPoseComparisonFlags::None, Schema->MirrorMismatchCostBias, QueryValues);
				if (PoseCost < Result.PoseCost)
				{
					Result.PoseCost = PoseCost;
					Result.PoseIdx = PoseIdx;
				}

#if UE_POSE_SEARCH_TRACE_ENABLED
				if (PoseSearchMode == EPoseSearchMode::BruteForce)
				{
					SearchContext.BestCandidates.Add(PoseCost, PoseIdx, this, EPoseCandidateFlags::Valid_Pose);
				}
#endif
			}
		}

		if (GetSkipSearchIfPossible() && Result.PoseCost.IsValid())
		{
			SearchContext.UpdateCurrentBestCost(Result.PoseCost);
		}
	}
	else
	{
#if UE_POSE_SEARCH_TRACE_ENABLED
		// calling just for reporting non selectable poses
		FNonSelectableIdx NonSelectableIdx;
		PopulateNonSelectableIdx(NonSelectableIdx, SearchContext, this, QueryValues);
#endif
	}

	// finalizing Result properties
	if (Result.PoseIdx != INDEX_NONE)
	{
		Result.AssetTime = SearchIndex.GetAssetTime(Result.PoseIdx, Schema->GetSamplingInterval());
		Result.Database = this;
	}

#if WITH_EDITORONLY_DATA
	Result.BruteForcePoseCost = Result.PoseCost; 
#endif

	return Result;
}
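In the loop above, the per-pose cost comes from SearchIndex.ComparePoses. That function is not pasted here, but at its core it is a weighted squared Euclidean distance between the query and the candidate feature vector, plus bias terms such as the mirror mismatch cost. A simplified sketch of that core, with names and signature of my own:

// Simplified sketch of the core of the pose cost (not the plugin's exact code):
// a weighted squared Euclidean distance between query and candidate vectors.
float WeightedSquaredDistance(TConstArrayView<float> Query, TConstArrayView<float> Candidate, TConstArrayView<float> WeightsSqrt)
{
	float Cost = 0.f;
	for (int32 i = 0; i < Query.Num(); ++i)
	{
		const float Diff = WeightsSqrt[i] * (Query[i] - Candidate[i]);
		Cost += Diff * Diff;
	}
	return Cost;
}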

In conclusion, we do not need to implement our own UPoseSearchSearchableAsset unless we need an algorithm much faster than the KD-Tree method. Usually we only need to implement our own channel, although the built-in channels should be enough for every gameplay requirement I have seen so far.
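If a project does need a custom feature, the pattern above is the whole recipe: override Finalize() to reserve space in the schema, and BuildQuery() to encode live gameplay data into that space. A hypothetical sketch (the channel class, its data source GetStaminaSomehow(), and the EncodeFloat helper symmetrical to the encoders shown earlier are all assumptions for illustration):

// Hypothetical custom channel, for illustration only.
void UMyFeatureChannel_Stamina::Finalize(UPoseSearchSchema* Schema)
{
	// Reserve one float in the schema, exactly like the built-in channels do.
	ChannelDataOffset = Schema->SchemaCardinality;
	ChannelCardinality = UE::PoseSearch::FFeatureVectorHelper::EncodeFloatCardinality;
	Schema->SchemaCardinality += ChannelCardinality;
}

void UMyFeatureChannel_Stamina::BuildQuery(UE::PoseSearch::FSearchContext& SearchContext, FPoseSearchFeatureVectorBuilder& InOutQuery) const
{
	// Encode the gameplay value at this channel's offset in the query vector.
	int32 DataOffset = ChannelDataOffset;
	UE::PoseSearch::FFeatureVectorHelper::EncodeFloat(InOutQuery.EditValues(), DataOffset, GetStaminaSomehow());
}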

By JiahaoLi

Hypergryph, Game Programmer (2023 - now); Shandong University, Bachelor (2019 - 2023)
