Camera management is one of the 3Cs and perhaps the most important one for creating a great gameplay feel. Building an effective camera system is not easy: it touches physics, rendering, and several other modules. At a high level, camera management in Unreal Engine is driven by APlayerCameraManager. The Player Camera Manager is one of the most important parts of the gameplay framework, yet its documentation is rather sparse. This post is an introduction to the camera system.
1. Introduction to APlayerCameraManager
Consider this: does the player camera manager belong to a character? Definitely not. A player camera manager should be a member of a player controller, so that the camera system keeps working even when there is no character.
If we attach a UCameraComponent (with a USpringArmComponent) to a character, it is a bad idea to detach that camera component and translate it to wherever a cinematic camera should be placed. The player controller, meanwhile, keeps controlling the same pawn, so the switch should happen at a higher level.
Think of a cinematic camera blend in God of War: I do not know exactly how they implement it, but my guess is that they do not move the gameplay camera that follows Kratos.
As a result, the player camera manager should be a member of the player controller. It is the class in charge of determining where the player's point of view is, and it does so through a variety of techniques. Here is what the official documentation says about it:
A PlayerCameraManager is responsible for managing the camera for a particular player. It defines the final view properties used by other systems (e.g. the renderer), meaning you can think of it as your virtual eyeball in the world. It can compute the final camera properties directly, or it can arbitrate/blend between other objects or actors that influence the camera (e.g. blending from one CameraActor to another).
— Unreal Engine Documentation
2. How Does the Player Camera Manager Determine Which Camera To Use?
First of all: what is a view target? Imagine a scene containing a character with a camera component and a cinematic camera owned by a sequencer; the latter is used for the cinematic that plays after the player triggers an event.
(1). View Target
A view target is like an eyeball. There are several eyeballs in the scene, and the camera manager has to decide which one is currently the eyeball of the game.
Unreal Engine uses FTViewTarget to describe a view target:
/** A ViewTarget is the primary actor the camera is associated with. */
USTRUCT(BlueprintType)
struct ENGINE_API FTViewTarget
{
GENERATED_USTRUCT_BODY()
public:
/** Target Actor used to compute POV */
UPROPERTY(EditAnywhere, BlueprintReadWrite, Category=TViewTarget)
TObjectPtr<class AActor> Target;
/** Computed point of view */
UPROPERTY(EditAnywhere, BlueprintReadWrite, Category=TViewTarget)
struct FMinimalViewInfo POV;
protected:
/** PlayerState (used to follow same player through pawn transitions, etc., when spectating) */
UPROPERTY(EditAnywhere, BlueprintReadWrite, Category=TViewTarget)
TObjectPtr<class APlayerState> PlayerState;
public:
class APlayerState* GetPlayerState() const { return PlayerState; }
void SetNewTarget(AActor* NewTarget);
class APawn* GetTargetPawn() const;
bool Equal(const FTViewTarget& OtherTarget) const;
FTViewTarget()
: Target(nullptr)
, PlayerState(nullptr)
{}
/** Make sure ViewTarget is valid */
void CheckViewTarget(APlayerController* OwningController);
};
As you can see, a view target is associated with an actor, and it has a POV member of type FMinimalViewInfo, which is discussed next.
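In practice you rarely build an FTViewTarget by hand; you usually go through the controller and its camera manager. A minimal sketch (not from the engine or this post's project) of reading the current view target and the resulting camera transform, assuming PC is a valid APlayerController*:
// Query the current view target and the camera's final transform.
// PC is assumed to be a valid APlayerController* with a spawned camera manager.
AActor* CurrentTarget = PC->GetViewTarget();
const FVector CameraLocation = PC->PlayerCameraManager->GetCameraLocation();
const FRotator CameraRotation = PC->PlayerCameraManager->GetCameraRotation();
UE_LOG(LogTemp, Log, TEXT("Viewing %s from %s facing %s"),
    *GetNameSafe(CurrentTarget), *CameraLocation.ToString(), *CameraRotation.ToString());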
(2). Minimal View Info
An FMinimalViewInfo describes the parameters of the camera, and it also provides the API for blending view-info parameters. It is a large struct, so only part of it is listed here.
USTRUCT(BlueprintType)
struct FMinimalViewInfo
{
GENERATED_USTRUCT_BODY()
/** Location */
UPROPERTY(EditAnywhere, BlueprintReadWrite, Category=Camera)
FVector Location;
/** Rotation */
UPROPERTY(EditAnywhere, BlueprintReadWrite, Category=Camera)
FRotator Rotation;
/** The horizontal field of view (in degrees) in perspective mode (ignored in orthographic mode). */
UPROPERTY(EditAnywhere, BlueprintReadWrite, Category=Camera)
float FOV;
/** The originally desired horizontal field of view before any adjustments to account for different aspect ratios */
UPROPERTY(Transient)
float DesiredFOV;
/** The desired width (in world units) of the orthographic view (ignored in Perspective mode) */
UPROPERTY(EditAnywhere, BlueprintReadWrite, Category=Camera)
float OrthoWidth;
/** The near plane distance of the orthographic view (in world units) */
UPROPERTY(Interp, EditAnywhere, BlueprintReadWrite, Category=Camera)
float OrthoNearClipPlane;
...
...
Its BlendViewInfo method blends two view infos together, which is useful when dealing with camera transitions. Creating seamless camera transitions is always a big challenge for game developers.
void FMinimalViewInfo::BlendViewInfo(FMinimalViewInfo& OtherInfo, float OtherWeight)
{
Location = FMath::Lerp(Location, OtherInfo.Location, OtherWeight);
const FRotator DeltaAng = (OtherInfo.Rotation - Rotation).GetNormalized();
Rotation = Rotation + OtherWeight * DeltaAng;
FOV = FMath::Lerp(FOV, OtherInfo.FOV, OtherWeight);
OrthoWidth = FMath::Lerp(OrthoWidth, OtherInfo.OrthoWidth, OtherWeight);
OrthoNearClipPlane = FMath::Lerp(OrthoNearClipPlane, OtherInfo.OrthoNearClipPlane, OtherWeight);
OrthoFarClipPlane = FMath::Lerp(OrthoFarClipPlane, OtherInfo.OrthoFarClipPlane, OtherWeight);
OffCenterProjectionOffset = FMath::Lerp(OffCenterProjectionOffset, OtherInfo.OffCenterProjectionOffset, OtherWeight);
AspectRatio = FMath::Lerp(AspectRatio, OtherInfo.AspectRatio, OtherWeight);
bConstrainAspectRatio |= OtherInfo.bConstrainAspectRatio;
bUseFieldOfViewForLOD |= OtherInfo.bUseFieldOfViewForLOD;
}
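To make the semantics concrete, here is a minimal sketch (the values are made up) of moving 30% of the way from a gameplay POV toward a cinematic POV:
// Two hypothetical view infos, filled with made-up values.
FMinimalViewInfo GameplayPOV;
GameplayPOV.Location = FVector(0.f, 0.f, 150.f);
GameplayPOV.Rotation = FRotator(-10.f, 0.f, 0.f);
GameplayPOV.FOV = 90.f;
FMinimalViewInfo CinematicPOV;
CinematicPOV.Location = FVector(500.f, 200.f, 300.f);
CinematicPOV.Rotation = FRotator(-30.f, 45.f, 0.f);
CinematicPOV.FOV = 60.f;
// BlendViewInfo modifies GameplayPOV in place; a weight of 0 keeps it
// unchanged, a weight of 1 replaces it with CinematicPOV.
GameplayPOV.BlendViewInfo(CinematicPOV, 0.3f);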
(3). Camera Update (Tick)
Having covered the data structures used by the Player Camera Manager, we can move on to its internal camera-management logic. Every tick, the player camera manager calls UpdateCamera(), which in turn calls DoUpdateCamera().
The implementation of UpdateCamera() is shown below. It is complex, so the network-prediction code is omitted here (you can read it yourself); we focus on the call stack.
void APlayerCameraManager::UpdateCamera(float DeltaTime)
{
check(PCOwner != nullptr);
if ((PCOwner->Player && PCOwner->IsLocalPlayerController()) || !bUseClientSideCameraUpdates || bDebugClientSideCamera)
{
DoUpdateCamera(DeltaTime);
const float TimeDilation = FMath::Max(GetActorTimeDilation(), KINDA_SMALL_NUMBER);
TimeSinceLastServerUpdateCamera += (DeltaTime / TimeDilation);
if (bShouldSendClientSideCameraUpdate && IsNetMode(NM_Client))
{
SCOPE_CYCLE_COUNTER(STAT_ServerUpdateCamera);
const AGameNetworkManager* const GameNetworkManager = GetDefault<AGameNetworkManager>();
const float ClientNetCamUpdateDeltaTime = GameNetworkManager->ClientNetCamUpdateDeltaTime;
const float ClientNetCamUpdatePositionLimit = GameNetworkManager->ClientNetCamUpdatePositionLimit;
...
...
}
}
}
The manager class first attempts to determine a collection of settings using the associated PlayerController and any pawn that is currently possessed. An example of this is APlayerCameraManager::ProcessViewRotation, which determines an initial rotation for the POV and is called by the owning controller's APlayerController::UpdateRotation. UpdateRotation is called during the controller's own tick, but it may also be called by other components if they wish to affect and update the player controller's rotation (see UCharacterMovementComponent for an example).
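ProcessViewRotation is also a convenient hook for custom view-rotation rules. A minimal sketch (not engine code) of a custom camera manager that clamps the final pitch after the default modifiers have run; the class name and the limits are made up:
// Clamp the view pitch to a custom range in a hypothetical camera manager.
void AMyPlayerCameraManager::ProcessViewRotation(float DeltaTime, FRotator& OutViewRotation, FRotator& OutDeltaRot)
{
    Super::ProcessViewRotation(DeltaTime, OutViewRotation, OutDeltaRot);
    // Keep the final pitch inside a custom range.
    OutViewRotation.Pitch = FMath::ClampAngle(OutViewRotation.Pitch, -60.f, 45.f);
}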
As you can see, this method calls DoUpdateCamera, which is also a complex method:
void APlayerCameraManager::DoUpdateCamera(float DeltaTime)
{
FMinimalViewInfo NewPOV = ViewTarget.POV;
// update color scale interpolation
if (bEnableColorScaleInterp)
{
float BlendPct = FMath::Clamp((GetWorld()->TimeSeconds - ColorScaleInterpStartTime) / ColorScaleInterpDuration, 0.f, 1.0f);
ColorScale = FMath::Lerp(OriginalColorScale, DesiredColorScale, BlendPct);
// if we've maxed
if (BlendPct == 1.0f)
{
// disable further interpolation
bEnableColorScaleInterp = false;
}
}
// Don't update outgoing viewtarget during an interpolation when bLockOutgoing is set.
if ((PendingViewTarget.Target == NULL) || !BlendParams.bLockOutgoing)
{
// Update current view target
ViewTarget.CheckViewTarget(PCOwner);
UpdateViewTarget(ViewTarget, DeltaTime);
}
// our camera is now viewing there
NewPOV = ViewTarget.POV;
// if we have a pending view target, perform transition from one to another.
if (PendingViewTarget.Target != NULL)
{
BlendTimeToGo -= DeltaTime;
// Update pending view target
PendingViewTarget.CheckViewTarget(PCOwner);
UpdateViewTarget(PendingViewTarget, DeltaTime);
// blend....
if (BlendTimeToGo > 0)
{
float DurationPct = (BlendParams.BlendTime - BlendTimeToGo) / BlendParams.BlendTime;
float BlendPct = 0.f;
switch (BlendParams.BlendFunction)
{
case VTBlend_Linear:
BlendPct = FMath::Lerp(0.f, 1.f, DurationPct);
break;
case VTBlend_Cubic:
BlendPct = FMath::CubicInterp(0.f, 0.f, 1.f, 0.f, DurationPct);
break;
case VTBlend_EaseIn:
BlendPct = FMath::Lerp(0.f, 1.f, FMath::Pow(DurationPct, BlendParams.BlendExp));
break;
case VTBlend_EaseOut:
BlendPct = FMath::Lerp(0.f, 1.f, FMath::Pow(DurationPct, 1.f / BlendParams.BlendExp));
break;
case VTBlend_EaseInOut:
BlendPct = FMath::InterpEaseInOut(0.f, 1.f, DurationPct, BlendParams.BlendExp);
break;
case VTBlend_PreBlended:
BlendPct = 1.0f;
break;
default:
break;
}
// Update pending view target blend
NewPOV = ViewTarget.POV;
NewPOV.BlendViewInfo(PendingViewTarget.POV, BlendPct);//@TODO: CAMERA: Make sure the sense is correct! BlendViewTargets(ViewTarget, PendingViewTarget, BlendPct);
}
else
{
// we're done blending, set new view target
ViewTarget = PendingViewTarget;
// clear pending view target
PendingViewTarget.Target = NULL;
BlendTimeToGo = 0;
// our camera is now viewing there
NewPOV = PendingViewTarget.POV;
OnBlendComplete().Broadcast();
}
}
if (bEnableFading)
{
if (bAutoAnimateFade)
{
FadeTimeRemaining = FMath::Max(FadeTimeRemaining - DeltaTime, 0.0f);
if (FadeTime > 0.0f)
{
FadeAmount = FadeAlpha.X + ((1.f - FadeTimeRemaining / FadeTime) * (FadeAlpha.Y - FadeAlpha.X));
}
if ((bHoldFadeWhenFinished == false) && (FadeTimeRemaining <= 0.f))
{
// done
StopCameraFade();
}
}
if (bFadeAudio)
{
ApplyAudioFade();
}
}
if (AllowPhotographyMode())
{
const bool bPhotographyCausedCameraCut = UpdatePhotographyCamera(NewPOV);
bGameCameraCutThisFrame = bGameCameraCutThisFrame || bPhotographyCausedCameraCut;
}
// Cache results
FillCameraCache(NewPOV);
}
This method calls UpdateViewTarget, which internally calls UpdateViewTargetInternal; the latter is implemented as:
void APlayerCameraManager::UpdateViewTargetInternal(FTViewTarget& OutVT, float DeltaTime)
{
if (OutVT.Target)
{
FVector OutLocation;
FRotator OutRotation;
float OutFOV;
if (BlueprintUpdateCamera(OutVT.Target, OutLocation, OutRotation, OutFOV))
{
OutVT.POV.Location = OutLocation;
OutVT.POV.Rotation = OutRotation;
OutVT.POV.FOV = OutFOV;
}
else
{
OutVT.Target->CalcCamera(DeltaTime, OutVT.POV);
}
}
}
It calls AActor::CalcCamera() when BlueprintUpdateCamera returns false. BlueprintUpdateCamera is a BlueprintImplementableEvent that returns false by default, but the behavior changes if you use a Blueprint player camera manager that implements it.
In short: if a valid view target with an AActor target is set, the manager invokes AActor::CalcCamera to determine the POV settings to use, overriding the settings determined from the controller earlier. The exception is when the camera manager class is extended via Blueprint, in which case BlueprintUpdateCamera takes precedence over the call to CalcCamera if it returns true.
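If you are working in C++, the usual alternative is to override UpdateViewTargetInternal() itself. A minimal sketch, not engine code; the manager class and the top-down framing values are made up:
// A custom C++ camera manager that takes over the per-frame POV computation
// for pawn targets and falls back to the default path otherwise.
void AMyPlayerCameraManager::UpdateViewTargetInternal(FTViewTarget& OutVT, float DeltaTime)
{
    if (const APawn* TargetPawn = Cast<APawn>(OutVT.Target))
    {
        // Simple top-down framing of the possessed pawn.
        OutVT.POV.Location = TargetPawn->GetActorLocation() + FVector(0.f, 0.f, 800.f);
        OutVT.POV.Rotation = FRotator(-90.f, 0.f, 0.f);
        OutVT.POV.FOV = 70.f;
        return;
    }
    // Default behavior: BlueprintUpdateCamera first, then AActor::CalcCamera.
    Super::UpdateViewTargetInternal(OutVT, DeltaTime);
}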
(4). Summary
The player camera manager uses an FTViewTarget to determine which camera to use, and an FMinimalViewInfo to represent the camera's parameters: location, rotation, FOV, post-processing and so on. The FMinimalViewInfo object is passed to the CalcCamera() method of the target actor (FTViewTarget::Target) when a valid view target exists.
However, if a Blueprint class inherits from APlayerCameraManager and implements BlueprintUpdateCamera(), whether CalcCamera() is called depends on that implementation's return value.
3. The Relationship Among Actor, CameraActor and CameraComponent
To understand this relationship, we need to dive into AActor::CalcCamera():
void AActor::CalcCamera(float DeltaTime, FMinimalViewInfo& OutResult)
{
if (bFindCameraComponentWhenViewTarget)
{
// Look for the first active camera component and use that for the view
TInlineComponentArray<UCameraComponent*> Cameras;
GetComponents<UCameraComponent>(/*out*/ Cameras);
for (UCameraComponent* CameraComponent : Cameras)
{
if (CameraComponent->IsActive())
{
CameraComponent->GetCameraView(DeltaTime, OutResult);
return;
}
}
}
GetActorEyesViewPoint(OutResult.Location, OutResult.Rotation);
}
As you can see, if camera components are attached to the actor, the actor finds its first active camera component and uses UCameraComponent::GetCameraView() to calculate the camera info.
If there is no camera component and the method is not overridden, it calls GetActorEyesViewPoint() to calculate the camera info. That method is very simple:
void AActor::GetActorEyesViewPoint( FVector& OutLocation, FRotator& OutRotation ) const
{
OutLocation = GetActorLocation();
OutRotation = GetActorRotation();
}
The method UCameraComponent::GetCameraView() is easy to understand but too long to paste here; you can read it yourself.
Sometimes, when reading the source of a new project, you may find that there is a working camera but no camera component. In that case, check whether the developers overrode AActor::CalcCamera(). An example is the ALS-Refactored plugin: CalcCamera is overridden to call UAlsCameraComponent::GetViewInfo() to calculate the camera info, and UAlsCameraComponent inherits from USkeletalMeshComponent rather than UCameraComponent.
void AAlsCharacterExample::CalcCamera(const float DeltaTime, FMinimalViewInfo& ViewInfo)
{
if (Camera->IsActive())
{
Camera->GetViewInfo(ViewInfo);
return;
}
Super::CalcCamera(DeltaTime, ViewInfo);
}
As for ACameraActor, it is simple: an actor with a camera component, usually placed in a level during level design to provide a fixed viewpoint.
/**
* A CameraActor is a camera viewpoint that can be placed in a level.
*/
UCLASS(ClassGroup=Common, hideCategories=(Input, Rendering), showcategories=("Input|MouseInput", "Input|TouchInput"), Blueprintable)
class ENGINE_API ACameraActor : public AActor
{
GENERATED_UCLASS_BODY()
private:
/** Specifies which player controller, if any, should automatically use this Camera when the controller is active. */
UPROPERTY(Category="AutoPlayerActivation", EditAnywhere)
TEnumAsByte<EAutoReceiveInput::Type> AutoActivateForPlayer;
private:
/** The camera component for this camera */
UPROPERTY(Category = CameraActor, VisibleAnywhere, BlueprintReadOnly, meta = (AllowPrivateAccess = "true"))
TObjectPtr<class UCameraComponent> CameraComponent;
UPROPERTY(Category = CameraActor, VisibleAnywhere, BlueprintReadOnly, meta = (AllowPrivateAccess = "true"))
TObjectPtr<class USceneComponent> SceneComponent;
public:
/** If this CameraActor is being used to preview a CameraAnim in the editor, this is the anim being previewed. */
TWeakObjectPtr<class UCameraAnim> PreviewedCameraAnim;
/** Returns index of the player for whom we auto-activate, or INDEX_NONE (-1) if disabled. */
UFUNCTION(BlueprintCallable, Category="AutoPlayerActivation")
int32 GetAutoActivatePlayerIndex() const;
private:
UPROPERTY()
uint32 bConstrainAspectRatio_DEPRECATED:1;
UPROPERTY()
float AspectRatio_DEPRECATED;
UPROPERTY()
float FOVAngle_DEPRECATED;
UPROPERTY()
float PostProcessBlendWeight_DEPRECATED;
UPROPERTY()
struct FPostProcessSettings PostProcessSettings_DEPRECATED;
public:
//~ Begin UObject Interface
virtual void Serialize(FArchive& Ar) override;
#if WITH_EDITOR
virtual void PostLoadSubobjects(FObjectInstancingGraph* OuterInstanceGraph) override;
virtual void PostEditChangeProperty(FPropertyChangedEvent& PropertyChangedEvent) override;
#endif
virtual class USceneComponent* GetDefaultAttachComponent() const override;
//~ End UObject Interface
protected:
//~ Begin AActor Interface
virtual void BeginPlay() override;
//~ End AActor Interface
public:
/** Returns CameraComponent subobject **/
class UCameraComponent* GetCameraComponent() const { return CameraComponent; }
/**
* Called to notify that this camera was cut to, so it can update things like interpolation if necessary.
* Typically called by the camera component.
*/
virtual void NotifyCameraCut() {};
};
Summary
- A camera component is used to calculate the camera info; when attached to an actor, the actor chooses its first active camera component to calculate the camera info (a small constructor sketch follows this list).
- If an actor has no camera component and does not override CalcCamera(), its own location and rotation are used as the camera's location and rotation.
- A camera actor is an actor with a camera component, and it can also be used to preview camera animations.
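As a minimal sketch of the first point (not engine code; AMyCameraRig and the Camera member are made up), an actor only needs an active camera component and the bFindCameraComponentWhenViewTarget flag for the default CalcCamera path to pick it up when the actor becomes the view target:
// Constructor of a hypothetical actor meant to be used as a view target.
AMyCameraRig::AMyCameraRig()
{
    Camera = CreateDefaultSubobject<UCameraComponent>(TEXT("Camera"));
    RootComponent = Camera;
    // Let AActor::CalcCamera pick up the first active camera component
    // when this actor becomes the view target.
    bFindCameraComponentWhenViewTarget = true;
}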
4. Transition of Cameras
Camera transitions are always a big challenge in game development. Sometimes we need to move from the gameplay camera to a static camera placed during level design, from a cinematic camera back to gameplay, from gameplay into a cinematic camera, and so on.
The most commonly used method for camera transitions is APlayerController::SetViewTargetWithBlend(). It is a virtual method, but none of the engine's built-in classes override it.
void APlayerController::SetViewTargetWithBlend(AActor* NewViewTarget, float BlendTime, EViewTargetBlendFunction BlendFunc, float BlendExp, bool bLockOutgoing)
{
FViewTargetTransitionParams TransitionParams;
TransitionParams.BlendTime = BlendTime;
TransitionParams.BlendFunction = BlendFunc;
TransitionParams.BlendExp = BlendExp;
TransitionParams.bLockOutgoing = bLockOutgoing;
SetViewTarget(NewViewTarget, TransitionParams);
}
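A typical call site might look like the following minimal sketch; the trigger actor class and its CinematicCamera property (an ACameraActor* set elsewhere, e.g. exposed in the editor) are made up:
// Blend from the current view target to a camera actor placed in the level.
void AMyTriggerActor::StartCinematic(APlayerController* PC)
{
    if (PC != nullptr && CinematicCamera != nullptr)
    {
        PC->SetViewTargetWithBlend(
            CinematicCamera,           // new view target
            2.0f,                      // BlendTime in seconds
            VTBlend_EaseInOut,         // blend curve
            2.0f,                      // BlendExp
            /*bLockOutgoing=*/ true);  // freeze the outgoing POV during the blend
    }
}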
SetViewTargetWithBlend packs its arguments into an FViewTargetTransitionParams, which is declared as:
/** A set of parameters to describe how to transition between view targets. */
USTRUCT(BlueprintType)
struct FViewTargetTransitionParams
{
GENERATED_USTRUCT_BODY()
public:
/** Total duration of blend to pending view target. 0 means no blending. */
UPROPERTY(EditAnywhere, BlueprintReadWrite, Category=ViewTargetTransitionParams)
float BlendTime;
/** Function to apply to the blend parameter. */
UPROPERTY(EditAnywhere, BlueprintReadWrite, Category=ViewTargetTransitionParams)
TEnumAsByte<enum EViewTargetBlendFunction> BlendFunction;
/** Exponent, used by certain blend functions to control the shape of the curve. */
UPROPERTY(EditAnywhere, BlueprintReadWrite, Category=ViewTargetTransitionParams)
float BlendExp;
/**
* If true, lock outgoing viewtarget to last frame's camera POV for the remainder of the blend.
* This is useful if you plan to teleport the old viewtarget, but don't want to affect the blend.
*/
UPROPERTY(EditAnywhere, BlueprintReadWrite, Category=ViewTargetTransitionParams)
uint32 bLockOutgoing:1;
FViewTargetTransitionParams()
: BlendTime(0.f)
, BlendFunction(VTBlend_Cubic)
, BlendExp(2.f)
, bLockOutgoing(false)
{}
/** For a given linear blend value (blend percentage), return the final blend alpha with the requested function applied */
float GetBlendAlpha(const float& TimePct) const
{
switch (BlendFunction)
{
case VTBlend_Linear: return FMath::Lerp(0.f, 1.f, TimePct);
case VTBlend_Cubic: return FMath::CubicInterp(0.f, 0.f, 1.f, 0.f, TimePct);
case VTBlend_EaseInOut: return FMath::InterpEaseInOut(0.f, 1.f, TimePct, BlendExp);
case VTBlend_EaseIn: return FMath::Lerp(0.f, 1.f, FMath::Pow(TimePct, BlendExp));
case VTBlend_EaseOut: return FMath::Lerp(0.f, 1.f, FMath::Pow(TimePct, (FMath::IsNearlyZero(BlendExp) ? 1.f : (1.f / BlendExp))));
default:
break;
}
return 1.f;
}
};
APlayerController::SetViewTarget() simply calls APlayerCameraManager::SetViewTarget() internally.
void APlayerController::SetViewTarget(class AActor* NewViewTarget, struct FViewTargetTransitionParams TransitionParams)
{
if (PlayerCameraManager)
{
PlayerCameraManager->SetViewTarget(NewViewTarget, TransitionParams);
}
}
The latter, however, is a fairly complex method:
void APlayerCameraManager::SetViewTarget(class AActor* NewTarget, struct FViewTargetTransitionParams TransitionParams)
{
// Make sure view target is valid
if (NewTarget == NULL)
{
NewTarget = PCOwner;
}
// Update current ViewTargets
ViewTarget.CheckViewTarget(PCOwner);
if (PendingViewTarget.Target)
{
PendingViewTarget.CheckViewTarget(PCOwner);
}
// If we're already transitioning to this new target, don't interrupt.
if (PendingViewTarget.Target != NULL && NewTarget == PendingViewTarget.Target)
{
return;
}
if (UWorld* World = GetWorld())
{
World->GetTimerManager().ClearTimer(SwapPendingViewTargetWhenUsingClientSideCameraUpdatesTimerHandle);
}
// if viewtarget different then new one or we're transitioning from the same target with locked outgoing, then assign it
if ((NewTarget != ViewTarget.Target) || (PendingViewTarget.Target && BlendParams.bLockOutgoing))
{
// if a transition time is specified, then set pending view target accordingly
if (TransitionParams.BlendTime > 0)
{
// band-aid fix so that EndViewTarget() gets called properly in this case
if (PendingViewTarget.Target == NULL)
{
PendingViewTarget.Target = ViewTarget.Target;
}
// use last frame's POV
ViewTarget.POV = GetLastFrameCameraCachePOV();
BlendParams = TransitionParams;
BlendTimeToGo = TransitionParams.BlendTime;
AssignViewTarget(NewTarget, PendingViewTarget, TransitionParams);
PendingViewTarget.CheckViewTarget(PCOwner);
if (bUseClientSideCameraUpdates && GetNetMode() != NM_Client)
{
if (UWorld* World = GetWorld())
{
World->GetTimerManager().SetTimer(SwapPendingViewTargetWhenUsingClientSideCameraUpdatesTimerHandle, this, &ThisClass::SwapPendingViewTargetWhenUsingClientSideCameraUpdates, TransitionParams.BlendTime, false);
}
}
}
else
{
// otherwise, assign new viewtarget instantly
AssignViewTarget(NewTarget, ViewTarget);
ViewTarget.CheckViewTarget(PCOwner);
// remove old pending ViewTarget so we don't still try to switch to it
PendingViewTarget.Target = NULL;
}
}
else
{
// we're setting the viewtarget to the viewtarget we were transitioning away from,
// just abort the transition.
// @fixme, investigate if we want this case to go through the above code, so AssignViewTarget et al
// get called
if (PendingViewTarget.Target != NULL)
{
if (!PCOwner->IsPendingKillPending() && !PCOwner->IsLocalPlayerController() && GetNetMode() != NM_Client)
{
PCOwner->ClientSetViewTarget(NewTarget, TransitionParams);
}
}
PendingViewTarget.Target = NULL;
}
}
It uses a timer to call the callback SwapPendingViewTargetWhenUsingClientSideCameraUpdates(), and it also sets BlendTimeToGo and PendingViewTarget; these two members are consumed in DoUpdateCamera(). When the transition finishes, the OnBlendComplete event is broadcast.
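If you need to react when a blend finishes (for example, to hand control back to the player), you can subscribe to that event. A minimal sketch, assuming an engine version where APlayerCameraManager exposes OnBlendComplete() as a multicast event (as in the source quoted above); the controller class and handler are made up:
// In a hypothetical custom player controller, after the camera manager exists.
void AMyPlayerController::BeginPlay()
{
    Super::BeginPlay();
    if (PlayerCameraManager != nullptr)
    {
        // Called every time a pending-view-target blend completes.
        PlayerCameraManager->OnBlendComplete().AddUObject(this, &AMyPlayerController::HandleBlendComplete);
    }
}
void AMyPlayerController::HandleBlendComplete()
{
    UE_LOG(LogTemp, Log, TEXT("View target blend finished."));
}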
Let's go back to DoUpdateCamera():
...
if (PendingViewTarget.Target != NULL)
{
BlendTimeToGo -= DeltaTime;
// Update pending view target
PendingViewTarget.CheckViewTarget(PCOwner);
UpdateViewTarget(PendingViewTarget, DeltaTime);
// blend....
if (BlendTimeToGo > 0)
{
float DurationPct = (BlendParams.BlendTime - BlendTimeToGo) / BlendParams.BlendTime;
float BlendPct = 0.f;
switch (BlendParams.BlendFunction)
{
case VTBlend_Linear:
BlendPct = FMath::Lerp(0.f, 1.f, DurationPct);
break;
case VTBlend_Cubic:
BlendPct = FMath::CubicInterp(0.f, 0.f, 1.f, 0.f, DurationPct);
break;
case VTBlend_EaseIn:
BlendPct = FMath::Lerp(0.f, 1.f, FMath::Pow(DurationPct, BlendParams.BlendExp));
break;
case VTBlend_EaseOut:
BlendPct = FMath::Lerp(0.f, 1.f, FMath::Pow(DurationPct, 1.f / BlendParams.BlendExp));
break;
case VTBlend_EaseInOut:
BlendPct = FMath::InterpEaseInOut(0.f, 1.f, DurationPct, BlendParams.BlendExp);
break;
case VTBlend_PreBlended:
BlendPct = 1.0f;
break;
default:
break;
}
// Update pending view target blend
NewPOV = ViewTarget.POV;
NewPOV.BlendViewInfo(PendingViewTarget.POV, BlendPct);//@TODO: CAMERA: Make sure the sense is correct! BlendViewTargets(ViewTarget, PendingViewTarget, BlendPct);
}
else
{
// we're done blending, set new view target
ViewTarget = PendingViewTarget;
// clear pending view target
PendingViewTarget.Target = NULL;
BlendTimeToGo = 0;
// our camera is now viewing there
NewPOV = PendingViewTarget.POV;
OnBlendComplete().Broadcast();
}
...
5. Benefits of Using a Custom PlayerCameraManager
To use a custom player camera manager, simply assign its class in your player controller's constructor:
ACustomPlayerController::ACustomPlayerController()
{
PlayerCameraManagerClass = ACustomPlayerCameraManager::StaticClass();
}
- One of the coolest things about working with this class directly is the level of control you get over the player's point of view. Rather than dealing with rigid or restrictive transforms on a CameraComponent that tries to maintain a relative offset to its parent actor, we can just tell the camera manager where we want the camera to be. For example, this makes switching between first- and third-person viewpoints a fairly trivial procedure: rather than switching between two CameraComponents or manually moving one CameraComponent between two positions, we can provide the desired transform(s) directly (see the sketch after this list).
- Another benefit is that we can reduce the number of CameraComponents in the scene, or add spectator support from the point of view of an NPC or actor that has no camera of its own. This can even have performance benefits, especially when many actors can act as the player's point of view (for example, when possessing a specific unit in a real-time strategy game).
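As an illustration of the first point, here is a minimal sketch (not engine code) of a first/third-person switch done entirely in a custom camera manager; AMyPlayerCameraManager, the bFirstPerson flag and the offsets are made up:
// Compute the POV directly from the possessed pawn, with no CameraComponent.
void AMyPlayerCameraManager::UpdateViewTargetInternal(FTViewTarget& OutVT, float DeltaTime)
{
    const APawn* TargetPawn = Cast<APawn>(OutVT.Target);
    if (TargetPawn == nullptr || PCOwner == nullptr)
    {
        Super::UpdateViewTargetInternal(OutVT, DeltaTime);
        return;
    }
    const FRotator ViewRotation = PCOwner->GetControlRotation();
    if (bFirstPerson)
    {
        // Roughly eye height above the pawn's origin.
        OutVT.POV.Location = TargetPawn->GetActorLocation() + FVector(0.f, 0.f, 60.f);
    }
    else
    {
        // Pull back along the view direction for a third-person framing.
        OutVT.POV.Location = TargetPawn->GetActorLocation()
            - ViewRotation.Vector() * 300.f + FVector(0.f, 0.f, 80.f);
    }
    OutVT.POV.Rotation = ViewRotation;
    OutVT.POV.FOV = bFirstPerson ? 90.f : 80.f;
}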