Dead reckoning is a feature that interpolates the movements of other players, including their positions and directions, to make them appear more natural. The application simply specifies the interpolation method, update frequency, and other parameters; the system automatically sends and receives the necessary data and performs the required interpolation. Extrapolation by dead reckoning is performed on datasets for which extrapolation_filter has been set in the DDL file.
NetZ provides easy extrapolation of dataset values through its implementation of a customized algorithm. The algorithm is simple to use and requires no a priori knowledge of how the value of a variable changes over time. Dead reckoning is an excellent way to mask latency, and this chapter describes several latency-masking techniques that are based on it. In addition, dead reckoning can significantly reduce bandwidth use: to make efficient use of the available bandwidth, the precision of the extrapolation may be either constant or based on the distance between the object and a user-defined observer, as detailed later in this chapter. If the dead-reckoning schemes provided with NetZ are not adequate for your needs, you can implement your own custom dead reckoning by using the extension of the same name, as described in Custom Dead-Reckoning in Useful Features and Reference Information.
Note that if the session clocks on each station are not synchronized, you will most likely observe strange dead-reckoning behavior. Before you start fine-tuning your implementation, check to see that the session clocks are synchronized. An easy way to check is to show the session clock on the screen. If clocks are not synchronized, adjust the appropriate session clock parameters as described in Chapter 6 Time Management.
Dead reckoning is implemented as an extension and, as such, you must use the following code to include the PHBDR.h file.

Code 12.1 Including the PHBDR.h File

#include <NetZ/PHBDR.h>
If neither dead reckoning nor any other latency masking techniques are implemented, at any particular time the duplica is not synchronized with its duplication master because updates take time to travel between stations. The duplica processes an event when it receives it assuming that it is valid at that time, but the event was actually valid at the time it was sent. The result, as illustrated in Figure 12.1, is that the duplica is shown accurately but with a time lag, and the duplication master and duplica are not synchronized. If your game can tolerate poor synchronization, this is not a problem. However, if it cannot, you must apply one of the dead reckoning-based latency masking techniques discussed in this chapter.
Figure 12.1 Timeline of Events When No Attempt Is Made to Mask Latency
NetZ enables you to do a number of things to mask latency. On the duplica side, you can choose when to process an event according to the timestamp of the event. On the duplication master side, you can apply lookahead and loopback mechanisms. These techniques can be mixed and matched in whatever manner best suits your game, and they can also be adjusted at run time. Note that perfect time synchronization between the duplication master and its duplicas is often not necessary. Partial synchronization usually masks latency enough to prevent the players from noticing any inconsistencies. For example, say that you typically experience latency on the order of 300 milliseconds, but extrapolating duplicas' datasets for 300 milliseconds—or even 200 milliseconds—would lead to very poor predictions. To synchronize events better, you could apply a lookahead of 100 milliseconds on the duplication master, and apply dead reckoning with an extrapolation delay of 100 milliseconds on the duplica side. Predictions become more accurate when they are performed for a shorter length of time. Rather than stations being out of sync by 300 milliseconds, they would be out of sync by only 100 milliseconds, which in many games is not noticed by the players.
When a duplica receives an event update, the message's timestamp affects how the duplica treats the message. The timestamp indicates the time at which the message data is valid. If it is the same as the current time, the duplica processes the event when received. If the timestamp is in the past or in the future, the duplica has a choice of how to treat the data. A timestamp can be valid in the future when lookahead is applied, as discussed later in this chapter. If a message timestamp indicates that an event occurred in the past, the duplica may (a) ignore the timestamp and assume that the data is valid at the current time, or (b) predict what would happen at the current time, knowing that the event occurred at a specific time in the past, as illustrated in Figure 12.2(a). If the timestamp indicates that the event occurs in the future, the duplica may (a) interpolate the behavior of the duplica at the current time knowing what it will do in the future, or (b) wait until the time indicated by the timestamp before processing the event, as illustrated in Figure 12.2(b).
Figure 12.2 Options for How a Duplica Can Process an Event According to the Timestamp
The situations where data extrapolation or interpolation is performed are commonly called dead reckoning. When dead reckoning is applied, the behavior of the duplica is typically, but not necessarily, predicted for the current time based on data that it knows occurred in the past or will occur in the future. When the message on the duplica has the same timestamp as the message on the duplication master, the use of dead reckoning eliminates the time lag between the duplication master and its duplicas. This synchronization does come at a price, though: the accuracy of the duplicas. To increase the accuracy of the duplica, you can decrease the time interval for which it extrapolates data by changing the timestamp. However, this increased accuracy is a trade-off with synchronization.
The original timestamp for a message is the one given by the duplication master, but the duplica station may change this to be in the past, the future, or the current time. If the timestamp is changed, it is typically changed when dead reckoning is applied, to produce a delayed but more accurate extrapolation on the duplica. In this case, the duplica adds a specific amount of time (the extrapolation delay) to the original timestamp of the message. Adding this delay makes the extrapolation more accurate because data is now extrapolated for a shorter time interval, but the duplica and the duplication master will be offset by the amount of the delay. For more information about extrapolation delay, see section 12.4.2. The decision about whether to change the timestamp clearly depends on the data in question, and on the degree of accuracy versus synchronization that you require. You typically want to keep object interactions, such as collisions, well synchronized, whereas players are unlikely to notice if background avatars are not synchronized. If a duplica receives data that it cannot process at the specified timestamp, it has two choices: either process the event assuming that it is valid at the time received, or drop the packet and not process the event at all. As always, the choice depends on your game.
In addition to helping mask latency, dead reckoning can also decrease bandwidth usage. When implemented, dead reckoning is performed automatically by NetZ to determine when to update the dataset of a duplica. For datasets that change frequently, updating the dataset of a duplica every time the dataset of the duplication master is updated would waste limited bandwidth. It makes more sense to use an extrapolation filter to update the dataset of the duplicas, and to set a maximum error between the dataset values of the duplication master and its duplicas. This error is called the dead-reckoning extrapolation error: when it exceeds the error tolerance, the dataset of a duplica is updated from the duplication master.
Note: If the error tolerance is not defined when using dead reckoning, a default value of 0 is used.
Extrapolation model and error computations are performed for each duplica of an object that uses an extrapolation filter on the station where the duplication master resides. The duplication master’s station is required to maintain multiple extrapolation models and perform multiple error computations. As a result, the reduction in bandwidth usage achieved with dead reckoning comes at the cost of increased CPU and memory usage. Nevertheless, these computations are relatively efficient and typically have little effect on performance.
Dead reckoning can only be used on datasets that are continuous, although they may contain infrequent continuity breaks.
Continuity breaks are discussed later in this chapter. To implement dead reckoning, you must perform the following three basic operations.
1. In the DDL file, define the datasets for which the extrapolation_filter is used, as described in section 14.1 DDL File Syntax.
2. Set the error tolerance by using the ErrorToleranceFunction::SetParameters member function, or one of the appropriate wrappers, as described in this chapter.
3. Use the DuplicatedObject::Update member function to propagate the contents of the datasets of a duplication master to its duplicas, and the DuplicatedObject::Refresh member function to refresh the values of the datasets of the duplicas for a buffered dataset. These member functions need to be called periodically in every game; for example, they may be called from the DoPhysics and Render member functions, as in the following code.
Code 12.2 Propagating Datasets to Duplicas and Refreshing Data Values on Duplicas
void Sphere::DoPhysics()
{
    // compute the physics and then update the duplication
    // master's position
    Update(m_dsPos);
}

void Sphere::Render(unsigned long ulDummy)
{
    if (IsADuplica())
    {
        // Refresh the position
        Refresh(m_dsPos);
    }
    // render the object at the correct position
}
When this code runs, the DuplicatedObject::Update member function checks whether the current extrapolation error is greater than the error defined by the error tolerance function. If the error is greater than the tolerance, a message is sent to correct the extrapolation model. On the duplica side, the dataset extrapolation model generated by NetZ is used to refresh the dataset variables whenever required. The DuplicatedObject::Update and DuplicatedObject::Refresh member functions do not have to be synchronized. If the duplica runs faster than the duplication master, NetZ automatically interpolates the values of a dataset.
When an extrapolated dataset uses unreliable communication, the loss of a message may cause a significant drift in the value of the dataset. Consequently, NetZ enables a maximum update delay to be set with the DataSet::SetMaximumUpdateDelay member function. This maximum ensures that messages are periodically sent to the remote station even if the local prediction model is correct. If reliable communication is used, the maximum update delay has no effect. The maximum update delay is the maximum time allowed between updates for a dataset. If a dataset is not updated within the maximum delay time, an update of the dataset is automatically sent to the remote stations, even if the value of the dataset has not changed since the previous update. The default value for the maximum update delay is DATASET_UPDATE_DELAY::DEFAULT_MAXIMUM_UPDATE_DELAY (2000 milliseconds). The DataSet::GetMaximumUpdateDelay function returns the current maximum update delay. All instances of a single dataset class share the same maximum update delay; however, datasets of different classes may have different maximum update delays.
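For example, to guarantee that no more than one second passes between updates of a given dataset class, you might lower the delay as follows. This is a sketch only: it assumes the delay is set at the dataset-class level (consistent with all instances of a class sharing one value) and uses the illustrative Position dataset class from the examples in this chapter.

// Send a Position update at least once per second, even while the
// extrapolation model remains within tolerance (unreliable channel).
Position::SetMaximumUpdateDelay(1000);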
On the other hand, when the defined error tolerance of an extrapolated dataset is low, the dataset may be updated too frequently, using a significant amount of bandwidth. NetZ therefore enables you to set a minimum update delay using the DataSet::SetMinimumUpdateDelay function, which controls the maximum frequency at which a dataset is updated. The minimum update delay is the minimum time allowed between updates for a dataset: a dataset is not updated before the minimum update delay has expired. If the DuplicatedObject::Update member function is called more frequently than the minimum update delay allows, the extra dataset updates are suppressed.
However, if a continuity break is indicated by the DataSet::IndicateContinuityBreak function, the minimum update delay is ignored for the next few updates to enable NetZ to accurately extrapolate the dataset values. DEFAULT_MINIMUM_UPDATE_DELAY provides the initial value for the minimum update delay of 150 milliseconds, which is equivalent to approximately 6 updates per second. The DataSet::GetMinimumUpdateDelay function returns the current minimum update delay. All instances of a single dataset class share the same minimum update delay, but datasets of different classes may have different minimum update delays.
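For example, to cap a dataset class at roughly five updates per second, you might raise the minimum delay. As above, this sketch assumes a class-level setter on the illustrative Position dataset class.

// Allow at most one update every 200 ms (about 5 updates per second).
Position::SetMinimumUpdateDelay(200);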
When an extrapolation filter is used to update a dataset, the updates may be sent over a reliable or an unreliable channel, as specified in the update policy of the dataset in the DDL. By default, updates are sent over a reliable channel; however, if the unreliable DDL property is specified, NetZ automatically switches between sending messages over an unreliable channel and a reliable channel as necessary, rather than always using an unreliable channel. This enables NetZ to reduce the number of messages sent between a duplication master and its duplicas, because messages are not sent when the extrapolation model on the duplica is correct. If the position of a duplication master changes so that the extrapolation error is exceeded, the update is sent over an unreliable channel. However, if the extrapolation error is not exceeded—as would be the case if the duplication master stopped or moved in a straight line—the update is sent over a reliable channel. After a duplica receives three consecutive reliable updates, no further updates are sent from the duplication master to the duplica until the extrapolation error is exceeded once more. During this time, the duplica updates its position using its extrapolation model, calculated from the three points sent over the reliable channel.
If a constant extrapolation error is chosen for the dead-reckoning method, the extrapolation error tolerance does not vary with the distance between the observer and the object. Whenever the error is exceeded, the number of position updates sent for the object and the amount of bandwidth used remain the same, regardless of the relative positions of the observer and the object.
To implement dead reckoning, you must use an extrapolation filter to update the values of the dataset. The extrapolation filter must be defined in the DDL as described in section 14.1 DDL File Syntax.
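For reference, a dataset declaration with the filter attached might look like the following sketch; the exact syntax is defined in section 14.1, and the Position dataset with three double variables is illustrative.

dataset Position
{
    double x;
    double y;
    double z;
} extrapolation_filter;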
The current extrapolation error is calculated as the difference between the current value and the extrapolated value of a dataset. The error is calculated according to the following equation, where the xi values are the values of the different variables contained in the same dataset, and the prime refers to the extrapolated value of the variable.

Extrapolation error = sqrt( Σi (xi − xi′)² )
In the calculation of the extrapolation error, the final value of the error is equally dependent on each variable of the dataset. So, to avoid one variable having a dominant effect on the extrapolation error, ensure that the datasets are consistent. That is, each of the variables of a dataset should have a similar range of values.
The dead-reckoning error tolerance function is calculated according to the following equation.
Error tolerance = dConstantFactor + dLinearFactor*z + dQuadraticFactor*z^2
dConstantFactor, dLinearFactor, and dQuadraticFactor are the parameters of the error tolerance function, as defined by the ErrorToleranceFunction::SetParameters function. z is the distance between the observer and an object.
When using a constant extrapolation error, the first-order coefficient (dLinearFactor) and the second-order coefficient (dQuadraticFactor) must be set to zero. It is not necessary to define the distance because the error tolerance does not depend on it.
Use the following syntax for the ErrorToleranceFunction::SetParameters member function.

Code 12.3 Syntax of the SetParameters Member Function

ErrorToleranceFunction::SetParameters(
    qDouble dConstantFactor,
    qDouble dLinearFactor,
    qDouble dQuadraticFactor,
    qDouble dMaximumError
)
Rather than using the ErrorToleranceFunction::SetParameters member function, the easiest way to set a constant error is with a call to the ErrorToleranceFunction::SetConstantError member function.

Code 12.4 How to Call the SetConstantError Member Function

ErrorToleranceFunction::SetConstantError(qDouble dError)

The ErrorToleranceFunction::SetConstantError member function calls the ErrorToleranceFunction::SetParameters member function and is equivalent to the following code sample.

Code 12.5 SetParameters Member Function When the SetConstantError Member Function Is Called

ErrorToleranceFunction::SetParameters(dError, 0, 0, dError)
As a guideline to setting the error tolerance, start with a value that is about 5-10% of the size of the object and then fine-tune this value until the dead reckoning is to your satisfaction.
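For example, for a sphere 10 units in diameter, a reasonable starting point is an error tolerance of 0.5 to 1 unit. The following sketch uses the Position dataset class and the GetErrorToleranceFunction accessor from the SphereZ example later in this chapter; the value is illustrative.

// Start at roughly 7% of the 10-unit object and tune from there.
Position::GetErrorToleranceFunction()->SetConstantError(0.7);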
If distance-based dead reckoning is used, the tolerated error changes according to the distance between the observer and the object. The precision of dead reckoning can be adjusted dynamically based on the distance between the object and the observer, for effective utilization of bandwidth.
The extrapolation error is defined as the difference between the current dataset values of an object (the duplication master) and the extrapolated dataset values of the object (the duplica).
The Station::RegisterObserver function can be used to register any duplicated object as the observer for the station; the registered object is then used in distance calculations. The observer may be redefined as required, and if no observer is defined, NetZ uses a default distance to calculate the error tolerance.
If distance-based dead reckoning is used, the error tolerance varies according to the constants and functions used to relate it to the distance between the object and the observer. Their respective characteristics are described later in this chapter.
The distance between NetZ objects is not calculated automatically. If distance-based dead reckoning is used, the developer must implement the DuplicatedObject::ComputeDistance system callback. The implementation of this callback is explained later in this chapter.
If a dataset uses distance-based dead reckoning, whenever the DuplicatedObject::Update member function is called on the duplication master holding that dataset, NetZ calculates the error tolerance to determine whether the extrapolation error of the dataset is acceptable. The general idea of the algorithm is to compute two values: the current extrapolation error and the error tolerance. If the current extrapolation error is greater than the error tolerance, the DuplicatedObject::Update member function refreshes the content of all the duplicated object's datasets on its duplicas. If the extrapolation error is less, the datasets are not updated.
NetZ is optimized to perform these computations efficiently. Because the distance between the observer on each station and the object can be different, dataset updates may be sent to the various remote stations at different frequencies.
The current extrapolation error is computed as the difference between the current value and the extrapolated value of a dataset as detailed in section 12.2.1.
The error tolerance is calculated from the distance between the objects and the parameters set using the ErrorToleranceFunction::SetParameters function.
For example, the error tolerance function can be set so that a larger error is tolerated for objects displayed at a greater distance from the observer, that is, those objects in the background of a scene, and a smaller error is tolerated for objects in closer proximity to the observer, that is, those in the foreground.
The error tolerance is calculated in two steps.
First, when a duplicated object is updated, the distance between that duplicated object and the duplicated object registered as the station's observer is calculated. The DuplicatedObject::ComputeDistance system callback is called to calculate this distance.
Note
The DuplicatedObject::ComputeDistance system callback must be overridden for each DO class defined for the application that might be used as an observer. If the value returned by this method is UNKNOWN_DISTANCE, a default distance is used. The default distance is set using the ErrorToleranceFunction::SetDefaultDistance function. NetZ uses the value of UNKNOWN_DISTANCE if the distance between two duplicated objects is unknown, or if no observer is defined on the station being updated. If no default distance is defined, NetZ uses a value of zero, and the error tolerance is constant.
Next, the error tolerance is calculated according to the following equation, using the distance between the duplicated objects.

Error tolerance = dConstantFactor + dLinearFactor*z + dQuadraticFactor*z^2

dConstantFactor, dLinearFactor, and dQuadraticFactor are the parameters of the error tolerance function and are defined using the ErrorToleranceFunction::SetParameters member function. z is the distance between the object being updated and the observer.
The error tolerance can be set using the ErrorToleranceFunction::SetParameters member function, but the following member functions are also provided for setting it more easily. The maximum value for the error tolerance must be specified when using the ErrorToleranceFunction::SetLinearError or ErrorToleranceFunction::SetQuadraticError member functions. To have an unlimited maximum error, you must use the ErrorToleranceFunction::SetParameters function to describe the error tolerance. Similarly, to define an error tolerance that exhibits both a linear and a quadratic dependence on the distance, you must use the ErrorToleranceFunction::SetParameters member function and define values for both the dLinearFactor and dQuadraticFactor parameters.
Note
As a guideline to setting the ErrorToleranceFunction parameters, set the maximum extrapolation error to roughly equal the size of the object. The distance at which the maximum value is reached depends on the metric for calculating the distance between objects; it typically corresponds to the distance to an object as it appears close to the horizon. These values should then be fine-tuned until the dead reckoning is to your satisfaction.
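As a concrete starting point, for an object roughly 1 unit across whose distance metric reaches the horizon at about 100 units, you might begin with the following sketch. The Position dataset class is taken from the SphereZ example later in this chapter, and the values are illustrative.

// Tolerate 0.05 units of error up close, growing linearly to 1 unit
// (the object's size) at a distance of 100.
Position::GetErrorToleranceFunction()->SetLinearError(0.05, 1.0, 100.0);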
The ErrorToleranceFunction::SetLinearError member function sets a linear relationship between the error tolerance and the distance. The syntax is as follows:

Code 12.6 Syntax of the SetLinearError Member Function

qBool ErrorToleranceFunction::SetLinearError(
    qDouble dMinimumValue, // Minimum error tolerance.
    qDouble dMaximumValue, // Maximum error tolerance.
    qDouble dDistance      // Distance at which the maximum error tolerance is reached.
)
The following figure shows the linear coefficient for the relationship between the distance and the error tolerance.
The ErrorToleranceFunction::SetLinearError member function calls the ErrorToleranceFunction::SetParameters member function and is equivalent to the following code sample.

Code 12.7 SetParameters Member Function When SetLinearError Is Called

ErrorToleranceFunction::SetParameters(
    dMinimumValue,
    (dMaximumValue - dMinimumValue) / dDistance,
    0,
    dMaximumValue
)
The ErrorToleranceFunction::SetQuadraticError member function defines a quadratic relationship between the error tolerance and the distance, with no linear dependence. The syntax is as follows:

qBool ErrorToleranceFunction::SetQuadraticError(
    qDouble dMinimumValue, // Minimum error tolerance.
    qDouble dMaximumValue, // Maximum error tolerance.
    qDouble dDistance      // Distance at which the maximum error tolerance is reached.
)
The following figure shows the quadratic coefficient for the relationship between the distance and the error tolerance.
The ErrorToleranceFunction::SetQuadraticError member function calls the ErrorToleranceFunction::SetParameters member function and is equivalent to the following code sample.

Code 12.8 SetParameters Member Function When SetQuadraticError Is Called

ErrorToleranceFunction::SetParameters(
    dMinimumValue,
    0,
    (dMaximumValue - dMinimumValue) / (dDistance * dDistance),
    dMaximumValue
)
SetLinearError and SetQuadraticError

As shown in the following figure, the ErrorToleranceFunction::SetLinearError member function uses a straight linear relationship between the error tolerance and the distance, whereas the ErrorToleranceFunction::SetQuadraticError member function uses a curved quadratic relationship between the error tolerance and the distance.

Figure 12.6 Effect on Error Tolerance of (a) SetLinearError and (b) SetQuadraticError
When using distance-based dead reckoning, the developer must define the observer on each station using the Station::RegisterObserver(DOHandle idObserver) member function. Even if an observer has already been registered, you can change the observer by registering another one. Unregister an observer by calling the Station::UnregisterObserver member function.
Use the following member functions to get the observer that is already set.
A system callback for calculating the distance between objects must be implemented by overriding DuplicatedObject::ComputeDistance(DuplicatedObject* pObservedDuplicatedObject) for the duplicated objects used in the application.
Note
The DuplicatedObject::ComputeDistance function returns UNKNOWN_DISTANCE by default. In this case, the default distance is used. The default distance is set using the ErrorToleranceFunction::SetDefaultDistance function; if none has been set, a value of 0 is used.
Distance is normally calculated as follows. If the datasets of two objects include the same variables, the distance between the objects, z, is calculated according to the following equation, where xi indicates the values of the different variables contained in the same dataset, and the subscripts refer to the values of those variables for the duplicated object and the observer.

z = sqrt( Σi (xi,duplicated object − xi,observer)² )

Note: This is the same equation used to calculate the extrapolation error.
The distance calculated using the DuplicatedObject::ComputeDistance function does not depend on the player's field of view. Regardless of whether the player can see the object, as the object moves closer to the observer, the datasets of the object are updated more frequently, which naturally uses more bandwidth. If objects outside the field of view of the player are unimportant, updating them frequently wastes bandwidth. For example, in a car racing game, the field of view of the observer (the driver) is usually to the front. In such cases, implement the DuplicatedObject::ComputeDistance function to take the field of view into account, for example by multiplying the returned distance for out-of-view objects, so that less important objects are updated less frequently and bandwidth is reduced.
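The following sketch shows one way to do this for a hypothetical Car class, where m_Pos is the object's position and m_Forward is the observer's facing direction (both assumed application-side members); the scale factor of 4 is arbitrary.

// Treat objects behind the observer as four times farther away, so
// their error tolerance grows and they are updated less frequently.
qReal Car::ComputeDistance(DuplicatedObject* pDO)
{
    if (!pDO->IsA(DOCLASSID(Car))) {
        return UNKNOWN_DISTANCE;
    }
    Car* pCar = (Car*)pDO;
    qReal dx = pCar->m_Pos.x - m_Pos.x;
    qReal dy = pCar->m_Pos.y - m_Pos.y;
    qReal dDistance = sqrt(sqr(dx) + sqr(dy));
    // A negative dot product with the forward vector means the other
    // car is behind this observer.
    if (dx * m_Forward.x + dy * m_Forward.y < 0) {
        return 4 * dDistance;
    }
    return dDistance;
}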
If you need to have multiple observers on a station, such as with a split-screen mode, you still follow the steps described previously with the following two modifications.
- Rather than registering a single player's avatar as the observer, register an object that represents all of the observers on the station.
- In the DuplicatedObject::ComputeDistance function, calculate the distance between an object and each observer, and return the minimum value.

For example, assume your game includes a split-screen mode. When in split-screen mode, rather than registering a player's avatar as the observer, register a SplitScreen object containing an Observers dataset that lists all of the player avatars to use as observers. In the ComputeDistance callback for the SplitScreen object, calculate the distance between a specific object and each of the observers, returning the minimum value. This ensures that as long as one of the observers is close enough to the object for it to require an update, an update is sent.
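A minimal sketch of such a callback follows, assuming the SplitScreen object mirrors its Observers dataset into a local array m_apObservers of m_nObservers avatar pointers (both assumed application-side members).

qReal SplitScreen::ComputeDistance(DuplicatedObject* pDO)
{
    qReal dMinimum = UNKNOWN_DISTANCE;
    for (int i = 0; i < m_nObservers; i++) {
        // Let each avatar compute its own distance to the object.
        qReal dDistance = m_apObservers[i]->ComputeDistance(pDO);
        if (dDistance == UNKNOWN_DISTANCE) {
            continue;
        }
        if (dMinimum == UNKNOWN_DISTANCE || dDistance < dMinimum) {
            dMinimum = dDistance;
        }
    }
    return dMinimum;
}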
Implementing distance-based dead reckoning is explained in the context of the SphereZ sample program. This example assumes that all processes participating in the P2P session create instances of two spheres: one sphere controlled by the user, and the other by an AI (the computer). The user-controlled sphere on each station is defined as the observer. The distance between the object being updated and the user-controlled sphere on the station where the update occurs determines the required precision for extrapolation.
On each station, the sphere on which the camera is focused is defined as the observer on that station by using the following method.
Code 12.9 Defining a Station Observer
if (m_bHasFocus)
{
    if (!Station::RegisterObserver(GetHandle()))
    {
        // Error handling
    }
}
Then, use the following syntax to compute the distance between the sphere and another object in the Sphere class. In this example, the distance between two objects of the Sphere class is calculated as the distance between their positions. The distance between a Sphere and an object of another class is unknown.
Code 12.10 Calculating the Distance Between Objects
// Override ComputeDistance for a Sphere class duplicated object.
qReal Sphere::ComputeDistance(DuplicatedObject* pDO)
{
    if (pDO->IsA(DOCLASSID(Sphere))) {
        return sqrt(
            sqr(m_Pos.x - ((Sphere*)pDO)->m_Pos.x) +
            sqr(m_Pos.y - ((Sphere*)pDO)->m_Pos.y) +
            sqr(m_Pos.z - ((Sphere*)pDO)->m_Pos.z)
        );
    } else {
        return UNKNOWN_DISTANCE;
    }
}
The error tolerance is set to vary with the square of the distance between the limits of 0.05 and 1, with the maximum error being reached at a distance of 30 by means of the following method.
Code 12.11 Setting the Error Tolerance
Position::GetErrorToleranceFunction()->SetQuadraticError(0.05, 1, 30);
If Station A creates a P2P session and Station B joins it, a total of four duplicated objects are found on every station: two user-controlled spheres and two AI-controlled spheres. Of the four duplicated objects on a given station, two are duplication masters that were created by that station and two are duplicas that were discovered when the session was joined. On Station A, for example, the duplication masters are the user-controlled and AI-controlled spheres that were created by that station, and the duplicas are the two spheres created by Station B. This situation is illustrated in Figure 12.10, where the spheres controlled by Stations A and B are red and green, respectively. The following operations are carried out when a duplication master is updated on a station by a call to the DuplicatedObject::Update method. In this example, the Update method is called on the AI-controlled sphere on Station A.
The extrapolation error is calculated. NetZ calculates the difference between the real (updated) position of the AI-controlled sphere on Station A (x, y, z) and the extrapolated position of its duplica on Station B (x′, y′, z′). This value is the extrapolation error and is calculated according to the following equation.

Extrapolation error = sqrt( (x − x′)² + (y − y′)² + (z − z′)² )
The distance z is then calculated. The duplica of the observer on Station B (in this case, the user-controlled sphere on Station B) is searched for on Station A. The distance between the duplica of the observer on B (x_User-cont(B), y_User-cont(B), z_User-cont(B)) and the duplication master of the AI-controlled sphere on Station A (x_AI-cont(A), y_AI-cont(A), z_AI-cont(A)) is then calculated by NetZ by calling the Sphere::ComputeDistance function, as defined previously. Note that when an update occurs on a station, the observer that was set on the remote station determines the object from which the distance is calculated.
NetZ then uses the error tolerance function to calculate the error tolerance, based on the parameters that were set for the error tolerance function as described previously. This example called SetQuadraticError(0.05, 1, 30), so the error tolerance is given by the formula Error tolerance = 0.05 + [(1 − 0.05)/900]*z^2. The error tolerance increases with the square of the distance, with a minimum value of 0.05 and a maximum value of 1. The minimum error tolerance of 0.05 is used when z = 0, which happens when the AI-controlled sphere of A is very close to the user-controlled sphere of B. As the distance increases, the error tolerance increases until, at a distance of 30, it reaches its maximum value of 1. After that, it remains constant.
Finally, the extrapolation error and the error tolerance are compared. If the extrapolation error is greater than the error tolerance, a message is sent from Station A to Station B to update the position of Station A’s AI-controlled sphere duplica (AI-control (A’)) on B.
Figure 12.10 Updating the Position of the AI-Controlled Sphere (A) Using Distance-Based Dead Reckoning
You can fine-tune the dead-reckoning algorithm used by NetZ in several ways. In general, the default values used by NetZ produce satisfactory dead-reckoning performance, but you can change how dataset updates are treated by NetZ to optimize game performance. Your choice of fine-tuning mechanism depends on the type of dataset, but the dataset must use an extrapolation filter. The end of this chapter suggests a workflow with step-by-step explanations of how to fine-tune your game.
When a remote station receives a dataset update from a duplication master, it updates the dataset model in two steps: tracking and then convergence.
In the tracking step, datasets are extrapolated using the values from previous updates to predict the likely values until the next update is received.
The convergence step adjusts the local values of the dataset so that the values converge smoothly to the values predicted by the tracking algorithm.
When an extrapolation filter is used on a dataset, either a linear or non-linear tracking algorithm is used to predict each of the variables of the dataset. When a linear algorithm is employed, the two most recent updates are used to predict the variable, whereas a non-linear, or parabolic, algorithm uses the three most recent updates. The choice of algorithm depends on how much the variable changes between updates. This change is defined by the angle of embrace θ (theta). As illustrated in Figure 12.11, if the values of the variable remain relatively unchanged over time, the angle of embrace is large and non-linear tracking is used. However, if the variable undergoes a sudden change over time, the angle is small and linear tracking is used.
Figure 12.11 Tracking Angle for (a) Linear and (b) Non-Linear Tracking Algorithms for a Single Dataset Variable
The tracking angle threshold determines whether a linear or non-linear prediction algorithm is used. The threshold, which may be specified in radians or degrees, is set for each dataset using the PHBDRParameters::SetTrackingAngleThreshold member function; the default unit is radians. If the angle of embrace is less than the tracking angle threshold, a linear tracking algorithm is used; if it is larger, a non-linear tracking algorithm is used. You may force the use of either a linear or a non-linear tracking algorithm by calling the PHBDRParameters::ForceLinearTracking or PHBDRParameters::ForceNonLinearTracking member function, respectively. To ensure coherence between the datasets of objects on different stations, the tracking angle threshold must be set to the same value on every station across the network.
After updating a dataset, the second step is the convergence of each individual variable of the dataset on the remote station to the values given by the update from the duplication master. If no convergence is implemented, when a duplica receives an update message, the values of the dataset change directly to those of the update; if the dataset is a position, the object appears to jump from one position to another. When a convergence algorithm is used, however, the values of the dataset smoothly converge to the path predicted by the tracking algorithm. NetZ implements the convergence step by default. If you do not want to use convergence, configure the dataset so that the PHBDRParameters::ApplyConvergence function returns false.
As with tracking, either a linear or a non-linear (parabolic) algorithm may be used to determine the convergence path, and the choice of algorithm is based on the angle of embrace. As illustrated in Figure 12.12, a large angle of embrace indicates that the values of the variable are almost constant, and linear convergence is used. On the other hand, a small angle of embrace indicates that the variable has changed significantly, and a non-linear convergence algorithm is used. After the variable has converged to the path predicted by the tracking algorithm, it follows this path until the next update is received, after which the process is repeated.
Figure 12.12 Paths for (a) Non-Linear and (b) Linear Convergence Algorithms for a Single Dataset Variable
The time taken for the variable to converge from the old values to a value on the predicted path is called the convergence delay, or CPDelay. NetZ uses a heuristic approach to compute the convergence delay, and the minimum and maximum delays for the convergence can be defined by calling the PHBDRParameters::SetConvergenceDelay member function. Set the convergence delay so that the variable converges to the predicted path before the next update is received; if it does not, the extrapolation appears irregular.
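For instance, the following sketch assumes the argument order is minimum then maximum, in milliseconds, again mirroring the static calling style used elsewhere in this chapter:

// Let variables take between 100 ms and 500 ms to blend back onto
// the predicted path.
PHBDRParameters::SetConvergenceDelay(100, 500);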
The convergence angle threshold determines whether a linear or non-linear convergence algorithm is used. The threshold, which may be specified in radians or degrees, is set for each dataset by calling the PHBDRParameters::SetConvergenceAngleThreshold member function; the default unit is radians. If the angle of embrace is less than the convergence angle threshold, a non-linear convergence algorithm is used; if it is larger, a linear convergence algorithm is used. You may force the use of either linear or non-linear convergence by calling the PHBDRParameters::ForceLinearConvergence or PHBDRParameters::ForceNonLinearConvergence member function, respectively. Because the convergence algorithm is only implemented on the local station, each station across the network may set a different convergence angle threshold.
The choice of appropriate values for the tracking angle and convergence angle thresholds and the convergence delay depends on the dynamics of the particular dataset. For a summary of the general criteria for the selection of the parameters, see Table 12.1. For example, assume that the dataset relates to the position of an object. If the object is one that undergoes frequent changes in its motion, such as a ball, set the tracking angle threshold to a high value to ensure that linear tracking is usually used, and set the convergence delay to a low value to ensure that the object quickly converges to the predicted path. If a high convergence delay was set, it is likely that the next update would be received before the object had converged to the predicted path, which would result in the object having an irregular trajectory. However, if the object is one that exhibits a relatively smooth motion, such as an airplane, set the tracking angle threshold low so that non-linear tracking is used, and set the convergence delay high so that the motion of the object remains smooth. The value of the convergence angle threshold largely depends on how smooth you want the motion of the duplica on the local station to be. If the convergence angle threshold is high, non-linear convergence is used more frequently on updates, and the path of the object is smooth. If a low convergence angle threshold is used, linear convergence is used, and the object is more likely to have an irregular trajectory.
Table 12.1 General Criteria for Selecting the Dead-Reckoning Parameters

Parameter | Characteristic | Settings
---|---|---
Tracking Angle Threshold | Dataset values change frequently. | High
Tracking Angle Threshold | Dataset values are relatively constant. | Low
Convergence Angle Threshold | Smooth trajectory is required. | High
Convergence Angle Threshold | Smoothness of trajectory is unimportant. | Low
Convergence Delay | Dataset values change smoothly. | High
Convergence Delay | Dataset values can change suddenly. | Low
For most datasets, the default values supplied by NetZ are adequate for the tracking angle threshold, convergence angle threshold, and convergence delay, so you do not need to adjust these values. If you want to fine-tune the updates of datasets, use the criteria in Table 12.1 as a guide, and then optimize by trial and error for each dataset. Because convergence hides bad tracking, first fine-tune the tracking, and then fine-tune the convergence algorithm. To start, turn off the convergence algorithm (so that PHBDRParameters::ApplyConvergence returns false), and then adjust the tracking algorithm until the object's trajectory is relatively accurate but jerky. Then, tweak the values of the convergence angle threshold and convergence delay until the trajectory of the object is satisfactory. Note that the default values for the tracking angle and convergence angle thresholds are large because it is assumed that the time scale is much greater than the variable scale.
The latency between two stations causes a delay between the time at which an update is sent by the duplication master and the time at which the duplicas receive the update: there is a time lag before the duplicas are updated when the dataset of a duplication master changes. If this delay is long, it can lead to an inaccurate extrapolation of the datasets of the duplicas; the effect is more pronounced when the dataset values change significantly. To handle this situation, NetZ can delay the extrapolation of the duplicas by changing the timestamp on the message when the message is received. For example, if t is the initial timestamp on the message received from the duplication master, with an extrapolation delay the value used by the duplica would be t + tiDelay. This delay enables NetZ to predict the dataset for the duplicas more accurately, because extrapolation is performed for a shorter period of time, but there is then a time lag between the display of the duplication master and its duplicas. Delaying extrapolation on the duplicas is therefore a trade-off between the types of values that the duplicas display: either more accurate dataset values that are delayed, or less accurate values that are synchronized with the duplication master. Extrapolation delay is normally set when loopback is applied.
Use the PHBDRParameters::SetExtrapolationDelay function to implement the extrapolation delay, as shown in the following example, where the time is given in milliseconds. The set value is returned by the PHBDRParameters::GetExtrapolationDelay member function.
Code 12.12 SetExtrapolationDelay Member Function Syntax

PHBDRParameters::SetExtrapolationDelay(TimeInterval tiDelay);
For example, the following line of code delays the extrapolation on duplicas by 100 milliseconds.
Code 12.13 Delaying Extrapolation on Duplicas by 100 Milliseconds
PHBDRParameters::SetExtrapolationDelay(100);
Note that only dataset updates from a duplication master are delayed, and that other messages—such as RMCs, actions, and fault recovery operations—are not delayed. In some situations this may lead to the inaccurate display of a duplica, or to run-time errors.
Continuity breaks occur when the values of a dataset change in a non-continuous way, such as when an object is teleported or bounces off a wall. For example, if an avatar is teleported from one position to another, it makes no sense to interpolate between the two positions. Indicating continuity breaks helps NetZ perform smoother dead reckoning by enabling it to disregard the extrapolation error. When a continuity break occurs, the dataset is updated a number of times (depending on the type of continuity break) regardless of the extrapolation error, to provide the duplicas with information about the change. These updates are sent over a reliable channel, regardless of whether updates for the dataset are normally sent over a reliable or unreliable channel. As explained in Section 12.1, the extrapolation error is the difference between the dataset values of a duplication master on the local station and the extrapolated dataset values of the duplica on a remote station.
Indicate a continuity break by calling the DataSet::IndicateContinuityBreak member function, which is declared with the following syntax; byBreak indicates the type of continuity break.

Code 12.14 IndicateContinuityBreak Member Function Syntax

DataSet::IndicateContinuityBreak(qByte byBreak);
The valid types of continuity breaks are stop, sudden change, and teleport. As their names suggest, continuity breaks are used to indicate when the values of a dataset suddenly stop, change, or jump from one set of values to another. In addition to indicating a continuity break, if you know how the dataset changes when a continuity break occurs, you can use a continuity break update model to provide NetZ with additional information about how a particular type of continuity break affects the dataset. An update model may be used if the continuity break indicated is a teleport or sudden change. This allows NetZ to react more quickly to the sudden change in dataset values, and to predict the dataset values more accurately. For example, the spheres in the SphereZ game regularly collide with the walls and undergo a sudden change in the values of the Position dataset. The following code is used to indicate the continuity break when a collision occurs.
Code 12.15 Indicating a Continuity Break When a Collision Occurs
m_dsPos.IndicateContinuityBreak(CONTINUITY_BREAK_SUDDEN_CHANGE |
                                CONTINUITY_BREAK_UPDATED_MODEL);
This call also indicates that an update model, which contains additional information about the continuity break, is associated with the break. Set an update model by calling the DataSet::SetModel member function, and access it by using the MODEL macro. For example, use the following update model for the collision of a sphere with the wall. This update model defines the new position, speed, and acceleration of the sphere in the x, y, and z directions.
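// For each axis, SetModel receives the model handle for the variable,
// followed by its new value, its speed per millisecond, and its
// acceleration per millisecond squared (gravity acts along z).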
m_dsPos.SetModel(MODEL(m_dsPos,x), m_dsPos.x, m_Speed.sx/1000, 0);
m_dsPos.SetModel(MODEL(m_dsPos,y), m_dsPos.y, m_Speed.sy/1000, 0);
m_dsPos.SetModel(MODEL(m_dsPos,z), m_dsPos.z, m_Speed.sz/1000, -9.0/1000000);
The SetModel member function is called on the duplication master when the change occurs, to provide NetZ with additional information about a sudden change to an extrapolated dataset. Rather than requiring the duplica to wait until three updates have been received before it has enough information to extrapolate the dataset, information about how the dataset changes can be sent from the duplication master to the duplica immediately after the continuity break occurs. This enables the dataset of the duplica to be shown more accurately when a continuity break occurs.
Local corrections are performed on the local station, and are used to correct inaccurate extrapolations of a duplica's dataset before the subsequent update is received from the duplication master. For example, on each station the duplica of a sphere should not pass through the walls. However, because of network latency, extrapolation of the position of the duplica of a sphere may lead to a sphere being displayed in or past a wall. If this occurs, you can correct an inaccurate extrapolation of the duplica's position on the local station before you receive an update from the duplication master. This correction does not have to be exact, because the duplica's position will converge to the duplication master's position when the position is updated. By implementing a local correction, the positions of the duplicas of the spheres are shown more accurately. Note that if latency is low, the effect of implementing local corrections may not be evident. However, as latency increases their implementation becomes more important in ensuring an accurate extrapolation.
Set the local correction for the duplica by calling the DataSet::SetLocalCorrection member function, after which the correction is applied on the duplica. When this method is executed, updates from the duplication master are ignored until two points have been received from the duplication master; after two points are received, NetZ has enough information to continue the extrapolation. When setting local corrections, you can predict the values of a dataset of a duplica by calling the DataSet::PredictValue and DataSet::PredictRateOfChange functions, which predict the values of a dataset based on the current values. For example, you can set a local correction for the duplicas of all spheres to ensure they are not displayed inside a wall. The following code indicates that when a sphere collides with a wall, its position remains the same, but its velocity and acceleration change. The velocity in the x and y directions is the negative of the current value predicted by the DataSet::PredictRateOfChange member function. In the z direction, the sphere is given an arbitrary velocity and a negative acceleration to account for gravity.
Code 12.16 Sample Local Correction Code for a Duplica
if (ComputeXWallCollision())
{
    qDouble dCurrentSpeed = m_dsPos.PredictRateOfChange(MODEL(m_dsPos, x));
    m_dsPos.SetLocalCorrection(MODEL(m_dsPos, x), m_dsPos.x, -dCurrentSpeed, 0);
    m_dsPos.SetLocalCorrection(MODEL(m_dsPos, z), m_dsPos.z, 10.0/1000, -9.0/1000000);
}
if (ComputeYWallCollision())
{
    qDouble dCurrentSpeed = m_dsPos.PredictRateOfChange(MODEL(m_dsPos, y));
    m_dsPos.SetLocalCorrection(MODEL(m_dsPos, y), m_dsPos.y, -dCurrentSpeed, 0);
    m_dsPos.SetLocalCorrection(MODEL(m_dsPos, z), m_dsPos.z, 10.0/1000, -9.0/1000000);
}
Set and get the maximum local correction delay by using the PHBDRParameters::SetMaximumLocalCorrectionDelay and PHBDRParameters::GetMaximumLocalCorrectionDelay member functions. The maximum local correction delay is the maximum time during which local corrections are applied to a duplica. After the local correction delay expires, if the duplica has not converged to the position given by the extrapolation model, the object is teleported to that position. In the rare situations where local corrections (such as for hitting a wall) are repeatedly performed on a duplica, preventing it from converging to the extrapolation model, this ensures that the position of the duplica is corrected after a short time.
The trickiest thing about dead reckoning is fine-tuning the pertinent parameters so that the duplica follows the motion of its master in even the most difficult situations. Because all the dead-reckoning parameters interact to produce a duplica's final extrapolation, it is very important when fine-tuning dead reckoning to isolate each variable and only change one at a time. To achieve this result, use the following basic workflow.
1. Regardless of whether you want to use a constant or distance-based error, first fine-tune dead reckoning with a constant error tolerance by using ErrorToleranceFunction::SetConstantError, and then adjust this error until you have acceptable movement as long as the duplication master's motion does not change suddenly. In addition to decreasing the number of interacting variables, using a constant error gives you an idea of the minimum tolerable error.
2. Use the DataSet::IndicateContinuityBreak function to indicate continuity breaks when datasets change in a non-continuous manner.
3. Use the DataSet::SetLocalCorrection function to implement local corrections so that duplicas do not pass through walls or other solid objects.
4. To delay extrapolation on the duplicas, call the PHBDRParameters::SetExtrapolationDelay function to set the extrapolation delay. This delay is typically set in conjunction with a loopback on the master.
5. Fine-tune the tracking algorithm.
- Because convergence hides bad tracking, turn off the convergence algorithm (see the PHBDRParameters::ApplyConvergence function). Ensure that the tracking is satisfactory before applying convergence.
- Set the tracking angle threshold. This variable needs to be tweaked for each dataset. However, if the object frequently changes motion, the threshold is usually high so that linear tracking is used. For an object, such as a car or plane, that has a relatively smooth trajectory, the threshold is typically low so that non-linear tracking is used. After tweaking this parameter, a duplica's motion should be relatively accurate but still slightly jerky.
6. Fine-tune the convergence algorithm.
- Set the convergence angle threshold. If you want a very smooth trajectory, set this threshold high so that non-linear convergence is used; if some jerkiness is permissible, you can set a low value so that linear convergence is used.
- Set the convergence delay. If the dataset values change smoothly, this delay can be high; if the values change suddenly, set this delay low so that the duplica converges quickly to the predicted path. Ideally, this convergence should happen before a new update is received from the master.
7. If you want to use a distance-based extrapolation error, set it now. Distance-based extrapolation error decreases the bandwidth used as objects move farther away from the observer object, at the cost of accuracy.
- Use the Station::RegisterObserver function to register a duplicated object as the observer for a station.
- Implement the DuplicatedObject::ComputeDistance callback to calculate the distance between the observer and a duplica.
- Use the ErrorToleranceFunction::SetLinearError or ErrorToleranceFunction::SetQuadraticError functions to set the tolerated extrapolation error.
- Repeat steps 5 and 6 if necessary.
You can use either a lookahead or a loopback on the duplication master, as described in the following sections. In general, to mask latency, these techniques are used with dead reckoning performed on the duplicas.
When using lookahead, the duplication master predicts an event at a particular time in the future and sends this prediction, along with its associated timestamp, to the duplica before the event actually occurs on the duplication master station, as illustrated in Figure 12.13. Obviously, this technique should only be used for events and datasets that can be predicted with reasonable accuracy. When this is the case, good synchronization can be achieved. These events are usually physics-based, such as position, orientation, and collisions. Isolated events, such as firing a weapon, are less likely to be suited to this technique.
A typical use for lookahead is events, such as collisions, that the duplication master can predict better than the duplica. So rather than predicting an object's physics (collisions) on the duplica, as is done with standard dead reckoning, you can predict its physics on the master and then send this prediction model to the duplicas. If the duplication master predicts its actions incorrectly, you can either accept the prediction as being correct, or accept that the prediction is incorrect and patch for it. Depending on the type of event, either solution may or may not lead to some noticeable inconsistencies for the player.
Figure 12.13 Schematic of How Lookahead Is Applied to a Duplication Master
To implement lookahead, predict a particular dataset's values at a specific time in the future on the duplication master, and then send the predicted values to the duplicas by using the DuplicatedObject::Update function, specifying the valid update time as a parameter. On the duplication master, render the scene using dataset values that are valid at the current time. On the duplicas, refresh the datasets with the DuplicatedObject::Refresh function to calculate dataset values at the current time (knowing when the received update is valid), and then render the scene.
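A minimal sketch of the master side follows. The Time type and both helpers are illustrative assumptions: GetSessionTime stands in for whatever returns the current session clock, and PredictPosition is application physics code that projects the object's position ahead to a given time.

// Predict where the object will be 100 ms from now, store that
// prediction in the dataset, and stamp the update as valid then.
Time tFuture = GetSessionTime() + 100;
pObject->SetPosition(PredictPosition(pObject, tFuture));
// Duplicas receive values that are valid at tFuture.
pObject->Update(tFuture);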
The second technique you can use on a duplication master is loopback. Loopback is simply an enforced delay between when an event occurs or is triggered, and when its outcome is calculated and shown onscreen. This delay is effectively the reaction time of the game. Reaction times on the order of 100 milliseconds are not typically noticeable in most games, even fast action games. As long as the reaction time is consistent, players adjust their gameplay to account for the delay. The advantage of loopback is that the induced delay on the duplication session master gives you time to send a message about the event to the duplicas, which can lead to very good synchronization between the duplication master and duplica stations.
Figure 12.14 Schematic of How Loopback Is Applied to the Duplication Master
Set the loopback delay by using the DataSet::SetLoopbackDelay member function, and get the value by calling the DataSet::GetLoopbackDelay member function. When using loopback, rather than storing all dataset values for the duplication master for the duration of the delay, NetZ stores only a sampling of the values in a queue and then uses those values to interpolate the data for the object at the required time. Set the sampling interval for the stored values by calling the DataSet::SetLoopbackSamplingInterval function, and get it by calling the DataSet::GetLoopbackSamplingInterval function. The default value is 50 milliseconds, which is equivalent to sampling 20 points per second.
When a duplication master uses loopback, in addition to maintaining extrapolation models for its duplicas, it maintains a separate loopback model that it uses to predict the values of its own looped-back datasets at the required time. To ensure that the loopback model is kept up to date, the model needs to be updated and then refreshed in much the same way as the standard dead-reckoning model. When a dataset changes, you must call the DataSet::UpdateLoopback function to add the new values to the loopback model, and then call the DataSet::RefreshLoopback function to recalculate the model using the new values. After the loopback model is refreshed, you can call the DataSet::PredictLoopbackValue function, or one of the similar methods, to get the predicted loopback dataset values. For example, say you use loopback on the Position dataset; you typically make function calls similar to the following code on the duplication master.
Code 12.17 Using Loopback on a Position Dataset
// Set the new values in the Position dataset
pObject->SetPosition(oNewPosition);
// Update the values. This will trigger a network message to the
// duplicas
pObject->Update();
// Update the values in the loopback model
pObject->m_dsPosition.UpdateLoopback();
// Recalculate the loopback model
pObject->m_dsPosition.RefreshLoopback();
// Return the new predicted loopbacked values, where
// GetLoopbackPosition uses the PredictLoopbackValue method
// to return the predicted values for the Position dataset.
Vector3D vLoopbackedPosition = pObject->GetLoopbackPosition();
To ensure that a duplication master and its duplicas stay synchronized, you must use loopback together with dead reckoning with an extrapolation delay applied. (Set this delay by calling PHBDRParameters::SetExtrapolationDelay.) Otherwise, the degree of synchronization is sensitive to variations in latency. An extrapolation delay ensures that the time at which the outcome is calculated is consistent on the duplication master and on the duplica, regardless of the time taken for a message to traverse the network. For perfect synchronization, set the extrapolation delay and the loopback delay to the same value.
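For example, to pair a 100-millisecond loopback with a matching extrapolation delay, you might write the following sketch, which follows the calling styles of Code 12.13 and Code 12.17 (pObject and its m_dsPosition member are taken from the Code 12.17 example).

// Loop the master back by 100 ms...
pObject->m_dsPosition.SetLoopbackDelay(100);
// ...and delay extrapolation on the duplicas by the same amount.
PHBDRParameters::SetExtrapolationDelay(100);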