16. Effectively Using Duplicated Objects

16.1. Reducing the Overhead of Updating Duplicated Objects

A message is generated every time the DuplicatedObject::Update function is called.

When updating multiple datasets simultaneously, call the DuplicatedObject::Update function once to update them all together, rather than calling DuplicatedObject::Update(DataSet &refDataset) separately for each one. This reduces the number of messages generated and therefore the overhead.
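As a sketch (pAvatar, m_pos, and m_state are hypothetical names for a duplicated object pointer and its datasets; see the function reference for the exact Update signatures):

```cpp
// Less efficient: each call generates its own message.
pAvatar->Update(pAvatar->m_pos);
pAvatar->Update(pAvatar->m_state);

// More efficient: a single call updates all modified datasets
// and generates a single message.
pAvatar->Update();
```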

Warning

If you call the version of the DuplicatedObject::Update function that updates all datasets at once, datasets specified as reliable in the DDL become update targets even when only datasets specified as unreliable have changed. Because datasets specified as reliable do not normally change every frame, be sure to add upon_request_filter (described later) to such datasets in the DDL file and implement your code so that specific datasets are not sent unnecessarily.

16.2. Increasing Dataset Efficiency

If there will be any unmodified parameters when you call the DuplicatedObject::Update function, move them to another dataset and add upon_request_filter to the DDL file. If you define upon_request_filter for a dataset, that dataset is included in messages only when the DataSet::RequestUpdate function has been executed. This allows you to reduce the packet size when that dataset does not need to be updated.

Take, for example, a parameter that changes before a match but rarely or never changes during one. This might apply to information such as a character's name or maximum number of hit points.
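For example, assuming an Avatar duplicated object with a member m_binary of the BinaryParameter dataset type shown in Code 16.2 (the pointer and member names are illustrative):

```cpp
// binaryData actually changed this frame, so request that it be
// included in the next message. Without this call, a dataset with
// upon_request_filter is omitted from Update messages.
pAvatar->m_binary.RequestUpdate();
pAvatar->Update();
```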

Note

The advantage to this approach is that reducing the packet size reduces the CPU load by lowering the cost of serializing and deserializing messages and allows for more efficient use of limited network bandwidth.

It also gives you some leeway to make improvements even if you have defined parameters that update at highly different frequencies in the same dataset.

Code 16.1 Example of an Inefficient Dataset

// This dataset is large and includes parameters that update at highly different frequencies.
dataset Parameters {
    uint32 avatarState; // High update frequency.
    qbuffer binaryData; // Low update frequency. Assume that these parameters are relatively large.
} upon_request_filter;

In Code 16.1, even if only avatarState is updated, binaryData is also updated because it is in the same dataset. If binaryData is large, it consumes network bandwidth unnecessarily and greatly increases the load on the CPU.

In cases like this, you can reduce the CPU load and make better use of network bandwidth by putting avatarState and binaryData into different datasets to limit the transfer of unnecessary data. (Code 16.2)

Code 16.2 Example of Efficient Datasets

dataset AvatarParameter {
    uint32 avatarState; // High update frequency.
} upon_request_filter;

dataset BinaryParameter {
    qbuffer binaryData; // Low update frequency. Assume that these parameters are relatively large.
} upon_request_filter;

Note

If avatarState and binaryData are updated at the same time, the most efficient approach is to define them in the same dataset. The more parameters that are updated at the same time are defined in a single dataset, the better the efficiency.

16.3. Notes About the Frequency of Reliable Transmissions

With NEX, only one reliable transmission stream is maintained by default. If reliable transmissions are performed frequently over a poor connection, latency tends to increase gradually. As described in Section 7.3.4, you can initialize several reliable streams and then specify a substream ID, using DataSet::SetSubStreamID for datasets and DOCallContext::SetSubStreamID for RMCs, so that reliable data communications are performed with independent resend control.
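For example (a sketch using the functions named above; the substream ID value, the m_matchResult dataset member, and callContext are illustrative, and the substreams themselves must have been initialized as described in Section 7.3.4):

```cpp
// Send this dataset's reliable traffic on substream 1 so that its
// resend control is independent of traffic on substream 0.
pAvatar->m_matchResult.SetSubStreamID(1);

// Likewise for a remote method call on a duplicated object.
DOCallContext callContext;
callContext.SetSubStreamID(1);
```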

Even when using multiple reliable streams, latency can still increase as resends or worsening connectivity raise the data volume, so keep the use of datasets and RMCs with reliable communications to a minimum.

  • Parameters suited to unreliable communications
    • Parameters that are updated frequently, but some loss does not adversely affect the game
    • Parameters such as character position that may change every frame
  • Parameters suited to reliable communications
    • Parameters that are updated less frequently, but must arrive or the game suffers
    • Parameters such as match start and end flags, and match results

DDL definitions, rather than the application source, specify whether datasets are sent by unreliable or reliable communications. For more information, see Chapter 14.

Warning

Take care not to update datasets specified as reliable at a constant high frequency (such as every frame).

When a dataset specified as reliable is updated with the DuplicatedObject::Update function, or a remote method call that uses reliable communications is made on a duplicated object, QERROR(Transport, ReliableSendBufferFull) or DOCallContext::ErrorReliableSendBufferFull, respectively, occurs if the reliable communication buffer is exhausted. You can set the maximum number of reliable buffers for duplicated objects; for the functions to use, see the function reference.

Except for certain P2P communications, packet buffers are managed by the packet buffer manager in a fixed-length memory pool. As long as there is enough space in the packet buffer, no error occurs until the reliable communication buffer limit is reached. When the packet buffer runs out of space, calling the DuplicatedObject::Update function or sending a remote method call results in QERROR(Transport, PacketBufferFull) or DOCallContext::ErrorPacketBufferFull, respectively. If PacketBufferFull occurs, dispatch, wait briefly, and then try to send the data again if necessary. If PacketBufferFull occurs frequently, increase the size of the packet buffer. For more information about the packet buffer, see Managing Packet Buffer Memory in Memory Management.

16.4. Caching SessionClock::GetTime()

If you call functions such as DuplicatedObject::Refresh and DuplicatedObject::Update without specifying a session clock, they call the SessionClock::GetTime function internally. On the CTR platform, the SessionClock::GetTime function has a nontrivial CPU cost. If you are going to use it frequently, you should probably use a cached time value.

Warning

Discrepancies in datasets can result if the same session clock value is reused across different frames. We recommend limiting the scope of a cached value to a single frame.

Code 16.3 gives an example of using a cached session clock for both the DuplicatedObject::Refresh and DuplicatedObject::Update functions. Sharing a session clock within the same frame in this way presents no problem.

Code 16.3 Example of Using a Cached Session Clock

// Refresh and Update both use this session clock.
Time time = SessionClock::GetTime();

// Refresh all duplicas.
Avatar::SelectionIterator itAvatarDuplica(nn::nex::DUPLICA);
while (!itAvatarDuplica.EndReached()) {
    // You can get update data using Refresh when using extrapolation_filter and buffered.
    itAvatarDuplica->Refresh(time);
    ++itAvatarDuplica;
}

//
// Perform physics calculations and update the positions of the avatars for which the local console is the duplication master.
//

// Inform the duplica of changes in parameters of the duplica master.
Avatar::SelectionIterator itAvatarMaster(nn::nex::DUPLICATION_MASTER);
while (!itAvatarMaster.EndReached()) {
    itAvatarMaster->Update(time);
    ++itAvatarMaster;
}

16.5. Dead Reckoning Example

The dead-reckoning feature described in Chapter 12 Dead Reckoning can be effective in making up for network latency by smoothing changes in datasets, in addition to reducing the number of packets that need to be sent.

To narrow the discussion, consider setting up dead reckoning for the following dataset. For more information about dead reckoning, see Chapter 12 Dead Reckoning.

Code 16.4 Example of Setting Up Dead Reckoning

dataset Position {
    float x;
    float y;
    float z;
} extrapolation_filter, unreliable;

16.5.1. Example of Setting the Minimum Update Delay

The minimum and maximum update delays are used to determine whether the DO master sends values to duplicas, by comparing them against the elapsed time since the last data transmission.

The default minimum update delay for a dataset is 66 milliseconds. We recommend lowering this value for applications with high real-time requirements.

Use a function call as in the following code example to set the minimum update delay for the Position dataset defined in Code 16.4.

Code 16.5 Sample Code for Changing the Minimum Update Delay for the Position Dataset

// Set a minimum update delay of 33 milliseconds (two frames).
// This places an upper limit of once every two frames on the number of times the DO master sends a Position update to duplicas.
Position::SetMinimumUpdateDelay(33);

Warning

Two frames are actually approximately 33.34 milliseconds long, so a minimum update delay of 33 milliseconds leaves a margin of approximately 0.34 milliseconds. Depending on the timing, an update may therefore occasionally be sent to duplicas twice within two frames.
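The gating behavior behind this warning can be modeled with plain arithmetic, independent of NEX (a self-contained sketch, not NEX code): at 60 frames per second, frames are about 16.67 milliseconds apart, so a 33-millisecond minimum update delay permits a send roughly every second frame.

```cpp
#include <cassert>

// Model of the minimum update delay: the DO master may send only if
// at least minDelayMs have elapsed since the last send. Returns the
// number of sends over frameCount frames spaced frameMs apart.
int CountSends(double frameMs, double minDelayMs, int frameCount)
{
    double lastSend = -1.0e9; // Effectively "never sent".
    int sends = 0;
    for (int i = 0; i < frameCount; ++i)
    {
        double now = i * frameMs;
        if (now - lastSend >= minDelayMs)
        {
            lastSend = now;
            ++sends;
        }
    }
    return sends;
}
```

With a 33-millisecond delay, a send is permitted on every second frame (33.34 ms elapsed >= 33 ms); the default 66-millisecond delay permits one roughly every four frames.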

16.5.2. Example of Setting the Error Tolerance

The error tolerance determines whether the DO master sends values to duplicas by comparing the difference between dataset values on the DO master and the duplica.

The default error tolerance is 0. With this setting, the DO master sends an update packet to duplicas at the interval defined by the minimum update delay even if the dataset has changed only slightly. We recommend setting this to an appropriate value.

Use a function call as in the following code example to set the error tolerance for the Position dataset defined in Code 16.4.

Code 16.6 Sample Code for Changing the Error Tolerance for the Position Dataset

// Set a constant error tolerance of 0.5f.
// If there is an error larger than 0.5f, the DO master sends a Position update to duplicas.
ErrorToleranceFunction * pErrorToleranceFunction = Position::GetErrorToleranceFunction();
pErrorToleranceFunction->SetConstantError(0.5f);

Warning

The minimum update delay has higher priority than the error tolerance in terms of packet transport timing. No matter how low you set the error tolerance, packets are never sent at a shorter interval than that given by the minimum update delay.

16.5.3. Using Continuity Breaks

Because dead reckoning predicts coordinate values, the duplica of a duplicated object may appear to penetrate other objects, such as the ground or walls, on impact. This effect is more pronounced when network latency is high.

In such cases, use the DataSet::IndicateContinuityBreak function to indicate a continuity break. Continuity breaks can alleviate this penetration, although they do not always eliminate it altogether.

The second parameter of DataSet::IndicateContinuityBreak specifies the protocol used to indicate the continuity break: reliable or unreliable communications. Packets lost when using reliable communications are resent, but use unreliable communications in situations where a continuity break would be meaningless if it arrived at the duplica late.

Code 16.7 Example of a doclass That Uses the Position Dataset

doclass Avatar {
    Position m_pos;
};

Code 16.8 Example of Using DataSet::IndicateContinuityBreak

Time time = SessionClock::GetTime();
Avatar::SelectionIterator itAvatarMaster(nn::nex::DUPLICATION_MASTER);
while (!itAvatarMaster.EndReached()) {
    //
    // Perform physics calculations and update the positions of the avatars for which the local console is the duplication master.
    //

    // Placeholder flags: set these from the results of the physics calculations above.
    bool hitWall = false;       // The avatar has hit a wall.
    bool suddenChange = false;  // The avatar has suddenly changed its direction of movement.
    bool teleported = false;    // The avatar has instantaneously warped to another location.

    if (hitWall)
    {
        itAvatarMaster->m_pos.IndicateContinuityBreak(CONTINUITY_BREAK_STOP, false);
    }
    else if (suddenChange)
    {
        itAvatarMaster->m_pos.IndicateContinuityBreak(CONTINUITY_BREAK_SUDDEN_CHANGE, false);
    }
    else if (teleported)
    {
        itAvatarMaster->m_pos.IndicateContinuityBreak(CONTINUITY_BREAK_TELEPORT, false);
    }

    itAvatarMaster->Update(time);
    ++itAvatarMaster;
}

CONFIDENTIAL