This chapter describes features and provides reference information that is useful for making games.
This section describes the classes provided by NetZ for fine-tuning a game. These classes enable you to monitor aspects of the transport protocol and dataset updates, and to override the default memory allocation functions. This aids the fine-tuning process and enables you to simulate different network conditions to see how your game performs under each.
Use the EmulationDevice class to emulate network conditions such as bandwidth, jitter, latency, and packet drop probability on a station's transport devices (its input device and output device). Use the InputEmulationDevice subclass to access the settings for the input device, and the OutputEmulationDevice subclass to access the settings for the output device. These classes enable you to test the performance of your game under varying network conditions. Note that you must access these classes through the GetInputEmulationDevice and GetOutputEmulationDevice member functions of the RootTransport class. The RootTransport instance is valid as soon as the NetZ object is created. After that, you can get references to the EmulationDevice objects and enable each one independently with its Enable method.
The following example shows code for network emulation on a station's output transport device, given the bandwidth (in bits per second), latency (in milliseconds), and packet loss rate (set as dropProbability).
Code 17.1 Settings for Output Network Emulation
OutputEmulationDevice * pOutputEmulation =
RootTransport::GetInstance()->GetOutputEmulationDevice();
pOutputEmulation->Enable(); //Set the value after running Enable.
pOutputEmulation->SetBandwidth(bandwidth);
pOutputEmulation->SetLatency(latency);
pOutputEmulation->SetPacketDropProbability(dropProbability);
You can also use InputEmulationDevice for input network emulation, but normally you only need to use OutputEmulationDevice.
Data is recorded to logs using the Log class. The EventLog and TraceLog classes are subclasses of this class, and are used to log system events and traces, respectively. Each EventLog and TraceLog object must have a LogDevice and OutputFormat object associated with it to define, respectively, the data output location and the data format. Use the Log::SetLogDevice and Log::SetOutputFormat member functions to associate these objects with a Log object. Use the LogDevice class or any of its subclasses to define how log strings are output to a particular logging device. The logging device may be a file, debugger, console, or a user-defined device. The OutputFormat class specifies what information is provided with the logged data. For example, a trace can include information such as the thread ID, process ID, the local time, and the trace number. Both the LogDevice class and the OutputFormat class may be subclassed to implement your own custom output device and format.
The EventLog class implements persistent logging using a log level system. The log level specifies the severity of the log message. Use the SetLogLevel member function to set the log level to one of the following values, listed in increasing order of severity: Verbose, Info, Warning, Critical, or Always. When a log output request is made, the log only shows messages at or above the severity level specified for the EventLog object. For example, if the log level is set to Warning, Verbose and Info messages are not logged.
Use the QLOG member function to define the event messages logged by the EventLog class. This member function has the following syntax, where lLevel is the event log level and szFormat is the log message, in the same format as the standard printf function.
Code 17.2 QLOG Member Function Syntax
void QLOG(EventLog::LogLevel lLevel, qChar *szFormat, ... );
You can call the QLOG member function directly. There is no need to look up the EventLog class. For example, a "Begin test" message at the Info level could be defined as follows:
Code 17.3 Outputting a Message at the Info Level
QLOG(EventLog::Info, _T("\n\nBegin test"))
Use the TraceLog class to trace the behavior of the system. This class logs all active traces and is useful in program debugging. Note that when this class is run in release mode, traces are automatically not logged. Several trace macros are defined for tracing various aspects of the system, such as the duplicated object protocol, fault tolerance, initialization, operations, plug-ins, lobbies, and updates. For example, set the following trace flag to trace the fault behavior of the system.
Code 17.4 Setting the Trace Flag to Trace the Fault Functionality of the System
TraceLog::GetInstance()->SetFlag(TRACE_FAULT);
There is also a TRACE_USER macro that you can redefine to implement your own traces, and a TRACE_ALL flag for easily tracing all available information. You can set and clear the various trace flags by using the TraceLog::SetFlag and TraceLog::ClearFlag member functions. The TRACE_OPERATION flag is associated with system operations, and traces all operations for all objects by default. To filter the operations and objects logged, you must define what you want logged with the Operation::SetTraceFilter system callback, as detailed in Section 10.1.
Use the DuplicatedObject::Trace member function to trace the state of a duplicated object. This method may be redefined by the user, or the default implementation may be used. The default trace information for a duplicated object comprises its class name, its DOHandle, its duplication role, its reference count, whether it is flagged for destruction, and the station on which its duplication master instance resides. For example, the following code is used in SphereZ to trace the registered observer in the Sphere class when the camera changes focus.
Code 17.5 Tracing the Observer Registered with the Sphere Class
void Sphere::DoTakeFocus()
{
    // Other details of the implementation of the method
    else
    {
        DOHandle hObserver = INVALID_DOHANDLE;
        Station::Ref refStation(Station::GetLocalStation());
        if (refStation.IsValid())
        {
            refStation->GetObserver(&hObserver);
        }
        TRACE(TRACE_ALWAYS, _T("Observer: %X "), hObserver);
    }
}
You can use the DuplicatedObjectStore::Trace member function to trace the contents of the local duplicated object store. This effectively calls the DuplicatedObject::Trace member function on all the objects currently instantiated on the local station.
For a game to be scaled without decreasing the quality of the user experience, the resource usage of the system must be optimized. As more stations join a P2P session, resources (such as the bandwidth and latency between stations, and the CPU and memory on individual stations) must be used efficiently to prevent players from experiencing a decline in quality. To enable you to scale a game, the NetZ API includes several features that you can use to make the most effective use of available system resources. As mentioned previously, the object duplication model used by NetZ gives the developer control over where and how many duplicas of an object are published, so that objects are only duplicated to the stations where they are needed. To reduce bandwidth use, you can minimize the number of updates sent to object duplicas by using dead reckoning for data extrapolation. These features enable a game to be scaled, but naturally the extent to which a game can be scaled depends on the particular game. Note that NetZ is designed for relatively small-scale games.
The scalability of a game depends on the type of game, and on the level of interactivity required between the stations connected to the P2P session. Obviously, as is the case for any networked game, the more information that you need to send between stations, the lower the number of stations that can be supported simultaneously. The amount of information that must be received by a station depends on several factors, such as how many objects there are, how often their datasets need to be updated, and the quality of the output required on the station. Although NetZ has some inherent limitations on the number of duplicated objects and classes that can be created, these limitations should not prove restrictive. Up to 2^10 (1024) duplicated object classes can be created in one game, and up to 2^22 (over 4 million) duplication master instances can be created for a particular class. The number of duplication master objects in a particular class is limited because 22 bits are used to uniquely identify each object for each duplicated object class. Duplication masters and duplicas use the same ID, so there is no restriction on the number of duplicas for a particular master object. Regardless of how a game is implemented, the physical limitations of the hardware, such as the bandwidth, available memory, and CPU, are always present. Depending on the game, hardware and software limitations may or may not present performance problems; however, with the flexibility of NetZ, there are several ways to minimize the effect of such restrictions.
Bandwidth use is always a concern over any network, but before you can attempt to optimize the bandwidth used by a station, you need to understand the factors that affect it. Depending on the hardware used, a station has limited input and output bandwidth available. The input bandwidth used by a station depends on the number of duplicas on the station, the size of the message, and how frequently the datasets of the object are updated.
The message size is equal to the payload plus an additional 88 bytes of overhead that may contain information such as the object and dataset IDs. To reduce input bandwidth use, you can use data extrapolation so that a dataset is updated only when the extrapolation is no longer precise. In addition, publish duplicas only to the stations that actually need them.
The output bandwidth used by a station is dependent on the number of duplication masters on the station, the number of stations to which it directly sends updates, and the size and frequency of the message.
The number of messages that a station sends directly to another station is typically equal to the number of duplicas of the object. However, you can use an extrapolation filter to predict the dataset values on the duplicas, and thereby minimize the number of updates that a duplication master is required to send. So, rather than updating a dataset each time it changes, the duplica values are predicted and updates are sent only when this prediction is no longer accurate. You can also implement object migration to distribute duplication masters across the network and optimize available resources. Use object migration to decrease the number of duplication master objects on a low-capacity station and increase the number on a high-capacity station, to ensure that each station controls only the number of duplication masters for which it has sufficient bandwidth.
The bandwidth that a station requires depends principally on the number of messages it sends and receives. This is, in turn, dependent on the number of duplicas of an object, as a separate message must be sent to each duplica, and also on how frequently the object’s datasets are updated. The bandwidth required to receive messages depends on the number of object duplicas that the station receives updates for, and the bandwidth required to send messages depends on the number of duplicas of the duplication master that updates must be sent to. If a station has limited bandwidth, it is possible to decrease the frequency of updates. However, decreased frequency may decrease the quality of the game that the user experiences. The size of each message depends on the size of the datasets that need to be sent, in addition to the headers that are part of every message. The headers encompass the transport protocol information and information such as the packet ID and destination.
The flexible nature of the NetZ architecture allows games to run in a true peer-to-peer mode, client-server mode, or a combination of both as described in Section 2.4 Network Topologies. This flexibility presents the developer with options about how to structure the network depending on the needs of the game, and it can be used to configure a game in ways that, among other things, thwart hackers. The architecture chosen by the developer is a trade-off between the performance advantages of distributed control and the security of centralized control.
It is easy to restrict a user's access to specific data in a NetZ game because the developer has full control over where each object's duplication master resides and how data is propagated over the network. For example, some data can be stored on a secure server to limit access to it, without necessarily going to a true client-server architecture. The developer has the option of locating all or some duplicated objects on the secure server. Sensitive data could be located on a server, and non-sensitive data could be located on each user's station. By using a hybrid approach, a developer can effectively minimize the effects of hackers while still maintaining many of the benefits of a peer-to-peer architecture, such as faster communication and fault tolerance.
Converting the type of control from distributed to centralized is a simple matter of changing the station where the duplication master of each object resides. For distributed control, the duplication master of a user-controlled object resides on that user's station. To operate in centralized control mode, the developer can choose to locate all duplication masters on one secure station, which effectively restricts the amount of information that a user can access. In this case, a player uses RMCs or actions to invoke commands on their object's duplication master. Running NetZ on a client-server architecture effectively reduces the opportunities for hackers, but it also eliminates the peer-to-peer advantages of NetZ, such as fault tolerance and latency reduction. However, for certain games it may be beneficial to operate in a pure client-server configuration. When the server actually consists of a group of servers, NetZ provides all the necessary infrastructure to implement fault tolerance across the server group to ensure that the information on the server is retained even if one server fails.
If NetZ dead reckoning is not adequate for your needs, you can implement your own custom schemes by using the custom dead-reckoning extension. This extension enables you to define your own data extrapolation model, and also the scheme by which the duplication master updates its duplicas.
To use the extension, include DeadReckoning.h in the file where you implement your custom model, as demonstrated in the following code.
Code 17.6 Using Custom Dead Reckoning
#include <Extensions/DeadReckoning.h>
Then, to implement your own custom dead reckoning, you must perform the following steps.
- Declare a custom dataset DDL property with the deadreckoning property qualifier in a dsproperty DDL declaration.
- Define the dead-reckoning model and model policy that the property uses.
- Call the DuplicatedObject::Update and DuplicatedObject::Refresh functions periodically to update dataset values on the duplication master and refresh them on the duplicas.
To implement your own custom dead reckoning, in the DDL file you first need to declare your own custom dataset DDL property with the deadreckoning property qualifier. For example, declare the following in the DDL to create a MyOwnDeadReckoning DDL property.
Code 17.7 Creating a Custom Dataset DDL Property Using the deadreckoning Property Qualifier
dsproperty MyOwnDeadReckoning : deadreckoning;
After your custom dataset DDL property is declared, you can then assign it to the appropriate datasets. For example:
Code 17.8 Custom Dataset
dataset Position {
float m_fX;
float m_fY;
} MyOwnDeadReckoning;
As usual, DDL properties may be combined for a particular dataset. However, any DDL property that specifies the deadreckoning property qualifier cannot be combined with another deadreckoning property qualifier or with the built-in buffered or extrapolation_filter properties.
When you define a DDL property with the deadreckoning property qualifier, the following two macros are used in the generated code.
Code 17.9 Macros Used in Generated Code
// Defines the model used
_PR_MyDeadReckoningProperty_model(DS)
// Defines the model policy used
_PR_MyDeadReckoningProperty_modelpolicy
You define which dead-reckoning model and which model policy are used by defining these two macros in the <DDLFileName>Properties.h file. The first macro defines the name of the model class used for extrapolation. If the same dead-reckoning model can be used for several dataset classes, a template class can be used.
For example, if you defined your MySpecificDeadReckoning and MyGenericDeadReckoning dsproperty entries in the DDL as described previously, and you want to associate a dataset with your MySpecificModel and MyGenericModel dead-reckoning models and the supplied SharedModelPolicy, you would define the following in the header file of your MyDDLProperties class.
Code 17.10 Header File for the MyDDLProperties Class
// Define that the model is used for a single dataset
#define _PR_MySpecificDeadReckoning_model(DS) MySpecificModel
// Define that the model is used for any dataset
#define _PR_MyGenericDeadReckoning_model(DS) MyGenericModel<DS>
// Define the model policy to use with the MySpecificModel model
#define _PR_MySpecificDeadReckoning_modelstore SharedModelPolicy
// Define the model policy to use with the MyGenericModel model
#define _PR_MyGenericDeadReckoning_modelstore SharedModelPolicy
A dead-reckoning model determines how dataset values are extrapolated or interpolated over time. To implement a model, you need to implement a class that inherits from the DeadReckoningModel class and implements the IsAccurateEnough, ComputeValue, UpdateModelOnMaster, and UpdateModelOnDuplica member functions. These member functions are called when either the DuplicatedObject::Update or DuplicatedObject::Refresh function is called and an update is required. You may implement your model class as a regular C++ class if the model is used for only a single dataset, or as a template if several datasets use the same model. The following code shows an example of each implementation.
Note that you must either implement or include your custom dead-reckoning model in the <DDLFileName>Properties.h file. Because of the way that the code is generated, your models' associated datasets must be forward declared in this same file.
Code 17.11 Implementing the Model Class
// This class only performs extrapolation of a Position dataset.
// Therefore, it is implemented as a regular C++ class and
// hardcodes the Position class in the API.
class MySpecificModel: public DeadReckoningModel {
public:
MySpecificModel();
~MySpecificModel() {};
qBool IsAccurateEnough(const Position& oPosition, Time tUpdateTime,
DuplicatedObject* pUpdatedDO, Station* pTarget);
void ComputeValue(Position* pPos, Time tTarget) const;
void UpdateModelOnMaster(const Position& oCurrentValue,
Time tUpdateTime);
void UpdateModelOnDuplica(const Position& oCurrentValue);
};
// This model is implemented as a template and as such may be used
// for any dataset. It could expect that the extrapolated dataset
// provides methods to access the X and Y parameters (GetX, SetX,
// GetY, SetY), for example.
template<class DS>
class MyGenericModel: public DeadReckoningModel {
public:
MyGenericModel();
~MyGenericModel() {};
qBool IsAccurateEnough(const DS& oCurrentValue, Time tUpdateTime);
void ComputeValue(DS* pValue, Time tTarget) const;
void UpdateModelOnMaster(const DS& oCurrentValue, Time tUpdateTime);
void UpdateModelOnDuplica(const DS& oCurrentValue);
};
When you implement custom dead reckoning, you can either use one of the provided model policies or write your own. A model policy describes how many dead-reckoning models the duplication master maintains, and how they are associated with the duplicas. For example, the duplication master may maintain only one model for all of its duplicas so that all duplicas are updated in exactly the same way, or it may maintain several models so that different duplicas can be updated differently (such as when implementing distance-based dead reckoning).
Two model policies are provided: SharedModelPolicy and PerStationModelPolicy. Under the SharedModelPolicy, the duplicated object master creates only one local instance of the dead-reckoning model class. This instance is then used to decide which duplicas, if any, need updating. Because the same model instance is used for each duplica, if the model is no longer accurate enough and needs to be updated, a message is sent to all duplicas. Under the PerStationModelPolicy, the duplication master locally instantiates an instance of the dead-reckoning model class for each of its duplicas, meaning that the duplication master maintains as many model instances as there are duplicas. Because a model instance is maintained for each duplica and is used to determine whether that duplica requires an update, each duplica can be updated according to a different scheme, such as the distance of the object from the player's avatar. Figure 17.1 schematically represents the two provided models.
If neither of the provided model policies suits your needs, you can easily implement your own policy. To do so, write your own model policy class that implements the following three methods. For more information about implementing the member functions, and some examples, see the DeadReckoningDSDecl class and the provided SharedModelPolicy and PerStationModelPolicy policies.
Code 17.12 Implementing Your Own Model Policy
template<class MODEL>
class MyModelPolicy {
public:
// Returns a Duplication master’s local instance of one
// of its duplica’s dead reckoning models.
MODEL* GetRemoteDeadReckoningModel(const Station* pTargetStation);
// Returns a duplica’s local instance of the dead
// reckoning model.
MODEL* GetLocalDeadReckoningModel();
// Removes a Duplication master’s local instance of one of
// its duplica’s dead reckoning models. This method only
// needs to be defined if the duplication master maintains
// more than one model.
void ClearRemoteDeadReckoningModel(const Station* pTargetStation);
};
As with NetZ's standard dead-reckoning scheme, to ensure that the duplica's datasets are correctly updated, you must call the DuplicatedObject::Update function to update the dataset variables on the duplicated object master, and then call the DuplicatedObject::Refresh function on all the related duplicas. For example, assume that MainLoop is called periodically on every game object.
Code 17.13 Updating and Refreshing Datasets
void Sphere::MainLoop() {
if (IsADuplicationMaster()) {
// Compute the physics and then update the
// object’s position.
Update(m_dsPos);
} else {
// Refresh the position.
Refresh(m_dsPos);
}
// Display the object at the correct position.
}
When calling the DuplicatedObject::Update or DuplicatedObject::Refresh function with a custom dead-reckoning implementation, the methods that you implemented in your custom dead-reckoning model and model policy classes are called according to the following schemes.
When DuplicatedObject::Update is called:
- GetRemoteDeadReckoningModel of the defined model policy class is called to retrieve the local instance of the remote duplica's dead-reckoning model.
  - If no model is found, no update is sent.
  - If a model is retrieved that was already retrieved in this update loop, the result of the previous IsAccurateEnough call is reused, without calling the member function again.
- The IsAccurateEnough member function of the defined dead-reckoning model class is called to compare the duplication master's dataset values with the model it has for the duplica. If this method returns true, no update is sent.
- If the IsAccurateEnough function returns false, an update is sent to the duplicas that are valid for the time passed with the DuplicatedObject::Update function.
When an update is sent:
- GetRemoteDeadReckoningModel of the defined model policy class is called to retrieve the local instance of the remote duplica's dead-reckoning model.
- The UpdateModelOnMaster method of the defined dead-reckoning model class is called to update the duplication master's local instance of the duplica's dead-reckoning model.
When the duplica receives an update from its master:
- GetLocalDeadReckoningModel of the defined model policy class is called to retrieve the duplica's local instance of the dead-reckoning model.
- UpdateModelOnDuplica of the defined dead-reckoning model class is called to update the duplica's local instance of the dead-reckoning model.
When the DuplicatedObject::Refresh function is called:
- GetLocalDeadReckoningModel of the defined model policy class is called to retrieve the duplica's local instance of the dead-reckoning model.
- The ComputeValue function of the defined dead-reckoning model class is called to compute the duplica's dataset values according to the applicable dead-reckoning model at the time passed with the DuplicatedObject::Refresh function.