This file defines the terminology used within the CreativeStudio manual. It also explains how to configure the settings that correspond to each term.
An animation curve is a curve that describes how a value changes over time in order to control an animation.
The Curve Editor panel in CreativeStudio allows you to edit animation curves that were created in 3DCG tools, as well as to create new ones.
On the Curve Editor panel, the horizontal axis is time and the vertical axis is the value being animated.
Values between neighboring keys are interpolated, which makes it easy to edit animations.
The figure below shows a case in which keys have been set at frames 0, 20, 60, and 80 using the Curve Editor panel in CreativeStudio.
The time scale for animation curves uses frames as units. Frames for which keys have been set are known as keyframes.
Animation curve baking is an optimization process performed when outputting an animation created with a 3DCG tool as game data.
When an animation curve is created using a method such as Hermite curve interpolation, the calculation required to interpolate between two neighboring keys incurs some processing overhead. By "baking" the animation curves, programmers and artists can keep this calculation overhead in check.
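As an illustration, here is a minimal sketch of the idea: baking samples the curve once per frame into a flat array, so playback becomes a constant-time array read instead of a per-frame interpolation. The `evalCurve` parameter stands in for whatever interpolation the curve uses and is an assumption, not CreativeStudio's API.

```cpp
#include <functional>
#include <vector>

// Bake an animation curve by sampling it once per frame. evalCurve stands in
// for whatever interpolation the curve uses (Hermite, linear, and so on).
// After baking, playback is a constant-time array read instead of a per-frame
// interpolation.
std::vector<float> bakeCurve(const std::function<float(float)>& evalCurve,
                             int frameCount) {
    std::vector<float> baked(frameCount);
    for (int f = 0; f < frameCount; ++f) {
        baked[f] = evalCurve(static_cast<float>(f)); // pay the cost once, here
    }
    return baked; // trades memory for cheaper lookups at runtime
}
```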
The alpha value is information stored in a fragment that indicates that fragment's transparency.
The values range from 0 (fully transparent) to 1 (fully opaque).
The alpha test compares the alpha values of fragments against a reference value to determine which fragments to use.
If the test results do not satisfy conditions, the fragment is removed as a rendering target.
Fragments that meet the conditions pass the alpha test and will be used; their status is shown as passed.
Fragments that have passed this test are handed on to the stencil test process.
This is used to express translucency in textures and other images that use alpha values.
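As an illustration, a minimal sketch of the alpha test's decision logic, assuming a configurable comparison function; the enum and names below are illustrative, not CreativeStudio's API.

```cpp
enum class CompareFunc {
    Never, Less, Equal, LessEqual, Greater, NotEqual, GreaterEqual, Always
};

// Returns true if the fragment passes the alpha test and is kept.
bool alphaTest(float fragmentAlpha, float referenceValue, CompareFunc func) {
    switch (func) {
        case CompareFunc::Never:        return false;
        case CompareFunc::Less:         return fragmentAlpha <  referenceValue;
        case CompareFunc::Equal:        return fragmentAlpha == referenceValue;
        case CompareFunc::LessEqual:    return fragmentAlpha <= referenceValue;
        case CompareFunc::Greater:      return fragmentAlpha >  referenceValue;
        case CompareFunc::NotEqual:     return fragmentAlpha != referenceValue;
        case CompareFunc::GreaterEqual: return fragmentAlpha >= referenceValue;
        case CompareFunc::Always:       return true;
    }
    return false; // unreachable
}
```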
Antialiasing is a method for smoothing outlines when drawing polygons.
You can apply antialiasing by rendering to a framebuffer that is larger than the display buffer and reducing the image when sending it from the framebuffer to the display buffer.
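As a sketch of the reduction step, assume the framebuffer is rendered at twice the display resolution; each display pixel then averages the corresponding 2×2 block (a simple box filter). A single channel is shown for brevity; a real implementation would filter each RGBA component.

```cpp
#include <cstdint>
#include <vector>

// Reduce a framebuffer rendered at twice the display resolution: each display
// pixel averages the corresponding 2x2 block, smoothing polygon outlines.
void downsample2x(const std::vector<uint8_t>& src, int srcW, int srcH,
                  std::vector<uint8_t>& dst) {
    int dstW = srcW / 2, dstH = srcH / 2;
    dst.assign(static_cast<size_t>(dstW) * dstH, 0);
    for (int y = 0; y < dstH; ++y) {
        for (int x = 0; x < dstW; ++x) {
            int sum = src[(2 * y) * srcW + 2 * x]
                    + src[(2 * y) * srcW + 2 * x + 1]
                    + src[(2 * y + 1) * srcW + 2 * x]
                    + src[(2 * y + 1) * srcW + 2 * x + 1];
            dst[y * dstW + x] = static_cast<uint8_t>(sum / 4); // box filter
        }
    }
}
```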
When light traveling through our environment strikes a surface material, the colors other than that of the material are absorbed, and only light of the unabsorbed color is reflected. That reflected light then shines on the next material it strikes.
For example, light reflected from a red object becomes red light that illuminates the next object it strikes in red.
To handle all of the light traveling through the environment, we must vary the overall hue and brightness of our environment in a complex manner.
In the CG industry, this is called mutual reflection, and the light of the entire scene expressed due to mutual reflection is called ambient light.
Even faint colors shining on dark areas where there is no light source are subject to mutual reflection.
Because the calculation of mutual reflection can be extremely intensive, CreativeStudio, like many DCC tools, performs light reflection calculations only once.
As such, there is no ambient light defined by mutual reflection in a scene, and dark locations where light does not strike fall into blackness.
To prevent this, ambient light is used to raise the level of color in black areas of a scene where light does not strike.
Naturally, the ambient color brightens the entire scene. As such, ambient is one type of environmental light.
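As a sketch of where the ambient term enters a typical lighting calculation (a generic Lambert diffuse model, not CreativeStudio's exact shading code), the ambient color is added so that surfaces the light does not strike never fall to pure black.

```cpp
#include <algorithm>

struct Color { float r, g, b; };
struct Vec3  { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Generic diffuse lighting with an ambient floor. normal and toLight are
// assumed normalized. Where the diffuse term falls to zero, the ambient
// term keeps the surface from going fully black.
Color shade(const Color& ambient, const Color& lightColor,
            const Color& material, const Vec3& normal, const Vec3& toLight) {
    float diffuse = std::max(0.0f, dot(normal, toLight));
    return { material.r * (ambient.r + lightColor.r * diffuse),
             material.g * (ambient.g + lightColor.g * diffuse),
             material.b * (ambient.b + lightColor.b * diffuse) };
}
```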
The term window coordinate system refers to a coordinate system used when projecting the area viewable by the camera on the screen. Values for this coordinate system range from 0 to 1 in the horizontal direction for the X-coordinate and in the vertical direction for the Y-coordinate. The Z-coordinate is used when handling depth values.
The figure below is a conceptual image of the coordinate space of a window coordinate system.
Emitters are sources that generate and emit particles. They can be created in the particle emitter panel, which lets you set their shape, size, direction, and position. These emitter settings only have an effect while particles are being emitted.
The emitter time is the amount of time since the emitter began emitting particles.
This has no effect on any animation of particles emitted by the emitter.
Operands are a feature of the texture combiner used to specify which components of the RGBA input sources from the various channels are fed into the blend. In the default setting, RGB is applied to the color and A to the alpha.
Hierarchical structures refer to structures in which a single model at a certain level has multiple lower-level models branching out from it. Conversion to hierarchical structures allows transform values (scale, rotation, and translation) to be inherited from the upper-level object by the lower-level objects.
The write test compares generated fragment information against reference values to determine whether to actually write it to the framebuffer. This is carried out for each fragment. You specify in CreativeStudio the standard by which fragments are judged. Fragments that do not meet the standard are excluded from rendering.
The table below lists the tests by which a fragment is judged during the write test.
Conditions | Description |
---|---|
Alpha test | An alpha test compares a fragment's alpha value with the reference value set for the alpha test. |
Stencil test | A stencil test compares the value set for the fragment stencil test with the value in the stencil buffer to write to. If the fragment passes, the value in the stencil buffer is overwritten with the fragment value. |
Depth test | A depth test compares the per-fragment depth value indicating the distance from the camera with the value in the depth buffer to write to. If the fragment passes, the value in the depth buffer is overwritten with the fragment value. |
Write tests are carried out in the order alpha test → stencil test → depth test.
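Conceptually, the sequence looks like the sketch below; the simple fixed comparisons stand in for the configurable comparison functions described in the table.

```cpp
struct Fragment { float alpha; int stencil; float depth; };

// Write-test sequence for one fragment. Simple threshold comparisons stand
// in for the configurable comparison functions.
bool writeTest(const Fragment& f, float alphaRef, int stencilRef,
               float storedDepth) {
    if (f.alpha < alphaRef)      return false; // alpha test: fragment removed
    if (f.stencil != stencilRef) return false; // stencil test (buffer may still be updated)
    if (f.depth > storedDepth)   return false; // depth test: keep nearer fragments
    return true;                               // passed: handed on to blending
}
```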
Camera cube coordinates are texture coordinates generated by texel operations based on the polygon model's normal data and the camera coordinate system.
This mapping method makes it possible to express reflections of the surrounding scenery as the viewpoint changes, by referencing a single texture image containing image information for each of the six directions surrounding a polygon model.
The term camera coordinate system refers to a coordinate system that takes a camera position as its origin. Values increase going right for the X-coordinate, going up for the Y-coordinate, and going away from the look-at point in the Z-direction.
The figure below is a conceptual image of the coordinate axes of the camera coordinate system.
There are four keyframe formats that can be handled in CreativeStudio:
Constant format only saves a single value for each key. It doesn't perform any special calculation.
Hermite format saves a set of key information consisting of a frame, a value, and slope. The value at a given frame is calculated by using the Hermite formula to interpolate between the values of the two neighboring keys.
Linear format saves a set of key information consisting of a frame and a value. The value at a given frame is calculated by linearly interpolating between the values of the two neighboring keys.
Step format saves a set of key information consisting of a frame and a value. The value at a given frame is the value of the most recent preceding key; it holds constant until the next key.
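As an illustration of the Hermite and linear formats, here is a sketch of evaluating a value between two neighboring keys; the standard cubic Hermite basis functions are assumed, with slopes given as tangent values (rise over a run of one frame).

```cpp
struct Key { float frame, value, slope; }; // slope: rise over a run of 1 frame

// Hermite format: cubic Hermite interpolation between two neighboring keys.
float evalHermite(const Key& k0, const Key& k1, float frame) {
    float d  = k1.frame - k0.frame;
    float t  = (frame - k0.frame) / d; // normalized position in [0, 1]
    float t2 = t * t, t3 = t2 * t;
    return ( 2 * t3 - 3 * t2 + 1) * k0.value      // start value basis
         + (     t3 - 2 * t2 + t) * d * k0.slope  // start tangent basis
         + (-2 * t3 + 3 * t2    ) * k1.value      // end value basis
         + (     t3 -     t2    ) * d * k1.slope; // end tangent basis
}

// Linear format: straight-line interpolation between the same two keys.
float evalLinear(const Key& k0, const Key& k1, float frame) {
    float t = (frame - k0.frame) / (k1.frame - k0.frame);
    return k0.value + (k1.value - k0.value) * t;
}
```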
Graphics is a general term for CreativeStudio features that handle data that has been exported from 3D graphics tools. This term is used to distinguish these features from those that relate to particle effects.
The GPU (Graphics Processing Unit) is the device where graphics are processed for screen rendering. It follows screen-rendering commands issued from the CPU.
The term clipping refers to a polygon clipping process performed on polygons that straddle the area viewable by the camera (called the "clipping volume") in the clip coordinate system. This is set for both perspective projection and orthogonal projection cameras.
The term clip coordinate system refers to a coordinate system used to exclude polygons outside the area viewable by the camera as rendering targets.
The phrase "area viewable from the camera" is defined as a coordinate space bounded by six clip planes: the near and far clip planes, the left and right clip planes, and the top and bottom clip planes. In addition to X, Y, and Z, the clip coordinate system includes a W-coordinate used in matrix calculations when calculating coordinate conversions.
The figure below is a conceptual image of the coordinate space of a clip coordinate system.
Constant color refers to the fixed color and alpha values that can be used for the combiner computational expressions of the texture combiner. Up to six Constant Colors can be set (constant 0 to constant 5), and one can be selected as the input source for each step in the texture combiner. You can also animate these using animation curves.
The Blend Color used in the Blend process is set separately.
Lookup tables represent curves for which inputs and outputs have been set. Output values are preset for 256 input values. The tables are then used in calculations such as lighting and fog.
The figure below shows how a lookup table works.
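As a complement to the figure, a minimal sketch of the mechanism, assuming inputs are normalized to the range [0, 1]:

```cpp
#include <algorithm>

// A 256-entry lookup table: outputs are precomputed for every input, and
// runtime evaluation is a cheap table index instead of recomputing a curve.
struct LookupTable {
    float entries[256];

    template <typename Curve>
    void build(Curve curve) {
        for (int i = 0; i < 256; ++i)
            entries[i] = curve(i / 255.0f); // precompute output per input
    }

    float sample(float x) const { // x assumed in [0, 1]
        int i = static_cast<int>(x * 255.0f + 0.5f);
        return entries[std::min(std::max(i, 0), 255)];
    }
};
```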
The Shader is a program that calculates the appearance of objects based on defined information about polygon models or light sources.
Shaders for which the content of the operation can be programmed are referred to as programmable shaders.
A shape is a group of polygons that can be previewed as a single form in CreativeStudio.
A single material can be configured for each shape.
This term refers to operations that transform the coordinates of the polygons that make up an object during rendering.
Geometry shaders are operations that can rebuild the results of vertex operations and generate new shapes. The vertex attributes that make up the particle format in CreativeStudio are generated using geometry shaders.
The term projection refers to the different ways of handling the coordinate system for what the camera views.
There are two projection types, the perspective projection camera (Persp) and the orthogonal projection camera (Ortho), and different items can be set for each.
Initial velocity is the velocity instantaneously imparted when a particle is emitted from an emitter. If no external force, such as gravity, acts on it, the particle maintains this initial velocity as it moves.
The stencil test compares the value in the stencil buffer with a stencil reference value to determine which fragments to use.
If the test results do not satisfy conditions, the fragment is removed as a rendering target.
Fragments that meet the conditions pass the stencil test and will be used; their status is shown as passed.
Fragments that have passed through are handed on to the depth test process.
In addition, the stencil buffer can be updated using the value for a fragment that has passed through.
The stencil test is primarily used by the mask process, which limits the region to be rendered.
The stencil test comparison function is a formula used to compare the reference value with the value in the stencil buffer.
With CreativeStudio, you can select from eight different comparison functions.
The stencil test reference value is a value that can be set using CreativeStudio for comparison with the value in the stencil buffer.
The stencil buffer is a memory region that stores a mask, written per fragment, that limits the region to be rendered.
The term 3D graphics tool refers to applications used by designers to create assets for CreativeStudio (with the exception of textures).
Special plug-ins can convert data that was created in these tools into data that can be used within CreativeStudio.
Slope refers to the two slopes of the curve on the left and right of the keyframe.
You can create a smooth animation by adjusting the degree of interpolation between keys (the shape of the curve). The slope input value specifies the vertical length for a horizontal length of 1. This is called the tangent value.
The term orthogonal projection camera refers to a camera that does not depend on depth information or perspective effects based on the camera angle. These are also called ortho cameras. Although coordinates are converted to window coordinates just as with a perspective projection camera, horizontal and vertical values are used unchanged.
Separate blending is the process by which the color and alpha values for fragments that have passed the write test (the source color and source alpha) are blended with the color and alpha values at the same locations in the framebuffer being written to (the destination color and destination alpha). The source and destination color components are multiplied by coefficients (scaled), and the results of the blending equation are written back to the framebuffer.
The term source is used broadly within CreativeStudio to refer to basic materials.
Texture tiling refers to an operation performed on the edges of images during scaling, rotation, or translation of a texture's U and V coordinates in the texture coordinate system. CreativeStudio provides four tiling methods.
These tiling methods can be specified independently for the texture's horizontal and vertical axes.
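The four methods are not enumerated in this entry, so as a general illustration, the sketch below shows three common wrap modes for a U-coordinate outside the 0.0 to 1.0 range; their exact correspondence to CreativeStudio's tiling methods is an assumption.

```cpp
#include <cmath>

// Repeat: tile the image endlessly by keeping only the fractional part.
float wrapRepeat(float u) { return u - std::floor(u); }

// Mirrored repeat: tile, flipping the image in every other copy.
float wrapMirror(float u) {
    float x = u - 2.0f * std::floor(u * 0.5f); // position within a 2-wide period
    return 1.0f - std::fabs(x - 1.0f);
}

// Clamp: stretch the edge texels outward beyond the 0.0 to 1.0 range.
float wrapClamp(float u) { return u < 0.0f ? 0.0f : (u > 1.0f ? 1.0f : u); }
```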
The term W buffer refers to the memory buffer in which W values used in the clip coordinate system are stored during the rasterizing process.
Depth tests using the value in the W buffer are possible by specifying a value other than 0.0 for W scale.
The formula for finding depth values in window coordinates by using the W buffer is given below.
Window coordinate system depth value = - (Z value in clip coordinates ÷ Value specified for the far clip)
The figure below shows the relationship between the fragment depth value before perspective projection conversion and the depth value stored in the depth buffer, both with and without the W buffer.
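Expressed as code, the formula above becomes the following (parameter names are chosen for illustration):

```cpp
// Window-coordinate depth value when the W buffer is in use, per the formula
// above. zClip is the Z value in clip coordinates; farClip is the value
// specified for the far clip.
float wBufferDepth(float zClip, float farClip) {
    return -(zClip / farClip);
}
```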
Child particles are particle sets that can be called [from other particle sets] as emitters of particles. Values for the size, velocity, and other such properties can be transmitted from the emitting particle (i.e., the parent particle) to the child particle. (This feature has not been implemented.)
The CPU performs the calculations necessary for the handling of polygons and particles. It issues commands to the GPU relating to screen rendering. The CTR has two CPUs: one is used for the system and the other for game applications.
This term refers to text-format data files used to store information about assets such as models, textures, and animations. These files use the XML format. They can be output from 3DCG tools using the plug-ins provided by NintendoWare.
The intermediate file types are now undergoing adjustments; the number of file types may increase or decrease in the future.
The intermediate file types that are currently used by CreativeStudio are listed in the table below.
Extension | Content |
---|---|
.cmdl | Stores model data, including polygons and materials. |
.ctex | Stores texture image information. |
.cmata | Stores material animation data. |
.cptl | Stores particle and material data. |
.cskla | Stores skeletal animation data. |
.csdr | Stores user shader (programmable vertex shader) data. |
.clgt | Stores light information. |
.ccam | Stores camera information. |
.cenv | Stores scene environment data. |
.clts | Stores data in lookup tables. |
.cact | Stores particle-related data. |
.cres | Stores data for all intermediate files that are loaded when the Merge and save files command is selected. |
Vertex operations apply lighting and shading to individual vertices based on information and vertex attributes from the vertex buffer.
The process is also called the vertex shader.
Vertex attributes refer to types of data handled for each vertex. Polygon models are made up of vertex attributes.
The table below lists some of the vertex attributes that make up a model shape.
Types of Vertex Attributes | Description |
---|---|
Vertex coordinates | Represents the coordinate information used to indicate the position of each vertex making up a polygon surface. A polygon surface is configured from at least 3 vertices. |
Normal vectors | Represents information used to indicate the orientation of each vertex making up a polygon surface. This can be used for the shading process performed by lighting. |
Texture coordinates | Represents coordinate information used to align the position of the texture image with each vertex making up the polygon surface. Up to three texture coordinates can be used with CreativeStudio. |
Vertex color | Represents information that indicates vertex color. You can directly specify a color using a 3D graphics tool or store the color resulting from lighting. |
The vertex buffer is a region of memory used to store vertex attribute information included in a polygon model.
It is used by the vertex shader and the geometry shader.
Polygon models can be created using the 3D graphics tools Maya or Softimage.
The display buffer is the region of memory where the image output from the framebuffer is stored after the image data have been converted to an actual picture for display on the screen.
It can be secured in VRAM and in device memory.
A texture is an image applied to the surface of an object to express a certain texture, or "feel." The act of applying a texture to the surface of an object is known as texture mapping.
The texture combiner is the fragment shader process by which the results of the vertex shader, fragment lighting and texel operations serve as the input values that get blended to output the final color and alpha values. The input values used by the texture combiner are called input sources.
The figure below depicts the process flow from the input of the fragment lighting result, texture color, vertex color and constant color serving as input sources, to the result of blending by the texture combiner that will be passed to the blending process.
Texture Combiner Computational Expressions are the expressions used for blending the color and alpha components as determined by the input source operand settings. The color and alpha can each be selected from among 10 different expressions.
The texture combiner's number of steps refers to the number of times computations are performed to blend the input sources. The texture combiner can use up to six steps.
The texture combiner buffer is the memory region for temporary storage of the value output from each step by the texture combiner. Values output from steps that use the buffer can be used as input sources for subsequent steps.
The buffer cannot be used in the texture combiner's last step.
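Putting these pieces together, here is a sketch of the stepping scheme; the two-input modulate expression stands in for whichever of the ten computational expressions is selected, and input-source selection is simplified.

```cpp
struct Rgba { float r, g, b, a; };

// One computational expression as a stand-in: component-wise multiplication.
Rgba modulate(const Rgba& x, const Rgba& y) {
    return { x.r * y.r, x.g * y.g, x.b * y.b, x.a * y.a };
}

// Run up to six combiner steps. Each step blends the previous step's output
// with one input source (texture color, vertex color, constant color, or a
// buffered earlier result).
Rgba runCombiner(const Rgba inputs[], int stepCount /* <= 6 */) {
    Rgba result = inputs[0];                  // step 0 seeds the chain
    for (int i = 1; i < stepCount; ++i)
        result = modulate(result, inputs[i]); // expression selected per step
    return result;                            // handed to the blend process
}
```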
Texture coordinates are the coordinates used when applying texture images to a polygon model. This is called "texture mapping." With texture coordinates, the bottom left corner of the texture image is taken as the origin. The positive direction for the U-coordinate is toward the right, and the positive direction for the V-coordinate is toward the top. Regardless of size, the texture is applied within a range of 0.0 to 1.0 in both the U direction and the V direction.
The following figure shows the texture coordinate system with the bottom left of the texture image as the origin.
A texture filter is an operation that determines the appearance of textures applied to a polygonal model based either on the distance of the model from the viewpoint or the skew relative to the view angle. You can specify one of two types of texture filters to use when a texture is displayed on the screen in either enlarged or reduced fashion.
The table below lists the filtering methods that can be selected in CreativeStudio.
Filtering Method | Description |
---|---|
Point Sampling (Near) | With this method, only the single corresponding texel is referenced during pixel color determination. This causes each texel to be displayed distinctly, giving a blocky appearance. |
Bilinear Filtering (Linear) | With this method, the neighboring texel colors are interpolated during pixel color determination. This causes the texture image to be displayed smoothly. |
If a texture uses mipmaps and is displayed on the screen in enlarged/reduced fashion, you can also specify texture filters that span multiple mipmap levels.
The table below lists the filtering methods that can be selected in CreativeStudio when mipmaps are used.
Filtering Method | Description |
---|---|
Nearest - mipmap - nearest | Performs point sampling within the texture image, with no interpolation between mipmap levels. |
Nearest - mipmap - linear | Performs point sampling on the texture image, with interpolation between mipmap levels. |
Linear - mipmap - nearest | Performs bilinear filtering on the texture image, with no interpolation between mipmap levels. |
Linear - mipmap - linear | Performs bilinear filtering on the texture image, with interpolation between mipmap levels (also known as trilinear filtering). |
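As a sketch of the linear-mipmap-linear case, the two nearest levels are each sampled with bilinear filtering and the results are blended by the fractional LOD; the per-level sampler is supplied by the caller here because its details are beyond this entry.

```cpp
#include <cmath>
#include <functional>

// Linear-mipmap-linear: sample the two nearest mipmap levels with bilinear
// filtering, then blend by the fractional part of the level of detail.
// The caller supplies the per-level bilinear sampler; lod is assumed >= 0.
float sampleTrilinear(const std::function<float(int, float, float)>& bilinear,
                      float lod, float u, float v) {
    int   lo = static_cast<int>(std::floor(lod));
    float t  = lod - lo;                 // fractional LOD
    float a  = bilinear(lo, u, v);
    float b  = bilinear(lo + 1, u, v);
    return a + (b - a) * t;              // interpolate between the two levels
}
```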
Texture memory refers to a memory area for storing lookup table data that is used by fragment shaders as well as image data that is applied (mapped) to polygonal models.
A texel is the fundamental unit representing a single point of a texture image, in contrast to a pixel, which is a single point of the rendered screen image.
In CreativeStudio, the texel operation relates texture coordinates to texels. Texel is short for "texture pixel."
A texel operation computes the correspondences between an image that was input into texture memory and the texture coordinates that were input from a vertex shader.
Interpolation that smooths out the appearance of texture images is also done during this process.
Procedural textures (a way of generating textures automatically) are also generated by texel operations.
The destination is the output location (buffer) where colors are ultimately written after fragment operations have completed.
The following three types of data are written to destinations.
Name | Content |
---|---|
Destination color | A color (including the alpha value) that has been rendered to the destination. |
Destination RGB | A color (excluding the alpha value) that has been rendered to the destination. |
Destination alpha | An alpha value that has been rendered to the destination. |
The term depth value refers to depth information from the camera stored for a fragment in the depth buffer. This is also called depth information.
Small values indicate that the fragment is closer to the viewpoint, while large values indicate that the fragment is farther away.
The term depth test refers to the process used to determine fragments to be used based on the result of comparing the depth value of the fragment with the depth value of the depth buffer.
If the test results do not satisfy conditions, the fragment is removed as a rendering target.
If the test results satisfy the conditions, the fragment is said to have passed. Fragments that pass undergo the blend process and are written to the framebuffer. The depth buffer can also be updated with the depth values of passing fragments by enabling the depth mask. This is typically used for hidden surface removal, in which hidden polygons are excluded from processing.
The depth test comparison function is a formula used to compare the depth buffer value with the depth value of each fragment.
With CreativeStudio, you can select from eight different comparison functions.
The depth buffer is a memory region for storing depth values.
The term perspective projection camera refers to a camera where perspective is applied based on the camera angle. When perspective transformation is used, objects closer to the camera look larger, while objects farther from the camera look smaller.
The term perspective projection conversion refers to a rasterization step performed during triangle setup in which per-fragment values are derived from vertex data given in clip coordinates. Because horizontal and vertical values divided by depth are used when displaying objects on the screen, this coordinate conversion causes polygons in the front to appear bigger and polygons in the back to appear smaller. However, when using per-fragment values this way, polygon surfaces may sometimes appear to warp because depth values are not uniform from front to back.
The following figure is a conceptual image of perspective projection conversion.
Triangle setup is the process of assembling polygons and converting them to optimal data. Clipping can be used to determine which polygons will be rendered within the camera's viewable region and to eliminate everything outside it.
A power of two is any of the integer powers of the number two (in other words, two multiplied by itself a certain number of times). There are eight powers of two between the minimum and maximum texture sizes (8 and 1024): 8, 16, 32, 64, 128, 256, 512, and 1024.
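A common bit trick checks this property; the sketch below validates a texture size against the stated 8 to 1024 range.

```cpp
// A power of two has exactly one bit set, so n & (n - 1) clears it to zero.
bool isPowerOfTwo(unsigned n) {
    return n != 0 && (n & (n - 1)) == 0;
}

// Valid texture sizes fall on powers of two between 8 and 1024 inclusive.
bool isValidTextureSize(unsigned n) {
    return n >= 8 && n <= 1024 && isPowerOfTwo(n);
}
```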
The term particle refers to a system that can generate multiple "shapes" and control their shape, color, and movement as a group. Games use particles for effects like fire and water.
Particle Effect is a feature that utilizes the mechanisms of particles to create screen effects.
With the exception of texture images, all of this content can be created with CreativeStudio.
A particle effect is comprised of three types of content: emitters, models, and particle sets.
CreativeStudio can configure the following four particle shape types.
Setting | Description |
---|---|
Screen-Aligned Billboard | Rotates to display with the z-axis parallel with the camera lens axis. Particles at the edge of the angle of view can appear distorted. |
World-Oriented Billboard | Rotates to display with the z-axis parallel with the camera lens axis. Particles at the edge of the angle of view can appear distorted. |
Y-Axis Billboard | Rotates only around the y-axis to display with the z-axis parallel with the camera lens axis. Particles at the edge of the angle of view can appear distorted. |
Polygon Plane (XY) | A rectangular polygon model. |
The particle lifespan is the amount of time between the emission of a particle from an emitter and its expiration.
It can be used to animate individual particles.
A particle set is a group of particles that all have the same shape.
The particle emission time is the period of time during which an emitter is emitting particles.
A buffer is a memory region where information can be temporarily stored and retrieved while processing is taking place.
Since different kinds of information are handled by different buffers, the buffers are named to reflect which kind of information they handle.
A bit is the minimum unit of data that is handled.
VRAM is memory specialized for graphics. It can be used by the GPU.
VRAM operates faster than main memory, so it is better suited for storing frequently used vertex data and texture data.
Fog is the name of a feature that is used to represent atmospheric effects like mist and steam. This feature changes the color of objects based on their distance from the screen. That distance (the depth value) is calculated on a per-fragment basis.
Lookup tables can be used to determine the degree of attenuation over distance.
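Combining the two ideas, here is a sketch of per-fragment fog driven by a lookup table for the attenuation curve; the table layout mirrors the 256-entry tables described under Lookup tables, and all names are illustrative.

```cpp
struct Rgb { float r, g, b; };

// Per-fragment fog: the lookup table maps normalized depth to a fog factor
// that blends the object color toward the fog color with distance.
Rgb applyFog(const Rgb& objectColor, const Rgb& fogColor,
             float normalizedDepth,       // 0 = near, 1 = far
             const float fogTable[256]) { // precomputed attenuation curve
    int i = static_cast<int>(normalizedDepth * 255.0f);
    if (i < 0)   i = 0;
    if (i > 255) i = 255;
    float f = fogTable[i];                // 1 = no fog, 0 = fully fogged
    return { fogColor.r + (objectColor.r - fogColor.r) * f,
             fogColor.g + (objectColor.g - fogColor.g) * f,
             fogColor.b + (objectColor.b - fogColor.b) * f };
}
```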
A fragment is information from the vertex buffer that has been calculated in a three-dimensional coordinate system by a vertex shader or a geometry shader and then converted into screen coordinates in order to allow the information to be handled on a per-pixel basis.
Fragment lighting is a process that applies shadows on a per-fragment basis to information output from the vertex process. More complex lighting expressions are possible using lookup tables.
The figure below shows shading applied to fragments for interpolated normal vectors provided as input values from the vertex shader. The quality of the lighting result is not affected by the polygon density.
Specify Frame Count is a playback option that loops animations created in the Curve Editor panel based on the specified frames. You can configure the number of frames per loop cycle and the random offset.
Frame rate refers to how many times the screen is updated per second.
It is often expressed in units of fps (frames per second).
The framebuffer is a memory region for temporarily storing data output from the fragment shader.
Fragment values for which per-fragment blend processing has completed are stored here.
The table below gives an overview of the three buffers that collectively make up the framebuffer.
Buffer Type | Usage |
---|---|
Color Buffer | Stores fragment values that contain color and alpha values. |
Stencil Buffer | Stores information that limits the area being rendered. This is used during stencil tests. |
Depth Buffer | Stores information about the depth from the camera. This is used during depth tests. |
Write to Framebuffer is a process that blends the information for the fragments that will be used as determined by the write test with the information for fragments located at the same position in the framebuffer, and then writes the result of blending back to the framebuffer.
The table below lists blending methods when writing to the framebuffer.
Blending Types | Description |
---|---|
Blend processing | Performs blending by multiplying the color of the fragment to be newly written and the color already in the framebuffer by the configured coefficients, combining them with the blend formula, and then writing the result to the framebuffer. |
Logical operations | Performs logical operations on the color of the fragment to be newly written and the color already in the framebuffer, and then writes the result to the framebuffer. |
Amplitude (%) applies a random variation to a standard value at the time of particle generation, according to the formula below.
Standard Value ± (Random Value × Standard Value)
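In code form, the formula might look like the sketch below; treating the amplitude setting as a 0-to-1 scale on the random term is an assumption.

```cpp
#include <cstdlib>

// Random value in [-1, 1], standing in for the "Random Value" above.
float randSigned() {
    return 2.0f * (std::rand() / static_cast<float>(RAND_MAX)) - 1.0f;
}

// Vary a parameter around its standard value at particle-generation time.
// amplitude is the Amplitude (%) setting expressed as a 0-to-1 scale.
float applyAmplitude(float standardValue, float amplitude) {
    return standardValue + randSigned() * amplitude * standardValue;
}
```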
The blender is a mechanism that uses alpha values to blend the material color (or the color resulting from lighting calculations) with the pixel color in the output buffer.
Blend color refers to the fixed color and alpha values that can be used for source and destination coefficients.
The constant color setting is different from the one for the texture combiner.
Blending is the process by which the color and alpha values for fragments that have passed the write test (the source color and source alpha) are blended with the color and alpha values at the same locations in the framebuffer being written to (the destination color and destination alpha). The source and destination are multiplied by coefficients (i.e., scaled) and the results of the blending equation are written back to the framebuffer.
Blend source coefficient refers to the coefficient that the source color and source alpha are multiplied by. The results are then used as source elements.
Blend destination coefficient refers to the coefficient that the destination color and destination alpha are multiplied by. The results are then used as destination elements.
In CreativeStudio, you can configure separate coefficients for the color and alpha.
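Here is a sketch of the resulting blend equation with separate color and alpha coefficients; an additive blend operation is assumed, and the coefficient values are whatever the source and destination coefficient settings produce.

```cpp
struct Rgba { float r, g, b, a; };

// Blend a source fragment into the framebuffer, with separate coefficients
// for the color components and for the alpha component (additive blend).
Rgba blend(const Rgba& src, const Rgba& dst,
           float srcColorCoeff, float dstColorCoeff,
           float srcAlphaCoeff, float dstAlphaCoeff) {
    return { src.r * srcColorCoeff + dst.r * dstColorCoeff,
             src.g * srcColorCoeff + dst.g * dstColorCoeff,
             src.b * srcColorCoeff + dst.b * dstColorCoeff,
             src.a * srcAlphaCoeff + dst.a * dstAlphaCoeff };
}
```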
Procedural textures refers to textures that are generated through calculation.
Unlike traditional textures, which are drawn as bitmaps, they allow new materials to be created by changing the parameters that are set for the material.
Because procedural textures are first calculated and generated during rendering, they make it possible to keep memory usage in check.
Having both magnitude and direction, vectors are represented by arrow length and direction. With CreativeStudio, vectors are used in many different situations such as light calculations and particle speed control. They are also used when converting the color components of a texture image into X, Y and Z vector values.
The overwrite previous value method animates by overwriting the value from the previous frame with a specified value every frame. This can be convenient because the animation can be controlled directly with the animation curve.
The add to previous value method animates by adding a value to the value from the previous frame every frame. Use this to animate at a set speed.
Multi-UV is when one object is assigned multiple texture UV coordinates.
Mipmap is a feature where textures having lower resolution than the original texture are prepared ahead of time and the texture being used is changed based on distance. When a texture is enlarged or reduced for display, the texture with the optimum resolution is automatically applied based on the area being displayed on the screen.
To use mipmapping, you must first prepare a single texture that contains multiple versions of the main image with varying resolutions.
The levels of resolution of the texture images used in the mipmap are referred to as mipmap levels.
The figure below shows an example of a mipmapped texture being applied to a chessboard.
Mipmap LOD bias refers to a feature that adjusts the mipmap level that is applied by shifting the mipmap LOD level when a level with something other than the desired resolution would be referenced.
The term mipmap LOD level refers to the mipmap level that is applied to a mipmapped texture at runtime.
Main memory is a memory device where graphics-related data can be placed. It can also store data about the progress of the game. Both the CPU and the GPU can utilize main memory.
Moiré patterns are interference patterns that occur when regular patterns are reduced in size.
Moiré patterns occur when a texture with relatively high resolution is mapped to an on-screen area with relatively low resolution, causing the texture not to fit neatly within a single pixel. This causes the texture to appear to flicker.
The term model coordinate system refers to a coordinate system that takes one point of a polygon model located in world coordinate space as its origin.
Fit to Lifetime is a playback option that automatically links animations created on the Curve Editor panel with particle lifetimes.
Rasterization is a process that converts polygons composed of vertices into pixels handled on a per-fragment basis. This fragment information is then passed on to the fragment shader.
Randomness is a feature that provides variety to the results of a calculation by multiplying a parameter by a random number.
Random table is a playback option for getting a value at the time of particle emission from a random animation frame.
A loop animation is an animation in which the ending frame transitions smoothly into the starting frame. To create one, create an animation curve that will transition smoothly when looped.
The local coordinate system is the coordinate system where (0,0,0) serves as the origin for the center and vertices of the polygon model.
Logical operations are conducted on the fragment colors that have passed the write test (the source color) and the existing colors at the same locations in the framebuffer being written to (the destination color).
A logical operation takes the source and destination color bits, each treated as 1 (TRUE) or 0 (FALSE), and outputs a single value, which is then written back to the framebuffer.
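As a sketch on 8-bit channel values, with XOR, AND, and COPY as representative operations applied bitwise (each bit of source and destination acting as the TRUE/FALSE input):

```cpp
#include <cstdint>

// Representative logical operations applied bitwise to 8-bit channel values.
// Each bit of source and destination acts as the 1 (TRUE) / 0 (FALSE) input.
uint8_t logicXor(uint8_t src, uint8_t dst)      { return src ^ dst; }
uint8_t logicAnd(uint8_t src, uint8_t dst)      { return src & dst; }
uint8_t logicCopy(uint8_t src, uint8_t /*dst*/) { return src; }
```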
The term world coordinate system refers to a coordinate system where a Z-axis, representing depth, has been added to the traditional two-dimensional X-Y coordinate system. The location where X, Y and Z coordinates are all (0, 0, 0) is called the origin. Values increase going right for the X-coordinate, going up for the Y-coordinate, and going toward the viewer for the Z-coordinate.
The following diagram is a conceptual image of a model coordinate system inside a world coordinate system.
CreativeStudio is an abbreviation for NintendoWare for CTR CreativeStudio, a tool provided in the NintendoWare for CTR package for developing graphics.
NW4C is an abbreviation for NintendoWare for CTR, a set of tools and libraries that can be used to develop game software for the CTR system.