6. Stereoscopic Display

Of the two LCD screens on the 3DS system, the upper screen can display images that appear in stereoscopic 3D to the naked eye, with no need for any special equipment.

Two images must be rendered for stereoscopic display, one for the left eye and one for the right. The CTR-SDK includes the ULCD library, which takes the camera matrix created for normal display and from it calculates the camera matrices needed to render these two images. Always use the camera matrices calculated by the ULCD library when rendering. This additionally serves to standardize the method used for stereoscopic display among various applications.

This chapter describes the principles of stereoscopic display, how to implement stereoscopic display in an application, and how to coordinate your application with the stereo cameras.

For convenience, the terms defined below are used in regard to stereoscopic display.

Real Space

Stereoscopic display involves various real physical factors, such as the distance between the player and the surface of the upper screen. “Real space” refers to the space in the real world where the player and 3DS system exist, distinct from the space created within an application.

Virtual Space

In contrast to real space, “virtual space” refers to the space created within an application.

Base Camera

This refers to the camera configured or created by the application for a scene. The ULCD library calculates the camera matrices for the left and right eyes based on information from this base camera.

Base Plane

During stereoscopic display, the plane in virtual space that represents the 3DS system's LCD screen is called the “base plane.” The base plane is a cross-section of the camera's viewing volume.

3D Depth Slider

This is a slider switch on the 3DS system for adjusting the intensity of the stereoscopic effect. This slider allows different people to adjust the stereoscopic display for the optimal view, or to adjust the effect if feeling any fatigue after extended play. The ULCD library uses the input value from the slider to adjust the distance between the left and right cameras generated in virtual space.

Optimal Viewing Position

This refers to the viewing position from which the player can best perceive the effect of stereoscopic display.

6.1. Principles of Stereoscopic Display

Stereoscopic display is the rendering of an object while factoring in the difference in view between the left and right eyes (this difference in view is called “parallax”) to generate the illusion of distance between the object and the viewpoint. The ULCD library offers two different ways of calculating the camera matrices used in stereoscopic rendering: one that keeps the base camera settings unchanged as much as possible (application priority method), and another that automatically changes the base camera settings as needed (realism priority method).

This section describes the principles underlying each of these calculation methods.

6.1.1. Preconditions

The ULCD library assumes the following conditions when calculating the camera matrices used for stereoscopic display.

  • The distance Disteye between the player's eyes is 62 mm.
  • Let Diste2d be the distance between the center of the surface of the LCD screen and the player's eyes.
  • Depthltd is the limit distance at which depth is naturally perceivable by human eyes, according to existing research. Likewise, Pr_ltd is the limit parallax in real space (on the base plane) that is required to produce this depth. The CTR Guidelines define the maximum parallax values that are allowed, both for the effect of distance away from the player (depth into the screen), and the effect of distance toward the player (jumping out of the screen).
  • Let Lendisp be the length of the shorter side of the upper screen.

6.1.2. Distance Between the Left and Right Cameras and Calculating Each Camera's Viewing Volume

The following information is needed as input to calculate the camera matrices.

  • The view matrix Viewbase from which the parallax images for stereoscopic display are generated.
  • The projection matrix Projbase used as the base from which the left-eye and right-eye camera viewing volumes are generated.
  • The distance Dlevel in virtual space from the camera to a point that you want to position on the surface of the LCD.
  • A coefficient Dr for adjusting the level of stereoscopy (its value ranges from 0.0 through 1.0).

You can find the width and height of the base plane from Dlevel and the viewing volume parameters lbase, rbase, bbase, tbase, nbase, and fbase (left, right, bottom, top, near, far) that can be reverse calculated from the projection matrix Projbase.

Width of the base plane:

Wlevel = | rbase - lbase | × Dlevel / nbase

Height of the base plane:

Hlevel = | tbase - bbase | × Dlevel / nbase

From this height and width, and from the actual dimensions of the upper screen, we can find the coefficient for converting the real space scale to the virtual space scale.

Scaler2v = Hlevel / Lendisp
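As a minimal sketch, the three equations above can be computed directly from the frustum parameters. The structure and function names below are assumptions for illustration, not part of the ULCD library.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical helper: derives the base-plane dimensions (Wlevel, Hlevel)
// and the real-to-virtual scale coefficient (Scaler2v) from the base
// frustum parameters, following the equations above.
struct BasePlane { float w, h, scaleR2V; };

BasePlane CalcBasePlane(float lBase, float rBase, float bBase, float tBase,
                        float nBase, float dLevel, float lenDisp)
{
    BasePlane p;
    p.w = std::fabs(rBase - lBase) * dLevel / nBase;  // Wlevel
    p.h = std::fabs(tBase - bBase) * dLevel / nBase;  // Hlevel
    p.scaleR2V = p.h / lenDisp;                       // Scaler2v
    return p;
}
```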

6.1.2.1. Application Priority Calculation Method

This calculation method prioritizes maintaining the view from the base camera (the view originally expected by the application) when it generates the camera matrices for rendering the images for the left and right eyes.

The limit parallax in real space (the greatest parallax that still feels natural) Pr_ltd can be calculated from the preconditions.

Pr_ltd = Disteye × ( Depthltd / ( Diste2d + Depthltd ) )

Figure 6-1 shows the positions and relationships between the terms of these equations.

Figure 6-1. Limit Parallax in Real Space

You can use Scaler2v to convert Pr_ltd to the virtual-space limit parallax Pv_ltd.

Pv_ltd = Pr_ltd × Scaler2v

Calculate the distance I between the left and right cameras such that the base-plane parallax for displaying objects on the far clipping plane equals this limit parallax, without changing the depth position of the base camera. Use Dlevel in this calculation. If the far clipping plane is in an abnormal position (for example, closer than the base plane), this distance is 0.

The equation to find I is as follows:

I = Pv_ltd × ( fbase / ( fbase - Dlevel ) )

The following figure shows these relationships.

Figure 6-2. Distance Between the Left and Right Cameras in Virtual Space

In this method, parameters related to the viewing volume do not change because the position of the base camera is not changed.

Near clipping plane width: Wn = | rbase - lbase |

Near clipping plane height: Hn = | tbase - bbase |
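The application priority calculation can be summarized in a short sketch. The function names are assumptions; the formulas are the ones derived in this subsection.

```cpp
#include <cassert>
#include <cmath>

// Pr_ltd = Disteye × ( Depthltd / ( Diste2d + Depthltd ) )
float LimitParallaxReal(float distEye, float distE2D, float depthLtd)
{
    return distEye * (depthLtd / (distE2D + depthLtd));
}

// I = Pv_ltd × ( fbase / ( fbase - Dlevel ) ), where Pv_ltd = Pr_ltd × Scaler2v.
// Returns 0 if the far clipping plane is at or in front of the base plane.
float CameraSeparation(float prLtd, float scaleR2V, float fBase, float dLevel)
{
    if (fBase <= dLevel) {
        return 0.0f;  // Abnormal far plane position: no separation.
    }
    float pvLtd = prLtd * scaleR2V;            // Convert to virtual scale.
    return pvLtd * (fBase / (fBase - dLevel)); // I
}
```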

6.1.2.2. Realism Priority Calculation Method

This method generates camera matrices that render the left and right images so that objects on the base plane appear to look the same as they would in real space. In other words, with this method the cameras are adjusted for the most natural stereoscopic view, assuming that the player is viewing at a distance Diste2d from the CTR LCD.

The distance between the player's two eyes and the distance from their eyes to the CTR LCD in real space are reflected in the relationships between the left and right cameras and base plane in virtual space. The parameters set for the base camera are automatically changed as part of this process.

You can calculate Dlevel_new by converting the distance Diste2d between the CTR LCD surface and the player's eyes to the scale of the virtual space.

Dlevel_new = Diste2d × Scaler2v

Use this calculated distance together with Dlevel and Projbase to find the distances nnew and fnew to the clipping planes of the newly generated camera.

nnew = Dlevel_new - ( Dlevel - nbase ) , fnew = Dlevel_new + ( fbase - Dlevel )

If the newly calculated near clipping plane is behind the cameras, adjust the clipping planes as follows.

nnew = Dlevel_new × 0.01

Likewise, adjust as follows if the far clipping plane is closer than the near clipping plane.

fnew = nnew × 2.0

Because you are not moving the base plane, the base camera is moved forward or backward as needed to satisfy the values obtained so far.

Next find the dimensions of the near clipping plane (width, height), which have changed because it was moved. Also recalculate the near clipping plane's range (left, right, top, bottom).

Near clipping plane width:

Wn_new = Wlevel × nnew / Dlevel_new

Near clipping plane height:

Hn_new = Hlevel × nnew / Dlevel_new

Near clipping plane range:

lnew = tmp × lbase , rnew = tmp × rbase ,

tnew = tmp × tbase , bnew = tmp × bbase

where tmp = Hn_new / | tbase - bbase |

The following equation applies the distance between the player's eyes to the distance I between the left and right cameras in virtual space.

I = Disteye × Scaler2v

Figure 6-3 shows the positions and relationships between the terms of these equations.

Figure 6-3. Realism Priority Calculation Method
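The realism priority adjustments, including the two clamps for abnormal clipping planes, can be sketched as follows. The structure and names are assumptions; the formulas and clamp factors (0.01 and 2.0) are the ones given above.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical summary of the realism priority calculation (Section 6.1.2.2).
struct RealismCamera { float dLevelNew, nNew, fNew, separation; };

RealismCamera CalcRealism(float distE2D, float distEye, float scaleR2V,
                          float dLevel, float nBase, float fBase)
{
    RealismCamera c;
    c.dLevelNew = distE2D * scaleR2V;          // Dlevel_new
    c.nNew = c.dLevelNew - (dLevel - nBase);   // n_new
    c.fNew = c.dLevelNew + (fBase - dLevel);   // f_new
    if (c.nNew <= 0.0f) {
        c.nNew = c.dLevelNew * 0.01f;          // Near plane was behind the camera.
    }
    if (c.fNew <= c.nNew) {
        c.fNew = c.nNew * 2.0f;                // Far plane was closer than near.
    }
    c.separation = distEye * scaleR2V;         // I
    return c;
}
```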

6.1.3. Generating Projection Matrices

The adjustment coefficient Dr and the input value vol from the 3DS system's 3D depth slider are applied to the distance I between the left and right cameras, which you found earlier. There is no sense of depth at the lowest 3D depth slider value, at which the left and right cameras match the base camera.

I = I × vol × Dr  (0.0 ≤ vol ≤ 1.0)

The parameters of the base camera's viewing volume differ depending on which calculation method is used, but for convenience, the following uniform notations are used: the near clipping plane width is Wn, the near clipping plane height is Hn, the distance from the base camera to the base plane is Dlevel, and the parameters related to the viewing volume are l, r, t, b, n, and f for the left, right, top, bottom, near, and far planes, respectively. To get the respective viewing volumes for the left and right cameras, find the parallax Pn at the near clipping plane. Assume that the left and right cameras are equidistant from the base camera position.

Pn = I × 0.5 × (( Dlevel - n ) / Dlevel )

Based on this parallax, you can calculate the position of the near clipping plane in the respective left and right camera viewing volumes. The top, bottom, near, and far planes are common between the left and right cameras because these cameras have only been moved horizontally.

Position of the near clipping plane for the left camera: lleft = ( l - Pn ) + I × 0.5 , rleft = ( r - Pn ) + I × 0.5

Position of the near clipping plane for the right camera: lright = ( l + Pn ) - I × 0.5 , rright = ( r + Pn ) - I × 0.5

Calculate the projection matrices for the left and right cameras based on parameters that follow from these equations. The figure below shows the viewing volumes yielded thus far. To generate a stereoscopic image, an object must be placed in the region where the viewing volumes intersect (shown by the central portion of the figure).

Figure 6-4. Viewing Volumes for the Left and Right Cameras
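The near-plane offsets for the two eyes can be sketched as below. The function name and output parameters are assumptions; the formulas match the ones above.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical helper: computes the shifted near-plane extents for the
// left and right cameras (Section 6.1.3). l and r are the base camera's
// near-plane extents; I is the distance between the left and right cameras.
void EyeFrustums(float l, float r, float n, float dLevel, float I,
                 float* lLeft, float* rLeft, float* lRight, float* rRight)
{
    float pn = I * 0.5f * ((dLevel - n) / dLevel);  // Parallax at the near plane.
    *lLeft  = (l - pn) + I * 0.5f;
    *rLeft  = (r - pn) + I * 0.5f;
    *lRight = (l + pn) - I * 0.5f;
    *rRight = (r + pn) - I * 0.5f;
}
```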

6.1.4. Generating View Matrices

Generate the left and right camera view matrices using the distance I between the left and right cameras and the base camera information derived from Viewbase. This base camera information consists of its position Posbase, its direction Dirbase, and its right-hand direction Eright, which are all 3D vectors. Dirbase and Eright are unit vectors with a length of 1.

When the distance between the cameras has been found using the realism priority method, the left and right camera positions may have moved forward or backward (along the depth direction) relative to the original base camera. The base camera's position is updated as follows.

Posbase = Posbase - ( Dlevel_new - Dlevel ) × Dirbase

Because the base camera is midway between the left and right cameras, the positions of the left and right cameras relative to the base camera will be equal to the base camera's position, plus or minus half of the distance between the left and right cameras. In the following equations, the left and right camera positions are Posleft and Posright, and the direction of their look-at points are Tgtleft and Tgtright, respectively (we do not need to find the precise position of the look-at points because we only need to know the camera orientation).

Posleft = Posbase - I × 0.5 × Eright , Tgtleft = Posleft + Dirbase

Posright = Posbase + I × 0.5 × Eright , Tgtright = Posright + Dirbase

Using these equations, you can generate the view matrices for the left and right cameras.
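The eye-position offsets can be sketched with a minimal 3-vector type; in the SDK these would be nn::math::Vector3 values, and the names here are assumptions.

```cpp
#include <cassert>
#include <cmath>

// Minimal stand-in for a 3D vector.
struct V3 { float x, y, z; };
static V3 Add(V3 a, V3 b)       { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static V3 Scale(V3 a, float s)  { return { a.x * s, a.y * s, a.z * s }; }

// Posleft = Posbase - I × 0.5 × Eright ; Posright = Posbase + I × 0.5 × Eright.
// The look-at targets are then posL + dirBase and posR + dirBase.
void EyePositions(V3 posBase, V3 rightDir, float I, V3* posL, V3* posR)
{
    *posL = Add(posBase, Scale(rightDir, -I * 0.5f));
    *posR = Add(posBase, Scale(rightDir,  I * 0.5f));
}
```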

6.1.5. Parallax Required to Display an Object at an Arbitrary Position

Using the matrices calculated by the ULCD library, you can implement stereoscopic display without keeping track of how much parallax is necessary for each 3D object rendered in perspective projection. However, to stereoscopically display a 2D image or a 3D object rendered using orthographic projection, your application must create the left-eye and right-eye images itself.

The figure below shows the parallax required to display objects “in front of” the LCD screen (closer to the viewer than the screen) or “behind” the LCD screen (farther from the viewer than the screen), using the left and right cameras generated by the library.

Figure 6-5. Parallax Used to Display Objects at Arbitrary Positions

Object M1 is positioned at distance d1 from the cameras. To display M1 closer to the viewer than the LCD, render M1 at the positions where straight lines connecting the cameras to M1 intersect the base plane. These positions are R1 and L1 for the right and left eyes, respectively. In other words, to display M1 in front of the LCD, the parallax R1L1 is required. You can calculate this parallax length from the distance I between the left and right cameras and the distance Dlevel from the cameras to the base plane, using trigonometry.

R1L1 = I × ( Dlevel - d1 ) / d1 , but d1 < Dlevel

Similarly, you can use the parallax R2L2 to display object M2, which is positioned at distance d2 (behind the LCD).

R2L2 = I × ( d2 - Dlevel ) / d2 , but Dlevel < d2

If an object is positioned like M3 and included in only one of the left/right camera viewing volumes, only one camera can pass a line through it and intersect the base plane. It is not possible to implement stereoscopic parallax for such objects.
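Both cases can be combined into one small helper. The function name is an assumption; it returns the parallax length (R1L1 or R2L2) given by the two equations above, and 0 for distances that cannot be displayed.

```cpp
#include <cassert>
#include <cmath>

// Base-plane parallax length for an object at distance d from the cameras
// (Section 6.1.5). I is the camera separation; dLevel is the distance from
// the cameras to the base plane.
float ParallaxAt(float I, float dLevel, float d)
{
    if (d <= 0.0f) {
        return 0.0f;                       // Behind the cameras: undefined.
    }
    if (d < dLevel) {
        return I * (dLevel - d) / d;       // In front of the LCD (R1L1).
    }
    return I * (d - dLevel) / d;           // On or behind the LCD (R2L2).
}
```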

6.1.6. Appropriate Parallax for Objects Located at the Maximum Possible Distance From the Base Plane

This section explains how far to offset the left-eye and right-eye images (whether of a 2D image or an object rendered using orthographic projection) from each other to give that image or object the maximum possible stereoscopic depth, so that it appears to be located the maximum possible distance from the viewer.

Objects positioned at maximum possible depth can be represented by offsetting the left and right images by a distance equal to the limit parallax Pr_ltd (this distance is the actual physical offset on the surface of the LCD, which is also the base plane). In other words, to represent a far-off background with stereoscopic display enabled, the images rendered to the render buffers for the left and right eyes are each offset from the position of the base camera by half of the limit parallax. In practice, however, you must use Pv_ltd (the result of converting Pr_ltd to the scale of virtual space) as the offset during rendering.

In principle, this technique can also be used to display stereoscopic images that appear to be jumping out in front of the LCD screen (toward the user) to the farthest extent possible. However, you must be careful when doing so because objects that appear to be located in front of the screen do not have as wide a display range as those that appear to be located behind the screen. This difference in range is shown in Figure 6-5.
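The per-eye offset described above is simply half of the virtual-space limit parallax; a one-line sketch (names are assumptions):

```cpp
#include <cassert>
#include <cmath>

// Each eye's background image is offset from the base camera position by
// half of Pv_ltd (= Pr_ltd × Scaler2v), per Section 6.1.6.
float BackgroundOffsetPerEye(float prLtd, float scaleR2V)
{
    return (prLtd * scaleR2V) * 0.5f;
}
```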

6.2. How to Implement in an Application

You must follow the general process below to implement stereoscopic display in your application.

  • Set the display mode of the upper screen to stereoscopic display.
  • Prepare a display buffer for displaying the right-eye image.
  • For perspective projection, use the ULCD library to calculate the camera matrices for the left and right eyes.
  • Render the images for the left and right eyes, and transfer them to the corresponding display buffers.
  • Display the content of the display buffers on the LCD.

 

6.2.1. Setting the Display Mode

Call the nngxSetDisplayMode function to set the display mode and enable or disable stereoscopic display on the upper screen. If you change the mode during display, do so just after a V-Sync operation.

Code 6-1. Setting the Display Mode
void nngxSetDisplayMode(GLenum mode);

Specify NN_GX_DISPLAYMODE_STEREO for the mode parameter to enable stereoscopic display. Specify NN_GX_DISPLAYMODE_NORMAL to disable stereoscopic display. Specifying any other value results in a GL_ERROR_9003_DMP error. The default mode is NN_GX_DISPLAYMODE_NORMAL, in which stereoscopic display is disabled.

Enabling stereoscopic display causes two screens—one for the left eye and one for the right eye—to both be displayed on the upper LCD screen. The resolutions, formats, and memory region locations (whether in main memory, VRAM-A, or VRAM-B) must be the same for both of these screens. If they are not the same, a call to the nngxSwapBuffers function results in an error. Note that this has no effect on the lower screen.

To specify the screen for the left eye, use NN_GX_DISPLAY0, which is the same value that indicates normal (non-stereoscopic) display. Use the added value NN_GX_DISPLAY0_EXT to specify the screen for the right eye. When you specify the screen for the right eye, you must still perform the same sequence of operations that are required for other screens. This sequence includes specifying the screen (nngxActiveDisplay), binding the display buffer (nngxBindDisplayBuffer), and setting the offset for LCD output (nngxDisplayEnv), among other required processing.

To make it easier to tell which screen you are specifying, the SDK defines aliases: NN_GX_DISPLAY0_LEFT is equivalent to NN_GX_DISPLAY0, and NN_GX_DISPLAY0_RIGHT is equivalent to NN_GX_DISPLAY0_EXT.

6.2.2. Allocating Display Buffers

When stereoscopic display is enabled, the upper LCD screen displays different images for the left and right eyes. Applications handle the image that is seen by the left eye and the image that is seen by the right eye as images displayed on separate LCD screens. For this reason, specifying NN_GX_DISPLAY0 in calls to functions now specifies the left-eye screen, instead of simply specifying the upper screen as this value did formerly. To specify the right-eye screen, use the new value NN_GX_DISPLAY0_EXT.

Because the left-eye and right-eye screens are actually treated as separate LCD screens, they need separate display buffers. When using multibuffering, note that the number of required display buffers will increase accordingly.

The following code sample shows how to allocate a display buffer of the size of the upper LCD screen and use it for double-buffering the right-eye screen.

Code 6-2. Allocating the Display Buffer for the Right Eye
GLuint m_Display0BuffersExt[2];
// For right eye - Upper (DISPLAY0_EXT)
nngxActiveDisplay(NN_GX_DISPLAY0_EXT);
nngxGenDisplaybuffers(2, m_Display0BuffersExt);
nngxBindDisplaybuffer(m_Display0BuffersExt[0]);
nngxDisplaybufferStorage(GL_RGB8_OES, 
    NN_GX_DISPLAY0_WIDTH, NN_GX_DISPLAY0_HEIGHT, NN_GX_MEM_FCRAM);
nngxBindDisplaybuffer(m_Display0BuffersExt[1]);
nngxDisplaybufferStorage(GL_RGB8_OES, 
    NN_GX_DISPLAY0_WIDTH, NN_GX_DISPLAY0_HEIGHT, NN_GX_MEM_FCRAM);
nngxDisplayEnv(0, 0);

6.2.3. Using the ULCD Library

When the application is rendering in perspective projection, it can use the matrices calculated by the ULCD library to implement stereoscopic display. Because the ULCD library uses the base camera to calculate the left and right cameras and associated information needed for stereoscopic display, the application does not need to keep track of how much parallax is necessary.

The ULCD library provides the nn::ulcd::StereoCamera class, which creates camera matrices that account for parallax. You must include the nn/ulcd.h header file to use this class in your application. Note that a function to get the 3D depth slider value was made public as of CTR-SDK 3.3.1, but it is intended for adding secondary effects to 3D viewing. For this reason, unless you use this class, you cannot use the 3D depth slider to adjust the intensity of the stereoscopic effect in your application.

Note:

If you intend to use the function that gets the 3D depth slider value, you must contact Nintendo in advance.

6.2.3.1. Initializing

Generate an instance of the nn::ulcd::StereoCamera class and call the Initialize member function to initialize it.

Code 6-3. Initialization
void Initialize(void);

This function initializes all the internally stored information.

6.2.3.2. Setting and Getting the Limit Parallax

Use the member functions SetLimitParallax and GetLimitParallax to set and get the limit parallax.

Code 6-4. Setting and Getting the Limit Parallax
void SetLimitParallax(const f32 limit);
f32  GetLimitParallax(void) const;

For the limit parameter, specify the limit parallax to set, in millimeters. The maximum limit parallax is set forth in the CTR Guidelines, and for the depth direction (into the screen) you can set any positive value up to this maximum. The limit parallax set here affects the calculated camera matrices.

The GetLimitParallax function gets the current limit parallax value. If the SetLimitParallax function has not yet been called, the function returns the maximum value (for depth into the screen) that is set forth in the guidelines.

6.2.3.3. Base Camera Information

Use the SetBaseFrustum and SetBaseCamera member functions to set information for the base camera.

Code 6-5. Setting the Base Camera
void SetBaseFrustum(const nn::math::Matrix44 *proj);
void SetBaseFrustum(const f32 left, const f32 right, const f32 bottom, 
                    const f32 top, const f32 near, const f32 far);
void SetBaseCamera(const nn::math::Matrix34 *view);
void SetBaseCamera(const nn::math::Vector3 *position, 
                   const nn::math::Vector3 *rightDir, 
                   const nn::math::Vector3 *upDir, 
                   const nn::math::Vector3 *targetDir);

The SetBaseFrustum function mainly sets the near and far clipping planes. You can either set each of the parameters individually, or pass in a projection matrix created by the nn::math::MTX44Frustum or nn::math::MTX44Perspective functions, from which the parameters will be calculated. If you specify a projection matrix in nn::math::Matrix44 format, the base frustum is not calculated correctly unless the projection matrix was calculated based on the definition of the viewing volume of the 3DS graphics system. (Specifically, the z-coordinate must be clipped to the range from 0 to -Wc.)

The SetBaseCamera function mainly sets the position and other parameters of the base camera. You can either set each of the parameters individually, or pass in a view matrix created by a function such as nn::math::MTX34LookAt, from which the parameters will be calculated. When specifying a view matrix or specifying vectors, make sure that you use the right-handed coordinate system adopted by the 3DS graphics system.

6.2.3.4. Calculating the Left and Right Camera Matrices

As described in 6.1 Principles of Stereoscopic Display, the SDK provides two methods for calculating the left and right camera matrices: one that keeps the base camera settings unchanged as much as possible (application priority method), and another that automatically changes the base camera settings (realism priority method).

Use the CalculateMatrices member function to calculate matrices that prioritize the application, and use the CalculateMatricesReal member function to calculate matrices that prioritize realism.

Code 6-6. Calculating the Left and Right Camera Matrices
void CalculateMatrices(
        nn::math::Matrix44* projL, nn::math::Matrix34* viewL,
        nn::math::Matrix44* projR, nn::math::Matrix34* viewR,
        const f32 depthLevel, const f32 factor,
        const nn::math::PivotDirection pivot = nn::math::PIVOT_UPSIDE_TO_TOP,
        const bool update3DVolume = true);
void CalculateMatricesReal(
        nn::math::Matrix44* projL, nn::math::Matrix34* viewL,
        nn::math::Matrix44* projR, nn::math::Matrix34* viewR,
        const f32 depthLevel, const f32 factor,
        const nn::math::PivotDirection pivot = nn::math::PIVOT_UPSIDE_TO_TOP,
        const bool update3DVolume = true);

Both functions take the same arguments and differ only in their results.

For the projL and viewL parameters, specify the storage location of the projection matrix and view matrix for the left eye. For the projR and viewR parameters, specify the corresponding matrices for the right eye. The functions write values to these matrices, so you need instances for these structures.

Set depthLevel equal to the distance Dlevel from the base camera to the base plane, and set factor equal to the stereoscopic adjustment coefficient Dr. The factor parameter is used to correct internally calculated results. If this parameter is set equal to 0.0, there is zero parallax. If this parameter is set equal to 1.0, there is zero correction of the results. In addition to the effects of this value, the calculated results are also affected by the input value from the 3D depth slider.

These functions multiply the output projection matrix by a rotation matrix so that the camera's upward direction matches the direction specified in the pivot parameter. The default argument value is nn::math::PIVOT_UPSIDE_TO_TOP, which creates a rotation matrix in which the camera's upward direction matches the upper screen's upward direction after the projection matrix is rotated. (The upper screen's upward direction is the direction from its center toward the center of its long side, opposite the lower screen.) Specify nn::math::PIVOT_NONE if rotation is unnecessary, such as when rotation has been factored into the base camera settings.

The update3DVolume parameter specifies whether to get and use the 3D depth slider value when calculating the matrix. The value is used (true) by default if no argument is specified. When the 3D depth slider value is used for matrix calculations, if your implementation calls these functions multiple times per frame, any changes to the 3D depth slider value in the middle of a calculation could affect the rendering result.

If your implementation calls these functions multiple times per frame, specify false for the update3DVolume parameter and call the Update3DVolume member function once at the start of each frame. If you do not call this function, adjustments to the 3D depth slider will not affect the rendering result.

Code 6-7. Updating the 3D Depth Slider Value
void Update3DVolume(void);
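For example, a frame loop that latches the slider value once per frame might look like the following sketch. The variable names (s_StereoCamera, depthLevel, factor, and the matrix instances) are assumptions; the member functions are the ones described above.

```cpp
// Latch the 3D depth slider value once, at the start of the frame.
s_StereoCamera.Update3DVolume();

// First calculation this frame (for example, the main scene).
// update3DVolume = false reuses the latched slider value.
s_StereoCamera.CalculateMatrices(&projL, &viewL, &projR, &viewR,
    depthLevel, factor, nn::math::PIVOT_UPSIDE_TO_TOP, false);

// A second calculation in the same frame (for example, a sub-scene)
// is then guaranteed to use the same slider value.
s_StereoCamera.CalculateMatrices(&projL2, &viewL2, &projR2, &viewR2,
    depthLevel2, factor, nn::math::PIVOT_UPSIDE_TO_TOP, false);
```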

6.2.3.5. Getting Parallax Information

The class provides member functions that use the results from the most recent matrix calculation to get the parallax required to produce an image at the specified distance from the camera. These functions cannot get the parallax if the instance has since been re-initialized with the Initialize function.

Code 6-8. Getting Parallax Information
f32 GetParallax(const f32 distance) const;
f32 GetMaxParallax(void) const;

The GetParallax function gets the parallax required to place an object at a position separated from the camera by distance, and returns the ratio of this parallax to the LCD screen width. The GetMaxParallax function returns this ratio for an object at maximum possible depth. In the return value, 1.0 is equivalent to 100 percent of the screen width. Multiply the return value by the LCD screen resolution to find how many pixels to shift the object to the left and the right from its position, as seen by the base camera, to render it stereoscopically.

The return value is positive if the position specified by distance is behind the base plane, and negative if the position is in front of the base plane. The return value is 0 if distance is negative.

When rendering a 3D object in perspective projection, you do not need to keep track of the parallax if you use the matrices generated by the library. When rendering 2D objects or rendering in an orthographic projection, however, you must calculate the parallax for each object. Calling the GetParallax function once per object places a heavy load on the CPU. To speed up this process, use the function below to obtain a partially calculated parallax value, and shift the remainder of the calculation to the vertex shader.

Code 6-9. Getting a Partially Calculated Parallax Value
f32 GetCoefficientForParallax(void) const;

Multiply the return value by ((distance - depthLevel) ÷ distance) to derive the same value returned by the GetParallax function.
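The factoring behind this split can be illustrated with the equations from Section 6.1.5. Assuming the parallax ratio is the base-plane parallax divided by the base-plane width Wlevel (an assumption consistent with this chapter, not a statement about the library's internals), the coefficient I / Wlevel is distance-independent and only the cheap term (distance - depthLevel) / distance remains per object:

```cpp
#include <cassert>
#include <cmath>

// Parallax as a ratio of the screen width for an object at distance d.
// coeff = I / Wlevel would be computed once per frame (the role of
// GetCoefficientForParallax); the second factor can run per vertex.
float ParallaxRatio(float I, float wLevel, float dLevel, float d)
{
    float coeff = I / wLevel;            // Distance-independent part.
    return coeff * ((d - dLevel) / d);   // Cheap per-object part.
}
```

As in the GetParallax return value, the result is positive behind the base plane and negative in front of it.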

Like the functions that get parallax, there are member functions that get the distance from the base camera to the base plane, near clipping plane, and far clipping plane. All these distances are based on the most recent calculation results.

Code 6-10. Getting the Distance From the Base Camera to the Base Plane, Near Clipping Plane, and Far Clipping Plane
f32 GetDistanceToLevel(void) const;
f32 GetDistanceToNearClip(void) const;
f32 GetDistanceToFarClip(void) const;

These functions are mainly used to get the updated base camera information when using the realism priority method. When using the application priority method, these functions simply return the information originally passed to the calculation.

6.2.3.6. Finalization

After an instance of the nn::ulcd::StereoCamera class is no longer needed, call the Finalize member function.

Code 6-11. Finalization
void Finalize(void);

Always call this function explicitly from the application, even though it is also called by the destructor.

6.2.4. Rendering and Transferring to Display Buffers

When you render 3D objects using perspective projection, you can use the projection and view matrices calculated for the left and right cameras by the ULCD library. This allows you to render without needing to keep track of the parallax. However, to render a 2D object or to render a 3D object using orthographic projection, your application must keep track of the parallax because it must render the left-eye and right-eye images itself. 2D objects can be rendered without needing to know parallax, if you render them using perspective projection, but you must keep track of the positions of objects so you can take into account their foreground/background relationships.

Images are rendered to color buffers and transferred to display buffers for display on the LCD screen. Note that the images for the left and right eyes must be transferred to different display buffers.

6.2.5. Displaying on the LCD Screen

Bind the display buffers for the left and right eyes to the screens specified by NN_GX_DISPLAY0 and NN_GX_DISPLAY0_EXT, respectively. Next, swap the display buffers. During stereoscopic display, specifying the upper screen (NN_GX_DISPLAY0) in a call to the nngxSwapBuffers function also swaps the right-eye display buffer, so you do not need to pass NN_GX_DISPLAY0_EXT separately.

Code 6-12. Binding Display Buffers and Swapping Buffers
// UpperLCD for Left Eye
nngxActiveDisplay(NN_GX_DISPLAY0);
nngxBindDisplaybuffer(m_Display0Buffers[m_CurrentDispBuf0]);
// UpperLCD for Right Eye
nngxActiveDisplay(NN_GX_DISPLAY0_EXT);
nngxBindDisplaybuffer(m_Display0BuffersExt[m_CurrentDispBuf0Ext]);
// Swap buffers
nngxSwapBuffers(NN_GX_DISPLAY0);

6.2.6. 3D Depth Slider Input

Tune your application so that players who have maximized the 3D depth slider (and who have no difficulty with stereoscopic display at this setting) enjoy your recommended "best" level of the effect.

When the 3D depth slider is used to reduce the 3D depth until the screen switches from stereoscopic (3D) display to normal 2D display, the system automatically turns off the LCD shutter. The system also automatically handles switching from 2D to 3D display and changing the brightness of the LCD backlight, so you do not need to control these from the application, other than for debugging purposes.

6.2.7. Disabling Stereoscopic Display

When the display of 3D images is restricted by Parental Controls in System Settings, and also when the 3D depth slider has been moved all the way down to 0 (the minimum value), stereoscopic display is forcibly disabled. In this state, attempts to set the display mode or to display the right-eye image on the LCD screen are ignored, and normal (2D) display is used without exception.

You can call the nn::gx::IsStereoVisionAllowed function to determine whether stereoscopic display is enabled. This function returns a value of true when stereoscopic display is enabled, and false when stereoscopic display has been forcibly disabled. This status is not affected by the display mode setting.

6.2.8. Flipping Left-to-Right

To flip a scene left-to-right for stereoscopic display, you cannot simply invert the x-axis of the projection matrix: doing so inverts positions in the depth direction about the base plane.

To resolve this problem, use the projection matrix with an inverted x-axis to create the projection and view matrices for stereoscopic display, and then swap the resulting left and right matrices.

Code 6-13. Flipping Left-to-Right in Stereoscopic Display
nn::math::Matrix44 proj, rev;
nn::math::Matrix44 projL, projR;
nn::math::Matrix34 viewL, viewR;
// SetBaseFrustum
MTX44Frustum(&proj, l, r, b, t, n, f);
MTX44Identity(&rev);
rev.m[0][0] = -1.0f;
MTX44Mult(&proj, &proj, &rev);         // Flip on the x-axis.
s_StereoCamera.SetBaseFrustum(&proj);
// SetBaseCamera
nn::math::Matrix34 cam;
nn::math::Vector3 camUp(0.f, 1.f, 0.f);
nn::math::MTX34LookAt(&cam, &camPos, &camUp, &focus);
s_StereoCamera.SetBaseCamera(&cam);
// CalculateMatrices
s_StereoCamera.CalculateMatrices(
    &projR, &viewR, &projL, &viewL,    // Specify left/right matrices in reverse order.
    depthLevel, factor, nn::math::PIVOT_UPSIDE_TO_TOP);
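Why the left and right matrices must be swapped can be checked with a small one-dimensional model. The sketch below is an illustration, not SDK code: Project computes where an eye at horizontal offset a sees a point x (at distance z) intersect the base plane (at distance d). Mirroring a rendered image negates the projected coordinate, so mirroring alone negates the parallax and inverts depth, while mirroring plus swapping the eyes restores the original parallax.

```cpp
#include <cassert>
#include <cmath>

// One-dimensional off-axis projection (illustrative, not SDK code): the eye
// at horizontal offset a sees the point x, at distance z, intersect the
// base plane at distance d at this horizontal position.
inline float Project(float a, float x, float z, float d)
{
    return d * (x - a) / z + a;
}
```

For example, with e = 0.06, d = 2, z = 4, the normal parallax Project(+e/2, x, z, d) - Project(-e/2, x, z, d) is positive; mirroring each eye's own image negates it (depth inverted about the base plane), while mirroring and swapping the two images restores it (left-to-right flip at the correct depth).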

The following figure shows the principles of flipping left-to-right in stereoscopic display.

Figure 6-6. Principles of Flipping Left-to-Right in Stereoscopic Display

[Figure: left-eye and right-eye views. Flipping the projection matrix on the x-axis alone inverts depth about the base plane; also swapping the generated left and right matrices produces a left-to-right flip at the correct depth.]

6.3. Using the Reserved Fragment Shader

When the rendering pipeline uses shadows, the shadow textures consist of depth values relative to lights in the scene, so the changes that stereoscopic display makes to the view matrices have no effect on shadow texture generation. The view matrices do affect object rendering, however, so you must render twice, once for the left eye and once for the right. When doing so, be careful with the texture coordinate transformation matrices that you use to apply the shadow textures.

When the rendering pipeline uses gas rendering, the view matrices are involved in the generation of depth values during the first pass, so to use stereoscopic display, all passes must be run separately for the left and right eyes.

When the rendering pipeline uses fog, the lookup table input values are depth values in window coordinates, so you must regenerate the lookup table whenever the projection matrix is updated because of stereoscopic display. If you are using the application priority method, you can create the lookup table from the base camera's projection matrix, even in stereoscopic display. However, if you are using the realism priority method, the near and far clipping planes change from the base camera values, so you must regenerate the lookup table, even when the only change is an adjustment to the 3D depth slider.

For fragment lighting in general, if light positions are affected by the view matrices, the light positions you use when you render must take into account both the left and right view matrices. (This issue is also affected by how you standardize coordinate systems.)

6.4. Coordinating With the Stereo Cameras

The left and right cameras are connected to two different ports. Note that port 1 captures images for the right eye (NN_GX_DISPLAY0_EXT), and port 2 for the left eye (NN_GX_DISPLAY0). The cameras output image data to their respective buffers from these two ports, but because there is only one YUVtoRGB circuit, the application must take measures such as mutual exclusion when converting YUV-format images captured by the cameras into RGB format. The left and right images must also be obtained from the same frame, or stereoscopic display may not work properly.

Simply displaying the images obtained from the left and right cameras as is also produces a stereoscopic display, but note that the distance between the two cameras differs from the distance between the user's eyes, so the apparent depth does not exactly match real space.
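The effect of this mismatch can be estimated with a simple geometric model (an illustration, not an SDK calculation): capturing with camera baseline b but viewing with eye separation e on a screen at distance d maps a real distance z to a perceived distance zp satisfying e * (1 - d/zp) = b * (1 - d/z). When b equals e the distance is reproduced exactly; when b is smaller, as with the 35 mm stereo cameras and typical eye separations, depth is compressed toward the screen.

```cpp
#include <cassert>
#include <cmath>

// Illustrative model: a point at distance z, captured with camera baseline b
// and viewed with eye separation e on a screen at distance d, is perceived
// at the distance zp that produces the same on-screen parallax, i.e.
// e * (1 - d/zp) = b * (1 - d/z).
inline float PerceivedDistance(float b, float e, float d, float z)
{
    return d / (1.0f - (b / e) * (1.0f - d / z));
}
```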

6.4.1. Calibrating the Stereo Cameras

The two outer cameras are designed to be positioned horizontally 35 mm apart, but manufacturing mounting tolerances may produce some degree of deviation, leading to a mismatch between the left and right camera images. The application must correct for this mismatch because the camera library does not automatically correct for such mounting variance.

Note:

Perfect correction might not be possible, leaving some mismatch around the edges of corrected images.

Call the nn::camera::GetStereoCameraCalibrationData function to get the stereo camera calibration data. The data is stored in an nn::camera::StereoCameraCalibrationData structure.

Code 6-14. Getting the Stereo Camera Calibration Data
void nn::camera::GetStereoCameraCalibrationData(
        nn::camera::StereoCameraCalibrationData * pDst);

Taken together, the calibration data member variables indicate the correction required to align the left and right camera images (zoom, optical axis rotation, translation) and the measurement conditions for the correction values.

Table 6-2. Stereo Camera Calibration Data Correction Values
Item                                      Member        Range of valid values
Enlarge/shrink (zoom)                     scale         0.9604 through 1.0405
Rotation about the z-axis (optical axis)  rotationZ     –1.6 through +1.6 (degrees)
Horizontal (x-axis) translation           translationX  –154 through –19 (pixels)
Vertical (y-axis) translation             translationY  –70 through +70 (pixels)
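As an illustration of how these members combine (this is a sketch, not the CAMERA library's own formula; for actual correction use the library's correction-matrix functions), a 2-D correction can be modeled as a uniform scale and a rotation about the optical axis, followed by a translation in pixels:

```cpp
#include <cassert>
#include <cmath>

// Illustrative 2-D affine correction (not the CAMERA library's formula):
// scale and rotate about the optical (z) axis, then translate in pixels.
struct Affine2D { float m[2][3]; };

inline Affine2D MakeCorrection(float scale, float rotationZDeg,
                               float translationX, float translationY)
{
    const float rad = rotationZDeg * 3.14159265358979f / 180.0f;
    const float c = std::cos(rad) * scale;
    const float s = std::sin(rad) * scale;
    const Affine2D a = { { { c, -s, translationX },
                           { s,  c, translationY } } };
    return a;
}

inline void Apply(const Affine2D& a, float x, float y, float* ox, float* oy)
{
    *ox = a.m[0][0] * x + a.m[0][1] * y + a.m[0][2];
    *oy = a.m[1][0] * x + a.m[1][1] * y + a.m[1][2];
}
```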

The ranges of possible translation values are quite broad, so simply shifting the left camera image into alignment can bring the image edges into the display area, leaving incomplete edges visible on screen. When using calibration data for 3D display, use the CAMERA library functions for calculating correction matrices; these include a function that expands the image to the display size so that no incomplete edges are shown, even when the stereo camera placement error is at its limit. When using calibration data for purposes other than 3D display (such as image recognition), be careful of possible problems such as reduced processing accuracy resulting from certain calibration data values.

Call the following functions to use the calibration data to calculate the correction matrix that must be multiplied against the left camera image.

Code 6-15. Calculating the Correction Matrix
void nn::camera::GetStereoCameraCalibrationMatrix(
        nn::math::MTX34 * pDst, 
        const nn::camera::StereoCameraCalibrationData & cal, 
        const f32 translationUnit, const bool isIncludeParallax = true);
void nn::camera::GetStereoCameraCalibrationMatrixEx(
        nn::math::MTX34 * pDstR, nn::math::MTX34 * pDstL, f32 * pDstScale, 
        const nn::camera::StereoCameraCalibrationData & cal, 
        const f32 translationUnit, const f32 parallax, 
        const s16 orgWidth, const s16 orgHeight,
        const s16 dstWidth, const s16 dstHeight);
f32 nn::camera::GetParallax(
        const nn::camera::StereoCameraCalibrationData & cal, f32 distance);

You can get the correction matrix in the pDst parameter by calling the nn::camera::GetStereoCameraCalibrationMatrix function with the calibration data passed to the cal parameter and the amount of translation in 3D space (see Note) necessary to move by one pixel passed to the translationUnit parameter. Pass true in the isIncludeParallax parameter to include the parallax from the measurement chart in the correction matrix. Including this parallax places the camera focus 250 mm away from the cameras. Pass false in the isIncludeParallax parameter to only correct for mounting mismatch.

Note:

The translationUnit parameter specifies the amount of translation (the amount of translation required in 3D space to move the camera image by one pixel) multiplied by the ratio between the VGA image width and the display image width. This way of specifying the value applies when you display the image as is at a size different from the VGA image, or when you zoom the image after applying the correction matrix. If you instead zoom an object to which the image is applied before the correction matrix has been applied, and you want the subject to appear at the same size as when the VGA image is displayed pixel-for-pixel, specify the amount of translation without modification.

The nn::camera::GetStereoCameraCalibrationMatrixEx function gets a correction matrix that allows stereoscopic images to be displayed all the way to the edges of the screen, even when the camera placement error is at its limit. Because this function uses the parallax obtained by the nn::camera::GetParallax function, the correction matrix is calculated to place the focus at a particular distance. We recommend that you use this function to calculate the correction matrix, unless you have some reason to do otherwise.

The nn::camera::GetStereoCameraCalibrationMatrixEx function takes the same values in cal and translationUnit as nn::camera::GetStereoCameraCalibrationMatrix. Use the GetParallax function to calculate the parallax (in pixels) of the VGA images to pass to the parallax parameter. The parallax is calculated with the focal point at the distance (in meters) specified by the distance parameter. Specify the width and height (in pixels) of the (trimmed) camera image in orgWidth and orgHeight, respectively. Specify the necessary width and height (in pixels) for rendering in dstWidth and dstHeight.

The correction matrices to be multiplied with the left and right camera images are returned in pDstL and pDstR, respectively. The scaling factor needed for the rendering size is returned in pDstScale.

Note:

For more information about how the correction matrix is calculated, see the CTR-SDK API Reference.

Call the nn::camera::GetParallaxOnChart function to get the parallax from the measurement chart (the parallax that causes the cameras to focus on objects 250 mm away) in pixels.

Code 6-16. Getting Parallax From the Measurement Chart
f32 nn::camera::GetParallaxOnChart(
        const nn::camera::StereoCameraCalibrationData & cal);

The horizontal mounting variance is equal to the result of subtracting the translationX member of the calibration data from the value obtained by this function.
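This relationship can be written as a one-line helper. This is an illustrative sketch, not an SDK function, and the numeric values in any example are hypothetical rather than measured calibration data:

```cpp
#include <cassert>

// Horizontal mounting variance, per the relationship above: the chart
// parallax returned by GetParallaxOnChart minus the calibration data's
// translationX member. Illustrative helper, not an SDK function.
inline float HorizontalMountingVariance(float parallaxOnChart,
                                        float translationX)
{
    return parallaxOnChart - translationX;
}
```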

6.4.2. Linking Stereo Camera Brightness

Automatic exposure and other internal processing runs independently for the left and right cameras, so for some subjects the brightness differs between the two camera images. The CAMERA library provides the nn::camera::SetBrightnessSynchronization function to automatically link the brightness of the stereo camera images.

Code 6-17. Linking Stereo Camera Brightness
nn::Result nn::camera::SetBrightnessSynchronization(bool enable); 

When enable is true, the two cameras are linked and use the same brightness. This setting is disabled (false) by default and works only for the stereo (outer) cameras.

You can call this function even when the (left and right) outer cameras have not been started. After the two cameras have been linked, this setting is remembered, even if the cameras later enter standby mode. They will continue to be linked when they are restarted.

This feature links the brightness of the stereo camera images by periodically writing the exposure setting of the right outer camera to the left outer camera. In particular, when auto-exposure is enabled, changes in the right outer camera's exposure are propagated to the left outer camera. The settings are applied by a low-priority thread created by the library, so they may lag when processing load is heavy; this thread does not consume application resources.

This function call fails if any of the following camera settings are in effect. Conversely, after the cameras have been linked, attempts to apply any of the following settings will fail.

  • White-balance setting other than WHITE_BALANCE_NORMAL
  • Disabled auto-white balance
  • Capture mode of PHOTO_MODE_LANDSCAPE
  • Contrast setting of CONTRAST_PATTERN_10

 

Warning:

Processing may block in the library for a long time if this function is called while the cameras are being restarted.

6.5. Notes

Note:

The content of this section may be revised or expanded to comply with guidelines that are being reviewed separately.

6.5.1. Placing Objects in Front of the LCD Screen

When you use stereoscopic display to position objects so that they appear to jump out in front of the surface of the LCD, you must position them so that they do not touch any edge of the LCD screen. If an object does overlap the edge of the screen, the screen's border, which appears (from the viewer's point of view) to be behind the object, covers up an object that is supposed to be in front of it. This causes an unnatural effect, destroying the illusion of 3D depth for the player.

You must also be careful when positioning the near clipping plane. Normally the near clipping plane is placed very close to the camera, but this usual distance does not match the limit of how far in front of the screen an object can appear while still giving the player a correct sense of depth. When an object gets too close to the camera, stereoscopic display of the object becomes impossible.

This is not a problem that can be solved by simply placing the near clipping plane far from the camera. In each individual application, you must consider and apply ways to prevent objects from getting too close to the camera.

6.5.2. Positioning When 2D Objects and 3D Objects Are Displayed Together

It is possible to display 2D objects rendered with orthographic projection together with 3D objects rendered with perspective projection. When you do this, you must adjust the position, size, and rendering order of the 2D objects to ensure that the 2D and 3D objects have the proper foreground/background relationship.

If you use perspective projection to render the 2D objects in addition to the 3D objects, you do not have to adjust the 2D objects themselves. However, you must still consider the foreground/background relationship of the 3D and 2D objects when deciding where to position your 2D objects. This is because 2D objects could be unintentionally hidden by 3D objects.

6.5.3. Handling a Tilted System

Although the system can be tilted toward or away from the user without any problems while stereoscopic images are being displayed, it may become difficult to display stereoscopic images if the system is tilted left and right or rotated because the LCD will no longer face the user directly.

To handle this problem, Face Raiders (preinstalled on the system) uses the gyro sensor to determine whether the system has been tilted, and if so, weakens the 3D adjustment coefficients (decreases the strength of the stereoscopic effect). If your application displays stereoscopic images while the system is being moved vigorously, we recommend that you likewise use the gyro sensor to detect how the system is tilted and adjust the stereoscopic strength accordingly.

6.5.4. Cautions When Using the Display Mode to Disable 3D Display

The nn::ulcd::CalculateMatrices function returns values based on the 3D volume, even when the application calls nngxSetDisplayMode(NN_GX_DISPLAYMODE_NORMAL) to disable 3D display. Consequently, using the matrices calculated by the ULCD library (as is) for display results in on-screen changes linked to 3D volume, even though the 3D LED is turned off.

You can avoid this situation either by not using the matrices calculated by CalculateMatrices after you disable 3D display with nngxSetDisplayMode, or by passing a value of 0.0 for the factor parameter.

This issue does not occur when 3D display is forcibly disabled, as described in 6.2.7 Disabling Stereoscopic Display.


CONFIDENTIAL