Cameras

The camera is the view point of any scene. Using a camera, the view position, orientation, target, and aspect ratio can be set. In HOOPS Visualize, a camera is an attribute of a segment. As is true of any segment attribute, the camera is inherited by the children of the segment unless they explicitly override it with their own locally set camera. Thus, each segment always has a net value for the camera attribute, which is used to view the geometry in that segment.

Like any attribute, the camera can be set in multiple places in the database. However, since each segment has only one net value for the camera attribute, each piece of geometry in a segment can only be viewed by a single camera. Different segments in a scene can be viewed by different cameras, but each segment will be viewed only once.

There is an exception to the above: if a segment is used as an include segment, it will have its own net camera each time it is included. Thus, geometry in the include segment can be viewed from more than one angle if each include path resolves to a different net camera.
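For instance, here is a minimal sketch of that situation; the segment names and myWindowKey are illustrative:

    // a single geometry segment included under two parents, each with its own camera
    HPS::SegmentKey geometrySegment = HPS::Database::CreateRootSegment();
    HPS::SegmentKey leftView = myWindowKey.Subsegment("left");
    HPS::SegmentKey rightView = myWindowKey.Subsegment("right");

    leftView.IncludeSegment(geometrySegment);
    rightView.IncludeSegment(geometrySegment);

    // each include path resolves its own net camera for the shared geometry
    leftView.GetCameraControl().SetPosition(Point(0, 0, -10));
    rightView.GetCameraControl().SetPosition(Point(10, 0, 0));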

    // setting the camera using the CameraControl and method chaining
    mySegmentKey.GetCameraControl()
        .SetUpVector(Vector(0, 1, 0))
        .SetPosition(Point(0, -10, 0))
        .SetTarget(Point(0, 0, 0))
        .SetField(4, 4)
        .SetProjection(HPS::Camera::Projection::Perspective);

    // alternatively, the same operation can be done with a CameraKit
    HPS::CameraKit cameraKit;
    cameraKit.SetUpVector(Vector(0, 1, 0));
    cameraKit.SetPosition(Point(0, -10, 0));
    cameraKit.SetTarget(Point(0, 0, 0));
    cameraKit.SetField(4, 4);
    cameraKit.SetProjection(HPS::Camera::Projection::Perspective);

    mySegmentKey.SetCamera(cameraKit);

Camera Inheritance

Unlike most other attributes, the components of a camera inherit as a group. When you set a new camera, it does not inherit individual components from any camera attribute higher up in the database tree. Instead, the new camera completely overrides the inherited camera. A further consequence of this paradigm is that when you change a single camera setting on a segment with an inherited camera, the inherited camera is discarded and the default camera takes its place, incorporating only the new setting. However, this rule does not apply to segments that already have a local camera: their settings can be changed individually using the segment's HPS::CameraControl.

If you unset a camera attribute that was explicitly set on a segment, the segment goes back to inheriting the entire camera. You cannot unset an individual component of a camera.
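For example, a minimal sketch of returning a segment to the inherited camera:

    // remove the local camera; the segment reverts to inheriting the entire net camera
    mySegmentKey.UnsetCamera();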

See also the camera set-up guidelines.

Camera Components

A camera attribute consists of several components: position, target, up vector, field, and projection.

../_images/3.2.1.a.gif

The components of a camera

Reading the Camera Components

Often it is necessary for an application to inspect the values of the camera components in order to make a decision. To do this, simply “show” the values into an HPS::CameraKit using the HPS::SegmentKey. For example, if you need to get the current location of the camera:

    // getting all components of the camera into a kit
    HPS::CameraKit cameraKit;
    mySegmentKey.ShowCamera(cameraKit);

    // subsequently getting one component, position, from the kit
    HPS::Point position;
    cameraKit.ShowPosition(position);

    // alternatively, if you only need one component, it may be more convenient to use:
    mySegmentKey.GetCameraControl().ShowPosition(position);

The Default Camera

Since a camera is an attribute, Visualize supplies a default value for it. The default camera is positioned at [0, 0, -5] with a target of [0, 0, 0] and an up vector of [0, 1, 0]. The field of the camera is 2.0 units wide and 2.0 units high, and the projection is perspective. The default camera is inherited down the scene graph by all nodes until a segment's local camera overrides it.

../_images/3.2.2.a.gif

A diagram illustrating the field width and height of the default camera

The camera field of 2 units wide by 2 units high, centered at the origin, indicates the limits of the camera’s view. In Visualize, this coordinate system is referred to as the window coordinate system. The image below shows how the default camera would map window coordinates onto the screen.

../_images/3.2.2.b.gif

Window coordinates when using the default camera

Mapping the Camera Field to the Window

If the camera field is changed, then the portion of world space visible in the window will change. For example, we might have a scene that contains objects that range from +5 to -5 in both x and y (in world coordinates). Using the default camera, we would see only those objects that are within 1 unit of the origin. We can change the camera field such that its width and height are both 10 units (between +5 and -5).

    mySegmentKey.GetCameraControl().SetField(10, 10);

The center of the output window always corresponds to the target position in world coordinates.

../_images/3.2.2.c.gif

The window coordinates after the camera field has been adjusted

Of course, the range of world coordinates viewable in the window is completely arbitrary. For example, a molecular-modeling application might use extremely small coordinates to represent nanometer-scale structures, and would thus set the camera field to an appropriately small size; a civil engineering application might set the camera field to be 100,000 meters wide, so that it could view an area 100 kilometers across.

It is usually a good idea to scale your application's units so that your coordinates are neither extremely small nor immensely large. For example, it would probably be a bad idea to choose meters as your units in an astronomy application (your coordinates would be very large) or in a molecular-modeling application (your coordinates would be very small). Even though coordinates are floating-point numbers in Visualize, most of the defaults work best with coordinates that stay roughly between -1.0 and +1.0. You can also run into numerical-accuracy problems when you mix numbers with very different scales in the same computation.

Changing Window Size

When changing the window size, you are encouraged to use the convenience method CreateNormalisationTransformation(int width, int height). This method takes the width and height of the window and returns a pixel-to-window transformation matrix. It’s intended to be used in window resize event callbacks.

    void CHPSView::OnSize(UINT nType, int cx, int cy)
    {
        if (cx > 1 && cy > 1)
            _pixelToWindowMatrix = MatrixKit::CreateNormalisationTransformation(cx, cy);

        CView::OnSize(nType, cx, cy);
    }

Then, you can use this matrix whenever you need to perform a screen-space calculation. For example, when intercepting a mouse event:

    HPS::Point p(static_cast<float>(cpoint.x), static_cast<float>(cpoint.y), 0);
    p = _pixelToWindowMatrix.Transform(p);

See CHPSView.cpp in the MFC Sandbox demo application (<HPS_INSTALL_DIR>/samples/mfc_sandbox/) or SprocketsWPFControl.cs in the WPF Sandbox (<HPS_INSTALL_DIR>/samples/wpf_sandbox) for example usage.

Changing the Camera Near Plane

By default, Visualize will automatically adjust the near clipping plane to be as close to the camera target as possible, which should generally result in good usage of z-precision. In special circumstances, you may want to favor precision of certain objects at the expense of clipping other objects, and this can be achieved by manually setting the camera’s near clipping plane to a positive value (this will disable the default ‘auto-adjust’ behavior).

You can change the camera near limit by calling the following function:

    mySegmentKey.GetCameraControl().SetNearLimit(0.1f);

Camera Set-Up Guidelines

How you set up the camera in a scene can significantly affect the visual quality of the rendered result. To maximize visual quality and reduce artifacts like edge stitching and shine-through, we recommend that you follow the camera set-up guidelines outlined in this section.

When you set up your camera, the distance from the camera position to the camera target should be 2.5 times the field width. This 2.5:1 camera ratio maximizes the z-buffer resolution around the camera target, thus reducing the occurrence of edge stitching and shine-through. It also provides a commonly accepted level of foreshortening for perspective projections.

When possible, this general camera set-up should be maintained even when zooming in, out, or to the extents of a specific object. For example, when zooming in, you should NOT actually zoom the camera, because doing so modifies the camera field, thus changing the 2.5:1 ratio that you want to maintain. Instead, you can create the effect of zooming by dollying the camera: reset the camera target to the middle of the object that you wish to view, modify the field as desired, and then move the camera position forward or back to maintain the 2.5:1 camera ratio.
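The following is a minimal sketch of that procedure. The helper name ZoomWhilePreservingRatio is hypothetical, not part of the Visualize API:

    // a hypothetical helper that "zooms" by re-fielding and dollying,
    // preserving the 2.5:1 position-to-field ratio
    void ZoomWhilePreservingRatio(HPS::SegmentKey segmentKey, HPS::Point const & newTarget, float newFieldWidth)
    {
        HPS::CameraKit camera;
        segmentKey.ShowCamera(camera);    // assumes the segment has a local camera

        HPS::Point position, target;
        camera.ShowPosition(position);
        camera.ShowTarget(target);

        // keep the current viewing direction
        HPS::Vector direction(position - target);
        direction.Normalize();

        float width, height;
        camera.ShowField(width, height);

        // retarget, re-field, then dolly to 2.5 field-widths from the new target
        camera.SetTarget(newTarget);
        camera.SetField(newFieldWidth, newFieldWidth * height / width);
        camera.SetPosition(newTarget + direction * (2.5f * newFieldWidth));

        segmentKey.SetCamera(camera);
    }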

If you deliberately want to set up an extreme field of view where the camera ratio is 5:1 or higher, additional steps are required to preserve the visual integrity of your scene. Although Visualize automatically sets the near clip plane to the frontmost object in the scene to maximize the z-buffer resolution, there are situations where this is not effective. For instance, if you have zoomed into a small part in a complex model, there may be other parts that are not visible in the view frustum but whose bounding box is still closer to the camera position, or possibly behind it. In this case, the automatic near plane adjustment will not be effective, and you need to manually reset the near clip plane so that it is closer to the bounding boxes of the objects being viewed.

Even when the automatic near plane adjustment does increase the z-buffer resolution significantly, it cannot remove all potential edge stitching and edge shine-through at extreme fields of view. In these cases, you may need to modify the face displacement option using the HPS::DrawingAttributeControl. Visualize does not automatically take extreme fields of view into account when normalizing the face displacement value, so the default value can be too large at extreme field-of-view settings, causing edges to shine through when they should be hidden. You may need to reduce the face displacement as the camera view distance gets greater. For example, with a camera ratio of 20:1, resetting the face displacement to 1 removes the edge shine-through. A general guideline is to set the face displacement to 20 / camera_ratio, where:

camera_ratio = Distance(camera_target - camera_position) / camera_field_width

In addition to tweaking the face displacement value, you can also use the HPS::DrawingAttributeControl to set the vertex displacement. These two settings, used in conjunction with one another, can fine-tune the visual quality of your scene, reducing edge shine-through and stitching.
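As a sketch of the 20 / camera_ratio guideline (SetFaceDisplacement is assumed to accept the computed value directly; consult the HPS::DrawingAttributeControl reference for the exact signature):

    // compute camera_ratio = |target - position| / field_width and apply the guideline
    HPS::CameraKit camera;
    mySegmentKey.ShowCamera(camera);

    HPS::Point position, target;
    float width, height;
    camera.ShowPosition(position);
    camera.ShowTarget(target);
    camera.ShowField(width, height);

    float camera_ratio = (target - position).Length() / width;
    mySegmentKey.GetDrawingAttributeControl().SetFaceDisplacement(20.0f / camera_ratio);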

Aspect Ratio

The ratio of the width to the height of a coordinate system is called the aspect ratio. For example, the default window-coordinate system has an aspect ratio of 1 to 1 (as defined by the width and height of the camera field). A window on the screen also has an aspect ratio. If the aspect ratio of the screen window exactly matches the aspect ratio of the camera field, then the camera field fits perfectly into the window.

When the output window is resized, the aspect ratio of the screen window may change. If the aspect ratio of the window does not match the aspect ratio of the camera field, Visualize will center the camera field in the screen window, so that the whole camera field is visible. In effect, this means the camera field defines the minimum area around the target in the scene that is guaranteed to be visible in the output window. Visualize pads either the width or the height of the camera field as necessary to make the camera field fit the screen window.

For example, if the output window is resized such that it is 50 percent wider than it is tall (the aspect ratio becomes 1.5 to 1), then the y coordinates will range from -1.0 to 1.0 as before, but the x coordinates will range from -1.5 to 1.5. Visualize does not clip the scene to the camera field, so objects that are slightly outside of the camera field may become visible. Here, the camera field is indicated by dashed lines (these dashed lines do not actually appear in the output window):

../_images/3.2.4.a.gif

Fitting the camera field into a non-square window

As you resize the output window of a Visualize application, the output scene scales as the window is resized, but the relationship of x to y coordinates does not change. Thus, as the output window gets wider or taller, a circle continues to look circular rather than getting fatter or skinnier (see stretched projections for an example of how to make objects scale nonuniformly to match the window).

It is possible to keep the aspect ratio of the output window constant, using:

    HPS::StandAloneWindowOptionsKit sawok;
    sawok.SetMobility(HPS::Window::Mobility::FixedRatio);

When this option is in effect, the user can change the size of the output window, but the ratio of the window’s width to height will remain fixed. However, this only applies to Visualize-created windows (HPS::StandAloneWindowKey), and not to windows created by the user and passed to Visualize (HPS::ApplicationWindowKey). Since virtually all applications will take the latter approach, this option is often not applicable.

Manipulating the Camera at a High-Level

Once you have a camera set up, you might want to move it around the scene. You can always move it by changing the individual attribute settings, but Visualize also provides a number of high-level routines to make it easier.

Zoom and Dolly

Zooming and dollying are accomplished as shown below:

    mySegmentKey.GetCameraControl().Zoom(2.0f); // zoom
    mySegmentKey.GetCameraControl().Dolly(0.5f, 0.25f, 0); // dolly

Note that the zoom level is not itself a camera setting. Zooming actually modifies the camera field: a zoom by a factor of 2.0 makes the camera field one-half as big in both dimensions.
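You can observe this by showing the field before and after a zoom:

    // inspect the camera field around a zoom operation
    float width, height;
    HPS::CameraKit camera;

    mySegmentKey.ShowCamera(camera);
    camera.ShowField(width, height);    // e.g., 4 x 4

    mySegmentKey.GetCameraControl().Zoom(2.0f);

    mySegmentKey.ShowCamera(camera);
    camera.ShowField(width, height);    // now 2 x 2 - half as big in both dimensions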

If you are using a perspective projection, then zooming in and out can change how objects look in perspective. Very wide camera angles (which act like a wide-angle lens) accentuate perspective and make objects look strange; very small camera angles (equivalent to a telephoto lens) reduce perspective. If you use a small enough camera angle, perspective will virtually disappear. An orthographic projection can be thought of as a camera infinitely far away with an infinitely large zoom factor.

The dolly command moves both the position of the camera and the camera target, but maintains the up vector and field. Thus, dollying produces the same change in the view as would occur if you translated the scene in the opposite direction.

Notes on Dolly versus Zoom

Dollying the camera forward and back and zooming the camera in and out might seem to have a similar effect on a view, but the effects are actually quite different. If you are using an orthographic projection, then dollying the camera will not make the objects in the scene get larger or smaller. This is because in an orthographic view, the size of an object does not depend on that object’s distance from the camera. To make objects larger or smaller in an orthographic projection, you need to zoom the camera (zooming changes the camera field).

In a perspective projection, zooming the camera in and out will make the objects larger or smaller, but it will also change the perspective in the scene. Dollying the camera forward and back will make objects in the scene larger or smaller without changing perspective, but if there is an object close in front of the viewpoint, then dollying the camera forward might put that object behind the camera (or, even more disconcerting, put the camera inside of the object). Likewise, dollying the camera back might put an object that used to be behind the camera in front of it, blocking the view.

Orbit

Orbiting the camera around the target point is accomplished using the Orbit command. Orbiting produces the same change in the view as would occur if you rotated the scene the opposite direction about the target point.

The two arguments to Orbit are floating-point numbers. The first number is the amount to orbit around to the right (or, if negative, to the left). The second number is the amount to orbit up (or, if negative, down). If both arguments are non-zero, the left-right orbit is performed first. For example, if we start with the default camera, then the code below orbits the camera such that the camera is looking at the scene from the positive x axis.

    mySegmentKey.GetCameraControl().Orbit(90.0f, 0);

If you orbit the camera up or down, the up vector is rotated by the same amount, so it remains perpendicular to the new line of sight. If you orbit the camera up 180 degrees (up and over the top), the scene will be upside down (with the up vector pointing in the negative y direction), but if you orbit the camera right 180 degrees, the scene will be right-side up (with no change to the up vector).

Like all calls to the camera movement functions, calls to Orbit work relative to the current camera position, so successive calls are cumulative. Two calls, each of which orbits the camera 10 degrees to the right, will orbit the camera a total of 20 degrees.
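For example:

    // each call is relative to the current camera, so the net orbit is 20 degrees
    mySegmentKey.GetCameraControl().Orbit(10.0f, 0);
    mySegmentKey.GetCameraControl().Orbit(10.0f, 0);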

Pan

Imagine the camera positioned on a tripod. Without changing the position of the tripod, you can swivel the head of the tripod right and left, or up and down. This movement is called panning. Panning the camera changes the camera target, but leaves the camera position unchanged. In addition, if you pan up or down, the camera up vector is rotated an equivalent amount, so it remains perpendicular to the new line of sight.

The two arguments to Pan are the amount in degrees to pan to the right (or, if negative, to the left), and the amount to pan up (or, if negative, down). For example, if someone says “look, up in the sky, it’s a…”, you probably want the following command:

    mySegmentKey.GetCameraControl().Pan(0, 90.0f);

If both an up-down pan and a right-left pan are specified, then the right-left pan is performed first. Note that pan and orbit both rotate the camera, but orbit rotates the camera about the target point and changes the camera position, whereas pan rotates the camera about the camera position and changes the camera target.

Roll

The roll camera command rotates the camera about the line of sight, leaving both the camera position and target unchanged. It is equivalent to rotating the up vector. A positive roll rotates the camera counterclockwise, which makes the scene appear to rotate clockwise. Rolling the camera produces the same change to the view as would occur if you rotated the scene the opposite direction about the line of sight.

The following command rotates the camera 180 degrees, turning the scene upside-down.

    mySegmentKey.GetCameraControl().Roll(180.0f);

Transform Masks

In some cases, it may be necessary for certain objects to ignore transforms. Consider the case of an axis triad, which is a standard part of many scenes. You would normally want the axis triad to rotate along with the coordinate system but not be affected by translations or zooming. To make this possible, HOOPS Visualize provides the concept of transform masks. Setting a transform mask enables a segment to ignore transformations that would otherwise be applied to the entire scene.

In the code sample below, axisSegment is a subsegment of modelSegment. Normally, this would mean that the axis triad inherits the transformation of its parent segment. However, with transform masks, we can disable the translation and scaling:

    // disable scale and translation
    axisSegment.GetTransformMaskControl().SetCameraScale(true).SetCameraTranslation(true);

The flexibility of transform masks becomes apparent when you have a deep scene hierarchy. Even though a transform mask may be applied to a particular segment, its subsegments can still make use of the original transform if the developer disables the mask on that segment.
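For example, a child of axisSegment could turn the masks back off; the subsegment name below is illustrative:

    // re-enable camera scale and translation for a child of axisSegment
    HPS::SegmentKey axisLabelSegment = axisSegment.Subsegment("labels");
    axisLabelSegment.GetTransformMaskControl().SetCameraScale(false).SetCameraTranslation(false);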

Other Projections

So far, we have talked about only orthographic and perspective projections. Visualize also provides a stretched projection, as well as the ability to skew any projection to form an oblique projection.

Oblique Orthographic

An orthographic projection is typically used in drafting applications so that objects do not get smaller as they get farther away, and so that parallel lines remain parallel. Unfortunately, a regular orthographic projection can cause some lines to be hidden. For example, in the following figure, viewing a cube straight on in an orthographic view causes it to look like a square.

../_images/3.2.6.a.gif

Orthographic and oblique orthographic views of a cube

In an oblique orthographic view, the x and y coordinates are skewed depending on the z coordinate. For example, a typical oblique orthographic view moves objects up and to the right (but does not make them smaller) as they get farther away. We can thus see the sides of the cube, even though we are still viewing it straight on. Oblique projections are created by setting the skew parameters of SetProjection:

    // 15 is the skew angle
    mySegmentKey.GetCameraControl().SetProjection(HPS::Camera::Projection::Orthographic, 15, 15);

Oblique Perspective

An oblique-perspective projection is useful when the target plane of a perspective projection is not perpendicular to the line of sight. There are a few, albeit specialized, situations where this kind of projection can be useful.

For example, consider a graphics system with three display monitors arranged side by side to display a panoramic view of a single scene. Logically, the three monitors are displaying a single view, but physically we need to create three separate views, one for each monitor. For the two side monitors, the screen is not perpendicular to the line of sight, so we must use a target plane (which is always parallel to the screen) that is not perpendicular to the line of sight. The following image shows the situation viewed from above (looking down the y axis).

../_images/3.2.6.b.gif

Use of oblique perspective with multiple monitors

To make this setup work, the target plane is rotated about the y axis for the side views using an oblique-perspective projection.

To determine the proper angle to rotate the target plane for each monitor, we take the offset from the camera target to the center of the monitor, divide it by the distance from the viewpoint to the (entire) target plane, and take the arc tangent of the result. For example, if we are using a default camera for monitor 2, then monitor 1 is offset 2.0 units, and the distance from the viewpoint to the target plane is (the default) 5.0 units, which gives us arctan(2.0 / 5.0) = 21.8 degrees. For the camera corresponding to monitor 1, we issue the following command:

    // the projection is made oblique by the skew parameters following the "Perspective" enum;
    // here the target plane is rotated 21.8 degrees about the y axis only
    mySegmentKey.GetCameraControl().SetProjection(HPS::Camera::Projection::Perspective, 21.8f, 0);

The camera for monitor 3 will have its target plane rotated -21.8 degrees.

The same trick may be useful even if you are not using multiple monitors. For example, consider a flight simulator used for training airplane pilots. Such simulators display the view that the pilot would see out of a window in a monitor positioned where the window would be. Often, these windows are not perpendicular to the pilot’s line of sight, so an oblique-perspective view is required.

Another use for oblique-perspective views is for creating stereo images. To create a stereo image, we need to create two views of an object: one from the perspective of each eye. We already know how to create two views of the same object using two cameras. We offset each camera slightly left or right to approximate the position of each eye; however, then the line of sight for each eye is no longer perpendicular to the target. The following image illustrates the situation viewed from above.

../_images/3.2.6.c.gif

Use of oblique perspective for a stereo view

We would rotate the target plane slightly for each eye with an oblique-perspective projection. However, it is not necessary to manually set up oblique-perspective views to create a stereo image, because Visualize provides built-in support for stereo viewing.

Stretched Projections and 2D Scenes

In our discussion on aspect ratio, we saw how Visualize keeps the aspect ratio of a scene constant, even when we change the dimensions of the window. The system keeps the aspect ratio constant by adding extra space to the camera field either on the sides or on the top and bottom.

../_images/3.2.6.d.gif

Regular projection - circle stays circular

To create a stretched projection, the camera projection must be set to Stretched.

    cameraKit.SetProjection(HPS::Camera::Projection::Stretched);
    mySegmentKey.SetCamera(cameraKit);

../_images/3.2.6.e.gif

Stretched projection - circle stretches to match window

With a stretched projection, the scene stretches to fit the output window. Why would we want that to happen? Do we not want our circles to remain circular and our squares to remain square? One case where we would use a stretched projection is to draw a border around the inside of a window. The snippet below draws a thick black border by inserting a line around the inside of the window.

    HPS::SegmentKey borderKey = windowKey.Subsegment();
    borderKey.GetCameraControl().SetProjection(HPS::Camera::Projection::Stretched);

    PointArray pointArray(5);
    pointArray[0] = Point(-1, -1, 0);
    pointArray[1] = Point(1, -1, 0);
    pointArray[2] = Point(1, 1, 0);
    pointArray[3] = Point(-1, 1, 0);
    pointArray[4] = Point(-1, -1, 0);

    borderKey.InsertLine(pointArray);
    borderKey.GetLineAttributeControl().SetWeight(6.0f);

Since the camera showing the geometry has a perspective projection, and the border has a stretched projection, the result is a scene that looks normal with a border that stretches to match the window size:

../_images/3.2.6.f.gif

The border stretches, but the circle does not.

Another case where stretched projections are useful is when we want to place an object in a specific position of the output window, even if the output window is resized. For example, to place an object (such as a user-interface gadget) in the upper-right corner of the window, we could position it at x = 1, y = 1. However, it would only appear in the upper-right corner if the output window is square. By using a stretched projection, we can place objects accurately regardless of the aspect ratio of the output window. Another such use would be to place a toolbar along one of the sides of the window. The toolbar itself could then use an unstretched projection, so that the tools (buttons and sliders) would not stretch.
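A minimal sketch of that idea (the subsegment name is illustrative):

    // pin a marker to the upper-right corner, regardless of the window's aspect ratio
    HPS::SegmentKey gadgetKey = windowKey.Subsegment("corner-gadget");
    gadgetKey.GetCameraControl().SetProjection(HPS::Camera::Projection::Stretched);
    gadgetKey.InsertMarker(Point(1, 1, 0));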

Calculating the Camera Near Plane

In certain cases, you may want 3D geometry to be rendered on top of other geometry regardless of its world position. For instance, when drawing leader lines, it is usually not desirable to have the leaders obscured. There are a few different ways to do this depending on your situation.

If you have a graphic entity that is part of an overlay, you may want to use a subwindow. If you are using hidden line removal, you should look into our section on hidden surface removal.

In the leader line case, or when drawing any other 3D geometry that doesn't fall into the categories above, the recommended procedure is to draw the geometry on the camera near plane. Calculating the near plane involves a call to HPS::WindowKey::ConvertCoordinate, using an output space parameter of HPS::Coordinate::Space::InnerWindowNormalized. This gives you the coordinates of your object in normalized (-1 to 1) window space. The X and Y are identical to window space coordinates, so once you have those normalized coordinates, you can simply set the Z value to 0 to make the object appear in front of all other objects in the view.
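The round trip might look like the following sketch; it assumes ConvertCoordinate takes an input space, an input point, an output space, and an output point, and that myWindowKey is your window key (consult the reference manual for the exact signature):

    // project a world-space anchor point onto the camera near plane
    HPS::Point anchor_world(1, 2, 3);
    HPS::Point anchor_normalized;
    myWindowKey.ConvertCoordinate(HPS::Coordinate::Space::World, anchor_world,
        HPS::Coordinate::Space::InnerWindowNormalized, anchor_normalized);

    // z = 0 in normalized space lies on the near plane, in front of everything else
    anchor_normalized.z = 0;

    HPS::Point anchor_near_plane;
    myWindowKey.ConvertCoordinate(HPS::Coordinate::Space::InnerWindowNormalized, anchor_normalized,
        HPS::Coordinate::Space::World, anchor_near_plane);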

This camera near plane value is view-dependent, so if you transform your scene in any way, such as performing a rotation, the near plane value will need to be recomputed.

The above process works for both orthographic and perspective projections. Note, however, that because perspective projections are not linear, a Z value of 1.0 represents an infinite distance from the camera and therefore cannot be calculated using ConvertCoordinate. Setting the normalized Z coordinate to 0.999999 would cause Visualize to draw the object behind all other objects.

Importing Models With Very Large Transformations

In certain cases, models which contain very large transformations or geometry coordinates relative to their size may not be presented correctly due to floating-point precision errors. This issue stems from a loss of precision which happens when the double-precision data present in a model is converted to the single-precision HPS database. In single precision, adding a small number to a much larger one results in a significant loss of precision. The effect is especially pronounced in federated models, where it causes the apparent relative positions of geometry to be incorrect. Another common sign that a model is suffering from this problem is the model appearing to shake or vibrate as the camera is moved around the scene. This section of the Programming Guide provides one possible solution to this issue.

There are two main ways to approach this problem, and the solution depends on what is causing the model to be placed far away from the origin. Both of these approaches involve changing the contents of the scene graph, and as such should be performed before enabling static model computation.

Models Placed Far From the Origin By a Large Translation

Models can contain one or more large translations which place the model very far away from the origin. If the model's bounding is much smaller than the size of the translation, adding the (comparatively) very small geometry coordinates to the (comparatively) very large model translation will result in a loss of precision that is responsible for rendering artifacts.

If the model in question is being imported through the Exchange sprocket, you can opt to have Visualize automatically solve this issue for you by enabling the SetLargeTranslationExtraction setting found in the HPS::Exchange::ImportOptionsKit class. When this setting is enabled, translations that are much larger than the model size are ignored, and a user option is inserted where they would have been found. This allows the original transformation to be queried, if needed, without compromising visual fidelity.

The translation is stored as user data at the index you specify, and the data has the format extracted translation = (%f, %f, %f).
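Reading the stored translation back could look like the following sketch; it assumes the translation was stored at user data index 0 on the segment at hand, and that HPS::SegmentKey::ShowUserData fills a ByteArray (check the reference manual):

    // retrieve the stored user data and parse the translation out of it
    HPS::ByteArray data;
    if (mySegmentKey.ShowUserData(0, data)) {
        HPS::Vector extracted;
        std::sscanf(reinterpret_cast<char const *>(data.data()),    // std::sscanf requires <cstdio>
            "extracted translation = (%f, %f, %f)", &extracted.x, &extracted.y, &extracted.z);
    }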

If the model was imported through other means, you can add some code to your application that will perform equivalent steps:

  1. After the model has been loaded, search it for translations.

  2. Compute the bounding of the segment where the translation is found.

  3. Decide whether the translation is so much larger than the bounding that it would cause visual artifacts. A translation which is several orders of magnitude larger than the bounding is a good candidate.

  4. If this is the first translation considered too large, save it to use as a baseline. Other translations that will be removed need to take it into account, so that the relative positioning of different pieces of geometry remains untouched.

        float cutoff = 10000;
        Vector baseline_translation = Vector::Zero();
        SearchResults search_results;

        model.GetSegmentKey().Find(Search::Type::ModellingMatrix, Search::Space::SubsegmentsAndIncludes, search_results);

        SearchResultsIterator it = search_results.GetIterator();

        while (it.IsValid()) {
            // found items will be the segments that contains transformations
            SegmentKey segment_with_transform(it.GetItem());
            FloatArray transform;
            ModellingMatrixControl modelling_matrix_control = segment_with_transform.GetModellingMatrixControl();
            modelling_matrix_control.ShowElements(transform);

            // extract the translation from the transformation
            Vector translation(transform[12], transform[13], transform[14]);

            BoundingKit bounding_kit;

            if (segment_with_transform.ShowBounding(bounding_kit)) {
                SimpleSphere sphere;
                SimpleCuboid cuboid;
                bounding_kit.ShowVolume(sphere, cuboid);

                // compare magnitudes, since translation components may be negative (std::abs is from <cmath>)
                if (std::abs(translation.x) / sphere.radius > cutoff || std::abs(translation.y) / sphere.radius > cutoff ||
                    std::abs(translation.z) / sphere.radius > cutoff) {
                    // This translation seems too large given the model's bounding.
                    if (baseline_translation == Vector::Zero()) {
                        // This is the first translation to be removed for this model. Store it in case more translations need to
                        // be removed
                        baseline_translation = translation;
                        modelling_matrix_control.Translate(-translation);
                    }
                    else {
                        // This is not the first translation to be removed.
                        // Take the baseline translation into account so that relative positioning is maintained.
                        Vector delta(translation - baseline_translation);
                        modelling_matrix_control.Translate(
                            -translation.x + delta.x, -translation.y + delta.y, -translation.z + delta.z);
                    }
                }
            }

            it.Next();
        }

Models That Contain Geometry Defined to Be Very Far Away From the Origin

It is also possible for a model to be placed very far away from the origin not because of large translations, but because the geometry itself is defined that way. Just like the previous case, this is only a problem if the location of the model is very far away compared to the size of the model - a model whose bounding is 10 units across, located millions of units away from the origin, will cause problems, while a much larger model at the same location will render without precision issues.

This issue is more costly to rectify than the previous case, since it involves changing each piece of geometry in the model, rather than changing only the translations that are too large compared to the model's size. As such, the possibility of precision loss due to large translations should be considered first.

This issue can be rectified as follows:

  1. After the model has been imported, compute its bounding.

  2. If the bounding radius is much smaller than the distance from the bounding's center to the origin, then the geometry data should be edited to place it closer to the origin.

  3. Calculate the vector that would translate the bounding’s center to the origin.

  4. Subtract this vector from the vertices of the geometry in the scene. When doing this, rotations that are found between the model segment and the segment where geometry is being edited need to be taken into account.

        // This snippet assumes that the possibility of very large translations has already been taken into account
        float cutoff = 10000;
        Vector translation_vector = Vector::Zero();
        BoundingKit bounding_kit;

        if (model.GetSegmentKey().ShowBounding(bounding_kit)) {
            SimpleSphere sphere;
            SimpleCuboid cuboid;

            bounding_kit.ShowVolume(sphere, cuboid);

            // compare magnitudes, since the center coordinates may be negative
            if (std::abs(sphere.center.x) / sphere.radius > cutoff || std::abs(sphere.center.y) / sphere.radius > cutoff ||
                std::abs(sphere.center.z) / sphere.radius > cutoff) {
                // The model is very far away from the origin
                translation_vector = Vector(-sphere.center.x, -sphere.center.y, -sphere.center.z);
            }
        }

        if (translation_vector != Vector::Zero()) {
            // edit geometry vertices taking into account any rotations
            // for example, for a shell:

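            // note: a real application would obtain this ShellKey from the model
            // (e.g., via a search); a default-constructed key is only a placeholder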
            ShellKey shell;
            PointArray points;
            shell.ShowPoints(points);
            for (Point& one_point: points)
                one_point += translation_vector;
            shell.EditPointsByReplacement(0, points);
        }

Loading Multiple Models with Large Coordinates into the Same Scene

When multiple models are loaded into the same scene, the strategies discussed above need to be repeated for each model. All transformations across models need to maintain their relative effect with respect to one another; therefore, the same adjustments need to be applied to each model.

Example: A two-story house is divided into two models, each representing one floor. The first floor is loaded, and a very large translation of 10 million units along the X axis is removed, so as to avoid precision issues.

The second floor is then loaded, and we notice that this model, apart from having the same 10-million-unit translation along the X axis, also contains a 50-unit translation along the Y axis. If this were the first model being loaded, we would remove the translation along both axes, but since it is the second model loaded into the same scene, only the previously removed translation should be removed here as well.

The result is that the translation along the X axis is removed, while the translation along the Y axis is maintained, so that the second-floor model is placed correctly above the first-floor model.

If you are loading multiple models using the Exchange Sprocket, this process is automated by Visualize if the following conditions are met:

  1. Each part of the model should be imported with the HPS::Exchange::ImportOptionsKit::SetLargeTranslationExtraction setting enabled.

  2. Models other than the first one should be added to the scene using the HPS::Exchange::ImportOptionsKit::SetLocation method.

  3. The second argument of SetLargeTranslationExtraction, denoting the user data index where extracted translations are saved, should be the same between imports of models that belong in the same scene.