
    Floating-point precision problems with world-to-screen

    Hi, I am working on a target reticle for a 3D game. You know, your typical HUD element that sits on top of an object in the world. If you turn your camera away from the object so that it goes off screen, the marker sticks to an edge of the screen, giving you a hint of how to turn to face it again. In some games these turn into arrows as they stick to the corners, etc.

    I am doing it by transforming a world-space coordinate to NDC through my camera, something like this:

        Vec4 posHomogeneous(worldPosition, 1.0f);
        Vec4 posInClipSpace = posHomogeneous * camera->getViewMatrix() * camera->getProjectionMatrix();
        Vec3 posInNDC(posInClipSpace.x / posInClipSpace.w,
                      posInClipSpace.y / posInClipSpace.w,
                      posInClipSpace.z / posInClipSpace.w);

    That works well in most cases, and from the NDC coordinate I can get the screen coordinates to draw the HUD element. However, in some cases this introduces precision problems that make the HUD element jump around on the screen. It seems to come from the w component of posInClipSpace. Imagine you are standing at (0, 0, 0) and the target object is at (100, 0, 0), i.e. a hundred units to the right of you. As you turn left/right, the object switches between being "in front of you" and "behind you", but its depth along the view axis stays very close to zero. This, of course, also makes posInClipSpace.w very small, which in turn causes the NDC values to jump around when the perspective divide (third line above) is performed.

    How would you solve this? I was thinking you don't necessarily need to use clip space or NDC space for off-screen objects and could instead rely on view space for that, but it somehow seems "mathematically incorrect". Any ideas?

    EDIT: Well, I tried simply feeding the view-space coordinates into it and it works surprisingly well. Maybe this is the right way to go...
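    The view-space idea from the EDIT can be made concrete: an edge marker for an off-screen target only needs the target's direction around the view axis, and that angle can be read straight out of view space with atan2, so no division by a near-zero w ever happens. A minimal sketch of that approach, assuming a camera that looks down -z with +x right and +y up; the Vec2/Vec3 structs and the offscreenDirection name are placeholders for the engine's own types, not from the original code:

        #include <cmath>

        // Hypothetical minimal math types standing in for the engine's vectors.
        struct Vec2 { float x, y; };
        struct Vec3 { float x, y, z; };

        // Direction from screen center toward the off-screen target,
        // computed purely in view space: no perspective divide, so a tiny
        // (or negative) w can never make the marker jump.
        Vec2 offscreenDirection(const Vec3& posInViewSpace) {
            // Angle of the target around the view axis; atan2 stays stable
            // even when the target is exactly beside or behind the camera.
            float angle = std::atan2(posInViewSpace.y, posInViewSpace.x);
            // Unit direction from screen center; intersect this ray with the
            // screen rectangle (or clamp it) to place the edge marker.
            return Vec2{ std::cos(angle), std::sin(angle) };
        }

    With the example from the post, a target 100 units to the right keeps yielding the direction (1, 0) as its view-space z crosses zero, which is exactly the stability the clip-space path lacks.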

    from GameDev.net http://bit.ly/2IYX85i