
    Screen-tearing, buffer overwrites and timing issues?

    I have a renderer project that I recently upgraded from VS2015 x86 to VS2019 x64, and I also got a new, more powerful computer. I got maybe a 2-3x FPS increase. However, I ran into a few problems:

    1) Screen tearing. The less intense the game scene, the higher the FPS and the more visible the tearing. When I enable VSync the tearing is reduced, but not completely gone.
    2) One of my per-frame constant buffers appears to have garbage contents (more on that below).
    3) Inspecting the backbuffer in a RenderDoc capture, it appears to contain several different frames depending on at what point you inspect it (this might be a RenderDoc bug, but maybe not).

    I believe the old compiler and/or my older computer hid these issues before, so there must be something wrong with my swapchain setup or some such. My swapchain creation:

        ZeroMemory(&mSwapchainDesc, sizeof(DXGI_SWAP_CHAIN_DESC));
        mSwapchainDesc.BufferCount = 2;
        mSwapchainDesc.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
        mSwapchainDesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
        mSwapchainDesc.OutputWindow = mWindowHandle;
        mSwapchainDesc.SampleDesc.Count = 1;
        mSwapchainDesc.Windowed = true;

        DXCALL(D3D11CreateDeviceAndSwapChain(nullptr, D3D_DRIVER_TYPE_HARDWARE, 0, deviceFlags,
                                             &featureLevel, numFeatureLevels, D3D11_SDK_VERSION,
                                             &mSwapchainDesc, &mSwapchain, &mDevice, nullptr, &mContext));
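
    For comparison, this is roughly what a fully filled-out DXGI_SWAP_CHAIN_DESC would look like as I understand it. The Width/Height, RefreshRate and SwapEffect values below (the fields my ZeroMemory call leaves at zero) are only assumed for illustration and are not my actual code, and the Present() call shows the sync interval that ties presentation to the vertical blank:

        // Sketch only: Width/Height, RefreshRate and SwapEffect are assumed values
        // for illustration; the real code above leaves them at zero via ZeroMemory.
        DXGI_SWAP_CHAIN_DESC desc = {};
        desc.BufferCount = 2;
        desc.BufferDesc.Width = clientWidth;                 // hypothetical client-area size
        desc.BufferDesc.Height = clientHeight;
        desc.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
        desc.BufferDesc.RefreshRate.Numerator = 60;          // assumed 60 Hz display
        desc.BufferDesc.RefreshRate.Denominator = 1;
        desc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
        desc.OutputWindow = mWindowHandle;
        desc.SampleDesc.Count = 1;
        desc.SampleDesc.Quality = 0;
        desc.Windowed = TRUE;
        desc.SwapEffect = DXGI_SWAP_EFFECT_DISCARD;          // a zeroed SwapEffect defaults to this legacy blt model

        // Presenting with a sync interval of 1 waits for the vertical blank,
        // which is what actually prevents tearing:
        mSwapchain->Present(1, 0);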

    I am using deferred rendering and use a light accumulation buffer as the RTV when doing shadows/lights. I then do tonemapping on it and output it to the backbuffer in sRGB format. Then I copy the backbuffer texture to use it as input to an FXAA shader, which renders into the backbuffer again. After that I optionally do a bunch of debug passes directly into the backbuffer as well, before I call Present(). (I also read the depth texture during a depth reduction pass, but I don't think that is relevant to these problems.)
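
    For completeness, a copy like the one I describe for the FXAA input would look roughly like this; mFXAAInput and mFXAAInputSRV are placeholder names for the intermediate texture and its shader resource view, not my actual resources:

        // Sketch of copying the backbuffer into an intermediate texture for FXAA.
        // mFXAAInput must match the backbuffer size/format and be created with
        // D3D11_BIND_SHADER_RESOURCE so the FXAA pass can sample it.
        ID3D11Texture2D* backbuffer = nullptr;
        DXCALL(mSwapchain->GetBuffer(0, __uuidof(ID3D11Texture2D),
                                     reinterpret_cast<void**>(&backbuffer)));

        mContext->CopyResource(mFXAAInput, backbuffer);   // destination first, source second
        backbuffer->Release();

        // Bind the copy's SRV as input while the FXAA pass renders back into the backbuffer.
        mContext->PSSetShaderResources(0, 1, &mFXAAInputSRV);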

    On point 2), I have a very simple way to calculate the tessellation factor:

        float CalculateTessellationfactor( float3 worldPatchMidpoint )
        {
            float cameraToPatchDistance = distance( gWorldEyePos, worldPatchMidpoint );
            float scaledDistance = ( cameraToPatchDistance - gMinZ ) / ( gMaxZ - gMinZ );
            scaledDistance = clamp( scaledDistance, 0.0f, 1.0f );

            return pow( 2, lerp( 6.0f, 2.0f, scaledDistance ) );
        }

    gMinZ/gMaxZ are the min/far Z values that are set once per frame in a constant buffer. When I inspect them in RenderDoc they have the correct values (and this worked before), but apparently they do not when I render; if I hardcode the values in the shader, it works. I assume the buffer is overwritten repeatedly and contains garbage when I read it in the shader?

    ---------------------------------------------------------------------

    All in all, I guess I am rendering too fast and overwriting resources all the time. How would I go about solving this?
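
    (For reference, a standard once-per-frame update of a dynamic constant buffer looks roughly like the sketch below; the struct layout and the names eyeX/eyeY/eyeZ, nearZ, farZ and mPerFrameBuffer are placeholders, not my actual code.)

        // Sketch of a per-frame constant buffer update; names and layout are placeholders.
        // The buffer is assumed to be created with D3D11_USAGE_DYNAMIC and
        // D3D11_CPU_ACCESS_WRITE.
        struct PerFrameCB
        {
            float gWorldEyePos[3];
            float gMinZ;
            float gMaxZ;
            float padding[3];   // pad the buffer to a multiple of 16 bytes
        };

        PerFrameCB data = {};
        data.gWorldEyePos[0] = eyeX;   // placeholder camera position components
        data.gWorldEyePos[1] = eyeY;
        data.gWorldEyePos[2] = eyeZ;
        data.gMinZ = nearZ;
        data.gMaxZ = farZ;

        // WRITE_DISCARD gives the driver a fresh region each frame, so the GPU is
        // never reading the same memory the CPU is overwriting mid-frame.
        D3D11_MAPPED_SUBRESOURCE mapped = {};
        DXCALL(mContext->Map(mPerFrameBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped));
        memcpy(mapped.pData, &data, sizeof(data));
        mContext->Unmap(mPerFrameBuffer, 0);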

    from GameDev.net http://bit.ly/2YiyxMz
