Hi All,

I need some help understanding FSR and how to use it.

Until recently, my only piece of gaming hardware was the Steam Deck. On this, the native (OS-level) FSR is easy to understand: drop the in-game resolution to something less than the display’s native 1280×800 and enable FSR, which then upscales the image. This makes sense to me.

Recently, I got myself a dedicated gaming PC as well (running a 6700 XT). I’ve been playing around with the FSR option using Hellblade: Senua’s Sacrifice as a benchmark, running in DX11 mode and without ray tracing. I’m using a 1080p display.

From AMD’s control software, a ‘Radeon Super Resolution’ (RSR) mode can be enabled, which I understand is basically the same FSR as is running natively on the Steam Deck. It does nothing if the in-game resolution is the same as the display’s native resolution, but as soon as the in-game resolution is lowered, it applies spatial upscaling. So I drop my in-game resolution to 720p, enable RSR, and I can see the upscaling at work. This also makes sense.
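To write down my understanding of RSR’s behaviour, here’s a rough Python sketch (purely conceptual, nothing to do with AMD’s actual driver code; the resolutions are just my setup):

```python
# Purely conceptual sketch of when driver-level RSR kicks in.

DISPLAY_NATIVE = (1920, 1080)  # my 1080p monitor

def rsr_output(game_resolution, rsr_enabled=True):
    """What ends up on screen for a given in-game resolution."""
    if not rsr_enabled or game_resolution == DISPLAY_NATIVE:
        # The game already fills the display natively: RSR has nothing to do
        return game_resolution, "no upscaling"
    # The game renders below native -> the driver spatially upscales the finished frame
    return DISPLAY_NATIVE, "spatially upscaled from {}x{}".format(*game_resolution)

print(rsr_output((1920, 1080)))  # ((1920, 1080), 'no upscaling')
print(rsr_output((1280, 720)))   # ((1920, 1080), 'spatially upscaled from 1280x720')
```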

Where I get confused is how in-game FSR fits into the picture. So Hellblade has native (in-game) FSR implemented. When running at 1080p resolution with no FSR and all settings maxed, I typically see close to 100% GPU utilization. Now, when I enable FSR in-game, still running at 1080p resolution, the GPU utilization drops to 75-80% with almost no visual impact (slight sharpening, it seems, but I wouldn’t notice without side-by-side screenshots). Framerates are of course more stable with the lower utilization.

So I don’t quite understand how this works. Does the game automatically render at a lower resolution (without me having to adjust the in-game resolution) and then upscale? Or why is it not necessary to change the in-game resolution here? Do all games implement native FSR in this manner?

Also, should the two be used mutually exclusively? I tried enabling both (enabling RSR in AMD’s control software, dropping the in-game resolution to 720p, and enabling in-game FSR). It worked, but it certainly looked strange; not sure how to describe it, almost like a watery effect. I’m assuming upscaling was applied twice in this instance?

Anyway, some insight would be much appreciated!

  • Afinda@alien.top · 10 months ago

    Alright, long story short:

    FSR 2.x works differently than FSR 1: it requires game-engine-level information (motion vectors and such) to not only upscale to the target resolution but also reconstruct detail and apply sharpening, whereas FSR 1 doesn’t need any of that and is just a somewhat better spatial upscaling algorithm.

    To be able to use FSR 2.x, the game needs to support it natively, since only the engine can supply that information.
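    To make that concrete, here’s a rough conceptual sketch of what each version consumes (this is an illustration only, not the actual FidelityFX API):

    ```python
    from dataclasses import dataclass

    @dataclass
    class Frame:
        color: bytes          # finished color buffer of the (low-res) rendered frame
        depth: bytes = b""    # per-pixel depth, only the engine has this
        motion: bytes = b""   # per-pixel motion vectors, only the engine has this

    def fsr1_upscale(frame: Frame, target_res):
        # FSR 1 / RSR: purely spatial, only needs the finished color image,
        # which is why the driver can bolt it onto any game.
        return f"color buffer spatially upscaled to {target_res}"

    def fsr2_upscale(frame: Frame, history: list, target_res):
        # FSR 2.x: temporal reconstruction; needs depth, motion vectors and previous
        # frames, which only the game engine can provide -> must be integrated natively.
        if not frame.depth or not frame.motion:
            raise ValueError("FSR 2 needs engine-level data (depth + motion vectors)")
        return f"frame reconstructed and sharpened at {target_res} using {len(history)} history frames"
    ```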

    Typically FSR 2.x, like DLSS, is split into different quality levels. Each renders the frame at a lower resolution than the target, then scales it back up to the target resolution and sharpens it:

    • Quality - 67% (1280 x 720 -> 1920 x 1080)
    • Balanced - 59% (1129 x 635 -> 1920 x 1080)
    • Performance - 50% (960 x 540 -> 1920 x 1080)
    • Ultra Performance - 33% (640 x 360 -> 1920 x 1080)

    Source (AMD)
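
    Those render resolutions are just the target resolution multiplied by the per-axis scale factor of each mode; a quick sanity check in Python:

    ```python
    # Per-axis scale factors for the FSR 2 quality modes (1.5x / 1.7x / 2x / 3x upscaling)
    MODES = {
        "Quality": 1 / 1.5,            # ~67%
        "Balanced": 1 / 1.7,           # ~59%
        "Performance": 1 / 2.0,        # 50%
        "Ultra Performance": 1 / 3.0,  # ~33%
    }

    target_w, target_h = 1920, 1080

    for mode, scale in MODES.items():
        w, h = round(target_w * scale), round(target_h * scale)
        print(f"{mode:>17}: {w} x {h} -> {target_w} x {target_h}")
    ```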

    So why do you see less utilization then?

    There are two things at play here:

    1. Upscaling and reconstruction are cheaper than rendering a native frame, but still a tad more expensive than simply rendering at the lower resolution and leaving it there (cheaper/more expensive in terms of GPU time spent per frame).
    2. Because each frame now costs the GPU less, the framerate climbs, which means the CPU has to issue more draw calls per second to keep the GPU fed; if the CPU can’t keep up with the now less strained GPU, you end up CPU-bottlenecked and GPU utilization drops (see the rough numbers sketched below).
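
    As a rough back-of-the-envelope example (the millisecond numbers are made up for illustration, not measured from Hellblade):

    ```python
    # Hypothetical GPU frame times, purely illustrative
    native_1080p_ms = 14.0   # rendering a native 1080p frame
    render_720p_ms = 7.5     # rendering the same frame at 1280x720 (FSR 2 Quality)
    fsr2_cost_ms = 1.5       # extra cost of FSR 2 reconstruction + sharpening

    fsr_frame_ms = render_720p_ms + fsr2_cost_ms  # 9.0 ms: cheaper than native, pricier than plain 720p

    print(f"Native 1080p:        ~{1000 / native_1080p_ms:.0f} fps possible")  # ~71 fps
    print(f"FSR 2, 1080p target: ~{1000 / fsr_frame_ms:.0f} fps possible")     # ~111 fps

    # If the CPU can only prepare ~90 frames' worth of draw calls per second,
    # the GPU sits idle part of the time -> utilization drops below 100%.
    cpu_limit_fps = 90
    gpu_busy_fraction = cpu_limit_fps * fsr_frame_ms / 1000
    print(f"GPU utilization when CPU-limited at {cpu_limit_fps} fps: {gpu_busy_fraction:.0%}")  # ~81%
    ```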

    Bonus: if you lock your framerate and the target FPS is reached with low GPU utilization and no stutter, your GPU can easily handle what’s being thrown at it and doesn’t need to go the extra mile to keep up.