
PC image quality enhanced: the new DLSS and XeSS tested

Machine learning-based upscaling has evolved.

Machine learning-based image reconstruction has evolved into a genuinely game-changing technology - and what sets it apart from other PC features is that users can effectively mod improved versions of Nvidia DLSS and Intel XeSS into games with existing support, simply by swapping a .DLL file in the install directory. With that in mind, we wanted to use this unofficial modding technique to offer a preview of sorts of the latest versions of DLSS and XeSS, to see what has changed.
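As a rough illustration of what the swap amounts to - note that the helper below and its defaults are illustrative assumptions (check the actual DLL name in your own install, e.g. nvngx_dlss.dll for DLSS), not a supported procedure:

```python
import shutil
from pathlib import Path

def swap_dll(game_dir: str, new_dll: str, dll_name: str = "nvngx_dlss.dll") -> Path:
    """Back up a game's existing upscaler DLL, then drop the newer one in its place.

    Illustrative sketch only: game_dir is the folder containing the DLL,
    new_dll is the path to the newer SDK's DLL, dll_name is an assumption.
    """
    old = Path(game_dir) / dll_name
    backup = old.with_name(old.name + ".bak")
    if old.exists() and not backup.exists():
        shutil.copy2(old, backup)   # keep the original so the swap is reversible
    shutil.copy2(new_dll, old)      # overwrite with the newer DLL
    return backup
```

Restoring the backup file reverses the mod, which matters since some games verify or re-deploy their shipped DLLs on update.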

The headlines are straightforward enough: the DLSS SDK has been updated to version 3.7 with a new reconstruction model called 'Model E', while XeSS has moved to version 1.3, promising greater quality and stability. We've covered DLSS over the years, but it's been some time since we first looked at XeSS prior to its launch. Back then, my conclusion was that XeSS running on an Intel GPU produced quality similar to DLSS with only a few shortcomings, and did not suffer from the issues we have typically seen with FSR 2. XeSS's use of machine learning is interesting, however, because it ships in multiple versions, occupying a middle ground between what AMD and Nvidia are doing: there's a full-fat ML implementation for Intel's own XMX hardware, along with a DP4a path that lets the majority of modern GPUs enjoy the benefits, with a small hit to quality.

And as XeSS has evolved, users of non-RTX graphics cards have discovered that the DP4a path, while heavier to run, has clear quality advantages over AMD's own FSR 2. At the same time, though, the simplified nature of the DP4a version means that few people have actually seen XeSS at its best. Based on my tests using Horizon Forbidden West, the hardware-based XMX version wins on quality in a few areas, but the most visible is that particles do not have the incessant trails following them that they have in the DP4a version. By swapping the existing XeSS .DLL file for the latest one, however, there's clear improvement in this area. This is the sort of reporting that works best in video format, so I'd encourage you to check out the embedded content below.

A video breakdown of the latest XeSS and DLSS innovations, as tested in Horizon Forbidden West and Ratchet and Clank.

Muddying the waters somewhat is that the new XeSS wholesale changes the native resolutions it upscales from - a move I'm somewhat ambivalent about. Performance mode has long been established as a 2x2 pixel upscale - so 1080p becomes 4K and 720p becomes 1440p, for example. XeSS 1.3, however, changes its upscaling factors: performance mode at 4K now upscales from 900p, while 1440p performance mode now upscales from circa-626p. There are reports of XeSS 1.3 significantly boosting performance over version 1.2, but we're finding that once base resolutions are equalised, performance is the same - any increase in frame-rate is coming from the lower native resolution. While like-for-like quality has increased, there can be noticeable quality downgrades at the new, lower base resolutions.
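To make the resolution maths concrete, here's a quick sketch - the per-axis factors of 2.0x, 2.3x and 2.4x are inferred from the resolutions quoted above, not taken from Intel's documentation:

```python
def base_resolution(out_w: int, out_h: int, scale: float) -> tuple[int, int]:
    """Internal render resolution for a given output resolution and per-axis upscale factor."""
    return round(out_w / scale), round(out_h / scale)

# The long-standing performance-mode norm: a 2x2 pixel upscale (factor 2.0 per axis).
print(base_resolution(3840, 2160, 2.0))  # (1920, 1080) - 1080p to 4K
# The figures quoted for XeSS 1.3 imply factors closer to 2.3-2.4x:
print(base_resolution(3840, 2160, 2.4))  # (1600, 900) - 900p to 4K
print(base_resolution(2560, 1440, 2.3))  # (1113, 626) - circa-626p to 1440p
```

This is why equalising base resolutions matters when benchmarking 1.2 against 1.3: the same mode label no longer means the same internal pixel count.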

So, on balance, is this a good thing or a bad thing? It's great that Intel is adding more granularity to the modes on offer - there's now a native resolution XeSS mode (equivalent to Nvidia DLAA) and there are more modes in total, giving users more options to suit their preferences. However, Intel is also changing established norms, which will lead to confusion: users or reviewers may falsely attribute differences in image quality and performance between the upscalers to the technology rather than the base resolution. Personally, I wish Intel had kept the same scaling levels but added further mode variants - or simply let users choose via a percentage slider.

Even with these caveats, while performance hasn't really changed, image quality has. The DP4a version of XeSS is greatly improved in our hacked-in testing, with less ghosting and less pixelation on large image features like water - though it does seem less stable on large inner-surface detail. The XMX version for Intel GPUs can look slightly less stable and there is a reduction in sharpness, but trailing artefacts are definitely improved and there's less pixelation in water detail - something upscalers tend to find challenging. In version 1.2 of the XMX variant, any depth of field on-screen would exhibit a somewhat distracting jitter, but injecting version 1.3 makes this completely go away, which is nice to see. It's the same with cloud rendering, though there is now a knock-on effect: particles that fly in front of the clouds leave trails behind them on the surface of the clouds, which is a little weird to see.

If you compare the XMX and DP4a paths, the XMX model looks better, benefiting from Intel's machine-learning silicon: there's less aliasing, it's smoother, and it resolves more detail with less smearing. In the 1.2 vs 1.3 comparisons, though, it's the DP4a path that sees the biggest overall improvement. There's still a lot of work for Intel here: in Shadow of the Tomb Raider, for example, issues I found with the game's water rendering are unchanged, while in Ratchet and Clank, version 1.3 still exhibits the same occasional flicker with the game's vignette filter. XeSS 1.3 is a net improvement then, but it's not the finished article.

This brings us onto DLSS 3.7, which remains the undisputed king of the upscalers. There's less to cover here as DLSS is generally mature - but again, there are some avenues for improvement and that seems to be what the new 'Model E' offers. In a fair few titles now, we've seen excessive smearing when the game camera is left static for a period of time - I first noticed it in Hitman 3 and saw it most recently in Avatar: Frontiers of Pandora. These titles use 'Model D', and switching to 'Model C' via mods fixes the issue and improves motion clarity - at the expense of anti-aliasing and reconstruction quality. 'Model E' offers the best of both worlds: it eliminates the ugly smearing but, unlike 'Model C', there is little to no hit to anti-aliasing or reconstruction quality.

Beyond that, though, I haven't found any other noticeable advances over the previous DLSS, and while the new model is good, I still noticed jittering issues with clouds in Avatar. Meanwhile, in Hitman 3, Agent 47's clothing can still show a moiré pattern in the new 3.7 preset, just like older versions, and Dragon's Dogma 2 still sees smearing artefacts in grass. So, from my perspective, the main thing this new version of DLSS does is fix the potential for large smears to show up on screen - and not much else. Still, it's nice to see.

AMD FSR 2 vs Intel's dual XeSS solutions vs Nvidia DLSS 3.7 (Model E) in Horizon Forbidden West. | Image credit: Digital Foundry

Next up, how do the new upscalers compare with FSR 2? AMD's tech is ripe for improvement, lacking good anti-aliasing on moving objects. There's also a generic fizzle, while transparent elements like foliage have an ugly, crunchy look. XeSS's DP4a path is clearer and cleaner, but there can be slight blurriness and smearing. Looking at DLSS, the reconstruction of moving objects is much cleaner, with lines completed and detail resolving more pleasingly, and no blurriness or smearing to speak of.

The XMX rendition of XeSS for Intel GPUs is a clear improvement over DP4a, but still falls slightly short of the quality and clarity offered by the latest 3.7 version of DLSS. This isn't the end of the story, however, as FSR 3.1 is coming and an implementation for Ratchet and Clank has been confirmed. AMD is on record as saying that objects in motion should look much cleaner, with less ghosting.

It's good to see that both Intel and Nvidia haven't left their upscaling technologies as is - and it's frankly wondrous that with a simple DLL switch, we can actually improve image quality in existing games. The last remaining question is the extent to which AMD can close the large quality gap without machine learning - and we'll be reporting back on that as soon as the first FSR 3.1 title arrives.
