XeSS is Intel’s answer to Nvidia DLSS, and it has one big advantage

On the heels of the Intel Arc announcement earlier this week, Intel has revealed more details about its upcoming GPUs and the technologies they’ll feature, chief among them XeSS. XeSS is a new supersampling feature for Arc GPUs, and it looks to combine the best of competing technologies from AMD and Nvidia.

Functionality-wise, XeSS works a lot like Nvidia’s Deep Learning Super Sampling (DLSS). It uses a neural network that has been trained on images of a game to reconstruct details from neighboring pixels, and critically, it leverages previous frames to track motion through a scene. This temporal information is what sets DLSS apart, and now it looks to have a competitor in the form of XeSS.
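
To make the temporal part concrete, here is a minimal sketch of the inputs such an upscaler consumes: the new low-resolution frame, per-pixel motion vectors, and the previous high-resolution output, which is reprojected so detail accumulated over earlier frames can be reused. This is an illustration only, not Intel’s or Nvidia’s actual pipeline, and the fixed blend at the end stands in for the trained network.

```python
import numpy as np

def reproject(prev_output, motion_vectors):
    """Warp the previous upscaled frame toward the current camera view."""
    h, w = prev_output.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip((xs - motion_vectors[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip((ys - motion_vectors[..., 1]).astype(int), 0, h - 1)
    return prev_output[src_y, src_x]

def temporal_upscale(low_res, motion_vectors, prev_output, scale=2):
    """Naive stand-in for the learned reconstruction step."""
    # Spatially enlarge the new frame (nearest-neighbor, for brevity).
    upscaled = low_res.repeat(scale, axis=0).repeat(scale, axis=1)
    # Pull detail from the previous high-resolution output.
    history = reproject(prev_output, motion_vectors)
    # A trained network decides per pixel how much history to trust;
    # a fixed blend stands in for that decision here.
    return 0.7 * history + 0.3 * upscaled

low = np.random.rand(540, 960)        # new 540p frame
prev = np.random.rand(1080, 1920)     # previous 1080p output
mv = np.zeros((1080, 1920, 2))        # static scene: no motion
print(temporal_upscale(low, mv, prev).shape)  # (1080, 1920)
```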

Intel XeSS quality comparison.

Intel demoed a scene upscaled from 1080p to 4K, nearly matching the quality of native 4K. The demo was running on an Intel Alchemist graphics card; Alchemist is the code name for Intel’s first-generation Arc products.

Intel claims that XeSS can provide up to a 2x performance increase over native rendering. That sounds great, but Intel didn’t show XeSS running in any games or announce any titles that will support the feature (though Intel said “several” game developers are already engaged with XeSS).

The difference between DLSS and XeSS

XeSS looks functionally similar to DLSS, but it has one key difference. Intel announced that it will release two XeSS modes. One will leverage Intel’s dedicated XMX, or Xe Matrix Extension, cores to handle the artificial intelligence (A.I.) supersampling. That’s similar to how Nvidia leverages its Tensor cores in RTX graphics cards to upscale with DLSS.

The other mode is where things get interesting. It will use the DP4a instruction, which already handles A.I. operations on recent Nvidia graphics cards and recent Intel integrated graphics. Intel claims there’s a “smart” performance and quality trade-off for the DP4a version. Either way, it means that XeSS will still work on hardware that doesn’t have XMX cores.
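
DP4a itself is just a packed dot-product instruction: it multiplies four 8-bit values against four 8-bit weights and adds the result to a 32-bit accumulator in one operation, which is why it suits low-precision inference on GPUs without dedicated matrix hardware. The snippet below only emulates that behavior to show the math involved.

```python
import numpy as np

def dp4a(a, b, acc):
    """Emulate DP4a: four int8 multiplies summed into an int32 accumulator."""
    a = np.asarray(a, dtype=np.int8)
    b = np.asarray(b, dtype=np.int8)
    return int(acc) + int(np.dot(a.astype(np.int32), b.astype(np.int32)))

# Four neighboring pixel values against a tiny int8 filter:
pixels = [12, -5, 7, 30]
weights = [3, 2, 1, -1]
print(dp4a(pixels, weights, acc=0))  # 3*12 + 2*(-5) + 1*7 + (-1)*30 = 3
```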

Intel XeSS rendering pipeline demonstration.

That brings one of the major benefits of AMD’s FidelityFX Super Resolution (FSR) to XeSS. FSR supports just about any graphics card, while DLSS works solely on Nvidia RTX graphics cards. By offering two different software development kits (SDKs), Intel can support a wide range of hardware while still running A.I.-assisted upscaling on dedicated silicon.
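
A hypothetical selection routine makes the two-SDK idea easier to picture; the capability names and function below are placeholders, not part of Intel’s SDK. The point is simply that one feature can offer a fast path for GPUs with XMX units and a DP4a fallback for everything else.

```python
def pick_xess_backend(gpu_caps):
    """Hypothetical check for which XeSS path a GPU could run."""
    if gpu_caps.get("has_xmx_units"):
        return "xmx"   # dedicated matrix engines on Arc GPUs
    if gpu_caps.get("supports_dp4a"):
        return "dp4a"  # packed int8 dot products on general shader cores
    return None        # neither path is available

print(pick_xess_backend({"has_xmx_units": True}))   # xmx
print(pick_xess_backend({"supports_dp4a": True}))   # dp4a
print(pick_xess_backend({}))                        # None
```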

That’s something we wanted to see out of Nvidia following the launch of FSR. XeSS looks like it will provide the best of both worlds, giving users with dedicated hardware a proper A.I.-assisted supersampling tool without restricting the feature to a certain platform. That said, we still need to wait on game support and independent testing to see how it stacks up.

Intel said that the XMX version of the XeSS SDK will be available later this month. The DP4a version will arrive “later this year.” Unfortunately, Intel didn’t provide any more details on the release window for the DP4a version. Regardless of the version, Intel says XeSS can easily fit in existing rendering pipelines.

A look inside Xe HPG

XeSS uses dedicated XMX units to handle the A.I. supersampling, and there are 16 XMX units on each Xe Core. Going forward, Intel is retiring the term “execution units,” which described the number of cores on previous versions of Xe graphics. Instead, it will use “Xe Core” to describe what its GPUs have inside.

Each Xe Core features 16 XMX units and 16 vector engines, and there are four Xe Cores on each render slice. On the slice, each Xe Core also gets a dedicated ray-tracing unit for DirectX 12 and Vulkan ray tracing, as well as a shared L2 cache. This slice is the building block of cards using the Xe HPG architecture, and Intel is able to add or remove slices to meet performance targets.

Intel Xe HPG render slice model.

Currently, Intel is able to use eight slices on a graphics card, totaling 32 Xe Cores and 512 vector engines. Previously, reports suggested that Intel’s flagship Alchemist card would use 512 execution units, and that still appears to be the case. The only thing different now is the naming.
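
Those totals follow directly from the building blocks described above, assuming the full eight-slice configuration:

```python
XE_CORES_PER_SLICE = 4
VECTOR_ENGINES_PER_XE_CORE = 16
XMX_UNITS_PER_XE_CORE = 16
MAX_RENDER_SLICES = 8

xe_cores = MAX_RENDER_SLICES * XE_CORES_PER_SLICE        # 32 Xe Cores
vector_engines = xe_cores * VECTOR_ENGINES_PER_XE_CORE   # 512 vector engines
xmx_units = xe_cores * XMX_UNITS_PER_XE_CORE             # 512 XMX units
print(xe_cores, vector_engines, xmx_units)               # 32 512 512
```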

Overall, Intel says that the Xe HPG architecture provides a 1.5x increase in frequency over Xe LP, as well as a 1.5x increase in performance per watt. These numbers come from Intel’s internal benchmarks, however, so we’ll need to wait until we have some time with a card to draw any conclusions.

Still, our first look at Xe HPG and XeSS is exciting. For decades, Nvidia and AMD have been the only two options for graphics cards. Intel is not only releasing a graphics card for gamers but one that comes with all the fittings modern GPUs need.

We’ll have to wait to see how Alchemist cards perform when they launch, but it would be nice to see a blue wave in a sea that’s been dominated by red and green for too long.

Jacob Roach
Lead Reporter, PC Hardware