
Intel Sapphire Rapids vs AMD Genoa – Which One Is the Best CPU?

Intel and AMD have both announced their next-generation server CPUs, and both promise higher performance and a longer platform life span. So today we look at Sapphire Rapids vs Genoa in detail.

Intel states that SPR-HBM can be used alongside standard DDR5, offering an additional tier of memory caching. The HBM can be addressed directly or, as we understand it, left to act as an automatic cache, which would be very similar to how Intel’s Xeon Phi processors could access their high-bandwidth memory.

Sapphire Rapids vs Genoa


Specifications

Specification          Intel Sapphire Rapids         AMD Genoa
Architecture           Intel 7                       Zen 4
DDR Support            DDR5-4800                     DDR5-5200
Memory Channels        8-channel                     12-channel
Max System Memory      Up to 12 TB
Max Cores              56                            96
Max Threads            112                           192
Socket Type            LGA 4677                      LGA 6096
Code Name              Sapphire Rapids               Genoa (EPYC 9004)
Process Node           Intel 7 (10 nm class)         TSMC 5 nm
Max TDP                Up to 350 W                   320 W (cTDP 400 W)
Launch Year            2022                          2022
PCIe                   PCIe 5.0, 80 lanes            PCIe 5.0, 128 lanes
Max L3 Cache           105 MB                        384 MB
Platform Name          Eagle Stream                  SP5

Alternatively, SPR-HBM can work without any DDR5 at all. This reduces the physical footprint of the processor, allowing for a denser design in compute-dense servers that do not rely much on memory capacity (these customers were already asking for quad-channel design optimizations anyway).

Neither the amount of memory nor the bandwidth or underlying technology was disclosed. At the very least, we expect the equivalent of up to 8-Hi stacks of HBM2e at up to 16 GB each, with 1-4 stacks on board leading to 64 GB of HBM. At a theoretical top speed of 460 GB/s per stack, this would mean 1,840 GB/s of bandwidth, although we can imagine something closer to 1 TB/s for yield and power reasons, which would still give a sizeable uplift. Depending on demand, Intel may offer different memory configurations across different processor options.
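To put those numbers in perspective, here is a quick back-of-the-envelope sketch in Python. The stack count, per-stack capacity, and the derated bandwidth figure are assumptions drawn from the estimates above, not disclosed specifications:

```python
# Back-of-the-envelope check of the HBM figures discussed above.
# Assumed values (not confirmed by Intel): four 8-Hi HBM2e stacks,
# 16 GB and ~460 GB/s theoretical peak per stack.

STACKS = 4
GB_PER_STACK = 16
PEAK_BW_PER_STACK_GBS = 460
DERATED_BW_PER_STACK_GBS = 250  # hypothetical derating for yield and power

capacity_gb = STACKS * GB_PER_STACK
peak_bw_gbs = STACKS * PEAK_BW_PER_STACK_GBS
derated_bw_gbs = STACKS * DERATED_BW_PER_STACK_GBS

print(f"Capacity: {capacity_gb} GB of HBM")            # 64 GB
print(f"Theoretical bandwidth: {peak_bw_gbs} GB/s")    # 1840 GB/s
print(f"Derated estimate: ~{derated_bw_gbs} GB/s")     # ~1000 GB/s, roughly 1 TB/s
```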

One of the key elements to consider here is that on-package memory has an associated power cost within the package. For every watt that the HBM requires inside the package, that is one less watt available for computational performance on the CPU cores. That said, server processors often do not push the boundaries on peak frequencies, instead opting for a more efficient power/frequency point and scaling out the cores. Even so, HBM is a tradeoff in this regard: if HBM were to draw 10-20 W per stack, four stacks would easily eat into the power budget for the processor (and that power budget has to be managed with additional controllers and power delivery, adding complexity and cost).
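As a rough illustration of that tradeoff, the sketch below applies the 10-20 W per stack figure against the 350 W package TDP from the table earlier. The numbers are illustrative assumptions, not measured values:

```python
# Rough illustration of the package power tradeoff described above.
# All figures are assumptions for the sketch, not Intel specifications.

PACKAGE_TDP_W = 350           # top Sapphire Rapids TDP from the table above
HBM_STACKS = 4
WATTS_PER_STACK = (10, 20)    # the 10-20 W per stack range mentioned above

for w in WATTS_PER_STACK:
    hbm_power = HBM_STACKS * w
    remaining = PACKAGE_TDP_W - hbm_power
    share = 100 * hbm_power / PACKAGE_TDP_W
    print(f"{w} W/stack -> {hbm_power} W for HBM "
          f"({share:.0f}% of TDP), {remaining} W left for the cores")
```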

One thing that was confusing about Intel’s presentation (I asked about this, but my question was ignored during the virtual briefing) is that Intel keeps putting out different package images of Sapphire Rapids. In the briefing deck for this announcement, there were already two variants: one that looks like an elongated Xe-HP package with a logo on it, and one that is more square and has different notches.

Sapphire Rapids will also be the first Intel processor to support Advanced Matrix Extensions (AMX), which we understand will help accelerate matrix-heavy workflows such as machine learning, alongside BFloat16 support. This will be paired with updates to Intel’s DL Boost software and OneAPI support. As Intel processors are still very popular for machine learning, especially training, Intel wants to capitalize on any future growth in this market with Sapphire Rapids. SPR will also be updated with Intel’s latest hardware-based security.
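For readers who want to check whether a given Linux host actually exposes AMX, a minimal sketch is to read the CPU feature flags from /proc/cpuinfo and look for the amx_tile, amx_bf16, and amx_int8 entries (and avx512_bf16 for BFloat16). This is a Linux-only convenience check, not an Intel-provided tool:

```python
# Check whether a Linux host reports the AMX feature flags that
# Sapphire Rapids is expected to expose. Reads /proc/cpuinfo rather
# than issuing CPUID directly, so it only works on Linux.

def cpu_flags(path="/proc/cpuinfo"):
    """Return the set of CPU feature flags from the first 'flags' line."""
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
for feature in ("amx_tile", "amx_bf16", "amx_int8", "avx512_bf16"):
    print(f"{feature}: {'yes' if feature in flags else 'no'}")
```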

It is highly anticipated that Sapphire Rapids will also be Intel’s first multi-compute-die Xeon in which the silicon is designed to be integrated (we’re not counting Cascade Lake-AP hybrids). There are unconfirmed leaks suggesting this is the case, but nothing that Intel has yet verified.
