Intel revealed more details about its upcoming next-generation Xeon processors at Hot Chips 2023, a conference where many of the industry’s leading chip design firms showcase their latest innovations. The new Xeon processors, codenamed Sierra Forest and Granite Rapids, are expected to launch in 2024 and will bring Intel’s E-core and P-core architectures to the data center for improved performance, efficiency, and scalability.
What are E-cores and P-cores?
E-cores and P-cores are Intel’s new core types that are designed for different workloads and use cases. E-cores stand for Efficient cores and are optimized for high-density and low-power scenarios such as cloud computing, edge computing, and IoT devices. P-cores stand for Performance cores and are optimized for high-performance and high-throughput scenarios such as high-end servers, workstations, and gaming PCs.
E-cores and P-cores were first introduced in Intel’s 12th generation Core (Alder Lake) processors for consumer devices in 2021. They use a hybrid design that combines both core types in a single chip to deliver adaptive performance across a wide range of applications. For the Xeon processors, however, Intel has decided on a homogeneous design that uses only one core type per chip to offer more flexibility and customization for data center customers.
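For readers who want to see how software tells the two core types apart, the following sketch (a minimal example, not Intel sample code) reads CPUID leaf 0x1A to report whether the current thread is running on an E-core or a P-core. It assumes a GCC or Clang toolchain on x86-64 Linux; on the homogeneous Xeon designs described here, every core would report the same type.

```c
/* Minimal sketch: detect the core type the current thread runs on.
 * CPUID leaf 0x1A is only meaningful when the hybrid flag
 * (CPUID.07H:0:EDX[15]) is set, e.g. on Alder Lake-class parts. */
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid_count(0x07, 0, &eax, &ebx, &ecx, &edx) ||
        !(edx & (1u << 15))) {
        puts("Not a hybrid CPU: all cores are the same type.");
        return 0;
    }

    if (!__get_cpuid_count(0x1A, 0, &eax, &ebx, &ecx, &edx)) {
        puts("CPUID leaf 0x1A not supported.");
        return 0;
    }

    /* Bits 31:24 of EAX encode the core type:
     * 0x20 = Intel Atom (E-core), 0x40 = Intel Core (P-core). */
    unsigned int core_type = (eax >> 24) & 0xFF;
    if (core_type == 0x20)
        puts("This thread is running on an E-core.");
    else if (core_type == 0x40)
        puts("This thread is running on a P-core.");
    else
        printf("Unknown core type: 0x%02x\n", core_type);

    return 0;
}
```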
How are Sierra Forest and Granite Rapids different?
Sierra Forest and Granite Rapids are the two variants of Intel’s next-generation Xeon processors, each using a different core type for a different market segment: Sierra Forest will use E-cores exclusively, while Granite Rapids will use P-cores exclusively.
Sierra Forest will be Intel’s first E-core Xeon processor for data center use, based on the new Sierra Glen microarchitecture derived from Intel’s Atom family of low-power cores. It will be fabricated on Intel’s EUV-based Intel 3 process node, which offers higher transistor density and lower power consumption than previous nodes. Sierra Forest will be the first of the two to launch, in the first half of 2024, and will target cloud service providers, hyperscalers, telcos, and edge computing customers who need high-density, low-power, and cost-effective solutions.
Granite Rapids will be Intel’s next P-core Xeon processor for data center use, based on the new Redwood Cove microarchitecture derived from Intel’s Core family of high-performance cores. It will also be fabricated on the Intel 3 process node, but will exploit the node’s performance potential more aggressively than Sierra Forest does. Granite Rapids will launch shortly after Sierra Forest in 2024 and will target enterprise customers who need high-performance, high-throughput, and high-reliability solutions.
What are the benefits of using E-cores and P-cores?
By using E-cores and P-cores in its Xeon processors, Intel aims to offer more choice, flexibility, and scalability to its data center customers. E-cores and P-cores will enable Intel to tailor its Xeon processors to different market segments and workload requirements without compromising on compatibility and interoperability.
E-cores will offer significant benefits in terms of rack density and performance per watt. According to Intel, Sierra Forest will provide up to 2.5x better rack density and 2.4x higher performance per watt than its fourth generation Xeon processors (Sapphire Rapids). E-cores will also enable Intel to scale up the core count of its Xeon processors without increasing the power envelope or the die size.
P-cores will offer significant benefits in terms of performance and AI capabilities. According to Intel, Granite Rapids will provide two to three times the performance of Sapphire Rapids in mixed AI workloads, thanks in part to enhancements to the AMX (Advanced Matrix Extensions) instruction set, which accelerates matrix operations for deep learning applications. P-cores will also enable Intel to scale up the memory bandwidth and capacity of its Xeon processors by supporting the new MCR (Multiplexer Combined Ranks) DIMM technology, which provides 30-40% more memory bandwidth than standard DIMMs.
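To make the AMX claim concrete, here is a minimal sketch of an AMX INT8 tile multiply using the intrinsics that GCC and Clang already expose for Sapphire Rapids-class hardware; it is not Granite Rapids-specific code, and the arch_prctl call reflects how current Linux kernels gate access to the tile register state. It multiplies one 16x64 signed-byte tile by another and accumulates the result into a 16x16 32-bit tile.

```c
/* Minimal AMX INT8 sketch: multiply one 16x64 signed-byte tile by another
 * and accumulate into a 16x16 32-bit tile.
 * Compile with: gcc -O2 -mamx-tile -mamx-int8 amx_demo.c */
#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>

#define ARCH_REQ_XCOMP_PERM 0x1023   /* Linux: request extended-state permission */
#define XFEATURE_XTILEDATA  18       /* the AMX tile-data state component        */

/* 64-byte tile configuration block, as defined by the AMX architecture. */
struct tile_config {
    uint8_t  palette_id;
    uint8_t  start_row;
    uint8_t  reserved[14];
    uint16_t colsb[16];   /* bytes per row for each tile register   */
    uint8_t  rows[16];    /* number of rows for each tile register  */
};

int main(void)
{
    /* Ask the kernel for permission to use AMX tile state. */
    if (syscall(SYS_arch_prctl, ARCH_REQ_XCOMP_PERM, XFEATURE_XTILEDATA)) {
        puts("AMX not available on this CPU/kernel.");
        return 1;
    }

    static int8_t  a[16][64], b[16][64];
    static int32_t c[16][16];
    memset(a, 1, sizeof a);
    memset(b, 2, sizeof b);   /* b would normally use the 4-byte interleaved
                               * layout; uniform values make that moot here. */

    static struct tile_config cfg __attribute__((aligned(64)));
    cfg.palette_id = 1;
    cfg.rows[0] = 16; cfg.colsb[0] = 64;   /* tmm0: C, 16 x 16 int32 */
    cfg.rows[1] = 16; cfg.colsb[1] = 64;   /* tmm1: A, 16 x 64 int8  */
    cfg.rows[2] = 16; cfg.colsb[2] = 64;   /* tmm2: B, 16 x 64 int8  */
    _tile_loadconfig(&cfg);

    _tile_zero(0);                         /* clear the accumulator tile   */
    _tile_loadd(1, a, 64);                 /* load A, 64-byte row stride   */
    _tile_loadd(2, b, 64);                 /* load B, 64-byte row stride   */
    _tile_dpbssd(0, 1, 2);                 /* tmm0 += tmm1 * tmm2 (signed) */
    _tile_stored(0, c, 64);                /* write the 16x16 result back  */
    _tile_release();

    printf("c[0][0] = %d\n", c[0][0]);     /* 64 products of 1*2 = 128 */
    return 0;
}
```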
How are Sierra Forest and Granite Rapids designed?
Sierra Forest and Granite Rapids will share the same platform design that uses a chiplet-based approach to combine multiple dies on a single package. This design allows Intel to reuse the same I/O chiplet for both processors and vary the number and type of compute chiplets depending on the core count and core type.
The I/O chiplet will be built on Intel’s Intel 7 process node (formerly known as 10nm Enhanced SuperFin) and will provide the common interface for memory, PCIe, UPI, and other peripherals. The I/O chiplet will support up to 136 lanes of PCIe 5.0/CXL 2.0, up to 6 UPI links, and various acceleration engines for compression, cryptography, and data streaming.
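As a practical aside, the link speed and width that each PCIe device actually negotiates on a given platform can be read from Linux sysfs. The sketch below is an illustration of that interface rather than an Intel tool: it walks /sys/bus/pci/devices and prints the negotiated speed and width for every device that reports them.

```c
/* Sketch: list the negotiated PCIe link speed/width that Linux reports
 * for each device under /sys/bus/pci/devices. */
#include <dirent.h>
#include <stdio.h>
#include <string.h>

static void read_attr(const char *dev, const char *attr, char *out, size_t n)
{
    char path[512];
    snprintf(path, sizeof path, "/sys/bus/pci/devices/%s/%s", dev, attr);
    out[0] = '\0';
    FILE *f = fopen(path, "r");
    if (f) {
        if (fgets(out, (int)n, f))
            out[strcspn(out, "\n")] = '\0';   /* strip trailing newline */
        fclose(f);
    }
}

int main(void)
{
    DIR *d = opendir("/sys/bus/pci/devices");
    if (!d) {
        perror("opendir");
        return 1;
    }

    struct dirent *e;
    while ((e = readdir(d)) != NULL) {
        if (e->d_name[0] == '.')
            continue;
        char speed[64], width[64];
        read_attr(e->d_name, "current_link_speed", speed, sizeof speed);
        read_attr(e->d_name, "current_link_width", width, sizeof width);
        if (speed[0] && width[0])
            printf("%-14s %s x%s\n", e->d_name, speed, width);
    }
    closedir(d);
    return 0;
}
```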
The compute chiplets will be built on Intel’s Intel 3 process node and will contain the cores, caches, memory controllers, and fabric. The compute chiplets will use either E-cores or P-cores depending on whether they belong to Sierra Forest or Granite Rapids. The memory controllers on the compute chiplets will support up to 12 channels of DDR5-6400 memory per socket (either standard or MCR), and all cores will share their L3 cache in a logically monolithic mesh.
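To put those memory figures in perspective, the short calculation below works out the theoretical peak bandwidth of 12 DDR5-6400 channels and of the same channels running MCR DIMMs. The 8800 MT/s MCR data rate is an assumption based on publicly demonstrated MCR DIMM speeds, not a figure from Intel’s Hot Chips presentation; with it, the uplift lands in the 30-40% range cited above.

```c
/* Back-of-the-envelope peak memory bandwidth for a 12-channel platform.
 * The 8800 MT/s MCR data rate is an assumption, not an Intel figure. */
#include <stdio.h>

static double peak_gbps(int channels, double mega_transfers, int bus_bytes)
{
    /* channels * transfers per second * bytes per transfer, in GB/s */
    return channels * mega_transfers * 1e6 * bus_bytes / 1e9;
}

int main(void)
{
    const int channels  = 12;
    const int bus_bytes = 8;   /* 64-bit data path per DDR5 channel */

    double std_bw = peak_gbps(channels, 6400.0, bus_bytes);
    double mcr_bw = peak_gbps(channels, 8800.0, bus_bytes);

    printf("DDR5-6400, 12 channels : %.1f GB/s\n", std_bw);   /* ~614 GB/s  */
    printf("MCR @ 8800 MT/s        : %.1f GB/s (+%.0f%%)\n",
           mcr_bw, 100.0 * (mcr_bw / std_bw - 1.0));          /* ~845, +38% */
    return 0;
}
```

For reference, a fourth generation Xeon (Sapphire Rapids) socket with 8 channels of DDR5-4800 peaks at roughly 307 GB/s, which is consistent with the up to 2.8x memory bandwidth figure in the table below.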
The I/O chiplet and the compute chiplets will be connected by Intel’s EMIB (Embedded Multi-Die Interconnect Bridge) technology, which provides high-bandwidth, low-latency links between the dies. The EMIB bridges are embedded within the package substrate and use micro-bumps to connect the dies.
The following table summarizes the main features and specifications of Sierra Forest and Granite Rapids (improvement figures are relative to fourth generation Xeon, Sapphire Rapids):
Feature | Sierra Forest | Granite Rapids |
---|---|---|
Core Type | E-core | P-core |
Microarchitecture | Sierra Glen | Redwood Cove |
Process Node | Intel 3 | Intel 3 |
Launch Date | H1 2024 | H2 2024 |
Target Segment | Cloud, Edge, IoT | Enterprise, HPC |
Rack Density | Up to 2.5x higher | Similar |
Performance per Watt | Up to 2.4x higher | Similar |
AI Performance | Similar | Up to 3x higher |
Memory Bandwidth | Similar | Up to 2.8x higher |
Memory Capacity | Similar | Higher |
PCIe/CXL Lanes | Up to 136 | Up to 136 |
UPI Links | Up to 6 | Up to 6 |
Acceleration Engines | Yes | Yes |
What are the implications of using E-cores and P-cores?
By using E-cores and P-cores in its Xeon processors, Intel is making a bold move to diversify its data center portfolio and compete with rivals such as AMD, NVIDIA, and the Arm-based CPU vendors. E-cores and P-cores will allow Intel to address different customer needs and preferences with more granularity and efficiency than before.
E-cores will help Intel gain more traction in the fast-growing cloud computing market, where power efficiency, scalability, and cost are key factors. E-cores will also help Intel expand its presence in the emerging edge computing and IoT markets, where low-power, high-density, and flexible solutions are in high demand.
P-cores will help Intel maintain its leadership in the traditional enterprise computing market, where performance, reliability, and compatibility are paramount. They will also strengthen Intel’s AI capabilities and offerings, where high throughput, high bandwidth, and high performance are essential, and let Intel leverage its new MCR DIMM technology, which offers a novel way to increase memory bandwidth and capacity without increasing the number of DIMMs.
By using E-cores and P-cores in its Xeon processors, Intel is also making a trade-off between simplicity and complexity. On one hand, E-cores and P-cores will simplify the design and manufacturing of Intel’s Xeon processors by using a common platform and process node. On the other hand, E-cores and P-cores will complicate the product portfolio and marketing of Intel’s Xeon processors by introducing more variants and options for customers to choose from.
Intel hopes that this E-core and P-core strategy will help it regain the competitive edge and data center market share that rivals have eroded in recent years, and deliver more value and innovation to its customers and partners in the data center ecosystem.