Revolutionary AMD EPYC Partnership for Clusters and Workstations
When resources (i.e., compute capacity and storage) are delivered in mismatched ratios, performance and cost-effectiveness suffer. Ace Computers HPC clusters and servers integrated with AMD EPYC address these problems.
The AMD EPYC SoC (system on a chip) optimizes performance at the platform level, delivering the highest core count, largest memory capacity, greatest memory bandwidth, and greatest I/O density in the industry.
Actual cluster/workstation performance depends on the ratio of resources to balance performance and minimize bottlenecks. The AMD EPYC SoC has the memory capacity and bandwidth to satisfy the high demand of the processor cores for data. It also has I/O bandwidth that matches the capability of the CPU cores to move data to and from the network, spinning disks, NVMe storage, and graphics acceleration utilities.
The AMD EPYC SoC eliminates performance gaps with innovations designed to efficiently support the needs of datacenters now and years down the road. Consider these benefits:
- Performance: The highest core count in an x86-architecture server processor, largest memory capacity, most memory bandwidth, and greatest I/O density are allotted with the right ratios to reach new levels of performance.
- Flexibility: Matches core count with application needs without compromising processor features.
- Security: AMD created the first dedicated security processor embedded in an x86-architecture server SoC. The processor manages secure boot, memory encryption, and secure virtualization on the SoC itself. Encryption keys can stay within the processor.
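On Linux, the memory-encryption capabilities mentioned above surface as CPU feature flags ("sme" for Secure Memory Encryption, "sev" for Secure Encrypted Virtualization) in /proc/cpuinfo. The sketch below shows one way to check for them by parsing a flags line; the sample string is illustrative, not captured from real hardware.

```python
# Sketch: detect AMD memory-encryption features from a cpuinfo
# "flags" line. On supporting EPYC systems, Linux reports the
# flags "sme" (Secure Memory Encryption) and "sev" (Secure
# Encrypted Virtualization).

def encryption_features(flags_line: str) -> dict:
    """Return which AMD memory-encryption flags appear in a cpuinfo flags line."""
    flags = set(flags_line.split())
    return {"sme": "sme" in flags, "sev": "sev" in flags}

# Illustrative (hypothetical) flags excerpt, not taken from a live system:
sample = "fpu vme de pse tsc msr sse sse2 avx2 sme sev"
print(encryption_features(sample))  # {'sme': True, 'sev': True}
```

On a live system, the same check could be run against each "flags" line read from /proc/cpuinfo.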
AMD EPYC processors can equal or exceed the computing power required at lower cost, with high core clocks, high core counts, and low power draw. Based on AMD testing, EPYC offers up to a 37% performance boost over the previous-generation Opteron processors, and supports DDR4-2666 memory and PCIe 3.0. The processor:
- Supports up to 21.3 GB/s per channel with DDR4-2666 x 8 channels (170.7 GB/s total), versus the Xeon E5-2699A v4 processor at 19.2 GB/s per channel with max DDR4-2400 x 4 channels (76.8 GB/s total). NAP-03.
- Offers up to 128 high-speed PCI Express I/O lanes per socket, versus the Xeon E5-2699A v4 processor at 40 lanes per socket. NAP-05.
- Includes up to 32 CPU cores versus the Xeon E5-2699A v4 processor with 22 CPU cores. NAP-02.
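The bandwidth figures above follow from simple arithmetic: each DDR4 channel is 64 bits (8 bytes) wide, so peak per-channel bandwidth is the transfer rate times 8 bytes, and total bandwidth is that value times the channel count. A minimal sketch (function name is ours, for illustration):

```python
# Peak-memory-bandwidth arithmetic behind the comparison above:
# per-channel bandwidth = transfer rate (MT/s) x 8 bytes per 64-bit
# transfer; total bandwidth = per-channel bandwidth x channel count.

def peak_bandwidth_gbs(transfer_rate_mts: float, channels: int) -> float:
    """Theoretical peak memory bandwidth in GB/s (decimal GB)."""
    per_channel = transfer_rate_mts * 8 / 1000  # MT/s x 8 B -> GB/s
    return per_channel * channels

# DDR4-2666 runs at a nominal 2666.67 MT/s; DDR4-2400 at 2400 MT/s.
epyc = peak_bandwidth_gbs(2666.67, 8)  # EPYC: 8 channels
xeon = peak_bandwidth_gbs(2400, 4)     # Xeon E5-2699A v4: 4 channels
print(round(epyc, 1), round(xeon, 1))  # 170.7 76.8
```

This reproduces the 170.7 GB/s vs. 76.8 GB/s comparison cited above.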
Features/Benefits (per CPU)
- Up to 32 high-performance cores (64 threads): Boosts performance and compute density.
- Up to 2TB of DDR4 memory capacity (across 8 channels): Accelerates memory-intensive application performance.
- 128 lanes of PCIe Gen 3: Extends server capabilities without additional PCIe switching hardware.
- Integrated security subsystem: Protects and enables secure multi-tenancy per CPU/SoC.