Announcement of Intel Xeon Cooper Lake processors and Optane DCPMM 200 memory: upgrade not for everyone

Intel today officially unveiled the third generation of Xeon Scalable processors, codenamed Cooper Lake. The most important thing to know about them is that, as stated earlier, they are aimed exclusively at 4- and 8-socket systems based on the new Cedar Island platform. Simply put, these are not mass-market processors but solutions created for large enterprise customers and hyperscalers. Typical single- and dual-socket platforms are left with Cascade Lake Refresh or earlier generations of Xeon Scalable for now.

The new family includes only a handful of Gold 5300/6300 and Platinum 8300 processors. An important innovation for building glueless 4S/8S systems is that each processor now has six UPI links (10.4 GT/s) instead of the two or three of earlier generations. This matters for internal data exchange: access to remote memory attached to another socket should be faster. Typical workloads for such dense systems are in-memory databases, virtualization, real-time analytics on large data sets, OLTP, and now neural networks as well.

The second important innovation in Cooper Lake is the expanded DL Boost instruction set: it now supports the bfloat16 format in addition to INT8. The bf16 format was developed by Google and is used in its own TPU tensor accelerators, but hardware and software support for it is already quite broad. The main advantage of bf16 is that it strikes a balance between accuracy and computational cost for neural networks.
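For reference, bfloat16 keeps the same sign bit and 8-bit exponent as float32 but truncates the mantissa to 7 bits, so it retains the full dynamic range of single precision at half the storage. A minimal C sketch of the conversion (using the common round-to-nearest-even trick; this is an illustration of the format, not Intel's hardware implementation):

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Convert float32 to bfloat16: keep the sign, the 8-bit exponent and the
 * top 7 mantissa bits, with round-to-nearest-even. NaN handling omitted. */
static uint16_t float_to_bf16(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);
    uint32_t rounding_bias = 0x7FFFu + ((bits >> 16) & 1u);
    return (uint16_t)((bits + rounding_bias) >> 16);
}

/* Expand bfloat16 back to float32 by zero-filling the low 16 bits. */
static float bf16_to_float(uint16_t h)
{
    uint32_t bits = (uint32_t)h << 16;
    float f;
    memcpy(&f, &bits, sizeof f);
    return f;
}

int main(void)
{
    float x = 3.14159265f;
    uint16_t b = float_to_bf16(x);
    printf("float32 %.8f -> bf16 0x%04X -> float32 %.8f\n",
           x, b, bf16_to_float(b));
    return 0;
}
```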

So now, surprising as it may sound, Intel Xeon processors can be used not only for inference but also for training neural networks. There are no detailed performance comparisons with competing products yet, but viewed from the standpoint of CPU versatility, the ability to forgo an external accelerator for a number of tasks looks quite attractive. Hyperscalers, that is, operators of fleets of tens or hundreds of thousands of servers, always appreciate that kind of unification. And if raw speed is needed at the expense of versatility, Intel has prepared the specialized Stratix 10 NX FPGA with dedicated tensor blocks and on-board HBM2.

Intel Xeon Cooper Lake specifications (click to enlarge)

In the "small but nice" category: Gold 5300 processors now get two FMA ports, and the top models support DDR4-3200 memory (1 DPC). The number of memory channels remains six per socket. The core count still tops out at 28, but clock frequencies have risen slightly. The current flagship, the Platinum 8380H(L), is rated at 2.9/4.3 GHz, which with its 28 cores translates into a 250 W TDP. Intel Speed Select prioritization is still available on select models.

Intel Optane DCPMM 200

The new processors carry H and HL suffixes. For the former, the maximum supported memory has grown to 1.12 TB per socket, while for the latter the ceiling remains at 4.5 TB. For a 4S system that means up to 18 TB in total (4 × 4.5 TB). How do you populate that much? Obviously with a combination of DRAM and Intel Optane DCPMM in the DIMM form factor, now in its second generation; filling it with ordinary memory alone would simply break the bank. Optane DCPMM 200 (formerly Barlow Pass) promises on average up to 25 % higher performance than the first generation. The maximum module capacity is 512 GB.

The transfer rate is 2666 MT/s, and maximum read and write speeds reach 6.8 and 2.3 GB/s respectively. These figures, like endurance, depend on module capacity and the type of load. The modules are rated for 75 to 363 PB of written data. And yes, they can still be used in the mode where they appear to the system as a transparent RAM expansion, but Intel insists on App Direct Mode, which is both more efficient and more reliable. Nothing is said explicitly about backward compatibility of Optane PMem 200 with Cascade Lake (Refresh), but that probably would not make much sense anyway.
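For those who have not dealt with App Direct Mode: the modules are exposed to software as byte-addressable persistent memory, typically reached through a DAX-mounted filesystem and a library such as PMDK. Below is a minimal sketch using libpmem; the mount point /mnt/pmem0 and the file name are assumptions for illustration, not part of Intel's announcement.

```c
/* Compile with: gcc pmem_demo.c -lpmem */
#include <libpmem.h>
#include <stdio.h>
#include <string.h>

#define POOL_SIZE 4096

int main(void)
{
    size_t mapped_len;
    int is_pmem;

    /* /mnt/pmem0 is assumed to be an fsdax-mounted filesystem backed by
     * Optane PMem in App Direct mode; the path is illustrative only. */
    char *addr = pmem_map_file("/mnt/pmem0/example.dat", POOL_SIZE,
                               PMEM_FILE_CREATE, 0666,
                               &mapped_len, &is_pmem);
    if (addr == NULL) {
        perror("pmem_map_file");
        return 1;
    }

    const char msg[] = "hello, persistent memory";
    memcpy(addr, msg, sizeof msg);

    if (is_pmem)
        pmem_persist(addr, sizeof msg);  /* cache-line flush + fence */
    else
        pmem_msync(addr, sizeof msg);    /* fallback for non-PMem mappings */

    pmem_unmap(addr, mapped_len);
    return 0;
}
```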

Intel Optane DCPMM 200 specifications

Be that as it may, Intel Xeon Cooper Lake requires new platforms because of the new LGA4189 socket, which it will apparently share with the 10-nm Ice Lake-SP chips for the Whitley platform, whose release by the end of the year an Intel spokesperson once again assured everyone of. Those Ice Lake parts are expected to get the same capabilities as Cooper Lake, plus the PCIe 4.0 support required by the Intel D7-P5500 and D7-P5600 server SSDs also introduced today. For the following generation, Xeon Sapphire Rapids for the Eagle Stream platform, Intel has so far announced a further expansion of DL Boost: the new AMX (Advanced Matrix Extensions) instructions, which likely conceal tensor cores.

So far, support for Cooper Lake has been announced by Facebook and by Wiwynn, a vendor known in fairly narrow circles that supplies servers to giants such as Facebook and Microsoft. In both cases these are OCP platforms: the classic rack-mount Sonora Pass and the distinctive four-processor Yosemite chassis with Delta Lake.

Facebook Yosemite V3: four-processor chassis with Xeon Cooper Lake
