> IC: While AMD increases the performance on its processor product line, the bandwidth out to DRAM remains constant. Is there an ideal intercept point where higher bandwidth memory makes sense for a customer?
> FN: I think you’re absolutely right, and really at the top of the stack, depending on the workload, that can be the performance limiter. If you’re comparing top of the stack parts in certain workloads, you’re not going to see as much of a performance gain from generation to generation, just because you are memory bandwidth limited at the end of the day.
> That’s going to continue as we keep increasing the performance of cores, and keep increasing the number of cores. But you should expect us to continue to increase the amount of bandwidth and memory support. DDR5 is coming, which has quite a bit of headroom over DDR4. We see more and more interest in using high bandwidth memory for an on-package solution. I think you will see SKUs in the future from a variety of companies incorporating HBM, especially for AI. That will initially be fairly specialized, to be candid, because HBM is extremely expensive. So for most, standard DDR memory, even DDR5 memory, means that HBM is going to be confined initially to applications that are incredibly memory latency sensitive, and then, you know, it’ll be interesting to see how it plays out over time.
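The "memory bandwidth limited at the end of the day" point can be made concrete with a back-of-the-envelope roofline check. The figures below are illustrative assumptions, not specs for any particular AMD part:

```python
# Roofline sketch: is a workload compute- or bandwidth-bound?
# Peak numbers below are made-up illustrative values.

def bound_by(flops_per_byte, peak_gflops, peak_gbps):
    """Return (attainable GFLOP/s, limiting resource) for a kernel
    with the given arithmetic intensity (FLOPs per byte of DRAM traffic)."""
    # Attainable throughput is capped by whichever roof is lower:
    # the compute ceiling, or bandwidth x arithmetic intensity.
    attainable = min(peak_gflops, flops_per_byte * peak_gbps)
    limiter = "compute" if attainable == peak_gflops else "bandwidth"
    return attainable, limiter

# Hypothetical 64-core server: 3 TFLOP/s peak, ~200 GB/s DRAM bandwidth.
print(bound_by(0.25, 3000, 200))  # streaming kernel: bandwidth-bound
print(bound_by(40.0, 3000, 200))  # dense GEMM-like kernel: compute-bound
```

Doubling the core count in this model does nothing for the first kernel, which is exactly the top-of-stack behavior Norrod describes.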
I would love to see desktop systems move to 4 channels. 2 DIMMs/channel can die as far as I'm concerned; let's have 2 channels/DIMM for laptops. Memory channel density needs to increase along with core density (wider and/or more cores).
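The arithmetic behind wanting more channels is simple: peak bandwidth per channel is transfer rate times bus width. A quick sketch (nominal peak figures; sustained bandwidth is lower):

```python
# Peak per-channel DRAM bandwidth = transfers/s x bus width in bytes.

def channel_gbps(mt_per_s, bus_bits=64):
    """Peak bandwidth of one DDR channel in GB/s."""
    return mt_per_s * 1e6 * (bus_bits / 8) / 1e9

ddr4_3200 = channel_gbps(3200)   # 25.6 GB/s per channel
ddr5_6400 = channel_gbps(6400)   # 51.2 GB/s per channel

print(2 * ddr4_3200)  # dual-channel DDR4 desktop today: 51.2 GB/s
print(4 * ddr5_6400)  # hypothetical quad-channel DDR5: 204.8 GB/s
```

Incidentally, DDR5 already moves a little in the "2 channels/DIMM" direction: each DDR5 DIMM is organized as two independent 32-bit subchannels rather than one 64-bit channel.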
They are still using GF 14nm on their IOD due to the Wafer Supply Agreement. We will have to wait and see how they fit PCI-Express 5 and DDR5 into their IOD with Zen 4.
And I can't wait to see Netflix play around with these in their FreeBSD boxes.
It is interesting that the biggest upgrade for Zen 4 won't actually be the core part but the IOD, as I am expecting Zen 4 to be some tweaks and a die shrink to 5nm. TSMC also somewhat unexpectedly announced a doubling of their 5nm capacity, with new fab capacity being built. My guess would be that aggregate demand from Apple and AMD exceeded a certain threshold to make it worth doing.
Core chiplets only send signals to the I/O die; the distance is measured in mm, and the parameters of those wires are tightly controlled by AMD.
The I/O die sends signals to the rest of the hardware. The wires are much longer, often with multiple connectors along the way, so the I/O chiplet needs to source and sink many more milliamps of electrical current.
One can produce larger transistors on finer processes, e.g. by adding more fins to FinFETs, but since the I/O die needs larger transistors anyway to handle more current, I'm not sure there's much value in upgrading its process.
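A toy model of that point: an I/O pad driver must deliver a roughly fixed current regardless of node, so on a finer process you just stack fins until you reach it, and the driver barely shrinks. The per-fin currents below are made-up illustrative values, not foundry data:

```python
import math

# Toy model: fins required for an I/O driver with a fixed current target.
# Per-fin drive currents are invented for illustration only.

def fins_needed(target_ma, ma_per_fin):
    """Number of FinFET fins needed to supply target_ma of drive current."""
    return math.ceil(target_ma / ma_per_fin)

# Suppose a pad driver must sink 20 mA.
print(fins_needed(20, 0.05))  # hypothetical "14nm" fin: 400 fins
print(fins_needed(20, 0.04))  # hypothetical "7nm" fin: 500 fins
```

If the finer node's fins deliver less current each, the driver needs *more* of them, so the I/O-heavy die sees little of the area scaling that makes a shrink worthwhile for logic.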
https://www.anandtech.com/show/16548/interview-with-amd-forr...