
  • yuhong - Thursday, January 25, 2018 - link

    I believe the listed memory chips are dual die packages, not true 16Gbit chips.
  • KarlKastor - Friday, January 26, 2018 - link

    You're right. The tenth character of the part number is an "M", and M = DDP (dual-die package).
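    (As a quick illustrative check only — a minimal sketch assuming the convention described above, i.e. that the tenth character encodes the package type and "M" means DDP; the helper name is hypothetical and no other part-number fields are covered:)

    ```python
    def looks_like_ddp(part_number: str) -> bool:
        """Return True if the 10th character of the part number is 'M',
        which (per the comment above) marks a dual-die package (DDP)."""
        return len(part_number) >= 10 and part_number[9] == "M"
    ```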
  • iwod - Friday, January 26, 2018 - link

    Which makes half of the article pointless...
    But that means we now have single-die, dual-die, and stacked-die packages.

    What exactly is stopping us from making larger DRAM, both capacity- and die-area-wise? In the era of in-memory computing, 4TB isn't that large, and DRAM and SSDs haven't seen any price drop in a long time.
  • MrSpadge - Saturday, January 27, 2018 - link

    I thought the same while reading the article. If it's such a great hassle to improve per-module capacity, why not simply go for larger dies? A few points come to mind:

    - economies of scale: you don't want to produce a design only for the high end, but rather reuse a commodity item many times (considering the shift to virtualization & cloud computing, and products like the Titan V, this may not apply to the current high end market. Also TSVs are expensive)

    - the yield of smaller chips is better (but not a big issue for memory, since you can easily build some redundancy into them)

    - better array efficiency, i.e. a chip with 2x the capacity will be less than 2x bigger because some logic doesn't need to be duplicated (this is actually an argument for larger dies)

    - wasted space at the wafer edge: the larger your dies, the more unusable "fractional chips" you get at the wafer edge. It seems weird that they don't simply put smaller chips of the same kind there, but so far the cost of doing so would probably be too high (stepper lithography is a serious high-throughput business requiring high precision; changing the mask on the fly is simply not possible on current tools). A rough numerical sketch of the yield and edge-loss effects follows below.
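    A rough back-of-the-envelope sketch of those last effects, using the standard dies-per-wafer approximation and a simple Poisson yield model; the die areas and defect density are illustrative assumptions, not figures from the article:

    ```python
    import math

    def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
        """Common approximation: gross dies minus a term for unusable edge sites."""
        r = wafer_diameter_mm / 2
        return (math.pi * r**2 / die_area_mm2
                - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

    def poisson_yield(die_area_mm2, defect_density_per_mm2):
        """Simple Poisson yield model: Y = exp(-A * D0). DRAM redundancy/repair
        means real 'good die' counts are better than this raw figure."""
        return math.exp(-die_area_mm2 * defect_density_per_mm2)

    D0 = 0.001  # assumed defect density, defects per mm^2 (0.1 per cm^2)
    for area in (60, 120):  # hypothetical die sizes: baseline vs doubled-capacity die, mm^2
        n = dies_per_wafer(area)
        y = poisson_yield(area, D0)
        print(f"{area:3d} mm^2: ~{n:4.0f} dies/wafer, raw yield ~{y:.0%}, good dies ~{n * y:4.0f}")
    ```

    Doubling the die area roughly halves the candidate dies per wafer and adds a yield penalty on top, which is the cost pressure behind the points above.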
  • iwod - Monday, January 29, 2018 - link

    Yes, but we have had those capacities since the 2x nm era. And we are now at 1x nm, possibly a 4x density increase, yet we have seen little to no capacity increase.

    i.e. despite a better node, better yields, mature tech, and smaller die sizes, our DRAM price per GB hasn't gone down at all.
  • FreckledTrout - Thursday, January 25, 2018 - link

    Those latencies are pretty bad.
  • DanNeely - Friday, January 26, 2018 - link

    Giant-capacity server RAM has always been slower than consumer RAM because of the additional layers of hardware needed to put so many more RAM dies on the memory bus while keeping signalling stable. What you're overlooking is that it is still hundreds or thousands of times faster than reading from or writing to an SSD, which is what such huge DIMMs are intended to replace. (In the old days their advantage over HDDs was even larger.)
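    To put rough numbers on "hundreds or thousands of times faster" — a small sketch using ballpark, order-of-magnitude latencies (illustrative assumptions, not benchmarks from the article):

    ```python
    # Ballpark random-access latencies in nanoseconds (orders of magnitude only)
    latencies_ns = {
        "loaded server DRAM":       150,         # ~100-200 ns
        "NVMe SSD random read":     100_000,     # ~100 us
        "SATA SSD random read":     250_000,     # ~250 us
        "7200 rpm HDD random read": 10_000_000,  # ~10 ms
    }

    dram_ns = latencies_ns["loaded server DRAM"]
    for device, ns in latencies_ns.items():
        print(f"{device:24s} {ns / 1000:>12,.1f} us   (~{ns / dram_ns:,.0f}x DRAM)")
    ```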
  • Lolimaster - Friday, January 26, 2018 - link

    This is basically the big market for NVRAM like Optane: get 1-2TB of "RAM" relatively cheaply, with access to previously impossible densities, for a "small" performance hit.
  • dgingeri - Friday, January 26, 2018 - link

    Larger capacity memory is always going to have higher latency.
  • Pinn - Thursday, January 25, 2018 - link

    I built a Titan X (Pascal), quad-channel, 6-core, 128GB RAM, 1.2TB PCIe SSD machine a while back. Looks like the RAM kept the most value. Do we stock up on these like ammo?
  • PeachNCream - Friday, January 26, 2018 - link

    Higher capacity DIMMs will absolutely make lower capacity DIMMs more affordable so there won't be any absurd prices from today forward. </sarcasm>
  • Xajel - Sunday, January 28, 2018 - link

    Does that mean we will finally have 16GB single-rank sticks?
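    For reference, the arithmetic behind that question — a minimal sketch assuming a standard 64-bit (non-ECC) rank built from x8 devices:

    ```python
    # Why 16Gbit dies would allow a 16GB single-rank stick (non-ECC, x8 devices assumed)
    rank_width_bits     = 64   # standard DDR4 data bus width per rank (72 with ECC)
    device_width_bits   = 8    # x8 DRAM devices
    device_density_gbit = 16   # one 16Gbit die per package

    chips_per_rank = rank_width_bits // device_width_bits           # -> 8 chips
    rank_capacity_gbyte = chips_per_rank * device_density_gbit / 8  # Gbit -> GB
    print(f"{chips_per_rank} x {device_density_gbit}Gbit = {rank_capacity_gbyte:.0f} GB per rank")
    # -> 8 x 16Gbit = 16 GB, i.e. a 16GB single-rank DIMM
    ```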
  • schujj07 - Thursday, February 1, 2018 - link

    I believe you mean that in a 2-socket system the Xeon can support 3TB of RAM, or 1.5TB per socket.
