Kioxia and Dell cram 10 PB into slim 2RU server (blocksandfiles.com)
117 points by rbanffy 11 hours ago
NitpickLawyer 9 hours ago
There's been a lot of talk about orbital DCs lately, but with these levels of density, orbital CDNs might be a more obvious use case. It would be interesting to see whether something like Starlink could use drives like these to cache media content and reduce the overall data moving through the constellation. It could even be worth having some satellites in higher orbits (even GEO, if the ground hardware can reach it) dedicated to streaming media content. You can tolerate higher RTT for content that doesn't need to be real time.
evil-olive 8 hours ago
no, absolutely not. orbital datacenters are never going to happen, it doesn't matter whether you try to frame them as compute or storage or whatever else.
the extreme density of these SSDs is actually an anti-feature in the context of spacecraft hardware.
the RAD750 CPU [0] for example uses a 150nm process node. its successor the RAD5500 [1] is down to 45nm. that's an order of magnitude larger than chips currently made for terrestrial uses.
radiation-hardening involves a lot of things, but in general the more tightly packed the transistors are, the more susceptible the chip is to damage. sending these SSDs to space would be an absurd waste of money because of how quickly they would degrade.
and then there's the power consumption & heat dissipation. one of these drives draws 25W [2] and Dell is bragging about cramming 40 of them into one server. that's a full kilowatt of power - essentially a space heater in a 2U form factor.
0: https://en.wikipedia.org/wiki/RAD750
1: https://en.wikipedia.org/wiki/RAD5500
2: https://americas.kioxia.com/content/dam/kioxia/en-us/busines...
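back-of-envelope on that, using the 25W datasheet maximum from [2] (a rough sketch; actual draw varies with workload):

    # drive power for a fully populated chassis, at the 25 W/drive
    # datasheet maximum; idle/typical draw would be lower.
    drives_per_server = 40
    watts_per_drive = 25
    print(f"{drives_per_server * watts_per_drive} W of drives in 2U")  # 1000 W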
wahern 7 hours ago
AFAICT[1] the latest generation of SpaceX Starlink satellites use AMD Versal XQR SoCs, which are built on a 7nm process, with components like the main processor (dual-core ARM Cortex-A72) and memory (DDR4) clocked in the gigahertz, not megahertz, range.[2] At least some of these SoC models (presumably the lower-clocked ones) are certified for geosynchronous orbits, not just low-earth orbits.
[1] https://www.pcmag.com/news/amd-chips-are-powering-newest-sta...
[2] https://docs.amd.com/r/en-US/ds955-xqr-versal-ai-edge/Genera...
wildzzz 5 hours ago
The RAD750 is like 20 years old and is the absolute king of high reliability in the most extreme radiation environments. LEO is much more forgiving, and there are plenty of examples of commercial gear operating in it. You could definitely put this much storage into LEO along with some EDAC and be fine for a few years.
killerstorm 6 hours ago
It's possible to run a modern GPU on a satellite: https://www.starcloud.com/starcloud-1
Some error rate is acceptable for uses which aren't "mission-critical".
dmurray 8 hours ago
In the limit, packing transistors tighter should mean more radiation resistance, not less, because you can shield them with a smaller mass of water or lead or whatever.
manquer 7 hours ago
> order of magnitude
It is much worse than that. Even taking the node names at face value[1], that is just one dimension; there are two or three[2] dimensions to consider, so the difference is more like 100x.
Nehalem (2008) was built on a 45nm node at ~3 MTr/mm²; today's 3nm nodes from TSMC (N3E/P/X/C, 2023-24) are around 220 MTr/mm².
Of course, that is just one metric (transistor density); there are many other improvements to consider over the last two decades.
[1] Process node names haven't been tied to physical scale for 30 years, after all: https://www.eejournal.com/article/no-more-nanometers
[2] The HBM that modern GPUs use already leverages 3D ICs.
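A quick sanity check on that ratio, using the density figures above:

    # transistor-density ratio between a 45nm-era chip and a
    # TSMC N3-class node, per the MTr/mm^2 figures quoted above.
    nehalem_mtr_per_mm2 = 3    # ~2008, 45nm
    n3_mtr_per_mm2 = 220       # ~2023-24, "3nm"
    print(f"~{n3_mtr_per_mm2 / nehalem_mtr_per_mm2:.0f}x")  # ~73x, near the ~100x claim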
fgfarben 8 hours ago
i can write extremely confident things in all lowercase and include citations too. [1]
doesn't mean i'm correct. [2]
tesdinger 7 hours ago
For the sake of the generations that come after us, we really should not dump valuable material into space. I somehow doubt the electronics in space would be recovered and recycled properly.
9dev 7 hours ago
Nothing is recycled properly. Recycling was a story told to ease consumers' minds so they keep on consuming. The stuff you throw away ends up in a landfill, in the sea, or on a ship to someplace else where it gets burned and then buried. Sending it to space makes absolutely no difference.
KaiserPro 8 hours ago
Or you could use fibre, which has the advantage of not needing >1kW of concentrated microwave to get ~2 gig of throughput.
Or, even better, not yeeting it into an environment where it's cooked/cooled every 90 minutes.
Or, even better, where it's not absolutely pelted by cosmic rays, enough to obliterate a good GB of data a day.
Or space data centre.
ssl-3 8 hours ago
If I correctly understand what you're suggesting, then that could save on uplink bandwidth. Sending one copy into space, and then sending it back down over and over again sounds nice.
But does it solve a problem that we actually have? Is uplink bandwidth a pressing limitation?
LargoLasskhyfv an hour ago
/me turning in my sleep muttering https://en.wikipedia.org/wiki/Teledesic
fancyfredbot 9 hours ago
The very first sentence of this article mixes up terabytes and petabytes. I used to dismiss an entire article as poor quality on seeing a mistake like this, but these days it also feels like an indicator that the article was written by a human and might actually have something interesting to say.
Sadly not in this case though - the Kioxia drives are interesting, but the fact that Dell has put some in a box is much less so.
wildzzz 5 hours ago
All it really means is that big corps that already have sales relationships with Dell will be purchasing them in the next fiscal year. Anyone else that needed this level of storage density has already built their own boxes.
Pallav123 8 hours ago
At current enterprise NVMe prices, the drives alone for this must easily push past the $500k to $1M mark. It's fascinating to see this level of density, but it’s strictly going to be hyperscaler or high-end defense/research budget territory for a long time.
buzer 7 hours ago
The list price seems to be ~$40M. https://www.dell.com/en-hk/shop/servers-storage-and-networki... Select the 40-slot chassis and put 40 of those 245TB disks in; it comes out at ~HK$317M. Of course, HK prices might also be higher than what Dell USA offers.
How heavy the discounts you can get, I don't know.
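For reference, a rough conversion (the HKD is pegged near 7.8 to the USD, so this is approximate):

    # rough HKD -> USD conversion of the quoted list price.
    hkd_list = 317e6
    hkd_per_usd = 7.8   # peg band is ~7.75-7.85
    print(f"~${hkd_list / hkd_per_usd / 1e6:.0f}M list")  # ~$41M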
tempest_ 6 hours ago
My understanding is that no one actually pays Dell sticker price. They list that price on the website, but if you talk to whatever they're calling their sales reps these days, you get the real price.
rbanffy 33 minutes ago
I believe the rationale is that they are so much denser that they will compensate for the price difference over the entire lifetime of the system. A 10 PB install is a full rack otherwise, and this is 5% of the space. Colocation for HFT tends to be expensive, and using 5% of the space for the same amount of data might make sense.
twotwotwo 2 hours ago
It is kinda neat how the density can trickle down. When an individual SSD can hold tens of TBs, recent-gen drives can do millions of random reads/s each, and one socket can handle lots of RAM and many cores, it doesn't take the fancier chassis with two sockets or lots of storage bays to handle pretty substantial data work.
On the other hand, current part prices are not neat; a commodity platform only helps so much if none of what you want to put in it is affordable! And other factors like power and cooling can push you away from optimizing for density. I just like that along with the ludicrous becoming possible, merely great stuff becomes more feasible.
mikestorrent 8 hours ago
I'd double your guess on this one
stego-tech 40 minutes ago
Look, out of all the BS that's come from the current AI CYOA (bubble, revolution, scam, whatever floats your particular boat), the one thing I'm optimistic about is the absolute glut of memory that'll be available once this cycle wraps up. I'm salivating at the thought of finally producing memory in such quantities that we can retire HDDs for bulk storage in all but the edgiest of edge cases; of more solid-state tech seeping into consumer and business devices, enabling us to compute locally again instead of leaning on cloud providers for extra storage. Hell, I'd just be happy to see 4TB TLC or QLC SSDs coming in under $100 with endurance ratings that aren't garbage.
C'mon, 60TB SSD NAS for my media collection, or 20TB external SSD for backups. Get that density up and those costs down, already.
bombcar 9 hours ago
Full NICs take about 666 minutes to fill this thing.
Satan’s NAS!
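The math checks out, assuming ~2 Tb/s of aggregate NIC bandwidth (e.g. the 5x 400Gbps mentioned below) and no protocol overhead:

    # time to fill ~10 PB through ~2 Tb/s of NICs, flat out.
    capacity_bits = 10e15 * 8       # ~10 PB
    nic_bits_per_s = 5 * 400e9      # 5x 400 GbE
    print(f"~{capacity_bits / nic_bits_per_s / 60:.0f} minutes")  # ~667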
ksec 8 hours ago
This is one of the cases limited by PCIe speed; sharing lanes with the SSDs means the network could only do 5x 400Gbps. This is on PCIe 5.0; luckily, the 7.0 spec is ready and 8.0 is already at 0.5 draft status.
If we could somehow increase the density further by 5x, we would be able to store 1EB in a single rack.
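The arithmetic, assuming 2U servers filling a standard 42U rack:

    # exabyte-per-rack math: 10 PB per 2U server today, then a
    # hypothetical 5x density improvement.
    servers = 42 // 2
    pb_now = servers * 10
    print(f"{pb_now} PB/rack now, {pb_now * 5 / 1000:.2f} EB at 5x")  # 210 PB, 1.05 EB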
The most interesting part to me is the last sentence.
>Scality tells us it’s working on supporting a future nearline-class SSD from Samsung, viewed as an HDD killer, with similar or even larger capacity and a roadmap out to a 1 PB drive.
Finally, an HDD killer. Maybe in another 5-10 years' time. The day of everyone having an SSD NAS / AI cloud at home will come.
loeg 6 hours ago
QLC already beats out HDD in power-constrained hyperscaler environments. Capex is not the only factor.
zeristor 8 hours ago
Tell me about the thermals.
zamadatix 8 hours ago
Max per drive is 25 W, so even a rack with 20 servers of 40 drives each probably draws less than the average GPU rack, even after the other overheads.
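Roughly, for the drives alone (GPU racks commonly run into the tens of kW and beyond):

    # whole-rack drive power at the 25 W/drive datasheet maximum.
    kw = 20 * 40 * 25 / 1000    # 20 servers x 40 drives x 25 W
    print(f"{kw:.0f} kW of drives per rack")  # 20 kW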
amelius 7 hours ago
If datacenters start buying these things, will we see consumer hard drives go up in price?
joezydeco 7 hours ago
Kioxia was my eMMC supplier until earlier this month, when they said they couldn't fill my orders anymore. They're sold out.
So, yes.
nout 7 hours ago
I could do some cool backups with this bad boy.
mmanfrin 7 hours ago
Time for my NAS to get an upgrade.
danhon 7 hours ago
Someone please fix the title for this.
smallerize 5 hours ago
9.8 PB?
metadat 7 hours ago
Now make it for consumers. Storage capacity per dollar has really stalled.
a1o 7 hours ago
Now put this in a cruise ship and you can move a lot of data.
varispeed 8 hours ago
10PB is probably the amount of data a medium-sized country can collect about all its citizens (basic details, work history, all taxes, all financial records, all medical records, all police records, all biometric records and more) over their lifetimes.
I think development like this might get many public sector focused firms sweating.
jandrewrogers 7 hours ago
Those records are going to be pretty negligible in terms of storage. It is only a couple of new records per day. Even if you add things like detailed mobile and tracking telemetry, it is a few MB per person per day.
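To put rough numbers on it (the rates are illustrative assumptions: ~1 KB records, ~3 MB/day of telemetry, 80 years):

    # lifetime storage per person at the two rates discussed above.
    days = 365 * 80
    records_gb = 2 * 1e3 * days / 1e9     # a couple of ~1 KB records/day
    telemetry_gb = 3e6 * days / 1e9       # a few MB/day of telemetry
    print(f"records:   ~{records_gb:.2f} GB/person")    # ~0.06 GB
    print(f"telemetry: ~{telemetry_gb:.0f} GB/person")  # ~88 GB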
louwrentius 9 hours ago
What would this cost?
geerlingguy 9 hours ago
I can't remember where I saw it, but I think each of these high-capacity drives is well into the $15-25k price range.
So the full 10 PB will be $600-800k in drives alone, plus a server with enough high-speed PCIe lanes to serve the 40+ drives: definitely $1M+.
bracketfocus 9 hours ago
They are likely $200+/TB, so one 250TB drive would be ~$50,000.
There's probably bulk pricing, but if you bought 40 drives separately, that's $2,000,000 in storage alone.
DannyBee 5 hours ago
It's actually now at least $400 per TB.
The 64TB drives are $25k, up from $6k a year ago. I have to imagine the 128TB or 256TB ones are at least $500/TB.
retired 6 hours ago
$500 on Facebook Marketplace in 20 years time.
cr125rider 9 hours ago
More than you can afford cause you had to ask, ha
gosub100 9 hours ago
You can't buy this stuff anymore. They are leased and rented through layers of middlemen.
lostlogin 9 hours ago
> anymore
Could you ever buy it?
reactordev 10 hours ago
Remember that season of Silicon Valley on HBO that was all about “the box”?
I feel like we’re in that season.
darknavi 9 hours ago
Just waiting for the Gavin Belson edition box.
tanseydavid 9 hours ago
Signature edition ;)
joe_mamba 10 hours ago
Can't wait to move my spinning rust NAS to this in 20 years.
loeg 9 hours ago
I went to QLC for my NAS last cycle. The $/TB was worse, but not by a huge margin, and the performance is quite a bit better (not that it matters).
anonymousiam 8 hours ago
I've been wanting to update my (100TB) NAS for over five years, but I haven't yet found anything that I feel is worth upgrading to. One of these with a QSFP56 interface would be nice, but I would need to sell one of my houses to pay for it, so I'll be waiting a little longer...
mx7zysuj4xew 9 hours ago
Sadly none of that enterprise hardware will ever make it to you due to being wastefully shredded
theandrewbailey 8 hours ago
I work in the refurb department of an e-waste recycling company. In my n=1 data point, some server drives are shredded/destroyed, some aren't (maybe half) before they reach my team. Of the ones that aren't, most are too small to sell, or have bad reads or reallocated sectors. Maybe 10% are fit to resell, not zero.
tempest_ 9 hours ago
NVMe SSDs are consumable items, more so than HDDs are.
These drives will arrive on the secondary market to be snapped up by businesses lower in the food chain. By the time you can find them, they'll have been ridden hard and put away wet, to the point that you probably won't want them.
theandrewbailey 8 hours ago
I work in the refurb department of an e-waste recycling company. Some SSD brands are more durable than others. In my experience, a greater proportion of Intel and Micron SSDs have failed (or are failing) than any other brand. It's as if sysadmins are like "Intel is a good brand, let's use these SSDs to cache our HDD storage array", then throw them out when they turn read-only.
bitwize 4 hours ago
"Let's build a compact, hugely data-dense server that absolutely no one can afford!"
retired 9 hours ago
[flagged]
tomhow 4 hours ago
Please don't sneer at imaginary people on HN. We're trying for something better than that here.
trvz 8 hours ago
Not quite yet.
The interesting thing here is ~256TB in a single drive, but it's in E3.L form factor.
I have about 160TB on hard drives that I'm waiting to offload onto a single SSD.
But that needs to come with a connector that has adapters to USB-C, so I can attach it to my Macbook Neo.
Hopefully they get it a bit more dense soon and into the 2.5" NVMe form.
dijit 8 hours ago
I've been waiting with bated breath for a SATA 3.5" SSD with high capacity.
I might be waiting forever, because clearly there's nothing coming. Though I'm not sure if it's because it's technically difficult (high power consumption to keep the flash lit?) or something else.
I'm aware that it leaves performance on the table for the chips, and the unit economics probably mean that, for the yield they get, OEMs would rather make high-performance drives which sell for more.
But a 4-bay NAS with 3.5" SSDs would be silent and theoretically sip power, and there's so much space for chips that you could space them nicely and get 10+TiB in a drive...
I don't need to touch every cell, I just want something silent and stateless and less power intensive for my time-capsule backups and linux ISOs.
Alas.
crote 5 hours ago
E3.L is just fancy-shaped PCIe, is it not? What's stopping the standard off-the-shelf NVMe-to-USB converter chips from being used?
Given this disk is going to cost something like $40k, what's another $500 for having a Chinese hw eng throw one of those chips together with an E3 connector on a PCB for you, and 3D printing a neat housing?
TiredOfLife 6 hours ago
Attaching a $40k drive to a $600 Macbook
jauntywundrkind 8 hours ago
There's a ton of different adapters already between the EDSFF connector used for E3/E2/E1 drives and everything else PCIe (PCIe slots, M.2, U.2). For example, this PCIe card (good luck tweaking your equalizer-settings jumpers by hand, though, whew!!): https://www.microsatacables.com/pcie-x8-gen4-with-redriver-t...
Drop that in one of the many USB4-to-PCIe docks and you should be good to go. Pretty fugly, but it ought to just work! I think there are some cheaper models still available under $90, but here's a listing: https://www.dfrobot.com/product-2835.html
I believe a more focused, dedicated USB<->NVMe chip might also work if attached to an EDSFF connector. I didn't look hard, and I haven't seen any such products yet, but it's mostly mechanical/packaging plus some signal-integrity checks; in the end it generally wouldn't be much different from any other NVMe adapter. Seems very doable.
Build it! Someone could sell (to quote The Daily Show) literally dozens of said adapter! (Eventually probably many, many more, but there's not a huge second-hand market for EDSFF atm.)
nickstinemates 9 hours ago
Hitting a little too close to home with this comment.
tliltocatl 8 hours ago
Data retention is probably too poor for archival purposes.
tesdinger 7 hours ago
All the increases in density are impressive, but they come with downsides for repairability and recycling. I hope we can still repair this when parts of it break, or at least recycle it properly. No matter how high-tech it is, eventually this will break.
geerlingguy 7 hours ago
These drives all use standard enterprise storage interconnects, and the server chassis is like other Dell server chassis. Not using ATX or EATX, but it's status quo for Dell, and many old Dell servers wind down their old age in homelabs.
Hopefully one of these 10 PB monsters will be under $2,000 someday, at which point I will pop it in my homelab :)