I feel like that was an ultra awkward conversation to listen to. This is just a bunch of SandForce 2000-series drives in RAID. PCIe 2.0 only, so limited speed anyway.
Since that happened, OCZ went bankrupt and Toshiba bought what was remaining.
Isn't 40GbE just 4x 10Gb lanes anyway? I'd think it'd just show up as 4x 10GbE?
To me this is the big winner.
Even if it "won't work" for @Chuckleb's project because of latency, you should still be able to get decent throughput, right?
Has anyone gotten one of these yet?
Intel XL710QDA2BLK PCIe 3.0 x8 Dual Port Ethernet Converged Network Adapters | eBay
2x QSFP+ 40Gb ethernet ports that can be split using breakout cables to 4x 10GbE. I'd imagine this is best in class network card right now.
I'm also thinking that for...
@Jeggs101 Super post man.
A few things: enterprise drives typically have power-loss protection and higher-quality NAND (SLC or eMLC, for example), and therefore higher write endurance. They're also designed for lower AFR (annualized failure rate) and lower UBER (unrecoverable bit error rate). You'll see 10^-17 as the UBER on most SAS drives...
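To put that UBER figure in perspective, here's a quick back-of-envelope sketch (the 4TB drive size is just an assumption for illustration) comparing how many unrecoverable read errors you'd expect per full-drive read at consumer vs. enterprise error rates:

```python
# Expected unrecoverable bit errors per full-drive read, at a given UBER.
# Assumption: a 4 TB drive, read end to end once.
drive_bits = 4e12 * 8  # 4 TB expressed in bits

for label, uber in [("consumer (10^-15)", 1e-15),
                    ("enterprise SAS (10^-17)", 1e-17)]:
    expected_errors = drive_bits * uber
    print(f"{label}: ~{expected_errors:.1e} expected errors per full read")
```

So at 10^-17 you'd expect roughly one error per ~3,000 full-drive reads, versus one per ~30 at the consumer 10^-15 rate, which is why the rating matters for big RAID rebuilds.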
You need SOME airflow on them, just not a lot. You can put a silent 120mm fan atop the RAM and CPU and you'd be fine.
In the Supermicro 1Us you'll see they have two fans over the HSF just to keep airflow up if one fan fails, but they spin really slowly.
Does pfSense support ConnectX-2 EN cards? You'd probably save money on SFP+.
Is there a good way to really test switches made like this? Will it matter port to port?
I saw these Plextor M6e's for $400 for 512GB --- Plextor M6e M.2 2280 512GB M.2 PCI Express x2 Solid State Drive MLC | eBay
It's really expensive on a per-GB basis, but here's the thing: what if I ran a web server on that kind of drive? There is a MySQL DB, but it's like under 20% writes...
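For a sense of how expensive, a quick sketch of the per-GB math from that listing ($400 for 512GB, as quoted above):

```python
# Per-GB price from the eBay listing quoted above: $400 for 512 GB.
price_usd = 400
capacity_gb = 512

price_per_gb = price_usd / capacity_gb
print(f"${price_per_gb:.2f} per GB")  # ~$0.78/GB
```

Whether that premium is worth it depends on how much of the workload is actually random-read-heavy; a mostly-read MySQL DB is a decent fit.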