I do not know the refresh cycles of the big players, but wouldn't it be high time that v2 E5s are phased out? Maybe it would be beneficial to wait a bit longer, not only for v2 CPUs but also for a price drop on 16GB or even (*gasp*) 32GB DDR3 DIMMs.

The market got flooded with these during the major data centre refreshes. Looks like the price is back on the rise now that the bulk of them have already hit the market. Buy 'em up whilst you can!
True, it would seem that they should begin to phase out; however, a lot of the servers currently being refreshed take both V1 and V2, so you quite often find them fitted with V1. 16GB/32GB DIMMs are extortionate and I'm not sure when they will begin to drop, really. I'd see this pricing staying quite flat, maybe with a small decline.
I don't see why you think the prices will rise? As more datacenters cycle out Haswell chips, you should see the demand on these V1/V2 chips shift drastically. The power savings alone, for the same number of cores and the same performance, should net you the difference in acquisition cost within a few months.

The V1 chip price? I'd suggest it'll see a slight rise as the stock of available processors sells through. You can see this with the 5600-series processors: they still sell for similar amounts to the V1 chips due to lower availability (such as the X5670s/X5675s). I've already witnessed a price rise on these (in Europe) over the past 4-5 months.
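To put a rough number on that break-even claim, here's a quick sketch. The wattage delta, electricity price and CPU price gap are all placeholder assumptions, so swap in your own figures before drawing conclusions.

```python
# Back-of-envelope break-even for "power savings pay for the newer chips".
# Every figure below is an assumption for illustration, not a measurement.
watts_saved   = 100    # assumed average wall-power saving vs. the older dual-socket box
price_per_kwh = 0.25   # assumed electricity price per kWh
price_gap     = 250    # assumed extra acquisition cost of the newer CPUs/board

kwh_per_month     = watts_saved * 24 * 30 / 1000     # ~72 kWh running 24/7
savings_per_month = kwh_per_month * price_per_kwh    # ~18 per month at these rates
print(f"Saving ~{savings_per_month:.2f}/month -> break-even after "
      f"{price_gap / savings_per_month:.0f} months")
```

With these placeholder numbers it comes out closer to a year than a few months, but the answer swings a lot with the actual power delta and local electricity prices.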
Ended up going with dual 2780s since I got them for $150 the pair.

That's a steal. The prices are increasing and 2670 duals are now available for $200.
My server is running great under Unraid. Got a couple VMs running too
Anyone else here running unraid?
Dual-parity added with v6.2, fyi.
I dislike unRAID for several reasons.
1). You MUST boot from a USB thumb drive. They store the license key on it, and use the serial number of the USB drive to validate key authenticity. Uhh, no thanks.
2). It's paid, but offers no real advantage over what's already free out there. If you want their model of non-striped parity, SnapRAID is a much better option.
3). No dual parity. This is a deal breaker. Most drives fail when rebuilding arrays, so no support for multiple parity drives is a non-starter for me.
I'm running Proxmox VE, and I see no downsides to it compared to unRAID. It supports mirrored ZFS boot drives, which is much preferable to a single thumb drive (with literally the lowest-quality NAND you can buy). It's free. It runs SnapRAID if you want non-striped parity. It runs KVM/QEMU just like unRAID (KVM/QEMU is what popularized unRAID in the first place, even though it wasn't their work to begin with). It supports whatever filesystem you'd like; ZFS is much more robust and has much better performance options if you want them.
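For anyone curious what the SnapRAID route looks like, here's a minimal sketch of a config with two parity drives. All the mount points and disk names are placeholders, so adapt it to your own layout.

```
# /etc/snapraid.conf -- minimal sketch; every path and disk name here is a placeholder

# First and second parity files, each on its own dedicated drive (dual parity)
parity /mnt/parity1/snapraid.parity
2-parity /mnt/parity2/snapraid.2-parity

# Content files (array metadata); keep several copies on different disks
content /var/snapraid/snapraid.content
content /mnt/disk1/snapraid.content
content /mnt/disk2/snapraid.content

# Data drives; each one stays an ordinary, independently readable filesystem
data d1 /mnt/disk1/
data d2 /mnt/disk2/
data d3 /mnt/disk3/

# Not worth protecting
exclude *.tmp
exclude /lost+found/
```

From there it's just snapraid sync on a schedule plus the occasional snapraid scrub, and because parity isn't striped, each data disk remains readable on its own if things go badly.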
Disagree again - I could pull ticket history of large RAID50 and RAID5 arrays failing in use, and rebuilding 100% fine w/ over 24 disks in a span.

I think "most drive failures occur when rebuilding arrays" is more accurate.
As usual in life, I think the truth here lies somewhere in between. A lot depends on the disk type and usage. Obviously, enterprise drives will fare better than consumer ones. Likewise, enterprise drives in light use are unlikely to fail during a rebuild. However, lightly used consumer drives are more likely to fail when presented with a rebuild.

I have already lost data because of single parity. I will never choose it again.
There is no substitute for backing up or replicating data. You can have RAIDZ3, or you can use NetApp's new RAID-TEC. It doesn't matter: if you lose enough pieces of equipment, you will lose data. The point is, there's a place for everything. I have a client with 75TB of RAID5 storage because they need as much staging area as possible - it's not long-retention data. Haven't had an issue rebuilding it. I have clients with RAID50, RAID6, RAID10, RAID-DP, and RAID-TEC in all sorts of varying pool sizes... they all have different purposes.

The problem people ignore is that if one block gets screwed up during a rebuild, you may not get a consistent filesystem when the rebuild is done. With double parity you at least have a chance to reconstruct that damaged block.
After 7 years with ZFS, I've seen plenty of damaged blocks. And every time I see them I wonder how anyone can trust filesystems like NTFS, EXT4 or proprietary RAID.
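That's essentially what end-to-end checksumming buys you: every block read is verified against a stored checksum, so silent corruption gets detected (and, given redundancy, repaired) instead of being handed to the application. A toy illustration of the detection half, not ZFS's actual on-disk format:

```python
import hashlib

def checksum(block: bytes) -> bytes:
    # ZFS stores fletcher4/sha256 checksums in the parent block pointer;
    # sha256 here is purely for illustration.
    return hashlib.sha256(block).digest()

block = b"some application data" * 100
stored = checksum(block)          # recorded when the block was written

# Simulate silent bit rot: one flipped bit, no error reported by the drive.
rotten = bytearray(block)
rotten[5] ^= 0x01

print("clean read ok: ", checksum(block) == stored)          # True
print("rotten read ok:", checksum(bytes(rotten)) == stored)  # False -> corruption caught
```

A filesystem without data checksums simply returns whatever the drive hands back, which is why the same corruption on NTFS or EXT4 goes unnoticed until something downstream breaks.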
Survivorship bias - Wikipedia
The problem here is that you have no idea if something has gone wrong. In extreme cases you may have a backup of corrupted data, you just don't know it yet. 75TB running RAID5 is more an experiment than best practice.
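For what it's worth, the back-of-envelope math behind the "RAID5 at that size is an experiment" view is the chance of hitting at least one unrecoverable read error (URE) while reading the surviving disks back during a rebuild. The URE rates below are the usual spec-sheet figures (1 per 10^14 bits for consumer drives, 1 per 10^15 for enterprise), which are worst-case numbers, so treat this as a rough illustration rather than a prediction for any particular array.

```python
import math

def p_ure_during_rebuild(bytes_to_read, ure_per_bit):
    """Probability of at least one unrecoverable read error while reading
    bytes_to_read back, assuming independent errors at the spec-sheet rate:
    1 - (1 - p)^bits ~= 1 - exp(-p * bits)."""
    bits = bytes_to_read * 8
    return 1.0 - math.exp(-ure_per_bit * bits)

rebuild_read = 75e12  # ~75 TB of surviving data to read during a RAID5 rebuild
for label, rate in [("consumer spec, 1 per 1e14 bits", 1e-14),
                    ("enterprise spec, 1 per 1e15 bits", 1e-15)]:
    print(f"{label}: {p_ure_during_rebuild(rebuild_read, rate):.1%} chance of a URE")
```

With single parity, the first URE hit during a rebuild is exactly the block you can no longer reconstruct; with double parity you still can, which is the point being made above. Real drives usually beat their spec sheet, which is also why plenty of large RAID5 rebuilds do complete fine in practice.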