Intel Xeon E5-2670 Deal and Price Tracking

Jack-BH

New Member
Oct 21, 2016
15
5
3
30
The market got flooded with these during the major data centre refreshes. Looks like the price is back on the rise now that the bulk of them have hit the market. Buy 'em up whilst you can!
 

MBastian

Active Member
Jul 17, 2016
205
59
28
Düsseldorf, Germany
The market got flooded with these during the major data centre refreshes. Looks like the price is back on the rise now that the bulk of them have hit the market. Buy 'em up whilst you can!
I do not know the refresh cycles of the big players, but wouldn't it be high time that v2 E5s were phased out? Maybe it would be beneficial to wait a bit longer, not only for v2 CPUs but also for a price drop on 16GB or even (*gasp*) 32GB DDR3 DIMMs.
 
  • Like
Reactions: jstaple2

Jack-BH

New Member
Oct 21, 2016
15
5
3
30
I do not know the refresh cycles of the big players, but wouldn't it be high time that v2 E5s were phased out? Maybe it would be beneficial to wait a bit longer, not only for v2 CPUs but also for a price drop on 16GB or even (*gasp*) 32GB DDR3 DIMMs.
True, it would seem that they should begin to phase out; however, a lot of the servers currently being refreshed take either V1 or V2, so you do find V1 chips in them quite often. 16GB/32GB DIMMs are extortionate and I'm not sure when they will begin to drop, really; I'd see that pricing staying quite flat, maybe with a small decline.
 

J--

Active Member
Aug 13, 2016
199
52
28
41
I don't see why you think the prices will rise. As more datacenters cycle out Haswell chips, you should see demand shift away from these V1/V2 chips drastically. The power savings alone, for the same number of cores and comparable performance, should net you the difference in acquisition cost within a few months.
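The payback arithmetic is easy to sanity-check. A quick sketch; the price premium, watt delta, and tariff below are made-up assumptions, so substitute your own:

[CODE]
# Back-of-the-envelope payback estimate for a more efficient CPU pair.
# All three inputs are illustrative assumptions, not measurements.
extra_cost_usd = 200.0   # assumed price premium for the newer chips
watts_saved = 80.0       # assumed wall-power delta, running 24/7
usd_per_kwh = 0.15       # assumed electricity tariff

kwh_per_month = watts_saved / 1000 * 24 * 30
saved_per_month = kwh_per_month * usd_per_kwh
payback_months = extra_cost_usd / saved_per_month

print(f"${saved_per_month:.2f}/month saved, payback in {payback_months:.1f} months")
# With these numbers: $8.64/month and ~23 months, so "a few months"
# needs a much larger watt delta or a pricier tariff.
[/CODE]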
 

Jack-BH

New Member
Oct 21, 2016
15
5
3
30
I don't see why you think the prices will rise. As more datacenters cycle out Haswell chips, you should see demand shift away from these V1/V2 chips drastically. The power savings alone, for the same number of cores and comparable performance, should net you the difference in acquisition cost within a few months.
The V1 chip price? I'd suggest it'll see a slight rise as the stock that's currently available sells through. You can see this with the 5600-series processors (such as the X5670s/X5675s): they still sell for similar amounts to the V1 chips due to lower availability. I've already witnessed a price rise on these in Europe over the past 4-5 months.
 

jrdnlc

Member
Jun 26, 2015
115
16
18
My server is running great under unRAID. Got a couple of VMs running too.

Ended up going with dual 2670s since I got them for $150 for the pair.

Anyone else here running unRAID?
 

J--

Active Member
Aug 13, 2016
199
52
28
41
My server is running great under unRAID. Got a couple of VMs running too.

Ended up going with dual 2670s since I got them for $150 for the pair.

Anyone else here running unRAID?

I dislike unRAID for several reasons.

1) You MUST boot from a USB thumb drive. They store the license key on it and use the USB drive's serial number to validate key authenticity. Uhh, no thanks.
2) It's paid, but offers no real advantage over what's already free out there. If you want their model of non-striped parity, SnapRAID is a much better option.
3) No dual parity. This is a deal breaker. Most drives fail when rebuilding arrays; no support for multiple parity drives is a non-starter for me.

I'm running Proxmox VE, which I see no downsides to compared to unRAID. It supports ZFS-mirrored boot drives, which is much preferable to a single thumb drive (with literally the lowest-quality NAND you can buy). It's free. It runs SnapRAID if you want to do non-striped parity. It runs KVM/QEMU just like unRAID (which is what popularized unRAID in the first place, even though it wasn't their work to begin with). It supports whatever filesystem you'd like; ZFS is much more robust and has much better performance options if you want them.
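On point 3: SnapRAID itself supports multiple parity files (up to six), so the non-striped model doesn't force single parity. A minimal snapraid.conf sketch; every path here is hypothetical, not a drop-in config:

[CODE]
# snapraid.conf sketch; every path below is hypothetical
parity   /mnt/parity1/snapraid.parity
2-parity /mnt/parity2/snapraid.2-parity

# content files hold the array state; keep copies on several disks
content  /var/snapraid.content
content  /mnt/disk1/.snapraid.content
content  /mnt/disk2/.snapraid.content

# data disks
data d1  /mnt/disk1/
data d2  /mnt/disk2/

exclude *.tmp
[/CODE]

snapraid sync computes the parity and snapraid scrub verifies it; scheduling both is left to cron or similar.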
 
  • Like
Reactions: kroem and Fritz

5mall5nail5

Active Member
Nov 16, 2015
107
32
28
39
I know this is OT, but saying "most drives fail when rebuilding arrays" is very misleading. I work with many different array configurations and even have some 75+ TB RAID5 arrays, because the client needs as much space as possible and doesn't necessarily care about availability... and we've replaced failed disks in that unit without issue, and that's a massive RAID5 span of ~3TB SATAs. I know what you're going for, but saying "most drives fail during rebuild" is extremely misleading.
 
  • Like
Reactions: Fritz

Fritz

Well-Known Member
Apr 6, 2015
3,371
1,375
113
69
In my humble experience, I concur. I've rebuilt the arrays in my 2 FreeNAS boxes several times as I shuffle drives around and have never had a failure during rebuilding. I originally started out with several WD Greens in the arrays and then decided I didn't want them in there. As I recall, there were 8 of them. They were all replaced over the span of a couple of weeks.
 

Churchill

Admiral
Jan 6, 2016
838
213
43
I dislike unRAID for several reasons.

1) You MUST boot from a USB thumb drive. They store the license key on it and use the USB drive's serial number to validate key authenticity. Uhh, no thanks.
2) It's paid, but offers no real advantage over what's already free out there. If you want their model of non-striped parity, SnapRAID is a much better option.
3) No dual parity. This is a deal breaker. Most drives fail when rebuilding arrays; no support for multiple parity drives is a non-starter for me.

I'm running Proxmox VE, which I see no downsides to compared to unRAID. It supports ZFS-mirrored boot drives, which is much preferable to a single thumb drive (with literally the lowest-quality NAND you can buy). It's free. It runs SnapRAID if you want to do non-striped parity. It runs KVM/QEMU just like unRAID (which is what popularized unRAID in the first place, even though it wasn't their work to begin with). It supports whatever filesystem you'd like; ZFS is much more robust and has much better performance options if you want them.



unRAID just works right out of the box. No messy configs, no digging through underlying code, no mystery about how to do things; it's the Apple of home NAS/SAN devices. It's simple, flexible, and does the job for most people who want to slap a bunch of disks together, hit a few buttons, and be off and running.

I've used unRAID since the 4.x days. I tried FreeNAS and Xpenology; in the end I emailed Tom telling him "I'm sorry I left, I lost my key, can I have another one?" and he sent me my new license on a new key.

Having the license file on a USB drive is a non-issue; hell, I know PBXs that work the same way.
"What if the USB drive breaks/dies/fractures/fails?" Make sure you have a backup! Not rocket surgery.
 

unclerunkle

Active Member
Mar 2, 2011
150
38
28
Wisconsin
Disagree again - I could pull ticket history of large RAID50 and RAID5 arrays failing in use, and rebuilding 100% fine w/ over 24 disks in a span.
I have already lost data because of single parity. I will never choose it again.
As usual in life, I think the truth here lies somewhere in between. A lot depends on the disk type and usage. Obviously, enterprise drives will fare better than consumer drives. Likewise, enterprise drives in light use are unlikely to fail during a rebuild. However, even lightly used consumer drives are more likely to fail when presented with a rebuild.

I know I'm stating the obvious, but it also explains how you both can be right.
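One way to put numbers on "it depends" is the unrecoverable-read-error (URE) spec on the drive datasheet. A rough sketch; the rates and the array shape are assumptions, and real drives are not this tidy:

[CODE]
import math

def p_read_error(bytes_read, ure_per_bit):
    """Chance of at least one unrecoverable read error (URE) while
    reading bytes_read bytes, assuming independent datasheet-rate errors."""
    bits = bytes_read * 8
    # log1p/expm1 keep the math stable for tiny per-bit probabilities
    return -math.expm1(bits * math.log1p(-ure_per_bit))

TB = 10**12
to_read = 11 * 3 * TB  # RAID5 rebuild of a 12 x 3TB set reads 11 disks

print(f"consumer   (1 in 1e14 bits): {p_read_error(to_read, 1e-14):.0%}")
print(f"enterprise (1 in 1e15 bits): {p_read_error(to_read, 1e-15):.0%}")
# Roughly 93% vs 23% under these assumptions, so "rebuilds usually hit
# an error" and "rebuilds work fine" can both be true stories.
[/CODE]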
 

BackupProphet

Well-Known Member
Jul 2, 2014
1,083
640
113
Stavanger, Norway
olavgg.com
The problem people ignore is that if one block is screwed up during a rebuild, you may not get a consistent filesystem when the rebuild is done. With double parity you at least have the chance to reconstruct that damaged block.

After 7 years with ZFS, I've seen plenty of damaged blocks. And every time I see them I wonder how anyone can trust filesystems like NTFS, ext4, or proprietary RAID.
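For anyone wondering how ZFS even notices those damaged blocks: it stores a checksum (fletcher4 by default, SHA-256 optionally) with every block pointer and verifies it on each read. A toy sketch of the idea, not ZFS internals:

[CODE]
import hashlib, os

# Conceptual sketch only: ZFS keeps the checksum in the block pointer
# and self-heals from redundancy; this just shows the detection step.
block = os.urandom(4096)                   # pretend on-disk data block
stored = hashlib.sha256(block).digest()    # checksum written with it

corrupted = bytearray(block)
corrupted[1000] ^= 0x01                    # one silent bit flip

if hashlib.sha256(bytes(corrupted)).digest() != stored:
    print("mismatch detected; read a redundant copy instead")
# A non-checksumming filesystem returns the flipped data as if it were fine.
[/CODE]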
 

5mall5nail5

Active Member
Nov 16, 2015
107
32
28
39
The problem people ignore is that if one block is screwed up during a rebuild, you may not get a consistent filesystem when the rebuild is done. With double parity you at least have the chance to reconstruct that damaged block.

After 7 years with ZFS, I've seen plenty of damaged blocks. And every time I see them I wonder how anyone can trust filesystems like NTFS, ext4, or proprietary RAID.
There is no substitute for backing up or replicating data. You can have RAIDZ3. Or you can use NetApp's new RAID-TEC. It doesn't matter: if you lose enough pieces of equipment, you will lose data. The point is, there's a place for everything. I have a client with 75TB of RAID5 storage because they need as much staging area as possible; it's not long-retention data. Haven't had an issue rebuilding it. I have clients with RAID50, RAID6, RAID10, RAID-DP, and RAID-TEC in all sorts of varying pool sizes... they all have different purposes.
 

BackupProphet

Well-Known Member
Jul 2, 2014
1,083
640
113
Stavanger, Norway
olavgg.com
There is no substitute for backing up or replicating data. You can have RAIDZ3. Or you can use NetApp's new RAID-TEC. It doesn't matter: if you lose enough pieces of equipment, you will lose data. The point is, there's a place for everything. I have a client with 75TB of RAID5 storage because they need as much staging area as possible; it's not long-retention data. Haven't had an issue rebuilding it. I have clients with RAID50, RAID6, RAID10, RAID-DP, and RAID-TEC in all sorts of varying pool sizes... they all have different purposes.
The problem here is that you have no idea whether something has gone wrong. In the extreme case you may have backups of corrupted data and just not know it yet. 75TB running RAID5 is more an experiment than best practice.

If you need cost-effective storage, you should take a look at Ceph, where you can have erasure coding with, for example, 24 shards of which 3 are parity blocks. Rebuild times should also be a lot better than a similar setup with traditional RAID.
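To make that trade-off concrete, here's a small capacity-versus-tolerance comparison; the 21+3 split matches the 24-shards/3-parity example above, and the other layouts are arbitrary illustrations:

[CODE]
# k data shards + m parity shards: usable fraction is k/(k+m)
# and the layout survives m simultaneous disk/shard losses.
layouts = {
    "RAID5  11+1": (11, 1),
    "RAID6  10+2": (10, 2),
    "Ceph EC 21+3": (21, 3),
}

for name, (k, m) in layouts.items():
    print(f"{name}: {k / (k + m):.1%} usable, survives {m} failures")
# 91.7% / 83.3% / 87.5% here: the 21+3 profile gets triple-parity
# tolerance while wasting less space than RAID6 does in a small set.
[/CODE]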
 
  • Like
Reactions: heathen