Upgrade options for my Opteron 6128


Radian

Member
Mar 1, 2011
What is a good option for replacing my 6128s? I was considering the 6380, but it is only a 0.5 GHz increase in frequency, whereas the 6328 is a 1.2 GHz improvement.

This is for a Hyper-V host with W2K12 running a mix of OSes.

How do Turbo and half turbo work with Hyper-V? I've yet to see any impact on the 6128s.

I guess my question is: should I sacrifice core count for clock speed?
 

Patriot

Moderator
Apr 18, 2011
There is no turbo on the 61xx generation...

As for cores versus frequency... it really depends on how many VMs you are running and what they are tasked with.
 

Jeggs101

Well-Known Member
Dec 29, 2010
Cores or threads is really the question that needs answering.
 

Patriot

Moderator
Apr 18, 2011
And it really depends on the workload... so the short answer is: more info needed.

I mainly fold with my compute power.
Folding cares equally about frequency and cores... so I have many cores at high frequency (4x 12c 61xx ES @ 3.5 GHz).

I would not go down to 8 cores on the IL or AD side of things.

As far as performance per clock versus Magny-Cours: IL needs about four more cores to match it; AD is ~6% faster.

I would go 12 or 16c on AD for an upgrade.

If your tasks are highly threaded... go 16c.
It is certainly easier to add more VMs the more cores you have.
 

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
Some food for thought:

First: If you have lots of VMs then adding cores might be the right choice. A dual 6128 has 16 cores. If I had lots of VMs - say more than 16 - then I would probably be craving more cores. On the other hand, if I had a relatively small number of VMs then adding MHz might be the best speed bump, assuming that I had already made the upgrade to SSD. You are on SSD, of course?
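To put very rough numbers on that tradeoff, here's a quick sketch. The base clocks and core counts are AMD's published specs; treating "aggregate GHz" as a proxy for VM throughput is a simplification that ignores the per-clock differences between generations:

Code:
# Back-of-the-envelope comparison of the dual-socket upgrade options.
# "Aggregate GHz" (cores x base clock) is only a crude proxy for
# virtualization throughput; it ignores IPC gains in Abu Dhabi.
options = {
    # name: (sockets, cores_per_cpu, base_ghz)
    "2x Opteron 6128": (2, 8, 2.0),
    "2x Opteron 6328": (2, 8, 3.2),
    "2x Opteron 6380": (2, 16, 2.5),
}

for name, (sockets, cores, ghz) in options.items():
    total = sockets * cores
    print(f"{name}: {total} cores @ {ghz} GHz = {total * ghz:.1f} aggregate GHz")

# 2x Opteron 6128: 16 cores @ 2.0 GHz = 32.0 aggregate GHz
# 2x Opteron 6328: 16 cores @ 3.2 GHz = 51.2 aggregate GHz
# 2x Opteron 6380: 32 cores @ 2.5 GHz = 80.0 aggregate GHz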

Second: My personal experience, albeit biased toward hundreds of threads and heavy IO, is that a used 61xx is hard to beat on price, the 62xx was a step in the wrong direction, and the 63xx is a decent chip but too expensive. Frankly, if your VM project can justify 63xx pricing, then it might be time to go with a lower-end Xeon, which does very well with virtualization.

Also: "AD" is short for "Abu Dhabi" which is code for the Opteron 63xx series.

 

Radian

Member
Mar 1, 2011
Yes to SSD. I have two Intel 520 240GB drives in RAID1 for the OS and a 10TB RAID6 setup on an LSI 9260-4i, split between VMs and media storage. I'm kind of stuck on AMD for the moment, as I'm not ready to replace platforms as well as CPUs.

I decided to go with the 6328s, as they are less than half the price of the 16-core parts. I considered the 12-core, but since I have an older KDGPE motherboard, I'll use that to house the older 6128s for more VMs in the future. I'm hoping to see some decent net results in the 10 or so VMs I run.

I guess I now need to look at some kind of SAN setup once both servers are running. What's recommended for shared VM storage?
 

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
Radian,

So your VMs are using non-SSD storage? How many VMs are you running, and how many disks are in that RAID6 array?

RAID6 is nice for data protection of course, but it's terrible for IOPS and I wonder if that isn't holding you back. 10TB of RAID6 is most likely six 2TB drives or twelve 1TB drives. With 12 drives and RAID6, you'll have just 200-400 write IOPS at the drive level - half that if you are using six drives. Your LSI card has some cache so if you measured IOPS it would be higher than that, but not dramatically since your data will be spread across a far larger area than your 512MB of cache can hold. That's not enough for one busy VM, much less several.
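For anyone who wants to see where those numbers come from, here's a quick sketch. The per-drive figures are assumptions typical of 7,200 RPM disks, and the RAID6 write penalty of 6 reflects each random write costing roughly three reads plus three writes across the stripe:

Code:
# Rough RAID6 write-IOPS estimate at the spindle level.
# Assumes ~100-200 random IOPS per 7.2K drive (an assumption, not a
# measurement) and the standard RAID6 write penalty of 6.
def raid6_write_iops(drives, per_drive_iops, penalty=6):
    return drives * per_drive_iops / penalty

for drives in (6, 12):
    low = raid6_write_iops(drives, 100)
    high = raid6_write_iops(drives, 200)
    print(f"{drives} drives: ~{low:.0f}-{high:.0f} write IOPS")

# 6 drives: ~100-200 write IOPS
# 12 drives: ~200-400 write IOPS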

If you can, give this a try: Temporarily migrate one or two of your VMs to a local SSD drive. If you can do so easily, add a few local SSD drives and then move all of your VMs. I suspect that you'll immediately start thinking "I need better storage" instead of "I need more CPU".
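If you'd rather script that move than click through Hyper-V Manager, here's a minimal sketch. Move-VMStorage is the stock Hyper-V cmdlet for live storage migration on Server 2012; the VM name and destination path below are just placeholders:

Code:
# Minimal sketch: shell out to the Hyper-V PowerShell module to
# live-migrate one VM's disks and config onto a local SSD volume.
# "web01" and the destination path are hypothetical placeholders.
import subprocess

def move_vm_storage(vm_name, dest_path):
    cmd = (f"Move-VMStorage -VMName '{vm_name}' "
           f"-DestinationStoragePath '{dest_path}'")
    subprocess.run(["powershell.exe", "-NoProfile", "-Command", cmd],
                   check=True)

move_vm_storage("web01", r"D:\SSD-VMs\web01")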

If that ends up being the case, we can talk about long-term storage options for VMs.

My first-hand experience: I have an HP MSA2000 G2 SAN. When I first acquired it, I thought it might make a good storage location for a VM cluster. I knew that RAID6 has a serious IOPS penalty, but I wondered if the 4GB worth of cache on the controllers would more than make up for it.

I created a 10TB RAID6 array out of twelve 1TB disks. Using IOMeter, I measured - recalling from memory - around 9,000 IOPS, which means that the cache was working very well. I then connected the array to a host via SAS (1,000 MB/s max throughput) and migrated several VMs to the array. Performance was disappointing - the VMs felt very sluggish. When I used IOMeter to measure IOPS inside of each VM with all VMs running at the same time, the numbers were awful. In the end, I migrated the VMs to local 512GB Samsung SSD drives. I can run eight or ten VMs from one SSD drive and still get far better real-world performance than the MSA array with its dual controllers, twelve disks, and 4GB of cache. By the way: I also tried RAID10 on the MSA, which was much better for VMs but still not nearly as good as local SSD.


 