Virtual Lab question - CPU and memory


hammer84

New Member
Aug 26, 2013
1
0
0
First - great forum, I look forward to spending some time on here!

I have outgrown my little home-built lab box. Getting ready to upgrade and wanted some opinions. Looking to buy a C6100, but I'm not sure what configuration to get.

Common Configs:

L5520 quad core
L5639 six core

either 96, 128, or 192 GB of RAM (total across the four nodes)

I was leaning toward the L5639 with 128 GB of RAM: 4 nodes x 32 GB of RAM each, with dual six-core procs per node.

My question is basically: is RAM or CPU more critical? I don't mind buying up front (one-time buy), but I don't want to way overbuy on resources I'll never use.

I currently use my lab to mimic customer environments for demos and solution development.
The lab will contain multiple servers running various OSes, with lots of different applications but no real load except for log collection.

Thanks!
 

BThunderW

Active Member
Jul 8, 2013
242
25
28
Canada, eh?
www.copyerror.com
For a lab environment the L5520 is a pretty damn good choice. Even in production those L5520s can carry their weight. Depending on what you're planning to use them for, in virtualization you'll probably run out of RAM before you run out of computing power. I'm running the C6100 in production, and with about 30 VMs acting as web servers, SQL servers, game servers, etc., the L5520s are pretty much idling.
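
Quick back-of-napkin version of that point (Python; every number below is an assumed example for illustration, not a measurement from my boxes):

```python
# Back-of-napkin check of "RAM runs out before CPU" for one C6100 node.
# Every value below is an assumed example, not a measurement.

ram_per_node_gb = 32     # 128 GB split across 4 nodes
cores_per_node = 8       # dual quad-core L5520s
vcpu_per_core = 4        # assumed 4:1 oversubscription, fine for light lab loads
ram_per_vm_gb = 4        # assumed average lab VM
vcpu_per_vm = 2          # assumed average lab VM

ram_limited = ram_per_node_gb // ram_per_vm_gb               # 8 VMs
cpu_limited = cores_per_node * vcpu_per_core // vcpu_per_vm  # 16 VMs

print(f"RAM-limited: {ram_limited} VMs/node, CPU-limited: {cpu_limited} VMs/node")
# RAM caps you at 8 VMs/node while the CPUs could host 16 -> RAM is the wall.
```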

 

RimBlock

Active Member
Sep 18, 2011
837
28
28
Singapore
BThunderW said:
For a lab environment the L5520 is a pretty damn good choice. Even in production those L5520s can carry their weight. Depending on what you're planning to use them for, in virtualization you'll probably run out of RAM before you run out of computing power. I'm running the C6100 in production, and with about 30 VMs acting as web servers, SQL servers, game servers, etc., the L5520s are pretty much idling.

Would agree.

If you get a unit with L5520s and 24GB of RAM, then half of your RAM slots will be empty per node, so you can double up your RAM as your needs grow with pretty cheap 4GB sticks. If you think you'll need more than 48GB per node, it may be cheaper to buy it all at the start to avoid the hit of swapping the 4GB sticks out for 8GB sticks.
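
Rough sketch of that upgrade math for one node (Python; the 12-slot count and stick prices are my placeholder assumptions, so plug in real numbers before deciding):

```python
# Sketch of the DIMM upgrade math for one C6100 node (12 slots assumed).
# Stick prices below are placeholder assumptions for illustration.

SLOTS = 12
PRICE_4GB, PRICE_8GB = 11, 40   # assumed $/stick

def upgrade_cost(target_gb, existing_4gb_sticks=6):
    """Cheapest way from 24GB (six 4GB sticks) to target_gb."""
    if target_gb <= SLOTS * 4:
        # Fits by filling the empty slots with more cheap 4GB sticks.
        extra = -(-(target_gb - existing_4gb_sticks * 4) // 4)  # ceiling division
        return extra * PRICE_4GB
    # Past 48GB the 4GB sticks must come out in favor of 8GB sticks.
    return -(-target_gb // 8) * PRICE_8GB

for target in (48, 96):
    print(f"{target} GB/node: ~${upgrade_cost(target)}")
# 48 GB: six more 4GB sticks (~$66). 96 GB: twelve 8GB sticks (~$480),
# and the original 4GB sticks become spares -- the "hit" mentioned above.
```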

I have been using virtualization (ESXi) for a number of years, ever since I put Windows Home Server 2011 in a VM and saw that it was using next to no CPU power. Unless you are going to be doing some heavy CPU-intensive work in your VMs, RAM is king... fast disk also helps (speed / IOPS wise).

RB
 

tby

Active Member
Aug 22, 2013
222
111
43
Snellville, GA
set-inform.com
I'm eagerly awaiting an L5520 / 96GB setup to arrive. 32 cores @ 3GB per core ought to be plenty for me for a while. I'm going to use it to study for the VCP and mimic the testing environment at my day job. Sharing resources with co-workers sucks.

 

mrkrad

Well-Known Member
Oct 13, 2012
1,244
52
48
Can you discuss your ESOS setup? I'd love to try it, but I'm not sure if it supports any modern tech like:

1. SAS target? Cheap point-to-point
2. FCoE target? LightPulse driver in VN2VN direct connect

I have iSCSI covered with the LeftHand VSA but would like to try others.

Given 4GB ECC RDIMMs at $10-11 each - the more RAM the merrier. Those empty slots need to be filled. For instance, a ramdisk for tempdb on SQL Server equals huge gains when you are limited on I/O speed. Even SSD can't touch a ramdisk for tempdb, and even if you have Enterprise Edition you may find the server decides to use disk over RAM. I found that simply upping the query memory in Resource Governor reduced tempdb I/O by 80%!

But if you only have Standard Edition SQL Server, you can do one tempdb file per core on SSD, pre-grown, to gain a huge edge! So even if you have 32GB of RAM for SQL, you will find Microsoft products use the piss out of tempdb and will benefit from putting the other 32GB toward a ramdisk for tempdb!
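
For anyone wanting to try the one-file-per-core trick, here's a rough sketch (Python, just to generate the T-SQL; the core count, file size, and S:\tempdb path are my assumptions - adjust for your own box):

```python
# Sketch: emit the T-SQL for the "one tempdb data file per core, pre-grown"
# trick described above. Core count, sizes, and the S:\tempdb path are all
# assumptions -- adjust before running the output against a real instance.

cores = 8          # assumed dual quad-core box
size_mb = 4096     # assumed pre-grow size per file
path = r"S:\tempdb"

stmts = [
    # Pre-grow the default primary file and turn off autogrowth, so all
    # files stay equal-sized and SQL Server round-robins them evenly.
    f"ALTER DATABASE tempdb MODIFY FILE "
    f"(NAME = tempdev, SIZE = {size_mb}MB, FILEGROWTH = 0);"
]
for i in range(2, cores + 1):   # file 1 (tempdev) already exists
    stmts.append(
        f"ALTER DATABASE tempdb ADD FILE "
        f"(NAME = tempdev{i}, FILENAME = '{path}\\tempdb{i}.ndf', "
        f"SIZE = {size_mb}MB, FILEGROWTH = 0);"
    )

print("\n".join(stmts))
```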

You should look at the InfiniBand options some folks have here. You just can't beat 10GbE or greater! FC is cool, but 10, 20, or 40Gb is far faster! With a simple node setup you may even be able to ditch the switch and direct-connect!

With L5639s at $70 each, man, it is hard not to pick up some of these! Serious bang-for-the-buck CPUs! A single L5639 with a 10GbE NIC and 8 SSDs can push a full 10GbE while working at 100 watts! Sick!

Our file servers average 240 watts with two dual-port 10GbE NICs, one L5639, and 8 15K SAS drives! I have a DL380 G5 but don't use it because it pulls 300-400 watts doing just about nothing (dual E5440 / 32GB / 0 drives)! The operating footprint (cooling/UPS) is just not worth it! We swapped in one L5639 DL180 with 24GB of RAM, plus two dual-port 10GbE NICs, to replace an old Core 2-based server, and it uses LESS power!

The way SDS/SDN is going, pretty soon you can ditch all the FC/iSCSI/SAN/NAS gear and just rock and roll a C6100 or two to do it all! Can't wait to try out ESXi 5.5 for this (hoping Microsoft can step up and provide some of the same features cheaper!) - VSAN / vFlash cache.
 

tby

Active Member
Aug 22, 2013
222
111
43
Snellville, GA
set-inform.com
I'm new to ESOS, but it appears to support FCoE targets. SCST, which is what ESOS is based on, has SAS target drivers for Marvell and LSI, but my Google-fu couldn't uncover anyone who has used them. Here's a pretty good write-up from a guy who built an HA solution using ESOS, DRBD, IB, & 8Gb FC:

Marc's Adventures in IT Land: Building & Using a Highly Available ESOS Disk Array

I'm on a tight budget, so FCoE / 10GbE / InfiniBand were too spendy. I still need to buy one more FC HBA and the LC/LC cables, but it looks like my total cost will be around $200, +/- a fiver.
 

mrkrad

Well-Known Member
Oct 13, 2012
1,244
52
48
Yeah, with an 8Gbps fiber switch costing $295 and older 4Gbps HBAs at $20, it is cheap to do FC, and it's very well supported (FCoE is a mess and very expensive!).

I was thinking of picking up a pair of those $295 8-port FC switches and some old LightPulse 8Gbps adapters, but the FC adapters ended up costing more than 10GbE NICs!

I wish they could have done IPoFC, but I could never find a solid solution there.