Hi All,
Long-time lurker, first-time poster. I'm looking for feedback on an upgrade I'm desperately in need of! Sorry for the long post; I wanted to get the background in place so you have some idea of what my workload consists of.
My setup currently consists of 1 x Dell R510, 2 x Dell C6100, and an HP D2600 DAS hooked up to the R510, running oVirt/KVM primarily for build/continuous-integration services. They all hang off a Cisco SG200-18 switch - mostly single 1GbE connections, with a few bonded 2 x 1GbE links (I'm pretty much out of ports on the switch once you include the rest of my home network).
I build and maintain Linux images for various third parties who ship their solutions on kiosks/embedded systems. On a good day I can have at least 10 builds running; each typically starts with very IO-intensive writes, then goes CPU-intensive, and finally comes back to IO-heavy reads as everything gets packaged up. A typical build takes around 2 hours. The builds are mostly automated (when I commit to a repository, the build server picks up the change and off it goes creating the image). For a sense of what a build involves: consider installing a Linux distro, compiling a bunch of binaries and installing them, and then making a disk image of the distro with the installed software pre-configured. Typical image sizes are around 600MB compressed, but some go out to 2-3GB compressed when it's a multimedia kiosk etc.
My storage is a real mess. I have a big mix of drives: sizes, vendors, 2.5"/3.5", SAS, SATA, SSDs, and platters from 5.2K RPM through to 15K. Most of it is local to either the C6100 nodes or the R510 (in the case of the D2600).
Then I've got a real mix of array setups: some RAID 6 (on the R510), some software mdadm RAID 0 (for the moderately critical stuff), and some hardware RAID 0. On top of that there's a bunch of standalone drives dedicated to single VM images (for IO isolation, I'd call it - if one VM is thrashing its disk while building an image, it doesn't affect the other VMs).
I've pretty much hit a limit where adding more build VMs just slows everything else down, and my storage is obviously the limiting factor right now. Builds sometimes continue well into the night, and a few times they've still been going when I get up in the morning.
So I'm planning to replace my existing storage - namely the drives. My thoughts are:
1) Populate the D2600 with at least 146GB 15K SAS drives. The data on these drives is "throwaway" VM images - meaning if I lose it to an HDD failure or whatnot, it's not a concern. The build VM images stored here are created from a template when a build kicks off and deleted when the build finishes, so at most I'd just lose the running builds.
What I really need here is some decent IO at a moderate price. I know I could go SSD (and will eventually when the invoices get paid!) but I need to do something in the short term.
I can't decide if I should go for one huge 25-spindle array (RAID 0?), split the drives across 3 arrays, or just run them standalone with 2-3 VMs dedicated to each spindle.
Advice/thoughts on that point?
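I did try some rough back-of-envelope math on those three options. Everything in this sketch is a guess (per-spindle IOPS, failure rate), so treat it as illustration only:

```python
# Back-of-envelope numbers for the three D2600 layouts.
# All figures are assumptions: ~175 random IOPS per 15K SAS spindle,
# ~3% annual failure rate per drive - adjust to taste.

IOPS_PER_15K = 175
AFR = 0.03          # assumed annual failure rate per drive

def chance_any_fails(n, afr=AFR):
    """Chance that at least one of n drives fails within a year."""
    return 1 - (1 - afr) ** n

layouts = {
    "1 x 25-drive RAID 0":  [25],
    "3 x ~8-drive RAID 0":  [9, 8, 8],
    "25 standalone drives": [1] * 25,
}

for name, arrays in layouts.items():
    total_iops = sum(n * IOPS_PER_15K for n in arrays)
    # In RAID 0 a single failure takes out the whole array, so report the
    # odds of losing the biggest array (and every build running on it).
    worst = max(arrays)
    print(f"{name:22s} ~{total_iops:5d} total IOPS, "
          f"{chance_any_fails(worst):.0%}/year chance of losing an array")
```

The aggregate IOPS comes out the same either way; the real trade-off seems to be how many spindles any single build can hit at once versus how much one dead drive takes down with it.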
2) For business data (repositories, built images, etc.) that does have a business impact for me, I'm planning to use the R510's 12 bays. I'm currently using about 6TB and I'd like to provision at least 12TB here. I was thinking 12 x 3TB WD Red, but I've seen a lot of talk about rebuild times with drives that size. I can't really go for ZFS here, so it would be hardware RAID (via an H700 controller). Performance is not really a concern here - availability is. This storage domain needs to be mounted by the build VMs and is where the completed images end up, so it would be exported via NFS (it needs to be shared). Any advice here?
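Here's my rough math on the rebuild-time worry (the rebuild rate is a pure guess for an H700 rebuilding while still serving NFS):

```python
# Rough comparison of RAID 6 vs RAID 10 for 12 x 3TB in the R510.
# The rebuild rate is an assumption; real-world numbers could easily
# be half of this when the array is still busy.

DRIVES, SIZE_TB = 12, 3
REBUILD_MB_S = 60                                # assumed effective rebuild rate

def rebuild_hours(size_tb, rate_mb_s=REBUILD_MB_S):
    return size_tb * 1e6 / rate_mb_s / 3600      # TB -> MB, seconds -> hours

raid6_usable  = (DRIVES - 2) * SIZE_TB           # two drives lost to parity
raid10_usable = (DRIVES // 2) * SIZE_TB          # mirrored pairs

print(f"RAID 6 : {raid6_usable} TB usable, survives any two failures, "
      f"rebuild reads every other drive for ~{rebuild_hours(SIZE_TB):.0f} h")
print(f"RAID 10: {raid10_usable} TB usable, survives one failure per mirror, "
      f"rebuild only copies the partner drive (~{rebuild_hours(SIZE_TB):.0f} h, "
      f"far less stress on the rest of the array)")
```

Either layout clears the 12TB I need, so I guess the question is whether the extra capacity of RAID 6 is worth the longer, more stressful rebuilds.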
3) For the remainder of the storage on the C6100s (they are the SFF version) - this would basically be for my "infrastructure" VMs. Storage requirements here are a modest 2-3TB. Performance is not critical, but VM migration and availability are. My thought is to set up a few nodes on the C6100s (not all) with 3 x 4TB WD Red in RAID 5, and then put GlusterFS on top so I can mirror between nodes (and thus migrate VMs and shut down nodes at night).
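A quick capacity sanity check on that idea (assuming two nodes mirroring each other, i.e. a GlusterFS replica-2 layout):

```python
# Quick sanity check on capacity for the GlusterFS idea. Assumes two
# C6100 nodes, each with a local 3 x 4TB RAID 5, mirrored to each other
# as a GlusterFS replica-2 volume.

DRIVES_PER_NODE, DRIVE_TB = 3, 4

per_node_tb = (DRIVES_PER_NODE - 1) * DRIVE_TB   # RAID 5 loses one drive to parity
usable_tb = per_node_tb                          # replica 2: capacity of one node
print(f"{per_node_tb} TB per node, {usable_tb} TB usable after replication "
      f"(vs the 2-3 TB I actually need)")
```

So I'd be massively over-provisioned there; smaller (and cooler-running) drives, or even a simple mirrored pair per node, would already cover it - unless I'm missing something.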
All up, I have 71 SFF bays and 12 LFF bays I could populate across the C6100s, R510, and D2600, but due to power concerns I'd prefer not to have to use all of them! (Singapore is already hot enough as it is!)
All the storage in 1) and 2) needs to be mounted across VMs/nodes, so the final thing I'm looking at is a network upgrade to 10GbE (rough throughput math after my questions below). I have an opportunity to get a Cisco Nexus 5010 switch cheaply (around US$500), but I haven't pulled the trigger yet as I have a few concerns:
1) It seems like there are a bunch of license add-ons to enable features. Does anybody have experience with the Nexus line? If I'm just planning to do Ethernet switching and not get fancy with FCoE etc., should I be fine?
2) The Nexus I can get comes without any SFPs. I'd need at least 10 (9 servers/nodes + an uplink to my SG200). A quick look shows the SFPs would cost more than the switch! Does anybody know if there is any vendor lock-in for SFPs on this switch? I did read a spec sheet that alluded to third-party support, but it wasn't clear.
3) Can the standard ports (the Nexus has no add-on card installed) take a 1Gb SFP to uplink to the SG200 (which has 2 SFP ports, but no 10GbE)? I tried reading the docs/specs, but it's still not clear to me.
4) Finally, I assume the 10Gb mezzanine cards for the C6100 should be fine, right?
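For context, here's the rough throughput math pushing me toward 10Gb. The usable link rates are assumptions, not measurements:

```python
# Rough throughput math behind the 10GbE idea: 10 builds all hitting one
# NFS storage domain at once. Usable link rates are assumptions
# (~115 MB/s on 1GbE, ~1150 MB/s on 10GbE).

LINKS_MB_S = {"1GbE": 115, "2 x 1GbE bond": 230, "10GbE": 1150}
CONCURRENT_BUILDS = 10
IMAGE_GB = 3                          # worst-case compressed image

for name, mb_s in LINKS_MB_S.items():
    # Note: a bond usually won't push a single TCP/NFS stream past one link,
    # so the bonded number is optimistic.
    per_build = mb_s / CONCURRENT_BUILDS
    copy_s = IMAGE_GB * 1000 / mb_s
    print(f"{name:14s} ~{per_build:5.0f} MB/s per build with all 10 running, "
          f"~{copy_s:5.1f} s to push one {IMAGE_GB}GB image")
```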
Last question - backups. I have a little old LTO3 drive that barely keeps up. Should I consider getting a few of the 6TB drives that are coming out, whacking them in a cheap NAS, and doing disk-to-disk (I'm using LVM snapshots on my main "business" storage pool), or getting an Amazon Glacier backup going?
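My rough numbers on the backup options. The LTO3 specs are the published native figures; the link speeds, my uplink, and the Glacier price are guesses that I'd verify before committing:

```python
# Rough backup-window / cost comparison for ~6TB of business data.
# Link speeds, the home uplink, and the Glacier price are assumptions.

DATA_TB = 6
options_mb_s = {
    "LTO3 tape (~80 MB/s native)":   80,
    "Disk-to-disk over 1GbE":        115,
    "Disk-to-disk over 10GbE":       1150,
    "Glacier over ~50 Mbit uplink":  6,
}

for name, mb_s in options_mb_s.items():
    hours = DATA_TB * 1e6 / mb_s / 3600
    print(f"{name:30s} full backup ~{hours:6.1f} h")

# LTO3 is ~400GB native per cartridge, so a full 6TB set is also ~15 tape swaps.
GLACIER_USD_GB_MONTH = 0.01           # assumed - check current pricing
print(f"Glacier storage ~${DATA_TB * 1000 * GLACIER_USD_GB_MONTH:.0f}/month, "
      f"plus retrieval fees if I ever need the data back")
```

Those are full-backup times only, of course; incrementals off the LVM snapshots would change the picture.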
And the obvious requirement - cost-effective! I'd love to have an enterprise budget, but I need some real bang for the buck here. I'm also constrained by the fact that I live in Singapore, so shipping charges add up real quick... (don't ask how much it cost me to get the C6100s and the D2600 to Singapore from the US!)
I was initially budgeting around US$3K for this, but the 10Gb upgrade and drives might push past that, so I could maybe go an extra $1K if it's a really compelling improvement.