Sup crew!
New DR/DEV setup is finally rolling so I thought I'd share some of the build process. This is the first time I've posted anything like this so if there's any more information you all would like to see, just let me know!
Final Build:
4 Compute Nodes:
Operating System/ Storage Platform: ESXi 6.0 U1
CPU: 2x Intel Xeon X5560
Motherboard: X8DTN+
Chassis: 12 Bay SC826-E16
Drives: 4x HGST Deskstar NAS 4TB (7200 RPM, 64MB cache) and 2x Samsung 850 Pro 512GB
ADATA Premier SP550 128GB (boot media)
RAM: 256GB (16x 16GB)
Add-in Cards: LSI 9211-8i IT mode / 2x MHQH29B-XTR InfiniBand
Power Supply: 2x 800W PS
And 4 Storage Nodes:
Operating System/ Storage Platform: CentOS 7
CPU: Single Intel Xeon X5560
Motherboard: X8DTN+
Chassis: 12 Bay SC826-E16
Drives: 4x HGST Deskstar NAS 4TB (7200 RPM, 64MB cache) and 2x Samsung 850 Pro 512GB
ADATA Premier SP550 128GB (boot media)
RAM: 32GB (8x 4GB)
Add-in Cards: LSI 9211-8i IT mode / 1x MHQH29B-XTR InfiniBand
Power Supply: 2x 800W PS
Current Capacity: 113TB
Backstory:
I work in the oil and gas industry, and man have things been rough over the past year. Management finally listened to our complaints about facilities and a cohesive DR strategy and gave us the go-ahead to build out a secondary system that could do double duty as a failover system and development environment (new ERP system coming). Thing is, it's not exactly cheap to duplicate a UCS chassis and a Nimble CS400. We also needed a substantial amount of scalability and capacity for our seismic and geology data. Short story is we ended up going with ScaleIO for the storage platform and Zerto for the replication component.
Our plan was to steadily add 4 drives per month until the system reached 256TB of capacity - we figured we would start at around 100TB to get things off the ground.
Everything excluding the drives was purchased off of eBay. I'll post up an FAQ on why we went this direction as more time frees up.
Since I wasn't entirely sure any of this would work the way I wanted, I went through a couple of build phases to prove the system out before asking for more money.
Phase 1:
I wanted the system to take advantage of a meshed InfiniBand network for storage and vMotion traffic, so for the POC we ended up buying two compute nodes and a Mellanox IS5025. To see how flexible ScaleIO could be with hardware, I threw in a desktop (lol) and an older Dell PE server.
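One gotcha from this phase worth calling out: the IS5025 is externally managed, so there's no embedded subnet manager and the fabric stays dark until something on the network runs one. Getting the IB side up on a CentOS 7 node looks roughly like the sketch below - packages are from the stock repos, but the interface name and IP are just placeholders, so adjust for your own boxes.

# the IS5025 has no built-in subnet manager, so run OpenSM on one of the nodes
yum install -y opensm infiniband-diags rdma
systemctl enable rdma opensm
systemctl start rdma opensm

# sanity check - the port should report State: Active once the SM is up
ibstat

# basic IPoIB interface for the storage network (address is a placeholder)
cat > /etc/sysconfig/network-scripts/ifcfg-ib0 <<'EOF'
DEVICE=ib0
TYPE=InfiniBand
BOOTPROTO=static
IPADDR=192.168.50.11
NETMASK=255.255.255.0
ONBOOT=yes
EOF
ifup ib0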
1gig ethernet on the client side, IB on the backend.
Using USB 2.0 drives as boot media in this setup blew...
Desktop hyyyype.
The office I was testing in got kinda warm, but storage testing and replication worked like a champ.
Phase 2:
I trashed all of my testing config from phase 1 and started from scratch with the new hardware. My plan was to build out a base three-node cluster and expand node by node until everything made it to our rack.
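The expansion itself is the easy part with ScaleIO - once the base MDM cluster is up, each additional storage node is basically an SDS package install plus a couple of CLI calls from the MDM. A rough sketch of what that looks like (the rpm name varies by release, the IP/domain/pool names are placeholders, and the scli flags are from memory, so verify against scli --help on your version):

# on the new storage node: install the SDS package
rpm -i EMC-ScaleIO-sds-*.el7.x86_64.rpm

# from the primary MDM: register the node and hand it a raw device
scli --login --username admin
scli --add_sds --sds_ip 192.168.50.14 --sds_name sds04 \
     --protection_domain_name pd1 --storage_pool_name pool1 \
     --device_path /dev/sdb
# repeat for the remaining drives, then confirm:
scli --query_all_sds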
Everything ordered
Three node setup (a bit cleaner this time)
The three nodes were racked and validated. By this time, I had added a second IS5025 and dual IB cards to each compute node. I'll post a diagram of the topology for those interested in how the storage fabric was configured.
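Until that's drawn up, the short version on the ESXi side: each compute node has one ConnectX-2 port into each IS5025, with separate vmkernel interfaces for ScaleIO data and vMotion riding on the IPoIB uplinks. A rough esxcli sketch for one of the vMotion vmkernels - the vSwitch, portgroup, vmnic, and IP below are placeholders for whatever the Mellanox driver enumerates on your host:

# vSwitch with an IPoIB uplink (the driver presents the IB ports as vmnics)
esxcli network vswitch standard add --vswitch-name vSwitch1
esxcli network vswitch standard uplink add --vswitch-name vSwitch1 --uplink-name vmnic2

# portgroup + vmkernel interface, then tag it for vMotion
esxcli network vswitch standard portgroup add --vswitch-name vSwitch1 --portgroup-name vMotion-IB
esxcli network ip interface add --interface-name vmk1 --portgroup-name vMotion-IB
esxcli network ip interface ipv4 set --interface-name vmk1 --type static --ipv4 192.168.60.11 --netmask 255.255.255.0
esxcli network ip interface tag add --interface-name vmk1 --tagname VMotion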
Phase 3:
All racked and configured!
Part of the Nimble is shown at the very top, and two UCS Fabric Interconnects are chillin' on the bottom.
Don't judge those labels. Accounting stole the legit printer, so we had to make do.
That's a 3750-G in the middle handling client side traffic.
IB cables were so freaking long.
Storage node on top, two compute nodes below it.
In the dark?
vCenter (ssh is enabled - haven't mass disabled the alerts yet)
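(Side note for anyone else annoyed by that nag: rather than turning SSH back off, the advanced setting below should hide the config issue on a per-host basis - the same thing is reachable through Advanced System Settings in the client.)

# per host: suppress the SSH/ESXi Shell "enabled" warnings in vCenter
esxcli system settings advanced set -o /UserVars/SuppressShellWarning -i 1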
vSwitch config
ScaleIO GUI
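The GUI is great for a quick look, but the same capacity and rebuild numbers are one CLI call away on the MDM, which is handier for scripted checks. Roughly (commands from memory - see scli --help for the exact syntax on your release):

scli --login --username admin
scli --query_all          # cluster state, capacity, spare policy, rebuild/rebalance progress
scli --query_all_sds      # per-node SDS state and data IPs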
Zerto. Not fully configured, but hey...dashboard!
Had a blast implementing this project - if there's any info you guys want or you have any questions at all, post away.
Have a Merry Christmas!