FINAL Bachelor Build - Xeon D vSAN Cluster


TuxDude

Well-Known Member
Sep 17, 2011
If you're running Ubuntu, AUFS should be easy to try out; I'm reasonably sure they ship all the patches in their default kernels. I prefer CentOS myself, but that means replacing the kernel with a patched one for AUFS.
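A quick way to check whether the running kernel already supports it (nothing distro-specific here):

grep aufs /proc/filesystems   # listed if AUFS is built in or already loaded
modinfo aufs                  # succeeds if AUFS is available as a module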
 

Continuum

Member
Jun 5, 2015
...




Oh interesting, I haven't tested that option yet. I'll give it a shot and see how much it hurts read speeds.


EDIT: Wow, that makes a HUGE difference.

I use the direct_io mount option for my home server, though I have yet to hammer it with massive file transfers. I'll be interested in your experience with the option, especially any negative consequences, since I'll soon have to archive and move my approximately 800GB MythTV recording library from my HTPC to my home server over NFS.
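For reference, my pool mount looks roughly like this; the branch paths below are placeholders rather than my actual disks:

/mnt/disk1:/mnt/disk2  /mnt/storage  fuse.mergerfs  defaults,allow_other,use_ino,direct_io  0 0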
 

IamSpartacus

Well-Known Member
Mar 14, 2016
I use the direct_io mount option for my home server, though I have yet to hammer it with massive file transfers. I'll be interested in your experience with the option, especially any negative consequences, since I'll soon have to archive and move my approximately 800GB MythTV recording library from my HTPC to my home server over NFS.
Can't you just use the direct NFS share for that file transfer and bypass the mergerfs pool?
 

IamSpartacus

Well-Known Member
Mar 14, 2016
I could, but I want to spread the recordings across the pool with minimal intervention. (I use the mfs create policy instead of epmfs for my MergerFS mount.)
Ahhh, that makes more sense.

Well, I'm just about doubling 1Gbps speeds (220 MB/s) with my writes right now (flash storage to flash storage). Depending on the type of disks you're reading from and writing to, I don't see why you wouldn't be able to max out a 1Gbps connection.
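For anyone following along, mfs vs. epmfs is just the pool's create policy. A mergerfs fstab entry using mfs might look something like this (branch paths are placeholders):

/mnt/disk1:/mnt/disk2:/mnt/disk3  /mnt/pool  fuse.mergerfs  defaults,allow_other,use_ino,category.create=mfs  0 0

mfs writes each new file to the branch with the most free space, while epmfs only considers branches where the parent directory already exists.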
 

IamSpartacus

Well-Known Member
Mar 14, 2016
OK so...I was doing some brainstorming today and decided that keeping an entire backup (over 50TB) of my media in a separate physical location isn't giving me the flexibility I need. It's nice peace of mind knowing my media is safe in a disaster scenario, but it's not practical for my needs (HA) when the off-site link is 100Mbps over VPN versus 10GbE on site.

So I picked up one of these to round out my Xeon D lineup. It will go in my backup server, which I'm moving on site into my rack to serve as an HA node for my bulk storage/streaming needs. I have a cheap dual-drive consumer NAS that I'll keep off-site as a third backup for my VMs and personal files (basically all my files except media).

What I'm still debating is what to do with the four Intel 730 480GB SSDs that currently make up the BTRFS cache pool in my main bulk storage array. I no longer need that much space since I've moved my dockers over to vSAN, but I still need at least one drive's worth of space to serve as cache for both bulk storage servers. I wanted to put two drives in each server and run them as RAID 0 cache pools, serving both as write cache for bulk media and as storage for my Plex 'cache' and 'media' directories. But of course unRAID doesn't support RAID 0 BTRFS yet, so I'm a little unsure what I'm going to do at this point.
 

jwegman

Active Member
Mar 6, 2016
But of course unRAID doesn't support RAID 0 BTRFS yet, so I'm a little unsure what I'm going to do at this point.
You can rebalance to RAID 0 (keep the metadata mirrored but stripe the data):

-dconvert=raid0 -mconvert=raid1

Note that unRAID seems to revert the balance to mirrored after an array restart; however, you can easily rebalance back to striped.
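For anyone doing this from the command line, the full balance would look something like the below (assuming the pool is mounted at unRAID's usual /mnt/cache; adjust the path for your setup):

# convert data to RAID 0, keep metadata mirrored (RAID 1)
btrfs balance start -dconvert=raid0 -mconvert=raid1 /mnt/cache

# check progress and verify the new profiles
btrfs balance status /mnt/cache
btrfs filesystem df /mnt/cache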
 

IamSpartacus

Well-Known Member
Mar 14, 2016
You can rebalance to RAID 0 (keep the metadata mirrored but stripe the data):

-dconvert=raid0 -mconvert=raid1

Note that unRAID seems to revert the balance to mirrored after an array restart; however, you can easily rebalance back to striped.
Yeah, that's the problem. Having to rebalance after every reboot is a deal-breaker.
 

IamSpartacus

Well-Known Member
Mar 14, 2016
I'm happy to say that my build is COMPLETE and I'm very satisfied with the finished product :cool:.




From top to bottom in the rack:

24-port Cat6 Patch Panel
Switch: Dell X1052 10Gb Switch
Firewall: pfSense 2.3.1 running on Supermicro A1SRi-2558F
vSAN Node #1: Xeon D-1537 / 64GB RAM / 400GB Hitachi HUSSL (cache) / 800GB Intel S3500
vSAN Node #2: Xeon D-1537 / 64GB RAM / 400GB Hitachi HUSSL (cache) / 800GB Intel S3500
vSAN Node #3: Xeon D-1508 / 16GB RAM / 8 x 8TB Seagate SMR HDDs
vSAN Node #4: Xeon D-1518 / 32GB RAM / 400GB Hitachi HUSSL (cache) / 800GB Intel S3500 / IBM M1015 / 8 x 8TB Seagate SMR HDDs
CyberPower 900W UPS


You'll notice that my rack is broken down into three main sections:

Networking (patch panel, switch, firewall)
Computing (vSAN Nodes 1 and 2)
Storage (vSAN Nodes 3 and 4)

Nodes 1 and 2 are my main computing boxes. They run all my CPU-intensive applications, and the VMs on those servers fail over to one another or get migrated during maintenance.

Node 3 is currently just a "slave" for Node 4. It's not contributing any storage to the vSAN datastore yet, but it will once I can pick up a fourth Intel S3500 800GB drive. The only VM running on it is my bulk storage OS (UnRAID). The 8TB drives are spun down 95% of the time, except during data replication from Node 4 or when Node 4 is offline for whatever reason, so this server isn't using much power.

Node 4 runs my main storage array (outside the vSAN datastore, of course). On top of the storage OS, I also have vCenter running there.


Converting my (very underutilized) off-site backup server into an on-site backup / HA node for my media was a good decision. One of the top priorities for this build was adding redundancy / HA to my Plex server, and while running Plex in an Ubuntu VM on the vSAN datastore accomplished that, the actual media remained the single point of failure. Moving my backup server on site allowed me to pool NFS shares from both servers into single mount points for Plex to access (using MergerFS), as sketched below. So if Bulk Array #1 is offline for whatever reason (failure, maintenance, etc.), Plex will automatically read and serve the data off of Bulk Array #2.
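As a rough sketch of how that pooling looks on the Plex VM (the hostnames and share paths here are placeholders, not my exact config):

# /etc/fstab on the Plex VM
# mount each bulk array's media share over NFS (soft, so a dead array
# returns errors instead of hanging the pool)
array1:/mnt/user/media  /mnt/nfs/array1  nfs  defaults,soft  0 0
array2:/mnt/user/media  /mnt/nfs/array2  nfs  defaults,soft  0 0
# pool the two NFS mounts into a single path for Plex to read from
/mnt/nfs/array1:/mnt/nfs/array2  /mnt/media  fuse.mergerfs  defaults,allow_other,use_ino  0 0

The idea is that with both branches holding the same media, Plex keeps a single path while mergerfs reads from whichever branch is still reachable.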

At higher-than-average usage the rack is pulling 360W in total. When mostly idle it's under 275W.

Now that I've got my "production" home services running, it's time to build out my Windows testing/lab environment ;).

** Additional close-up pics of each section **

danws6

New Member
Apr 22, 2016
Just saw this on r/homelab. Great build-out. A question about the iStar M series: do they sell adapters for 2.5" drives that will work in the 3.5" trayless slots?
 

maze

Active Member
Apr 27, 2013
Slick run :) - looking good.. I might just have a wet dream about this tonight.....
 

wildchild

Active Member
Feb 4, 2014
I'm happy to say that my build is COMPLETE and I'm very satisfied with the finished product :cool:.
...
Nice job!
Nice job !