Home Lab V2 upgrade


Yves

Member
Apr 4, 2017
Hi guys,

Since this is my first forum post, I really wanted to say thank you for this great blog and all the work you guys put into it. Your blog is one of the sites I visit daily. I love reading the news, reviews, tests and advice from you guys. You inspired me to build my first home lab, which I still use daily for testing and toying around.

Now I want to bring my home lab up to a higher standard and wanted to ask if you guys can give me some scenario advice.

I am currently running a dual-host VMware ESXi setup (one host is a custom-built Supermicro board with a Xeon CPU, the other an HP MicroServer Gen8 with a Xeon upgrade) with a Synology NAS for data (though all the VMs are stored locally on those two hosts) and an old QNAP that acts as an iSCSI target for one VM with a "large" (500 GB) amount of data.

The goal of the new home lab is also to simulate host failures (for example, one host completely crashing) or storage failures. So I guess I need to move all the local VMs to a redundant NAS or vSAN (which I have never done so far). My two ESXi hosts are already in a vCenter cluster, so I have already toyed around and live-migrated VMs between the hosts while working on them, and it worked flawlessly... I guess that's child's play for you guys, but I was really excited when it worked.

Any ideas or examples are very very welcome!

Cheers,
Yves
 

Connorise

Member
Mar 2, 2017
US. Cambridge
Hello buddy,

When it comes to playing around with host failures, I can recommend a few approaches. HPE, as far as I remember, can provide you with a 2-node cluster. Also, ages ago when I was diving into the "VM world", StarWind Virtual SAN Free helped me a lot. StarWind can build HA with only 2 nodes, which saved me the money for an additional server (for quorum purposes). I spent that money on some pleasures and ultimately gained great experience playing with a really redundant vSAN that can tolerate losing a whole node.
 

Yves

Member
Apr 4, 2017
First of all, sorry for the late response. I had a lot to do at work and also did my VCP6.5-DCV certification last week.

@Marsh, AMAZING!!! Thanks a lot for the links. I did a lot of nested toying on my notebook, so I had my "home lab" with me everywhere. I did not know it was that easy to set up such a nice nested environment. Even nested, the performance is not that bad.

@Connorise, thanks for your help. I saw that it is possible to run StarWind Virtual SAN with a 2-node cluster, but I wanted to stick with VMware...

So I ran into the issue of the expensive VMware vSAN license, which I solved through VMUG. I am now an official VMUGer ;-) so I have access to all the nice VMware tools for building my home lab.
The next step was hardware. Since I like to build my own systems and had already built one Supermicro white box, I tried my hand at another one, which turned out pretty well.

Now I have 3 nodes (2x Supermicro, 1x HP MicroServer Gen8), everything for toying around with vSAN. I wanted the two Supermicro servers to be the hosts and the HP MicroServer to be the witness, which turned out okay-ish I guess... But I have so many questions that I cannot answer and I don't know where to ask. Is this forum the right place for these kinds of questions? For example:

If I added an NVMe drive to each Supermicro node plus some SSDs (for example 2 or 3, but cheap ones, like the MX300 or 850 EVO), would the current network interface (dual 1 Gbit LACP) be a bottleneck?

Or another one

Does LACP actually help vSAN performance? It's still 1-node-to-1-node communication, which as far as I know does not improve with LACP.

I am very sorry if I am asking things that don't belong here, but I feel a little helpless since nobody I know has this kind of knowledge...

Thanks a lot, guys, for your amazing help. It's very much appreciated.

Yves
 

Marsh

Moderator
May 12, 2013
Sorry, I'll only give you some short answers. If you have more questions, post them here.

It is not expensive to upgrade your lab to a 10 Gb network.
10Gb SFP+ single port = cheaper than dirt
https://forums.servethehome.com/index.php?threads/10gb-sfp-single-port-cheaper-than-dirt.6893/
You could do a point-to-point direct connection between 2 hosts; using dual-port 10 Gb network cards allows direct connections between 3 nodes without a switch.
I was using direct connections between hosts until I got my 10 Gb switch.

Here is a cheap switch with 2x 10 Gb SFP+ ports; I paid $120:
MikroTik Cloud Smart Switch w/ 24 Gigabit Ports and 2 SFP+ Cages
http://www.balticnetworks.com/mikrotik-cloud-smart-switch-w-24-gigabit-ports-and-2-sfp-cages.html
I also purchased this 10gb switch from Amazon
TP-Link JetStream 24-Port Gigabit Ethernet Smart Switch with 4-10GE SFP+ Slots (T1700G-28TQ)
https://www.amazon.com/gp/product/B01CHP5IAC/ref=oh_aui_detailpage_o05_s00?ie=UTF8&psc=1
I wouldn't bother with 1 Gb LACP when 10 Gb is cheaper than dirt.
 

Yves

Member
Apr 4, 2017
After a long time and a lot of tinkering... I got a new home lab, my biggest upgrade ever!

I was totally lucky to get my hands on an amazing piece of machinery: an HP BladeSystem c7000 (an old, used one, but still really cool) for about nothing. I had to pay a little bit for the BL460c Gen7 blades, which are the oldest and cheapest that still work with VMware ESXi 6.5, even though officially unsupported. So right now I have 6x BL460c Gen7, all with the same config (2x X5670, 96 GB RAM, onboard 10 Gbps Flex, 8 Gb FC card and 1 Gb card): a lot of hardware for playing with all kinds of scenarios.

My biggest investment so far was the two Cisco SG550X-24 switches, since I had to buy them new... but I wanted something with stacking capability for fail-over testing. Additionally, I bought a UBNT EdgeSwitch XG-16 for the many SFP+ ports I have now.

Last but not least, I now have a storage issue. The Supermicros had very fast local storage (a local SSD array). The BL460c Gen7 doesn't work well with SSDs because of the P410i: there is no HBA mode (as far as I know), so vSAN is out of the picture, and I think the controller is only SATA2, not SATA3, since the performance of two Samsung 850 Pros is horrible...

Which brings me to my question, or rather your opinion (since you guys deal with similar situations on a daily basis): I would like to build an all-flash 12x Samsung 850 Pro 512 GB storage for high-IOPS workloads. I run about 15 VMs right now for different exams and testing: an Exchange 2016 server, some Windows Server 2016 machines, an SQL DB server, a web server, a file server, a web proxy, some domain-joined clients for testing out GPOs, etc.

I could get my hands on a used QNAP TS-1279U for almost nothing (around $400) and upgrade it with a Xeon E3-1275 CPU, 16 GB of RAM, 12x 850 Pros and a Chelsio 10 Gbps SFP+ card.

The big question is whether it will perform well enough for my workload. Also, since the c7000 has 8 Gbps FC and the blades have 8 Gbps FC cards, an FC SAN would be an option... which I have never built and which would be fun... but I guess that would be very time consuming for me.

Would you prefer 8 Gbps FC over 10 Gbps iSCSI?

Thanks a lot for your input!
 

pricklypunter

Well-Known Member
Nov 10, 2015
Canada
I think you would be much happier using some Intel 200-400 GB S3500s for the array; you could probably get them cheaper too if you're careful. Consumer disks will be sorely lacking in sustainable performance :)
 

Yves

Member
Apr 4, 2017
I can't get my hands on Intel S3500s, but I do have access to 12x Intel DC S4500 480 GB or 4x Intel P4510 1 TB. With the P4510 I guess I would need a different storage solution, though, since they have a different connector, and that will be much more expensive...

Is the DC S4500 so much better than a Samsung 850 Pro?


Edit: just contacted my client; he also has some leftover Intel DC S3510 480 GB drives... now I am even more confused...

Edit 2: how much difference are we talking about? Because I could reuse 6x 850 Pros from a gaming rig I have here... so they would cost a lot less $$$ :)
 

pricklypunter

Well-Known Member
Nov 10, 2015
1,709
517
113
Canada
The P4510 are NVMe and would require lots of PCIe slots. The S3510s would do you, though, if cheap enough; those are SATA 6 Gbps based and have reasonable performance with fairly good endurance :)

The problem with any consumer SSD is that they are not designed to deliver the kind of sustained performance you really need at the enterprise level. The better ones, including the Samsung Pros, are ok in bursty environments, but ask them to deliver over a period of sustained heavy use and they will quickly crumble under the load. They might be ok on average in your use case, but generally when it comes to enterprise gear you should be thinking "great in a laptop, bad in a server" :)
 

Yves

Member
Apr 4, 2017
pricklypunter: thanks a lot for your help and advice!

Okay, so I should go for the S3510, not the S4500, correct?

The P4510 are U.2 based, so with the right backplane they could/should/would work. But I guess that would cost a lot more $$$ than I want to spend.

What would you say about the SFP+ vs. 8 Gb FC topic?
 

i386

Well-Known Member
Mar 18, 2016
Germany
Is the DC S4500 so much better than a Samsung 850 Pro?
For a boot drive? No.
For workloads with sustained reads & writes? Yes*.

*The Samsung will probably start moving data at 400+ MByte/s but will drop below 150 MByte/s under sustained reads/writes.
 

Yves

Member
Apr 4, 2017
@i386 thanks for the advice. I guess the 850 Pros are dead to me for all home lab use cases then. So should I go for the S4500 or the S3510?
 

Yves

Member
Apr 4, 2017
I am sorry to ask, since you guys have already established that the 850 Pro is not for data center / server usage. But does someone have a review, or even better a head-to-head comparison, between a "high end" consumer SSD like the 850 Pro and a "lower end" enterprise SSD like the S3500 or S3510? In the normal reviews (again, probably consumer workloads, not server or data center workloads) the 850 Pro destroys the S3500...
 

Yves

Member
Apr 4, 2017
Okay, so definitely a no-go on the 850 Pro front ;-) sorry for re-asking...

So now the goal is to find the right enterprise SSD that will not destroy my home lab budget... I saw that the S4600 has a much higher write IOPS rating than all of the other DC drives, except of course the NVMe ones... Should I go for 4 of them for a start instead of many S3510s? They will be in a RAID 10, if that matters...
 

Yves

Member
Apr 4, 2017
Additionally, since I am limited to a maximum of 10 Gbps, might the S4600 be overkill? Would they provide too many IOPS for the 10 Gbps link? I am not sure how to calculate that: 10 Gbps = 1250 MBps * 1024 / 4k blocks, for example = 320k IOPS maximum at 4k block size? Correct? Theoretically.

So if I went for the S4600, which is rated at 72k/65k IOPS, starting with 4 devices I would have around 144k/130k in a RAID 10; in a maxed-out config I would run into the 10 Gbps limit (my calculation puts me at 432k/390k), all theoretical...

But so far correct?
 

i386

Well-Known Member
Mar 18, 2016
Germany
You will need ~306k IOPS @ 4 KByte (read or write) to saturate a 10 Gbit link (~10 Gbit / (4 KByte * 8)).

Unless you run huge databases with a lot of concurrent connections and random access in your VMs, it's unlikely that you will need that many IOPS.
For sequential workloads (copying VMs, for example), a RAID 10 of SSDs can be enough to reach the 10 Gbit limit.
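
The back-of-the-envelope math can be sketched in a few lines of Python (my own rough numbers; protocol overhead is ignored, so the real ceiling is a bit lower):

```python
# Rough check: how many IOPS at a given block size saturate a network link?
# Assumes 1 Gbit = 1e9 bits and 1 KiB = 1024 bytes; ignores iSCSI/TCP overhead.

def iops_to_saturate(link_gbit: float, block_kib: float) -> int:
    """IOPS needed to fill a link_gbit Gbit/s link at block_kib KiB per I/O."""
    link_bits_per_s = link_gbit * 1e9        # 10 Gbit/s -> 1e10 bits/s
    bits_per_io = block_kib * 1024 * 8       # KiB per I/O -> bits per I/O
    return round(link_bits_per_s / bits_per_io)

print(iops_to_saturate(10, 4))    # ~305k IOPS at 4 KiB blocks
print(iops_to_saturate(10, 64))   # far fewer IOPS needed at larger blocks
```

So even a small handful of enterprise SATA SSDs can fill a 10 Gbit link with large sequential I/O, while the 4 KiB random ceiling is far above what a typical home lab workload generates.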

S4600 vs. S4500:
Do you need new drives / warranty / support?
Are you limited to SATA? Because you can get decent SAS SSDs (HGST HUSMM, 12 Gbit/s interface, 70k random writes @ 4k) on eBay.
 

Yves

Member
Apr 4, 2017
@i386 thanks again for the detailed input. It's superb to get feedback and help like this, since my knowledge in this field is basically nonexistent...

I don't think I have such a database. I have a SharePoint test environment (which I think uses SQL) and a few little databases like MariaDB, SQLite and PostgreSQL for a few small services like the UniFi server and Zabbix.

About copying VMs as an example: correct me if I am wrong, but would this not be offloaded directly to the storage because of VAAI?

One last question. I know @Marsh already said don't even bother with LACP, but my c7000 is currently connected with 4 (in the near future 6) SFP+ cables to the UBNT SFP+ switch. Would it even be possible to LACP iSCSI traffic between the multiple ESXi hosts? Or the better question: would it even make sense? I read something about MPIO iSCSI...

Do you need new drives/warranty or support?
Are you limited to sata? because you can get decent sas ssds (hgst husmm, 12gbit/s interface, 70k random writes @ 4k) on ebay.
Unfortunately I am limited to SATA... so the S4600 would be the best choice? They cost a small fortune...