vmware + storage box or just HyperV it?

vmware + SAN or HyperV


  • Total voters
    7

whitey

Moderator
Jun 30, 2014
2,766
868
113
41
Net-Runner said:
Hi,
I built a similar setup for one of my customers a while ago and would like to share my experience; I hope it might be useful.
I would recommend avoiding FreeNAS and similar products for production use unless you are totally familiar with them, since this is your primary storage and, in that case, it is self-supported only. ESXi is a great choice; moreover, I would definitely recommend looking at two similar hosts and building a highly available cluster.
For shared storage you do not need a SAN or NAS. It's a single point of failure, and directly attached drives are much faster. The most obvious option is VMware vSAN, but it's quite anemic in a two-node setup and requires the damn witness. We are using StarWind Virtual SAN (starwindsoftware.com) for this purpose because it is less expensive, works on top of hardware RAID (which is very good for performance and reliability; having a complete, consistent set of data on each host is priceless), and it does RDMA (unlike vSAN), which is very good too, because we have Mellanox ConnectX-3 cards and they work great with StarWind.
I've built such a setup myself (Supermicro-based) but requested a quote for StarWind ready nodes (StarWind HyperConverged Appliance) too. Surprisingly, their price tag for a similar Dell-based setup was not much higher.
SMH. GL when your HW RAID controller fails; hope you have a spare on hand. Me, I'll be swapping in a cheap HBA, or quickly swapping the disks over to another 'highly available' system... oh, and with powerful snapshot/clone/replication capabilities.

To each their own, I suppose, though the 'a SAN is a SPOF' line is what I'm shaking my head at, btw. I'm not EVEN going to go there; those of us that know KNOW :-D
 

wildchild

Active Member
Feb 4, 2014
389
57
28
Net-Runner said:
Hi,
I built a similar setup for one of my customers a while ago and would like to share my experience; I hope it might be useful.
I would recommend avoiding FreeNAS and similar products for production use unless you are totally familiar with them, since this is your primary storage and, in that case, it is self-supported only. ESXi is a great choice; moreover, I would definitely recommend looking at two similar hosts and building a highly available cluster.
For shared storage you do not need a SAN or NAS. It's a single point of failure, and directly attached drives are much faster. The most obvious option is VMware vSAN, but it's quite anemic in a two-node setup and requires the damn witness. We are using StarWind Virtual SAN (starwindsoftware.com) for this purpose because it is less expensive, works on top of hardware RAID (which is very good for performance and reliability; having a complete, consistent set of data on each host is priceless), and it does RDMA (unlike vSAN), which is very good too, because we have Mellanox ConnectX-3 cards and they work great with StarWind.
I've built such a setup myself (Supermicro-based) but requested a quote for StarWind ready nodes (StarWind HyperConverged Appliance) too. Surprisingly, their price tag for a similar Dell-based setup was not much higher.
I assume you work for StarWind?
 

fractal

Active Member
Jun 7, 2016
309
69
28
33
I vote for "KISS".

Most of these threads devolve into "you should do this and pass through that and then do the other thing to pass through yet still something else so you can pass through the passed-through pass-through."

Ya got disks? Make a file server. Make a damned good file server.

You got processors and need to run multiple solutions? Virtualize. Virtualize with mobility. Make it so your solutions can migrate across hardware as you grow. Start with one right-sized server. Add another as your needs grow.

Every time I grabbed the string and pulled it to see what it would take to virtualize with passthrough, I ended up spending more money to give up what is, to me, the most important feature of virtualization... the ability to grow transparently as my needs increase.

There may be a niche market for people who never grow, who love to fiddle and create job security through obscurity by forcing their employer to replace all the infrastructure when they move on, since they are the only person who can maintain it, but I don't abide by that sort of thing.

KISS. Just make everything work so any joe bubba can figure it out if you are on vacation or work somewhere else.
 

frogtech

Well-Known Member
Jan 4, 2016
1,482
272
83
35
Seems like a basic 3-node ScaleIO/StarWind vSAN or 4-node S2D deployment would work well. Hyper-V with something like Veeam, plus a dedicated storage box to receive backups, would be pretty simple to deploy and manage.
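
For what it's worth, those node counts mostly fall out of quorum math: with plain majority voting, a two-node cluster can't lose either node without an external witness, which is why three nodes (or two plus a witness) is the usual floor. A quick illustrative sketch in Python, not tied to any particular product:

```python
# Majority-quorum arithmetic behind the usual node-count advice.
# Illustrative only: real products (vSAN, S2D, ScaleIO) layer their own
# witness and placement rules on top of plain majority voting.

def tolerated_failures(nodes: int) -> int:
    """Node failures survivable while a strict majority of votes remains."""
    return (nodes - 1) // 2

for n in range(2, 6):
    print(f"{n} nodes: survives {tolerated_failures(n)} failure(s)")
# 2 nodes: survives 0 failure(s)  <- hence the witness in 2-node setups
# 3 nodes: survives 1 failure(s)
# 4 nodes: survives 1 failure(s)
# 5 nodes: survives 2 failure(s)
```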
 

Net-Runner

Member
Feb 25, 2016
81
22
8
41
whitey said:
SMH. GL when your HW RAID controller fails; hope you have a spare on hand. Me, I'll be swapping in a cheap HBA, or quickly swapping the disks over to another 'highly available' system... oh, and with powerful snapshot/clone/replication capabilities.
Even if a HW RAID controller fails, you still have a complete working set of data on the other host, and you have time to replace it, since controllers do not die every day :) Having a spare on hand is great but not a must. Depends on your SLAs/BCM rules, of course.

whitey said:
To each their own, I suppose, though the 'a SAN is a SPOF' line is what I'm shaking my head at, btw. I'm not EVEN going to go there; those of us that know KNOW :-D
If you are talking about enterprise-grade SANs that have redundancy on everything you could imagine, that's true. But the current discussion does not assume such a budget, and any SAN priced in line with two or three compute hosts is a SPoF. Unfortunately.

wildchild said:
I assume you work for StarWind?
No, I don't. Just a relatively satisfied customer.

fractal said:
I vote for "KISS".
KISS. Just make everything work so any joe bubba can figure it out if you are on vacation or work somewhere else.

frogtech said:
Seems like a basic 3-node ScaleIO/StarWind vSAN or 4-node S2D deployment would work well. Hyper-V with something like Veeam, plus a dedicated storage box to receive backups, would be pretty simple to deploy and manage.
Second both. S2D is not simple, unfortunately; management sucks, and so does the licensing :). ScaleIO is a great product in terms of performance, for sure, but EMC support is terribly slow sometimes.
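
To put rough numbers on the SPoF back-and-forth: assuming independent failures and invented per-box availability figures (illustrative, not vendor specs), the math looks like this in Python:

```python
# Back-of-the-envelope availability, assuming independent failures.
# All availability figures below are invented for illustration.

single_san = 0.999         # one budget SAN, no controller redundancy
dual_ctrl_san = 0.9999     # enterprise SAN with redundant everything
node = 0.999               # one commodity host

# Two-way replicated storage stays up if at least one node is up.
two_nodes = 1 - (1 - node) ** 2

HOURS_PER_YEAR = 24 * 365
for label, a in [("single SAN", single_san),
                 ("dual-controller SAN", dual_ctrl_san),
                 ("two replicated nodes", two_nodes)]:
    print(f"{label:22s} {a:.6f}  ~{(1 - a) * HOURS_PER_YEAR:.2f} h/yr down")
```

Two mirrored commodity nodes beat a single box on paper simply because both have to die at once; whether that holds in practice depends on how independent the failures really are.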
 

wildchild

Active Member
Feb 4, 2014
389
57
28
Net-Runner said:
If you are talking about enterprise-grade SANs that have redundancy on everything you could imagine, that's true. But the current discussion does not assume such a budget, and any SAN priced in line with two or three compute hosts is a SPoF. Unfortunately.

Net-Runner said:
No, I don't.

Net-Runner said:
Second both. S2D is not simple, unfortunately; management sucks, and so does the licensing :). ScaleIO is a great product in terms of performance, for sure, but EMC support is terribly slow sometimes.
The point I'm trying to make is that those "enterprise SANs" are running FreeBSD/Debian/Linux or, in the case of Oracle Exadata, Solarish.
Thus it's totally possible to build a redundant SAN based on open-source software without blowing the budget on Windows AND some software license.
Sure, you need to know what you are doing and test your setup, but that's also the case with those licensed solutions.

It just struck me that almost every post you make, you are all over StarWind, hence me asking.
I have tried StarWind on multiple occasions, both bare metal and virtualized, and have seen no improvement warranting that license.
 

Diavuno

Active Member
I actually support businesses too, and I did some testing with a similar client (video production).

The final solution was to use Storage Spaces in a transparent manner; the tiering was the way to go, since the projects would eventually be migrated off SSD to disk.
This worked out even better since we knew we would eventually deploy Celeron NUCs and use Remote Desktop for a VDI-type setup (currently 4 users like that).

I never did try LSI's CacheCade, but thought it would also be a nice fit.

Keep in mind that going Windows gives you great management and easy setup (I also used to be 100% ESX; now I'm 50/50).
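
As a rough illustration of that demote-to-disk policy, here's a minimal Python sketch; the tier paths and the 30-day threshold are made up, and real Storage Spaces tiering works transparently at the sub-file level rather than moving whole project folders:

```python
# Minimal age-based demotion from an SSD tier to an HDD tier, in the spirit
# of "projects eventually migrate off SSD to disk". The paths and threshold
# are hypothetical; Storage Spaces tiering does this transparently at the
# sub-file level rather than per project folder.
import shutil
import time
from pathlib import Path

SSD_TIER = Path("D:/projects-ssd")   # hypothetical fast tier
HDD_TIER = Path("E:/projects-hdd")   # hypothetical capacity tier
MAX_AGE_DAYS = 30                    # demote projects idle longer than this

HDD_TIER.mkdir(parents=True, exist_ok=True)
now = time.time()
for project in SSD_TIER.iterdir():
    if not project.is_dir():
        continue
    # A project is "hot" if any file in it was modified recently.
    newest = max((f.stat().st_mtime for f in project.rglob("*") if f.is_file()),
                 default=project.stat().st_mtime)
    if (now - newest) / 86400 > MAX_AGE_DAYS:
        print(f"demoting {project.name} to HDD tier")
        shutil.move(str(project), str(HDD_TIER / project.name))
```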
 

superfula

Member
Mar 8, 2016
88
14
8
whitey said:
SMH. GL when your HW RAID controller fails; hope you have a spare on hand. Me, I'll be swapping in a cheap HBA, or quickly swapping the disks over to another 'highly available' system... oh, and with powerful snapshot/clone/replication capabilities.

To each their own, I suppose, though the 'a SAN is a SPOF' line is what I'm shaking my head at, btw. I'm not EVEN going to go there; those of us that know KNOW :-D

Yeah, it's hard to disagree with this.

The 'SAN is a SPoF' idea is just a Spiceworks community thing.
 

Diavuno

Active Member
RAID cards aren't that expensive to keep on hand if uptime is THAT important.

In my whole career I've seen one 3ware and one Adaptec card fail (I don't count HighPoint, or anything else without an offload engine, as a true RAID card).

If you go the Hyper-V + Storage Spaces route, you can tier with SSDs and replicate to 2 or more boxes.
 

NISMO1968

[ ... ]
Oct 19, 2013
87
13
8
San Antonio, TX
www.vmware.com
wildchild said:
The point I'm trying to make is that those "enterprise SANs" are running FreeBSD/Debian/Linux or, in the case of Oracle Exadata, Solarish.
Thus it's totally possible to build a redundant SAN based on open-source software without blowing the budget on Windows AND some software license.
Sure, you need to know what you are doing and test your setup, but that's also the case with those licensed solutions.

It just struck me that almost every post you make, you are all over StarWind, hence me asking.
I have tried StarWind on multiple occasions, both bare metal and virtualized, and have seen no improvement warranting that license.

Windows Server 2016 Standard is $600 or so, and you'll pay $1,200 for a pair of those in HA. Enterprise SANs will cost you $20K+ in hardware alone, plus labor ($100/hour) if you build them from scrap parts. Software costs... irrelevant ;) Good point: you can use the free Hyper-V Server as your 100% free Windows-ish OS.

Hyper-V Free "Shared Nothing" SMB3 Failover File Server | StarWind Blog

I'd rather wait for them to release the Linux version, and you're right on that. Windows on storage... haters gonna hate ;)
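
Spelling that arithmetic out (the license price, the $20K SAN figure, and the labor rate come from the post above; the labor hours are a guess):

```python
# Rough figures from the post above; labor hours are a guess.
win_std = 600                  # Windows Server 2016 Standard, per host
ha_pair_licensing = 2 * win_std

san_hardware = 20_000          # "$20K+ in hardware alone"
labor_rate = 100               # $/hour
labor_hours = 40               # hypothetical build/burn-in time
diy_san = san_hardware + labor_rate * labor_hours

print(f"two-node Hyper-V licensing: ${ha_pair_licensing:,}")  # $1,200
print(f"DIY enterprise-ish SAN:     ${diy_san:,}")            # $24,000
```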
 

wildchild

Active Member
Feb 4, 2014
389
57
28
NISMO1968 said:
Windows Server 2016 Standard is $600 or so, and you'll pay $1,200 for a pair of those in HA. Enterprise SANs will cost you $20K+ in hardware alone, plus labor ($100/hour) if you build them from scrap parts. Software costs... irrelevant ;) Good point: you can use the free Hyper-V Server as your 100% free Windows-ish OS.

Hyper-V Free "Shared Nothing" SMB3 Failover File Server | StarWind Blog

I'd rather wait for them to release the Linux version, and you're right on that. Windows on storage... haters gonna hate ;)

I have no problem with Windows at all, but I do have a problem with just "assuming" stuff because it's being said by a particular vendor or a highly sponsored website.

I think Storage Spaces is cool and seems to work really well, except for the NFS part.

I do think StarWind is nice for playing around with, but in my book it in no way justifies its licensing cost.


Sent from my ZP920+ using Tapatalk