Need to pick out / build SAN


compuwizz

Member
Feb 25, 2017
I need to figure out a shared storage solution for a VMware install. My previous install is an older Dell EqualLogic SAN.

I want to go with a SAN over a FreeNAS so that I can have dual controllers.

I don't need the latest and greatest generation so I'm hoping to be able to find some good gear on eBay or a used reseller.

I mainly want something that can take flash disks (24x 1 TB or 1.2 TB drives) and dual 10G iSCSI connectivity. With the exception of one VM, we skew read-heavy rather than write-heavy; most VMs are fairly balanced, though.

I haven't kept up with the product lines, but hopefully there is something commonly found on the used market that would fit our needs.
 

dwright1542

Active Member
Dec 26, 2015
If you've got 2 servers, you should really be looking at a hyperconverged system for full redundancy. I've tried DataCore, StarWind, StorMagic, and VSAN, and I ended up with StorMagic.

Then all you're doing is buying drives for the systems.....no enclosure needed.
 
  • Like
Reactions: NISMO1968

NISMO1968

[ ... ]
Oct 19, 2013
San Antonio, TX
www.vmware.com
If you've got 2 servers, you should really be looking at a hyperconverged system for full redundancy. I've tried DataCore, StarWind, StorMagic, and VSAN, and I ended up with StorMagic.

Then all you're doing is buying drives for the systems.....no enclosure needed.
Interesting! From your list, StorMagic is the only "laggard" that uses controller VMs to provide storage. Last time we checked them, they would barely deliver maybe 40K IOPS with a single underlying Intel DC3700 doing 450K+ IOPS. What OS are you using? What performance numbers are you getting?
 

NISMO1968

[ ... ]
Oct 19, 2013
San Antonio, TX
www.vmware.com
I need to figure out a shared storage solution for a VMware install. My previous install is an older Dell EqualLogic SAN.

I want to go with a SAN over a FreeNAS so that I can have dual controllers.

I don't need the latest and greatest generation so I'm hoping to be able to find some good gear on eBay or a used reseller.

I mainly want something that can take flash disks (24x 1 TB or 1.2 TB drives) and dual 10G iSCSI connectivity. With the exception of one VM, we skew read-heavy rather than write-heavy; most VMs are fairly balanced, though.

I haven't kept up with the product lines, but hopefully there is something commonly found on the used market that would fit our needs.
You can combine FreeBSD (or Linux) with ZFS and some shared SAS drives to build a dual-controller DIY SAN.

Check this out --> Home · ewwhite/zfs-ha Wiki · GitHub

There's also a way to replicate ZFS pools for a "shared nothing" setup, but I'd rather avoid doing that...
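
For reference, the "shared nothing" approach usually boils down to periodic snapshot shipping with zfs send/receive. A minimal sketch, assuming a dataset named tank/vmstore and a standby box called standby-host (both hypothetical names), run periodically on the primary:

```python
#!/usr/bin/env python3
"""Minimal ZFS snapshot-shipping sketch for a "shared nothing" standby.

Assumes the dataset exists on both hosts, SSH keys are in place, and the
previous snapshot is still present on both sides (all names are examples).
"""
import subprocess
import time

DATASET = "tank/vmstore"        # dataset holding the VM storage (hypothetical)
STANDBY = "root@standby-host"   # replication target (hypothetical)

def replicate(prev_snap: str) -> str:
    """Take a new snapshot and ship the delta since prev_snap to the standby."""
    new_snap = f"{DATASET}@repl-{int(time.time())}"
    subprocess.run(["zfs", "snapshot", new_snap], check=True)

    # zfs send -i <old> <new> | ssh standby zfs receive -F <dataset>
    send = subprocess.Popen(
        ["zfs", "send", "-i", prev_snap, new_snap], stdout=subprocess.PIPE)
    subprocess.run(
        ["ssh", STANDBY, "zfs", "receive", "-F", DATASET],
        stdin=send.stdout, check=True)
    send.stdout.close()
    if send.wait() != 0:
        raise RuntimeError("zfs send failed")
    return new_snap  # remember this for the next incremental run
```

The trade-off versus the shared-SAS zfs-ha design is that after a failover you can only roll back to the last shipped snapshot, rather than picking up the same pool where it left off.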
 

dwright1542

Active Member
Dec 26, 2015
Interesting! From your list, StorMagic is the only "laggard" that uses controller VMs to provide storage. Last time we checked them, they would barely deliver maybe 40K IOPS with a single underlying Intel DC3700 doing 450K+ IOPS. What OS are you using? What performance numbers are you getting?
It's not, actually. They all use controller VMs. And if he's coming from a single EqualLogic, I expect he's not even close to needing 40K IOPS. In most cases I'd take redundancy over a single SAN at a similar price point.

There's also the "it just works" factor. I've got clients that would never want to deal with a home-brewed BSD/Linux box with ZFS.

VSAN for a 2-node system was just dumb. It made no sense at all.

As a matter of practice, on my fastest CPUs running SM on Fusion-io cards, I can squeak out ~100K IOPS. And the only time I see that is running IOMeter. That's also on a single target. I generally run multiple targets, some pure flash, some pure spindle.

-D
 

realtomatoes

Active Member
Oct 3, 2016
I'd keep things simple.
Run FreeNAS with a 4-6 core processor, 32-64 GB RAM, dual 10GbE NICs, and a 24-disk enclosure, or go nuts and get the 60-disk enclosure.
You can buy used or go with the recently released Atom C3955s Patrick reviewed.
 

compuwizz

Member
Feb 25, 2017
Thanks everyone for the comments. I was under the impression that you needed 4 hosts to do VSAN. The compute servers are in a Dell FX2 chassis, right now 3 compute nodes and 1 storage bay. However, we may take the storage bay out and put another compute node in because of the PCI expansion slot mapping that happens when you have a storage bay installed.

I would rather have a SAN than a build-my-own FreeNAS box. The build-your-own ZFS storage looks interesting but painful to manage. Are there any common SANs hitting the used market that are decent?

I need between 20 and 24 TB of usable storage for the VMs.
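
Rough math on that target, assuming the 24x 1.2 TB flash drives mentioned above and ignoring filesystem overhead and free-space headroom:

```python
# Back-of-the-envelope usable capacity for a 24-bay shelf of 1.2 TB drives
# (assumed sizes; real usable space will be lower after formatting/headroom).
drives, size_tb = 24, 1.2
raw = drives * size_tb                   # 28.8 TB raw

mirrors   = raw / 2                      # striped mirrors / RAID10: ~14.4 TB
raidz2_x3 = 3 * (8 - 2) * size_tb        # three 8-wide RAIDZ2/RAID6 groups: ~21.6 TB
raid6     = (drives - 2) * size_tb       # one wide 22+2 RAID6 group: ~26.4 TB

print(f"raw={raw:.1f}  mirrors={mirrors:.1f}  "
      f"3x RAIDZ2={raidz2_x3:.1f}  22+2 RAID6={raid6:.1f} (TB)")
```

So with 1.2 TB drives, mirrors/RAID10 fall well short of 20-24 TB usable; it would take a parity layout (or bigger drives) to hit that number.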
 

dwright1542

Active Member
Dec 26, 2015
The MD3260i has 10G available, which is going to be the cheapest, but you won't get it back under warranty. If you're less interested in warranty, building two R720XDs or R730XDs running any of the SAN software packages would be even more redundant, faster, and probably the same price. You could virtualize with the software I suggested already, or you could go with something like Open-E, or TrueNAS, the supported FreeNAS implementation.
 
  • Like
Reactions: T_Minus

Evan

Well-Known Member
Jan 6, 2016
Keep in mind I am not in the USA, but my experience is that with support for old kit (3 or 4 years old) in the storage world, you're better off (it's cheaper) just buying new stuff with support.
An example I looked at recently: 5th-year support on an EMC midrange array cost so much that you could just buy the same capacity in a new unit with 3 years of support!

Support is the complex thing in this decision.
 
  • Like
Reactions: realtomatoes

dwright1542

Active Member
Dec 26, 2015
Keep in mind I am not in the USA, but my experience is that with support for old kit (3 or 4 years old) in the storage world, you're better off (it's cheaper) just buying new stuff with support.
An example I looked at recently: 5th-year support on an EMC midrange array cost so much that you could just buy the same capacity in a new unit with 3 years of support!

Support is the complex thing in this decision.
Sure is. We had a bunch of MD3220is deployed, and support got crazy; that's why we went the HC route with StorMagic. Not even remotely the same price point, and WAY faster actually.
 

compuwizz

Member
Feb 25, 2017
Sure is. We had a bunch of MD3220is deployed, and support got crazy; that's why we went the HC route with StorMagic. Not even remotely the same price point, and WAY faster actually.
With StorMagic, do you need a RAID controller managing the datastore before you present it to their StorMagic management VM?
 

dwright1542

Active Member
Dec 26, 2015
377
73
28
50
With StorMagic, do you need a RAID controller managing the datastore before you present it to their StorMagic management VM?
Not specifically. For example, I have LSI Nytro and Fusion-io cards running that aren't HW RAID. However, if you are asking whether SM (or DataCore, StarWind) will do software RAID, they don't.

-D
 

realtomatoes

Active Member
Oct 3, 2016
Keep in mind I am not in the USA, but my experience is that with support for old kit (3 or 4 years old) in the storage world, you're better off (it's cheaper) just buying new stuff with support.
An example I looked at recently: 5th-year support on an EMC midrange array cost so much that you could just buy the same capacity in a new unit with 3 years of support!

Support is the complex thing in this decision.
And right on cue, *pre-sales engineer walks into room*

Supporting older hardware costs an arm, a leg, and another arm.
 
  • Like
Reactions: _alex

Connorise

Member
Mar 2, 2017
US. Cambridge
Your goal can be reached in two ways.

The first one is FreeBSD (or Linux) with ZFS; this setup would be DIY and requires some knowledge. Bear in mind this approach still acts as a single point of failure (it will go down with a failed motherboard, for example).

The second option is to get a software-defined storage product. From what I know there are plenty of vendors who should be able to do the job for you. I would recommend taking a look at HPE (Software Defined Storage Solutions: Enterprise Data Fabric) and StarWind (Software Defined Storage for the HCI • StarWind Virtual SAN ® Free). Actually, the StarWind free version seems like a perfect fit for your needs.
 

gea

Well-Known Member
Dec 31, 2010
DE
My suggestion

A reference hardware could be a Supermicro | Products | SuperServers | 2U | 2028R-ACR24L

If you build to order, you can use the same case with just a single Xeon, but prefer a direct-connect backplane without an expander; for more, see http://www.napp-it.org/doc/downloads/napp-it_build_examples.pdf

Prefer enterprise-class SSDs from Intel (DC line) or Samsung (SM/PM 863) with power-loss protection. Both offer a cheaper line for mainly read workloads (Intel DC S35xx / Samsung PM) and a line for heavy write loads (Intel DC S36xx/S37xx or Samsung SM 863).

Add a ZFS storage OS/appliance
Fastest and most feature-rich is Oracle Solaris, the genuine ZFS where ZFS comes from - but this is not free for commercial use.

My next preference would be the free Solaris forks around Illumos (NexentaStor, OmniOS, or OpenIndiana). They are Open-ZFS systems but with most of the advantages of Oracle Solaris. NexentaStor is a commercial appliance; the others are free.

Then there are solutions from the BSD family (besides Solaris, the other Unix option) like FreeNAS or NAS4Free that have adopted ZFS.

My last preference would be ZFS on Linux. Quite stable now but far away from the "It just works" experience of Solarish.

Another conceptual approach is All-In-One, where you virtualize a SAN appliance on ESXi and use it like local storage; see http://www.napp-it.org/doc/downloads/napp-in-one.pdf

This requires that you have enough CPU and, mainly, enough RAM to satisfy both ESXi and the storage needs.
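
As an illustration of that last point, here is a quick RAM-budget sanity check for an all-in-one host (every figure is a made-up example, not a recommendation):

```python
# Example RAM budget for an ESXi all-in-one host (every number is an assumption).
host_ram_gb   = 128
esxi_overhead = 8         # hypervisor itself
storage_vm    = 32        # ZFS appliance VM; most of this ends up as ARC read cache
guest_vms     = 10 * 8    # e.g. ten guests at 8 GB each

headroom = host_ram_gb - (esxi_overhead + storage_vm + guest_vms)
print(f"headroom: {headroom} GB")  # 8 GB left -> this host is about fully committed
```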
 
  • Like
Reactions: compuwizz

dwright1542

Active Member
Dec 26, 2015
Your goal can be reached in two ways.

The first one is FreeBSD (or Linux) with ZFS; this setup would be DIY and requires some knowledge. Bear in mind this approach still acts as a single point of failure (it will go down with a failed motherboard, for example).

The second option is to get a software-defined storage product. From what I know there are plenty of vendors who should be able to do the job for you. I would recommend taking a look at HPE (Software Defined Storage Solutions: Enterprise Data Fabric) and StarWind (Software Defined Storage for the HCI • StarWind Virtual SAN ® Free). Actually, the StarWind free version seems like a perfect fit for your needs.
Careful... StarWind Free doesn't have a console after 30 days. It's all command line, and it seems as though the OP wanted easy management.