First, thanks to you all for the input on this. Greatly appreciated! I will try to answer all of you as best I can.
First a clarification: I was thinking of an active/passive HA setup with shared storage between the two SAN servers. Will this change things cost- and complexity-wise?
Nice blog. Lots of good information on there (I have yet to read the whole thing, but I will!)
Your primary goals are performance and availability
For your budget of 15k $ you will not get both from NetApp or Nexenta.
Even with a free OS like OmniOS plus RSF-1, you can afford the HA software but not fast hardware as well.
Hi gea. I hoped you would respond in this thread since I know you are very experienced with zfs setups.
I was guessing that NetApp was out of our budget. I might contact a reseller just to check what 15k $ would give us going that route.
Is NexentaStor really that expensive? I did not know that. I see that on your webpage you have banners for zstor.de. Would you recommend them?
I would like to contact a NexentaStor reseller to get an estimate of how much it would cost to get the performance + HA when buying NexentaStor + hardware.
So you can either reduce performance or availability (no high availability in an active/active setup).
You must also consider the complexity of a real HA solution. I would not do it without in-house knowledge
AND a support contract.
I agree that I currently may not have complete knowledge of HA setups, so we will require a support contract with whatever solution we choose in the end. With the right support and documentation I am confident I could keep it up and running.
If you can allow, say, up to half an hour or more of service interruption, you may look at solutions
with redundancy and/or a fast manual switchover of storage or services on performant hardware,
e.g. as an extension of your current setup:
- two 10G storage servers, either with iSCSI to provide a mirror for your XenServers
and/or with short-interval ZFS replication between them and a remote backup
- use your current system as a remote backup system (ideally in another building)
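For reference, the short-interval ZFS replication mentioned above could be scripted roughly like this. This is only a sketch: the dataset name `tank/vmstore` and the target host `san2` are placeholders, not names from this thread.

```shell
#!/bin/sh
# Sketch of short-interval ZFS replication, run every few minutes from cron.
# Dataset and target hostname are placeholders.
DATASET=tank/vmstore
TARGET=san2
NOW=$(date +%Y%m%d-%H%M%S)

# Most recent existing snapshot of the dataset (empty on the first run)
PREV=$(zfs list -H -t snapshot -o name -s creation -d 1 "$DATASET" | tail -1)

zfs snapshot "$DATASET@repl-$NOW"

if [ -n "$PREV" ]; then
    # Incremental send relative to the previous snapshot
    zfs send -i "$PREV" "$DATASET@repl-$NOW" | ssh "$TARGET" zfs receive -F "$DATASET"
else
    # First run: full send
    zfs send "$DATASET@repl-$NOW" | ssh "$TARGET" zfs receive -F "$DATASET"
fi
```

In a real setup you would also prune old snapshots and handle a failed send, but this shows the basic snapshot + incremental send/receive cycle.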
You may need:
- about 1500 $ for one 10G switch, e.g. a Netgear XS712 (use another 1G switch for redundant cabling),
or something better like an HP 5820 (HP Renew units are quite affordable)
- about 3600 $ for two 10G servers, e.g. a Supermicro SC216BE16-R920LPB 2U chassis
with a board like the Supermicro X9SRH-7TF
- add 10G adapters to your XenServers:
10 x 350 $ (e.g. Intel X540 or X520) = 3500 $
This leaves about 6-7k $ for disks.
I would build two pools: one pure SSD pool for high performance and one from spindles for the rest.
Prefer enterprise SSDs like the Intel S3500..S3710 (up to 1.6 TB per SSD), or use your current disks plus HGST/Toshiba disks.
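The two-pool layout described above could look roughly like this on an OmniOS/ZFS box. The pool names and device names are placeholders for the sketch, not part of the recommendation:

```shell
# Sketch only -- pool names and device names (c1t0d0 etc.) are placeholders.

# Fast pool: mirrored enterprise SSDs for the performance-critical VMs
zpool create fastpool mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0

# Capacity pool: spindles in a raidz2 vdev for everything else
zpool create bulkpool raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0

# Verify the layout
zpool status fastpool bulkpool
```

Mirrored SSD vdevs favor IOPS and rebuild speed; raidz2 on the spindles trades some performance for capacity and double-parity protection.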
I would love to have a system in active/passive so we could almost instantly switch over if the active system went offline or if we need to take down one node for maintenance.
30 minutes of downtime might be OK, but if an active/passive setup can avoid it, that would be preferable.
In your scenario above you have one set of disks in each server, right? Why not use SAS disks instead? Then the two servers could share the storage, avoiding the need for replication and for buying two identical sets of disks.
My first idea for this build was to keep the current SAN and buy another one. Then buy an external disk enclosure, RSF-1, some raid-cards and disks. Connect the two SAN boxes to the external disk enclosure in an active/passive setup.
What I started fearing with this was:
- Lack of support
- Complexity
Would you advise against this?
For disks, just make sure you get SAS HDDs/SSDs, otherwise HA will not work.
Have you considered building another SAN and then using the application/OS to replicate (Exchange DAG, SQL AlwaysOn, DFS)? That way you would have data in two different places.
In an active/passive setup with an external disk enclosure shared by two SAN servers I would need SAS disks.
My current estimates show that SAS disks would be more cost-effective than using one set of SATA disks in each server.
Too bad SAS SSDs are so painfully expensive. We would probably have to use ordinary SAS spindles because of that.
I'm not sure replication is the way to go. I think it might be simpler and cheaper to go the SAS route with one shared storage enclosure.
When you say you would like 1 GB/s / 0.8 GB/s (R/W), do you mean over IP?
Then you will need some 10G connectivity (and a matching 10G switch) or a lot of 1G interfaces in an active/active multipath setup.
That said, if you want HA with some sort of support, I think you could consider 2x high-end Synology RackStations.
One idea could be
2x Synology RS3614XS+
4x Intel SSD DC S3500 Series 480GB
20x WD RE 3000GB
2x 10G network adapters (Intel X520?)
This setup will give you:
- HA (the RS3614xs+ supports Synology High Availability)
- 480 GB of read/write cache
- ~21.6 TiB (24 TB) of space in an 8+2 (or 2x 4+1) configuration
- 5 years of warranty on drives, SSDs and servers
- your performance requirements (more or less)
- all the ease of use of Synology DSM
and based on the prices here in Switzerland it should fit exactly in your budget (I don't know the prices in Sweden, but they should not be more expensive than in Switzerland; probably very close).
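As a sanity check on the quoted capacity figure, here is my own arithmetic, assuming 10 of the 20 drives go into each RackStation in an 8 data + 2 parity layout:

```python
# Capacity check: 8 data disks x 3 TB each.
# Drive vendors use decimal TB (10^12 bytes); usable space is usually
# reported in binary TiB (2^40 bytes).
data_disks = 8
disk_tb = 3

raw_bytes = data_disks * disk_tb * 10**12   # 24 TB of raw data capacity
usable_tib = raw_bytes / 2**40

print(f"{usable_tib:.1f} TiB")  # ~21.8 TiB before filesystem overhead,
                                # close to the ~21.6 TiB quoted above
```

The small gap between ~21.8 TiB and the quoted ~21.6 TiB would come from filesystem and metadata overhead.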
The other option is a DIY solution, but you will need a lot of skill and time, and skill, and time... and a lot of both, to get something stable (in my opinion).
Yes, we want to access the storage via IP. We will invest in new network infrastructure too (10GbE or IPoIB over InfiniBand), but that will come from another budget and I have not yet started reading up on what to buy there.
The Synology alternative does indeed look interesting. They are Citrix-ready too!
I checked the specifications of those machines. I would say one gets more bang for the buck with a DIY build, BUT if their HA solution works and it keeps me from having to learn/understand/fix everything myself, that would be awesome.
I have never tried their DSM environment before, but I checked the live demo on their website. It contains a lot of functionality. However, currently we don't need much more than an NFS/iSCSI share and some performance monitoring. It's quite a change going from our current terminal-based OmniOS to that fancy GUI.
Are you using Synology storage? Some questions if you are:
1. Do they support an active/passive setup so we can avoid buying two identical sets of disks (i.e. with their expansion units)?
2. Can they really deliver that speed for sync writes? Our ZFS alternative would probably require a ZeusRAM for that.
3. Is their support any good (i.e. for technical questions and finding our bottlenecks)?
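On the sync-write question: in the ZFS alternative, the ZeusRAM would go in as a dedicated log (SLOG) device, which is what absorbs the synchronous write latency. A rough sketch, with pool and device names as placeholders:

```shell
# Sketch only -- pool name and device names are placeholders.

# Attach a ZeusRAM (or other low-latency device) as a dedicated ZIL/SLOG
zpool add tank log c3t0d0

# For availability, a mirrored log is safer:
# zpool add tank log mirror c3t0d0 c3t1d0

# Verify the log device shows up in the pool layout
zpool status tank
```

With a fast SLOG, sync writes are acknowledged once they hit the log device instead of the main spindles, which is why ZFS builds lean on a ZeusRAM for NFS/iSCSI sync-write performance.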