
Sami-L

New Member
Jan 21, 2016
Hi guys,

Since I have two unused devices, I got the idea of using them as the starting point for a small, scalable DIY SAN, so I need your help to figure out the best way to achieve that:

ASRock machine
CPU: Celeron J1900 @ 1.99GHz
Chipset:
Motherboard: ASRock Q1900DC-ITX including:
- 1 PCIe 2.0 x1
- 1 Mini-PCIe
- 1 Realtek RTL8111GR (PCIe)
- 2 Sata 3 (ASMedia ASM1061)
- 2 Sata 2
- 4 USB 3
- 4 USB 2
RAM: 2x 4GB DDR3-1600 Crucial 1.35V
Power Supply: 120W - 19V DC Adapter
Chassis: 1U Short depth (380mm) SUPERMICRO CSE-512L-260B


Intel NUC (D34010WYK)
CPU: Core i3-4010U @ 1.7 GHz
Chipset: Intel QS77 Express
Motherboard: Intel D34010WYB including:
- 1 Mini-PCIe full height
- 1 Mini-PCIe half height
- 1 Gigabit Intel I218V
- 1 Sata 2
- 4 USB 3
- 2 USB 2
RAM: 2x 4GB DDR3-1600 Crucial 1.35V
Power Supply: 65W - 19V DC Adapter


Drives:
4x 2.5" 500 GB HGST Z7K500 SATA III 7200 RPM, 32 MB cache
1x 2.5" 128 GB SSD
2x 3.5" 2 TB Western Digital Green

Operating System/ Storage Platform: any suitable.
Other Bits: existing 1x empty 1U chassis + 1x 6U 450mm-deep 19" cabinet
Usage Profile: shared storage, RAID, backups


You may have noticed that the two machines were chosen mainly for their price/performance ratio, taking into account noise, form factor, and low heat dissipation. For the same reasons I prefer to use 2.5" disks instead of 3.5", until I can replace the spinning disks with SSDs.

That said, what is still missing from the above to finalize the SAN? First, a second Gigabit NIC to build a dedicated iSCSI network; then, using the existing ports to connect hard drives through adapters as needed. I have seen many options on the net (PCIe converters, splitters, RAID boards, SATA port multipliers, expanders, smart array controllers, etc.), but before that I need to be clear about the most practical RAID approach for this case: hardware or software, and 0, 1, 0+1, 5, 6, or something else.

Besides scalability, the system should deliver the best performance the existing hardware allows, so that one feels no difference between using internal and external storage, especially once dedicating some physical disks from the array to virtual machines on a network host becomes quite possible.

Thank you in advance.
 

pricklypunter

Well-Known Member
Nov 10, 2015
The hardware in the NUC is better suited to what you are wanting to achieve, especially if you plan on virtualising stuff. Lots of hypervisors don't play too well with Realtek NICs. The lack of ECC RAM support would be a concern for me though :)
 

Patrick

Administrator
Staff member
Dec 21, 2010
small DIY scalable SAN
How important is scaling to you?

The reason I ask is that your chassis is probably the hardest constraint right now. I would guess that you could rig the ASRock board as a FreeNAS system (please do try this first, as I have not tried this configuration myself): ZFS mirror two 2.5" drives and use an SSD as a cache drive.
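
As a rough sketch of that layout (device names ada0/ada1/ada2 are placeholders here, and FreeNAS would normally build the pool from its web GUI rather than the shell):

    # create a pool named "tank": a mirror of the two 2.5" drives, plus the SSD as an L2ARC cache device
    zpool create tank mirror ada0 ada1
    zpool add tank cache ada2
    zpool status tank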

For shared storage/backups, this will be great. If it is a NIC issue, you can get a single-port Intel 82574L NIC for around $20 or less.

You will have one node at that point and all will be reasonably good. You can then use the NUC as a hypervisor as that form factor is very constraining.

Now, if you want a scale-out NAS, you need more nodes (e.g. Ceph, GlusterFS, vSAN, etc.).
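
If you do go the scale-out route, a minimal two-node GlusterFS sketch looks roughly like this (the hostnames node1/node2, the brick device, and the paths are all assumptions):

    # on both nodes: prepare a brick filesystem (device and paths assumed)
    mkfs.xfs /dev/sdb1
    mkdir -p /data/brick1
    mount /dev/sdb1 /data/brick1
    mkdir -p /data/brick1/gv0

    # from node1: join the peer and create a 2-way replicated volume
    gluster peer probe node2
    gluster volume create gv0 replica 2 node1:/data/brick1/gv0 node2:/data/brick1/gv0
    gluster volume start gv0

    # any client can then mount the volume over the network
    mount -t glusterfs node1:/gv0 /mnt/shared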
 

Sami-L

New Member
Jan 21, 2016
The hardware in the NUC is better suited to what you are wanting to achieve, especially if you plan on virtualising stuff. Lots of hypervisors don't play too well with Realtek NICs. The lack of ECC RAM support would be a concern for me though :)
For virtualization on the NUC, I already failed to install Hyper-V Server 2012 R2 because the Intel I218V NIC had no certified driver, but I then tried ESXi 6 and Windows Server 2012 R2 separately and both installed successfully. I think there should be a workaround to get Hyper-V Server installed.

@pricklypunter, on the ECC matter: since this is just a first step towards an iSCSI lab test (not a production environment), I prefer to test with the existing hardware first, and to make sure there is no way to use ECC on it, before purchasing a server board starting from $350 (including 4 GB of ECC RAM) if I want to keep the quiet concept (onboard CPU).

For me, the challenge remains (if I opt for the i3) how to attach multiple disks to the NUC, since it has only 2 mini-PCIe slots and 1 SATA II port, each with its own power limit to respect; a USB 3 port could be used for the OS.
 

Sami-L

New Member
Jan 21, 2016
You're in the same boat as me, I have a NUC and a little AM1 setup...

the NUC is not in a standard case or form factor, and without a PCI slot it's hard to cable up more than 2 drives...

I'd use the ASRock
As I said to pricklypunter above, the "challenge" (if I want to benefit from the i3 NUC) is how to attach a scalable number of disks to it: there is a usable mini-PCIe slot (probably via a mini-PCIe to PCIe adapter and/or a PCIe multiplier to attach some kind of HBA), one SATA II port (probably with a port multiplier if needed), and one of the USB 3 ports could be used for the cache and/or OS.
 

Sami-L

New Member
Jan 21, 2016
What about a pair of these to make the NUC more scalable? As a single node it's still limited, but with a few more drives you could make a matched set of arrays (3 mechanical plus SSD cache?) and have them mirrored for redundancy.
SANS DIGITAL TR4UTBPN 4Bay USB 3.0 / eSATA Hardware RAID 5 Tower RAID Enclosure (no eSATA card bundled) - Newegg.com
@Deslok, these are fully featured and great to attach to the USB 3 ports of the NUC without needing any other hardware, but the objective for now is to have something working with minimum expense, just to discover how things would go (software or hardware RAID choice, RAID type choice, scalability, etc.).

I prefer keeping the drives in a separate box for ease of handling. Currently I plan to use the existing empty chassis, connected to the ASRock unit or the NUC unit with a simple cable (maybe USB 3, eSATA, RJ45...). So what do you think if, in that first chassis, I connect the drives to a USB 3.0 hub using USB 3.0 to SATA adapters? Would I be able to manage the RAID and the hard disks individually from the host that way?
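
For example, on a Linux host I imagine the host-side management would look roughly like this (just a sketch; the sdb-sde device names are assumptions, and USB/UAS bridges do not always pass SMART data through cleanly):

    # each USB 3.0 to SATA adapter appears as its own block device
    lsblk -o NAME,SIZE,TRAN          # USB-attached drives show TRAN=usb

    # software RAID10 across four USB-attached disks (names assumed)
    mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]

    # per-disk monitoring and management from the host
    cat /proc/mdstat
    mdadm --detail /dev/md0
    mdadm /dev/md0 --fail /dev/sdc --remove /dev/sdc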
 

Sami-L

New Member
Jan 21, 2016
How important is scaling to you?

The reason I ask is that your chassis is probably the hardest constraint right now. I would guess that you could rig the ASRock board as a FreeNAS system (please do try this first, as I have not tried this configuration myself): ZFS mirror two 2.5" drives and use an SSD as a cache drive.

For shared storage/backups, this will be great. If it is a NIC issue, you can get a single-port Intel 82574L NIC for around $20 or less.

You will have one node at that point and all will be reasonably good. You can then use the NUC as a hypervisor as that form factor is very constraining.

Now, if you want a scale-out NAS, you need more nodes (e.g. Ceph, GlusterFS, vSAN, etc.).
@Patrick, thank you for your great advice. I forgot to say that I have another identical empty 1U chassis that I can use for drives. On the scaling question, I would say that 5 HDDs + 1 SSD is a good starting point to evaluate access speed and different RAID possibilities, then scale later to 10 or maybe 20 drives max.

So, what do you recommend for the RAID: on the disk side or on the host side? On the disk side it would consume no host resources, while the host side offers better manageability. What are the other pros and cons?

I would like to test FreeNAS on the ASRock as you advised, but I also admit I am attracted by the idea of installing two nodes (maybe the NUC and the ASRock) using Ceph, GlusterFS or others. ;)
 

Sami-L

New Member
Jan 21, 2016
Hmmm,
After filtering the options, I think the three remaining candidates would be:

- Drives with their USB 3.0 to SATA adapters and a USB 3.0 hub in the empty chassis, linked to the host over USB 3.0 (I have never tried RAID over USB)
- Drives and this thing, which could be multiplied in a chassis and linked to the host using an eSATA or USB 3.0 cable (RAID on the drive side)
- Drives and this thing, which could be multiplied in a chassis and linked to the host through an eSATA host card

This last host card is advertised with "4-Channel" and "HyperDuo" technologies; any thoughts?
 

Deslok

Well-Known Member
Jul 15, 2015
I looked into the HyperDuo stuff and never found a conclusive yes/no as to its effectiveness. LSI CacheCade I can say works (very well), but I haven't tested HyperDuo (my desktop is currently limited to SAS 3Gbps; HyperDuo looked attractive, but ultimately I decided not to buy anything and to hold out for a 12Gbps SAS HBA on eBay for ~$100).
If you're going to buy a host card I recommend eBay over Amazon. There's a list on these forums of known LSI HBAs, their features, and the firmware to flash them to IT mode; you can often find good cards under $100. I'd avoid anything LSI 1xxx-based unless you're comfortable being limited to SATA II speeds and 2 TB drives.
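For reference, the usual IT-mode crossflash on a SAS2008-class LSI card goes roughly like this (the firmware and boot ROM file names vary per card and are only placeholders here; the forum list has the exact files per model):

    sas2flash -listall                          # identify the controller and note its SAS address
    sas2flash -o -e 6                           # erase the existing IR flash (do not power off mid-way)
    sas2flash -o -f 2118it.bin -b mptsas2.rom   # write the IT firmware and optional boot ROM
    sas2flash -o -sasadd 500605bxxxxxxxxx       # restore the SAS address noted earlier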
Those USB adapters are going to add up fast: you're looking at $12 per drive plus hubs (I'd recommend two 4-port hubs over one 7- or 8-port hub for performance reasons), so for 8 drives you're already at $120-ish, plus power and complexity issues. You can get something like this for $100, which is admittedly more, but it's substantially simpler (you have 2 USB 3 host ports) with fewer places for something to break.
 

Sami-L

New Member
Jan 21, 2016
I looked into the HyperDuo stuff and never found a conclusive yes/no as to its effectiveness. LSI CacheCade I can say works (very well), but I haven't tested HyperDuo (my desktop is currently limited to SAS 3Gbps; HyperDuo looked attractive, but ultimately I decided not to buy anything and to hold out for a 12Gbps SAS HBA on eBay for ~$100).
If you're going to buy a host card I recommend eBay over Amazon. There's a list on these forums of known LSI HBAs, their features, and the firmware to flash them to IT mode; you can often find good cards under $100. I'd avoid anything LSI 1xxx-based unless you're comfortable being limited to SATA II speeds and 2 TB drives.
Those USB adapters are going to add up fast: you're looking at $12 per drive plus hubs (I'd recommend two 4-port hubs over one 7- or 8-port hub for performance reasons), so for 8 drives you're already at $120-ish, plus power and complexity issues. You can get something like this for $100, which is admittedly more, but it's substantially simpler (you have 2 USB 3 host ports) with fewer places for something to break.
I have noted your preference for the LSI CacheCade technology, which I will also look into to get acquainted with it o_O. By the way, by SAS12 HBA did you mean 12 ports or 12 Gbps for ~$100? Note: I would need 2 of them, one for each node.

I agree that eBay offers more possibilities for this kind of stuff and sometimes better prices than Amazon, but at first sight I can say it will be hard to find a dowry under $200 ;). I have started to discover SAN software such as StarWind Virtual SAN (free 2-node version) and VMware vSAN, and I do not yet know what technologies they offer in this area, whether that can influence the choice of RAID deployment (or maybe RAID is dead with the emerging storage virtualization), and consequently the HBAs. Patrick's proposal to use FreeNAS should be great, but I do not yet know whether it can be combined with StarWind vSAN or VMware vSAN.

You can see now that the field is wide, and the multitude of candidates does not simplify things.

In fact I have courted StarWind, let's say because I feel somewhat comfortable with Microsoft technologies, and also VMware because I like their polished products.

USB 3.0: if we suppose a starting point of (5x 2.5" HDD adapters + 2x 4-port hubs) per node, times 2 nodes, that comes to roughly $180 total. On the ASRock as on the NUC there are 4 USB 3.0 ports that can be multiplied, so there is enough room.

So what are your thoughts in the end: are you, like Patrick and me, among those who encourage a marriage with an inexpensive dowry? :)
 

Deslok

Well-Known Member
Jul 15, 2015
When I say SAS12 HBA I'm referring to 12 Gbps, although I'm not finding any at that price currently. I want to say around November I saw a few Dell H330 cards around that price, but it could have been a fluke, since they're usually around $200.