A Pile of Miscellaneous Questions


mmmann

New Member
Dec 5, 2015
Hello. This is general chat, so hopefully a bunch of general questions are OK.

My goal of a rack populated with a security cam server, file server, and eventually a web server / email has led to a great deal of research over the last few months. I have a few outstanding, unanswered questions that have piled up. I'd like to post them here and gather answers/commentary from the experts on the site.

1) Do rack mount servers need DVD trays, generally, or does one usually load the OS (etc.) from USB or from one of those SuperMicro simulate-a-HD plug-in modules?

2) What happens when you plug a consumer-level SATA drive into a SAS enclosure cabled for SES-2? Assuming non-RAID, is it OK to do so on a temporary basis, until migrating to SAS drives?

3) SuperMicro, Norco, Antec, Chenbro, iStarUSA, etc. Which of these manufacturers' quality is best and second-best, and which of them offer the best and second-best "bang for buck"?

4) When moving to RAID, how do you back up (snapshot) so much data? In a 4U chassis, do you allocate, say, 6 slots for your main array and another 6 slots for the backup array? Or use a separate, external RAID box with equal or greater capacity? Alternative: if my directory structures are designed to stay under 6-8TB per root directory (perhaps enforced by Storage Spaces), can I use a fast RAID5/6 array but back up each individual root directory onto single 6-8TB drives?

5) Re: backing up to single, separate drives: can you use Storage Spaces over a RAID array? If not Windows but instead ZFS, can you break up a large ZFS partition into 6-8TB directories, enforce those limits, and do the same? (No experience or research on ZFS, yet).

6) If using a hardware RAID controller for speed, must I keep a spare RAID controller on hand to be safe? Is software RAID "safer" since there's no RAID card to fail?

7) Link aggregation: MaximumPC has this to say: "Think of link aggregation in terms of network link resiliency rather than total available throughput." So each stream lives on only an individual cable, it appears, making network transfers "broader" but not faster. If so, is 10GbE required between a specific client and the server if very fast copy operations are called for?

8) If I run Storage Spaces against RAID10 on Windows, can I identify a SAS drive with a utility that blinks the drive light?

9) Any thoughts on this commentary? http://betanews.com/2014/01/15/windows-storage-spaces-and-refs-is-it-time-to-ditch-raid-for-good/.

10) If I decide to host Email and/or an external web server etc., should I install a "threat management" box between my internet provider and my switch? How about a Sophos UTM on a small box--cat5e in from the provider, cat5e out to the switch? Is there a very small form factor product that would specifically make a good box? It could rest on the rack's shelf alongside the gateway.

Thanks a bunch for any answers anyone might deign to provide. Sorry I'm such a miserable noob. I'm working on it, week by week... :)
 

Chuckleb

Moderator
Mar 5, 2013
Minnesota
1) I would use a USB DVD drive if you need DVDs; otherwise USB keys are great.
2) SAS-2 controllers can handle SATA drives, so you can easily plug SATA drives in. (I assume you meant SAS-2.)
3) Supermicro is great. Intel makes servers too, and Asus is also known to be good. These are all server-class gear.
4) Decide what needs to be backed up and how often. You may have 1TB of constantly-changing data (documents, etc.) and 8TB of movies. If those 8TB don't change much, you can just sync them to another drive and not worry about proper backup strategies. The 1TB I would throw into Crashplan or some cloud-based backup service, plus a local external USB drive. Easy, gives you history, and not much complication.
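If you go the simple-sync route on Windows, a scheduled mirror copy is about all it takes. A minimal sketch, assuming the movies live on D: and the external USB drive is E: (paths hypothetical):

  robocopy D:\Movies E:\Movies /MIR /R:2 /W:5

Note that /MIR mirrors deletions too, so point it at a dedicated backup folder.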
6) Hardware RAID is fast and nice, but comes with ongoing costs (backup batteries failing, etc.). Software RAID keeps gaining capabilities, and I'd investigate it depending on your OS. ZFS is heavily used around here, and you just need an HBA to run it.
7) Whether you need 10G depends on what you are trying to do. Most things around the house can't saturate 1G; 10G is great for server <-> server links. You can feed 10G to your server and 1G out to the house to remove the bottleneck if that's a concern.

I skipped ones that others may know better ;)

Good luck!
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015

1) Do rack mount servers need DVD trays, generally, or does one usually load the OS (etc.) from USB or from one of those SuperMicro simulate-a-HD plug-in modules?

Most install via USB or network.

2) What happens when you plug a consumer-level SATA drive into a SAS enclosure cabled for SES-2? Assuming non-RAID, is it OK to do so on a temporary basis, until migrating to SAS drives?

You can use SATA drives on backplanes made for SAS or SATA, but mixing SATA with SAS is a bad idea.


3) SuperMicro, Norco, Antec, Chenbro, iStarUSA, etc. Which of these manufacturers' quality is best and second-best, and which of them offer the best and second-best "bang for buck"?

Supermicro, Supermicro.
The other brands don't have anywhere near the options/line-up that Supermicro does.


4) When moving to RAID, how do you back up (snapshot) so much data? In a 4U chassis, do you allocate, say, 6 slots for your main array and another 6 slots for the backup array? Or use a separate, external RAID box with equal or greater capacity? Alternative: if my directory structures are designed to stay under 6-8TB per root directory (perhaps enforced by Storage Spaces), can I use a fast RAID5/6 array but back up each individual root directory onto single 6-8TB drives?

Everyone "backup" data differently. Some do certain folders/files only, others do certain arrays only, etc... How and where you backup to is really up to you. Common locations are a seperate backup FreeNAS/Napp-IT server (can be small/cheap), external hard drive(s), Blu-Ray Discs, Off-Site (S3/Backblaze/Etc).

5) Re: backing up to single, separate drives: can you use Storage Spaces over a RAID array? If not Windows but instead ZFS, can you break up a large ZFS partition into 6-8TB directories, enforce those limits, and do the same? (No experience or research on ZFS, yet).

I don't know what you mean by "ZFS partition", and I don't use Storage Spaces, so sorry, no help here.

6) If using a hardware RAID controller for speed, must I keep a spare RAID controller on hand to be safe? Is software RAID "safer" since there's no RAID card to fail?

Hardware RAID for speed? Not sure what you mean by that. Also, software RAID still needs a controller card, and that can fail too; it's just not a RAID card.

7) Link aggregation: MaximumPC has this to say: "Think of link aggregation in terms of network link resiliency rather than total available throughput." So each stream lives on only an individual cable, it appears, making network transfers "broader" but not faster. If so, is 10GbE required between a specific client and the server if very fast copy operations are called for?

Yes. Your server may have 4 ports to the switch, but if your computer, server, etc. only has 1 port, then that's your limit.

8) If I run Storage Spaces against RAID10 on Windows, can I identify a SAS drive with a utility that blinks the drive light?

Not sure.

9) Any thoughts on this commentary? http://betanews.com/2014/01/15/windows-storage-spaces-and-refs-is-it-time-to-ditch-raid-for-good/.

10) If I decide to host Email and/or an external web server etc., should I install a "threat management" box between my internet provider and my switch? How about a Sophos UTM on a small box--cat5e in from the provider, cat5e out to the switch? Is there a very small form factor product that would specifically make a good box? It could rest on the rack's shelf alongside the gateway.

Y0ur "web server" needs to be secured as well as your network if you're allowing outside traffic in.

 

vl1969

Active Member
Feb 5, 2014
To your Q8: no, you cannot identify a drive from a RAID10 array with a Storage Spaces utility. But on server-grade hardware with a RAID controller, you can use the utility in the controller BIOS to do that. Also, IMHO, it is a waste of time and resources to run Storage Spaces or ZFS on a RAIDed array. Both SS and ZFS are basically software RAID; why would you want to run double RAID?
 

whitey

Moderator
Jun 30, 2014
Q1 - As others have said, typically we boot from USB, SATA DOM, network, or IPMI (a type of network-based install, with virtual media attachment options).
Q2 - Covered. T_Minus 'may' slap my hand, or I should do it myself, but I run a mix of SAS/SATA AND use an Intel RES2SV240 6Gbps SAS expander in my Norco 4224/Norco 2212/SM SC216, LOL. I live on the edge, some may say... NEVER had an incident or data loss, and I pound my gear.
Q3 - IMHO, Supermicro, then Norco (may be biased here; some b|tch about Norco builds, but I've never had an issue), then Intel (OK, maybe Intel before Norco build-quality-wise, but bang-for-buck Norco, unless you find some INSANE Intel chassis deal, which happens).
Q4 - Plan to allocate at LEAST as much capacity on the secondary array as on the primary. You can fiddle with compression/de-dup to your heart's content, but personally I just make sure the DR arrays I ZFS send/recv to have similar if not larger pool capacity than my source ZFS datasets/pools. Remote off-site replication never hurts either (see the send/recv sketch below).
Q5 - ZFS certainly has quotas, which let you enforce a usage limit on a specific ZFS dataset/filesystem (quota sketch below).
Q6 - I'm biased here: a plain HBA that detects the drives and gets the hell outta the way (LSI 2008 or newer chipset in IT mode), with OS-level software RAID on top (pick your poison: ZFS, btrfs, etc.).
Q7 - Snore... LOL, every time I hear this, the LACP/LAG/FEC nightmares/debates from my professional career come raging back. Sure, you CAN aggregate/channel/bond/pair links together, but truly utilizing them appropriately (switch configs, specific load-balancing algorithms/policies set up at various points of the infra, including vSwitches/vDSs) and balancing traffic down explicit links/paths is a major PITA unless network engineering is what you live/breathe/sleep. FAR better gains/yields and fewer gray hairs can be had by simply implementing a 10GbE infrastructure end-to-end. Ohh sh|t, I JUST re-lived a 'groundhog day' :-D
Q8 - Dunno much about the M$ storage offerings, but many storage appliances I use offer this functionality (FreeNAS/Rockstor/etc.). If not, there are vendor utilities that can usually aid in HD/chassis/slot identification (one example below).
Q9 - No constructive input here, biting tongue. (momma taught me 'If ya ain't got nothing nice to say don't say it at all') :p
Q10 - About a half-dozen ways I can think of off the top of my head to architect this. Options vary from inline filtering, a SPAN port to an IDS/IPS/FW appliance, a network TAP, or a virtual appliance injected into the explicit network segment. The IMPORTANT thing is getting yourself in the path (injection point) and ensuring you receive ALL packets, not losing some due to poor design. May need to re-visit this one with more thoughts.
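To make the Q4 send/recv flow concrete, a minimal sketch, assuming a local dataset tank/data, a prior week's snapshot, and a backup host reachable over SSH with a pool named backup (all names hypothetical):

  zfs snapshot tank/data@2016-01-15
  zfs send -i tank/data@2016-01-08 tank/data@2016-01-15 | ssh backuphost zfs recv backup/data

The -i flag sends only the changes since the earlier snapshot; the very first run would be a full send without -i.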
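And the Q5 quota is a one-liner (dataset name hypothetical):

  zfs set quota=8T tank/media
  zfs get quota tank/media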
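For Q8, one example of such a vendor utility: on LSI SAS2008-family HBAs, sas2ircu can blink a slot's locate LED (the controller, enclosure, and slot numbers below are placeholders):

  sas2ircu 0 locate 2:5 ON
  sas2ircu 0 locate 2:5 OFF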
 

andrewbedia

Well-Known Member
Jan 11, 2013
Q2: SATA drives are just fine in RAID (and non-RAID) on SES-2. I've done this for a very long time. Choosing the right SATA drives goes a long way, though. You can't mix SAS and SATA in a single RAID array on LSI controllers (possibly others), but soft-RAID ZFS and Linux mdadm don't care what kind of drives you have.
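To illustrate the "mdadm doesn't care" point, a minimal sketch of a four-drive RAID10 (device names hypothetical; any mix of SAS and SATA behind an HBA works):

  mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
  mkfs.ext4 /dev/md0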
 

pricklypunter

Well-Known Member
Nov 10, 2015
Canada

1) Do rack mount servers need DVD trays, generally, or does one usually load the OS (etc.) from USB or from one of those SuperMicro simulate-a-HD plug-in modules?
As others have mentioned, USB, network, and external DVD are the usual methods. When using ESXi on a new bare-metal host, I will sometimes host the install ISOs for the VMs on my laptop and perform the install over the management connection, negating the need to plug anything into the actual server. If your server is in the basement and your management PC is elsewhere, it's a handy feature.

2) What happens when you plug a consumer-level SATA drive into a SAS enclosure cabled for SES-2? Assuming non-RAID, is it OK to do so on a temporary basis, until migrating to SAS drives?
SAS or SATA, choose one and do not mix them. SAS disks have the advantage over SATA disks if, for example, you need multipath I/O, but they are obviously more expensive to buy.

3) SuperMicro, Norco, Antec, Chenbro, iStarUSA, etc. Which of these manufacturers' quality is best and second-best, and which of them offer the best and second-best "bang for buck"?
It really depends on what sort of budget you have to play with. If money is no object, go the SM route; there are a gazillion options available and support is good. Next from that list I would favour the Chenbro chassis: reasonably good support and a decent range of chassis to fit a wide-ranging budget. I have looked at the Norco chassis in the past, but there are just too many issues with them for me to be bothered. About the only advantage would be the depth of chassis, where the Norco might squeeze into a tighter space that the others cannot.

4) When moving to RAID, how do you back up (snapshot) so much data? In a 4U chassis, do you allocate, say, 6 slots for your main array and another 6 slots for the backup array? Or use a separate, external RAID box with equal or greater capacity? Alternative: if my directory structures are designed to stay under 6-8TB per root directory (perhaps enforced by Storage Spaces), can I use a fast RAID5/6 array but back up each individual root directory onto single 6-8TB drives?

5) Re: backing up to single, separate drives: can you use Storage Spaces over a RAID array? If not Windows but instead ZFS, can you break up a large ZFS partition into 6-8TB directories, enforce those limits, and do the same? (No experience or research on ZFS, yet).
The world is your oyster when it comes to the backup strategy you will need to implement, and you have differing levels of requirement. Some data is static and rarely if ever changes; backing that up to cold storage would be the way to go. Or, if it's something you already have on Blu-ray, for example, why bother: you can always reload it if it gets corrupted or lost. You may have some databases that change frequently and need backing up every few minutes. Either way, you will want to strive for both onsite and offsite backups, which may mean backing up to a cloud storage provider like Backblaze or Crashplan. BTW, once you get beyond four disks of any reasonable capacity in an array, you will want at the very least to implement RAID 6. RAID 5 is simply not going to cut it going forward.
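On the ZFS side, the double-parity equivalent of RAID 6 is raidz2. A minimal sketch, assuming six disks (pool and device names hypothetical):

  zpool create tank raidz2 sdb sdc sdd sde sdf sdg
  zpool status tank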

6) If using a hardware RAID controller for speed, must I keep a spare RAID controller on hand to be safe? Is software RAID "safer" since there's no RAID card to fail?
It's really your own preference whether you use hardware or software RAID. Software RAID will let you move your disks to another box and recover your data very easily; not so if you don't have a spare RAID card and your original one is toast, and even then it has to have compatible firmware etc. Software RAID is every bit as good now as hardware RAID when implemented properly, so why take on the support headache going forward? Then there is obsolescence to deal with: your hardware RAID card is likely not going to be able to take advantage of new technologies, whereas software RAID easily may. Hardware RAID can also get expensive quickly; high-end cards are not cheap, and nor, for that matter, is maintaining them. You have no such drawback with a software RAID implementation.
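That portability point is easy to see with Linux mdadm: after moving the disks to a new box, reassembly is usually just (a sketch, assuming the array metadata on the disks is intact):

  mdadm --assemble --scan
  cat /proc/mdstat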

7) Link aggregation: MaximumPC has this to say: "Think of link aggregation in terms of network link resiliency rather than total available throughput." So each stream lives on only an individual cable, it appears, making network transfers "broader" but not faster. If so, is 10GbE required between a specific client and the server if very fast copy operations are called for?
10GbE is really only going to shine where you can actually make use of that bandwidth, like sustained server-to-server transfers of large amounts of data. While it might look great on paper, unless you have a burning need for the bandwidth, like a heavy-duty workstation running video editing, it's unlikely that you'll ever need it on the average client. There are various schemes to boost bandwidth and make best use of the available links, but your hardware must fully support whichever scheme you choose, at both ends.
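If you want to confirm whether the 1GbE link is actually your bottleneck before spending on 10GbE, iperf3 gives a quick answer (hostname hypothetical):

  iperf3 -s                    # on the server
  iperf3 -c fileserver -t 30   # on the client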

8) If I run Storage Spaces against RAID10 on Windows, can I identify a SAS drive with a utility that blinks the drive light?
This is a feature of SAS controllers and compatible backplanes, not the actual drive itself. You would need a utility for your controller installed to blink the lights; I don't think Storage Spaces supports this natively.

9) Any thoughts on this commentary? http://betanews.com/2014/01/15/windows-storage-spaces-and-refs-is-it-time-to-ditch-raid-for-good/.
ReFS is still evolving, and although it shows much promise, it's no better yet than ZFS. In fact, ZFS may have the advantage here because of the extensive testing already performed. I'm sure it will get better as time goes on, but whether it's production-ready is a debatable argument. As for Storage Spaces, it's not really something I have played with much. It looks nice; for simple storage pools it's probably OK, and it has that point 'n' click advantage. Just how granular can you get with it and still be able to recover your data if it all goes sideways? I don't know the answer, but at a guess I would say no better than the chances of recovering data from any of the current line-up of "next gen" filesystems.

10) If I decide to host Email and/or an external web server etc., should I install a "threat management" box between my internet provider and my switch? How about a Sophos UTM on a small box--cat5e in from the provider, cat5e out to the switch? Is there a very small form factor product that would specifically make a good box? It could rest on the rack's shelf alongside the gateway.
You will definitely want a good security appliance of some sort, either whiteboxed or purchased pre-built, between you and any public network. Money is a consideration here, as are capability, ease of configuration and maintenance, and any support requirements you may have. You can get all sorts of complicated with this, everything from basic firewalling to honey pots; it really depends on whether you just want to keep the curious outside your network or actually catch them attempting to break in. Consider also your own capability at putting something effective together if you plan on whiteboxing it.
