Media File Server - Advice sought


sim871

New Member
Apr 22, 2012
13
0
1
Bermuda
Build’s Name: Media File Server (MFS)
Operating System/ Storage Platform: ESXi 5, OpenIndiana VM
CPU: Intel Xeon E3-1240 V2 Ivy Bridge 3.4GHz LGA 1155 $280.00
Motherboard: Supermicro X9SCA-F $280.00
Chassis: TBD or Norco RPC-4224 4U Server Case $412.00
Drives: 6 x Western Digital Caviar Green 3TB SATA III 64MB cache (WD30EZRX) $169.99
RAM: 16GB Kingston KVR16E11K4/16I $140.00
Add-in Cards: LSI SAS 9211-8i 8-port internal 6Gb/s SAS/SATA PCIe 2.0 HBA $226.99
Power Supply: Seasonic Platinum-860 ATX 860W $219.99
Other Bits: SFF-8087 to SFF-8087 Internal Multilane SAS Cable $15.00 (1.5 ft)
Noctua NF-P12 Fan 3-Pack $64.00
Norco 120mm fan wall bracket $11.00

Usage Profile:
Generally only three users will access/stream stuff at a time. Is the choice of processor adequate?
File server generally
Back up 2 Windows desktops
DHCP server
Firewall
DNS caching
Remote access
Time Machine backups
Needs to run SABnzbd
Store MP3, FLAC, and WAV files and serve them to iTunes, Squeezebox, and JRiver
Store photos and serve them to Adobe Lightroom or other Windows or Mac programs
Store movies and serve them to iTunes and Plex

Some Qs assuming the use of the Norco 4224:
How and what type of connection is needed to power the backplane?
How long should the SFF-8087 to SFF-8087 cable be to safely connect the card to the back plane?
Are WD Green drives OK, or should I look at others?
To load ESXi and OI, what size/brand SSD should I use?

I plan to purchase and build a machine around the end of August. Any other suggestions are welcome.

Many thanks,

Sim871
 

sotech

Member
Jul 13, 2011
305
1
18
Australia
That processor is more than adequate for three people accessing the server at once; you have plenty of headroom there in terms of clock speed and number of available threads.

Each backplane has two Molex connectors (the 4-pin type used by old PATA drives); the second is for a redundant power supply, so you only need to hook up one Molex per backplane. I use three cables (each powering two backplanes from the PSU) to avoid over-stressing a single cable + splitter.

With a 120mm fan wall 50cm is adequate if your cards have end-connectors; 60cm is necessary for cards like the M1015 with top connectors. I have gotten 50cm to fit with M1015s in all slots but there's a great deal of strain on the connectors and it doesn't feel healthy for the components.

We're running a large number of Western Digital Green drives here without issue; the dramas they have with hardware RAID are not present in software RAID, e.g. ZFS. Hitachis are also popular on this board. I've found performance to be quite good and power consumption low.
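If it helps to picture what "software RAID, e.g. ZFS" looks like for your six drives, here's a rough sketch only - not a recipe. The pool name ("tank") and the illumos c*t*d* device IDs are placeholders; pull the real ones from format or napp-it on your OI VM first.

#!/usr/bin/env python3
"""Rough sketch only: lay six drives out as a raidz2 pool from the OI VM.

The pool name ("tank") and the c*t*d* device IDs are placeholders -- list
the real ones (e.g. via `format` or napp-it) before running this for real.
"""
import subprocess

POOL = "tank"                              # placeholder pool name
DISKS = [f"c2t{n}d0" for n in range(6)]    # placeholder illumos disk IDs
DRY_RUN = True                             # flip to False once the IDs are right

# raidz2 keeps data intact through any two simultaneous drive failures;
# with six 3TB drives that's roughly 12TB usable (4 data + 2 parity).
cmd = ["zpool", "create", POOL, "raidz2"] + DISKS

if DRY_RUN:
    print("Would run:", " ".join(cmd))
else:
    subprocess.run(cmd, check=True)

Once the pool exists, zpool status shows each drive and its state, which takes a lot of the worry out of the WD Green question compared to a hardware controller.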

I load ESXi onto a thumbdrive and our ESXi datastore is a 64GB SSD (well, a pair thereof) - we're only utilizing 16GB of it, though, as that's all I've allocated to the OI VM. If you were using the desktop flavour I'd probably give it a bit more... If I were to do it again I might increase the space a little just in case. Anything over 64GB is overkill as there's no way OI is going to use that much space.
 

sim871

New Member
Apr 22, 2012
13
0
1
Bermuda
I load ESXi onto a thumbdrive and our ESXi datastore is a 64GB SSD (well, a pair thereof) - we're only utilizing 16GB of it, though, as that's all I've allocated to the OI VM. If you were using the desktop flavour I'd probably give it a bit more... If I were to do it again I might increase the space a little just in case. Anything over 64GB is overkill as there's no way OI is going to use that much space.
Thanks for the reply.

I'm not sure I understand. ESXi is on a stand alone USB stick? How come it is not on the SSD?
 

sotech

Member
Jul 13, 2011
305
1
18
Australia
ESXi has the ability to be installed to a USB stick rather than a SSD/HDD - it limits the number of writes so that it doesn't wear out the USB flash media too quickly. ESXi loads into the RAM on boot so that the stick is mostly used for logfiles and settings anyhow. There's no speed disadvantage to doing it this way... I suppose one of the advantages is that it makes it easy to reinstall ESXi without worrying about the rest of the data on your SSD; if you want to upgrade to a newer version of ESXi down the track or it goes horribly wrong somehow you can blow the USB stick away and start again without the worry of accidentally doing the wrong thing and formatting valuable data on the SSD in the process.

It also means that you can get maximum capacity out of your SSD without wasting a couple of gig to something that's going to be loaded into the RAM anyway. It's not really a huge list of benefits - I just got in the habit of doing it when I was first playing around with ESXi and never stopped. Perhaps someone else will be able to come up with a reason as to why you might choose to do it or not.

Edit: I forgot to mention one thing... I have also used it in the past to set ESXi up in a stable way on one USB stick and then use other USB-installs of ESXi to play around with settings which might break the setup, e.g. VGA passthrough. I don't particularly want to spend the time setting the whole system back up again in a hurry so it's much less hassle to plug the original USB stick back in and have the server running as it was. Sometimes we don't have enough available systems in-house for me to experiment with something specific so I have to borrow a production system for a little while... and this is a lower-risk, lower-stress way of doing that.
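There's nothing clever involved in keeping that known-good stick around, either - any raw copy of it will do. Something along these lines is just a sketch (the /dev/sdb path is an assumption; confirm which device the stick actually is before running it), but it's enough to keep an image you can write back to a spare stick when an experiment goes sideways:

#!/usr/bin/env python3
"""Sketch: keep a raw image of the known-good ESXi USB stick.

DEVICE is an assumption -- confirm the stick's device node on whatever
box you plug it into before running, and run as root so the raw device
is readable. Restoring is the same copy with src/dst swapped, written
out to a spare stick.
"""
import shutil

DEVICE = "/dev/sdb"                    # placeholder: the USB stick's device node
IMAGE = "esxi-stick-known-good.img"    # image file to stash somewhere safe

# Straight block-for-block copy, dd-style, in 4 MiB chunks.
with open(DEVICE, "rb") as src, open(IMAGE, "wb") as dst:
    shutil.copyfileobj(src, dst, length=4 * 1024 * 1024)

print(f"Imaged {DEVICE} -> {IMAGE}")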
 
Last edited:

sim871

New Member
Apr 22, 2012
13
0
1
Bermuda
ESXi has the ability to be installed to a USB stick rather than a SSD/HDD - it limits the number of writes so that it doesn't wear out the USB flash media too quickly. ESXi loads into the RAM on boot so that the stick is mostly used for logfiles and settings anyhow. There's no speed disadvantage to doing it this way... I suppose one of the advantages is that it makes it easy to reinstall ESXi without worrying about the rest of the data on your SSD; if you want to upgrade to a newer version of ESXi down the track or it goes horribly wrong somehow you can blow the USB stick away and start again without the worry of accidentally doing the wrong thing and formatting valuable data on the SSD in the process.

It also means that you can get maximum capacity out of your SSD without wasting a couple of gig to something that's going to be loaded into the RAM anyway. It's not really a huge list of benefits - I just got in the habit of doing it when I was first playing around with ESXi and never stopped. Perhaps someone else will be able to come up with a reason as to why you might choose to do it or not.

Edit: I forgot to mention one thing... I have also used it in the past to set ESXi up in a stable way on one USB stick and then use other USB-installs of ESXi to play around with settings which might break the setup, e.g. VGA passthrough. I don't particularly want to spend the time setting the whole system back up again in a hurry so it's much less hassle to plug the original USB stick back in and have the server running as it was. Sometimes we don't have enough available systems in-house for me to experiment with something specific so I have to borrow a production system for a little while... and this is a lower-risk, lower-stress way of doing that.

Thanks for the reply.

Sounds reasonable enough!
How big a thumbdrive is required? 4GB or less?

I don't think the Supermicro X9SCA-F has an internal USB slot like some boards do. Is there a workaround to get it inside, or do you just hang it off the back?

Also, how come you don't create all your other VMs on the SSD and just use the ZFS storage pool for data?
 

sotech

Member
Jul 13, 2011
305
1
18
Australia
Well... I've never used ESXi with a board that didn't have an internal header - I suppose you would just have to have it hanging out the back. Probably not an issue if you aren't going to be touching the rear of the computer that often... I would consider finding an internal-header-to-USB-port cable (I have a few of them that came with various motherboards; the cables are around 6" long) and just have it lying on the case floor in case some helpful person comes along and borrows the USB stick.

Having said that, if they do remove the stick it won't crash the system in my experience; it'll throw up a stack of errors but keep running, since the hypervisor is loaded into the RAM. YMMV there though.

4GB is large enough. I found that a couple of the sticks I had lying around simply wouldn't work with ESXi whereas others worked just fine - I could never find a reason why those couple wouldn't boot or save settings, so I just ditched them for ones that did work.

A few reasons; we have a couple of SSD mirrors inside the OI VM, and ZFS mirrors are read in round-robin fashion, so you get close to 2x the read speed of a single drive for a mirrored pair. I tend to keep my VMs as small as is feasible, so on a mirrored 128GB pair I can fit a reasonable number of VMs. That also means that they're not reliant on hardware RAID for redundancy etc., and you have the benefit of ZFS checksums and the ability to check on the health of the RAID array via the command line or a browser (e.g. napp-it).

About the only time I see the health of the M1015's mirror is when I reboot and am watching the console, which is rarely - I'm not aware of any way to actively monitor the health of an M1015 datastore that's being used by ESXi. I back up the OI VM's SSD regularly and have spare drives on hand, so if/when both drives die before I notice on a reboot it won't be a huge issue. It's much less hassle for me to restore the single OI VM from a recent backup than to restore a full 64GB of VMs.

That's from the perspective of running the ESXi datastore on a mirrored pair through a RAID card; if you were using a single SSD on the motherboard's ports it may be a different story.
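To give you an idea of the "command line" part: zpool status -x prints "all pools are healthy" when there's nothing wrong, so even a trivial sketch like the one below, dropped into cron on the OI VM, means you hear about a degraded mirror without ever watching the console. The alert at the end is just a placeholder for whatever mail/notification you actually use.

#!/usr/bin/env python3
"""Sketch: cron-able ZFS health check for the OI VM.

`zpool status -x` prints "all pools are healthy" when every pool is fine;
anything else is worth an alert. The print() below is a placeholder --
wire it up to mail or whatever notification you prefer.
"""
import subprocess
import sys

result = subprocess.run(
    ["zpool", "status", "-x"],
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,
    universal_newlines=True,
)
report = result.stdout.strip()

if report == "all pools are healthy":
    sys.exit(0)      # nothing to report

print("ZFS pool problem detected:\n" + report)
sys.exit(1)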

Some of our VMs are over 60GB in size (e.g. WHS 2011) - too big for the SSD mirrors - so those fit neatly onto a high-performing raidz2 array instead.

I also find it's much quicker to move VMs around inside a VM than through the ESXi console; for some reason reading/writing to the ESXi datastore isn't as quick as it should be.

Edit: Looking at your motherboard choice... you only have a x16 and two x4 PCI-E slots - you might consider a board with more PCI-E slots. Most HBAs like the M1015/LSI 9211/etc. are x8 physical/electrical and a lot of add-on network cards are x4... if you end up wanting more than three expansion cards you'll have run out of room. I haven't seen any performance hit with using a x8 HBA in a x4 electrical slot using HDDs but I would be leery of using a board with that few slots in a 4224. Three M1015s etc. will fill up the backplanes nicely but leave you with nowhere to go in the future for external ports, NICs etc.
 
Last edited:

sim871

New Member
Apr 22, 2012
13
0
1
Bermuda
Edit: Looking at your motherboard choice... you only have a x16 and two x4 PCI-E slots - you might consider a board with more PCI-E slots. Most HBAs like the M1015/LSI 9211/etc. are x8 physical/electrical and a lot of add-on network cards are x4... if you end up wanting more than three expansion cards you'll have run out of room. I haven't seen any performance hit with using a x8 HBA in a x4 electrical slot using HDDs but I would be leery of using a board with that few slots in a 4224. Three M1015s etc. will fill up the backplanes nicely but leave you with nowhere to go in the future for external ports, NICs etc.
Well, that knocked the wind out of my sails! I didn't notice that - I thought there were 2 x8 slots. None of the new i3 Supermicro boards have many PCI-E slots. Can I put the LSI 9211 in the x16 slot (which is actually an x8)?

Any mobo suggestions?

http://www.supermicro.com/products/motherboard/Xeon3000/#1155
 

sotech

Member
Jul 13, 2011
305
1
18
Australia
You certainly can put it in the x16 slot - that will work fine.

Given that there's a finite number of PCI-E lanes going to a processor (more for S2011 than S1155), you're likely to find that boards with more PCI-E slots have more x8 and x4 slots and lack an x16 - not a huge deal for a server, generally.

I don't see anything on that page that has more PCI-E slots - Supermicro and Tyan aren't really my area of expertise, though, so I'm sure someone else can make a recommendation there. The board with the most PCI-E slots that we use most frequently is the Asus P8C WS (previously P8B WS) - it has x8, x8, x4, x4, x1 and a PCI slot. You need to use an E3-12x5 CPU with it, though, as it lacks an onboard graphics chip and you need the integrated graphics on the CPU (unless you use a discrete graphics card). Each of the x8/x4 slots is x16 physical so any card will fit. You lose IPMI, which may be a big deal or you may not mind - depends on your setup and intended use. The last board we built with a P8B WS had 4 RAID cards in it in a 4224 chassis and it performed quite nicely - very stable.

I'm sure someone will chime in with another motherboard with a larger number of PCI-E slots, though!
 

john4200

New Member
Jan 1, 2011
152
0
0
Oddly enough, the Supermicro C20x chipset ATX motherboards have fewer PCIe slots than the uATX boards. I like the X9SCM-F (soon to be replaced by the similar X9SCM-iiF) which has two x8 PCIe 3.0 slots, and two x4 PCIe 2.0 slots, all in physical x8 slots.

If you want to stick with Supermicro, the only good way to get more PCIe slots is to go with a dual-1356 or dual-2011 motherboard. But I guess the Ivy Bridge Xeon chips for LGA2011 are still quite a ways off.
 
Last edited:

sotech

Member
Jul 13, 2011
305
1
18
Australia
Oddly enough, the Supermicro C20x chipset ATX motherboards have fewer PCIe slots than the uATX boards. I like the X9SCM-F (soon to be replaced by the similar X9SCM-iiF) which has two x8 PCIe 3.0 slots, and two x4 PCIe 2.0 slots, all in physical x8 slots.

If you want to stick with Supermicro, the only good way to get more PCIe slots is to go with a dual-1356 or dual-2011 motherboard. But I guess the Ivy Bridge Xeon chips for LGA2011 are still quite a ways off.
Hey, that board looks alright - I didn't even think to check the uATX selection, assuming that the full ATX boards would have a greater complement of PCI-E slots. That's odd :S

The 2011 Xeon chips are out - quite a few of us are running them already in single-socket or dual-socket systems :)
 

sotech

Member
Jul 13, 2011
305
1
18
Australia
Not the Ivy Bridge ones, as I said.
Whoops, misread. You're right, Ivy won't be out for a while yet... Not sure how that factors in here? A single-socket current S2011 system will give you a good number more PCI-E lanes than 1155 if that's what you're after, without breaking the bank - the dual-proc systems are nice but probably overkill for most home users. Given that it's a recent release I would expect that the Ivy Xeons won't be out for a while yet, so it's not like you're buying a soon-to-be-superseded platform.
 

john4200

New Member
Jan 1, 2011
152
0
0
Whoops, misread. You're right, Ivy won't be out for a while yet... Not sure how that factors in here?
His original CPU choice is an Ivy Bridge Xeon, so I assumed that was what he was looking for. Also, the Ivy Bridge Xeons at 3.x GHz have lower TDP and are a lot cheaper than the LGA2011 Sandy Bridge Xeons (and there are only a couple above 3GHz).
 

mobilenvidia

Moderator
Sep 25, 2011
1,956
212
63
New Zealand
LGA1155 Ivy Bridge = 16x PCIe Gen3 lanes and 4x PCIe Gen2 lanes + 8x PCIe Gen2 lanes from the PCH (not always used)

LGA1155 Sandy Bridge = 20x PCIe Gen2 lanes + 8x PCIe Gen2 lanes from the PCH (not always used)

LGA1356 = 24 PCIe Gen3 lanes

LGA2011 = 40 PCIe Gen3 lanes

One of the best LGA1155 mobos for PCIe is the X9SCM-F, with 2x PCIe x8 Gen3 and 2x PCIe x4 Gen2 slots.

You pay a lot more for an LGA2011 CPU, but you get a lot more out of it: twice the lanes, more cores, dual QPI vs DMI, more memory, and with it all, more power used.

Best to do your homework on what you need and what you get from each CPU/mobo combo.
 
Last edited: