Building new fileserver, appreciate advice on ideas


connorxxl

New Member
Hi,

I'm in the process of building a separate file server, and I'd appreciate any advice on some parts/decisions I need to make.

Currently using:
Supermicro X9SCM-IIF
Intel E3-1230 v2
32 GB RAM
3 x IBM M1015 (two flashed to IT mode, not sure about the third)
10 x 2 TB SATA HDDs, plus some older/smaller HDDs
All-in-one with ESXi and OpenIndiana (two M1015s via passthrough)

VMs/applications used:
(a) File server (CIFS, AFP) through OpenIndiana
(b) SABnzbd, Transmission, aMule, Sick Beard, CouchPotato, Headphones in one VM (need to split that up later into two or three separate VMs)
(c) Newznab (two VMs, one of them will be switched off soon)
(d) nZEDb (one VM)
(e) MySQL in a VM (poor performance due to living on the NFS file server; much better when run in a zone on OmniOS, already tested. Needed for Newznab and nZEDb, high I/O load; might get an SSD RAID 1 for the database data)
(f) MusicBrainz mirror in a VM
(g) Plex VM (only run occasionally due to high CPU load, usually using an XBMC box for media; would like to run it constantly with the new system)
(h) Zabbix VM (monitoring)
(i) Windows 7 VM (for administrative purposes, RDM often easier than starting a Windows VM on a Mac)

Already bought/ordered for the dedicated file server:
Intel dual-port Gigabit Ethernet NIC (PCIe)
Intel dual-port 10 Gigabit Ethernet NIC (PCIe)
X-Case RM 424 Pro (24-bay case)

I want to split up ESXi and file serving, since I ran into some trouble with the combined setup in the past, and I also need more HDDs, which won't fit in the current case (Fractal Design XL). Additionally, I want to run some I/O-intensive services/systems like MySQL or SABnzbd directly on the file server, either in a VM or a zone. If possible, I might migrate some more CPU-intensive VMs to the file server too (like Newznab/nZEDb), depending on the CPU power in the new file server (single/dual socket, see below). So I'm thinking about OmniOS or Solaris 11.1 (no VMs) as the OS for the file server; I'm already using ZFS.
The plan is to move all HDDs and M1015s to the new file server; the ESXi server will only have a USB stick to boot ESXi and will use NFS on the new file server for all VMs.

I'm a bit stuck on what to get in terms of motherboard/CPU now; that's where I'd like to ask for your advice/comments.

My thoughts up to now:
(1) Get an X9SRL-F, 64 GB RAM, Xeon E5-1620. Costs are medium, CPU power similar to my current ESXi server, so I would distribute the CPU-intensive VMs evenly. I might get another 10 GbE NIC to have a 10 GbE connection between the two servers for NFS and file transfers. I also thought about the X9SRH-7TF, but I already have enough HBAs and the board doesn't offer enough PCIe slots (2 x M1015, and then perhaps a normal Intel dual-port GbE NIC, no more free slots).

(2) Get an X9DRH-iTF, two Intel E5-2620s and 64 GB RAM. A way more expensive solution than (1), but more CPU power. I might move more VMs over to the new server than in (1) with this hardware, and keep ESXi as a playground and for a few other VMs.

(3) Reading on this forum (one of my favourites, by the way), getting something like this (ebay X8DTL) and two Intel L5639s from the States. I'm located in Central Europe, so getting a complete server from the States is out of the question given the shipping costs, but two CPUs should work. The RAM is the same as in (1) or (2) (if I assume correctly?), so I wouldn't be investing in old technology there. Costs should be below (1) or (2). There's an issue with the number of PCIe slots though (3 HBAs and one dual-port NIC won't fit), but I could also get something like this ebay X8DTH-IF. Again 64 GB RAM (or whatever the board needs in terms of minimum DIMM count).

I sometimes forget that this is just a hobby (a very nice one though), so I'd like to get a cost-efficient solution covering my needs.

There are some questions I have:
- Have I missed some other idea or possible solution?
- Is it worth waiting for the new E5 generation coming this fall (Ivy Bridge, I believe)?
- Is an older solution like (3) worthwhile in terms of cost/performance and also power consumption?
- Does the Intel Xeon L5639 have all the virtualisation features available in, e.g., the E5-1620? OmniOS is quite picky there. (A quick way to check the flags is sketched below.)
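
For what it's worth, here's a quick way to verify the virtualisation flags on any box before committing. A minimal sketch, assuming a Linux live system where /proc/cpuinfo exposes the flags; both the L5639 (Westmere-EP) and the E5-1620 (Sandy Bridge-EP) should list vmx and ept, but checking is cheap:

```python
# Check /proc/cpuinfo for the virtualisation flags hypervisors
# (and OmniOS/KVM) care about: vmx = VT-x, ept = extended page tables.
REQUIRED = {"vmx", "ept"}

cpu_flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            cpu_flags = set(line.split(":", 1)[1].split())
            break  # every core reports the same flag set

missing = REQUIRED - cpu_flags
print("all required flags present" if not missing
      else "missing: " + ", ".join(sorted(missing)))
```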

Sorry for the long post, but it's easier to put everything on the table right at the start instead of fielding lots of questions afterwards... :)

Thanks a lot for your help!

Chris

PS: There will be a build post of course...
 

kenned3

New Member
Why so many VMs?

Why so many VMs? Most, if not all, of the apps you list will run on OpenIndiana directly (I have the same apps happily running on Solaris).
Isn't most of your RAM going to support all those VMs?

The only real exception I see is Plex, which doesn't have a server for Solaris/OI yet.




Biren78

Active Member
I just had to Google half of those applications :)

One that won't run directly on OI is Windows 7.
 

nry

Active Member
If money is an issue, it might be worth having a look at the Supermicro 6026TT-HDTRF. I have my eye on one of these on eBay UK for £300 without RAM or CPUs.

Do you run your HDDs in any RAID currently?
 

lpallard

Member
I'm in pretty much the same boat, except that I already built my server last summer. I am going to virtualize my existing Supermicro server and add VMs to eliminate physical machines in my SOHO/house, due to underutilized server hardware and ever-increasing electricity costs.

A few quick questions for you!

1. Have you built yours yet? You never posted back here to tell us the outcome of that...

2. I will assume you did decide to go forward with your build. Regarding the Ethernet controllers, can you specify which ones you bought back in August when you initially posted here? I cannot find a reliable recommendation for a quad-port NIC (Intel-based, of course). There are hundreds on eBay, and I wouldn't want to buy one that is either incompatible with my H8DCL-IF mobo or with the most popular virtualization platforms out there (Proxmox, ESXi, XenServer, KVM, etc.).

3. Regarding your dual-port 10 Gbps NIC, are you not going to max out the PCIe 3.0 x8 slot of the Supermicro X9SCM-IIF? AFAIK, a 3.0 x8 slot is capable of sustaining 7.877 Gbps in each direction. A dual 10 Gbps NIC (minus 15-20% losses) would yield around 17 Gbps, which is way more than a 3.0 x8 slot can sustain. Am I wrong here?

4. How do you find the performance of a virtualized file server (SABnzbd, CP, HP, SB, etc.) along with other I/O-intensive VMs (MySQL) on the same physical host machine?

5. I will also assume you are doing "mass storage" (either through a ZFS array or LVM) on the same physical server. How's the access performance? Let's say a VM (running SABnzbd, Newznab, etc.) retrieves content from the web and moves it to the storage; are you suffering a performance penalty?

Right now, my server runs slackware64-14 and has three RAID arrays (2 x 2 TB in RAID 1 for the "OS", three older drives in RAID 0 as scratch drives, soon to be replaced by an SSD, and 6 x 2 TB in RAID 5 for my mass storage). I don't want to build a dedicated file server for now because of the $$$. Eventually, as I max out my RAID 5 array and grow out of SATA ports and drive bays, I will build a dedicated storage server, but for now I'd like to keep that in the current server and in a VM.

Like I said, we are in pretty much the same situation, except that I am running Joomla!, Drupal, KnowledgeTree and other web-based frameworks on my server.

I'm happy to have found your post, because I am very new to virtualization and the whole idea.

I am interested to hear what you ended up doing!

Post back! ;)
 

PigLover

Moderator
#3: A PCIe 3.0 link runs at either 2.5, 5.0 or 8.0 Gbits/second/lane. Even at the slowest speed, 8 lanes of PCIe 3.0 support 20 Gbits/second full duplex. No worries at all supporting a dual-port 10 GbE card at full line rate on both ports. PCIe 2.0 supports up to 5.0 Gbits/second/lane, so the same outcome even for PCIe 2.0.

Not sure at all where the 7.877 Gbps number came from. It seems pretty random.
 

lpallard

Member
Never mind, I screwed up my calculation!

I did this:

PCIe 3.0 = 8 GT/s per lane (= 8 Gbps) according to Wikipedia.

Then, according to this website (Theoretical vs. Actual Bandwidth: PCI Express and Thunderbolt - Tested), a PCIe 3.0 slot uses an encoding scheme of 128b/130b (128/130 = 0.9846...), so roughly a 1.5384% loss.

So finally I subtracted these losses from the 8 GT/s value, giving the weird 7.877 Gbps...

I forgot to account for the 8 lanes of the x8 slot! Sorry!

"PCIe bandwidth is measured in bytes, lpallard."
Really? I thought bandwidth was *always* measured in bits (the smallest chunk of information possible), not bytes (8 bits of actual usable data). SATA and SAS controllers are rated in bits, no?
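
To lay the whole calculation out in one place, here's a quick sketch (per-lane signalling rates and encoding efficiencies from the PCIe specs; packet/protocol overhead is ignored, so real throughput is somewhat lower):

```python
# Effective PCIe bandwidth per direction:
#   transfer rate (GT/s) x encoding efficiency x lane count
def pcie_gbps(gt_per_s, encoding_efficiency, lanes):
    return gt_per_s * encoding_efficiency * lanes

# One PCIe 3.0 lane: 8 GT/s x 128b/130b -> the "weird" 7.877 Gbps
per_lane_gen3 = pcie_gbps(8.0, 128 / 130, 1)
# A full x8 slot: Gen3 (128b/130b) vs Gen2 (8b/10b encoding)
gen3_x8 = pcie_gbps(8.0, 128 / 130, 8)  # ~63 Gbps
gen2_x8 = pcie_gbps(5.0, 8 / 10, 8)     # ~32 Gbps

print(f"Gen3 per lane: {per_lane_gen3:.3f} Gbps")
print(f"Gen3 x8: {gen3_x8:.1f} Gbps, Gen2 x8: {gen2_x8:.1f} Gbps")
print("Dual 10GbE at line rate needs 20 Gbps -> fits either slot")
```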
 

TallGraham

Member
Hi. Sounds like an awesome setup.

Please check out my thread, though, and the issues I had with the X-Case 420 Pro case and the Supermicro X9SCM-IIF motherboard: the case fans ran either at an insanely noisy top speed or so slowly that they stopped, with the case's fan alarms constantly going off.

The solution was to remove a jumper on the fan controller built into the case. Details and pictures of the solution are in my build thread.

Hope this saves you the headaches, quite literally, that I got. :)
 

connorxxl

New Member
Hello,

My apologies for not replying earlier. Thanks a lot for your responses, although they don't really cover my questions. :) Still, very interesting thoughts there.

I've built my file server with an X9SRL-F; budget won over excitement. :) Quite happy with it. ESXi is running on the old system (X9SCM-IIF), using the file server for VM storage. I've switched the ESXi case to a Norco RPC-240; rackmount is simply cleaner and takes less space. Additionally, I got a Zyxel GS1910-24 switch, and I'm quite happy with that one too. I might switch another of my old servers to rackmount. Using a LackRack, by the way.
I really like zones on Solaris/OmniOS now, so I've built a zone for MySQL and one for SABnzbd on my file server. Additionally, I run a Zabbix VM on it via KVM (more a test to see how KVM works; quite impressed so far). Still thinking about moving CouchPotato, Headphones and Sick Beard into another zone.
I chose OmniOS over OpenIndiana because I read on the OpenIndiana Wiki that the KVM integration is not that great yet. Sometimes I regret my decision; OmniOS is far behind OpenIndiana when it comes to ready-to-use packages...
Still looking at the dual-socket/L5639 market, but that only seems to make sense if you have an old dual-socket 1366 motherboard lying around. Buying everything new (except the CPUs, of course) seems to end up on the expensive side again.
Looking at my experience so far, I believe that an "all-in-one" would be much easier with an OpenIndiana/OmniOS system, instead of using ESXi, passthrough and a ZFS system.

I will try to respond to all posts...

@kenned3
I like to keep my basic file server clean, so only file serving (in the global zone). Your thoughts are absolutely correct; I've started implementing some services in zones (see above), and will continue to do so. Plex will remain in a VM; not sure if it stays on ESXi or will be moved to OmniOS/KVM.

@Biren78
Got kind of lost in this whole "media management" thing. Newznab/nZEDb are quite a bit of work sometimes. :)

@nry
Good recommendation! I had a look, but noise is a bit of an issue since I usually live in apartments, so that one seemed too noisy for me. And yes, I run my HDDs in RAID: 6 x 2 TB in RAIDZ2, 6 x 3 TB in RAIDZ2, 4 x 2 TB in striped mirrors ("RAID 10", for the MySQL DBs), and 2 x 500 GB in a mirror (RAID 1); rough usable capacities are sketched below.
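
Rough usable capacities for those pools, counting only the parity/mirror overhead (a sketch; ZFS metadata, slop space and the TB-vs-TiB difference are ignored, so the real figures come out lower):

```python
# Usable capacity of each vdev layout, parity/mirror overhead only.
def raidz2_tb(disks, size_tb):
    return (disks - 2) * size_tb   # two disks' worth of parity

def mirror_tb(disks, size_tb):
    return (disks // 2) * size_tb  # half the disks hold copies

pools = {
    "6 x 2 TB RAIDZ2":          raidz2_tb(6, 2.0),  # 8 TB
    "6 x 3 TB RAIDZ2":          raidz2_tb(6, 3.0),  # 12 TB
    "4 x 2 TB striped mirrors": mirror_tb(4, 2.0),  # 4 TB
    "2 x 500 GB mirror":        mirror_tb(2, 0.5),  # 0.5 TB
}
for layout, usable in pools.items():
    print(f"{layout}: ~{usable:g} TB usable")
```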

@lpallard
That one needs a longer response. I hope I've already addressed some of your questions in the description above. Let's see.
(1) Yes, see above. Let me know if you have more questions.
(2) Not an expert on NICs, unfortunately. I always buy Intel, so the 10 Gbit one is an X540-T2. Unfortunately these NICs are very expensive, so I haven't bought a second one yet. I'm thinking about switching to the Mellanox adapters mentioned here on the forum, but I haven't done enough research yet.
The dual-port 1 GbE one is a normal Intel card from eBay; I can't tell you the model right now. No clue about quad-port ones, sorry.
(3) I believe that has been addressed... :)
(4) The performance with ESXi (5.1) has been great, I have to admit. I never really measured throughput, but there didn't seem to be any performance penalty (at least in my SOHO scenarios; some experts here might disagree). MySQL is a different beast, though. I found that it performs much better close to bare metal, e.g. in a zone (see my description above). Newznab/nZEDb can be very demanding in terms of MySQL performance, so this improved a lot with the change to MySQL in a zone (one DB for all VMs). The other applications you mentioned are only I/O-intensive when it comes to moving/copying files.
I found my all-in-one solution nice, but quite difficult in terms of management. You always have to plan all your steps when doing maintenance or changes, since your VMs are usually stored on the virtualised file server. So once that file server is shut down, you don't have your VMs available anymore (or any files on the storage). Bit of a hassle for me; that's why I split it up. Another reason was fighting with the different (virtualised) NIC types (ESXi: vmxnet3 or e1000) in OpenIndiana. That never worked well for me.
(5) I haven't seen that, no.
With all my past experience, I would either split the file server and virtualisation, or go for something like OmniOS (or perhaps OpenIndiana) and use zones/KVM for virtualisation. Way fewer hooks and switches to correct/adjust, although more of a learning curve than with ESXi.

@TallGraham
I wrote in your thread already... :)
Never had problems with the X-Case RM424 Pro and an X9SRL-F. Did you ever get documentation for the jumpers, by the way?

Again, thanks a lot for all your help! STH is a great forum, and I like the politeness here a lot. Too many forums these days where people just bash each other, unfortunately.

Enjoy the holidays.

Chris
 

TallGraham

Member
Oh how embarrassed am I! :eek:

Yes, you very kindly offered to test settings on your case for me when I was having trouble with the fans on mine. So many wonderful people have posted on my build thread that I lose track of who is who... sorry! I totally agree with what you say about how great the STH forums are for polite, helpful, and, in my case, a bit forgetful people ;)

The only thing I found out about the fan issue is that it appears to happen only with the X9SCM-IIF board that I have. I saw your post and the bit about the case and dove in without reading it properly. I was just so excited that I might be able to return the favor and help someone here.

There is a picture on my build thread, and some details about three of the jumpers are printed on the PCB. The last one, which fixed my problem, apparently stops the fans from going too slowly if you remove it. This was my issue: the motherboard was sending fan speeds that were just too slow for the fans, so they would stop.