Hartford Stage Audio & Projections Department Large File Server


Morgan Simmons

Active Member
Feb 18, 2015
Hello Everyone.
I've been a lurker on this forum for a while, but I decided to post a build to pool your wonderful knowledge.

Build’s Name: Hartford Stage Audio & Projections Department Large File Server
Operating System / Storage Platform: TBD
CPU: 2x Intel Xeon X5650 (6-core, LGA1366)
Motherboard: Supermicro X8DT6-F LGA1366
Chassis: Supermicro SC836E26-R1200
Drives: 4 TB White Label drives (building to 16 total over time), 2 SSDs for write cache, 1 SSD for the OS
RAM: Hynix 6x4GB DDR3-1333MHz ECC Registered
Add-in Cards: TBD
Power Supply: Supermicro redundant 1200 W
Other Bits: Gigabit network with 2 SG300 switches and a 2-port LACP trunk between them. 5 VLANs. Dell 2950 (dual quad-core 3.0 GHz, 32 GB RAM) running pfSense and 5 instances of Windows 8

Usage Profile: We do theatrical productions with large-file projection components as well as small-file audio components. We will need this server to consolidate all of our external drives to quickly serve and hold files for these 1.5-month productions.

Other information… The biggest thing I'm questioning is what OS/filesystem to use. We don't really need iSCSI, as this would be better off with multiple computers accessing it at the same time. What would be beneficial is a system that would allow us to add a 4 TB disk or two per show until we were maxed out at all 16 drives (a rough growth sketch is below). I don't have the budget to do it all at once, but we can definitely handle 1-2 drives at a time. XFS seems to not expand easily. Windows 2012R2 can expand, but it seems to be quite slow. BTRFS sounds like it would be perfect, but I haven't found a relatively easy-to-use solution built on it.
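For planning purposes, here's a rough sketch of how usable space would grow if we added two drives per show to a single RAID 6 array (the layout and per-show cadence are my assumptions; raw capacity only, before filesystem overhead):

```python
# Rough growth plan: one RAID 6 array expanded two drives per show.
# Layout and cadence are assumptions; raw TB, no filesystem overhead.
DRIVE_TB = 4

for drives in range(4, 17, 2):            # start with 4 drives, grow to 16
    usable = (drives - 2) * DRIVE_TB      # RAID 6 spends two drives on parity
    print(f"{drives:2d} drives: {usable} TB usable")
```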

I have also thought about ignoring the filesystem and focusing more on a HW RAID card that could handle an SSD cache in either RAID 50 or 60. This option seems promising, but it also seems to have some corruption pitfalls with such a large amount of storage.
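For comparison, a quick sketch of where the 16-drive endgame lands under the layouts I'm considering (the two-span arrangement for RAID 50/60 is an assumption on my part):

```python
# Usable capacity for 16 x 4 TB drives under a few parity layouts.
# The two-span arrangement for RAID 50/60 is an assumption.
DRIVES, SIZE_TB, SPANS = 16, 4, 2

raid50 = (DRIVES - SPANS) * SIZE_TB       # one parity drive per RAID 5 span
raid60 = (DRIVES - 2 * SPANS) * SIZE_TB   # two parity drives per RAID 6 span
raid6 = (DRIVES - 2) * SIZE_TB            # single 16-drive RAID 6

print(f"RAID 50 (2 spans): {raid50} TB, tolerates 1 failure per span")
print(f"RAID 60 (2 spans): {raid60} TB, tolerates 2 failures per span")
print(f"RAID 6 (1 array):  {raid6} TB, tolerates any 2 failures")
```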


All of the audio files (1-5 GB per show) will be handled by the local computers, but the video files (1-2 TB a show) would be handled by this system, and hopefully the audio files could be backed up long-term to this system as well.

The motherboard has two LSI SAS2008-based SFF-8087 SAS ports. I was going to connect each of these to the backplane of the chassis (it has multiple SFF-8087 SAS ports for expansion and for daisy-chaining to another chassis if needed). I've read about multipath and failover SAS connections, but I'm not very knowledgeable about this yet. Still need to read up more.

Any suggestions or thoughts would be Greatly Appreciated!

Thanks
Marshall Simmons
Hartford Stage
Audio & Projections Supervisor
 

CreoleLakerFan

Active Member
Oct 29, 2013
Random musings:

  • Have you considered Windows storage spaces?
  • That's a hell of a lot of firepower for a Storage Server supporting five instances of Windows 8.
  • Don't mix SAS and SATA drives on that backplane.
 

Patriot

Moderator
Apr 18, 2011
@Marshall Simmons

Is the storage for the live performance... as in streaming to it, or is it a dump site after?
The latter would make the performance requirements less stringent.
If you will be adding drives often, a non-traditional RAID setup seems best; syncing the parity after an expand can take an exorbitant amount of time for a large array (rough numbers below).
On the other hand... you could always create multiple arrays. Do RAID 6 with the white label drives; $130 for 4 TB.
WL 4TB 7200RPM 64MB Cache SATA 6.0Gb/s (Enterprise Grade) 3.5" Hard Drive (For Server, RAID, NAS, DVR, Desktop PC) w/1 Year Warranty - Newegg.com
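To put rough numbers on the resync pain (the aggregate rates here are assumptions, not measurements; real expands usually run slower because they compete with live I/O):

```python
# Lower-bound estimate: an expand/resync has to re-stripe every byte of
# the array at least once. Aggregate rates are assumptions.
def sync_hours(capacity_tb, rate_mb_s):
    return capacity_tb * 1e12 / (rate_mb_s * 1e6) / 3600

for capacity_tb in (16, 32, 64):          # array size as it grows
    for rate in (50, 100, 150):           # aggregate MB/s during the sync
        print(f"{capacity_tb} TB @ {rate} MB/s: "
              f"~{sync_hours(capacity_tb, rate):.0f} h")
```

At 64 TB, even the optimistic case runs for days.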

Random musings:

  • Have you considered Windows storage spaces?
  • That's a hell of a lot of firepower for a Storage Server supporting five instances of Windows 8.
  • Don't mix SAS and SATA drives on that backplane.
I could see Storage Spaces working for this.
Not sure what you mean by the second point...
Or why you would advise against mixing SAS and SATA? You can't have SAS and SATA in the same array... but besides that?
 

Morgan Simmons

Active Member
Feb 18, 2015
Thanks guys for the replies!

As for the horsepower of the system, I essentially wanted the fastest system I could get for a relatively cheap price. I tried to stick with things that were high-end a generation or two ago to keep our prices in check. Going to Haswell seemed like it would dramatically increase the cost. I'm also hoping that the CPUs will be fast enough for us for the next 3 or so years, and we'd just have to upgrade our networking to 10GbE or something else to keep up with our requirements.

@Patriot: On the performance/storage-pool question: we are looking at getting a media package called Watchout, which requires a host computer that serves files to multiple display computers that actually render and send the videos to the projectors. The display computers have SSDs that hold the video content during performances, but during technical rehearsals, any changes made to those videos on the host have to be uploaded to the display computers. So it's a bit of both; not exactly streaming HD content, but deploying those changes at the fastest rate possible would certainly save time.
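Quick math on what the deploy times look like at different link speeds (the real-world throughput figures are my assumptions):

```python
# Time to push a show's video content to the display machines.
# Practical throughputs are assumptions; protocol overhead and disks
# both eat into line rate.
def push_hours(size_tb, rate_mb_s):
    return size_tb * 1e12 / (rate_mb_s * 1e6) / 3600

links = {"1 GbE (~110 MB/s practical)": 110,
         "10 GbE (~700 MB/s practical)": 700}

for size_tb in (1, 2):                    # TB of video per show
    for name, rate in links.items():
        print(f"{size_tb} TB over {name}: ~{push_hours(size_tb, rate):.1f} h")
```

That gap is part of why 10GbE is on the future upgrade list.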

I also need something that can deep-store backups of all of our previous shows. I've currently maxed out the space on the Dell 2950 (one 250 GB SSD VM drive, five 2 TB drives in RAID 5), and we are going to eat that up within the next 6 months to a year, so having something ahead of time that will be able to hold more is another reason I'm building this larger system.

@CreoleLakerFan: pfSense and the instances of Windows are what is currently running on the Dell 2950 via ESXi 5.5u2. On the larger server, I'm expecting to run just the storage OS VM and a Windows VM for Watchout that we can RDP into. (I'll also probably run Folding@home during off hours on another VM.)

I've looked a little at Windows Storage Spaces via Server 2012R2, but reading the thread on this forum about the slow write speeds had me a bit worried. That's the only thing that was really holding me back from a Windows option, since it can also host VMs via Hyper-V. I was planning on having SSDs (one or two 256 GB) as cache drives to speed things up. Am I worrying too much about the speed issue? I wouldn't have an issue moving completely to Hyper-V on the 2950 and the new server.


Thanks again for everything. I have the chassis already, and should get the motherboard and RAM tomorrow, as well as the CPUs and heatsinks Monday.

Marshall Simmons
 

Morgan Simmons

Active Member
Feb 18, 2015
@Patriot Those drives look like the ones I was planning on getting (similar pricing to the company on eBay selling white label drives).

Found those thanks to this website and forum!
 

HellDiverUK

Active Member
Jul 16, 2014
Storage Spaces is probably too slow and inflexible for your needs.

I don't think BTRFS is quite stable enough yet for production use. There have been some bugs cropping up recently which aren't inspiring much confidence.

I'm going to make a recommendation here, with a few caveats: unRAID. But it'd have to be unRAID 6.

unRAID 6 supports BTRFS and XFS, and it runs like a storage pool. Need another drive? Just stuff one in. It does parity; all you need is a single drive that's equal to or larger than your largest data drive.
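To illustrate the single-parity idea (a toy sketch of XOR parity; unRAID's real implementation is more involved, but this is why the parity drive has to be at least as big as the largest data drive):

```python
# Toy XOR parity across three "drives". Each parity byte is the XOR of
# the bytes at the same offset on every data drive, so the parity drive
# must cover the largest data drive.
data_drives = [b"\x10\x20\x30", b"\x05\x06\x07", b"\xaa\xbb\xcc"]

parity = bytes(a ^ b ^ c for a, b, c in zip(*data_drives))

# Lose drive 1, then rebuild it from parity plus the survivors.
survivors = [data_drives[0], data_drives[2]]
rebuilt = bytes(p ^ a ^ c for p, a, c in zip(parity, *survivors))

assert rebuilt == data_drives[1]
print("drive 1 rebuilt:", rebuilt.hex())
```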

The only thing is, unRAID 6 is still in beta. It's pretty stable as it is, and they're expecting to have an RC in a few months.

I'm running 6b12 with XFS and I've had no major problems. I run some apps on Docker, and unRAID also has KVM for VMs. I'm running a Windows 7 instance on KVM, and it works perfectly.
 

CreoleLakerFan

Active Member
Oct 29, 2013
pfSense and the instances of Windows are what is currently running on the Dell 2950 via ESXi 5.5u2. On the larger server, I'm expecting to run just the storage OS VM and a Windows VM for Watchout that we can RDP into. (I'll also probably run Folding@home during off hours on another VM.)
That makes sense. I couldn't tell if you were planning on running your storage server directly on the hardware, or as a VM.
 

Morgan Simmons

Active Member
Feb 18, 2015
Sorry @Patriot and @Patrick, the links are corrected in the earlier posts.

@CreoleLakerFan Thanks! The 2950 is working perfectly right now for those VMs. I'm actually not sure what to do if I run Win2012R2. Is it better general practice to run Hyper-V bare metal with Win2012R2 in a virtual machine, or to run Win2012R2 bare metal with Hyper-V on top of it?

@HellDiverUK Thanks! I'm taking a look at the unRAID 6 beta forums now. It does sound like it has the potential to be excellent, and the price is great. But it seems like there is quite a bit left to finish before they're ready for 6 final.


I put a post in the Win2012R2 performance thread about making Windows think it has a BBU-protected write-back cache to speed up write performance. I'm hoping that @PigLover can test it to see if that will help improve the speed even more. I found the link via Storage Spaces and Parity – Slow write speeds | TecFused


So far I'm leaning towards Win2012R2 for the storage server and converting the Dell 2950 from ESXi to Hyper-V over the summer.

Thanks to everyone so far.
Marshall
 

Morgan Simmons

Active Member
Feb 18, 2015
Quick question: I have the Supermicro backplane with multiple SAS 8087 outputs in two groups. I will also have two 8087 inputs via the motherboard. I should connect both A and B outputs to the A and B inputs on the motherboard for multipath and failover, correct?
 

PigLover

Moderator
Jan 26, 2011
I put a post in the Win2012R2 performance thread about making Windows think it has a BBU-protected write-back cache to speed up write performance. I'm hoping that @PigLover can test it to see if that will help improve the speed even more. I found the link via Storage Spaces and Parity – Slow write speeds | TecFused
Setting "IsPowerProtected" on a Storage Spaces volume that is not actually power protected is a REALLY BAD idea. You leave the volume at risk of corruption on a power failure. In fact, if you read the first couple of posts in that thread on Storage Spaces, you'd see that the entire thread exists to show how to get acceptable performance AND maintain the integrity of the data - I acknowledged the "IsPowerProtected" hack and took great pains NOT to use it.

I'm not interested in testing the performance of a method that compromises the integrity of the data. Achieving throughput by putting the data at risk is not a "success".
 

Morgan Simmons

Active Member
Feb 18, 2015
Setting "IsPowerProtected" on a Storage Spaces volume that is not actually power protected is a REALLY BAD idea. You leave the volume at risk of corruption on a power failure. In fact, if you read the first couple of posts in that thread on Storage Spaces, you'd see that the entire thread exists to show how to get acceptable performance AND maintain the integrity of the data - I acknowledged the "IsPowerProtected" hack and took great pains NOT to use it.

I'm not interested in testing the performance of a method that compromises the integrity of the data. Achieving throughput by putting the data at risk is not a "success".
@PigLover Apologies.
 

PigLover

Moderator
Jan 26, 2011
Apologies not needed; no offense taken. But you have to be cautious with what you find on various blogs. There is a good deal of poor advice (like this guy shared) and often outright FUD. Take it all with a grain of salt and a skeptic's eye, and double-check before you jump in.
 

MikeC

Member
Apr 27, 2013
Quick question: I have the Supermicro backplane with multiple SAS 8087 outputs in two groups. I will also have two 8087 inputs via the motherboard. I should connect both A and B outputs to the A and B inputs on the motherboard for multipath and failover, correct?
Yes, but only if you are using SAS drives, which support dual paths. If you are using SATA drives, which are single-path, they can't make use of the second path and thus only need one cable.
 

Morgan Simmons

Active Member
Feb 18, 2015
Quick update: everything is in except for the motherboard. However, I have had one of the worst experiences I've ever had with a seller on eBay over this stupid motherboard.

The first motherboard takes a week to get here, and it's the wrong motherboard (an X8DTN in an X8DT6-F box). The seller profusely apologizes and immediately ships out the "correct" motherboard via USPS 2-day Priority. USPS decides to have the motherboard go on vacation for 4 days in Puerto Rico. After a couple of emails to the seller, he said he had sent it to the right address and told me to call USPS to fix it. After doing that for 2 hours, and 3 more days of waiting, I finally get the motherboard. IT'S THE WRONG F***ING MOTHERBOARD AGAIN!!!!!! (An X8DTI in an X8DT6-F box.)

So now I've had it up to here. I call the seller, and he again profusely apologizes, telling me how his guy in the warehouse and his buyer screwed it all up, and that he'd personally go over to get the correct board. He calls me back a couple of hours later to tell me that he doesn't even have the board I ordered anymore.

So now I'm settling for an X8DAH+-F-LR, which was shipped overnight but delayed by weather, so I'm hoping it'll be here tomorrow, plus an LSI HBA, which I'm still waiting for him to find.

I would honestly like to post his username on this forum as a buyer-beware warning, but I don't know if that's allowed. If it is, I'll gladly update this so no one has to go through the same 3 weeks of crap I have.
 

Morgan Simmons

Active Member
Feb 18, 2015
Oh he's definitely getting a negative review. Not for sending the wrong board, but for selling a board that he didn't have.
 

Morgan Simmons

Active Member
Feb 18, 2015
There has been a lot of PROGRESS!!!!

I played with Windows Server 2012R2, but I ended up going with ESXi 6 and running FreeNAS currently. I ended up getting an IBM M1015 card for the expander backplane (followed the instructions on here for flashing it to IT mode), and I have it passed through to FreeNAS. It's currently working very well, and I'm easily getting 90 megabytes/sec over gigabit. I currently have 2 mirrored vdevs in a volume (just some extra drives I've had lying around).
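Since the plan is still one or two drives per show, here's a sketch of how the pool grows a mirrored pair at a time (raw capacity only; ZFS metadata overhead not counted):

```python
# ZFS-style growth: each mirror vdev contributes its smallest member,
# and the pool's capacity is the sum of its vdevs. One pair per show.
DRIVE_TB = 4
pairs = []

for show in range(1, 9):                  # 8 shows x 2 drives = 16 bays
    pairs.append((DRIVE_TB, DRIVE_TB))    # add one mirrored pair per show
    usable = sum(min(pair) for pair in pairs)
    print(f"after show {show}: {len(pairs) * 2} drives, {usable} TB usable")
```

Mirrors trade capacity for easy expansion and fast resilvers, which fits the show-by-show budget.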

I also looked at a BTRFS-based OS called Rockstor. What they've built so far is excellent! Very clean and easy-to-use interface. However, they are missing things that I'd really like, such as SSD caching and deduplication.


Two quick questions for you all:
First, regarding my Supermicro chassis: currently the blue lights are only on when there is activity on the hard drives. Is there a way to tell the backplane to keep any connected hard drive's blue light constantly on, and still blink when there is activity?
Secondly, is there a way for the LSI manager to remotely read the card when it's passed through to the FreeNAS VM?


Thanks everyone!
 

MiniKnight

Well-Known Member
Mar 30, 2012
Can you install the FreeBSD LSI tools for the card? On the light thing, I don't think so; I would suggest using sticky labels instead.