I'm looking to rethink my existing server build to best suit my requirements, but there are a few too many things I'm unclear on, and I hope some advice from you experienced bunch might just set me on the right track.
I previously built a combined dual-E5-2687W workstation/8x3TB RAID-10 server, with another two workstations attached to it by dedicated 10GbE connections, and everything running Windows 7 Pro.
This is for heavy fluid-simulation and rendering work, which typically involves reading and writing files of 2-3GB per frame. That setup works pretty well, but it's far from ideal: it gets bogged down as soon as two workstations access it at once, and I have to be very careful to reserve 1-2 of the 16 cores on the server-workstation for disk and network I/O, or it bottlenecks everything.
It also has the downside that I have to run a rather power-hungry workstation day and night even when I'm not using the processing power and only need the file server running.
What I'm planning to do is shift the hard drive array into a 12-bay 2U enclosure and get something like a Xeon E3 or low-end uniprocessor E5 motherboard/CPU - whatever has enough PCIe lanes to run two 10GbE cards and whatever disk controller cards I might need. I'd either expand from 8 to 12 drives now, or at least plan to expand to that later - it turns out even a 12TB RAID-10 gets eaten up in no time with this workload!
I'm also wondering whether to get my hands dirty and try to set up CentOS rather than relying on Windows.
The parts I'm unclear on are:
- If I'm running 12x3TB SATA drives in RAID-10, is there any real performance benefit to a dedicated hardware RAID card or other host adapter, or would I be just as well served by software RAID handled through CentOS (mdadm)? There's a rough sketch of what I have in mind after this list.
- From what I've read, I get the feeling that setting up "NFS over RDMA" could give a significant performance boost when working with large files over a 10GbE connection, but information seems pretty limited on this. Would that be the case, or am I getting the wrong end of the stick? I gather I'd need to be running Linux at both ends to take advantage of RDMA - or is there some way to do it with Windows-based clients? (My rough understanding of the setup is also sketched after this list.)
(I noticed Windows Server 2012 now supports RDMA for SMB shares via SMB Direct, but I'd rather avoid paying out for Windows Server licenses, and my workflow should be transferable to Linux-based workstations.)
- If I did run software RAID with these specs, would there be a minimum CPU capability I should be looking at, or would even something like the lowest-end dual-core Haswell Xeon be enough to keep up?
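For the software RAID option, here's roughly what I picture doing on CentOS - just a sketch based on the mdadm man page, with placeholder device names and an untested chunk size, not something I've actually run:

```
# Create a 12-disk RAID-10 array; /dev/sd[b-m] are placeholder names for
# the data disks (sda assumed to be the OS drive). 512KiB chunk chosen
# with big sequential files in mind.
mdadm --create /dev/md0 --level=10 --raid-devices=12 --chunk=512 /dev/sd[b-m]

# Persist the array definition so it assembles at boot.
mdadm --detail --scan >> /etc/mdadm.conf

# Filesystem aligned to the stripe geometry: 512KiB chunk / 4KiB block = 128,
# and 6 data-bearing disks in a 12-disk RAID-10 gives stripe-width 768.
mkfs.ext4 -E stride=128,stripe-width=768 /dev/md0
```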
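And this is my rough understanding of the NFS-over-RDMA side, pieced together from the kernel's nfs-rdma documentation. One thing I've since realised: RDMA needs RDMA-capable NICs (iWARP or RoCE at 10GbE speeds), so plain 10GbE cards wouldn't cut it - treat this as an untested sketch with placeholder paths:

```
# Server side (CentOS), with the NFS server already running and the
# RDMA drivers for the NIC loaded:
modprobe svcrdma
echo rdma 20049 > /proc/fs/nfsd/portlist   # listen for NFS/RDMA on port 20049

# /etc/exports entry as usual, e.g.:
#   /srv/array  192.168.10.0/24(rw,no_subtree_check)
exportfs -ra

# Client side (Linux), mounting over RDMA; NFSv3 seems to be the
# well-trodden path here:
modprobe xprtrdma
mount -t nfs -o rdma,port=20049,vers=3 server:/srv/array /mnt/array
```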
If anyone has any other suggestions for the best way to wring mostly-sequential read/write performance out of a 10GbE-networked server, I'd love to hear them.
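For what it's worth, whatever I end up building, my plan for sanity-checking it is a fio run that mimics the 2-3GB-per-frame pattern, with two jobs to stand in for two workstations hitting the share at once (mount path and sizes are placeholders):

```
# Sequential write then read of frame-sized files over the mount.
fio --name=seqwrite --directory=/mnt/array --rw=write --bs=1M --size=3G \
    --numjobs=2 --direct=1 --group_reporting
fio --name=seqread --directory=/mnt/array --rw=read --bs=1M --size=3G \
    --numjobs=2 --direct=1 --group_reporting
```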