need workstation suggestions & not sure where to post . . .


SemiLiterate

New Member
Apr 23, 2013
15
0
0
Southeastern USA
Google brought me here. I looked for a help wanted forum & didn't find one. Hope I'm not stepping on toes by posting this. If so, I apologize in advance.

I'm proficient with building "normal" high-end computers -- I've been building them for well over two decades -- however, the only dual-processor system I've ever built was based on an old MSI motherboard using a pair of Pentium III CPUs. My most recent build was a Z77/i7-3770K machine with a 240GB SSD for a boot drive and a pair of 3TB WD Reds in RAID-1 with a 60GB SSD as cache for internal work space & storage. My display is a Dell U3011 driven by a Radeon HD 7950. I also have two large external NAS devices (RAID-6, 8x2TB each) that I use as more permanent storage and backup (one is in another building and mirrors the "local" one).

That's where I'm coming from; but I need something more powerful. I'm now shooting with Nikon D800 36 megapixel cameras -- very large NEF files. And by the time they are Photoshopped, they are larger still. I need more horsepower and more storage.

I'd like to pattern the new system on what I've already built. I'll expand the two NAS devices with 4TB drives, effectively doubling my external storage. I want my workstation to have dual Xeon processors, an SSD boot drive, SSD(s) for working drives and an internal RAID array (probably RAID-6 8x2TB -- I'll re-use drives from my NASes) for "active" storage. I also need two video boards (probably Radeon HD 7950 like the one I already have) to drive a pair of the new Dell U3014 monitors. (While one video board can easily drive multiple monitors, it cannot be calibrated for each of them -- proper calibration requires that each monitor have its own video board.)

I'm a bit out of my league and need a few suggestions: CPUs, motherboard, raid controller, etc.

Thank you,
~ SemiLiterate
 

MiniKnight

Well-Known Member
Mar 30, 2012
3,073
974
113
NYC
Hi SL ---

My major suggestion is to get into LGA2011, either with an i7-3930K or a Xeon E5. You likely do not need dual CPUs, but 64GB(+) of RAM is good when you have lots of stuff open. Some Adobe programs still have big single-threaded components, so high clock speeds are good. If you really need more, then go dual CPUs.

RAID 1 or RAID 0 is really easy. You can even get LSI controllers built into motherboards. Might go that route if possible.

What kind of NAS do you have? I would keep the spinning hard drives in a NAS and use SSDs in the workstation. Increase the size of the pipe if you need more bandwidth to the NAS.
 

SemiLiterate

New Member
Apr 23, 2013
15
0
0
Southeastern USA
Most photo applications, including those from Adobe, are fully multi-threaded these days. The most I gain with LGA2011 is two more cores and two additional hyper-threads. Going dual Xeon gives me the potential for much more. And, while speed is important, dividing processes among more threads is more effective.

I'll get a large SSD -- or possibly two of them connected in RAID-0 -- as workspace; but I want serious local spinning storage also. Moving large files across Ethernet -- even Gig/E -- is less efficient than addressing them on directly attached storage. My NASes are made by Synology. I need more internal storage than is economically practical with SSD alone.

This project is going to be a major cash outlay. I'm not afraid of throwing money its way; but I want to do it judiciously.
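
To put rough numbers on the Ethernet point, here is a back-of-the-envelope sketch in Python (the file size and throughput figures are assumptions for illustration, not measurements):

    # Rough time to open one large layered PSD from different storage.
    # Assumed sustained throughputs: GigE share ~110 MB/s after protocol
    # overhead, single 7200rpm disk ~150 MB/s, SATA-III SSD ~500 MB/s.
    file_mb = 300.0  # a layered 16-bit D800 PSD can easily reach this size
    for name, mb_per_s in [("GigE share", 110), ("local HDD", 150), ("local SSD", 500)]:
        print(f"{name:11s} {file_mb / mb_per_s:4.1f} s")
    # GigE share   2.7 s
    # local HDD    2.0 s
    # local SSD    0.6 s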
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,519
5,825
113
Actually, you can stick an 8C/16T Xeon E5-2600 series CPU in a single-socket LGA2011 board. It is expensive to get high clock speeds that way, and if you want lots of RAM, dual Xeon is the way to go.

Many members are utilizing higher-speed interfaces, such as inexpensive 40Gbps InfiniBand, to move data to their NAS systems, which might be what he is getting at.
 

Lost-Benji

Member
Jan 21, 2013
424
23
18
The arse end of the planet
I am going to buck the two replies from MK & Pat -- sorry guys -- but I see where this is going, and as I build these sorts of systems for a living, I will add my 2 cents.

OK, forget the consumer CPUs. They are limited to 6 cores (I am not and will not refer to fake cores, also known as HyperThreading) and don't support ECC. Domestic CPUs and enthusiast RAM are great for the gaming boys, but when you are working in workstation environments, data corruption and crashing are not acceptable.

Alright, boards: stay with well-known brands that are well featured. Supermicro and Intel have great boards, but the Intels can be buggers, not including anywhere near the features SM does. Asus and Gigabyte also make "Workstation" boards, but you need to ensure they support ECC and use server-grade chipsets (C606) rather than the domestic X79. If you want to stay with a single socket, then Gigabyte has an awesome board that is badly named (it doesn't run an X79):
GIGABYTE - Motherboard - Socket 2011 - GA-X79S-UP5-WIFI (rev. 1.0)
C606-based (related to the X79 but not as well supported in WS environments), it packs a lot of features, is well priced, and supports ECC with Xeons up to 8 cores, even though the webpage lacks all the relevant info.
The final titbit to watch is whether you want to expand later. If you might want a second CPU down the track, get a dual-socket (DP) board now.

RAID & SSD's, don't fall for RAIDing SSD's. There is still a very mixed bag of info out there on TRIM support and keeping the drives in good shape. There are plenty of boards that support softRAID or have HW RAID chipsets on-board but keep this for local storage options as HDD support can be limited to enterprise drives only on some meaning uber dollars.
The best way to run an OS on an SSD is to use two, not one... Confused? Well, here it is: have only a single SSD (Intel 520 or Samsung Pro) attached to the 6Gbps (SATA-III) ports (make sure they are in AHCI mode) and install Windows 8 or Server 2012 (Win7/Server 2008 R2 can also do this but are limited in other features later). Do not partition off this drive; keep it simple. Once the OS is installed, all drivers are in and the system is smooth, shut down, add a second matching SSD to the other SATA-III port, and boot. When in, go to Disk Management and mirror the SSDs using Windows' dynamic disk mirroring. This will ensure that you have the full read/write speed of a single SSD while maintaining TRIM.
When you have done that, it will take a little while to sync; let it do its thing and leave it alone while it does. When you reboot next time, you will get a plex option in the bootmgr screen; let it boot, then go to system properties and reduce the timeout to 2 seconds, or use MSCONFIG to nuke the other boot entry after booting. I recommend the first option, to allow booting if the primary drive fails.
Any other SSDs to be used for caching or temp drives are best used as JBOD on the fastest ports with AHCI. For all HDDs intended for storage, you can either use the onboard RAID options or, if you went Win8/S2012, Storage Spaces may be a better option.

Power supplies: ATX-style supplies are fine for what you wish to do, but keep in mind that a "Real Man's" board will be a DP board and will more than likely require at least 2x EPS12V power feeds as well as a ballsy PSU. Redundant PSUs are nice but can add significant cost and are limited for multi-GPU configs.

GPUs: ATI cards are great for video playback and gaming, but check the software you intend to use for what supports HW GPU acceleration; you may find the nVidiots and CUDA more sensible. All later-generation boards will support Crossfire and some are also SLI capable, although I don't think you require this. Remember, Windows is limited to a total GPU core count of 4, no more.

The huge warning/consideration: S2011 CPUs drive PCIe lanes direct from the CPU, not from a northbridge. DP boards sometimes have half the slots driven by the first socket and the rest from the other socket. Also keep in mind that the peripherals (onboard RAID, SATA, IDE, sound, PCI and other guff) are also driven off PCIe lanes internally in most cases.

Use eBay: if you want to go a little cheaper, eBay can be a gold mine, especially for CPUs. The downside is warranty -- you can't beat walking into the local shop and swapping your purchase.

As for high-speed interconnectivity, InfiniBand is cheap but also getting on in age. The other option would be to either aggregate the dual or quad onboard NICs (Server 2012, not Win8, will do this automatically, even without a managed switch) and stay with the existing NAS boxes, or look at a 10GbE NIC and a 10GbE switch for simple, raw speed.
 
Last edited:

Patrick

Administrator
Staff member
Dec 21, 2010
12,519
5,825
113
As for high-speed interconnectivity, InfiniBand is cheap but also getting on in age. The other option would be to either aggregate the dual or quad onboard NICs (Server 2012, not Win8, will do this automatically, even without a managed switch) and stay with the existing NAS boxes, or look at a 10GbE NIC and a 10GbE switch for simple, raw speed.
Agree with most of these. I even explained the single CPU in a dual CPU motherboard thing a bit.

On the InfiniBand part, QDR InfiniBand is not that old (mid-2008 IIRC); FDR just started getting introduced in 2011. A bit harder to set up, sure. Faster/cheaper/lower power per Gbps for a point-to-point implementation, certainly.
 

Andreas

Member
Aug 21, 2012
127
1
18
Welcome SemiLiterate,
I am also shooting with the D800/E (sample: a 25 MB JPEG). Let me add my 2 cents.

I have all the systems you are envisioning in operation:
1) i7-3770K / 32GB
2) i7-3930K / 64 GB
3) dual and single Xeon E5-2687W / 256 GB
4) soon: dual Xeon E5-2665 / 128 GB

My choice for photo editing is #1

Why?
photo apps are not written for throughput; most of them are latency-dependent (fewer cores but higher frequency benefits)
lowest energy consumption / less heat / less noise
You will not experience any perceived speed increase by moving just to an LGA2011 platform
The apps you are talking about are not NUMA-aware, so their performance on those systems might even be lower than on single-socket systems
The dual Xeon systems are throughput-optimized platforms; you need a user-interactive system -- the opposite of throughput
The slowest component of your envisioned system is the 8x2TB RAID-6 in your workstation -- I wouldn't do it anymore, as those drives compete with your apps for cache in RAM

My approach with about 10 TB of RAW files:
Data is stored on a homeserver. Currently an E3-1245v2 system with 32GB ECC RAM and lots of 3TB WD Red drives, with 4 x 1Gbit/sec LAN teamed into one connection.
My primary photo workstation is the i7-3770K system with 32 GB RAM, an NVidia GPU and 3 x 27-inch Samsung LCDs.
Two SSDs connected via the motherboard in RAID-0 for OS, apps and scratchpad space (2 x Samsung 840 PRO 256 GB)
Intel i350 4x1Gbit connected to the homeserver

Reasons:
1) My workstation is silent, and significantly cheaper than your envisioned setup
2) The homeserver is always on, energy efficient and has a host of other duties anyway (backup for all machines in the house, multiple VMs to present to the internet, graveyard for old dematerialized PCs in VMs). Its memory is available to the user as a super fast cache
3) The teamed NICs approach 450 MB/sec sustainable transfer speed -- fast like a local SSD, but with much bigger storage capacity, and faster than your local RAID-6 system (see the sanity check after this list)
4) The server with the disk drives is in a different room
5) Speed. Selecting 50 keepers from 1000 shots takes about 10 minutes. I am using FastPictureViewer for this, which can -- with the support of the GPU -- present 5-6 D800 RAW images per second on screen, even when loaded from the homeserver. Basically as fast as I can press the space bar.
6) Most interactive apps have weak multi-CPU scaling properties (these are not server workloads like Oracle or SQL Server). Adding more cores has fast-diminishing returns; frequency helps more, and dual Xeon CPUs with higher frequency are very expensive.
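
A quick sanity check on the 450 MB/sec figure in point 3 (Python; the efficiency factor is an assumption based on what I observe, not a spec):

    # Four teamed 1GbE links: raw line rate vs. realistic sustained rate.
    links, gbit_each = 4, 1.0
    raw_mb_s = links * gbit_each * 1000 / 8   # 500 MB/s on the wire
    efficiency = 0.9                          # assumed TCP/SMB overhead
    print(raw_mb_s * efficiency)              # ~450 MB/s sustained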

A "cost efficient" recommendation:
Take the i7-3770K system with max memory (if you feel more comfortable with the i7-3930K, then go that route).
If you want more reliability, take the respective Xeon versions which support ECC RAM (i.e. E3-1275v2 or E5-1650).

But most importantly, have fun with your camera, and always good light,
Andy
 
Last edited:

SemiLiterate

New Member
Apr 23, 2013
15
0
0
Southeastern USA
Thank you. Some very good suggestions.

Yes, Andreas, the D800 is fantastic -- both tool and toy. What's most fun, though, is all the glass. I've been shooting Nikon since the '60s and have accumulated over a dozen bodies and nearly sixty lenses.

Based on the replies, especially Andreas', it sounds like my current simple workstation will suffice with some modification: I've already got 32GB RAM and the i7-3770K CPU. I need to drop the hard drives and get more SSDs -- a smaller one for application scratchpad files and a much larger one for local workspace (to replace the cached 2x3TB RAID-1 WD Reds I'm currently using) -- and the 2nd video board for the 2nd monitor. Plus I need to build a true file server with 10gig/E to my workstation (and a 10gig/E board for my workstation) and relegate both my NAS boxes to off-site backup.

I'm already wired for gig/E with an 8-port switch. Can I simply attach the file server to the gig/E switch in place of my current workstation and then connect the workstation and file server via a dedicated 10gig/E line? My current wiring diagram is as follows:

modem <-> wireless G/N router <-> 8-port gig/E switch <-> 2 NAS boxes, my workstation, 2 other computers, 1 television
(plus one laptop, an iPad and 2 cell phones connecting wirelessly)

This becomes a relatively inexpensive workstation upgrade; but then I need to build the file server. For that, I take cues from Lost-Benji. Sounds like SuperMicro, dual Xeon and ECC memory with Server 2012 is the way to go (will I need to upgrade my workstation to Win8 or can it remain Win7?). However, I'm still out of my league. Do I/we continue with this thread or do I start a new one? I'm open to suggestions.
 

PigLover

Moderator
Jan 26, 2011
3,186
1,546
113
I think Andreas' suggestions are spot on. You don't need (and probably don't want to deal with) dual-2011 for your workstation. You already have all the horsepower you need. You just need more memory and faster disk I/O.

For the 10GbE -- yes, you can do a point-to-point link between your file server and your workstation. Leave everything else alone. You may need to do a few things to ensure that your file-server access goes over the faster link. Details depend a lot on the exact configs of your WS and FS (Win7 vs Win8 on the WS; Linux vs Solaris vs Server 2008/12 on the FS; etc.). None of it is hard.

Alternatively, you could get one of the newish D-Link 10GBase-T switches (~$850) and just make 10GbE the only link on the server and WS.

Also, you need to inventory the PCIe on your MB and make sure you have enough to support dual video cards and the 10GbE. You need at least PCIe 2.0 x8 for each of them. You didn't indicate which Z77 board you are using, but only a few of them will have the right PCIe configuration for you.

You may also want to rethink the need for dual video cards. You definitely want to get one of the monitors perfectly calibrated, but ask yourself about your workflow and whether you really need them both color-perfect. Most of my work is video, so the workflow is different, but most of the time one monitor is the primary preview output and is calibrated. The other one can be off a bit and not impact the work.
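
On the PCIe inventory point above, some rough numbers (a Python sketch with round, assumed per-lane figures):

    # Usable bandwidth of a PCIe 2.0 x8 slot vs. what a 10GbE NIC can push.
    mb_per_lane_gen2 = 500              # ~500 MB/s usable per PCIe 2.0 lane
    slot_x8 = 8 * mb_per_lane_gen2      # 4000 MB/s
    tengbe_line_rate = 10 * 1000 / 8    # 1250 MB/s
    print(slot_x8 / tengbe_line_rate)   # ~3x headroom, so 2.0 x8 is comfortable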
 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
...That's where I'm coming from; but I need something more powerful. I'm now shooting with Nikon D800 36 megapixel cameras -- very large NEF files. And by the time they are Photoshopped, they are larger still. I need more horsepower and more storage.

I'd like to pattern the new system on what I've already built. I'll expand the two NAS devices with 4TB drives, effectively doubling my external storage. I want my workstation to have dual Xeon processors, an SSD boot drive, SSD(s) for working drives and an internal RAID array (probably RAID-6 8x2TB -- I'll re-use drives from my NASes) for "active" storage. I also need two video boards (probably Radeon HD 7950 like the one I already have) to drive a pair of the new Dell U3014 monitors. (While one video board can easily drive multiple monitors, it cannot be calibrated for each of them -- proper calibration requires that each monitor have its own video board.)

I'm a bit out of my league and need a few suggestions: CPUs, motherboard, raid controller, etc...

In prior years I have put together a great many pre-press workstations. With the current generation of equipment, it is surprisingly easy to manipulate 50MB or even 150MB files in Photoshop. Consequently, I am not that worried about your CPU choice. In fact, I don't think that you need dual CPUs at all, just a nice fast single CPU. There is nothing wrong with dual CPUs, and perhaps some PS functions can utilize more than six cores, but a single CPU is probably enough.

Beyond that, I often see the following flaws in machines built for image editing:

1) Not enough RAM. Photoshop works best when it works in memory, so load up with tons of RAM. A mid-grade workstation with plenty of RAM will handily outrun a high-end machine with too little. Why not go with 64GB or even 128GB if that means you stay off the hard drives entirely? (See the back-of-envelope sketch after this list.)

2) Slow storage. I hate to sit and wait while files open and close. You are on the right track with SSDs for your working drives (though with enough RAM you'll use these very little) but a RAID6 storage array with eight drives could end up being a bit slow, depending on networking, disks, and the RAID implementation. Is the RAID6 array going to be local to the workstation or NAS?

Lastly: upgrade to CS6 and go with GPU acceleration, though I think CS7 will be even more exciting when it comes to acceleration.
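
And here is the back-of-envelope behind the RAM advice in point 1 (Python; the layer count is an assumed example, not a rule):

    # Memory footprint of a D800 image in Photoshop, 16-bit RGB.
    mp = 36.3                                # D800 megapixels
    base_mb = mp * 1e6 * 3 * 2 / 2**20       # ~208 MB per flattened image
    layers = 10                              # assumed working file
    print(round(base_mb), round(base_mb * layers / 1024, 1))  # 208 MB; ~2 GB per file
    # A few such files open, plus history states, eats RAM fast -- hence 64GB+.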
 
Last edited:

SemiLiterate

New Member
Apr 23, 2013
15
0
0
Southeastern USA
My current Z77 motherboard has two PCIe3/16 slots, each configured as PCIe/8 when both are used. It also has a PCIe/4 slot. That means I'll need a new motherboard. And since the Z77 chipset is limited as to the number of its PCIe lanes, I guess that means I should opt for X79.

Asus offers two high-end choices with slots & PCIe lanes configured such that both video boards are in PCIe3/16 slots and there is an independent (non-switched) PCIe3/8 slot remaining accessible for the 10gig/E card: the P9X79 Pro and the P9X79 Deluxe. Neither the P9X79 Sabertooth board nor the P9X79 WS (workstation) board has more than a 4-lane PCIe slot available when 2 graphics boards are installed, thanks to the combination of switched lane connections and slot placement. Odd. Regardless, the only features offered by the "Deluxe" over the "Pro" appear to be a newer-but-still-obsolete version of Bluetooth, 2x2 Wireless N, an extra wired gig/E port and three extra USB headers. Sounds like my next workstation will be based on the Asus P9X79 Pro with an i7-3930K CPU and 8x8GB DDR3-2400 SDRAM. Add the two video boards, several SSDs, the monitors, ad nauseam.
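
For my own sanity, the lane budget in Python (counting only the CPU-provided lanes, which are the limiting factor; chipset lanes ignored):

    # PCIe 3.0 lanes from the CPU socket.
    s2011_lanes = 40              # LGA2011 (i7-3930K / Xeon E5)
    s1155_lanes = 16              # LGA1155 (my i7-3770K)
    gpus = 2 * 16                 # two video boards at x16
    nic = 8                       # 10gig/E card at x8
    print(s2011_lanes - gpus - nic)   # 0 -- fits exactly on LGA2011
    print(s1155_lanes - gpus)         # -16 -- hopeless on my Z77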

Oh, lord ... 10gig/E PCIe/8 adapters are another $500 apiece. Whose do I choose? Intel? Something else?
Edited to add: Intel E10G41AT2 AT2 Server Adapter 10Gbps PCI Express 2.0 x8 1 x RJ45

Looks like I'm going to be building both a workstation AND a file server . . . . H-e-double-hockey-sticks. It's only money.
 
Last edited:

SemiLiterate

New Member
Apr 23, 2013
15
0
0
Southeastern USA
2) Slow storage. I hate to sit and wait while files open and close. You are on the right track with SSDs for your working drives (though with enough RAM you'll use these very little) but a RAID6 storage array with eight drives could end up being a bit slow, depending on networking, disks, and the RAID implementation. Is the RAID6 array going to be local to the workstation or NAS?
Both my NAS boxes are currently running 8x2TB RAID-6. They are also about three years old. I've never had a drive failure. My plan was/is to replace the drives with new 4TB models.

My original concept for the workstation was to use an SSD for workspace and re-use 8 of the 2TB drives in an internal RAID-6 array (with a dedicated controller); however, everyone here seems to think that I'd be better served (forgive the pun) by building a file server -- reorganizing such that my workstation has only SSDs and file storage is off-loaded to the server connected to the workstation through a 10gig/E channel. With that, both my NAS boxes would be moved off-site as backup storage.
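
For completeness, the capacity math behind the drive swap (a trivial Python sketch):

    # RAID-6 usable space: capacity of (n - 2) drives; two drives' worth is parity.
    def raid6_usable_tb(drives, tb_each):
        return (drives - 2) * tb_each

    print(raid6_usable_tb(8, 2))   # 12 TB per NAS today
    print(raid6_usable_tb(8, 4))   # 24 TB per NAS after the 4TB upgrade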
 

Andreas

Member
Aug 21, 2012
127
1
18
My current Z77 motherboard has two PCIe3/16 slots, each configured as PCIe/8 when both are used. It also has a PCIe/4 slot. That means I'll need a new motherboard. And since the Z77 chipset is limited as to the number of its PCIe lanes, I guess that means I should opt for X79.
Not sure what your intent is. From what I understand you want to achieve, 2 x PCIe3 x8 slots are plenty. In a throughput-oriented workload they could easily saturate your main memory bandwidth anyway. That will not happen in your case; it is interactive, which means there is a human in the equation :)

A correction on the chipset: Sandy Bridge and Ivy Bridge systems have the PCIe root complex on the CPU die, not on the PCH on the motherboard. This is one of the reasons why these CPUs perform so well.

But if you really want to accelerate your editing, go parallel with your GPU's - just kidding ...

(And get a different PSU: 4x Titans or 4x AMD 7970s need a 1500 Watt PSU under full load -- each group.)

Andy
 
Last edited:

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
An external file server with 10GbE would work, of course. You could locate it somewhere where the noise isn't a problem, share it across multiple workstations, etc. Nice. The downside is that you add complexity, make it take longer to get spun up in the morning, and use more power.

If you have multiple editing stations, then go for an external file server. If not, then don't discount keeping the disks internal - you have a choice.

If you do go for an external file server, then think about InfiniBand versus 10GbE. IB is cheaper and faster, but far more difficult to integrate with your normal network traffic. A point-to-point IB network is shockingly cheap and scary fast.

Both my NAS boxes are currently running 8x2TB RAID-6. They are also about three years old. I've never had a drive failure. My plan was/is to replace the drives with new 4TB models.

My original concept for the workstation was to use an SSD for workspace and re-use 8 of the 2TB drives in an internal RAID-6 array (with a dedicated controller); however, everyone here seems to think that I'd be better served (forgive the pun) by building a file server -- reorganizing such that my workstation has only SSDs and file storage is off-loaded to the server connected to the workstation through a 10gig/E channel. With that, both my NAS boxes would be moved off-site as backup storage.
 

SemiLiterate

New Member
Apr 23, 2013
15
0
0
Southeastern USA
Okay, I phrased that poorly: boards using the Z77 chipset have PCIe lane limitations that boards using the X79 chipset do not have because of differences in the CPUs themselves -- socket 2011 CPUs simply have more PCIe lanes available. Given that, finding a board such that the 10gig/E card is in an available 8-lane slot without interfering with typical two-slot graphics cards is not a trivial exercise. Most Z77-based motherboards cannot support an additional 8-lane PCIe device when running a pair of video boards, and the few that can seem to be using auxiliary chips to double-track the PCIe lanes -- probably not the best of tactics, especially when it isn't a problem for X79-based motherboards. The Asus P9X79 Pro appears to meet all my needs.

I've already placed a pre-order for a pair of the new Dell U3014 monitors. My plan was to drive each with its own Radeon HD 7950 video board. This allows me to calibrate each video board and monitor combination independently. Is this not best policy? Would nVidia be a better choice over AMD/ATI? And I know that Crossfire and SLI are useful for gaming purposes; however, I was under the impression that they offer little or no advantage with most other applications. Am I wrong?
 

Scout255

Member
Feb 12, 2013
58
0
6
My current Z77 motherboard has two PCIe3/16 slots, each configured as PCIe/8 when both are used. It also has a PCIe/4 slot. That means I'll need a new motherboard. And since the Z77 chipset is limited as to the number of its PCIe lanes, I guess that means I should opt for X79.

Asus offers two high-end choices with slots & PCIe lanes configured such that both video boards are in PCIe3/16 slots and there is an independent (non-switched) PCIe3/8 slot remaining accessible for the 10gig/E card: the P9X79 Pro and the P9X79 Deluxe. Neither the P9X79 Sabertooth board nor the P9X79 WS (workstation) board has more than a 4-lane PCIe slot available when 2 graphics boards are installed, thanks to the combination of switched lane connections and slot placement. Odd. Regardless, the only features offered by the "Deluxe" over the "Pro" appear to be a newer-but-still-obsolete version of Bluetooth, 2x2 Wireless N, an extra wired gig/E port and three extra USB headers. Sounds like my next workstation will be based on the Asus P9X79 Pro with an i7-3930K CPU and 8x8GB DDR3-2400 SDRAM. Add the two video boards, several SSDs, the monitors, ad nauseam.

Oh, lord ... 10gig/E PCIe/8 adapters are another $500 apiece. Whose do I choose? Intel? Something else?
Edited to add: Intel E10G41AT2 AT2 Server Adapter 10Gbps PCI Express 2.0 x8 1 x RJ45

Looks like I'm going to be building both a workstation AND a file server . . . . H-e-double-hockey-sticks. It's only money.
You may want to consider the Supermicro X9SRH-7TF motherboard if you are set on 10G networking.

It includes dual 10G controllers and an LSI RAID controller, and has x16, x8, and x4 slots in a single LGA2011 platform. This would save you big bucks over buying separate 10G cards (as the board is around $450-500ish). Supermicro | Products | Motherboards | Xeon® Boards | X9SRH-7TF is the link if you are interested. You would still need a 10G card on your file server, though...
 

SemiLiterate

New Member
Apr 23, 2013
15
0
0
Southeastern USA
I do not have/need multiple workstations; however, it looks like I'll be building a new one to replace/augment my current one. There are two other computers plus a laptop on my network that could benefit from a file server thrown into the mix -- for one thing, they just might get backed up regularly.
An external file server with 10GbE would work, of course. You could locate it somewhere where the noise isn't a problem, share it across multiple workstations, etc. Nice. The downside is that you add complexity, make it take longer to get spun up in the morning, and use more power.
Could I keep my current network topology -- everything wired through an 8-port gig/E switch, including both my workstation and the new file server -- and provide a "private" InfiniBand link between my workstation and the file server?
If you do go for an external file server, then think about InfiniBand versus 10GbE. IB is cheaper and faster, but far more difficult to integrate with your normal network traffic. A point-to-point IB network is shockingly cheap and scary fast.
 

Andreas

Member
Aug 21, 2012
127
1
18
Not sure why you need an x8 PCIe slot for a 10Gig/E card. Unless you need the last fraction of 10gig performance, an x1 PCIe 3.0 slot is sufficient. Put another way, an x4 PCIe 2.0 slot is OK as well.
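
The arithmetic behind that claim, in Python (encoding overhead included; protocol overhead ignored):

    # Usable slot bandwidth vs. 10GbE line rate, in MB/s.
    pcie3_x1 = 8e9 * (128 / 130) / 8 / 1e6   # ~985 MB/s (8 GT/s, 128b/130b)
    pcie2_x4 = 4 * 500                       # ~2000 MB/s
    tengbe = 10e9 / 8 / 1e6                  # 1250 MB/s line rate
    print(round(pcie3_x1 / tengbe, 2))       # 0.79 -- all but the last fraction
    print(round(pcie2_x4 / tengbe, 2))       # 1.6  -- full rate with headroom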

BTW, 10G/E for the D800 is kind of overkill. The RAW files are only 40-50 MB.

As said earlier, if you want to go the LGA2011 route, just do it. Will you experience a perf difference with your GPUs compared to an i7-3770K system? I don't know, but I couldn't detect one.

On calibration: which tool are you using, a colorimeter or a spectrophotometer? Win7 and Win8 can assign independent color profiles to each monitor, so I'm not sure why you need 2 graphics cards in your particular case. Any special dependency?

For photo editing, SLI and CF are not necessary, as you don't need AFR (alternate frame rendering).

On NVidia/ATI:
Each camp has their use cases where the products shine and usually a very vocal community of supporters and "non-supporters".

On the compute side, NVidia started early with CUDA and has the better OpenGL performance; ATI has the better OpenCL implementation (a very generic statement). I'd go for a midrange card from either manufacturer and wouldn't care too much about details and game performance. A single card is preferred unless there is a real need for a second (driver stability, heat in the case and energy consumption are some of the negatives).
 
Last edited:

Lost-Benji

Member
Jan 21, 2013
424
23
18
The arse end of the planet
My current Z77 motherboard has two PCIe3/16 slots, each configured as PCIe/8 when both are used. It also has a PCIe/4 slot. That means I'll need a new motherboard. And since the Z77 chipset is limited as to the number of its PCIe lanes, I guess that means I should opt for X79.
Now I am guessing that the bulk of my initial contribution was ignored. You, sir, have a mid-range system, and many others are also missing this. Now, I am not going to dispute what others find best for their use or what you are happy with; what I am going to point out is that you are still missing loads of performance, and some things said by others just don't add up.

Let's start with the Z77 chipset. I can't do anything with your chosen board, as I have not seen you actually put in the full specs of the system at hand. :confused:
z77: Intel® Z77 Express Chipset
x79: Intel® X79 Express Chipset
c606: Intel® C600 Series Chipset
....and some more reading: Intel Xeon chipsets - Wikipedia, the free encyclopedia

Take some time to look and understand where entry-level and mid-range stop and real hardware begins. A close eye on the DMI/QPI is also wise.
Also take note of why I recommended the Gigabyte board, GIGABYTE - Motherboard - Socket 2011 - GA-X79S-UP5-WIFI (rev. 1.0), and its options for pure expansion & I/O: dual NICs, 14 HDDs/SSDs onboard, ECC support, C606 chipset, s2011 and PCIe lanes to sink a battleship.

CPU: yours is limited. My opinion sounds blunt, but I am just stating it as I see it. S1155 is not designed for the wants/needs you started this thread with in the opening questions.
ARK | Intel® Core? i7-3770K Processor (8M Cache, up to 3.90 GHz)
As Patrick and I suggested, S2011 is the best way to go if you plan to follow the OP questions.

There were also some statements made by Andreas that didn't sit well and need some explaining as to how these opinions were formed.
The apps you are talking about are not NUMA-aware, so their performance on those systems might even be lower than on single-socket systems
This would also apply to single multi-core CPUs; these issues were practically eliminated with the advent of much faster, multiple QPI links.
The dual Xeon systems are throughput-optimized platforms; you need a user-interactive system -- the opposite of throughput
Place a Xeon and a consumer CPU in a common platform, match the clocks and cache levels, and now try to tell the difference. There is none, other than the hardware options the Xeon has: extra QPI/DMI, ECC support, and binning to guarantee stability and performance.
Where some get confused is that desktop OSes were never intended for DP or MP platforms and originally scaled poorly. Later OSes like Win7 and Win8 scale better, but Server/WS OSes still work best. The trick most miss when running these -- and, I suspect, what gives the impression Andreas is aiming at -- is that you need to go into the advanced system properties and tell the OS to prioritise foreground programs rather than the background services that server OSes favour by default.
I agree with what and where Andreas is coming from; I just get a feeling there are some blinkers on and simple things missed.

As I said, I am not having a shot at anyone; the OP wanted ideas on a higher-specced WS. I can't help it if they get wobbly in the knees when facts and figures come to light.

Oh, lord ... 10gig/E PCIe/8 adapters are another $500 apiece. Whose do I choose? Intel? Something else?
Edited to add: Intel E10G41AT2 AT2 Server Adapter 10Gbps PCI Express 2.0 x8 1 x RJ45

Looks like I'm going to be building both a workstation AND a file server . . . . H-e-double-hockey-sticks. It's only money.
You get what you pay for if you shop wisely. 10GbE can be had for half of what you think. If you want horsepower, you need more horses; the same horse pushed hard will stumble and fade quicker.

A hint for the future: google things first, then post thread questions. Also add ALL details of the systems/setups in question. My crystal ball is worn out, and without proper info, poor opinions and advice will follow.

E.g.:
  • Full system details/specs?
  • Full specs on each NAS you have?
  • What you really want?
 

Andreas

Member
Aug 21, 2012
127
1
18
Take some time to look and understand where entry-level and mid-range stop and real hardware begins. A close eye on the DMI/QPI is also wise.
Also take note of why I recommended the Gigabyte board, GIGABYTE - Motherboard - Socket 2011 - GA-X79S-UP5-WIFI (rev. 1.0), and its options for pure expansion & I/O: dual NICs, 14 HDDs/SSDs onboard, ECC support, C606 chipset, s2011 and PCIe lanes to sink a battleship.
Check the interconnect between the C606 and the CPU.
14 SSDs and 2 NICs on this motherboard would have 80+ Gbit/s of bandwidth, yet they are only connected via the DMI 2.0 interface (20 Gbit/s). Three fast SSDs saturate this interface (including overhead); 14 SSDs connected here are fine for casual (interactive) use, but not for real high & sustainable performance.
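
Spelled out in Python (round figures as above; the usable-DMI number is an approximation):

    # Downstream demand on the C606 vs. its DMI 2.0 uplink to the CPU.
    ports_gbit = 14 * 6                # 14 SATA/SAS ports at 6 Gbit/s = 84 Gbit/s
    dmi2_gbit = 20                     # DMI 2.0 uplink
    dmi2_usable_mb = 2000              # ~2 GB/s after encoding overhead
    print(ports_gbit / dmi2_gbit)      # >4x oversubscribed on paper
    print(3 * 550 / dmi2_usable_mb)    # 3 fast SSDs ~= 0.8 of the link, before overhead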

There were also some statements made by Andreas that didn't sit well and need some explaining as to how these opinions were formed.
This would also apply to single multi-core CPUs; these issues were practically eliminated with the advent of much faster, multiple QPI links.
It would be great if QPI had eliminated NUMA issues, but they are intertwined much deeper in the software stack.

Place a Xeon and a consumer CPU in a common platform, match the clocks and cache levels, and now try to tell the difference. There is none, other than the hardware options the Xeon has: extra QPI/DMI, ECC support, and binning to guarantee stability and performance.
Where some get confused is that desktop OSes were never intended for DP or MP platforms and originally scaled poorly. Later OSes like Win7 and Win8 scale better, but Server/WS OSes still work best. The trick most miss when running these -- and, I suspect, what gives the impression Andreas is aiming at -- is that you need to go into the advanced system properties and tell the OS to prioritise foreground programs rather than the background services that server OSes favour by default.
I agree with what and where Andreas is coming from, just get a feeling there are some blinkers on and simple things missed.
I am not talking about OS scaling; this is largely solved. I am referring to the still-weak scaling properties of interactive applications. They still rely on ILP and frequency to gain performance, less on the perfectly balanced resources that a throughput-optimized database server is able to extract from a DP system. There are lots of entertaining discussions going on on benchmark sites about perf comparisons between LGA1155 and LGA2011 systems. If raw HW capabilities were all that counted, 50% more cores and 100% higher memory bandwidth should translate into correspondingly better performance. Yet Photoshop is only 10% faster. If the OP wants to have an LGA2011 system, why not, he should buy it. I think our disagreement is about the utility of this investment.

What I shared was my experience with the use case of photo editing. I do a lot of this (over the years, I have taken more than 500,000 RAW files with Nikon cameras, the D800 and D800E included). From a photographer's perspective, the 36 MP sensor produces huge files (45 MB). From an IT perspective, they are tiny when the system is properly set up. The file size is a non-issue these days -- only non-IT people (mostly in photographic forums) enter into the self-repeating chorus of incredibly large, unmanageable D800 files.

I regularly run into the file limitation of Nikon Capture NX2, which can open 20 files concurrently. With D800 files the application needs approx 20 GB of RAM. If this is available, there is no slowdown whatsoever. If you have 16 GB and 32 CPUs, paging starts. So it's a configuration issue, which was the whole point some of us made.
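
For what it's worth, the rough math behind that 20 GB figure (Python; the pixel dimensions are the D800's, the buffers-per-file count is inferred from what I observe, not documented):

    # One uncompressed 16-bit RGB buffer for a D800 frame.
    px = 7360 * 4912
    buffer_mb = px * 3 * 2 / 2**20             # ~207 MB
    per_file_mb = 20 * 1024 / 20               # observed: ~1 GB per open file
    print(round(per_file_mb / buffer_mb, 1))   # ~5 buffers per file (copies, undo, caches)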

I don't know where the information is coming from that there need to be 2 graphics cards just to do color calibration for 2 monitors -- I don't know anybody who does that. I have 3 x 27" color-calibrated monitors on my system with 1 midrange GPU idling 99% of the time. Only very few functions and filters in PS are GPU-enabled.

If the OP wants to do some folding or other number-crunching exercises besides his photographic hobbies (or jobs), then the system recommendation would for sure look different.

If you want horsepower, you need more horses; the same horse pushed hard will stumble and fade quicker.
This is correct for throughput computing like we have in servers.
To stick with the metaphor: for legacy interactive apps, faster horses are often the better solution, not more of them.

I am not sure if this qualifies as "high performance", but when I joined STH last August, I shared the build notes of some systems I built and now use at home. Despite their availability, photo editing is still done on the "lousy" 3770K system. Why? Because the D800 files are not big enough to "justify" a system change :)

The build notes for two workstations and the homeserver: http://forums.servethehome.com/diy-storage-server-builds/799-intro-built-notes.html

rgds,
Andy