I'm doing some updates and going to be experimenting with a new NAS/SAN and iSCSI. As part of this I am building a new server based on a SuperMicro LGA-1156 motherboard. I'm using an X8SIA-F, but the question really applies to any Intel 3420 based motherboard (and - in larger memory configuration - to any LGA-1366 MB based on the 5500/5520 chipset too).
With the 3420 chipset, the memory speed is forced down to DDR3-800 if you exceed a total of 8 "ranks" of memory (more than 4 dual-rank RDIMMs or 2 quad-rank RDIMMs). This means that any valid configuration over 16GB is forced to DDR3-800: 24GB using 6x4GB dual-rank RDIMMs and 32GB using 4x8GB quad-rank RDIMMs both get this treatment.
Of course, with most existing memory, your CAS latency (in clock cycles) also goes down. Most compatible RDIMMs that you can actually buy at this size run at DDR3-1333/CAS-9, but at DDR3-800 you get CAS-6 or even CAS-5.
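To put those numbers side by side, here's a quick back-of-envelope calc (my own, not from any spec sheet): absolute CAS latency in nanoseconds is the CL divided by the I/O clock, and for DDR3 the I/O clock is half the data rate.

```python
def cas_ns(data_rate_mt_s, cl):
    """Absolute CAS latency in ns: CL cycles / I/O clock (MHz) * 1000."""
    io_clock_mhz = data_rate_mt_s / 2  # DDR: two transfers per clock
    return cl / io_clock_mhz * 1000

print(f"DDR3-1333 CL9: {cas_ns(1333, 9):.1f} ns")  # ~13.5 ns
print(f"DDR3-800  CL6: {cas_ns(800, 6):.1f} ns")   # 15.0 ns
print(f"DDR3-800  CL5: {cas_ns(800, 5):.1f} ns")   # 12.5 ns
```

So in wall-clock terms the first-word latency is roughly a wash: DDR3-800/CL6 is actually slightly *slower* than DDR3-1333/CL9, and only CL5 comes out ahead.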
So here's the question for the people smarter than me (which is almost anyone reading this forum ):
For a typical NAS/SAN with either hardware RAID or an efficient software RAID like Linux MD or Solaris with ZFS: how much should I care? Is the difference in memory speed going to affect this type of application? Some on other boards have suggested that the improved latency actually makes it BETTER to run at DDR3-800. Are they accurate, or is this FUD?
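For scale, here's the raw bandwidth side of the trade-off (my own arithmetic, assuming the 3420's dual-channel, 64-bit-per-channel bus): peak theoretical bandwidth is transfers/s times 8 bytes per transfer times the channel count.

```python
def peak_bw_gb_s(data_rate_mt_s, channels=2, bus_bytes=8):
    """Peak theoretical bandwidth in GB/s: MT/s * bytes/transfer * channels."""
    return data_rate_mt_s * bus_bytes * channels / 1000

print(f"DDR3-800:  {peak_bw_gb_s(800):.1f} GB/s")   # 12.8 GB/s
print(f"DDR3-1333: {peak_bw_gb_s(1333):.1f} GB/s")  # ~21.3 GB/s
```

Even the slower figure is two orders of magnitude above what a gigabit NIC can move (~0.125 GB/s), which is part of why I suspect the answer depends more on latency-sensitive workloads (VMs, ZFS checksumming) than on raw streaming throughput.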
Memory is cheap right now (relatively), and I plan to build this as an ESXi platform to do other things. At least one of my options (ZFS) is RAM-hungry, so more seems better. And I can afford it. But I want to understand this speed trade-off before I build.