Intel 3420 based MBs and memory bandwidth


PigLover

Moderator
Jan 26, 2011
3,186
1,545
113
I'm doing some updates and going to be experimenting with a new NAS/SAN and iSCSI. As part of this I am building a new server based on a SuperMicro LGA-1156 motherboard. I'm using an X8SIA-F, but the question really applies to any Intel 3420-based motherboard (and, in larger memory configurations, to any LGA-1366 board based on the 5500/5520 chipset too).

With the 3420 chipset, the memory speed is forced to DDR3-800 if you exceed a total of 8 "ranks" of memory (more than 4 dual-rank RDIMMs or 2 quad-rank RDIMMs). This means that any valid configuration over 16GB is forced to DDR3-800: 24GB using 6x4GB dual-rank RDIMMs or 32GB using 4x8GB quad-rank RDIMMs both get this treatment.

Of course, with most existing memory, the CAS latency (in clock cycles) also goes down. Most compatible RDIMMs that you can actually buy at this size are rated DDR3-1333/CAS-9, but at DDR3-800 you get CAS-6 or even CAS-5.
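
To put rough numbers on the trade-off, here's a quick back-of-the-envelope script. Treat it as a sketch only: it assumes dual-channel operation with a 64-bit bus per channel and the CAS values quoted above.

Code:
# Rough comparison of the two configurations the 3420 forces you to choose between.
# Assumes dual-channel DDR3 with a 64-bit (8-byte) bus per channel.

def cas_ns(mt_s, cl):
    # DDR: the I/O clock is half the transfer rate; absolute CAS latency = CL cycles
    return cl / (mt_s / 2.0) * 1000.0

def peak_gb_s(mt_s, channels=2, bytes_per_transfer=8):
    # Theoretical peak bandwidth, ignoring all overhead
    return mt_s * 1e6 * bytes_per_transfer * channels / 1e9

for mt_s, cl in [(1333, 9), (800, 6)]:
    print("DDR3-%d CL%d: %.1f ns absolute CAS latency, %.1f GB/s peak (dual channel)"
          % (mt_s, cl, cas_ns(mt_s, cl), peak_gb_s(mt_s)))

If I've got the arithmetic right, the lower CL number roughly cancels against the slower clock (about 13.5 ns vs 15 ns of absolute CAS latency), while the theoretical peak bandwidth drops from roughly 21 GB/s to 13 GB/s.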

So here's the question for the people smarter than me (which is almost anyone reading this forum ;)):

For a typical NAS/SAN with either hardware RAID or an efficient software RAID like Linux MD or Solaris with ZFS: how much should I care? Is the difference in memory speed going to affect this type of application? Some people on other forums have suggested that the improved latency actually makes it BETTER to run at DDR3-800. Are they accurate, or is this FUD?

Memory is cheap right now (relatively) and I plan to build this as an ESXi platform to do other things. At least one of my options (ZFS) is RAM hungry, so more seems better. And I can afford it. But I want to understand this speed trade-off before I build.
 

john4200

New Member
Jan 1, 2011
152
0
0
For the eternal question of whether higher bandwidth or lower latency is better, I don't think that can be settled definitively. The best you could do is test two configurations with your application. But my guess would be that you will not notice a difference for typical NAS/SAN usage. RAM is an order of magnitude faster than even the fastest SSDs.
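
If you did want to put a rough number on it yourself, something as crude as the sketch below (just a big buffer copy in Python, nothing like a proper benchmark) run once at DDR3-1333 and once at DDR3-800 would at least show whether the difference is even measurable for bulk copies.

Code:
# Crude single-threaded RAM copy test: only meant to show whether the
# DDR3-1333 vs DDR3-800 difference is visible at all for bulk copies.
import time

SIZE = 512 * 1024 * 1024          # 512 MiB buffer
ITERATIONS = 8

src = bytearray(SIZE)             # zero-filled source buffer

start = time.time()
for _ in range(ITERATIONS):
    dst = bytes(src)              # full copy: reads SIZE bytes, writes SIZE bytes
elapsed = time.time() - start

moved_gb = 2.0 * SIZE * ITERATIONS / 1e9   # bytes read + bytes written, in GB
print("approx. %.1f GB/s" % (moved_gb / elapsed))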
 

StartledPancake

New Member
Jan 3, 2011
11
0
0
A guesstimate, but:

ESX - More, just more. Ballooning, despite sounding entertaining, actually causes a much greater performance degradation than going from DDR3-1333 to DDR3-800.

NAS - It probably doesn't matter unless you have a large number of simultaneous connections. I've no experience with ZFS, though, which seems to be a pretty specialist subject.

As an aside to what John says, the data has to get stuffed through your Ethernet connection at the end of the day, which will be the limiting factor even with link aggregation.
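
Rough numbers to back that up, assuming raw line rate with no protocol overhead, a 4-port aggregate as the example, and the worst-case dual-channel DDR3-800 figure:

Code:
# Wire speed vs. the slowest memory configuration, very roughly.
GBE_MB_S = 1000 / 8.0                 # 1 GbE raw line rate: ~125 MB/s
LAG_4X_MB_S = 4 * GBE_MB_S            # 4-port link aggregation: ~500 MB/s
DDR3_800_DUAL_MB_S = 800 * 8 * 2      # dual-channel DDR3-800 peak: 12800 MB/s

print("1 GbE:         %6.0f MB/s" % GBE_MB_S)
print("4x GbE LAG:    %6.0f MB/s" % LAG_4X_MB_S)
print("DDR3-800 peak: %6.0f MB/s" % DDR3_800_DUAL_MB_S)

Even at the forced DDR3-800 setting, the memory is more than an order of magnitude ahead of what the network can carry.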
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,516
5,811
113
As an aside to what John says, the data has to get stuffed through your Ethernet connection at the end of the day, which will be the limiting factor even with link aggregation.
Some of my new Ethernet connections are 10gbps though :)
 

john4200

New Member
Jan 1, 2011
152
0
0
Some of my new Ethernet connections are 10gbps though :)
On my Linux machine, I can get about 5 GBytes/s when copying from RAM with dd (and the RAM itself is even faster without the dd overhead), so saturating 10 Gbits/sec from RAM is not a problem. In the output below, the first run reads from the disk; the second identical run re-reads the same data from the page cache, i.e. straight from RAM:

Code:
# dd if=/dev/sda of=/dev/null bs=1M count=4000
4000+0 records in
4000+0 records out
4194304000 bytes (4.2 GB) copied, 12.3487 s, 340 MB/s

# dd if=/dev/sda of=/dev/null bs=1M count=4000
4000+0 records in
4000+0 records out
4194304000 bytes (4.2 GB) copied, 0.865723 s, 4.8 GB/s