Hi STH,
I'll call this my first post, although technically it's my second. I don't recall how I stumbled in here, but I've been a fairly dedicated lurker. You all rock! You're doing really interesting things here on what are, in reality, shoestring budgets, which sounds eerily like my day job. Since some of your home labs resemble my work setup, I figured I'd ask for a recommendation, if that's okay?
I currently run a SQL Server 2005 EE instance on a quad-socket, dual-core Opteron box (875s, yes, that old), connected via multiple 1GbE links to an iSCSI SAN, with failover clustering to another DL585. I'm using MPIO rather than LAG groups, but so many operations are single-threaded that I rarely see any benefit from the second NIC. I've got 32GB of RAM in the current box, and I'm occasionally RAM bound, never CPU bound, and frequently disk bound. I've pulled some performance numbers; they're on SkyDrive and Google Docs, same data in either one.
Now, I went and replicated our SQL data onto my gaming rig, an Intel 2500K with a single OCZ Vertex 3. A query of unordered audit log data that I run in production takes about 15 minutes to complete (peaking at ~2,200 IOPS). The same query takes about 30 seconds on the Vertex SSD and hits 33,000 read IOPS. Thirty seconds versus fifteen minutes! Holy cow!
In light of that, I've been given a budget of around $50K to swap out the slow-as-molasses boxes. How would you spend it?
I'm thinking about using SQL Server AlwaysOn and DAS, because when I price SAN solutions the effective performance is usually abysmal until you start spending big money. This needs to be part of an HA cluster, so roughly $25K per box. I need about 1.5TB (actually only about 1TB, plus 30% free space) for my five-year estimated growth plan for the high-IOPS databases (Northwind and NorthwindAudit in the spreadsheet on SkyDrive), and the rest can sit on the existing iSCSI SAN or slow DAS.
So, for instance, let's say I use something like a Dell R720xd or an HP DL380 25-SFF chassis; then I wouldn't need an external SAS JBOD. If I went with a DL360 or R410, the JBOD I'm looking at is probably the DataOn DNS-1640 single-controller unit at $3,895. But I'm open to suggestions.
Let's say I went with Intel's 800GB S3700 MLC drives at $2,100 each; I'd need 4 in a RAID 10 (~1.5TB usable) to get the space I want, so that's $8,400. Or (and believe me, I'll be ordering some spares) I could buy 16 of the 240GB Intel 520 MLC drives in a RAID 60 to get similar capacity (1.7TB + 2 hot spares). The Intel 520 goes for $249 each, so $3,984, but I'd need that JBOD anyway since I still need some cheap, slow local storage and this eats up all the chassis bays, so that's another $3,895, for $7,879 total. Now, when we run the numbers through the IOPS calculator, things get a little different.
S3700 800GB RAID 10, 64KB stripe, 75,000 read / 36,000 write IOPS per drive, 60/40 read/write mix = 66,000 random IOPS
Intel 520 240GB RAID 60, 64KB stripe, 75,000 read / 36,000 write IOPS per drive, 60/40 read/write mix = 107,000 random IOPS
So I save about $500 and come out roughly 40% faster using consumer drives. (This assumes a 64KB average IO size, a 64KB stripe, per-drive write IOPS of 36,000, and read IOPS of 75,000.)
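If anyone wants to poke at the math, here's a rough Python sketch of how I'm sanity-checking the cost and the relative IOPS of the two layouts. It uses the textbook RAID write penalties (2 for RAID 10, 6 for RAID 6/60) rather than whatever model the online IOPS calculator uses, so the absolute numbers come out different from the figures above, but the "consumer drives are cheaper and roughly 40% faster" conclusion holds either way:

```python
# Rough sanity check of the two SSD layouts above: cost and a simple
# mixed-workload IOPS estimate. The per-drive figures (75K read / 36K write)
# and the RAID write penalties (2 backend writes per logical write on RAID 10,
# 6 backend ops on RAID 6/60) are the usual rules of thumb, not the model the
# online calculator uses, so treat the output as relative, not absolute.

def mixed_iops(active_drives, read_iops, write_iops, write_penalty, read_frac=0.60):
    """Largest 60/40 read/write workload the backend drives can absorb."""
    read_capacity = active_drives * read_iops                    # backend reads/sec
    write_capacity = active_drives * write_iops / write_penalty  # front-end writes/sec
    write_frac = 1.0 - read_frac
    # Saturation point: read demand plus write demand exactly fills the drives.
    return 1.0 / (read_frac / read_capacity + write_frac / write_capacity)

# Option A: 4x Intel S3700 800GB in RAID 10, no JBOD needed.
# Option B: 16x Intel 520 240GB (2 hot spares) in RAID 60, plus the DataOn JBOD.
options = [
    # (name, total cost, active data drives, RAID write penalty)
    ("S3700 800GB RAID 10",     4 * 2100,         4,  2),
    ("Intel 520 240GB RAID 60", 16 * 249 + 3895,  14, 6),
]

for name, cost, active, penalty in options:
    iops = mixed_iops(active, 75_000, 36_000, penalty)
    print(f"{name}: ${cost:,}  ~{iops:,.0f} mixed IOPS")
```

In practice a single RAID controller and its SAS links will probably cap out well before either array saturates, so I only treat these as relative numbers.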
Let's say I end up using an LSI 9286 PCIe card to connect to the JBOD: $800 without CacheCade, or $1,250 with it. Ah, CacheCade. Here's a good question: would CacheCade 2.0 even be worthwhile for piles of random IO, or is it more like that Seagate hybrid drive, neat for the first five seconds, but only cool if you've never used a real SSD? I can't see spending 15K on a Fusion-io or WarpDrive card, but I don't know, one of those could be useful too.
Anyhow, math exercises aside, how would you do it?