Others have shared photos of their servers and server racks, so I'll do the same.
I am creating an identity management software package based on data warehouse (DW), data mining, and business intelligence (BI) technologies. A partial definition of a data warehouse is this: A really big, really expensive database with really good disk IO. Companies often spend hundreds of thousands of dollars on their DW infrastructure, with big database servers and hundreds or thousands of overpriced SAN disk spindles.
I need a data warehouse for my project, but I certainly can't afford to spend hundreds of thousands of dollars. As a solution, I have created what I call the "Dirt Cheap Data Warehouse (tm)" - aka the DCDW.
The secret to the Dirt Cheap Data Warehouse is this: Used generation-old server technology from eBay, plus a large number of consumer-grade SSD drives, carefully selected, configured, and deployed to avoid resource-wasting bottlenecks and to maximize query throughput and query throughput per dollar. Frankly, it's the details of the software and configuration that give the DCDW most of its speed, but the hardware portion is still interesting by itself. This post will of course focus on the hardware.
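To make the "throughput per dollar" idea concrete, here's a minimal back-of-envelope sketch. All of the prices and per-device throughput figures below are illustrative placeholders I made up for the example, not measurements from the DCDW or quotes for the actual hardware:

```python
# Back-of-envelope comparison: aggregate read throughput per dollar.
# NOTE: all numbers below are hypothetical placeholders for illustration.

def throughput_per_dollar(mb_s_per_device, devices, cost_per_device, base_cost):
    """Aggregate sequential read throughput (MB/s) divided by total cost ($)."""
    total_throughput = mb_s_per_device * devices
    total_cost = base_cost + cost_per_device * devices
    return total_throughput / total_cost

# Hypothetical enterprise SAN: 15k spindles at ~150 MB/s each,
# $500 per spindle, $100k for the array, controllers, and fabric.
san = throughput_per_dollar(150, 200, 500, 100_000)

# Hypothetical DCDW: consumer SSDs at ~400 MB/s each,
# $150 per drive, $3k for used servers, JBODs, and HBAs.
dcdw = throughput_per_dollar(400, 56, 150, 3_000)

print(f"SAN:  {san:.2f} MB/s per dollar")
print(f"DCDW: {dcdw:.2f} MB/s per dollar")
```

Even with generous numbers for the SAN, the consumer-SSD approach comes out far ahead on this metric, which is the whole point of the exercise.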
Let the cable-pocalypse begin! Here are images of the current version of the DCDW plus some in-progress experiments to define the next generation of the architecture. I'll add a few other posts to describe what you are seeing. My "rack" is actually two 12U racks bolted together, one on top of the other, which is why I talk about the "top half" and "bottom half" of the rack. Originally I was so sure that 12U would be enough...
Image 1 - the bottom half of the rack, front view. Shows the HP DL585 G7 DB server plus two JBOD chassis with 28 Samsung SSD drives each. At the bottom is the DLI web-enabled PDU.
Image 2 - the top half of the rack, front view. Shows (from top to bottom) c6100 "Corporation in a Box", two c6100 storage nodes, two c6145 db cluster nodes, HP MSA2000G2 SAN.
Image 3 - the bottom half of the rack, rear view. Too many cables! Mellanox QDR Infiniband switch and Dell Gigabit switch at the bottom.
Image 4 - the top half of the rack, rear view.
See posts below for details if you want to know more: