This is my basement render farm.
I run Thinkbox Deadline as my render manager and I primarily render with Vray for Maya Standalone.
It has one file server/render manager/license server that also doubles as a render node (not ideal) and 5 render nodes below it, connected via 10Gbps InfiniBand. There is a 6th chassis in the rack but it's empty (a leftover from my previous file server build).
The UPS is one of CyberPower's beefiest 2U units (3000VA / 2400W, pure sine wave). Only the switch and file server are connected to the battery. If the power goes out the render nodes are inconsequential, so I prefer reserving all available battery capacity for what matters.
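For a rough sense of why only the switch and file server go on battery, here's a back-of-the-envelope power budget. This is a sketch with assumed figures: the 115W TDP per CPU comes from the build notes below, but the ~50W of per-node overhead (board, RAM, SSD, fans) is my own guess.

```python
# Back-of-the-envelope power budget for the farm (assumed figures).
CPU_TDP_W = 115        # E5-2670 TDP, per the build notes
CPUS_PER_NODE = 2
NODES = 6              # file server + 5 render nodes
NODE_OVERHEAD_W = 50   # assumption: board, RAM, SSD, fans per node

node_peak_w = CPUS_PER_NODE * CPU_TDP_W + NODE_OVERHEAD_W
farm_peak_w = NODES * node_peak_w

UPS_CAPACITY_W = 2400  # PR3000LCDRTXL2U: 3000VA / 2400W

print(f"Per node: ~{node_peak_w}W, farm: ~{farm_peak_w}W "
      f"({farm_peak_w / UPS_CAPACITY_W:.0%} of UPS rating)")
```

Running the whole rack from the battery would sit around 70% of the UPS's rated wattage under full load, which is why keeping the render nodes off the battery buys so much more runtime for the file server and switch.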
All nodes have dual E5-2670 ES CPUs (8 cores, 2.6GHz base / 3GHz all-core turbo). These are very fast but aren't too power hungry (115W TDP). Each node has 32GB of RAM (4x8GB). That configuration only runs dual channel, but memory bandwidth has little to no impact on raytracing performance, so I prefer leaving room to expand to 64GB for jobs that need that kind of capacity (it's also a cost and power savings measure).
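The capacity-over-channels tradeoff works out like this (a quick sketch; DIMM count and size from the build notes, and four memory channels per E5-2670 socket per Intel's spec):

```python
# Memory configuration arithmetic for one render node (figures from the build).
CHANNELS_PER_CPU = 4   # E5-2670 has a quad-channel memory controller
CPUS = 2
DIMM_GB = 8

# Current: 4x8GB = one DIMM on 2 of the 4 channels per CPU (dual channel).
dimms_now = 4
capacity_now_gb = dimms_now * DIMM_GB
channels_used_per_cpu = dimms_now // CPUS

# Expansion headroom: one DIMM per channel fills out quad channel at 64GB.
dimms_max = CHANNELS_PER_CPU * CPUS
capacity_max_gb = dimms_max * DIMM_GB

print(f"now: {capacity_now_gb}GB on {channels_used_per_cpu} channels/CPU, "
      f"max with 8GB DIMMs: {capacity_max_gb}GB")
```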
File Server/Render Manager/Render Node
OS Windows 2008 R2
Server Supermicro SYS-2027R-WRF
CPU 2x Xeon E5-2670 ES
RAM 8x 4GB Hynix 1333 1.35V ECC RDIMM
NIC Mellanox Infinihost III Ex MHEA28-XTC
BOOT 2x 60GB OWC Mercury Pro 6G (RAID-0)
RAID LSI Megaraid 9271-8icc
SSD 4x 256GB Samsung 840 Pro (RAID-0)
HDD 4x 1TB WD Velociraptor (RAID-5)
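The usable capacity of those two arrays works out as follows (a quick sketch of standard RAID arithmetic, using the drive counts and sizes listed above; filesystem overhead not counted):

```python
def raid_usable_gb(level: int, drives: int, drive_gb: float) -> float:
    """Usable capacity for the RAID levels used in this build."""
    if level == 0:
        return drives * drive_gb         # striping: all capacity usable
    if level == 5:
        return (drives - 1) * drive_gb   # one drive's worth lost to parity
    raise ValueError(f"unsupported RAID level: {level}")

print(raid_usable_gb(0, 4, 256))    # SSD array: 4x 256GB RAID-0
print(raid_usable_gb(5, 4, 1000))   # HDD array: 4x 1TB RAID-5
```

The RAID-0 SSD array gives the full 1TB of fast scratch space (at the cost of zero redundancy), while the RAID-5 Velociraptor array trades one drive's capacity for the ability to survive a single disk failure.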
Render Nodes
OS Windows 2008 R2
Chassis Supermicro CSE-111TQ-563CB
Mobo Supermicro X9DRL-iF-O
CPU 2x Xeon E5-2670 ES
SSD Intel 330
PCI RISER Startech PEX8RISER
RAM 4 x 8GB Crucial 1600 UDIMM BLS2K8G3D1609ES2LX0
HSF 2x Dynatron R15
NIC Mellanox Infinihost III Ex MHEA28-XTC
Other bits
Rack Supermicro CSE-RACK14U
Switch Cisco Topspin90 SFS 3001 + 1GbE Module
UPS CyberPower PR3000LCDRTXL2U
Future plans:
- Replace aging 10Gbps IB switch & NICs with 40/10GbE
- Set up diskless boot for all render nodes (the current InfiniHost NICs don't support FlexBoot).
- Fill up remaining 4U slots with render nodes.
- Replace E5-2670's in file server with E5-2630L's
- Switch everything to CentOS