Renderfarmer's DIY Render Farm

renderfarmer

Member
Feb 22, 2013
249
1
18
New Jersey
This is my basement render farm.

I run Thinkbox Deadline as my render manager and I primarily render with Vray for Maya Standalone.

It has one file server/render manager/license server that also doubles as a render node (not ideal) and 5 render nodes below it, connected via 10Gbps InfiniBand. There is a 6th chassis in the rack but it's empty (leftover from my previous file server build).

The UPS is one of CyberPower's beefiest 2U units (3000VA / 2400W pure sine wave). Only the switch and file server are connected to the battery. If the power goes out the render nodes are inconsequential, so I prefer having all available runtime go to what matters.

All nodes have dual Xeon E5-2670 ES CPUs (8 cores, 2.6GHz base / 3GHz all-core turbo). These are very fast but aren't too power hungry (115W). Each node has 32GB of RAM (4x8GB). It's only dual channel in that configuration, but memory bandwidth has essentially no impact on raytracing performance, so I prefer leaving room to expand to 64GB for jobs that need that kind of capacity. (This is also a cost and power savings measure.)

Future plans:
  1. Replace aging 10Gbps IB switch & NICs with 40/10GbE
  2. Set up diskless boot for all render nodes (current InfiniHost NICs don't support FlexBoot).
  3. Fill up remaining 4U slots with render nodes.
  4. Replace the E5-2670s in the file server with E5-2630Ls
  5. Switch everything to CentOS
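
For what it's worth, diskless boot on the CentOS side usually means PXE plus an NFS root. A sketch of what a `pxelinux.cfg/default` entry could look like (the server IP and paths here are made up for illustration):

```
DEFAULT rendernode
LABEL rendernode
  KERNEL vmlinuz
  APPEND initrd=initrd.img root=/dev/nfs nfsroot=192.168.1.10:/srv/nfsroot ip=dhcp ro
```

Each node would then DHCP/PXE off the file server and mount its root filesystem over the network, so the local SSDs become optional.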

File Server/Render Manager/Render Node
OS Windows 2008 R2
Server Supermicro SYS-2027R-WRF
CPU 2x Xeon E5-2670 ES
RAM 8x 4GB Hynix 1333 1.35V ECC RDIMM
NIC Mellanox InfiniHost III Ex MHEA28-XTC
BOOT 2x 60GB OWC Mercury Pro 6G (RAID-0)
RAID LSI MegaRAID 9271-8iCC
SSD 4x 256GB Samsung 840 Pro (RAID-0)
HDD 4x 1TB WD Velociraptor (RAID-5)



Render Nodes
OS Windows 2008 R2
Chassis Supermicro CSE-111TQ-563CB
Mobo Supermicro X9DRL-iF-O
CPU 2x Xeon E5-2670 ES
SSD Intel 330
PCI RISER Startech PEX8RISER
RAM 4 x 8GB Crucial 1600 UDIMM BLS2K8G3D1609ES2LX0
HSF 2x Dynatron R15
NIC Mellanox InfiniHost III Ex MHEA28-XTC



Other bits
Rack Supermicro CSE-RACK14U
Switch Cisco Topspin90 SFS 3001 + 1GbE Module
UPS CyberPower PR3000LCDRTXL2U



 

gigatexal

I'm here to learn
Nov 25, 2012
2,772
531
113
Portland, Oregon
alexandarnarayan.com
Wow, simply wow.

On another note, I like your wooden work environment and the unfinished rustic feel. And the hardware is awesome.

What's the reason for the push to Linux? Just curious; I don't have a reason why not, just curious as to why.
 

renderfarmer

On another note, I like your wooden work environment and the unfinished rustic feel. And the hardware is awesome.
Thanks. It's really not my style. We bought the house 2 years ago and it was either remodeling that part of the house or building the render farm ;-)

What's the reason for the push to Linux? Just curious; I don't have a reason why not, just curious as to why.
Mostly because it's free and a little because it seems more flexible. That and all of the cool kids use it.
 

gigatexal

Haha, the unfinished wooden work area reminds me, fondly, of my dad's carpentry projects in the garage.

Linux is neat. And there are always improvements being made to the kernel. I imagine it would also be easy to create a sort of Beowulf cluster.
 

PigLover

Moderator
Jan 26, 2011
2,976
1,283
113
Bah...if you can afford 12x E5-2670s and the infrastructure to make them useful then surely you can afford the electrons to keep them lit up! ;)
 

renderfarmer

No issues yet. I only got them all a few days ago. C0 stepping.

For the past year I've been running retail E5-2620s in all of the machines.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,010
4,994
113
does STH have a F@H number? I'll fold for us
Heh, no. Several of us did fold for #33. As of now we're still #390 on the top contributor list for all of F@H. We used to be top 20 PPD overall :-/
 

RimBlock

Member
Sep 18, 2011
788
8
18
Singapore
Always interesting to see people's setups, from the start through to full-on implementations like this.

Nice website you have as well. It would be interesting to know how long the renders on the site took using the render farm.

RB
 

gigatexal

Unrelated, but Patrick, get us official for F@H and I'll set my C6100 and my 3930K setup folding for us.

Also, say you have a power outage: is there some software the computer can use to know that it is now running on UPS power, go through the process of saving or flushing the writes it had in the pipe, and then automatically shut down?
 

renderfarmer

Nice website you have as well. Would be interesting to know how long the renders on the site took using the render farm.
Thanks! Times vary wildly between scenes and rendering parameters.

A very cool thing about V-Ray is that it lets me do both network rendering and distributed rendering, so I can either split a single frame amongst all of my machines or give each machine a single frame of an animation to work on.
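
To illustrate the difference between the two modes, here's a toy sketch of the scheduling idea (this is just an illustration of the concept, not the actual V-Ray or Deadline API; the node names are made up):

```python
def assign_frames(frames, nodes):
    """Network rendering: deal whole animation frames out to
    render nodes round-robin, one frame per node per pass."""
    plan = {node: [] for node in nodes}
    for i, frame in enumerate(frames):
        plan[nodes[i % len(nodes)]].append(frame)
    return plan


def split_frame(width, height, nodes, bucket=64):
    """Distributed rendering: carve a single frame into
    bucket-sized tiles and share them among the nodes."""
    tiles = [(x, y) for y in range(0, height, bucket)
                    for x in range(0, width, bucket)]
    return {node: tiles[i::len(nodes)] for i, node in enumerate(nodes)}


nodes = ["node1", "node2", "node3"]
# Six animation frames: each node ends up with two whole frames.
frame_plan = assign_frames(list(range(1, 7)), nodes)
# One 1920x1080 frame: every node gets a share of the 64px buckets.
tile_plan = split_frame(1920, 1080, nodes)
```

Deadline handles the first mode by handing out frame tasks per job; distributed rendering is V-Ray spraying buckets of one frame at its render servers.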
 

renderfarmer

Also say you have a power outage, is there some software that the computer can use to know that it is now running on UPS power and go through the process of saving, or doing the writes it had in the pipe, and then automatically shutting down?
Yes and no. The UPS comes with bundled software for auto-shutdown, but it won't save anything that was in the process of being rendered. Realistically there isn't much use in saving a partially rendered frame anyway, which is why I only have my file server and switch connected to the UPS.
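
For what it's worth, the logic that kind of bundled software runs is roughly "shut down once the UPS has reported on-battery for a sustained stretch." A minimal sketch of that debounce logic (the readings source is a stand-in; a real setup would poll the UPS through the vendor's software, USB, or SNMP):

```python
def should_shut_down(readings, threshold=3):
    """Given a sequence of on-battery readings (True = running
    on battery), return True once the UPS has been on battery
    for `threshold` consecutive polls. The streak resets when
    mains power returns, so brief flickers don't trigger it."""
    streak = 0
    for on_battery in readings:
        streak = streak + 1 if on_battery else 0
        if streak >= threshold:
            return True
    return False
```

Once that returns True, the software flushes writes and calls a clean OS shutdown; it can't do anything about a frame that's mid-render.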