I don't have experience with C6100 server power supplies, but I would think the wattage needed depends on the CPUs you are using in each node. If you use the 1100 W units, populate every drive tray and every memory bank in each node, and run something like the 90 W CPUs (X-series), you might have to upgrade; otherwise you could run into a "system performance has been reduced" prompt during POST because the Dell BIOS/firmware "requires" a PSU upgrade.
I say this because my R610 servers will absolutely not accept X5650 CPUs with the base 502 W PSU, even with no RAM or HDDs installed.
If the C6100 has no such firmware limitation, I would check whether the baseboard management controller on those systems exposes power consumption metrics, so you can see how much wattage each node is actually drawing and go from there. iDRAC is great for this but isn't available on the C6100 systems.
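If the C6100's BMC speaks standard IPMI, something like the sketch below might work for pulling a per-node reading with ipmitool over the network. The here-doc is sample `ipmitool dcmi power reading` style output standing in for a live query (the BMC address, credentials, and whether the C6100 BMC actually supports the DCMI power extension are all assumptions on my part):

```shell
#!/bin/sh
# Sketch: extract the instantaneous wattage from `ipmitool dcmi power reading`
# output. A live query would look something like (hypothetical host/creds):
#   ipmitool -I lanplus -H <bmc-ip> -U root -P <password> dcmi power reading
# Here we parse canned sample output instead, since the format is the point.

sample_output='    Instantaneous power reading:                   187 Watts
    Minimum during sampling period:                 142 Watts
    Maximum during sampling period:                 231 Watts
    Average power reading over sample period:       178 Watts'

# Split on the colon, then strip everything but digits from the value field.
watts=$(printf '%s\n' "$sample_output" \
        | awk -F: '/Instantaneous power reading/ {gsub(/[^0-9]/, "", $2); print $2}')
echo "Node draw: ${watts} W"
```

Run that against each node's BMC under load and you'd have real numbers to size the PSUs against, rather than the manual's worst-case tables.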
Thanks Dajinn for your valuable and insightful advice. Eight CPUs will make a difference to power usage, particularly the 90 watt models. I'll be running the 60 watt L5639s or L5640s.
Nevertheless, according to the Dell owner's manual (page 59), the 1100 watt PSU is only recommended for running 3 or 4 nodes, each with 2 CPUs (model not specified), 9 hard drives and nine memory modules. The 1400 watt PSU is good for a 'full configuration' with 3 nodes (no mention of expansion cards), or with 4 nodes at 2 CPUs (not specified), 9 hard drives and nine memory modules.
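For a sanity check against the manual's tables, a back-of-envelope budget is easy enough. All the per-component figures below except the L5640's 60 W TDP are my assumptions, not measured values, so treat the totals as a rough illustration only:

```shell
#!/bin/sh
# Rough per-node and per-chassis power budget for a 4-node C6100.
cpu_w=60        # L5640 TDP per CPU
dimm_w=5        # assumed draw per DDR3 RDIMM
hdd_w=8         # assumed draw per spinning drive
board_w=40      # assumed share for board, NICs and fans per node

# Assumed build: 2 CPUs, 12 DIMMs, 3 drives per node.
node_w=$(( 2 * cpu_w + 12 * dimm_w + 3 * hdd_w + board_w ))
chassis_w=$(( 4 * node_w ))

echo "Per node:    ${node_w} W"
echo "Per chassis: ${chassis_w} W"
```

On those assumptions a fully loaded 4-node chassis lands well under 1100 W, which is why the manual's recommendations look conservative to me.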
As manufacturers tend to be generous with PSU requirements, I wonder what the real-world experience of users is, especially with fully utilised systems with expansion cards?