The power supply for this beast is a 2 + 2 configuration. The first number in a "# + #" PSU setup is how many PSUs are required for the system to operate at full capacity; the second number is how many PSUs are redundant. IOW, the C3260 requires two PSUs to run at max capacity.
Therefore, this is a 2100 watt system (2x 1050W PSUs configured as 2 + 0). That exceeds a standard 120VAC 15A circuit. However, that's under max load. If you don't fill the system up, you're looking at far less power - probably around 700-1400 watts MAX under full load with 30 or so 7200rpm HDDs (best guess). An earlier user posted that the system idled around ~550W with two nodes, 16x DIMMs each, and about half the drive bays filled with 7200rpm drives. But that's idle: those 4x CPUs will ramp up another 600W+ when you load up the CPUs and try to write to all of the HDDs at the same time.
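If you want to sanity-check that estimate yourself, here's a quick back-of-the-envelope sketch using the figures above. The per-drive wattage is my own ballpark assumption for an active 7200rpm 3.5" drive, not a measured spec:

```python
# Rough full-load estimate from the numbers in this thread.
IDLE_BASE_W = 550      # reported idle: 2 nodes, 32x DIMMs, ~half the bays full
CPU_RAMP_W = 600       # extra draw when all 4 CPUs load up (figure from above)
HDD_ACTIVE_W = 9       # assumed per-drive draw for a busy 7200rpm LFF HDD
EXTRA_DRIVES = 30      # additional drives beyond the idle measurement

full_load = IDLE_BASE_W + CPU_RAMP_W + EXTRA_DRIVES * HDD_ACTIVE_W
print(f"Estimated full load: ~{full_load} W")  # ~1420 W, under the 2100 W rating
```

Which lands comfortably inside the 700-1400W guess above, and well under the chassis' 2100W rating.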
While operating in a fault mode with a single PSU, you could technically run on just one (as
@jtaj found out when he plugged in just one PSU unit). Most 2 + 0 configurations in servers allow the use of a single PSU, operating at half capacity in a limp mode: the system will only max out at 1050W, not its rated 2100 watts. If you exceed the limit, the PSU shuts off. And you are already at ~600W with just the system turned on - 2x nodes, 4x CPUs, 32x DIMMs, and no HDDs. The point here is: plan for the max load of your configuration, not idle wattage.
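The limp-mode math is worth spelling out: the single PSU hard-limits at its own 1050W rating, not the chassis' combined 2100W rating. A minimal sketch, using the ~600W base figure from above:

```python
# Single-PSU "limp mode" budget: the PSU caps at its own rating,
# not the chassis' combined rating.
PSU_RATING_W = 1050
BASE_DRAW_W = 600   # 2 nodes, 4 CPUs, 32 DIMMs, no HDDs (figure from above)

headroom = PSU_RATING_W - BASE_DRAW_W
print(f"Headroom left for HDDs on one PSU: {headroom} W")  # 450 W
# Push past the rating and the PSU shuts off - hence: plan for max load.
```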
As mentioned, the second 2 in 2 + 2 means it is redundant: you can lose two PSUs, and the system will still have enough power from the other two PSUs to continue operating at full capacity.
For completeness' sake: yes, most redundant two-PSU servers are classified as 1 + 1 (meaning you can lose a single PSU and the system can still operate at full capacity).
However, a lot of servers - especially GPU and "storage" systems with lots of 3.5" LFF HDD capacity like some Supermicro systems I have - are 2 + 0 systems. Meaning, it needs both PSUs at full capacity to supply enough power to the chassis.
This is all dependent on the server's PDU and how it handles 1+0, 1+1, 2+0, and 2+2 configurations and system power draws. What I have posted above is typical of Supermicro, Dell, and HP systems.
Now about 220 VAC... Since two PSUs under max load exceed a 15A 110VAC circuit, you need a larger one. 20A 110VAC is not common in data centers; however, 20A and 30A 220VAC is. Hence why I think they just say 208V - it defaults to an assumption of at least a 20A circuit, whereas when someone says 110VAC, it's largely assumed to mean a max of 15A.
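The circuit math is just volts times amps. Here's a sketch; the 80% derating is the usual North American continuous-load rule of thumb, which is my addition, not something from the spec sheet:

```python
# Usable continuous wattage of a branch circuit (volts * amps, derated
# to 80% per the usual continuous-load rule of thumb).
def circuit_watts(volts, amps, derate=0.8):
    return volts * amps * derate

print(circuit_watts(110, 15))   # 1320.0 W - can't cover 2x 1050 W PSUs
print(circuit_watts(208, 20))   # 3328.0 W - comfortably covers 2100 W
```

Which shows why 208V 20A is the lazy-but-safe default answer for this chassis.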
Oh, and you need 2x 20A circuits, to cover all 4 PSUs at max load - if you ever had the need for a 2000W HDD storage server. Hehe.
---
Do you need all of this? Hell no!
Go ahead and plug two PSUs into a single 15A 110VAC outlet - there's no issue for most of us home labbers here. What this means is that the system will pull power from both PSUs until it trips the circuit breaker, at ~1680 watts (which is the typical magic number for GFCI breakers, IME). That's way more than enough to plop in 30x or 50x or so 7200rpm drives. So yeah, feel free to plug two of those PSU suckers into a single 110VAC 15A outlet (with a really good sine-wave UPS, though!) and not worry about the 208V nor 4x PSU demands of this server.
You could do this with a single PSU as well. However, remember that you are tippy-toeing around a single 1050W PSU. IOW, if you load up a chassis with 56x LFF drives and 4x 145W CPUs, you may experience a number of immediate power-offs of the server under heavy load testing. Operating two nodes with 4x 145W CPUs under max load is going to eat a lot of your 1050W budget (~600 to 700W for just the two nodes!).
IME, a 2x E5 V3 system with 8x normal non-LRDIMM DIMMs pulls about 85W idle with no other components, no BMC/IPMI, and no power-hungry onboard NICs. So, add in the RAID controller, backplane, PDU, SAS expander chips (this system has two!!), SIOC (network card(s), BMC, etc.), and you're looking at a bare-system idle of around 110W to 140W, is my best guess - multiplied by 2x because you have two nodes! That's idle. At full 100% CPU load, you'll be looking at around 300W-350W - per node. So that's about 600W to 700W you want to dedicate to the nodes when calculating HDD wattage usage under a single 1050W PSU.
IOW, you have a capacity of around 300 to 400W for HDDs under a single PSU. Or, remove a node and its SIOC and gain another ~300W for more HDDs - off of a single 110VAC connection.
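To turn that budget into a drive count, here's a rough sketch using the per-node figures above. The per-drive wattage is my own assumption for an active 7200rpm LFF drive:

```python
# Single-PSU drive budget from the per-node estimates above (all rough).
PSU_W = 1050
NODE_FULL_LOAD_W = 350   # high end of the 300-350 W per-node estimate
HDD_W = 9                # assumed draw for a busy 7200rpm LFF drive

two_node_headroom = PSU_W - 2 * NODE_FULL_LOAD_W   # 350 W left for drives
one_node_headroom = PSU_W - 1 * NODE_FULL_LOAD_W   # pull a node, gain ~350 W

print(two_node_headroom // HDD_W, "drives with both nodes")  # 38 drives
print(one_node_headroom // HDD_W, "drives with one node")    # 77 drives
```

So even the pessimistic two-node case leaves room for a few dozen spinners on one PSU.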
Best to use a normal single 15A 110V wall outlet and connect two of the PSUs for a power budget of 1680W to play with (with nothing else on that circuit, that is). Or, run a 220VAC 30A circuit connected to a nice big fat UPS. This is what I've done for my "server closet."
Remember, PSUs are a lot more efficient running at 220V than 110V (and produce less heat!).