What servers to buy?


macrules34

Active Member
Mar 18, 2016
I am looking to get a quote on a large quantity (50) of servers to meet the following needs:
- Connectivity to storage arrays (FC & iSCSI)
- Two PCIe slots
- I would be discovering LUNs and running I/O generators like IOmeter against these LUNs (rough workflow sketched below)
- I would be installing SUSE Linux on these systems
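Roughly, each box would be doing something along these lines (the portal IP, target IQN, and device names are placeholders; fio shown as a Linux stand-in for IOmeter):

Code:
# iSCSI: discover and log in to a test array
iscsiadm -m discovery -t sendtargets -p 192.168.10.50
iscsiadm -m node -T iqn.2016-03.com.example:array1 -p 192.168.10.50 --login

# FC: rescan the HBAs so newly mapped LUNs show up, check multipath
rescan-scsi-bus.sh
multipath -ll

# drive I/O against a discovered LUN
fio --name=lun-test --filename=/dev/mapper/mpatha --rw=randrw --bs=8k \
    --iodepth=32 --ioengine=libaio --direct=1 --runtime=300 --time_based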

These systems must be rack mounted; they can range from a 1U server to a 2U four-node system. There are so many options out there that I'm not sure which route to go. The cheapest I found was a Dell PowerEdge R240 at $539 each; then there is the option of the Supermicro "BigTwin" four-node server, which I couldn't find a price for. I think I need to find a reseller of Supermicro equipment.

Let me know your thoughts?

**Mods, please feel free to move this to the appropriate section if needed.
 

Spartacus

Well-Known Member
May 27, 2019
Austin, TX
What kind of resources, compute- and RAM-wise, do you need? And does it need local disk, or are you booting off the LAN?
Does it have to be 1-2U? You might consider a blade chassis if you need that many hosts up front.
It would allow you to consolidate the networking, management, and configuration of the hosts.
 

macrules34

Active Member
Mar 18, 2016
Spartacus said:
What kind of resources, compute- and RAM-wise, do you need? And does it need local disk, or are you booting off the LAN?
Does it have to be 1-2U? You might consider a blade chassis if you need that many hosts up front.
It would allow you to consolidate the networking, management, and configuration of the hosts.
Compute can be minimal; I'm not doing high-intensity work (maybe a server-grade Intel Celeron). As for RAM, probably 8GB. I would be booting off a hard drive in the system. No, they can be any size "U", but they need to fit in a maximum of two 42U racks.

The only issue I see with a blade server is that, over time, storage connectivity evolves past what the blade chassis supports. I'm trying to get something that will last 5-10 years without having to replace the entire server, just a PCIe card, say to go from 8Gb Fibre Channel to 16Gb Fibre Channel.
 

cesmith9999

Well-Known Member
Mar 26, 2013
Also, I would look at whether you only need 2 PCIe slots.

In a previous job we had 10,000+ compute servers that only needed to be 1U, but for all of the storage/Hyper-V servers we used 2U, as there were more PCIe slots and local drives available.

It also sounds like you do not have a lot of design or specs laid out for what you are doing. This sounds like... "Holy crap, I finally have budget, what do I do with it?"

Chris
 

macrules34

Active Member
Mar 18, 2016
cesmith9999 said:
Also, I would look at whether you only need 2 PCIe slots.

In a previous job we had 10,000+ compute servers that only needed to be 1U, but for all of the storage/Hyper-V servers we used 2U, as there were more PCIe slots and local drives available.

It also sounds like you do not have a lot of design or specs laid out for what you are doing. This sounds like... "Holy crap, I finally have budget, what do I do with it?"

Chris
I'm sure I only need 2 PCIe expansion slots: one for a dual-port Fibre Channel card and one for a dual-port optical Ethernet card. These servers will be running SUSE Linux, so no virtualization is going on here (a quick sanity check of both cards from the OS is sketched below).

No, this is not a situation of "holy crap, I finally have budget, what do I do with it?". I need 50 hosts to connect to storage arrays. In one of my previous jobs we used Dell R710s, Cisco C210s, and some Supermicro 2U four-node servers. I remember those being Xeon servers with 8GB of RAM, and that seemed to work perfectly fine.
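For reference, confirming the two cards from SUSE once a box is up is quick; something like this (host numbering will vary):

Code:
# list the FC HBA and the Ethernet card
lspci | grep -i -e "fibre channel" -e ethernet

# FC port WWPNs, as exposed by the fc_host class
cat /sys/class/fc_host/host*/port_name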
 

Spartacus

Well-Known Member
May 27, 2019
Austin, TX
macrules34 said:
Compute can be minimal; I'm not doing high-intensity work (maybe a server-grade Intel Celeron). As for RAM, probably 8GB. I would be booting off a hard drive in the system. No, they can be any size "U", but they need to fit in a maximum of two 42U racks.

The only issue I see with a blade server is that, over time, storage connectivity evolves past what the blade chassis supports. I'm trying to get something that will last 5-10 years without having to replace the entire server, just a PCIe card, say to go from 8Gb Fibre Channel to 16Gb Fibre Channel.
Well, good news: you would only need a pair of 4-8 port modular cards instead of replacing one per host; the chassis are all dumb boxes/backplanes. Everything from the blades to the power supplies to the network and adapter modules is generally upgradable (depending on the brand and model you get).

If you're going to be spending this much money, your best bet might be to talk to a CDW/SHI/other representative, note your budget and your ask, and see what solutions they recommend.

They will probably be able to negotiate a bulk deal and provide competing brands' solutions to choose from (assuming you buy new).

Secondly, is there a reason you're going with a lot of physical hosts instead of a powerful virtualized environment? A pair of 1U R640 boxes with ESXi/Hyper-V/KVM could meet or beat the performance of those 50 nodes and use a fraction of the space/power.

What kind of use case do you have that's driving this?
 

macrules34

Active Member
Mar 18, 2016
Spartacus said:
Well, good news: you would only need a pair of 4-8 port modular cards instead of replacing one per host; the chassis are all dumb boxes/backplanes. Everything from the blades to the power supplies to the network and adapter modules is generally upgradable (depending on the brand and model you get).

If you're going to be spending this much money, your best bet might be to talk to a CDW/SHI/other representative, note your budget and your ask, and see what solutions they recommend.

They will probably be able to negotiate a bulk deal and provide competing brands' solutions to choose from (assuming you buy new).

Secondly, is there a reason you're going with a lot of physical hosts instead of a powerful virtualized environment? A pair of 1U R640 boxes with ESXi/Hyper-V/KVM could meet or beat the performance of those 50 nodes and use a fraction of the space/power.

What kind of use case do you have that's driving this?

So the reason I'm looking at doing physical boxes instead of virtualizing is that individuals would each be assigned a host and have a storage array login. That would be their "sandbox"; it's basically an education environment.
 

BlueFox

Legendary Member Spam Hunter Extraordinaire
Oct 26, 2015
macrules34 said:
So the reason I'm looking at doing physical boxes instead of virtualizing is that individuals would each be assigned a host and have a storage array login. That would be their "sandbox"; it's basically an education environment.
That could easily be virtualized as well, at least with iSCSI. FC too, depending on your HBAs, I believe.
 

Spartacus

Well-Known Member
May 27, 2019
Austin, TX
macrules34 said:
So the reason I'm looking at doing physical boxes instead of virtualizing is that individuals would each be assigned a host and have a storage array login. That would be their "sandbox"; it's basically an education environment.
BlueFox said:
That could easily be virtualized as well, at least with iSCSI. FC too, depending on your HBAs, I believe.
This. You could easily make individual VMs, and it would be a lot easier to snapshot/restore them and clone new ones, all while maintaining control of the environments. You can pass through Fibre Channel/iSCSI connections for direct storage. Plus, resources would be shared more efficiently, so you would probably need a lot less hardware if it isn't all being used at the same time; even if it is, a beefy enough host could more than handle it. Just my 2 cents.
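For example, with plain KVM/libvirt (the VM and template names here are made up), resetting or handing out a sandbox is a single command:

Code:
# snapshot a student VM before an exercise, roll it back afterwards
virsh snapshot-create-as student-a clean-baseline
virsh snapshot-revert student-a clean-baseline

# stamp out a new sandbox from a prepared template VM
virt-clone --original sandbox-template --name student-b --auto-clone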
 

macrules34

Active Member
Mar 18, 2016
BlueFox said:
That could easily be virtualized as well, at least with iSCSI. FC too, depending on your HBAs, I believe.
This is where I see a problem with that: students A, B, C, and D share a server (they each have their own VM) and each create 5 LUNs (20 LUNs total); when they go to assign the LUNs to their VM, they see all 20 LUNs and could assign all 20.

The only thing that I could see working is doing hardware passthrough (what it's called in VMware ESX) with the HBAs (I'm not sure if there is a similar feature in Proxmox).

Has anyone done passthrough with blade servers?
 

BlueFox

Legendary Member Spam Hunter Extraordinaire
Oct 26, 2015
I'm not seeing how that would be any different from physical servers. Your SAN won't be able to tell the difference between virtual and physical hosts; virtual NICs have their own MAC and IP addresses just like any other, after all. You'll just need to ensure your hardware is capable of SR-IOV, or create separate software iSCSI initiators for each VM.
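For instance (the interface name and IQN below are made up):

Code:
# SR-IOV route: carve virtual functions out of a capable NIC, then hand one VF to each VM
echo 4 > /sys/class/net/eth0/device/sriov_numvfs
lspci | grep -i "virtual function"

# software-initiator route: each guest runs open-iscsi with its own unique IQN,
# which the array can then mask/zone against like any physical host
cat /etc/iscsi/initiatorname.iscsi
# InitiatorName=iqn.2015-10.com.example:student-a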
 

macrules34

Active Member
Mar 18, 2016
I think that I will go the virtual route with PCIe passthrough. Of course, the limitation is the number of PCIe slots available on the physical server. I will be using Proxmox VE as the hypervisor.
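Rough outline of what I understand that to look like in Proxmox VE (the VM ID and PCI address below are placeholders):

Code:
# enable the IOMMU in the bootloader (Intel example), then reboot
# /etc/default/grub: GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
update-grub

# find the HBA's PCI address
lspci | grep -i "fibre channel"

# pass the whole HBA (or one of its functions) through to VM 101
qm set 101 -hostpci0 0000:41:00.0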
 

BlueFox

Legendary Member Spam Hunter Extraordinaire
Oct 26, 2015
Why not SR-IOV? You won't need to pass through individual NICs that way and therefore won't need many of them.
 

macrules34

Active Member
Mar 18, 2016
The reason is that I need that physical hardware separation to allow zoning to a particular VM. I can't have user A accidentally using user B's storage; I need that separation.