What servers to buy?

Discussion in 'Chassis and Enclosures' started by macrules34, Mar 25, 2020 at 1:11 PM.

  1. macrules34

    macrules34 Member

    Joined:
    Mar 18, 2016
    Messages:
    270
    Likes Received:
    9
    I am looking to get a quote on a large quantity (50) of servers to meet the following needs:
    - Connectivity to storage arrays (FC & iSCSI)
    - Two PCIe slots
    - I would be discovering LUNs and running I/O generators like IOmeter against those LUNs (see the sketch below)
    - I would be installing SUSE Linux on these systems

    These systems must be rack mounted; they can range from a 1U server to a 2U 4-node system. There are so many options out there that I'm not sure what route to go. The cheapest I found was a Dell PowerEdge R240 at $539 each; then there is the option of the Supermicro "BigTwin" 4-node server, which I couldn't find a price for. I think I need to find a reseller of Supermicro equipment.
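
    To give a rough idea of the workload on each host, here is a minimal sketch. It assumes fio as the I/O generator (I mention IOmeter above; fio is just easier to script on Linux) and that the FC/iSCSI LUNs are already presented to the host. The device-path patterns and fio parameters are illustrative only.

        # lun_io_sketch.py -- illustrative only: enumerate SAN LUNs and run a
        # short random-read pass with fio against each one (assumes fio is installed)
        import glob
        import subprocess

        def discover_luns():
            # FC and iSCSI LUNs show up under /dev/disk/by-path with "fc" or
            # "iscsi" in the symlink name; partition entries are skipped
            paths = glob.glob("/dev/disk/by-path/*fc*") + glob.glob("/dev/disk/by-path/*iscsi*")
            return [p for p in paths if "-part" not in p]

        def run_fio(device, runtime_s=60):
            # 4k random read at queue depth 32 with direct I/O -- example parameters only
            subprocess.run([
                "fio", "--name=lun-test", f"--filename={device}",
                "--rw=randread", "--bs=4k", "--iodepth=32",
                "--ioengine=libaio", "--direct=1",
                "--time_based", f"--runtime={runtime_s}",
            ], check=True)

        if __name__ == "__main__":
            for lun in discover_luns():
                print(f"Testing {lun}")
                run_fio(lun)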

    Let me know your thoughts.

    **Mods please feel free to move to appropriate section if needed.
     
    #1
  2. Spartacus

    Spartacus Active Member

    Joined:
    May 27, 2019
    Messages:
    387
    Likes Received:
    124
    What kind of resources, compute- and RAM-wise, do you need? And does it need local disk, or are you booting off LAN?
    Does it have to be 1U-2U? You might consider a blade chassis if you need that many hosts up front.
    It would allow you to consolidate the networking, management, and configuration of the hosts.
     
    #2
  3. Marsh

    Marsh Moderator

    Joined:
    May 12, 2013
    Messages:
    2,176
    Likes Received:
    1,032
    #3
  4. macrules34

    macrules34 Member

    Joined:
    Mar 18, 2016
    Messages:
    270
    Likes Received:
    9
    Compute can be minimal since I'm not doing high-intensity work (maybe a server-grade Intel Celeron); as for RAM, probably 8GB. I would be booting off a hard drive in the system. No, it can be any size "U", but it all needs to fit in a max of two 42U racks.

    The only issue I see with a blade server is that, over time, storage connectivity evolves past what the blade chassis supports. I'm trying to get something that will last like 5-10 years without having to replace the entire server, maybe just a PCIe card, say to go from 8Gb Fibre Channel to 16Gb Fibre Channel.
     
    #4
  5. cesmith9999

    cesmith9999 Well-Known Member

    Joined:
    Mar 26, 2013
    Messages:
    1,132
    Likes Received:
    339
    Also, I would look at whether you really only need 2 PCIe slots.

    In a previous job we had 10,000+ compute servers that only needed to be 1U, but for all of the storage/Hyper-V servers we used 2U, as there were more PCIe slots and local drives available.

    It also sounds like you do not have a lot of design or specs laid out for what you are doing. This sounds like... "holy crap, I finally have budget, what do I do with it?"

    Chris
     
    #5
  6. macrules34

    macrules34 Member

    Joined:
    Mar 18, 2016
    Messages:
    270
    Likes Received:
    9
    I'm sure I only need 2 PCIe expansion slots: one for a dual-port Fibre Channel card and one for a dual-port optical Ethernet card. These servers will be running SUSE Linux, so no virtualization is going on here.

    No, this is not a "holy crap, I finally have budget, what do I do with it?" situation. I need 50 hosts to connect to storage arrays. In one of my previous jobs we used Dell R710s, Cisco C210s, and some Supermicro 2U 4-node servers. I remember those being Xeon servers with 8GB of RAM, and that seemed to work perfectly fine.
     
    #6
  7. Spartacus

    Spartacus Active Member

    Joined:
    May 27, 2019
    Messages:
    387
    Likes Received:
    124
    Well, good news: you would only need a pair of 4-8 port modular cards instead of replacing one for each host, since the chassis are all dumb boxes/backplanes. Anything from the blades to the power supplies to the network and adapter modules is generally upgradable (depending on the brand and model you get).

    If you're going to be spending this much money, your best bet might be to talk to a CDW/SHI/other representative, give them your budget and requirements, and see what solutions they recommend.

    They will probably be able to negotiate a bulk deal and provide competing brands' solutions to choose from (assuming you buy new).

    Secondly, is there a reason you're going with a lot of physical hosts instead of a powerful virtualized environment? A pair of 1U R640 boxes with ESXi/Hyper-V/KVM could meet or beat the performance of those 50 nodes and use a fraction of the space/power.

    What kind of use case do you have that's driving this?
     
    #7
    Last edited: Mar 25, 2020 at 4:39 PM
  8. macrules34

    macrules34 Member

    Joined:
    Mar 18, 2016
    Messages:
    270
    Likes Received:
    9

    So the reason I'm looking at physical boxes instead of virtualizing is that individuals would be assigned a host and have a storage array login. That would be their "sandbox", basically an education environment.
     
    #8
  9. BlueFox

    BlueFox Well-Known Member

    Joined:
    Oct 26, 2015
    Messages:
    741
    Likes Received:
    286
    That could easily be virtualized as well, at least with iSCSI. FC too, depending on your HBAs, I believe.
     
    #9
    Spartacus likes this.
  10. Spartacus

    Spartacus Active Member

    Joined:
    May 27, 2019
    Messages:
    387
    Likes Received:
    124
    This. You could easily make individual VMs, and it would be a lot easier to snapshot/restore/clone new ones, all while maintaining control of the environments. You can pass through Fibre Channel/iSCSI connections for direct storage. Plus, resources would be shared more efficiently, so you would probably need a lot less hardware if it's not all being used at the same time; even if it is, a beefy enough host could more than handle it. Just my 2 cents.
     
    #10
  11. macrules34

    macrules34 Member

    Joined:
    Mar 18, 2016
    Messages:
    270
    Likes Received:
    9
    This is where I see a problem with that: students A, B, C, and D share a server (each with their own VM), and they each create 5 LUNs (20 LUNs total). When they go to assign the LUNs to their VM, they see all 20 LUNs and can assign all 20.

    The only thing I could see working is hardware passthrough (what it's called in VMware ESXi) of the HBAs (not sure if there is a similar feature in Proxmox).

    Has anyone done passthrough with blade servers?
     
    #11
  12. BlueFox

    BlueFox Well-Known Member

    Joined:
    Oct 26, 2015
    Messages:
    741
    Likes Received:
    286
    I'm not seeing how that would be any different from physical servers. Your SAN won't be able to tell the difference between virtual and physical hosts; virtual NICs have their own MAC and IP addresses just like any other, after all. You'll just need to ensure your hardware is capable of SR-IOV, or create separate software iSCSI initiators for each VM.
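
    For the software-initiator route, a minimal sketch of what each VM would do, assuming open-iscsi inside the VM (the VM name, IQN suffix, portal address, and service name below are placeholders/assumptions, not anything specific to your setup):

        # per_vm_initiator.py -- illustrative only: give this VM its own iSCSI
        # initiator name (open-iscsi), then discover and log in to the array
        import subprocess

        VM_NAME = "student-a"                       # hypothetical per-VM identifier
        PORTAL = "192.0.2.10"                       # example/documentation address
        IQN = f"iqn.2020-03.lab.example:{VM_NAME}"  # unique IQN per VM for LUN masking

        # open-iscsi reads its initiator name from this file
        with open("/etc/iscsi/initiatorname.iscsi", "w") as f:
            f.write(f"InitiatorName={IQN}\n")

        # restart the daemon so the new initiator name is used (unit name may vary by distro)
        subprocess.run(["systemctl", "restart", "iscsid"], check=True)

        # discover targets on the portal, then log in to the discovered nodes
        subprocess.run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL], check=True)
        subprocess.run(["iscsiadm", "-m", "node", "-p", PORTAL, "--login"], check=True)

    On the array side you would then mask each student's LUNs to their VM's IQN only, which gives you the same per-host separation you would get with physical boxes.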
     
    #12
    Spartacus likes this.
  13. macrules34

    macrules34 Member

    Joined:
    Mar 18, 2016
    Messages:
    270
    Likes Received:
    9
    I think I will go the virtual route with PCIe passthrough. Of course, the limitation is the number of PCIe slots available on the physical server. I will be using Proxmox VE as the hypervisor.
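
    Something like this is what I have in mind for wiring an HBA to a VM, as a rough sketch only (it assumes Proxmox's qm tool and that IOMMU/VT-d is already enabled in BIOS and on the kernel command line; the VM ID and PCI address are placeholders):

        # proxmox_passthrough_sketch.py -- illustrative only: attach a physical FC HBA
        # to a Proxmox VM with PCI passthrough
        import subprocess

        VMID = "101"               # hypothetical Proxmox VM ID for one student's sandbox
        HBA_PCI_ADDR = "41:00.0"   # find the real address with: lspci | grep -i fibre

        # qm adds a hostpciN entry to /etc/pve/qemu-server/<vmid>.conf; the VM then
        # sees the HBA (and its WWPNs) as if it were a physical host
        subprocess.run(["qm", "set", VMID, "--hostpci0", HBA_PCI_ADDR], check=True)

    With a dual-port HBA passed through per VM, each VM presents its own WWPNs to the fabric, so zoning should work the same as it would for a physical host.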
     
    #13
  14. BlueFox

    BlueFox Well-Known Member

    Joined:
    Oct 26, 2015
    Messages:
    741
    Likes Received:
    286
    Why not SR-IOV? You won't need to pass through individual NICs that way and therefore won't need many of them.
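
    For reference, creating the virtual functions is just a sysfs write on the host; here's a minimal sketch (the interface name and VF count are placeholders, but the sriov_numvfs/sriov_totalvfs paths are the standard kernel interface):

        # sriov_vfs_sketch.py -- illustrative only: carve virtual functions (VFs) out
        # of one SR-IOV-capable NIC so each VM gets its own VF instead of a whole card
        IFACE = "eth0"      # hypothetical SR-IOV-capable port
        NUM_VFS = 4         # e.g. one VF per student VM on this host

        base = f"/sys/class/net/{IFACE}/device"

        with open(f"{base}/sriov_totalvfs") as f:
            total = int(f.read())
        print(f"{IFACE} supports up to {total} VFs")

        # the kernel rejects changing a nonzero VF count directly, so reset first
        with open(f"{base}/sriov_numvfs", "w") as f:
            f.write("0")

        # writing to sriov_numvfs tells the driver to create that many VFs;
        # each VF shows up as its own PCI device that can be given to a VM
        with open(f"{base}/sriov_numvfs", "w") as f:
            f.write(str(min(NUM_VFS, total)))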
     
    #14
  15. macrules34

    macrules34 Member

    Joined:
    Mar 18, 2016
    Messages:
    270
    Likes Received:
    9
    The reason is that I need that physical hardware separation to allow for zoning to a particular VM. I can't have user A accidentally using user B's storage; I need that separation.
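
    For the zoning side, the plan would be to grab the WWPNs from inside each VM once its HBA is passed through, along the lines of this rough sketch (the /sys/class/fc_host path is the standard Linux FC host interface; everything else is illustrative):

        # wwpn_sketch.py -- illustrative only: run inside a VM that has an FC HBA
        # passed through to it, and print the WWPNs to hand to the SAN admin for zoning
        import glob

        for host in sorted(glob.glob("/sys/class/fc_host/host*")):
            with open(f"{host}/port_name") as f:
                wwpn = f.read().strip()      # e.g. 0x10000090fa123456
            print(f"{host.split('/')[-1]}: WWPN {wwpn}")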
     
    #15