vSAN ESXi, StarWind, or Dell EMC software solutions

Discussion in 'VMware, VirtualBox, Citrix' started by Myth, Sep 21, 2018.

  1. Myth

    Myth Member

    Feb 27, 2018
    Hey Guys,

    I work for a server manufacturer, and we specialize in the media and entertainment industry. We recently had a client ask for 900TB of storage; we have a solution for that, but it's kind of ugly, so I'm looking at alternative server-side software that would run on either Windows or Linux and connect to workstation machines. I'm open to the idea of using virtual machines as well, but I have no idea how that would look in our current working environment.

    We have traditionally worked off of one physical server for up to 50 clients at a time playing back HD, and about 10 to 20 clients playing back UHD and 4K.

    But with a request for 900TB, I'm wondering how we could do things differently. Each workstation will need at least 400-500 MB/s to play back uncompressed video with color correction effects applied.
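    As a rough sanity check on that per-seat number, here is a back-of-the-envelope Python sketch; the UHD 10-bit 4:2:2 at 24 fps parameters are an assumed format for illustration, not something stated in the post:

```python
def stream_rate_mb_s(width, height, bits_per_pixel, fps):
    # Sustained data rate of one uncompressed video stream, in MB/s.
    bytes_per_frame = width * height * bits_per_pixel / 8
    return bytes_per_frame * fps / 1e6

# UHD (3840x2160), 10-bit 4:2:2 (20 bits/pixel), 24 fps -- assumed format
print(f"{stream_rate_mb_s(3840, 2160, 20, 24):.0f} MB/s per stream")  # ~498 MB/s
```

    That lands right at the top of the 400-500 MB/s range quoted, so the per-seat figure is plausible for uncompressed UHD.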

    I think using virtual machines is an interesting approach, but how would the client desktops view the editing workstation? In all of our builds, each machine/workstation was connected via 10GbE, either optical or Cat 6E copper. So I'm just confused about how a VM solution could work, but I do know of one post house that somehow plays back footage in the editing bay from a computer located in the server room. I hope that's not confusing. Anyway, let me know.


  2. Rand__

    Rand__ Well-Known Member

    Mar 6, 2014
    I started with some detail questions, but I think we need to resolve the primary confusion first:

    VM vs physical storage

    I am no expert, but from my point of view you have two options:
    - Editing at the client side/workstation: then you basically need your classic storage. Whether that's a SAN filer or a storage VM with block or file storage (physical disks attached) is up for discussion, but you don't need a hyperconverged setup for this.
    - Editing at the server side: this would mean connecting to VMs running on the servers, which would then need to be able to support the editing software (graphics requirements? Quadro or GRID cards, potential licensing costs, etc.). Access to the editing VMs would be via dedicated workplaces (PC, thin, or zero client, depending on requirements). This still requires some kind of storage, provided to the VMs, where the files to be edited reside, unless you want to localize (copy) each file to each editor's VM beforehand, which I doubt. Storage options here are still the classic SAN filer, but also of course a storage VM that uses no physically attached disks and instead provides access to the underlying hyperconverged storage system. This is not the classic forte of a hyperconverged setup, though, which excels at many concurrent users accessing their VMs and thus generating a large (parallelized) amount of read/write accesses that can be distributed across the (potentially) many nodes forming the hyperconverged cluster.

    So basically this might mean a change in your current workflow (if you have everything centralized now), which might not be a bad thing if you can split things up (by workflow stage, age of file, user type, or whatever), but it needs to be considered, and of course it needs to be tested whether the future setup will be sufficient.

    On that latest point, the question comes down to budget (i.e. how many boxes you are willing to buy (CAPEX) and operate (OPEX) to support your space and speed requirements). You might theoretically save on workstations, of course, but that money will need to be dropped into the server farm (graphics cards, licenses, expensive CPUs)...
  3. kapone

    kapone Active Member

    May 23, 2015
    I don't see where the complications are.

    1. You need fast networking - Easily available.
    2. You need lots of fast storage - Easily available.
    3. Your workstations need access to all of that fast storage over that fast network. - Easily available.

    In the end, the question is money. You can't do this kind of stuff for cheap. Once you accept that...
  4. Connorise

    Connorise Member

    Mar 2, 2017
    >We have traditionally worked off of one physical server for up to 50 clients at a time playing back HD, and about 10 to 20 clients playing back UHD and 4K.

    I would recommend starting with a solid plan in terms of redundancy. Are you willing to have one storage object in your infrastructure whose failure could lead to downtime, i.e. a SPOF? Also, it would be good to know the workload and IOPS that your clients require. "We recently had a client ask for 900TB of storage and we have a solution for that but it's kind of ugly" - is this raw or usable storage?

    If we are talking about HDDs, for example, it would make more sense to go with N+ storage nodes with some sort of replication between them.
    From a redundancy standpoint, it's always good to have backups and a DR site, but if we are talking about business continuity, it should be replication.

    As for recommendations, it's quite hard to point you at the "best" approach. If we are talking about a single unit, I would recommend taking Pure Storage or Tegile into consideration. If you want replication, StarWind should be considered as an option.

    At the end of the day, no matter whom you choose, any storage vendor should be able to provide a recommendation and help in building the storage solution.
  5. BoredSysadmin

    BoredSysadmin Active Member

    Mar 2, 2019
    +1000 this.
    (Up to) 50 workstations needing 500 MB/s each (I assume that's bytes) is about 25 GB/s of aggregate throughput.
    You'll need either many racks of disk-based SAN or an all-flash array.
    I highly recommend checking out Pure's FlashBlade and getting a quote for the specs you need, i.e. 900TB usable and at least 30 GB/s of bandwidth.
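    To make that math explicit, a quick Python sketch; the 50 seats at 500 MB/s come from the thread, while the link sizing is simple arithmetic on top:

```python
workstations = 50
per_seat_gb_s = 0.5                            # 500 MB/s per seat, from the thread

aggregate_gb_s = workstations * per_seat_gb_s  # sustained GB/s the storage must serve
aggregate_gbit_s = aggregate_gb_s * 8          # same figure on the wire, in Gbit/s
uplinks_100gbe = aggregate_gbit_s / 100        # fully saturated 100GbE links needed
print(aggregate_gb_s, aggregate_gbit_s, uplinks_100gbe)
```

    In practice you want headroom above that worst case, which is why quoting for 30 GB/s rather than a bare 25 GB/s is reasonable.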
  6. muhfugen

    muhfugen Member

    Dec 5, 2016
    You don't need many racks' worth of disks to reach 25 GB/s, even with HDDs.
  7. BoredSysadmin

    BoredSysadmin Active Member

    Mar 2, 2019
    Keep in mind this is aggregate bandwidth across 50 different streams. One would need many more disks than the ~128 that simple math suggests. Plus, this is file storage, not block, which alone makes it much more complicated to scale.
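    A sketch of why the naive disk count falls short (Python; the per-disk throughput figures are illustrative assumptions, not measurements):

```python
aggregate_gb_s = 25.0   # required throughput, from the thread
hdd_seq_mb_s = 200      # assumed: one 7.2K HDD doing purely sequential reads
hdd_mixed_mb_s = 60     # assumed: the same HDD serving many interleaved streams

naive = aggregate_gb_s * 1000 / hdd_seq_mb_s        # ~125 disks: the "simple math"
realistic = aggregate_gb_s * 1000 / hdd_mixed_mb_s  # ~417 disks once seeks dominate
print(round(naive), round(realistic))
```

    Fifty independent streams turn sequential I/O into near-random I/O at the spindle level, so per-disk throughput collapses and the disk count balloons.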