Search results

  1. NISMO1968

    StarWind VSA Linux Version (sort of)

    Man, you have a point! The less software runs at an escalated privilege level, the fewer grey hairs admins will develop while running it. I'd be very careful (read: concerned) with a storage stack having "root" rights, even with VM-level isolation.
  2. NISMO1968

    StarWind VSA Linux Version (sort of)

    This is very naive :) I wish software engineering happened in your Universe, where unicorns eat rainbows and poop butterflies... In our Universe the environment is much harsher :( In a nutshell: properly written system software will benefit from all the architectural features particular...
  3. NISMO1968

    StarWind VSA Linux Version (sort of)

    You isolate performance-crucial primitives (like, say, mutexes and critical sections; any one that isn't acquired right on call puts the calling thread into an APC state, so you get at least a scheduler timeout, which is ~30 ms; done through WINE you'll get even more thread context switches and re-queueing, so...
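    To make the contended-lock cost concrete, here's a minimal pthreads sketch (assuming Linux and gcc -pthread; the post is about Windows primitives under WINE, so this is an analogy, not the same code path). The uncontended acquire stays in user space; the contended one parks the calling thread in the kernel and pays for a context switch plus a scheduler wakeup. The ~30 ms figure above is the post's claim, not something this sketch proves.

        /* Uncontended vs. contended mutex acquire: the slow path goes
         * through a kernel wait, so its latency is set by the scheduler. */
        #include <pthread.h>
        #include <stdio.h>
        #include <time.h>

        static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

        static long long now_ns(void)
        {
            struct timespec ts;
            clock_gettime(CLOCK_MONOTONIC, &ts);
            return (long long)ts.tv_sec * 1000000000LL + ts.tv_nsec;
        }

        static void *holder(void *arg)
        {
            (void)arg;
            pthread_mutex_lock(&lock);
            struct timespec hold = { 0, 50 * 1000 * 1000 }; /* hold for 50 ms */
            nanosleep(&hold, NULL);
            pthread_mutex_unlock(&lock);
            return NULL;
        }

        int main(void)
        {
            long long t0, t1;

            /* Fast path: lock is free, acquire never leaves user space. */
            t0 = now_ns();
            pthread_mutex_lock(&lock);
            t1 = now_ns();
            printf("uncontended acquire: %lld ns\n", t1 - t0);
            pthread_mutex_unlock(&lock);

            /* Slow path: another thread owns the lock, so this acquire
             * blocks in the kernel (futex wait) until it is woken up. */
            pthread_t th;
            pthread_create(&th, NULL, holder, NULL);
            struct timespec settle = { 0, 5 * 1000 * 1000 }; /* let holder win */
            nanosleep(&settle, NULL);
            t0 = now_ns();
            pthread_mutex_lock(&lock);
            t1 = now_ns();
            printf("contended acquire:   %lld ns\n", t1 - t0);
            pthread_mutex_unlock(&lock);
            pthread_join(th, NULL);
            return 0;
        }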
  4. NISMO1968

    ESXi iSER iSCSI

    You might end up with iSER working, but in our experience iSER on VMware is so badly implemented you'll hardly notice any CPU usage difference even on 10 GbE networking... TL;DR: don't waste your time on it :)
  5. NISMO1968

    S2D slow write (no parity & NVMe cache)

    We can all keep our fingers crossed, but from multiple discussions with Microsoft developers, this won't ever happen. SSDs, unlike spinning disks, don't tolerate the "Force Unit Access" (FUA) flag on writes, which forces the ACK to be returned only after the data "touches" the actual storage medium.
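    For illustration only, a minimal sketch of a "no ACK until it's on the medium" write as seen from user space on Linux (an assumption for the example; the post is about S2D internals, which this doesn't touch). The path is hypothetical. With O_DSYNC every write(2) completes only once the data is stable; on block devices that advertise FUA support the kernel can map this to FUA writes rather than full cache flushes.

        /* Durable 4K write: O_DSYNC delays completion until the data,
         * not just the page cache, is stable. */
        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>

        int main(void)
        {
            char buf[4096];
            memset(buf, 0xAB, sizeof buf);

            /* Hypothetical path; on a raw block device the same flag applies. */
            int fd = open("/tmp/fua-demo.bin", O_WRONLY | O_CREAT | O_DSYNC, 0644);
            if (fd < 0) { perror("open"); return 1; }

            /* Returns only after the data is durable, not merely buffered. */
            if (write(fd, buf, sizeof buf) != (ssize_t)sizeof buf) {
                perror("write");
                close(fd);
                return 1;
            }
            close(fd);
            return 0;
        }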
  6. NISMO1968

    Looking for a good SAN solution for VMware

    This statement from OSNEXUS means their virtual storage is so fundamentally slow that it doesn't benefit from the low latency RDMA networking can provide! VMware vSAN is not much different here; vSAN doesn't do RoCE(v2) / iWARP because it's slow as a pig, but... This is V2 of their design and their...
  7. NISMO1968

    Looking for a good SAN solution for VMware

    VMware vSAN has been able to do RAID5/6 since... forever? ...so I don't see why you're complaining about low usable capacity; it's definitely on the level. Performance is on the slow side compared to the other guys, though. The good thing is it's in-kernel and supported by the hypervisor vendor, so one throat...
  8. NISMO1968

    ReFS on new volume, or not

    Why do you think so?
  9. NISMO1968

    ZFS bottlenecks

    If you like StarWind's performance, you could think about replacing the non-RDMA Intel NICs with Mellanox CX3 (they're cheap on eBay) or CX4 cards to run iSER rather than iSCSI for both East-West traffic and the vSphere uplinks. RDMA is king :)
  10. NISMO1968

    ZFS bottlenecks

    Open-E is a pretty shitty product in terms of performance and software quality/maturity, and especially support, which is pretty much AWOL. If you plan to stick with ZFS, I'd recommend either plain vanilla FreeBSD or Linux + ZoL done right (don't be afraid of ZoL; the next version of FreeBSD is going to have...
  11. NISMO1968

    I need a SAN...or vSAN..or Shared Storage...or....WTF is this so complex?

    What's funny is that in-VM is the only approach they take on VMware. I'm curious why they decided to go "native" on Hyper-V, but run inside a VM on VMware?
  12. NISMO1968

    Software Defined Storage

    You'll be fine with Ceph. Alternatively, go with GlusterFS (if you don't plan to scale out, which is a bit complicated with G/FS). ScaleIO is history... P.S. OpenIO looks promising! :)
  13. NISMO1968

    Is free non-production ScaleIO gone with the merger?

    It was exceptionally difficult to configure the right way, and that was a showstopper. Many people were complaining, and DellEMC decided to pull out...
  14. NISMO1968

    I need a SAN...or vSAN..or Shared Storage...or....WTF is this so complex?

    All the listed options are good solutions. Datacenter licensing isn't a big deal if you run hyperconverged, but for storage-only, S2D is overpriced. StarWind has a free version, but it comes w/out a UI (PowerShell mgmt isn't for everybody) and w/out guaranteed support (which is understandable). VMware...
  15. NISMO1968

    Software Defined Storage

    You can still get the licenses under the table, but ScaleIO has mediocre performance on a small number of nodes, so anything below 8 simply doesn't make sense. Did you try Ceph?
  16. NISMO1968

    Performance Storage Spaces 2-way-mirror with SAS-SSD-JBODs

    Assuming 2-way mirror Clustered Storage Spaces (C/S/S) vs. StarWind vSAN: StarWind will be faster on reads (because it will read from both sets of data, aggregating I/O), but Clustered Storage Spaces will be faster on writes (because they don't need to send a second copy of the data to a remote tier over...
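    A back-of-envelope sketch of that trade-off, with made-up numbers (the IOPS and latency figures are illustrative assumptions, not measurements of either product): mirrored reads can be served from both copies at once, while a synchronously replicated write is only acknowledged after the remote copy has landed too.

        /* Toy model: 2-way mirror read aggregation vs. the extra network
         * hop a synchronously replicated write has to pay for. */
        #include <stdio.h>

        int main(void)
        {
            /* Assumed characteristics, illustrative only. */
            double read_iops_per_copy = 100000.0;
            double local_write_us     = 100.0; /* local SSD write latency */
            double net_rtt_us         = 50.0;  /* replication round trip  */

            /* Reads: both copies serve I/O, so throughput roughly doubles. */
            double mirror_read_iops = 2.0 * read_iops_per_copy;

            /* Writes: the ACK waits for the slower of the local write and
             * the network round trip plus the remote write. */
            double remote_path_us   = net_rtt_us + local_write_us;
            double write_latency_us = local_write_us > remote_path_us
                                    ? local_write_us : remote_path_us;

            printf("aggregated read IOPS : %.0f\n", mirror_read_iops);
            printf("replicated write lat.: %.0f us (vs %.0f us local-only)\n",
                   write_latency_us, local_write_us);
            return 0;
        }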
  17. NISMO1968

    Need to pick out / build SAN

    You can do FreeBSD (Linux?) with ZFS combined with some shared SAS drives to build a dual-controller DIY SAN. Check this out --> Home · ewwhite/zfs-ha Wiki · GitHub. There's also a way to replicate ZFS pools for a "shared nothing" setup, but I'd rather avoid doing that...
  18. NISMO1968

    Need to pick out / build SAN

    Interesting! From your list, StorMagic is the only "laggard" that uses controller VMs to provide storage. The last time we checked them, they would barely deliver maybe 40K IOPS with a single underlying Intel DC3700 doing 450K+ IOPS. What OS are you using? What performance numbers are you getting?
  19. NISMO1968

    HA iSER Target with ESXi Test Lab using StarWind Free vSAN

    These are pretty amazing numbers! 5+ GB/sec with sub-20% CPU usage & a 30 ms response time. Any chance of seeing 4KB 100% random reads & 4KB 100% random writes? BTW, an iSCSI vs. iSER head-to-head would be nice to have as well :) P.S. Great job!!!
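    For reference, the workload being requested is the classic 4KB 100% random read test. fio is the usual tool for this; the sketch below just shows the access pattern in C (assuming Linux, O_DIRECT support, and a pre-created 1 GiB test file at a hypothetical path). A matching random write test would issue pwrite(2) over the same offsets.

        /* Single-threaded, queue-depth-1 4K random read loop; O_DIRECT
         * bypasses the page cache so the device is actually measured. */
        #define _GNU_SOURCE
        #include <fcntl.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>
        #include <unistd.h>

        #define BLOCK   4096
        #define IOS     100000
        #define FILE_SZ (1024LL * 1024 * 1024) /* pre-created 1 GiB file */

        int main(void)
        {
            int fd = open("/tmp/testfile.bin", O_RDONLY | O_DIRECT);
            if (fd < 0) { perror("open"); return 1; }

            /* O_DIRECT requires an aligned buffer. */
            void *buf;
            if (posix_memalign(&buf, BLOCK, BLOCK)) return 1;

            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            for (long i = 0; i < IOS; i++) {
                off_t off = (off_t)(rand() % (FILE_SZ / BLOCK)) * BLOCK;
                if (pread(fd, buf, BLOCK, off) != BLOCK) { perror("pread"); return 1; }
            }
            clock_gettime(CLOCK_MONOTONIC, &t1);

            double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
            printf("4K random read: %.0f IOPS (1 thread, QD1)\n", IOS / secs);

            free(buf);
            close(fd);
            return 0;
        }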