Recent content by grogthegreat

  1. Advice on ZFS/FreeNAS for 40TB

    Since we are on the topic of ZFS downsides... While using mirrors with ZFS makes it easier to expand, since you only need to add two disks at a time, ZFS does not rebalance data after you add the drives. This means that if you add two new drives to your ZFS pool of mirrors after the pool is... (a toy sketch of this follows after the list)
  2. CEPH write performance pisses me off!

    What you described is called 'hyperconverged' if you like tech lingo. ScaleIO is neat in that it can run hyperconverged (storage and hypervisor in the same box) or not. Not only does ScaleIO support it, but it even has a vCenter web client add-on that will deploy that config for you; even...
  3. CEPH write performance pisses me off!

    @whitey If you didn't see it in the picture, the SSDs are PM853T 960GB SATA drives. Found the amazing eBay deal for them on this site, in fact. @marcio Since no one directly answered the minimum node question: yes, ScaleIO does need at least three nodes. I've had both three and four nodes in...
  4. CEPH write performance pisses me off!

    Ceph is awesome since it does file, block, and object. On the other hand, if you only need block and performance is a concern, I've been happy with ScaleIO. Just three SSDs get me 4.3Gb/s write and 10.9Gb/s read.
  5. ESXI - vSwitch for backup traffic?

    Using SCP will always be slow. Borg is great because it does its dedupe and compression before sending over the network, which makes it very fast. As you've found, Borg doesn't yet run directly on ESXi. I have a ticket open on the issue, but a programmer will need to take it on for there to...
  6. Shared storage options for VMware ESXi

    Sorry to bump this old thread, but for those wondering how ScaleIO does with a small SSD setup instead of the HDDs I've been posting, here you go. This is ATTO on a VM running on an SSD ScaleIO pool using just three nodes, where each node only has a single PM853T 960GB SATA SSD. A three-drive pool is...
  7. ESXi Boot Device - Rust, SSD, SATA DOM?

    I've always booted ESXi from USB. If you have an internal USB port, I'd suggest a quality 8GB USB stick. ESXi performance isn't related to the boot drive once it is booted, so anything works. Spinning rust isn't worth the space or power usage. An SSD or SATA DOM is a bit of a waste of that performance...
  8. Question about OwnCloud

    I have OwnCloud running on a VPS with no desktop environment. Works great, although you might want to check out Nextcloud. It seems the co-founder and a lot of core OwnCloud developers have bailed and created an OwnCloud fork. Worth comparing before installing.
  9. Shared storage options for VMware ESXi

    Just above, in post #24, I did the same test on 1Gb using the same setup. I have not played with RFcache. These results were with 2GB of RAM cache per node enabled, but I don't think the test hit the cache, since we would probably see higher numbers if it had.
  10. Shared storage options for VMware ESXi

    Moved to a 10Gb switch (LB6M) without making any other changes. Running a low-QD test on a single VM against a single volume probably isn't the best test, but more data is better than none. I have not done any performance tuning or even tested the node-to-node bandwidth yet. At some point soon I'll...
  11. Shared storage options for VMware ESXi

    I don't think you can use fault sets like that. ScaleIO treats fault sets sort of like one large node. The fault sets are still within the same cluster, so the performance of your cluster would be really bad if part of the cluster were separated by a WAN link.
  12. Shared storage options for VMware ESXi

    @spazoid A protection domain is a group of storage servers that act together to form a cluster. If you have multiple node failures within a protection domain, it will only take down that protection domain, and the other protection domains will be okay. It is really only useful if you have a...
  13. Shared storage options for VMware ESXi

    VSA replication is nice, and if you are using RAID 10 as the base for each host, then yes, it can handle more simultaneous failures than ScaleIO. It would take a lot of failures before you had to resort to backups. I've never used it, but everything I've read says that it is fast as well, since you are accessing local storage...
  14. Shared storage options for VMware ESXi

    Even though ScaleIO does a lot beyond the fast rebuilds (checksumming, scrubs, snapshots, etc.) to protect your data, it still doesn't remove the need for backups. Earlier you mentioned RAID 10 volumes replicated to another server. Since that is a backup and not part of the same storage volume, it...
  15. Shared storage options for VMware ESXi

    Did you mean RAID 10 instead of RAID 0? ScaleIO will be much faster. Here is why: when you have a drive fail in a RAID 10, it rebuilds by copying data from the surviving drive in the mirror to the hot-spare drive. You are reading from one drive and writing to one drive, so you are limited to the... (see the rebuild-time sketch after the list)
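
To illustrate the no-rebalancing point from the ZFS post in item 1, here is a rough toy model in Python. This is not ZFS's real allocator; the vdev names, sizes, and allocation step are made-up assumptions, purely to show that data already on the old mirrors never moves and new writes land almost entirely on the new, emptier vdev.

```python
# Toy model of pool usage when a new mirror vdev is added. NOT ZFS's real
# allocator; it only illustrates that existing data is never migrated, so
# the old vdevs stay nearly full while new writes favor the new vdev.

vdevs = [
    {"name": "mirror-0", "size_tb": 8.0, "used_tb": 7.0},  # original, nearly full
    {"name": "mirror-1", "size_tb": 8.0, "used_tb": 7.0},  # original, nearly full
]

def add_vdev(name, size_tb):
    """Adding a vdev only grows capacity; nothing already written is rebalanced."""
    vdevs.append({"name": name, "size_tb": size_tb, "used_tb": 0.0})

def write(tb):
    """Crude stand-in for allocation: each chunk goes to the vdev with the most free space."""
    remaining = tb
    while remaining > 1e-9:
        target = max(vdevs, key=lambda v: v["size_tb"] - v["used_tb"])
        chunk = min(0.1, remaining, target["size_tb"] - target["used_tb"])
        target["used_tb"] += chunk
        remaining -= chunk

add_vdev("mirror-2", 8.0)  # the two new disks added to the pool of mirrors
write(3.0)                 # data written after the expansion

for v in vdevs:
    print(f'{v["name"]}: {v["used_tb"]:.1f}/{v["size_tb"]:.1f} TB used')
# mirror-0 and mirror-1 are still at ~7 TB used; nearly all new data went to mirror-2.
```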
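
And a back-of-envelope version of the rebuild argument in item 15: a RAID 10 rebuild reads one surviving mirror member and writes one hot spare, so it is capped at roughly a single drive's throughput, while a distributed layout like ScaleIO's has every remaining drive rebuild a small slice in parallel. The drive size, speed, and pool size below are assumed numbers, not measurements.

```python
# Back-of-envelope rebuild-time comparison (assumed numbers, not a benchmark).

DRIVE_SIZE_GB = 4000     # data to re-protect after a single drive failure (assumption)
DRIVE_SPEED_MBPS = 150   # sustained throughput of one HDD in MB/s (assumption)

def raid10_rebuild_hours(size_gb, speed_mbps):
    """RAID 10: read one surviving mirror member, write one hot spare.
    The whole rebuild is limited to a single drive's throughput."""
    return (size_gb * 1000 / speed_mbps) / 3600

def distributed_rebuild_hours(size_gb, speed_mbps, pool_drives):
    """Distributed (ScaleIO-style): the failed drive's data is spread over the
    pool, so every remaining drive rebuilds a small slice in parallel."""
    per_drive_gb = size_gb / (pool_drives - 1)
    return (per_drive_gb * 1000 / speed_mbps) / 3600

print(f"RAID 10 rebuild:              {raid10_rebuild_hours(DRIVE_SIZE_GB, DRIVE_SPEED_MBPS):.1f} h")
print(f"24-drive distributed rebuild: {distributed_rebuild_hours(DRIVE_SIZE_GB, DRIVE_SPEED_MBPS, 24):.1f} h")
```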