Physical efficiency - Data recovery in a small area.


TrumanHW

Active Member
Sep 16, 2018
I need about 4 computers for doing different data recovery tasks.

Most need a full-height card in them, and ideally they would all connect to one storage device to aggregate recoveries onto.

Part of me thinks it should be possible to use an ESXi server and map PCIe slots to guest operating systems (if that isn't stupidly worded) ... so that I could run multiple jobs within one chassis.

It may still not work for other reasons I can't anticipate yet.

Then, I'd also (obviously) want to recover to RAID 61 (a mirror of two RAID 6 arrays, so each stripe is mirrored, for double protection and no slowdown should I need to rebuild an array while copying the data off it). The last thing we need is a difficult recovery being lost after notifying a customer their data is ready for pickup.
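
As a rough illustration of what that layout costs in capacity (my own back-of-the-envelope, assuming two equal RAID 6 sides built from identical drives):

```python
# Back-of-the-envelope for a RAID 61 (mirror of two RAID 6 arrays).
# Assumptions (mine, for illustration): both sides have the same number
# of identical drives; 2 drives' worth of parity per RAID 6 side.

def raid61_usable_tb(drives_per_side: int, drive_tb: float) -> float:
    """Usable capacity: one RAID 6 side minus its two parity drives;
    the second side is the mirror copy and adds no usable space."""
    return (drives_per_side - 2) * drive_tb

def raid61_raw_tb(drives_per_side: int, drive_tb: float) -> float:
    return 2 * drives_per_side * drive_tb

if __name__ == "__main__":
    n, size_tb = 8, 10.0  # hypothetical: 8 drives per side, 10 TB each
    usable, raw = raid61_usable_tb(n, size_tb), raid61_raw_tb(n, size_tb)
    print(f"raw {raw:.0f} TB -> usable {usable:.0f} TB ({usable / raw:.0%})")
    # Survives any 2 failures per side (or the loss of a whole side)
    # before data loss.
```

With 8x 10 TB drives per side that works out to 60 TB usable from 160 TB raw, which is the price of the double protection.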


OR, if ESXi isn't a sensible way of doing something like this -- perhaps something like a Supermicro 4-node chassis, which has horizontally mounted but full-height risers for 2 cards.

Another problem I've had was exporting data over the network using R-Studio during the logical recovery phase, even after mapping the drive. (I recovered the array in Windows because there aren't many Macs with 8 HDD slots like my Dell T320 has.) I then attempted to use 10GbE to an ATTO 10GbE-to-Thunderbolt interface ... and kept having problems copying the data...
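
Just to sanity-check the link itself, here's a rough copy-time estimate (my own numbers, assuming roughly 75% of 10GbE line rate actually gets through after protocol overhead, which varies a lot with SMB vs. iSCSI and tuning):

```python
# Rough copy-time estimate for moving a recovery over the network.
# Assumptions (mine): 10GbE line rate is 1.25 GB/s and real-world
# throughput lands around 75% of that; adjust to taste.

def copy_hours(size_tb: float, link_gbps: float = 10.0, efficiency: float = 0.75) -> float:
    bytes_total = size_tb * 1e12                      # decimal TB
    bytes_per_sec = link_gbps * 1e9 / 8 * efficiency  # Gb/s -> B/s
    return bytes_total / bytes_per_sec / 3600

for size_tb in (4, 8, 16):
    print(f"{size_tb} TB over 10GbE @ 75%: ~{copy_hours(size_tb):.1f} h")
```

So a multi-TB copy should only take a few hours if the link is behaving; if it takes far longer than that, the bottleneck is more likely the protocol or the Thunderbolt adapter than raw bandwidth.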

Would iSCSI likely be better suited for such tasks than mapped network drives?

And, lastly (though I'm going to post this in the forums section too): as I'm preparing for now and the future, I'd like an SFP+ switch with 6x QSFP+ ports ... and there don't seem to be many DIY explanations of how to install slower, lower-dBA fans (via voltage mods) to make the switch quiet.

I fully expect to be able to run 4TB to 8TB NVMe iSCSI arrays via ZFS within 3 years without breaking the bank.

Any ideas ... or problems you foresee with such sophisticated configuration requirements?

Thanks
 

Evan

Well-Known Member
Jan 6, 2016
NVMe-oF will be using RDMA (InfiniBand, RoCE, or iWARP), I'm sure; you can forget about straight iSCSI access being reasonable for fast storage. Anyway, the point is your switch will need to support this.
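
(Rough bandwidth math of my own, not benchmarks: a single PCIe 3.0 x4 NVMe drive can stream on the order of 3 GB/s sequentially, while 10GbE tops out around 1.25 GB/s before any protocol overhead, so the fabric becomes the bottleneck almost immediately.)

```python
# Ballpark (assumed figures, for illustration only): how much 10GbE
# bandwidth it takes to keep up with NVMe drives reading sequentially.

NVME_GB_S = 3.0        # ~GB/s per PCIe 3.0 x4 NVMe drive (assumption)
LINK_GB_S = 10.0 / 8   # 10GbE line rate in GB/s, before overhead

for drives in (1, 2, 4):
    ratio = drives * NVME_GB_S / LINK_GB_S
    print(f"{drives} NVMe drive(s) ~ {ratio:.1f}x a 10GbE link's raw bandwidth")
```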

Back to the original question though: you can pass PCIe slots through to VMs running on ESXi. Your board should support hot-swap though.