ZFS AIO

vjeko

Member
Sep 3, 2015
60
1
8
61
My "dream" was to use the AIO for storage and running VMs from cellar and have monitor in my room (home/learning
setup). Due to inexperience and lack of understanding of ZFS/ESXI and other requirements I purchased hardware which is limited:

- WYSE P45 PCoIP thin client to have in my room
- Lenovo TS140, 32GB memory, LSI2008, 2 x 1TB HDD (mirror) for storage,
  1 x 120GB SSD for ESXi & the ZFS storage VM (e.g. OmniOS),
  2 x 120GB DC S3500 SSD for VMs (mirror)

The TS140 has a 450W power supply (-12V & +5.08V: 17.64W max).
I wanted to add an SLOG (I have 2 x S3710 200GB), but that would most probably go
over the 17.64W (S3710: 5V x 0.8A = 4W), and with the LSI2008 in the PC there is
no more space for a multi-monitor graphics card.

I am rethinking the whole idea. I need 2-3 monitors, so either the AIO sits in my room
and gets switched on daily as a workstation (if a daily power cycle isn't too
detrimental), or I need a separate workstation in the room and the AIO in the cellar.

I run all PCs on UPSs and see occasional glitches
in power, but I think the UPS/battery is good enough to cover power-loss situations.

I would appreciate some thoughts on whether I could make the TS140 work
with fewer drives, or whether it is better to sell it and save up for a better solution - and what
would that be? I have had a look at "napp-it_build_examples.pdf".
 

gea

Well-Known Member
Dec 31, 2010
2,809
970
113
DE
- 2 x 1TB HDs are probably too old, slow and quite small. I would consider two newer HDs.
- 2 x DC S3500 for VMs are probably not the fastest solution.
I would skip the idea of an Slog and simply use the two DC S3710 for VMs. Just enable secure sync write.
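
For reference, enabling secure sync write at the ZFS level is a one-liner. A minimal sketch, assuming the VM filesystem is called tank/vm (pool and filesystem names are placeholders):

  # commit every write on the VM filesystem synchronously (secure sync write)
  zfs set sync=always tank/vm
  # check the current setting
  zfs get sync tank/vm

With the VMs on the DC S3710 mirror, the ZIL lives on those SSDs themselves, so sync writes stay fast without a separate Slog device.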
 
  • Like
Reactions: T_Minus and vjeko

vjeko

Member
Sep 3, 2015
60
1
8
61
OK, so I'll keep it for now. I'm not in a financial situation to buy anything right now.
So, I'll put the server in the cellar with:
- In the PC: 1 x 120GB DC S3500 for ESXi & ZFS (I wanted to put ESXi on a stick but am not sure how
to handle the log file - see the sketch below - and I need the SSD for ZFS anyway)
- Connected to the LSI2008 HBA: 2 x 200GB DC S3710 SSD for VMs, 2 x 1TB HDD for storage

and for 2-3 monitors, I will buy a separate workstation later.
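
On the log file question, my understanding is that ESXi can redirect its persistent logs to a datastore when booting from a stick. A rough sketch from the ESXi shell, assuming a datastore named datastore1 (just a placeholder):

  # point the ESXi syslog directory at a persistent datastore path
  esxcli system syslog config set --logdir=/vmfs/volumes/datastore1/esxi-logs
  # reload the syslog service so the new path takes effect
  esxcli system syslog reload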

The HDDs are to be replaced with larger ones later - but what is the maximum capacity of disk that one can
connect to the HBA? I can't see any mention on the IBM M1015 web page.
The Lenovo TS140 specs indicate:
4 x 3.5" SATA 7.2K 6Gb enterprise SATA drives (500GB/1TB/2TB/3TB/4TB),
16TB maximum capacity
 
Last edited:

gea

Well-Known Member
Dec 31, 2010
2,809
970
113
DE
You can connect any current disk to the LSI 2008; it only becomes quite slow with several SSDs.
You must also use it in pass-through mode, as current ESXi no longer supports the LSI 2008.
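
As a quick check that pass-through is working, the disks behind the HBA should be visible directly inside the storage VM. A minimal sketch from the OmniOS shell:

  # list every disk OmniOS can see; the drives on the passed-through LSI 2008 should appear here
  echo | format
  # on a recent OmniOS, diskinfo gives a more compact overview
  diskinfo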
 
  • Like
Reactions: vjeko

vjeko

Member
Sep 3, 2015
60
1
8
61
You can connect any current disk to the LSI 2008; it only becomes quite slow with several SSDs.
You must also use it in pass-through mode, as current ESXi no longer supports the LSI 2008.
As far as I understand, the present installation of OmniOS 151024 (on ESXi 6.5) uses the LSI2008 via passthrough
- what components would be in a more professional system, one which does not need passthrough of the HBA?

I noticed a "note" in ESXI under OmniOS VM->Edit settings:
" Some virtual machine operations are unavailable when PCI/PCIe passthrough devices are present. You cannot suspend, migrate with vMotion, or take or restore snapshots of such virtual machines. " What does "cannot take or restore snapshots of such virtual machines" mean for
the OmniOS /backup ?

I'm trying to identify the smartest thing to do with the TS140, e.g. sell it, as it seems to be accumulating quite a few limitations with age
(the LSI2008 if used as an AIO, TPM 1.2 if used as a Windows workstation) - any thoughts?
 

AveryFreeman

consummate homelabber
Mar 17, 2017
356
44
28
41
Near Seattle
averyfreeman.com
As someone who's done some similar setups, I've gotta say it's 100% easier to maintain a USB flash drive for booting ESXi only, instead of trying to share it with a VM datastore. ESXi runs exclusively from memory, so keeping it on faster media can only help boot time, not operational performance. But when you go to wipe or resize a datastore, it'll become immediately apparent why it's an infinitely better idea to keep them separate all the way around.
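
If you do boot from USB, the one extra step worth doing is giving ESXi a persistent scratch location on a datastore so logs and dumps survive reboots. A rough sketch from the ESXi shell (the datastore path is a placeholder):

  # create a scratch directory on a persistent datastore
  mkdir /vmfs/volumes/datastore1/.locker-esxi
  # point ESXi at it; takes effect after a reboot
  vim-cmd hostsvc/advopt/update ScratchConfig.ConfiguredScratchLocation string /vmfs/volumes/datastore1/.locker-esxi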

BTW I'm a huge fan of DC S3500s (I think I have like 8 of them), but NVMe is so much faster than SATA/SAS, and very cost effective. You can grab a 4x M.2 NVMe carrier card for a x16 slot; for ZFS it'd probably be better to get one that doesn't have a PCIe (PLX) switch built in, as long as your motherboard supports 4x/4x/4x/4x bifurcation on that slot (it'll be a BIOS setting). Otherwise you'd need the PLX switch to see more than 1 NVMe.
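
Just to sketch what four NVMe drives on a card like that might look like as a ZFS pool - the device names here are hypothetical (illumos-style; on Linux they'd be /dev/nvme0n1 and so on):

  # two mirrored pairs striped together for a fast VM pool
  zpool create vmpool mirror c2t1d0 c2t2d0 mirror c2t3d0 c2t4d0
  zpool status vmpool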

Put 4 Samsung PM981s in a card like that if you want a good cost/performance ratio, or some 22110-size enterprise Seagate, Hynix or Micron NVMes. The Samsung DCTs are kinda meh; I'm most fond of the Microns, with Hynix second, but read some reports before buying anything.

I've been happy with Broadwell-gen E5s (v4 / X99 / C612 or newer). From experience, E3 + Ivy Bridge is too old; Ivy might be OK for E5 but I'm not sure (X79 chipset? ollld). If you're going for something new but cost-effective, Ryzen AM4 chipsets are really inexpensive and great for doing passthrough, or get a Threadripper if you can afford it. They're about to release an entirely new socket, so it could be worth waiting.

If you have the money, do the Epycs or a new Intel, whatever they are now lol. I am too broke for that stuff, but man, the Epycs have like 128 PCIe 4.0 lanes o. m. f. g.

There are a couple of inexpensive but decent-looking 4x NVMe to PCIe slot converters, plus the original granddaddy Dell version. HighPoint had one with the PLX switch and RAID, but it's so expensive you'd probably be better off putting the money toward a motherboard with bifurcation (HighPoint is garbage anyway):

Nice PCIe 4.0 ~2TB enterprise M.2: Micron 7400 PRO M.2 22110 1.92TB PCI Express 4.0 (NVMe) Enterprise Solid State Drive - Newegg.com
I didn't check whether it's QLC or not; if so, my bad - it's just meant to be an example.
 
  • Like
Reactions: vjeko