Home Setup - Design changes


Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
Power outage is not the only cause of unexpected reboots ...
In the end it depends on how important the VMs are, how recent your backup is, and how valuable your time is :)
 

marcoi

Well-Known Member
Apr 6, 2013
1,532
288
83
Gotha Florida
I'm having a hard time keeping the drives in mirrored vdev pairs when a SLOG drive kills all the write IOPS:
400-600 MB/s writes (3K IOPS) with the SLOG vs. 3200 MB/s and 28K IOPS without one.

I'm thinking of keeping the SLOGs for my other pools and letting this one run with sync disabled.
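For reference, the toggle itself is per dataset; a minimal sketch, assuming a hypothetical tank/vms dataset:
Code:
# Check the current sync policy:
zfs get sync tank/vms

# Bypass sync writes entirely (the SLOG/ZIL is skipped; up to a few
# seconds of acknowledged writes can be lost on a crash):
zfs set sync=disabled tank/vms

# Revert to honoring sync requests later:
zfs set sync=standard tank/vms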
 

marcoi

Well-Known Member
Apr 6, 2013
1,532
288
83
Gotha Florida
I'm thinking of upgrading my network from 10Gb ConnectX-2 cards to 40Gb IB direct connect. I may also upgrade to ConnectX-3 cards to get 56Gb IB.

The hosts are running ESXi. Currently the servers have single-port ConnectX-2 cards connected (10Gb Ethernet) to a Dell switch with two 10Gb SFP+ connections.

What do I need to convert over to IB 40/56 direct connect? I'm assuming a new cable? Any issues with doing this under ESXi I should be aware of?
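From what I've read, ConnectX-3 VPI cards let you flip the port type between Ethernet and IB in firmware; a rough, untested sketch with Mellanox Firmware Tools on a Linux box (the /dev/mst path is just an example):
Code:
mst start
mst status                      # find your device path

# Query the current port configuration:
mlxconfig -d /dev/mst/mt4099_pciconf0 query

# Set port 1 to InfiniBand (1 = IB, 2 = ETH), then reboot:
mlxconfig -d /dev/mst/mt4099_pciconf0 set LINK_TYPE_P1=1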
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
Newer ESXi is unfortunately not really compatible with IB.
If you do run IB, you need a subnet manager somewhere...
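E.g. opensm on any Linux box attached to the fabric would do; a rough sketch (package names assume Debian/Ubuntu):
Code:
apt install opensm infiniband-diags
systemctl enable --now opensm

# Verify the links come up as Active once the SM is running:
ibstat
ibhosts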
 

marcoi

Well-Known Member
Apr 6, 2013
1,532
288
83
Gotha Florida
Well, I moved all my VMs back to local storage. I've been getting the following messages in FreeNAS, and the VMs seem to hang or slow down a lot. So I'm not sure if it's a network issue or an iSCSI issue. I was running with sync=always just to be on the safe side; not sure if that's adding issues.
[screenshot: upload_2019-1-15_22-35-5.png]
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
That looks like a PCI address - what is located at it? NVMe or SSD (or NIC)?
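On FreeBSD/FreeNAS you can map a PCI address to a device with pciconf; a quick sketch (the address below is just an example):
Code:
# List every PCI device with vendor/device strings, then pick out
# the address from the log message (example address):
pciconf -lv | grep -B 1 -A 4 "pci0:3:0:0"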
 

marcoi

Well-Known Member
Apr 6, 2013
1,532
288
83
Gotha Florida
I'm not sure, but I'll try to determine the PCI device. FreeNAS is a VM and it has two PCIe devices passed through: the two HBA cards.

I also see an issue with NVMe in the logs, but those are virtual HDDs passed in from ESXi. They are from a datastore created on the Optane 900p.
Code:
nvme0: aborting outstanding i/o
nvme0: WRITE sqid:1 cid:83 nsid:1 lba:3477496 len:16
nvme0: ABORTED - BY REQUEST (00/07) sqid:1 cid:83 cdw0:0
nvme0: resubmitting queued i/o
nvme0: WRITE sqid:1 cid:0 nsid:1 lba:98224 len:8
ctl_datamove: tag 0x246e1cc on (0:3:0) aborted
ctl_datamove: tag 0x246e1cd on (0:3:0) aborted
ctl_datamove: tag 0x246e1cf on (0:3:0) aborted
ctl_datamove: tag 0x246e1d0 on (0:3:0) aborted
ctl_datamove: tag 0x246e1ce on (0:3:0) aborted
ctl_datamove: tag 0x246e1d1 on (0:3:0) aborted
ctl_datamove: tag 0x246e1d3 on (0:3:0) aborted
ctl_datamove: tag 0x246e1d4 on (0:3:0) aborted
ctl_datamove: tag 0x2c0921 on (1:3:0) aborted
Looks like these two devices:
Code:
vmx0@pci0:3:0:0:   class=0x020000 card=0x07b015ad chip=0x07b015ad rev=0x01 hdr=0x00
    vendor   = 'VMware'
    device   = 'VMXNET3 Ethernet Controller'
    class    = network
    subclass = ethernet
nvme0@pci0:4:0:0:  class=0x010802 card=0x07f015ad chip=0x07f015ad rev=0x00 hdr=0x00
    vendor   = 'VMware'
    class    = mass storage
    subclass = NVM
 

marcoi

Well-Known Member
Apr 6, 2013
1,532
288
83
Gotha Florida
Yes, the ESXi NVMe controller is not working properly in FreeNAS atm.
Okay, I removed the NVMe device from the VM hardware configuration, added a new LSI SCSI virtual device, and attached the 4x 30 GB Optane disks to that.

Going to slowly add my VMs back to see if I get more errors with networking now that NVMe has been removed.
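For anyone doing the same, the resulting controller entries in the .vmx look roughly like this - a sketch only, with made-up disk file names ("lsilogic" is the LSI Logic Parallel type; "lsisas1068" would be LSI Logic SAS):
Code:
scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1:0.present = "TRUE"
scsi1:0.fileName = "optane-slog-1.vmdk"
scsi1:1.present = "TRUE"
scsi1:1.fileName = "optane-slog-2.vmdk"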
 

marcoi

Well-Known Member
Apr 6, 2013
1,532
288
83
Gotha Florida
Made some improvements to my FreeNAS server.
Changed the guest OS type at the VM level to FreeBSD 11 vs. pre-11.
[screenshot: upload_2019-1-16_16-47-10.png]

Changed the NVMe controller to LSI as mentioned above.

Added another VMXNET3 NIC to the VM to separate the data storage pool traffic from the VM storage pool traffic.

So far it's looking better.
[screenshot: upload_2019-1-16_16-48-38.png]

da1 - data pool with RAIDZ1 3x 8TB drives and one 30 GB Optane SLOG (da1) - doing around 150 MB/s
da4 - VM pool with mirrored vdevs of 8x 800GB SAS SSDs and one 30 GB Optane SLOG (da4) - doing around 700 MB/s
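For reference, the equivalent zpool layouts would be created roughly like this (device names below are made up):
Code:
# Data pool: 3x 8TB in RAIDZ1 plus a 30 GB Optane vdisk as SLOG
zpool create data raidz1 da5 da6 da7 log da1

# VM pool: four mirrored pairs of 800GB SAS SSDs plus an Optane SLOG
zpool create vms mirror da8 da9 mirror da10 da11 \
    mirror da12 da13 mirror da14 da15 log da4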
 

marcoi

Well-Known Member
Apr 6, 2013
1,532
288
83
Gotha Florida
I was thinking: does it make sense to split FreeNAS into two VMs, one for data/backup storage and the other for VM storage? I was thinking about the ARC hit ratio. Currently I have one FreeNAS VM with 64 GB RAM sharing out both data/backup and VM storage. I was thinking of making FreeNAS-Data with 40 GB RAM and FreeNAS-VMs with 40 GB RAM. I could technically go up to 64 GB RAM for each, but that takes away a lot of the RAM I have free.

Additionally, I was wondering: if I upgrade the RAM from 1333 to 1866 (going from 8 GB sticks to 16 GB sticks as well), would I get faster read results in FreeNAS when it's serving from ARC?
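For the ARC hit ratio itself, a quick sketch of how to check it from the FreeNAS shell:
Code:
# Raw hit/miss counters:
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses

# Or the bundled summary script (FreeNAS 11 ships arc_summary.py):
arc_summary.py | grep -A 5 "ARC Efficiency"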
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
That depends on your usage scenario; if you use both about equally, then it might make sense.
Of course you have a small overhead (disk, RAM), but if you don't have too many shared components, why not.

On the other hand, your memory... normally I'd say you won't see much of a difference, but from 1333 to 1866 is indeed a larger jump.
Not that you will see that much of it on the wire, but it might yield a few percent - maybe 5% (uneducated guess). Whether that's worth the spend... depends on how much, I guess.
 

marcoi

Well-Known Member
Apr 6, 2013
1,532
288
83
Gotha Florida
Maybe a side project down the road. For now it seems FreeNAS is finally settling in with the new SSD pool.

I haven't decided on the RAM upgrade. If I can get 16 GB DDR3-1866 for $20-25 a stick, I might get 24 to upgrade the Dell R720. I'm not sure if I want to invest in E5-2600 v2 generation equipment or start working on upgrading to E5-2600 v4 / Intel Gold generation.

I have too much hardware as is and just need to get to a middle ground with the setup before deciding on selling/upgrading. Plus I need to make sure I'm not upgrading just to play with new stuff lol.
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
One additional benefit is that you can run replication for both pools at the same time.
I have seen there is a larger replication update coming (finally!) in 11.3, but I'm not sure whether that limitation is also going to be remediated...
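i.e. instead of the two streams queuing behind one another, each VM pushes its own. A rough sketch with made-up snapshot/host names:
Code:
# Both replication streams run concurrently:
zfs send -R data@auto-20190116 | ssh backuphost zfs recv -F backup/data &
zfs send -R vms@auto-20190116 | ssh backuphost zfs recv -F backup/vms &
wait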