TrueNAS, Proxmox, and a recursion issue! Maybe??


Quartzeye

New Member
Jul 29, 2013
So here is my setup:

I have a single Proxmox host running v7.2-4. On it I have a single VM running TrueNAS SCALE v22.02.2.

The VM has a dedicated NIC passed in via PCI passthrough, and all drives are passed in via iSCSI; there is no config on the host for that NIC or its ports. The host has a separate NIC configured with a bridge. The TrueNAS VM does not use the host's NIC or drives.

TrueNAS is set up properly: pools, datasets, users, and shares are all good. I have a single share to a dataset, and the export file on TrueNAS shows rw and no_root_squash.
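For reference, the export line TrueNAS generates looks roughly like this (the pool/dataset path and subnet here are illustrative placeholders, not my actual values):

```
# /etc/exports on the TrueNAS VM (illustrative path and subnet)
/mnt/tank/ds_proxmox  192.168.1.0/24(rw,no_root_squash,sync)
```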

I created a vanilla Debian 11 VM on the same host and installed the nfs-common package. As root in this VM, I was able to mount the share for the ds_proxmox dataset. This VM runs on the same host as the TrueNAS VM, but uses the only bridge configured on the host, which is shared between the host and all VMs except the TrueNAS VM. I have full access to the mounted share and can cp, write, and delete to my heart's content.
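The test from the Debian VM was basically this (the TrueNAS IP and dataset path are illustrative placeholders):

```
# On the Debian 11 VM, as root (illustrative IP/path)
apt install nfs-common
mkdir -p /mnt/test
mount -t nfs 192.168.1.50:/mnt/tank/ds_proxmox /mnt/test
touch /mnt/test/hello        # works
echo works > /mnt/test/hello # works
rm /mnt/test/hello           # works
```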

However, on the Proxmox host itself, from the command line or the GUI, I can mount the same ds_proxmox share from the TrueNAS VM. When using the GUI, it creates all the directories and sub-directories. At the command line, I can create a file on the mounted share with touch. But if I try to vi that file, it hangs the entire mount point. In the GUI, if I try to upload an ISO, it uploads to the host's tmp directory and then hangs while attempting to cp the file to the directory on the mounted share. It does create a 0-byte file, but that is it.
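In case it helps, this is essentially what I'm doing on the host (same illustrative IP/path as above):

```
# On the Proxmox host (illustrative IP/path)
mkdir -p /mnt/pve-test
mount -t nfs 192.168.1.50:/mnt/tank/ds_proxmox /mnt/pve-test
touch /mnt/pve-test/foo   # works, creates the file
vi /mnt/pve-test/foo      # hangs, and takes the whole mount point with it
```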

Why do I have full access to the share from VMs on the same host as the TrueNAS VM, but cannot fully access it from the host itself? It shouldn't be a routing issue: the NIC in the TrueNAS VM is removed from the host OS via IOMMU/PCI passthrough, and the host and the other VMs share a different physical NIC and bridge. The client VMs on the host, except for the TrueNAS VM, share the same bridge as the host.

I get that NFS could get confused if everything were on the same bridge, or even on multiple bridges in a multi-homed host, but not with a separate physical NIC passed into the TrueNAS VM via PCI passthrough. So how does someone mount an NFS share from inside an NFS-server VM back onto the file system of the host that VM is running on? I can mount the share locally and then add it to Proxmox as a Directory, and I can mount it directly as an NFS share, but either way I cannot actually do anything with the mounted share on the host.
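Concretely, the two ways I tried registering it with Proxmox were roughly these (storage names, IP, and paths are illustrative):

```
# Option 1: mount locally, then register as Directory storage
mount -t nfs 192.168.1.50:/mnt/tank/ds_proxmox /mnt/truenas
pvesm add dir truenas-dir --path /mnt/truenas --content iso,backup

# Option 2: register it directly as NFS storage
pvesm add nfs truenas-nfs --server 192.168.1.50 \
    --export /mnt/tank/ds_proxmox --content iso,backup
```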

The only thing I know works is mapping a host directory into an LXC container running an NFS server. Then I can mount that share from the LXC container back to the host as shared storage across the cluster. I have that working, but it is no substitute for something like TrueNAS.

My goal is simply to set up a TrueNAS server on each of my (3) servers, create shares for the majority of the storage passed into the TrueNAS servers, then mount those shares on each server host and on the VMs running on those hosts as necessary. I can do that with the VMs no problem; I just cannot get the hosts to properly mount and access the TrueNAS shares. The whole idea is to abstract the storage away from each host itself and use all the shared storage across the (3) servers: three big data pools for every server to use.
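On each host, the end state I'm after would be something like this fstab entry (IPs and paths illustrative), one per TrueNAS VM:

```
# /etc/fstab on each Proxmox host (illustrative)
192.168.1.50:/mnt/tank/ds_proxmox  /mnt/truenas  nfs  defaults,_netdev  0  0
```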

Any insight on what is going on, or on how best to implement my storage, would be appreciated.