Again.. upgrading to a newer version of napp-it is not the answer. Later versions of OpenZFS, regardless of platform, seem to have many more issues than what I am currently running.
I think it's the napp-it GUI that has done something to the ACL/permissions of this file system.. it's the only...
Yeah, that is all well and good, but you still can't explain why my other datasets work fine in the same pool. So it is not:
- the napp-it version
- the macOS version
- the SMB configuration on server or client
It's a permission issue or corruption on that dataset.
I have tried everything.
Sometimes the first file...
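If it really is dataset-level ACL corruption, it can be inspected and reset directly from the OmniOS/illumos shell. A sketch only; `/tank/media` is a placeholder for the problem share's mountpoint, and resetting ACLs this way discards any deliberate ACEs, so check the output of the first command before running the second:

```
# Show the full NFSv4 ACL on the share root (illumos ls supports -V)
/usr/bin/ls -Vd /tank/media

# Remove all non-trivial ACEs recursively, falling back to plain mode bits
/usr/bin/chmod -R A- /tank/media
```

Afterward the dataset is left with trivial ACLs derived from the permission bits, which is usually the cleanest baseline for an SMB-only share.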
Moving to OmniOS r151046 would require me to set up an all-new VM; my current napp-it is too old to update.
I may have to do that, but if the corruption is in the ACL/sharing that resides on the pool/filesystem, it won't fix anything: importing the pool will just bring back the same corruption.
Spoke too soon.. I dragged two files into the share; the first one copied quickly and finished, then it hung again on the second..
Something is definitely jacked.. I don't create new filesystems often, I just use my existing pools and filesystems.. so this may be a regression in the napp-it GUI 21.06a10.
To me it looks like the permissions on that particular file system are all jacked up... either the GUI did something unexpected, or something else did.
I created a new test file system, shared it, mounted the SMB share, connected to it from my Mac, created a folder and subfolder using Finder, and copied...
I felt like the release info was not important, as all the other file systems on the same server are working, across multiple pools.
aclinherit and aclmode are both passthrough; I tried discard, no change, then set them back to passthrough like the other file systems.
nbmand is off, as it's only shared via SMB; guest and that...
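For reference, the property comparison described above can be done from the shell. A sketch; `tank/media` stands in for the problem dataset and `tank/other` for one of the working ones:

```
# Compare ACL-related properties between the broken and a working dataset
zfs get aclinherit,aclmode,nbmand,sharesmb tank/media
zfs get aclinherit,aclmode,nbmand,sharesmb tank/other

# Set the broken dataset to match the working ones
zfs set aclinherit=passthrough tank/media
zfs set aclmode=passthrough tank/media
zfs set nbmand=off tank/media
```

If the properties already match and the hang persists, that points at the on-disk ACEs or the SMB share configuration rather than these dataset properties.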
Another weird issue with napp-it and mac
I have several ZFS file systems in an ESXi all-in-one, some for media shares.. all have been working well.
I created a new folder in one of the SMB shares, and now I can't seem to SMB in from my Mac and move files over to that file system.
I can still transfer...
Back when Oracle closed off the open source, people started realizing it's ALL ABOUT THE DATA...
ZFS was on a winning path, and with open source helping move it forward in both features and testing, it was well on its way to becoming a possible new standard.
Now that OpenZFS and Oracle ZFS have hard-forked...
I have yet to see a compelling reason to abandon my dual L5640 setup: 12 cores / 24 threads, 48 GB RAM.
With 5x8TB and 2x2TB spinning rust plus 2x240GB and 2x80GB SSD pools running under ESXi, this thing does great.
Sure, idle watts could be better, but with 8 VMs running: media server, security DVR...
@kawal
This worked great.. all other solutions didn't work, but this did.
One unusual observation:
Just like you, word 10 had to be changed to E30C so that the disable bit was corrected to 0,
and I had to roll my MAC address back by 1.
What was weird is that while physically NIC2 was the right NIC, still...
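"Rolling the MAC back by 1" means treating the address as a 48-bit number and subtracting one. A minimal sketch of that arithmetic; the helper name and the example address are illustrative, not from the post:

```python
def decrement_mac(mac: str) -> str:
    """Parse a colon-separated MAC, subtract 1 from its 48-bit value,
    and format it back as lowercase colon-separated hex."""
    value = int(mac.replace(":", ""), 16)
    value = (value - 1) % (1 << 48)  # wrap around below 00:00:00:00:00:00
    hex_str = f"{value:012x}"
    return ":".join(hex_str[i:i + 2] for i in range(0, 12, 2))

print(decrement_mac("00:1b:21:aa:bb:01"))  # -> 00:1b:21:aa:bb:00
```

Note the wrap-around: decrementing past all-zeros lands on ff:ff:ff:ff:ff:ff, which is the broadcast address, so in practice you would never start from 00:00:00:00:00:00.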
I have been running a napp-it all-in-one on ESXi for a couple of years now, replacing an old ZFS server that I had running on macOS Sierra.
For the most part it's been a pretty good fit, however not without its issues....
I just had to destroy a 24TB media pool and rebuild it, because napp-it /...
So then how did napp-it default to adding this drive by partition? I just don't get it...
I have normally used ZFS entirely from the command line... so I know exactly what I am doing and how I am doing it...
What I would like to see with napp-it.. either by default or as an optional setting ...
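For context, the whole-disk vs. partition distinction above looks like this from the command line. A sketch only; the pool name and illumos device names are placeholders:

```
# Adding by whole disk: ZFS labels and manages the entire device
zpool add tank c2t3d0

# Adding by a slice/partition instead ties the vdev to that slice
zpool add tank c2t3d0s0

# zpool status shows which form each vdev was added as
zpool status tank
```

Once a vdev has been added by slice, there is no in-place way to convert it to whole-disk; the usual fixes are `zpool replace` onto the whole-disk name or rebuilding the pool, which is presumably why the rebuild was necessary.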
The question I asked was simply: has anyone tested the latest ESXi 6.5 and/or 6.7 builds on Westmere-EX to see if power optimizations were made... period.
@Patrick
And while I am at it... Pat, I remember when ServeTheHome was actually about HOME users and reviewed and discussed setups for HOME and SMALL BUSINESS.. now you're just a shill for enterprise.. not a single article on the home page in years that actually speaks to the title of the...
@Patrick @BlueFox
OK, Fox and Patrick...
Go ahead and find me a CPU/mobo combo that will provide equal or better for running multiple low-latency workloads... single threads must have a Passmark score equal to or greater than the L5640's, as these single threads must be able to provide 1080p...
Thanks Gea...
Buying new drives to make a third pool to replicate to, I can't afford right now.. and I don't hear you advocating destroying the current pool and restoring it from the single backup.. so I guess you are advocating the disk-replacement method... the question is why napp-it, when this...