RDM with napp-it or FreeNAS


gea

Well-Known Member
Dec 31, 2010
3,161
1,195
113
DE
FreeNAS seems to have a horrible problem with RDMs.

Is this not the case with OmniOS+nappit?

Disks Not Configured in FreeNAS 9.1 Release | FreeNAS Community
I have never used RDM. It is more a home option on hardware without pass-through capability.
But I have not heard of horrible problems on Solaris with full disk RDM and I would accept it as a usable option.

What I would check is if the pool is importable/exportable to a barebone config and if smartmontools are working (should be the case). more: http://forums.servethehome.com/sola...na-napp/2560-not-able-enable-passthrough.html
 

bmacklin

Member
Dec 10, 2013
96
5
8
gea said:
I have never used RDM. It is more a home option on hardware without pass-through capability.
But I have not heard of horrible problems on Solaris with full disk RDM and I would accept it as a usable option.

What I would check is if the pool is importable/exportable to a barebone config and if smartmontools are working (should be the case). more: http://forums.servethehome.com/sola...na-napp/2560-not-able-enable-passthrough.html
Thanks. I was able to get RDM working for one drive, but I am concerned that I will not be able to use all of the space on my disks with raidz. I have a partition on an SSD for L2ARC, 2x 1 TB drives, 1x 1.5 TB drive, and a 4 TB drive. As I understand it, the best a raidz can do is take 1 TB from each of the 1.5 TB and 4 TB disks, leaving 0.5 TB and 3 TB unallocated. Is this correct? If so, could I make another zpool, perhaps striped, from the left-over parts?
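The capacity reasoning above can be checked with a quick back-of-the-envelope calculation. This is a sketch only, using marketing TB and ignoring metadata/ashift overhead; it assumes the usual raidz rule that every member is truncated to the smallest disk and one disk's worth of space goes to parity:

```shell
# Rough raidz1 capacity math for the mixed-size pool from the post.
# Each member contributes only min(disk sizes); one disk's worth is parity.
echo "1.0 1.0 1.5 4.0" | awk '{
    min = $1
    for (i = 1; i <= NF; i++) if ($i < min) min = $i
    usable = min * (NF - 1)                  # raidz1: one disk of parity
    wasted = 0
    for (i = 1; i <= NF; i++) wasted += $i - min
    printf "usable=%.1fTB wasted=%.1fTB\n", usable, wasted
}'
# prints: usable=3.0TB wasted=3.5TB
```

So yes: about 3 TB usable, with 0.5 TB + 3 TB (3.5 TB total) stranded on the larger disks.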

I was also confused about using napp-it; I could not figure out how to enable an iSCSI target. Does anybody have a guide on how to set up napp-it for iSCSI so that ESXi sees the space as storage?

I also tried Windows Server 2012 R2. It is more user friendly, but I was able to confirm the reports of horrible Storage Spaces performance: I got roughly 20 MB/s reads and writes with ATTO on a RAID volume in Storage Spaces.

But I also suspect the 2x 1 TB drives may be going bad, which could explain the horrible performance. How can I tell for sure?

-B
 

gea

1. You can partition disks and build pools from the partitions, but this is bad practice.
Doing it on top of RDM is even worse - I would never do this.

What you can do is build mirrors or a raid-z now, with the intention of replacing the smaller disks asap.
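The replace-the-small-disks-later plan can be sketched as below. Device and pool names are hypothetical placeholders; the point is that with autoexpand on, the raidz grows on its own once every member has been swapped for a bigger disk:

```shell
# Sketch (hypothetical device names): build the raidz now, swap small
# members for larger disks later.

# Capacity is limited by the smallest member until all disks are replaced
zpool create tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0

# Let the pool grow automatically once every member is large enough
zpool set autoexpand=on tank

# Later: replace a 1 TB member with a bigger disk and wait for the resilver
zpool replace tank c2t0d0 c2t4d0
zpool status tank   # wait until resilver completes before the next swap
```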

2. In all-in-one configs, always use NFS for ESXi datastores - there is no auto reconnect with iSCSI.

iSCSI itself is quite easy to set up:

Menu services
- enable Comstar services

Menu Comstar
- create a LU (logical unit)
- create a target
- create a target group with the target as member
- set a view from your LU to your target group to make the LUN visible
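For reference, the napp-it menu steps above map roughly onto the underlying illumos COMSTAR commands. This is a sketch only - pool, zvol, and group names are placeholders, and the actual IQN and LU GUID come from the itadm/stmfadm output:

```shell
# Rough CLI equivalent of the menu steps on OmniOS (run as root;
# tank/esxi_lun and esxi-tg are placeholder names).

# Enable the COMSTAR framework and the iSCSI target service
svcadm enable stmf
svcadm enable -r svc:/network/iscsi/target:default

# Create a zvol and a logical unit (LU) on top of it
zfs create -V 100G tank/esxi_lun
stmfadm create-lu /dev/zvol/rdsk/tank/esxi_lun

# Create a target (an IQN is generated automatically)
itadm create-target

# Create a target group with the target as member
# (the target must be offline while it is added to the group)
stmfadm create-tg esxi-tg
stmfadm offline-target <target-iqn>
stmfadm add-tg-member -g esxi-tg <target-iqn>
stmfadm online-target <target-iqn>

# Set a view from the LU to the target group to make the LUN visible
stmfadm add-view -t esxi-tg <lu-guid>
```

After that, add the target's IP as a dynamic discovery address in the ESXi software iSCSI adapter and rescan.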

3. You need some know-how for any server OS.
Windows Storage Spaces is not comparable to ZFS pools regarding data security and features.
Windows ReFS adopts some ideas from ZFS but is known to be slow and is not at the same technology level.

4. Build a pool from each single disk and run benchmarks to isolate a bad drive.
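One way to follow that advice, sketched below with hypothetical device names. smartctl flags can vary by controller on illumos (a `-d` device-type option is sometimes needed), and the dd numbers are only a rough sequential test, not a substitute for a real benchmark:

```shell
# Sketch (hypothetical disk names): test each suspect disk on its own.

# Check SMART health and error counters first
smartctl -a /dev/rdsk/c2t0d0
smartctl -t long /dev/rdsk/c2t0d0   # offline self-test; check results later

# Make a throwaway single-disk pool and do a rough sequential benchmark
zpool create -f testpool c2t0d0
dd if=/dev/zero of=/testpool/bench.bin bs=1M count=4096   # sequential write
dd if=/testpool/bench.bin of=/dev/null bs=1M              # sequential read
zpool destroy testpool
```

A drive that is much slower than its twin, or that shows reallocated/pending sectors in the SMART output, is the likely culprit.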