iSCSI on an all-in-one


turquoisewords

New Member
Jan 20, 2012
Boulder Colorado
Hi,
I wanted to post this on _Gea's thread OpenSolaris derived ZFS NAS/ SAN (Nexenta*, OpenIndiana, Solaris Express) on hardforum, but it wouldn't allow me to create new posts .... so here I am instead :=)

I recall that in one of his posts (but can't seem to find it now) _Gea suggested that you should not use iSCSI in an all-in-one. I was wondering why. Or perhaps I misunderstood what he was saying ... My planned usage model is the following:
ESXi 5.0 -- on suitable server h/w with passthrough SAS controller
OpenIndiana 151a -- for virtual SAN
Oracle [Enterprise] Linux (variant of RHEL) -- for Oracle server. I had been planning on using iSCSI connections from this VM to the OpenIndiana SAN, but after reading _Gea's advice I now question that approach (and suppose I would use NFS instead).
Also a few other VM's that would be running Oracle client software.

I eagerly await hearing the justification for why you shouldn't use iSCSI on an all-in-one.

Thanks in advance,
--peter
 

gea

Well-Known Member
Dec 31, 2010
DE
Hi,
I wanted to post this on _Gea's thread OpenSolaris derived ZFS NAS/ SAN (Nexenta*, OpenIndiana, Solaris Express) on hardforum, but it wouldn't allow me to create new posts .... so here I am instead :=)

I recall that in one of his posts (but can't seem to find it now) _Gea suggested that you should not use iSCSI in an all-in-one. I was wondering why. Or perhaps I misunderstood what he was saying ... My planned usage model is the following:
ESXi 5.0 -- on suitable server h/w with passthrough SAS controller
OpenIndiana 151a -- for virtual SAN
Oracle [Enterprise] Linux (variant of RHEL) -- for Oracle server. I had been planning on using iSCSI connections from this VM to the OpenIndiana SAN, but after reading _Gea's advice I now question that approach (and suppose I would use NFS instead).
Also a few other VM's that would be running Oracle client software.

I eagerly await hearing the justification for why you shouldn't use iSCSI on an all-in-one.

Thanks in advance,
--peter
Performance should be quite similar.

Problem:
When ESXi boots, it cannot find its shared storage yet (the storage only becomes available, with some delay, once your OI SAN VM is up).
With iSCSI you must reconnect the datastore manually, while an NFS datastore and the VMs on it come back up automatically.

Extra:
With NFS you have file-based access, so you can easily move/copy/clone/back up VM files or reach ZFS snapshots from Windows
via a parallel SMB-shared dataset. With iSCSI you must clone/recover/connect the whole disk/dataset.
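(For illustration only, not from the original post: roughly what that NFS-plus-SMB setup might look like. The pool name tank, dataset name vmstore and the IP 192.168.1.10 are made-up placeholders.)

Code:
# On the OpenIndiana storage VM: create a dataset for VM storage and share it
# over both NFS (for the ESXi datastore) and SMB (for file-level access from Windows).
zfs create tank/vmstore
zfs set sharenfs=on tank/vmstore
zfs set sharesmb=name=vmstore tank/vmstore

# On the ESXi host: mount the NFS export as a datastore. Because the export
# lives inside a VM, this only succeeds after the storage VM has finished booting.
esxcli storage nfs add --host=192.168.1.10 --share=/tank/vmstore --volume-name=vmstore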
 

turquoisewords

New Member
Jan 20, 2012
Boulder Colorado
Thanks for the quick reply, Gea! I think I understand better now. However, how about this setup:

The ESXi boot disk and the OI VM are on a separate boot drive -- not iSCSI, of course, and on a separate controller.
The iSCSI connections are made only from within VMs to the virtual SAN once it is up -- they are not ESXi datastores. I am thinking specifically of the Linux VM that will be an Oracle server. When it comes up, it connects to iSCSI targets on the OpenIndiana SAN. The SAS controller used by OI has been dedicated to the OI VM via passthrough, so that storage is not available to ESXi. Would this be OK?
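(A sketch of what that boot-time iSCSI connection could look like from inside the Linux VM, assuming the open-iscsi initiator; the target IQN and the SAN IP 192.168.1.10 are placeholders, not from the thread.)

Code:
# On the Oracle Linux VM: discover and log in to the targets on the OI SAN VM.
iscsiadm -m discovery -t sendtargets -p 192.168.1.10
iscsiadm -m node -T iqn.2010-09.org.example:oracle-lun0 -p 192.168.1.10 --login

# Make the session persistent so the VM reconnects on its own after a reboot,
# once the SAN VM is up again.
iscsiadm -m node -T iqn.2010-09.org.example:oracle-lun0 -p 192.168.1.10 \
    --op update -n node.startup -v automatic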

Thanks again for all your hard work with napp-it! I certainly appreciate it.

--peter
 

gea

Well-Known Member
Dec 31, 2010
DE
Thanks for the quick reply, Gea! I think I understand better now. However, how about this setup:

The ESXi boot disk and the OI VM are on a separate boot drive -- not iSCSI, of course, and on a separate controller.
The iSCSI connections are made only from within VMs to the virtual SAN once it is up -- they are not ESXi datastores. I am thinking specifically of the Linux VM that will be an Oracle server. When it comes up, it connects to iSCSI targets on the OpenIndiana SAN. The SAS controller used by OI has been dedicated to the OI VM via passthrough, so that storage is not available to ESXi. Would this be OK?

Thanks again for all your hard work with napp-it! I certainly appreciate it.

--peter


Perfect!
You can build HFS, NTFS or ext filesystems on top of ZFS pools and still keep ZFS features like snapshots, checksums and block-based incremental replication.
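(For example, a minimal sketch with made-up pool/volume names: carve out a zvol on the OI pool, export it over iSCSI with COMSTAR, let the guest put ext4 on it, and still snapshot/replicate it on the ZFS side.)

Code:
# On the OpenIndiana VM: create a block volume (zvol) and export it via COMSTAR.
zfs create -V 200G tank/oracle-lun0
svcadm enable -r stmf
svcadm enable -r svc:/network/iscsi/target:default
sbdadm create-lu /dev/zvol/rdsk/tank/oracle-lun0
stmfadm add-view <GUID-printed-by-sbdadm>
itadm create-target

# Inside the Linux guest the LUN appears as an ordinary disk, e.g.:
#   mkfs -t ext4 /dev/sdb        (an ext filesystem living on top of the zvol)

# ZFS features still apply underneath, e.g. snapshots and incremental replication:
zfs snapshot tank/oracle-lun0@snap1
zfs snapshot tank/oracle-lun0@snap2
zfs send -i tank/oracle-lun0@snap1 tank/oracle-lun0@snap2 | ssh backuphost zfs recv backup/oracle-lun0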
 

turquoisewords

New Member
Jan 20, 2012
Boulder Colorado
Perfect!
You can build HFS, NTFS or ext filesystems on top of ZFS pools and still keep ZFS features like snapshots, checksums and block-based incremental replication.
Great! Now I feel much more confident that my plan will work out well. I was initially taken aback by your remark not to use iSCSI in an all-in-one, but now I see you really meant "don't use iSCSI for ESXi datastores in an all-in-one". This is for a work project, not a personal one (although I plan to replace my multiple Linux and Windows boxes at home with an all-in-one when disk prices come back down). We will be using a hosted server -- and it took some convincing to get the provider to put together hardware that will work for this (particularly the separate SAS HBA).

I am assuming performance should be very good from the VMs that connect to the iSCSI targets, since they are all in the same all-in-one. In fact I would expect an Oracle server to get much better performance via iSCSI than via NFS, since it is so write intensive. Does this make sense?

--peter
 

gea

Well-Known Member
Dec 31, 2010
DE
..

I am assuming performance should be very good from the VMs that connect to the iSCSI targets, since they are all in the same all-in-one. In fact I would expect an Oracle server to get much better performance via iSCSI than via NFS, since it is so write intensive. Does this make sense?

--peter
If you activate VMCI between VMs and/or use the ESXi vmxnet3 driver (test vmxnet3 first; some people report problems),
you can get up to 10 Gb internal performance between VMs. iSCSI and NFS should perform similarly.

Use enough RAM for your storage VM (8 GB+, the more the better). If all of your traffic stays internal to your
all-in-one, consider deactivating the ZFS sync-write property (with sync enabled, every write must be committed to disk). Sync writes can slow down
performance a lot unless you use a fast SSD/DRAM write cache for them or SSD-only pools.

Use a RAID-10 config with fast disks and as many vdevs as possible for performance. Use hot-spare disks.
(I use SSD-only pools for some performance-hungry VMs.)
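(To make the last two points concrete: a sketch with made-up pool, dataset and disk names, not taken from the thread.)

Code:
# Disable synchronous writes on the dataset backing internal-only VMs
# (faster writes, at the cost of the last few seconds of data on power loss):
zfs set sync=disabled tank/vmstore

# A RAID-10 style pool: several mirrored vdevs striped together, plus a hot spare.
zpool create tank \
    mirror c2t0d0 c2t1d0 \
    mirror c2t2d0 c2t3d0 \
    mirror c2t4d0 c2t5d0 \
    spare  c2t6d0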
 