Seeking M1015 alternative for ESXi / ZFS NAS Box


Stokkes

New Member
Dec 20, 2013
Hey all,

I'm looking for an alternative to the M1015 card that I've been using for the past few years under Ubuntu + FlexRAID. I'm getting a bit sick of the unRAID/FlexRAID-type solutions and have decided to go with ZFS on Linux, as it's now quite stable.

With the previous solutions, write speeds were never an issue since data was only ever written directly to one disk at a time. Speeds were pretty good. I've set up a test ZFS box under Debian 7.3 and passed through my M1015 to do some trial runs.

I know the M1015 doesn't do write-back caching, and by design ESXi leaves caching to the HBAs. All this means that the M1015 under an ESXi VM using ZFS is a recipe for horrible write performance (we're talking 13-20 MB/s with no compression and no deduplication). If I boot Debian from a LiveCD and import my ZFS pool, my writes go up to 50-100+ MB/s with the drives still connected to the M1015. So I know it's related to the ESXi caching.

I've looked at a few other cards (specifically the M5014/5015/5016), but these don't do JBOD/Passthrough (most posts tell people to go grab the M1015).

What options are available out there? I'm hoping there's something that has some cache, supports write-back on the hardware itself, allows me to do JBOD/Passthrough and works in ESXi. I know that may be asking a lot - hoping experts here can chime in.

Cheers,
 

Thatguy

New Member
Dec 30, 2012
I have a 9211-8i and it works quite well.

Have you tried doing pci-e passthrough of the controller to the VM?
 

Stokkes

New Member
Dec 20, 2013
Maybe I've done something wrong? Here's the setup:

- Intel Xeon E3-1230 v2
- 16GB RAM
- 1x M1015 passed though to Debian 7.3 VM running ZFS on Linux
- 4x 3TB WD REDs in a raidz1 configuration (here are the commands I ran to initialize):

Code:
zpool create -m none -o ashift=12 tank raidz <device IDs>
zfs set atime=off tank
zfs set sync=disabled tank
zfs set compression=off tank
zfs set dedup=off tank
zfs create tank/media
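Once the pool is built, it's worth confirming the properties above actually took effect before benchmarking. A quick sanity check might look like this (run as root; "tank" is the pool name from the commands above):

```shell
# Verify the dataset properties set above actually took effect:
zfs get sync,compression,atime,dedup tank

# Confirm all four disks show ONLINE in the raidz1 vdev; a degraded
# or resilvering pool would also explain poor write throughput:
zpool status tank | grep -c ONLINE
```

A healthy 4-disk raidz1 should show the pool, the raidz1-0 vdev, and all four member disks as ONLINE.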
Write Test:
Code:
root@brian:/tank/media# dd if=/dev/urandom bs=1024 count=1000000 of=/tank/media/dd.file
1000000+0 records in
1000000+0 records out
1024000000 bytes (1.0 GB) copied, 73.9154 s, 13.9 MB/s
That's pretty abysmal. I'm seeing the same kind of speeds when copying a file from another hard drive attached to the M1015 (but not part of the zpool) to the zpool.

The speeds when I take ESXi out of the mix are around a 60 MB/s sustained rate.

Looking for ideas. I don't need 100 MB/s for my use, but it would be nice to be able to max a GigE connection to the zpool, which I don't think I'm getting anywhere near. Hence the search for new hardware.
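One caveat with that test: /dev/urandom is CPU-bound, and at bs=1024 it can cap out well below disk speed on hardware of this era, so the 13.9 MB/s figure may partly be measuring the RNG rather than the pool. A variant that avoids that bottleneck might look like this (the TARGET path is just the one used in the post; adjust as needed):

```shell
# /dev/zero is safe as a source here since compression is off
# (with compression on, zeros would compress away and inflate the
# number). A 1M block size cuts syscall overhead, and conv=fdatasync
# forces a flush so the result isn't just the page cache filling up.
TARGET=/tank/media/dd.test
dd if=/dev/zero of="$TARGET" bs=1M count=1024 conv=fdatasync
rm -f "$TARGET"
```

If this run is dramatically faster than the urandom one, part of the original number was an RNG artifact; if it's still stuck around 14 MB/s, the bottleneck really is somewhere in the ESXi/passthrough path.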

Cheers,
 

PigLover

Moderator
Jan 26, 2011
Do you have the M1015 flashed to IT mode? It seems like you are using it in IR mode and counting on the "default JBOD for un-configured drives" feature, which might explain the poor performance.
 

Stokkes

New Member
Dec 20, 2013
I am quite sure it's flashed to IT mode. Is there a way to verify? I believe I had to flash it to IT mode when I used it with unRAID/FlexRAID (unless I accidentally flashed it to IR mode).

(Also, how would that explain that I get 3x-5x faster write speeds when I take ESXi out of the picture, but use the same hardware?)
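For anyone else wondering how to verify without reflashing: LSI's sas2flash utility (assuming it's installed on the host) reports the flashed firmware type, and the mpt2sas kernel driver log is another hint. A rough check, run as root:

```shell
# List every LSI SAS2 controller with its flashed firmware;
# an IT-mode M1015 should report a firmware product id marked (IT),
# an IR-mode one (IR).
sas2flash -listall

# The kernel log is another hint (mpt2sas is the SAS2008 driver):
dmesg | grep -i mpt2sas
```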
 

chune

Member
Oct 28, 2013
I haven't been keeping up on ZoL, but is it really "stable"? Just from a troubleshooting standpoint, why not download the preconfigured napp-it VM that runs OmniOS (Solaris-based), pass through the HBA, import your pool, and run some of the built-in benchmarks?
On the napp-it downloads page ("napp-it // webbased ZFS NAS/SAN appliance for OmniOS, OpenIndiana and Solaris"), scroll down and click "download napp-it_13b_vm_for_ESXi_5.1/5.5.zip".

That will at least rule out hardware.
 

Arrogant

Member
Jan 11, 2014
Did you ever figure this out? I am very interested, as I am looking to build a very similar system. Possibly I will use the M1015 on ESXi with Ubuntu and ZFS. If you did not figure this out, I will probably just skip ESXi and run Ubuntu and ZoL.