Looking to update OmniOS/NAPP-IT from r151014


NOTORIOUS VR

Member
Nov 24, 2015
I was getting some LDAP console errors that I believe were due to a time mismatch, so I corrected the time and rebooted. After that I saw another error, this time about boot time and rcache, and looking a little deeper I saw errors in the log about multipath status being degraded on the disks.

I've attached the log in txt format; maybe that will help.
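
Is it worth running something like this from the console to dig into the disk/multipath side, or is the log enough? Just a sketch of the checks I have in mind:
Code:
zpool status -v       # pool and per-disk state
iostat -En            # per-device error counters
fmadm faulty          # any active FMA faults
mpathadm list lu      # are the multipath logical units actually degraded?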

startup-log.txt
 

gea

Well-Known Member
Dec 31, 2010

Did you use SAS disks with mpio? (You can disable mpio under Disks > Details > edit mpt_sas for LSI HBAs.)
Did you use an AD that is not available during bootup? (That does not matter for performance.)
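
If you prefer the console, something like this shows whether mpio is actually in play and switches it off for the mpt_sas driver (a sketch; check the stmsboot man page on your release before the reboot it asks for):
Code:
mpathadm list lu            # no output = no multipathed logical units
stmsboot -D mpt_sas -d      # disable MPxIO for mpt_sas-attached devices, then reboot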
 
Last edited:

NOTORIOUS VR

Member
Nov 24, 2015
Hi Gea,

My disks are 4TB WD Reds on 2x LSI cards.

If the log isn't showing anything concerning to you, then I'm at a loss as to what else to check to figure out why I'm having speed issues after the upgrade. I suppose I could try exporting and re-importing my pools into the old/original napp-it r151014 VM and see if anything changes? What do you think of that?

At least that would rule out the napp-it instance, and then the only thing left would be the ESXi upgrade.
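
The round trip for that test would be something like this, I think (a sketch; "tank" is a placeholder for my pool name, and the import happens on the old VM after it gets the HBAs back):
Code:
# on the current napp-it VM, with clients disconnected
zpool export tank

# on the old r151014 VM
zpool import            # list importable pools
zpool import tank       # import by name (add -f if it complains about the last host)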
 

gea

Well-Known Member
Dec 31, 2010
Finding the problem can be hard as every installation is different.

Especially going from ESXi 5.5 / OmniOS 151014 to ESXi 6.7 / current OmniOS there are many changes: not only the OS, but also a switch to newer ESXi drivers and, on OmniOS, a switch from SMB 1 to SMB 2.1.

As iperf seems OK, as does the local ZFS performance and the ESXi-internal performance between VMs, the remaining suspects are the path from ESXi and its network stack to the Windows client.

Have you tried another physical Windows client (one with a better NIC than the Realtek)?
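
The quickest way to separate the raw network path from SMB is an iperf run straight from the physical client to the storage VM (a sketch, assuming iperf3 is available on both ends; 10.10.1.2 stands in for the storage VM's address):
Code:
# on the OmniOS/napp-it VM
iperf3 -s

# on the Windows desktop
iperf3.exe -c 10.10.1.2 -P 4 -t 30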
 

NOTORIOUS VR

Member
Nov 24, 2015
Finding the problem can be hard as every installation is different.

Especially going from ESXi 5.5 / OmniOS 151014 to ESXi 6.7 / current OmniOS there are many changes: not only the OS, but also a switch to newer ESXi drivers and, on OmniOS, a switch from SMB 1 to SMB 2.1.

As iperf seems OK, as does the local ZFS performance and the ESXi-internal performance between VMs, the remaining suspects are the path from ESXi and its network stack to the Windows client.
Fair enough. If you believe the internal performance of the controller/disks/VM is correct, then at least I can focus on what is left.

Have you tried another physical Windows client (one with a better NIC than the Realtek)?
Unfortunately, the only other client I can try is my laptop, but that would be using a TB3-to-GigE adapter and I have no idea how well it performs, since I've never needed to push it for maximum throughput. My other option is to pull the 4-port Intel GigE NIC from my server (unused since moving to 10GbE) and install it in my desktop for testing.

I will report back my findings when I have a chance to bring down the server to pull the NIC.

Thanks again for the help, Gea; much appreciated.
 

NOTORIOUS VR

Member
Nov 24, 2015
Sorry for the delayed reply, Gea. I tried my laptop and it showed exactly the same limitations as my desktop.

Again, my issue really can't be my PCs or the network, since none of that has changed and I was always able to max out GigE speeds before.

That said, I've come to terms with it for now, as I'm currently building a new server to replace my existing one, along with a new storage setup. Currently I have a 10x 4TB disk Z2 array and it has treated me well, but I'm already past the recommended fill threshold (just under 6TB of free space left).

Currently I have about 16TB of video media, and that will be the primary focus for the new array. The remaining data is misc things that I care more about (i.e. would like more safety for); that's around 4.5TB right now and will grow more slowly. I don't currently run any VMs off the array, and if I did, I would run them on SSDs rather than the spindle drives.

I'm currently trying to decide what to do about the new array. I have a 12-disk SA120 storage array for the new build (I should probably plan on getting a 2nd SA120 for expansion). I'm trying to wrap my head around this info: How I Learned to Stop Worrying and Love RAIDZ | Delphix

Should I build another 10-disk Z2 array (with, say, 8 or 10TB drives) and use a single pool for all my data again, or should I split it up into a smaller Z1 array for media and another Z2 array for my other files?

I feel like I'm limited either way unless I add more disks. I don't want to worry about media space for some time.
 

gea

Well-Known Member
Dec 31, 2010
Without special reasons (like power saving, very different performance or size requirements, or keeping one pool available when there are problems with the other), I would create one large pool from several Z2 vdevs (6-10 disks each) or Z3 vdevs (>10 disks per vdev), as performance increases with the number of disks and vdevs. With a lower overall fill rate, fragmentation is also lower.

About LZ4:
Without compression, a ZFS block has a fixed size like 128K. Such a block (a power of 2) has to be distributed over the disks of a vdev. This only fits cleanly when the number of data disks is also a power of 2; otherwise there is padding waste that can lower overall capacity by up to 10%.

When you enable compression, the size of a ZFS block becomes variable. As you would mostly enable LZ4 today, you no longer need to care about "golden numbers" of disks per vdev. With 24 disks I would use 3 x 8-disk Z2 vdevs.
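
As a sketch of that layout (pool name and the cXtYdZ device ids are placeholders; adjust for your disks):
Code:
zpool create tank \
  raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 \
  raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0 \
  raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 c3t7d0
zfs set compression=lz4 tank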
 

NOTORIOUS VR

Member
Nov 24, 2015
Sounds good, thanks for the suggestion (to calm my mind).

That said, I think I will have to ditch my SA120 idea completely. It's just not condo-friendly in the noise department :( Even my cat was swatting at nearby switches (not plugged in, just sitting under the unit/table) because of the noise, lol.

After spending most of the day modding the Rosewill 4U to fit the new server's EATX motherboard, I fired it up and flashed the onboard 2208 to IT mode, connected the SA120, fired that up, and was greeted with FAR too much fan noise for my use case from the twin PSUs' fans (both PSUs were plugged in to get the fans to run at "normal" levels). I found fan controller software for the SA120, but I think that only applies when it's used with a JBOD controller rather than a pass-through controller (maybe; I still have to play around a bit).

So now I'm hunting for other cases I can use... but it seems like short-depth (<18") cases just aren't available. Norco's 24E is very expensive for just a JBOD chassis, which is a shame.

Worst case, I'll have to keep using my existing tower case, but I really wanted to get this off the ground; with all the space restrictions I have, my choices end up difficult (and usually expensive, lol).



 
Last edited:

NOTORIOUS VR

Member
Nov 24, 2015
Gea,

I'm attempting to get sg_ses to work under napp-it... there isn't much info about it under OmniOS. I found one older HardForum thread where someone said it works under napp-it, but maybe I'm missing something: when I type sg_ses on the command line, it says command not found?

I'm desperately trying to control the fans on the SA120 to see if they can be made at least bearable.
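
For reference, this is roughly what I'm trying (the package name is a guess on my part; sg_ses normally comes from sg3_utils, which doesn't appear to be in the default publisher, and the enclosure device path may differ):
Code:
which sg_ses                     # confirm whether the binary exists at all
pkg search -r sg_ses             # see if any configured publisher provides it
pkg install sg3_utils            # assumed package name; may need an extra repo
sg_ses --page=cf /dev/es/ses0    # configuration page: lists cooling elements
sg_ses --page=es /dev/es/ses0    # enclosure status page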

Thanks!

EDIT: I was able to regulate the fans down in a Windows VM using the Lenovo utility as a test (gave up on napp-it for now for the fan test)... it's bearable, but I'm not sure it's a real fix.

Pulling out the fan units and the PSU, I see they're using 40x40x30mm fans... Noctua has nice 40x40x20mm fans which would work nicely in the fan units, but the PSU might be a little trickier since that would leave a ~10mm gap between the chassis and the fan; that said, some aluminium tape would be a quick fix to seal that area without too much trouble.

Also, some light reading here: https://forums.servethehome.com/index.php?threads/lenovo-thinkserver-sa120-rackmount-das.6829/page-6 tells me the wiring is non-standard. But that isn't a big roadblock for me.

So with 6x Noctua fans and some time, I should in theory be able to have a quiet SA120 for around $125 CAD. Better than starting over, I guess!

 
Last edited:

NOTORIOUS VR

Member
Nov 24, 2015
Finding the problem can be hard as every installation is different.

Especially going from ESXi 5.5 / OmniOS 151014 to ESXi 6.7 / current OmniOS there are many changes: not only the OS, but also a switch to newer ESXi drivers and, on OmniOS, a switch from SMB 1 to SMB 2.1.

As iperf seems OK, as does the local ZFS performance and the ESXi-internal performance between VMs, the remaining suspects are the path from ESXi and its network stack to the Windows client.

Have you tried another physical Windows client (one with a better NIC than the Realtek)?
Hi Gea,

I finally got around to exporting my pool from the latest instance of napp-it and re-importing it into the previous instance (r151014), and guess what? The speed is BACK! I'm maxing out my GigE connections again, and even transfers between the VMs/vSwitches are faster and more stable! No more speed issues! And I suppose I was able to rule out ESXi 6.7U1 as well (although maybe there is an issue between it and your latest release?).

Now the question is why? Is there anything you need from me to help diagnose this?

Transfer from file server to VM:


Transfer from VM back to file server:


Transfer from file server to Desktop (GigE):

The little reductions in speed in the middle were from me loading this forum in a browser :p

Transfer from Desktop to file server (GigE):
 

gea

Well-Known Member
Dec 31, 2010
I cannot confirm any problems with OmniOS 151028.

So I would:
- try another OmniOS (e.g. my OVA template v4 if you are using an older one; v4 comes with the newest fixes)

If this does not help:
- install OmniOS from the ISO and check with e1000
- add VMware tools (pkg install open-vm-tools) and try vmxnet3s
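
For the last two points, a rough outline of the console side (a sketch; the vmxnet3s0 link name and the example address are assumptions to adapt, and some releases use "ipadm create-ip" instead of "create-if"):
Code:
pkg install open-vm-tools        # VMware tools incl. the vmxnet3s driver

dladm show-link                  # the vmxnet3 vNIC should show up, e.g. vmxnet3s0
ipadm create-if vmxnet3s0
ipadm create-addr -T static -a 10.10.1.2/24 vmxnet3s0/v4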
 

NOTORIOUS VR

Member
Nov 24, 2015
I cannot confirm any problems with OmniOS 151028.

So I would:
- try another OmniOS (e.g. my OVA template v4 if you are using an older one; v4 comes with the newest fixes)

If this does not help:
- install OmniOS from the ISO and check with e1000
- add VMware tools (pkg install open-vm-tools) and try vmxnet3s
Hi Gea,

OK, thank you. In the coming weeks I will do that (probably on the new server I've built) and report back here. I'm quite certain I was using the current v4 OVA, BTW.

Thanks!
 

NOTORIOUS VR

Member
Nov 24, 2015
I cannot confirm any problems with OmniOS 151028.

So I would:
- try another OmniOS (e.g. my OVA template v4 if you are using an older one; v4 comes with the newest fixes)
Hi Gea,

I had some time today to poke around my setup and installed the v4 OVA (it was actually v3 I was using before, BTW)... same HW specs as before (24 GB RAM).

With the default tuning applied, my localhost iperf results are impressive:
Code:
root@batcavefs:~# iperf -c 127.0.0.1 -w 1M
Connecting to host 127.0.0.1, port 5201
[  4] local 127.0.0.1 port 57884 connected to 127.0.0.1 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec  6.75 GBytes  58.0 Gbits/sec
[  4]   1.00-2.00   sec  6.56 GBytes  56.3 Gbits/sec
[  4]   2.00-3.00   sec  7.30 GBytes  62.7 Gbits/sec
[  4]   3.00-4.00   sec  6.58 GBytes  56.5 Gbits/sec
[  4]   4.00-5.00   sec  7.33 GBytes  62.9 Gbits/sec
[  4]   5.00-6.00   sec  7.21 GBytes  61.9 Gbits/sec
[  4]   6.00-7.00   sec  7.25 GBytes  62.3 Gbits/sec
[  4]   7.00-8.00   sec  7.26 GBytes  62.4 Gbits/sec
[  4]   8.00-9.00   sec  7.35 GBytes  63.1 Gbits/sec
[  4]   9.00-10.00  sec  7.01 GBytes  60.2 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec  70.6 GBytes  60.6 Gbits/sec                  sender
[  4]   0.00-10.00  sec  70.6 GBytes  60.6 Gbits/sec                  receiver
Unfortunately, iperf tests from other VMs again suffer greatly in throughput and consistency versus the old OmniOS. Quite frustrating, as I really do not understand why the old VM performs so much better.

iperf3 test from the VMXNET3 Win10 VM to the OmniOS v4 OVA:
Code:
C:\iperf-3.1.3-win64>iperf3.exe -c 10.10.1.2 -w 1M
Connecting to host 10.10.1.2, port 5201
[  4] local 10.10.10.10 port 50003 connected to 10.10.1.2 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec  1.20 GBytes  10.3 Gbits/sec
[  4]   1.00-2.00   sec  1.25 GBytes  10.7 Gbits/sec
[  4]   2.00-3.00   sec  1.25 GBytes  10.8 Gbits/sec
[  4]   3.00-4.00   sec  1.25 GBytes  10.8 Gbits/sec
[  4]   4.00-5.00   sec  1.37 GBytes  11.7 Gbits/sec
[  4]   5.00-6.00   sec  1.27 GBytes  10.9 Gbits/sec
[  4]   6.00-7.00   sec  1.25 GBytes  10.7 Gbits/sec
[  4]   7.00-8.00   sec  1.17 GBytes  10.0 Gbits/sec
[  4]   8.00-9.00   sec  1.27 GBytes  10.9 Gbits/sec
[  4]   9.00-10.00  sec  1.28 GBytes  11.0 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec  12.6 GBytes  10.8 Gbits/sec                  sender
[  4]   0.00-10.00  sec  12.6 GBytes  10.8 Gbits/sec                  receiver
I still have lower throughput, though, with the latest OmniOS OVA:

Writing this file to the server from the VM was 50-90 MB/s; reading starts off half decent around 230-250 MB/s and then falls on its face to 110-115 MB/s for the remainder of the transfer. Neither of these issues shows up on the old napp-it VM.



I have yet to try the e1000 and open-vm-tools suggestions... I'll try to get to that next. But I don't see why any of this is happening. Now I'm curious whether it would happen on my new server as well. Unfortunately it's not easy for me to connect my disks to it right now to import the pool, and making a new pool is not an option as I don't have other disks available at this time. I suppose I can at least run iperf3 tests from the Windows VM.

More testing:

iperf3 from my media server (Ubuntu 16):

Testing via the main vSwitch (normal network traffic):
Code:
root@mediasrv1:~# iperf3 -c 10.10.1.2 -w 1M
Connecting to host 10.10.1.2, port 5201
[  4] local 10.10.1.10 port 36828 connected to 10.10.1.2 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec   807 MBytes  6.77 Gbits/sec    0    492 KBytes
[  4]   1.00-2.00   sec   968 MBytes  8.12 Gbits/sec    0    492 KBytes
[  4]   2.00-3.00   sec  1010 MBytes  8.47 Gbits/sec    0    492 KBytes
[  4]   3.00-4.00   sec  1011 MBytes  8.48 Gbits/sec    0    492 KBytes
[  4]   4.00-5.00   sec   939 MBytes  7.87 Gbits/sec    0    492 KBytes
[  4]   5.00-6.00   sec   975 MBytes  8.18 Gbits/sec    0    492 KBytes
[  4]   6.00-7.00   sec   996 MBytes  8.36 Gbits/sec    0    492 KBytes
[  4]   7.00-8.00   sec   985 MBytes  8.27 Gbits/sec    0    492 KBytes
[  4]   8.00-9.00   sec   963 MBytes  8.08 Gbits/sec    0    492 KBytes
[  4]   9.00-10.00  sec   983 MBytes  8.25 Gbits/sec    0    492 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  9.41 GBytes  8.09 Gbits/sec    0             sender
[  4]   0.00-10.00  sec  9.41 GBytes  8.09 Gbits/sec                  receiver
Testing on the storagenet vSwitch:

Code:
root@mediasrv1:~# iperf3 -c 10.10.16.2 -w 1M
Connecting to host 10.10.16.2, port 5201
[  4] local 10.10.16.10 port 55856 connected to 10.10.16.2 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec   358 MBytes  3.00 Gbits/sec    0    491 KBytes
[  4]   1.00-2.00   sec   650 MBytes  5.45 Gbits/sec    0    491 KBytes
[  4]   2.00-3.00   sec   714 MBytes  5.99 Gbits/sec    0    491 KBytes
[  4]   3.00-4.00   sec   664 MBytes  5.57 Gbits/sec    0    491 KBytes
[  4]   4.00-5.00   sec   707 MBytes  5.93 Gbits/sec    0    491 KBytes
[  4]   5.00-6.00   sec   717 MBytes  6.02 Gbits/sec    0    491 KBytes
[  4]   6.00-7.00   sec   706 MBytes  5.92 Gbits/sec    0    491 KBytes
[  4]   7.00-8.00   sec   726 MBytes  6.09 Gbits/sec    0    491 KBytes
[  4]   8.00-9.00   sec   710 MBytes  5.95 Gbits/sec    0    491 KBytes
[  4]   9.00-10.00  sec   714 MBytes  5.99 Gbits/sec    0    491 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  6.51 GBytes  5.59 Gbits/sec    0             sender
[  4]   0.00-10.00  sec  6.51 GBytes  5.59 Gbits/sec                  receiver
Testing to the new server (new hardware, 2x 2695 v2s) connected via 10GbE:
Code:
root@mediasrv1:~# iperf3 -c 10.10.10.34 -w 1M
Connecting to host 10.10.10.34, port 5201
[  4] local 10.10.1.10 port 47954 connected to 10.10.10.34 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec   555 MBytes  4.66 Gbits/sec    0    479 KBytes
[  4]   1.00-2.00   sec   810 MBytes  6.80 Gbits/sec    0    479 KBytes
[  4]   2.00-3.00   sec   787 MBytes  6.60 Gbits/sec    0    479 KBytes
[  4]   3.00-4.00   sec   709 MBytes  5.95 Gbits/sec    0    479 KBytes
[  4]   4.00-5.00   sec   746 MBytes  6.26 Gbits/sec    0    479 KBytes
[  4]   5.00-6.00   sec   738 MBytes  6.19 Gbits/sec    0    479 KBytes
[  4]   6.00-7.00   sec   769 MBytes  6.45 Gbits/sec    0    479 KBytes
[  4]   7.00-8.00   sec   734 MBytes  6.15 Gbits/sec    0    479 KBytes
[  4]   8.00-9.00   sec   639 MBytes  5.36 Gbits/sec    0    479 KBytes
[  4]   9.00-10.00  sec   661 MBytes  5.54 Gbits/sec    0    479 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  6.98 GBytes  6.00 Gbits/sec    0             sender
[  4]   0.00-10.00  sec  6.98 GBytes  6.00 Gbits/sec                  receiver
Now the same test with the old napp-it VM (OmniOS 151014) from the media server:
Code:
root@mediasrv1:~# iperf -c 10.10.1.2 -w 1M
------------------------------------------------------------
Client connecting to 10.10.1.2, TCP port 5001
TCP window size:  416 KByte (WARNING: requested 1.00 MByte)
------------------------------------------------------------
[  3] local 10.10.1.10 port 42060 connected with 10.10.1.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  4.44 GBytes  3.81 Gbits/sec
root@mediasrv1:~# iperf -c 10.10.1.2 -w 1M -P 10
------------------------------------------------------------
Client connecting to 10.10.1.2, TCP port 5001
TCP window size:  416 KByte (WARNING: requested 1.00 MByte)
------------------------------------------------------------
[ 11] local 10.10.1.10 port 42120 connected with 10.10.1.2 port 5001
[  3] local 10.10.1.10 port 42104 connected with 10.10.1.2 port 5001
[  5] local 10.10.1.10 port 42106 connected with 10.10.1.2 port 5001
[  4] local 10.10.1.10 port 42102 connected with 10.10.1.2 port 5001
[  7] local 10.10.1.10 port 42110 connected with 10.10.1.2 port 5001
[  6] local 10.10.1.10 port 42108 connected with 10.10.1.2 port 5001
[  8] local 10.10.1.10 port 42112 connected with 10.10.1.2 port 5001
[  9] local 10.10.1.10 port 42114 connected with 10.10.1.2 port 5001
[ 10] local 10.10.1.10 port 42116 connected with 10.10.1.2 port 5001
[ 12] local 10.10.1.10 port 42118 connected with 10.10.1.2 port 5001
^C[ ID] Interval       Transfer     Bandwidth
[  6]  0.0- 8.1 sec   964 MBytes   993 Mbits/sec
[ 10]  0.0- 8.1 sec   819 MBytes   843 Mbits/sec
[ 11]  0.0- 8.1 sec  1006 MBytes  1.04 Gbits/sec
[  3]  0.0- 8.1 sec   851 MBytes   876 Mbits/sec
[  5]  0.0- 8.2 sec   946 MBytes   974 Mbits/sec
[  4]  0.0- 8.1 sec   853 MBytes   878 Mbits/sec
[  7]  0.0- 8.1 sec   834 MBytes   859 Mbits/sec
[  8]  0.0- 8.2 sec   816 MBytes   839 Mbits/sec
[  9]  0.0- 8.1 sec   835 MBytes   860 Mbits/sec
[ 12]  0.0- 8.1 sec   905 MBytes   932 Mbits/sec
[SUM]  0.0- 8.2 sec  8.62 GBytes  9.08 Gbits/sec
Test from the Win10 VM:
Code:
C:\iperf-2.0.5-2-win32>iperf -c 10.10.1.2
------------------------------------------------------------
Client connecting to 10.10.1.2, TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[  3] local 10.10.10.10 port 50044 connected with 10.10.1.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   240 MBytes   201 Mbits/sec
C:\iperf-2.0.5-2-win32>iperf -c 10.10.1.2 -w 5M
------------------------------------------------------------
Client connecting to 10.10.1.2, TCP port 5001
TCP window size: 5.00 MByte
------------------------------------------------------------
[  3] local 10.10.10.10 port 50045 connected with 10.10.1.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  4.36 GBytes  3.74 Gbits/sec
C:\iperf-2.0.5-2-win32>iperf -c 10.10.1.2 -w 1M
------------------------------------------------------------
Client connecting to 10.10.1.2, TCP port 5001
TCP window size: 1.00 MByte
------------------------------------------------------------
[  3] local 10.10.10.10 port 50046 connected with 10.10.1.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  4.50 GBytes  3.86 Gbits/sec
 

NOTORIOUS VR

Member
Nov 24, 2015
Well, I suppose I've been looking at this through the wrong glasses.

What I know for sure is that the latest OmniOS/napp-it OVA absolutely does not like my desktop's Realtek GigE card. That is 100% clear to me now. Writing to the server just doesn't work out and is capped at ~35 MB/s. So something, somewhere in the latest OmniOS does not like Realtek at all; why that is, I don't know. If I get an Intel card into my desktop I will redo the testing to make sure. The old OmniOS/napp-it has no issues maxing out the Realtek GigE.

But after duplicating my setup on my new server hardware running a fresh 6.7U1, I tested fresh installations of Win Server 2016 against both the latest v4 napp-it and the older napp-it (151014), and the newer version is slightly faster (no tuning on either), both reading and writing to a mirror of 2x 1TB WD Reds (all I have for testing on the new hardware), whether on the local vSwitch (VM to VM) or over the 10GbE connection (over Cat 5e, lol) through the physical switch.

I've also learned that the dip that sometimes occurs when writing to the VM's SSD (as in the picture in the previous reply with the red arrow) is, I believe, just the SSD being busy or its cache filling up.

I'm going to put this to bed now; it seems I've been stressing over this for no good reason (except for my desktop's NIC speed). I believe once I have my new HDDs in place for the new server, everything will work as it's supposed to. I will report back either way.
 

NOTORIOUS VR

Member
Nov 24, 2015
Realtek is just bad. Get some Intel cards.
I know many say that... but I honestly have never had any real issues with Realtek NICs... and until the update, no issues with speed either.

Realtek is also the choice of virtually all consumer desktop motherboard manufacturers.
 

nerdalertdk

Fleet Admiral
Mar 9, 2017
We don't use the C word on this site :D

But it's when you push Realtek NICs that they show their dark side. Plus, Realtek is cheaper than Intel, which is why most manufacturers use them, and yes, they work for 99% of users.
 

dragonme

Active Member
Apr 12, 2016
I have had lots of weirdness lately with this Intel S5520 board and the combination of napp-it/OmniOS and ESXi above 6.0.

When I went to 6.7 or 6.5, hitting napp-it from outside the ESXi box with a file transfer completely broke connectivity... unplugging the data cable didn't fix it... I had to reboot the switch that ESXi is attached to... I could do SMB file transfers to other VMs inside ESXi, but not to napp-it... really weird... and this board is of course an Intel NIC board.

Even on ESXi 6.0, OmniOS networking leaves a bit to be desired, both in internal virtual network performance and outside the box.
 

NOTORIOUS VR

Member
Nov 24, 2015
I haven't done enough testing across other platforms to know what is normal and what's not. I suppose I should dig a little deeper and see whether similar bottlenecks occur with other platforms (FreeNAS, ZFS on Linux, a Windows file server, etc.).
 

dragonme

Active Member
Apr 12, 2016
I am just saying that ESXi compatibility goes deeper than "will it run".

For example, it was clear after doing some better 'forensic' analysis following the upgrade from ESXi 6.0 to 6.7 that things were not quite right... sure, my unsupported chipset and the unsupported L5640s were recognized by ESXi... but it was clear it was no longer 'fully baked'.

When booting my ESXi server in maintenance mode, it idled at 120 watts for a dual L5640 setup with 2x RAID cards, 4 SSDs and 3 spinners... that is respectable... after upgrading to 6.7 it idled just shy of 160 watts... a dead giveaway that ESXi was not right under the hood.

Just because something 'works' doesn't mean it will necessarily work 'well'.