Refreshing the Intel S2600 with ESXi & Napp-It


pc-tecky

Active Member
May 1, 2013
(admins - please move as deemed appropriate) So, with so many different changes happening all at once, I figure this might be the best place for it..

I've had a cough for most of December, but it came to a nasty, wretched head right during Christmas. I thought I'd be over it rather quickly, but it lingered on far longer than I'd have liked. So I'm here writing up my findings and quirks on my Intel/Intel/Intel system (that Natex combo kit that many folks jumped on a few years back).

As my energy slowly returned and the coughing subsided, I went from designing a front-end HVAC filtration add-on of sorts for Ikea's Hejne wooden shelving unit and measuring computer cases, to moving the computers around, to re-organizing my "man-cave" computer room - all in a very slow, methodical, deliberate fashion, taking my time with lengthy breaks between moving items around my apartment. I now have most of my Cisco equipment sitting in one corner next to the Hejne "rack" against the wall, some other odds and ends next to the desk, the printer back online, etc.

---I am curious whether anybody has given thought to a computer case design that incorporates HVAC filters to keep dust out of the "guts" of computers and servers for the home user? Some immediate issues I ran into: modeling (simulated or observed) heat dissipation and/or entrapment, and access to front-facing (optical) drives, power buttons, indicator lights, etc. - it almost requires an extended remote front panel design that quickly becomes cumbersome with a half-enclosed yet half-open tech-bench-like design. Would you need a positive-pressure design using an independent, constantly powered 2"-3"-4" diameter x ~19" wide scroll-cage blower fan (like those used in fireplaces) that works while multiple systems are in various power states - all off, mixed on and off, all on? Anyways, I digress..

...I quickly found myself in front of the all-Intel P4208 / Intel S2600 / 2x Intel Xeon E5-2670 server with 128GB RAM. It still has the Intel 6-port 1GbE card and two LSI 2008(?) 8-port IT-flashed HBA cards running 8x 4TB (shucked external) Seagate HDDs, and one SanDisk 500GB(?) SSD. I knew the ESXi installed on the USB drive was an older version, 5.5 or 6.0(?!?), and the networking was all janked up - so I already knew it had to go. I gathered the SSDs and scrounged up all the SD and USB drives; I found a blank 32GB fit USB drive, and chose to keep the ~500GB SanDisk (for ISOs) and add the Samsung 1TB SSD for a select few base VMs (Napp-It, etc.). In that vein, I downloaded the latest versions of the vSphere ESXi 6.7u3 Hypervisor, Napp-It AiO, FreeNAS, WinSCP, and some additional ancillary items to work with. I also finally got the Apple Mac Mini (Late 2014 model) up and running, but was disappointed to discover this model has fixed, non-upgradable RAM -- maybe some VMs with more CPU cores and RAM? ...

A few items I found quirky: between Windows 10 Pro, modern browsers disabling embedded Java applets, Intel's RMM4 IPMI/BMC using a Java-based Console Redirection applet, and the inability to use an ISO with the virtual CD-ROM, remote booting was rather annoying - or perhaps a BIOS EFI vs. legacy BIOS setting was the culprit for not booting remotely. I instead chose to burn the ISO image to DVD/CD media, used a DVD/CD drive connected to an external 3-in-1 USB-to-SATA adapter, and was then able to boot and install ESXi onto the empty 32GB fit USB drive. Once ESXi was installed, I looked around and found the NIC mapping in disarray. After some research, I found out how to edit a configuration file to re-order the NIC mappings so they are now in a logical, consecutive order.
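For anyone hitting the same NIC-ordering mess, here is a rough sketch of the usual approach on ESXi 6.x: the vmnicN names are aliases pinned to PCI bus addresses, and they can be listed and re-pinned from the ESXi shell. The bus address below is a placeholder, not taken from this system - list your own mappings first and substitute accordingly.

```
# List the current vmnicN-to-PCI-address mappings (ESXi shell)
localcli --plugin-dir /usr/lib/vmware/esxcli/int/ deviceInternal alias list

# Re-pin an alias to a specific PCI bus address
# (the address below is a placeholder - use one from the list above)
localcli --plugin-dir /usr/lib/vmware/esxcli/int/ deviceInternal alias store \
    --bus-type pci --alias vmnic0 --bus-address s00000001.00

# A reboot is needed for the new mapping to take effect
reboot
```

Repeat the `alias store` step for each vmnic until the order is consecutive, then reboot once.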

But now I'm back with a few questions: will a VM's network performance, in particular with Napp-It or FreeNAS, be better with one NIC or with two NICs bonded as a link aggregate? Or does it even matter?

hey @gea, curious what the latest best practices are for Napp-It AiO? Is there an updated guide for Napp-It and ESXi 6.7u3? I'm also unclear how to keep OmniOSce r151030 LTS updated without upgrading to r151032 (the latest version). (I'm used to "sudo apt update && sudo apt upgrade" on the Raspberry Pi.)
 

gea

Well-Known Member
Dec 31, 2010
DE
hey @gea, curious what the latest best practices are for Napp-It AiO? Is there an updated guide for Napp-It and ESXi 6.7u3? I'm also unclear how to keep OmniOSce r151030 LTS updated without upgrading to r151032 (the latest version). (I'm used to "sudo apt update && sudo apt upgrade" on the Raspberry Pi.)
A "pkg update" updates OmniOS 151030 LTS to the newest 151030 state.

If you want to update to 151032, you must:
- first change the repository to 151032
- then run "pkg update"

see section 4 at
napp-it // webbased ZFS NAS/SAN appliance for OmniOS, OpenIndiana and Solaris : Manual
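For the original question, the steps above look roughly like the following on an OmniOS CE box. The repository URL and boot-environment name are assumptions on my part - verify both against the manual gea links before running anything.

```
# Staying on r151030 LTS: a plain update pulls the newest 151030 build
pkg update

# Moving to r151032: repoint the 'omnios' publisher at the new release repo
# (URL is the standard OmniOS CE pattern - confirm it in the manual)
pkg set-publisher -O https://pkg.omniosce.org/r151032/core omnios

# Update into a fresh boot environment you can roll back to
pkg update --be-name=omnios-r151032

# Reboot into the new boot environment
init 6
```

The new boot environment means a failed upgrade can be backed out by simply booting the previous one from the loader menu.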
 

pc-tecky

Active Member
May 1, 2013
ah, ok.. know anything about creating a fatter pipe by aggregating links? I'm getting some strange behavior so far..

# dladm create-aggr -L active -l e1000g0 -l net1 aggr0

As an immediate response, I'm getting (paraphrased) 'loopback condition on the trunk link' errors that just scroll and scroll and scroll... making diagnosing issues very problematic - downright impossible to see what I'm doing..
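One hedged guess at what's happening: `-L active` makes the aggregate send LACP negotiation frames, and a standard ESXi vSwitch does not participate in LACP (that needs a distributed vSwitch), so those frames can get reflected back at the guest and trip the loopback detection. A sketch of a retry without LACP - the vmxnet3s link names are placeholders, so check `dladm show-link` for yours:

```
# Tear down the noisy aggregate first
dladm delete-aggr aggr0

# Recreate it with LACP off, since a standard ESXi vSwitch
# will not negotiate LACP with the guest
dladm create-aggr -L off -l vmxnet3s0 -l vmxnet3s1 aggr0

# Verify port state
dladm show-aggr -x aggr0
```

It may also be worth asking whether guest-side aggregation buys anything here at all: a single vmxnet3 vNIC is already a 10Gb virtual device, and the host's vSwitch teaming policy spreads traffic across the physical uplinks underneath it.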