LGA3647 ESXi build to host my Oracle Apps/Databases


BennyT

Active Member
Dec 1, 2018
Hi,

This thread is for my LGA3647 VM server project, and I've just begun gathering parts. In my existing setup I have a few LGA2011 RHEL and Oracle Linux boxes running Oracle Apps (and all the services that entails) and their corresponding databases, plus a Subversion repository, Plex Media Server, etc., and none of it is virtualized. I want to build a decent ESXi host and run all of my Oracle environments in VMs instead of on bare metal. I need a server with enough resources to accommodate all my CPU- and RAM-hungry Oracle environments, many of which I want to run concurrently.

Keep in mind I've not used or installed ESXi or built a VM system like this before, so I'll likely say some naive and dumb stuff. My background is in Oracle Apps/Database development.

So I've been saving up to build a server to run ESXi... and here we go.

Chassis: Norco 4224 (late 2018)

Norco cases don't get much love, but I like the 4224. I knew what I was purchasing, so in that respect I'm very pleased with this Norco. My reason for selecting this chassis over an SM846 or a Dell rack server is that I can make the 4224 run much quieter in my office den, where it will reside.
3x 120mm hot-swap fans on the fan wall, plus a 2.5" drive tray.
20181214_170101.jpg
Pictured below, looking top-down into the case: the fan-wall backplane. It's powered by the single Molex shown near the top of that image. To the left of it are three sets of two-pin headers grouped tightly together, so it almost looks like a 6-pin header. I read that the two-pin headers are for "sense" and "control". @EffrafaxOfWug on the forums actually reverse engineered it, because Norco doesn't provide documentation. In other words, you can connect those pins to the corresponding sense/control pins on a 4-pin fan header of a mobo: the fans are powered by the Molex, but they receive PWM speed control via the sense/control pins. *Note: I've since connected breadboard wires from the fan backplane to a 3-way fan splitter which goes to a single motherboard 4-pin fan header (see the posts which follow for detailed photos).
20181207_124641.jpg
Each SAS backplane is powered by a single Molex, not two. I understand the old backplanes had two Molex connectors to accommodate redundant PSUs. That's fine; I'm not using a redundant PSU.
20181214_170026.jpg

The six backplanes each say "01 SAS 12Gb v1.3".​
20181214_170043.jpg
I plan to run a mix of enterprise SSDs (Samsung's new datacenter 883 DCT SATA SSDs) and some 3.5" platter HDDs (512n, as I read ESXi prefers 512-native sectors). I'd like to run the Oracle Apps and Database guests on the SSDs if possible. (A quick sector-format check from within ESXi is sketched just below these chassis notes.)
The drive caddies do not have the slider vents the older models had, meaning that if a bay is empty I can't close it off for airflow. I'll probably just plug empty bays with cardboard. I knew this before purchase.
The exhaust fans are two 80mm 3-pin units. I'm replacing those with Noctua PWM fans.
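One thing I want to verify once ESXi is installed is the sector format the drives actually report. I believe something like this from the ESXi shell will show it (a sketch, untested on my part; the 512n drives should show up under "Format Type"):

# show logical/physical block sizes and format type (512n / 512e / 4Kn) for each device
esxcli storage core device capacity list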
Motherboard: Supermicro X11DPi-NT

It's got some cool features:
  • Six-channel DDR4-2666 memory
  • 3x mini-SAS (SFF-8087) 12Gb/s connectors (12 SATA or SAS drives, and that's without an expander)
  • 2x NVMe OCuLink connectors... apparently each of those can take a cable adapter to a U.2 or M.2 backplane, or break out to 4 SATA connectors each. Honestly, I'd never heard of OCuLink before seeing it on this board.
  • M.2 NVMe slot
  • 2x orange SATA connectors (the orange means they can power Supermicro's little SATA DOM flash drives), an interesting option for booting a hypervisor such as ESXi, but I'll probably still just use a USB thumb drive.
  • 2x 10GbE, which is neat, but I can't really take advantage of it yet.
20181214_142828.jpg
20181214_142901.jpg

20181214_142957.jpg


Funny, the box says 24 DIMMs, but there are only 16 slots. I think they're referring to other X11 boards, some of which do have 24.
20181214_143302.jpg

CPU/RAM/STORAGE:
Not acquired yet. Still trying to decide which SKU is best for me. Here is an interesting breakdown matrix of the Xeon Scalable lineup: Intel Scalable Processors Xeon Skylake-SP (Purley) Buyers Guide
For storage I'd like the Samsung Datacenter 883 DCT SATA SSDs or the Intel 4510, but most of it will be regular old 3.5" HDDs, since I already have a lot of them around.

Rack:
This is going into a Tripp Lite 18U, 33" deep rack enclosure. I understand Norco chassis are difficult to find slide rails for that fit properly in a standard 19" rack. I plan to try the RL-26 rails, but I've read mixed reports. I have a few sliding shelves I can use if the RL-26 rails are a complete failure.

MISC:
6x SAS SFF-8087 cables (two per package) with right-angle connectors at the terminating end (helpful because the backplanes in the 4224 sit very near the fan wall). I can only utilize three of these for now, as I only have three SAS connectors on the mobo and have not ordered a SAS card yet. As I fill up the 24 drive bays I'll either get a SAS expander or purchase a few LSI SAS 9211 controller cards so I can use all 6 backplanes in the 4224.
20181214_154856.jpg
Exhaust fans and a big old PSU
20181214_155221.jpg
Three of the following for the fan wall:
20181209_041039.jpg
It's a work in progress and this is just me laying the foundation. Feel free to leave a comment anytime.

Thanks,

benny
 

Evan

Well-Known Member
Jan 6, 2016
Just for dev usage and not a commercial license, right?
If you do need a commercial license, then I like the very high-frequency, low-core-count parts to keep Oracle license costs down but performance up. Oracle Apps loves memory!
Looks like a very solid build.
 

BennyT

Active Member
Dec 1, 2018
Hi Evan,
Yes, good question regarding Oracle product licensing. I'm using their products for learning and development; all the Oracle Enterprise editions and Apps products/modules would cost a fortune otherwise. Fortunately I also have a MOS account for patching, Doc IDs, etc.

Regarding VMware licenses, I will be using a VMUG yearly subscription to acquire the vSphere products.

I hope to come to a conclusion on the CPU and RAM decisions next month.

Thanks

Benny
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
BennyT said: Pictured below, looking top-down into the case: the fan-wall backplane. It's powered by the single Molex shown near the top of that image. To the left of it are three sets of two-pin headers grouped tightly together, so it almost looks like a 6-pin header. I read that the two-pin headers are for "sense" and "control". Someone on the forums actually reverse engineered it because Norco doesn't provide documentation. In other words, you could connect those pins to the corresponding sense/control pins on a 4-pin header of a mobo. The fans would be powered by the Molex, but they'd receive PWM speed control via the sense/control pins. It's weird and overly complicated. I plan to simply remove the fan backplane PCB, put Noctua PWM fans into the caddies (if they'll fit) and connect the fans directly to the motherboard headers rather than the Norco fan backplane. Not 100% sure on that idea; we'll see.
'Twas I that did that on the UK incarnation of this case; I wouldn't really say it's a complicated way of doing it, it's actually very simple and gels well with how PWM works. I'd have liked them to consolidate all the PWM functions into a single pair of pins, but there we go.

Connecting three fans to the motherboard via individual 4-pin connectors was possible, but the design of the case doesn't lend itself to that: you'd need looooong 4-pin extensions to do so (the fan wall runs up to the roof of the server, so you have to go around the sides), and you lose the ability to easily swap/remove the fans. The cables going into the backplane are also impossible to get at unless you remove the fan cages, as the space is very tight.

Edit: just noticed that your version of the fan wall actually seems to have a slot underneath the fans which mine doesn't (but then yours is the 4U vs. my 3U).

The fan cable mod I did was dead easy: I just cannibalised one of the 4-pin extension cables that came with one of my Noctuas, twisted (and eventually soldered and heat-shrinked) some wires together, and voila.

As per your post, it's not a perfect case, but it's easily good enough for home use, especially given the price (although now that I know better I'd probably have spent extra on the InWin equivalent to get an SGPIO-capable backplane).
 

Evan

Well-Known Member
Jan 6, 2016
If you're not paying any per-core license costs for Oracle (VMware is per socket currently), then you can go for more or less any CPU. Sure, I wouldn't go for the especially low-speed ones, but otherwise anything with a base clock of 2.2GHz and a good turbo is where I would be looking.

For low power, a pair of 4110s or 4114s or similar; but then again, if lower power were really the target, I would be more inclined toward a single 6132 or similar. Or of course go crazy and put in dual 6132s or 6148s :)

(Anyway, those CPU numbers are just the more common ones; others are also good, but stick to either the economy low-power 41xx or the good-performance 61xx parts. Bronze is a miss, the 51xx with its single FMA unit is a miss, and Platinum is super expensive.)
 

Jannis Jacobsen

Active Member
Mar 19, 2016
Don't get used to running Oracle DB in VMware :)
We considered it a few years back and talked to Oracle.
We would have had to count all the cores on the host machines in the VMware cluster and license Oracle based on that.
That would have meant 16 or so extra Oracle Enterprise DB licenses at around $30,000 per license.
We got a 4-core standalone server instead, to use with our 2 existing Enterprise licenses :)

-j
 

Evan

Well-Known Member
Jan 6, 2016
That's why I asked about the license; if it's just for development and play and you have what you need, then OK.
Something like the Gold 5122 is what I would use for just Oracle: 12 x 32GB = 384GB per socket, and that's a nice Oracle DB server with a decent license cost.
 

BennyT

Active Member
Dec 1, 2018
Hi @Jannis Jacobsen, yes, this is just for dev, proofs of concept, and learning at home while experimenting with new Oracle product releases. No commercial production environments will be on this.

@EffrafaxOfWug, thanks for researching those fan pins for us. You've convinced me and I will try it. It would be great to keep hot-swap-ability while using PWM fans.

@Evan I would like a couple of 6130s, but was also looking at the 5118 or 5120. I didn't know about any drawbacks in the 51xx series, though. I'll keep researching. Thanks

Benny
 

Evan

Well-Known Member
Jan 6, 2016
Not so much drawbacks as not value for money. Stepping up to the 6100 series gets you 2x FMA units, 3 UPI links instead of 2 (which of course has no real impact for you and your board), and 2666 memory support over 2400.

I like the Gold 6132, say, compared to the 5120 for the not-that-much extra $$.

Take a look at Patrick’s benchmarks as an example, page 2
https://www.servethehome.com/intel-xeon-gold-5120-benchmarks-and-review/
 

BennyT

Active Member
Dec 1, 2018
Hello

The Noctua fans fit in the Norco 4224 hot-swap caddies. I also put a couple of the 80mm ones on the exhaust.



20181224_223301.jpg 20181225_081435.jpg

And the rack is ready, just waiting on other parts to arrive.

20181219_171233.jpg


Merry Christmas

Benny
 

BennyT

Active Member
Dec 1, 2018
These are breadboard wires from years ago that I never used, 13" (33 cm) long. I'd bought them for various Arduino hobby projects, but that never took off. They were male/female, so I unclipped the black plastic from a few of the male ends, trimmed the pins off, and re-attached the plastic clip. Now I have female-to-female. I'll use these to connect the PWM mobo header pin 4 (control) and pin 3 (sense) to the corresponding control and sense pins on the Norco fan-wall backplane.

20181225_155201.jpg
2018-12-26_8-49-52.png
 

BennyT

Active Member
Dec 1, 2018
*Edit1/Update:
Although I still don't fully understand LRDIMM vs. RDIMM or ranks, I decided on 32GB sticks. My logic is that when I eventually purchase a 2nd CPU, with 32GB sticks instead of 64GB sticks I'll have more sticks to move over from CPU1 to CPU2. And honestly, 192GB via 6 channels per CPU isn't bad at all.

*Edit2/Update Dec 28, 2018: We decided to proceed with two 32GB sticks for now instead of four or six. The "war department" questioned why we needed $1,500 of RAM considering I'm still learning how to use ESXi.

Purchased:
2x Hynix 32GB RDIMM 2666MHz from SM eStore

1x Gold 6130

1x Noctua NH-D9 cpu heatsink with 92mm fan for 4u narrow socket

1x Norco RL-26 sliding rails. I'll give them a shot even though I hear more bad than good about them. I need to see for myself and will post the results here. There are at least two versions of these rails and I'm unsure which design I ordered. If they won't work with the 4224 chassis, I'll get some $32 NavePoint universal rails.
 


BennyT

Active Member
Dec 1, 2018
I've been wiring the fan loom for the control/sense PWM pins on the mid-chassis fan wall.

I traced the sense/control pin orientation on the Norco fan-wall backplane. I'm going to try using a single mobo PWM header (probably FANA?) to handle control and sense for all three fans on the fan wall. Does anyone see a problem with that? Using a single fan header on the mobo means only one of the three fans can report pin 3 "sense" info to the mobo, but the mobo can send all three fans speed control on pin 4. I'll post photos in case it helps someone do the same on their Norco 4224.

The fan-wall backplane: each fan caddy connects to one of the three headers below. The Molex power feeds pins 1 and 2 (see the thicker traces on top going to all three headers). The other pins (3 and 4) are fed from the six pins in the lower-left corner of the photo.
20181229_172620.jpg

Closeup of the 6 pins. I'm only going to use one mobo header to send control to all three fans. But because I'm only using one mobo header, only one of the three fans can report "sense" info on pin 3; I chose the middle fan to send "sense" to the mobo.
MagnifyingGlassWithLight - 02018-27-29-05-27-27.jpg

I used breadboard wires to connect the various pins to my splitters (the splitters came with the Noctua 80mm exhaust fans, nice!). The blue wires are for "control" (pin 4) and the single green one is "sense" (pin 3). Note there is only one green pin 3 connected.
20181229_172543.jpg

20181229_173857.jpg

The Noctua fans came with extensions in addition to the splitters, so I used the extensions to keep from cluttering the mobo area with ugly blue/green breadboard wires. I'll try to hide the breadboard wires away under the fan PCB when I do cable management later.
20181229_174452.jpg

20181229_181252.jpg

20181229_181551.jpg

I'm very thankful to the kind people on this forum like @EffrafaxOfWug who first identified how this Norco fan wall handles PWM fans. I wouldn't have known what to do. I'm simply documenting it now with photos for others to follow later.
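Once the board is powered up, I also want to sanity-check that the BMC really sees RPM from the middle fan over that single header. From what I've read, something along these lines over IPMI should list the fan sensor readings (a sketch, untested on my part; the BMC IP and credentials are placeholders):

# query the BMC's fan sensor records remotely over the IPMI LAN interface
ipmitool -I lanplus -H <ipmi-ip> -U <user> -P <password> sdr type Fan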
 


BennyT

Active Member
Dec 1, 2018
I have a noob question regarding IPMI, the AST2500, and discrete GPUs.

Can anyone think of a reason I might need a dGPU? I have an MSI AMD RX 470 lying around not being used; is there any reason I should install it in this build? The only reason I can think of is to see the motherboard POST and BIOS, or to make an image of a datastore using Clonezilla, etc.

My other standalone Linux servers have integrated GPUs and I've never given dGPUs much thought. But I think the Supermicro board uses IPMI (which I've never used before) and the AST2500 chip for onboard graphics, for accessing the BIOS, etc.

Would having a dGPU benefit me at all ?

Thanks
 

rune-san

Member
Feb 7, 2014
81
18
8
There's no benefit to installing the GPU. ESXi has no need for high-end graphics; the AST2500 will give you all the graphics you need through the IPMI.

The only reason I'd install it is if you have a VM workload that might benefit from a GPU being passed through to it. Oracle at this time isn't really leveraging GPU offload, although I've seen it toyed with.
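If you ever do want to experiment with passing it through, you can check what the host enumerates from the ESXi shell before toggling passthrough in the client. Something like this should list it (just a sketch; the RX 470 would show up as a Display/VGA class device):

# list the PCI devices the ESXi host sees
esxcli hardware pci list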
 

BennyT

Active Member
Dec 1, 2018
Thank you @rune-san for confirming about IPMI and the AST2500, etc. And you are correct, I won't need GPU passthrough. Oracle Apps/EBS/Database is far from graphics-intensive, that's for sure.

-


While I'm waiting on parts to arrive, I decided to get a VMUG Advantage subscription and begin downloading their products.

There seem to be duplicated ISO files spread across the various product lines in the VMUG Advantage eStore. So I went to VMware's official website and compared the products on VMware's download pages to the products in the VMUG store, because the VMUG download descriptions are pretty sparse compared to VMware's official ones.

The files I think I want from the VMUG store are these:
VMware vSphere with Operations Management Enterprise Plus
VMware-VMvisor-Installer-6.7.0-8169922.x86_64.iso
Boot with this ISO image in order to install ESXi.​
VMware vCenter Server Standalone for vSphere 6.x (English)
VMware-VCSA-all-6.7.0-10244745.iso
The ISO includes the GUI and CLI installers (I plan to use the GUI installer) for the vCenter Server Appliance, Platform Services Controller, vSphere Update Manager, and Update Manager Download Service.
There was also a VMware vCloud product lineup which contained many of the same image files as the vSphere and vCenter product lines, but also included additional OVA files for big data and data protection. I don't need those OVAs, but for some reason the activation codes for the ESXi hypervisor were not listed under the VMware vSphere product (just the serial number was there). The activation code was instead listed under the vCloud product, so I added vCloud to my VMUG checkout cart. Perhaps where the activation code is absent, the serial #s can be used the same as activation codes. *Edit: affirmative, some products are activated using an "activation code", others using a "serial number".

In conclusion, I checked out with: 1) vSphere with Operations Mgmt, 2) vCenter Server, and 3) vCloud.

I made a record of the serial #s and activation codes from each of those products, but the only files I downloaded were the two .iso images shown above.

Happy New Year

-Benny

*EDIT: I just noticed that the build versions on the VMUG website are a little behind the build #s on VMware's official download page. I don't think that will matter, though, because once installed I imagine I can simply do a "check for updates" or something similar. The major release versions matched; just the build #s on VMUG were a little behind. *Edit: affirmative, once installed I was able to update to the latest build #. At least that's how it is for Workstation Pro on my laptop; I haven't tried it yet with vSphere on the server, as I'm still waiting for the CPU, heatsink and RAM to arrive, but I expect it's the same. I'll update here once I can confirm whether I can update to the latest vSphere build #s.
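If the ESXi build on the VMUG ISO really is a bit behind, my understanding is the host can be patched from the ESXi shell against VMware's online depot along these lines (a sketch I haven't run yet; the profile name is a placeholder to be picked from the list the second command returns):

# allow the host to reach VMware's online depot
esxcli network firewall ruleset set -e true -r httpClient
# list the image profiles available in the depot
esxcli software sources profile list -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml
# update the host to a chosen profile (placeholder name)
esxcli software profile update -p ESXi-6.7.0-XXXXXXXX-standard -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml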
 

BennyT

Active Member
Dec 1, 2018
20190101_161255.jpg

I populated 10 of the 24 sleds in the Norco chassis with drives of various capacities, 23TB total.

To channel the airflow properly around the populated sleds, I filled the empty sleds with styrofoam. I got this idea from the user Stux on the FreeNAS forums.

Norco used to sell this chassis with adjustable sliding vents on the front of each drive sled, but no longer. You can still buy the old sleds from IPC for $5 each if you don't mind paying ridiculous shipping prices, but the packing styrofoam the chassis shipped in works perfectly too.
 

BennyT

Active Member
Dec 1, 2018
166
46
28
The 6130 arrived today along with the Noctua cooler. Gratuitous photos.

The Skylake-SP is huge compared to the LGA1155 processor pictured next to it.
20190102_172417.jpg 20190102_172356.jpg 20190102_175320.jpg 20190102_173105.jpg

Narrow-ILM and square-ILM clip adapters came with the heatsink. Noctua will ship the clips for the Omni-Path fabric SKUs free of charge if that's what you have, but they don't package them with the cooler. I think that's because the fabric CPUs are pretty rare, especially in a DIY setup using Noctua coolers.20181102_172640.jpg 20181102_172602.jpg

I like this 92mm cooler because it pushes the air out toward the exhaust. Some other third-party coolers have the fans oriented so they push the air in an oddball direction toward the PCIe slots instead of toward the I/O shield. I think that happens with 120mm+ heatsink fans because otherwise the fans might contact or interfere with other things on the board, such as RAM or the other socket.

I'm excited to get the hardware side of this build done and get on to installing software and setting up my VMs. I'm imagining how awesome it will be when applying big patches to an Oracle Apps EBS environment. Before, I'd only have 2 parallel workers running concurrently, and some big upgrade patches would literally run for half a day on my older systems. Imagine if I can now allocate a bunch more cores prior to applying a patch. Zoom! I'm excited! Then after the patch I can deallocate those cores. I'm going to love having these systems virtualized.
 

BennyT

Active Member
Dec 1, 2018
I put most of it together this evening. I'm still waiting on a boot drive to arrive; until then I'll play around with IPMI or install ESXi onto a USB drive to get my feet wet.

If anyone could recommend a good choice of RAID 10 controller, please let me know. I've connected three of the SAS backplanes to the three internal mini-SAS connectors on the mobo for now, but I'd really like to learn how to set up RAID 10 for some of the datastores. I'm thinking three 8i SAS RAID controller cards, or one 16i plus an 8i.
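In the meantime, before buying any cards I plan to confirm what ESXi actually sees on the onboard connectors. Something like this from the ESXi shell should do it (a sketch):

# list the storage adapters ESXi detects
esxcli storage core adapter list
# list the disks attached behind them
esxcli storage core device list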

20190103_221734.jpg
*Edit: I may move the heatsink fan that is closest to the midplane to the other side of the heatsink, since it's so close to the 120mm fan wall. Or maybe I'll remove that extra 92mm heatsink fan altogether. We'll see how cooling and noise levels are.


Norco RL-26 ball-bearing rails. These are cheap quality, but they do fit and work; they're far from glassy-smooth sliding, though. But only $40. I may put in universal NavePoint fixed L rails if the ball bearings begin to fall out or the rails start jamming.
20190103_154502.jpg

20190103_181153.jpg

20190103_180852.jpg
 

BennyT

Active Member
Dec 1, 2018
Bummer! The computer will not POST. I've connected a display to the VGA port: no signal. I need to order a speaker/buzzer for header JD1 on the motherboard to hear what error beep codes I'm getting.

I read that the Scalable processors can be finicky about how the heatsink is bolted down. Supermicro says 12 inch-pounds. Maybe I'll need a torque wrench too, because I have no idea how to measure that without one. Patrick's review said he had POST errors from improper torque.

Any suggestions greatly appreciated. Thanks!
 