Synology Replacement


whitey

Moderator
Jun 30, 2014
Thanks man, appreciate it. Just getting everything ready. In that case, ESXi 6 it is. Probably going FreeNAS now.
And the crowds rejoice... j/k. I always cringe when I hear/read UnRaid. I KNOW some here love it, but I guess I've just been spoiled, or drank the ZFS kool-aid too LONG ago, to consider anything else.
 

whitey

Moderator
Jun 30, 2014
The LSI card only needs x4 lanes of PCIe 2.0, max.

The Intel X520, depending on the model, could use PCIe 2.0 x8 if I recall, so I'd likely swap the placement of those, since the LSI won't ever need more than the PCIe 2.0 spec, and likely only x4.

Intel 750 NVMe: x4 PCIe 3.0
LSI: x4 PCIe 2.0 (so that's what, like x2 of PCIe 3.0?)
X520: likely x8 PCIe 2.0, depending on card/model, I believe

That should leave enough lanes for the onboard stuff too, I THINK :)
@T_Minus is correct. I had my X520 in a x4 slot and it capped at 8Gbps, which I always assumed was a bus limit but never did the math; as soon as I slapped it back in a x8 slot I got near line-rate 10G performance back. Guess PCIe gen2 vs gen3 makes quite a bit of difference as well.
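Doing the math mentioned above: usable per-lane bandwidth is roughly 2 Gbps on gen1 (2.5 GT/s with 8b/10b encoding), 4 Gbps on gen2, and about 7.9 Gbps on gen3 (8 GT/s with 128b/130b). A quick sketch; the slot widths are just examples:

```shell
#!/bin/sh
# Effective PCIe bandwidth per slot = transfer rate x encoding efficiency x lanes.
# Gen1/2 use 8b/10b encoding (80% efficient); gen3 uses 128b/130b (~98.5%).
awk 'BEGIN {
  print "gen1 x4:", 2.5 * 8/10    * 4, "Gbps"   # 8 Gbps -- matches the 8Gbps cap above
  print "gen2 x4:", 5.0 * 8/10    * 4, "Gbps"   # 16 Gbps
  print "gen2 x8:", 5.0 * 8/10    * 8, "Gbps"   # 32 Gbps -- what a dual-port X520 wants
  print "gen3 x4:", 8.0 * 128/130 * 4, "Gbps"   # ~31.5 Gbps
}'
```

So an X520 capping at exactly 8Gbps in a x4 slot suggests the link was actually training at gen1 rates (or gen2 x2); a proper gen2 x4 link alone would have covered one 10GbE port.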
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
I hadn't used Intel onboard SW RAID with ESXi until now... I set up RAID1 on 2x 80GB S3500s to be used for ESXi & napp-it... little did I know that Intel SW RAID (or any software RAID, after I researched) does not work in ESXi, and on the install boot you are presented with both drives :)

Powered off, moved the drives to the onboard LSI 3008, set up the new RAID1, rebooted to install ESXi, and bam: 1 volume detected, install continues.

Going to test this out and see how it goes; may still move ESXi to USB, and then set up LSI RAID1 again on 2x 200GB drives for napp-it and a couple other VMs I care about being mirrored.

Got a couple more sticks of ram so the new setup looks like:

-Mobo: SuperMicro X10SRH-CF w/on-board LSI 12Gb/s (3008) controller
-CPU: Intel E5-2670 V3 (1 CPU)
-RAM: 96GB Hynix (16GB DDR4 x6)
-SSD: 2x S3500 80GB (RAID1/ ESXI+NAPP-IT)
-SSD: 2x Intel 710 100GB (RAID1/Security Camera Feed Clips+Pics)
-AOC #1: SuperMicro NVME
-AOC #2: Intel 10GigE 2 Port NIC w/Fiber -> Switch (likely 1 for general VMs, 1 dedicated to security VM/network)
-AOC #3: M1015 HBA (for 6 WD Drives)
-NVME: 2x Intel 400GB 750 NVME (VM Guest OS)
-SATA: 6x WD RED 5TB
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
Idle, with the napp-it VM idling.
No HDD.
No NVMe.
No 10Gig NIC.
64GB 2133 RAM
LSI onboard 3008 "on/enabled/in-use" (2x Intel S3500)
CPU fan at 100% (SM 4U active) due to not having reconfigured thresholds yet.

Idle Power @ Outlet: 58W

Not bad for 12 cores @ 2.3GHz and the fan at 100%.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
I'm thinking it may not be the best plan to use onboard SATA and "pass through" the disks from ESXi to napp-it via command-line "hacking" to make it work...

Before I got to test that out, I seem to be having an issue with one of the disks that already has a partition...
"Error: Can't have overlapping partitions"

Or, should I say, the vague vSphere error of "Cannot change host configuration"... for how much this stuff costs (VMware in general) you'd think they'd have better errors!

Gotta fix that first! Then I might just slap in the M1015 and call it a day, or try to hack the CLI a bit more to pass through the disks one by one, since the onboard SATA device can't be passed through as a "whole", at least not that I can see.

Sadly it's the same error when trying to set the partition table to msdos too (it won't let me) -- so gotta see where to go next!
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
None of the partedUtil commands would work --- all giving me the "Overlapping Partition" error.

Thought I'd try a quick fix and slap it in my Windows desktop's USB 'on the desk' quick-and-easy drive test/transfer station... found 8 partitions with random sizes and some unallocated space... deleted all the partitions and was left with 2 unallocated areas, each ~50% of the drive.

Diskpart in Windows 10 showed 0 partitions.

Issued a diskpart "clean" and bam! That cleaned it up and combined them into 1 unallocated area.

Tossing it back in my build, and we'll see what ESXi does now :)

And it's seen, and working in ESXi now... wahoo ;)

If only the partedUtil 'fix' had worked!
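For reference, the same cleanup can usually be done without pulling the drive: when partedUtil chokes on overlapping entries, zeroing the partition tables directly with dd clears them the same way diskpart's clean does. A sketch, demonstrated on a scratch file rather than a real disk; on ESXi the target would be something like a /vmfs/devices/disks/naa.* path, and you should triple-check the device name before running anything like this against real hardware:

```shell
#!/bin/sh
# Simulate a disk with a garbaged partition table, then wipe it the way
# "diskpart clean" does: zero the first and last 34 sectors, which covers the
# protective MBR plus the primary and backup GPT headers/entry arrays.
DISK=/tmp/fakedisk.img   # on ESXi this would be e.g. /vmfs/devices/disks/naa.XXXX
dd if=/dev/urandom of="$DISK" bs=512 count=2048 2>/dev/null   # scratch "disk" full of junk

SECTORS=$(( $(wc -c < "$DISK") / 512 ))
dd if=/dev/zero of="$DISK" bs=512 count=34 conv=notrunc 2>/dev/null                           # front: MBR + primary GPT
dd if=/dev/zero of="$DISK" bs=512 count=34 seek=$(( SECTORS - 34 )) conv=notrunc 2>/dev/null  # tail: backup GPT

# After this, "partedUtil mklabel <disk> msdos" (or gpt) should succeed.
```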
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
105W idle

Excluding:
- 2x Intel 750 NVMe & SuperMicro NVMe AOC
- Intel NIC
- (maybe) M1015 HBA
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
Was hoping to run dd+badblocks yesterday/today, but we had a storm come in yesterday and I didn't want to deal with having to re-start the tests! Good thing: the power went out a handful of times and came back early this AM. It's supposed to come in again now with high winds and snow!! I don't want to worry about the battery backup, the generator, or something else, so I'll wait :) until Monday/Tuesday when it's passed!

Any reason not to use -c with badblocks and have it eat up 8GB of RAM at a time (or more)? Seems it would finish much faster. I did some quick searching and it seems most leave it at the default, but others bump it to use 1-2GB at once -- I'm thinking those are all "home" users with little RAM available... if I have 64+GB available, it seems that if I want it to go as fast as possible I should do it in bigger chunks?
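For what it's worth, -c is "blocks tested at a time" (the default is 64), and a destructive -w pass holds roughly two buffers of block-size times count (the write pattern plus the read-back). A little sketch of the math; the drive path is a hypothetical example, and I haven't verified where the speed returns actually flatten out:

```shell
#!/bin/sh
# Rough RAM use for badblocks in destructive write mode (-w):
# about 2 buffers of (block size x blocks-at-once), pattern + read-back.
BLOCK=4096          # -b: block size in bytes
COUNT=1048576       # -c: blocks tested at a time (default is only 64)

RAM_GIB=$(awk -v b="$BLOCK" -v c="$COUNT" 'BEGIN { print 2 * b * c / 1024^3 }')
echo "badblocks -b $BLOCK -c $COUNT -wsv /dev/sdX  => ~${RAM_GIB} GiB of buffers"
```

Nothing about -c stops you from going big on a 64GB box, though past a few hundred MB the drive, not RAM, is likely the bottleneck, since each chunk is still written and read back sequentially.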
 

Patrick

Administrator
Staff member
Dec 21, 2010
@T_Minus you might find it funny but I built a 4x5TB unit this week. I am not 100% sure why or what it is going to be used for, but I think seeing your thread title on the recent posts stream made me think "it is about time to build yet another small NAS".
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
I am ready to throw out this Fractal Define R5 and XL and any other Fractal case I have, due to their bullshit of not including enough rubber pieces to install more than 2 hard drives!!! Seriously, enough for 2 out of 8; why they would not include enough screws and rubber is beyond me.

I checked my XL case, same thing!!!!
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
My XL accessory box says:

Rubber Spacers: 12
Anti-Vibration HDD Screws: 32

I have 32 screws, but why, why, why only 12 spacers?

Checking the R5 box to see what number is in there; makes no sense to me... even if I combine the rubber spacers I can't install 8 hard drives!
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
Well, it looks like my Fractal XL case actually has the anti-vibe spacers already on the HDD trays.
My Fractal Define R5 is missing all of them... ugh.

At least I have enough to get my 8 drives installed now! I started hunting for my old Antec ones; they work, but I didn't have enough still around from those old cases.


Well, that saved my build.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
And she's aliveeeeeeeeeeeeeeeeeeeeeeeeeee

Just need to migrate the Icy Dock 6x 2.5" and 2x 12Gb/s drives, and start installing OSes... got ESXi 6.0 U1 installed on the 32GB SanDisk, and ready to load up napp-it on the S3500.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
I couldn't stand not testing her out, so the Icy Dock will wait :)

Got napp-it installed, my raidz2 pool set up with 6x 5TB WD RED, and installed the ZeusRAM and a 200GB 12Gb/s SAS drive to play around with as SLOG.

Installed Win10 onto that pool and tested... then increased cores so it could go faster :)

sync=always

This is with the ZeusRAM as SLOG (no L2ARC) on the raidz2 pool made of spinning rust.
(WD RED 5TB, 32GB RAM, all cores available)

ScreenClip.png


(Note: this is the 200GB SAS 12Gb/s drive, not the ZeusRAM -- the ZeusRAM was almost 2MB/s faster on 4K, but I did not re-bench the SAS drive after changing the CPU allocation... hit 20GHz max aggregate during testing)

ScreenClip.png


My wife says I need to step away from this thing and join the real world for today/tomorrow, so I'll be back tomorrow night to play some more :) I have the Intel 750 NVMe passed through but not working in napp-it yet, and will throw in a P3600 tomorrow to try out.

I'll also be adding the 4x SSD to the other 12Gb/s port and installing Win10 on some S3500s or other SSDs, maybe 730s? And play around :)


Oh, and the sweet part... 120W idle WITH the ZeusRAM (~12W) and the 2x 200GB SAS SSDs. Without those, ~104-108W idle! Not bad for 8x HDD, 2x SSD, 2x NVMe, M1015, SM NVMe AOC, onboard LSI 3008 HBA and 12 cores that will do 2.99GHz :D

The Intel 750 400GB will be for the actual guest OS; I just figured I'd test the raidz2 out for fun while it's not utilized yet :D
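For anyone following along, the ZFS side of the setup above boils down to a few commands. This is a sketch with made-up OmniOS-style device names, not the exact commands used on this box; RUN=echo keeps it in dry-run mode so it prints the plan instead of touching disks:

```shell
#!/bin/sh
# The pool layout described above, sketched as ZFS commands (napp-it/OmniOS).
# Device names are hypothetical. RUN="" executes; RUN=echo just prints the plan.
RUN=echo

$RUN zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0  # 6x WD RED 5TB
$RUN zpool add tank log c2t0d0   # ZeusRAM (or the 200GB 12Gb/s SAS SSD) as SLOG
$RUN zfs set sync=always tank    # force every write through the ZIL/SLOG, not just sync writes
$RUN zpool status tank           # confirm the "logs" vdev is attached
```

sync=always is what makes the SLOG benchmark meaningful here: with the default sync=standard, the async writes a Win10 guest generates would mostly bypass the log device.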
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
Thanks.

I'm already wishing I'd put it in a 2P Xeon board; I need more PCIe slots to test and set up various configs.

Thinking I'll keep an eye out for a 2nd one of these CPUs and throw it in an X10DRX when I find another deal on one. I still need to find 2 more 3U chassis made for that board, but for a DIY home build I don't "need" the special 3U case; I'll DIY a case if needed... I'd just love the PCIe slots so I can easily add HBAs, IB NICs, NVMe PCIe cards and M.2/PCIe adapters for playing around :)
 