Half decent "All in One" Build


pricklypunter

Well-Known Member
Nov 10, 2015
1,708
515
113
Canada
Build’s Name: Callicrates (Beauty & Power)
Operating System/ Storage Platform: ESXi 6 / Server 2012 R2 / Debian
CPU: E3-1245v2
Motherboard: Intel C204-based
Chassis: Chenbro Micom RM23212
Drives: Most likely 4TB HGST Deskstar NAS
RAM: 32GB ECC Unbuffered 1333 MHz
Add-in Cards: ServeRAID M1115 (in IT mode), probably adding a 4-port Intel Gbps NIC later
Power Supply: No-name 400W initially. Will upgrade to hot-swap as funds allow
Other Bits: Possibly a Samsung 850 or two

Usage Profile: Local media server, possibly NAS, Elastix PBX, VM playroom

Other information…

Hey folks,

I have built a ton of machines over the years, including a few small servers for friends etc., but this time I'm about to embark on a half-decent build for my own use here at home. I still have a few kinks to work out and some money to find for it, but I'm nearly there.

Here are some random thoughts I've been having while waiting for the hardware to arrive...

The VM datastore will live on at least one, or if funds allow, a pair of mirrored Samsung 850's hanging off the integrated 6Gbps SATA ports on the mainboard. ESXi will either also be on those, or perhaps on a small 2.5" drive or USB stick, I haven't decided yet.

The mainboard has 2 onboard SAS2 ports driven by an LSI2008, so I could just pass that through to the media server VM directly, most likely Plex on Debian, giving me 8 bays of local storage rather than using an iSCSI link to a separate media store volume. I also have an M1115 that I can pass through to the Windows server VM and can use the remaining 4 bays for that, expanding it later internally with a breakout cable. The PBX VM can quite happily run in a 40GB partition on the VM datastore drive(s); it doesn't really need much more than that, and its TDM card can be passed through.

Another way I see of doing this would be to pass through both SAS2 controllers to a Debian VM and run all 12 bays in a NAS-type arrangement, sharing out volumes to the other VMs as needed. This would involve using iSCSI and SMB, albeit internally. I don't see that being an issue, as it will all be getting stuffed down a couple of 1Gbps teamed NICs initially anyway; when funds allow I'll add in a 4-port Intel card and use the integrated NICs for a management-only port.
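If I do go the NAS-VM route, the internal sharing side is nothing exotic. Something along these lines is roughly what I have in mind for the SMB share to the other VMs (pool path, share name and user are placeholders I've made up for the sake of the example); the Windows volume would be a ZVol exported over iSCSI in similar fashion:

```
# /etc/samba/smb.conf on the Debian NAS VM -- a rough sketch, names are placeholders
[media]
   path = /tank/media
   browseable = yes
   read only = no
   valid users = mediauser
```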
Anyway, that's kind of where I'm at on the hardware front.

The storage side of things is a bit more murky for me. I'm still undecided between MD RAID with LVM and ZoL. My thinking is that ZFS is new and interesting, and from what I have read so far it seems like a good contender. The other side of that, of course, is that MD/LVM has been around since man was still chipping away at stone tablets; it just works, is reasonably fast, and for the most part never really seems to give folks trouble. There are ups and downs with both schemes, obviously.
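Just to make the comparison concrete in my own head, the two routes for an 8-disk array boil down to something like this (device names are examples only; in reality I'd use the /dev/disk/by-id names, and layout/sizes would depend on the final disks):

```
# Option A: MD RAID6 + LVM + ext4 (a rough sketch)
mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/sd[b-i]
pvcreate /dev/md0
vgcreate vg_media /dev/md0
lvcreate -l 100%FREE -n lv_media vg_media
mkfs.ext4 /dev/vg_media/lv_media

# Option B: ZoL RAIDZ2 (again, just a sketch)
zpool create tank raidz2 sdb sdc sdd sde sdf sdg sdh sdi
zfs set compression=lz4 tank
zfs create tank/media
```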

The workloads expected are maybe 2 or 3 streams from the media server, with perhaps one of them transcoding; the Windows server will be getting hit up for the usual stuff like DNS, DHCP, AD etc.; and the PBX will see very light, sporadic use with likely little or no transcoding involved.

I welcome some comments on my plans :)

Am I bonkers for even attempting this? Pointers for better performance etc?
 

whitey

Moderator
Jun 30, 2014
2,766
868
113
41
Sounds VERY similar to what a LOT of us have done here, including myself. I run a 3-node vSphere 6 cluster w/ an assortment of stg solutions... some VT-d HBA, some pure virtual storage appliance. In the end the AIO/VT-d setups will give you the best performance/flexibility/scalability. Recommend a small SanDisk Cruzer Fit for the ESXi install, use the (hopefully mirrored) 850's off the local SATA ports, and install your AIO/VT-d stg appliance VM there (choices here are only limited by your imagination). I personally prefer ZFS-based Open Storage NAS's such as OmniOS (Solaris, OI, SmartOS, any Illumos-based distro will work), FreeNAS, or napp-it on top of Omni/whatever ZFS slice of bread you may choose, or like you said go sw Linux RAID mdadm, or you could even tread the bleeding edge of btrfs/Rockstor. I would contend though that ZFS is not 'the new kid on the block'; rather it's quite mature, with 10+ yrs of dev/usage in PRD env's. WAY more scalable/capable overall feature-set than Linux sw RAID IMH/humble opinion.

More in line/related to your post: I run all you have listed and much more just fine off my infra, very happily, so you are certainly headed in the right direction.

GL on build, it'll all come together and you will be quite happy I predict :-D

ucn-infra-11_11_2015.png
 
  • Like
Reactions: Patrick

pricklypunter

Well-Known Member
Nov 10, 2015
1,708
515
113
Canada
Hi,

Thanks for the encouragement and tips. Sorry about the ZFS comment, I blame caffeine shortage :)
What I actually meant to type there was ZoL, in reference to it being new. I know Solaris et al. have been working away on it for over a decade, so it's obviously a decent solution; my only real concern would be how well/closely the ZoL team have managed to port it. I did look briefly at btrfs, but its development being behind the curve and its lack of stability with higher RAID levels instantly turned me away from it. I suspect, though, that if some decent effort was put behind it along with some real corporate money and will, it would easily become the Linux equivalent of Solaris's ZFS, and by all accounts with a better feature set. Obviously only time will tell how that game plays out in the long run, but I'm keeping my fingers crossed it gains momentum. I would use it over ZFS in a heartbeat, if it were stable, just for the ability to grow an array easily, if for no other reason.

I could also try FreeBSD I suppose, or as you mentioned one of the prebuilt NAS solutions, but beyond staring at those through the wrapper a couple of times, that's about as close as I have ever got to them. I came from a M$ background, so just getting my head round Debian has been a bit of a challenge as it is. I figured Debian, since the other most popular distro, Ubuntu, is based on it, would give me a two-for-one head start in Linux generally. I have also dabbled a bit with CentOS 6/7 on account of the PBX. I'm by no means what I would call proficient with either of them, but I can just about get by with the standard stuff. It's all a learning curve, and Google is my friend.

So would you recommend that I only install the storage solution on the 850's, with the media server etc. on the magnetics, and just share the storage to them, or do you think that giving the media server and Windows VMs direct storage through passthrough would be a better way to go? I'm unsure it makes much of a difference performance-wise for my use case; no matter what I do I think I'm likely to just about saturate my real NICs anyway, but I can see where divorcing the storage from the VMs might allow easier backups and some added flexibility, especially if I later add another server in a cluster arrangement. The setup/config effort is about on a par either way I think, but what about maintenance/admin of it? Also, what do you use as a backup scheme: something like Veeam to another local storage pool, or something cloud based?

Your rig looks really sweet, but I would need a divorce of my own before I would be allowed to play with that level of kit here, and I'm not sure which one would cost me more money :p

I'll get some photos up of the build as it progresses, hopefully it will inspire someone else like me that's just getting started...
 

pricklypunter

Well-Known Member
Nov 10, 2015
1,708
515
113
Canada
I received the first of my parts yesterday, the M1115 RAID controller. I think I have managed to cross-flash it OK; I certainly didn't see any error messages float by during the erase/flash cycle, but I can't test it yet as I don't have any forward breakout cables to test with. Hopefully the rest of the bits will be arriving over the next few days and I can start building :)
 

canta

Well-Known Member
Nov 26, 2014
1,012
216
63
43
Many people are using ZoL.
I am running 3 machines with ZoL on CentOS 6 and 7.

Btrfs RAID 0/1 is OK; I'm running some locally.

I am still waiting for raidz-style (RAID 5/6) functionality in btrfs... still not mature.

Yep... all my servers and workstations are running Linux...

I moved from ESXi to Proxmox early this year and it's running rock solid.
Proxmox 3.4 and 4.
 

pricklypunter

Well-Known Member
Nov 10, 2015
1,708
515
113
Canada
Thanks canta, that's good to know. The last time I looked at Proxmox, it was as a complete Linux newb and I was very quickly overwhelmed and put off by all the CLI work needed to do anything beyond tinker with it. Now that I'm no longer frightened of Linux, I may have another look at it in the future :)

Ok, so my chassis finally arrived yesterday. Here are some photos; apologies for the phone camera quality, it's all I had handy yesterday:

[Chassis photos]

So, my first impressions:

The chassis was well packed and protected in a combination of rigid foam and cardboard, so should survive shipping to almost anywhere.
Solidly built; there's virtually no discernible flex front to back.
Disk caddies slide in and out smoothly with just enough resistance to feel positive, plus they lock home with a reassuringly heavy click.
The backplanes look well laid out, although as far as I can tell the enclosure is not making full use of all the features they offer; that just gives me room to play with it later. There is a decent-sized air space through the backplanes for disk cooling, which is good.
Although the enclosure did not come with slide rails, the screws for mounting them are there ready to go when the time comes to fit some.
The disk caddies are a combination of plastic and metal. There is a little flex in them, but that will all but disappear as soon as a drive is mounted in them. They will accommodate both 3.5" and 2.5" drives.
The fan wall is pretty tight to the backplane, as it should be, there's not a whole lot of room in there if you have hands like shovels. It's sufficient though, given how often you'll need to do anything in there.
The fans, 3 of them at 80mm, are removable by simply squeezing and pulling them up and out of the carriers. The downside is that the backplane doesn't support PWM, being only 3-pin, although the fans do, and the wiring to them is long enough, I think, to reach most mainboards' fan connectors along the top edge. There are plenty of fan connectors though, so I'm sure it would be easy enough to add a small fan controller and more fans should the need arise.
The fans are of reasonable Japanese quality, ball bearing type made by Sanyo. I haven't heard them run yet, but suspect that used with pwm control, they will be quiet enough not to disturb me.
The rear of the enclosure is quite possibly its weakest point; the metal used in that area is a little thin and has quite a bit of flex to it. Some flex here is understandable to a degree, given the large cut-outs required. It's not a deal breaker by any means, and certainly no worse than your average PC/enthusiast case these days, but it's not "Dell" belt-and-braces sturdy. The only other thing I could say about it would be that there is nowhere to mount any rear fans should you wish to; the only fans back there will be power supply fans.
There is plenty of room in the rear to take most mid-level mainboards, a decent power supply and controller cards, and there are 7 low-profile expansion slots, plenty enough for most purposes I would think.

In conclusion, I'm very happy with it. The price, as they say, was right; you get your money's worth here and then some. It will do all that I need. It doesn't look half bad either, with the little understated styling it has to offer around the front end, and I don't think it would look out of place in any rack :)

I'll update the thread as I progress with the build...
 

pricklypunter

Well-Known Member
Nov 10, 2015
1,708
515
113
Canada
Another little update...

The postie finally delivered my RAM and processor today, though not together I might add: they put one in the mailbox and delivered the other later in the day by van.

I decided that, as I was still waiting on the CPU, I may as well test the RAM. I pulled out my wee trusty Dell R210II and dropped the RAM in. It should have booted fine, but it didn't. Fans ramped up and down, then all went quiet. I pulled the DIMMs in pairs, still the same result. Comparing the RAM that I had to the Samsung stuff that Dell put in it, the only difference I could tell was that the Dell stuff has a CAS of 13 while the Crucial stuff I just got has a CAS of 9; other than that, voltage, ranking etc. were identical. My heart sank... When the CPU arrived, I just put it aside, sure in my mind that the RAM was either damaged or incompatible.

After a few hours of moping about in dismay and a bite to eat later, I couldn't resist any longer. I fitted the CPU and RAM to the new box, crossed my fingers and pulled the trigger. Amazingly, it fired right up. Now, the spec in both manuals says that the RAM should work, but for some reason the Dell is being picky. So, I dodged a bullet with it, I think :)

As I had already cross-flashed the ServeRAID card, I turned my attention to updating and going to IT mode on the integrated LSI2008 controller. What a bear that turned out to be. The mainboard is fully UEFI and simply point-blank refused to boot from a DOS disk. I tried everything I could think of, but no dice. I thought I was out of luck, but I kept digging the net. Eventually I came across a post by a smarter ass than myself, as you often do, and voila, as if by magic, controller updated and in IT mode. This was done entirely from a UEFI shell using the LSI sas2flash.efi utility. I take my hat off to him; it could have been days and days of trawling the net before I would have been successful.
Here's a link to the info I used Flashing IT Firmware to the LSI SAS9211-8i HBA - brycv.com
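For anyone wanting the short version, the whole thing boils down to a handful of commands run from the UEFI shell, roughly along these lines (the firmware file name depends on what you download, so treat it as a placeholder, and double-check the linked guide before erasing anything):

```
# Run from the UEFI shell with sas2flash.efi and the IT firmware on a FAT32 USB stick
sas2flash.efi -listall          # confirm the controller is visible
sas2flash.efi -o -e 6           # erase the flash (do NOT reboot before the next step)
sas2flash.efi -f 2118it.bin     # flash the IT-mode firmware (file name is an example)
sas2flash.efi -listall          # verify the new firmware version and IT mode
```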

So with that out of the way, I tore down an old machine and borrowed an old SATA disk, and borrowed another out of my wee Dell R210II. I fitted them to the caddies and cabled up, 1 disk to each controller. I also stuck my TDM card in there, just in case it caused some sort of issue with interrupts or something. Only 1 disk showed up, so the integrated controller was working at least. I moved the card to the next slot and voila, both disks were visible. I just ran the Windows 7 setup DVD to confirm that the disks were available.

It was a real MacGyver setup tonight, with bits and bobs hanging all over the place. I have no cables, no wired USB backplane and no disks for it yet, but at least I know it's all working the way it should be, and I had some fun. Once the rest of the little bits arrive, I'll get the installs done and configured. I have added another few photos below of tonight's effort :)

[Photos of the test setup]

Ok, Ok, I know you are all dying to know. Yes, I was wrong. The fan wall sounds like a lawnmower on nitrous, at least it does when they are blasting full tilt. I moved the wiring from the backplanes onto the mainboard PWM connectors and reduced the noise to about 1/3rd of what it was. There is still plenty of air flow even at that level. In a basement install, it's just about acceptable now, but you wouldn't want it in your living space, not with those fans. I'll need to buy some Noctua's for it. Damn, more money...

Lastly, one thing I hadn't noticed the other day about the caddies. They come with little blanking caps in them to stop air passing through them when they are empty. That means optimal airflow for the caddies that do have disks in them. Nice touch by Chenbro I thought :)
 
Last edited:

pricklypunter

Well-Known Member
Nov 10, 2015
1,708
515
113
Canada
Hey folks, I thought it was about time for another wee update...

As I'm still waiting on loads of little things for this box, like cables, adapters and brackets etc., I figured I might as well get the fan noise under control and get the (new to me) disks tested. The parts that are missing in action really are on a slow boat from China, I think it's a rowing boat :eek:
I was spoiled in the UK, where most items from China usually arrived within 10-15 days, but here it's more like 6 weeks, not counting the time it takes Canada Post to deliver stuff cross country on a bicycle :D

Ok, so here's how the disks arrived. Pretty well packed I thought, so +1 to Newegg's supplier. I had visions of them arriving stuffed in a thick jiffy bag or something. They are all in sealed anti-static bags with little silica gel packs inside. The disks are nice and clean and look to be damage free, no obvious dings or scratches. They are all dated Nov 2012, 6Gbps and have A50 firmware, so at least I'm in with some chance of them being pretty equally matched, speed-wise. For those reading this who haven't read my other posts about these disks: they were sold to me as refurbished from Newegg with a limited warranty.

[Photos of the disks as they arrived]

The way I tested the drives was pretty straightforward and is documented elsewhere in these forums, if you need to do so yourself but don't know how to get started. I downloaded a copy of the Ubuntu live desktop, fired it up and installed both smartmontools and gsmartcontrol. I ran gsmartcontrol and tested all the disks in parallel. I ran both short tests and surface tests and noted the important parameters after each run. The short test took less than a minute to complete; the surface test took around 8hrs for my 3TB disks. Again, I noted the results. So far, so good.

Next, I ran up a tmux shell and created 8 instances and, just for the sake of not getting confused about which screen I was looking at, I named each instance to correspond to what Linux assigned as a drive name: /dev/sda, /dev/sdb etc. I then ran a badblocks cycle in each instance for each disk, which, near as dammit, allows the tests to run on all disks in parallel. The total time taken was very nearly 60 hours to test all 8 disks. When the badblocks cycle was finished on all disks, I ran gsmartcontrol once more and took note of the results: all good. So the disks appear to be in good shape and will hopefully stay that way for a suitably long period of time.

Time spent now, testing disks and weeding out any duffers, will save me a thousand headaches later. Despite the time it takes, especially with larger capacity disks, this is one step you should never skip, even if the disks are brand spanking new. Never forget, the postie can play football with new disks just as easily as he can with refurbished ones :)
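If it helps anyone get started, the CLI equivalent of what I did boils down to something like this per disk (a rough sketch only; badblocks -w is a destructive write test, so only run it on disks with nothing on them):

```
# SMART info plus short and extended ("surface") self-tests
smartctl -a /dev/sda
smartctl -t short /dev/sda
smartctl -t long /dev/sda        # takes hours on large disks

# One detached tmux session per disk so the badblocks runs go in parallel
tmux new-session -d -s sda 'badblocks -b 4096 -wsv /dev/sda'
tmux new-session -d -s sdb 'badblocks -b 4096 -wsv /dev/sdb'
# ...repeat for each disk, then re-check SMART when they finish
smartctl -a /dev/sda
```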

Moving on to the fan noise issue. I decided after my initial test run that the fans were loud enough to be at home on an Airbus A380. The original fans are San Ace 80s, which are, to be fair, decent quality fans. These are used in loads of large chassis, including some of the worst Supermicro screamers out there, so that goes some way to explaining the bleeding eardrums. In a soundproof server room it would not be a problem, but in a home environment it's just way too loud to be tolerated for any more than a few minutes without ear defenders. I need something a lot less aggressive to my ears.

Having read loads of threads over the years about Noctua fans, but never having actually tried any, I thought they might be worth a shot here. I ordered 3 x NF-A8 PWM for the fan wall and an NF-A6 for the power supply from Amazon. 10 minutes with a drill and the wall fans were mounted on their little rubber mounts, as supplied with them. The fan wall is removable in this chassis and is also sitting in rubber grommets, so the fans are pretty well dampened and the chassis should experience minimal vibration from them. The power supply fan required the plug to be swapped over in order to be able to plug into the board. I just snipped off the plugs and soldered the original one back on to the Noctua fan wires, so job done.

In case you're wondering what the large hole is for between fans 1 & 2, it is for an SFF-8087 cable to pass through. The longest cable I was able to source with the required bends on it is about 6" short of making it from the front to the rear of the chassis by taking the normal route, and just to make my day, there are also other cards in the way. I therefore plan on running it in a straight line, or as near as I can, from the ServeRAID card through the fan wall and up to the mini-SAS port on the backplane. Doing it this way should cause minimal air turbulence, if any at all.

And before it gets mentioned, I'm aware that there are no finger guards on the fans. The shipping info was updated yesterday to Jan 14th, but they should have been here already. I think the bloke I got them from in the US is planning on hand delivering them, and he's hitchhiking! I have ordered some more and they should be here in a week to ten days all being well; until then, I'll keep my fingers out :D

[Photos of the new fans fitted]

I have shot a couple of little videos, each about 30-40 secs long, to give you some idea of the improvement. Now, it's not a fair comparison, I must be honest: I forgot that I had a test boot disk (a WD 10k Raptor no less) sitting on top of the power supply in the "Before" videos, so you can hear it also, but just barely over the fan noise. I pulled the storage disks, so they are not in the chassis. I have tried to keep the camera/mic at the same sort of distance away each time, but I didn't measure it or anything :)

These are the "Before" videos:

All fans going full blast: VID 20151123 244132595

Fan wall being controlled by mainboard PWM: VID 20151123 244356935

This was meant to be just the Power supply and CPU fans, but I forgot about the boot disk: VID 20151123 244549390


These are the "After" videos:

All fans going full blast: VID 20151212 235135947

Only the Power supply and CPU fans: VID 20151212 235229781

Just for a laugh, without the CPU fan: VID 20151212 235317085


I think you'll agree, it's a bit quieter now, save for that stock Intel CPU fan. Now, this is not meant to be a review of any kind, but would I buy Noctuas again? Nope, not unless they arrived in little white boxes devoid of all the crap they supply that just boosts the cost of them, and they are pretty spendy. They are quiet though, really quiet, as you might expect with a sealed oil bearing, but they are only pushing around one third the air of the originals. The only upside is that they draw almost 1/10th the power of the San Ace ones. I can live with it, as I am locating this box somewhere stable and cool, but for a hot rack environment these simply don't have the grunt needed. I did note the temperature of my disks when running badblocks; it never made it over 43°C on any disk, which is right around what I was expecting to see, so the fans are keeping up for the moment at least, but I will be keeping my eye on them as we move into the warmer weather.
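For anyone curious, keeping an eye on the drive temperatures while badblocks is running is just a case of polling SMART every so often; something like this does the job (drive letters are examples):

```
# Poll the SMART temperature attribute on each drive every 5 minutes
watch -n 300 'for d in /dev/sd[a-h]; do printf "%s: " "$d"; smartctl -A "$d" | grep -i temperature; done'
```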

I'll update this again when I get the rest of the box together...
 

canta

Well-Known Member
Nov 26, 2014
1,012
216
63
43
Nice build!!!

I just had to smile when I saw the Noctua fans... those are damn expensive :D

I usually buy SM PWM fans :D and tweak the RPM via the BIOS (PWM control) or my own PWM controller built with an Arduino.
 

pricklypunter

Well-Known Member
Nov 10, 2015
1,708
515
113
Canada
Cheers for that. Yes, very nearly $100 in fans by the time I added shipping, so they are definitely not at the cheap end of the scale. They do come in quite possibly the nicest packaging I have ever seen a fan in, and are supplied with a gazillion cables that you'll likely never use, which can only bump up the cost needlessly. It would be more beneficial to the end user, I think, to just ship them in a plain white box with an included finger guard and some rubber mounts, and let the users decide what accessories, if any, they require. They do have a good reputation amongst the modders etc. though, so I figured I may as well put my money where my mouth was and actually try them, as opposed to just giving an opinion based on paper specs and hearsay. They are amazingly quiet for their size, but a little on the weak side with the volume of air they can move, even at full blast, so they would likely only find a niche or two in the enterprise world. I haven't used their larger 120mm fans before, but they would perhaps be better if you had the room in the chassis to accommodate them :)
 

whitey

Moderator
Jun 30, 2014
2,766
868
113
41
I run Noctuas on my Norco 4224, silky smooth and quiet w/ PWM fan controls turned down to about 33-50% if memory serves me correctly, w/out breaking a sweat keeping things cool/moving sufficient air front to back. Yep, prolly $100+ in fans here as well on that guy, across 3x 120mm fans and 2x 80mm fans... forget the models exactly but they were highly recommended.
 
Last edited:
  • Like
Reactions: pricklypunter

pricklypunter

Well-Known Member
Nov 10, 2015
1,708
515
113
Canada
Ok, I'm back to it again, even though I'm still missing my mSATA adapters. I decided to use a spindle for the VM Datastore. The current plan is that I'll add the SSD later when I receive my adapters and then just move them over and remove the spindle.

This isn't an update per se, more of a sigh of relief really, and it may help someone else out if they get in a jam like I did.

I got everything pretty much together and settled down to get some installs and config done, but immediately hit a bump in the road. I discovered that for some reason I could no longer boot an x64 VM. What was really strange was that ESXi complained that VT-x was not enabled. I checked the BIOS: it said VT-x was supported, VT-d was enabled etc., and there were no other obvious signs of an issue. I must have checked it a dozen times, but was missing something.

Ok, so what appears to have happened, although I cannot explain exactly how, is that the BIOS has become corrupted and switched off VT-x. So off I went to find an updated BIOS, thinking it would be an easy-peasy fix. It turns out, after a little digging, that my mainboard is a B400 manufactured by Inventec. I know, obscure huh? They are not exactly unknown to me as it turns out, being a massive Asian ODM that designs everything from kettles to satellites. Obtaining a BIOS update from them, though, is simply not on the cards. Nor, for that matter, is there any chance of tracking down the actual Asian board manufacturer; these guys like to remain anonymous, leaving little or no clues on the board or in the documentation. So I then turned my attention to AMI. It's their Aptio 4 UEFI firmware and BIOS that is being used, as is often the case with older small-server-type boards. Much searching later and still no luck. I can find a gazillion different boards, everything from desktops to mini-ITX, all using the same general BIOS version, but each having different modules and configurations tailored specifically for each board. Mine is obviously not amongst them. My heart sank; I've been here before with other non-branded boards, with varying degrees of luck/success. I decided that I'm not taking this lying down. I feel that it's a good mainboard in most other respects, but I knew that this might be a very long road to travel for a fix.

Getting my hands dirty modifying a UEFI BIOS is not for the faint of heart. It could easily end in complete misery with an unbootable board, but undeterred I went on the hunt for some AMI software tools nevertheless. The tools I went looking for specifically were AMI's BIOS flash utility, AFUWin, and their MMTool for Aptio version 4.x. I also came across a nice little tool called UEFITool along the way, which is really useful for pulling out and extracting BIOS setup menus and the like, if that floats your boat. First things first: I ran the AMI flash utility and saved a copy of the original BIOS, just in case, then made 2 more copies of it so it's not accidentally overwritten and there is some way back should it all go sideways.

As it turned out, using the tools to dump the BIOS setup menu, I discovered that there was a whole load of things disabled that I was not aware of to begin with, not just VT-x. Most of them I really don't need to bother with, but there were a couple of items that would be handy to have back again. I was really too tired last night to start back-tracking the code looking for entry points, so I went looking for an easier vector to take. This was when I remembered about AMIBCP for version 4.x. After some digging on the web, I managed to get my hands on it, and after opening the saved BIOS with it, I was able to look through the various setup screen menu items and re-enable them to be visible and modifiable. Once I finished tweaking things, I saved the BIOS file with a new name and, using the flash utility, I "upgraded" my own BIOS. I was then able to go into it at boot and switch on VT-x again, plus make a couple of other little tweaks too. The outcome is that ESXi now installs without complaint and runs x64 VMs again without issue. Success, and surprisingly, without the pain I was expecting.

I count myself very, very lucky here. I could have been digging into code for a while to fix this, but as it turned out, the menu items had simply disappeared and been disabled. In retrospect, I can only assume that the BIOS was originally updated, after all it was working to begin with, and at some point when I was in the BIOS setup doing something else, I somehow triggered the BIOS recovery mechanism, which overwrote the updated BIOS and disabled things. The most likely cause of this I can think of would be a vanilla AMI BIOS copy in the recovery blocks and the one that the board manufacturer had been playing with to get things working in the main BIOS blocks. They must not have updated the recovery blocks when they were done tweaking :)

Anyway, a huge sigh of relief for me and I can now get my installs done when I get some more time to play with it. Hopefully this will help anyone else that has a similar issue.
 
Last edited:

pricklypunter

Well-Known Member
Nov 10, 2015
1,708
515
113
Canada
It's been a while since I updated this, but I finally got round to adding a SLOG device, a trusty Intel DC S3500 80GB, to the ZFS pool. I did this mainly for experimental purposes, to see if I could improve anything performance-wise. I figured I would share a couple of the benchmarks here.

I ran CrystalDiskMark several times over an iSCSI link (1Gbps) from my laptop to a ZVol on my storage pool, both before and after adding the SLOG. I also performed the same test, using the ESXi console, from a Windows VM to the same ZVol on the storage pool. The storage pool VM uses 2 x bonded virtual NICs (2 x 10Gbps VMXNET3), as does the Windows VM. The ZFS storage pool VM is running the latest patched Debian 8 (Jessie) and ZoL. The storage pool is configured to use LZ4 compression, dedup=off, atime=off. Beyond that, the configuration is vanilla, i.e. as it comes out of the box. I'm sure there are things I could tweak to eke out that last little bit of performance, but that's for another play day :)
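For reference, the SLOG addition and the pool settings amount to something like the following (pool, ZVol and device names are just placeholders; in practice I used the /dev/disk/by-id path for the S3500):

```
# Add the Intel DC S3500 as a log (SLOG) device -- a rough sketch, names are examples
zpool add tank log /dev/disk/by-id/ata-INTEL_SSDSC2BB080G4_EXAMPLE

# Pool settings as described above
zfs set compression=lz4 tank
zfs set dedup=off tank
zfs set atime=off tank

# Toggle sync behaviour on the ZVol between test runs
zfs set sync=standard tank/iscsivol
zfs set sync=always tank/iscsivol
```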

Here are the results from my laptop prior to the SLOG being fitted, sync=standard:

[CrystalDiskMark results]

Not too bad; it looks like my 1Gbps pipe is being fully utilized after all. The trouble with speed is that when you get used to having it, things seem to feel, after a while, like they are running slow again. The reality, though, is that I can transfer a 4GB file faster over an iSCSI link to the server than I can copy the same file to another partition on my laptop or to an external USB 3.0 capable pen drive or disk. It's funny how you get used to things...

Here are the results from my laptop after the SLOG was fitted, sync=always:

[CrystalDiskMark results]

As you can see, the reads show some sign of movement, which is not what I was expecting, as the SLOG is primarily a write-only device. This could either be a fluke, or I am misunderstanding something. The other small variances, I have concluded, are just a reflection of whatever else the OS/hypervisor etc. was doing during each test run.

I knew that having sync=always would hit the write performance, but I was hoping that, as the ZIL is no longer being held on the pool, this would add something back and almost balance out the hit from sync writes. The figures from my laptop seem to suggest that there is a small penalty, but it's not too bad and ensures my data is safely transferred.

Moving on, here are the baseline results prior to SLOG being fitted, going from a windows VM to the storage pool located on another VM, again sync=standard:

[CrystalDiskMark results]

Not too shabby really, considering; reads are around where I expected with an 8-disk pool. I was however hoping for better write performance, but attributed this to the ZIL being kept on the pool.

Here are the results from VM-VM after the SLOG was fitted, sync=standard:

[CrystalDiskMark results]

As you can see, it made very little difference, no big announcement that the ZIL is no longer on the pool etc. I guess I'm a little disappointed, I had hoped for something that at least confirmed I was heading in the right direction.

Lastly, here are the results VM-VM after the SLOG is fitted, sync=always:

[CrystalDiskMark results]

All I can say is ouch. The writes fell off a cliff. I was expecting a hit in the write department, but this seems really drastic.

My conclusion, then: adding a SLOG may not have improved things beyond perhaps increasing data transfer integrity. It certainly doesn't appear to have added any speed benefit.

To be fair here, the VM that the storage pool is running on only has a max of 16GB of RAM allocated to it, which is woefully inadequate for anything really serious. The server itself is maxed out at 32GB and I need the rest of the RAM for the other VMs and ESXi, so it's safe to say it's constrained. The one fortunate thing is that I am not using the storage pool as a VM datastore presented over either NFS or iSCSI, so no OSs are stored or running on it, where metadata not being committed to the spindles would risk a goosed filesystem if something went wrong before the data was safely stored.

Getting back to "if things go wrong during a media file transfer" etc.: I will be the one transferring files back and forth, likely only a handful at a time, and will obviously be the first to know if something went down the plughole during the copy; moreover, only the file or files in flight would be lost or corrupted, and I can simply copy them over again, no real harm done. For me, at least for the time being, it's back to plan A. Whilst I'm still pretty much saturating my 1Gbps link, I just can't live with the idea of putting a ball and chain round the server's ankles. I'm sure that if I had better hardware, a better use case and a lot more RAM to play with, things would likely be very different. Was it money well spent? Sure, I wouldn't have known otherwise. Would I recommend doing it again? Perhaps not, unless the setup was sufficiently different that my experiences so far would suggest a different outcome :)
 
Last edited:

gea

Well-Known Member
Dec 31, 2010
3,141
1,182
113
DE
In summary:
Sync=default means on via ESXi/NFS and off via e.g. SMB (the client decides).

Your pool has a sequential write performance (sync=off) of more than 400 MB/s.
Your Intel SSD has a write performance with sync=always of about 80 MB/s.
= Your pool with the SSD as Slog has a write performance (sync=on) of about 80 MB/s.

The principle behind it:
- Any write on ZFS always goes through the ZFS RAM-based write cache, which
collects many small random writes into one large sequential write to improve performance.

With sync write enabled, you ADDITIONALLY log every single committed write to a ZIL device,
either on-pool or on an Slog, which means that your overall sync write performance cannot be faster
than either the pool performance or the ZIL performance.
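You can see this directly on your own pool with a quick test, something like the following (dataset name and file sizes are only examples; with LZ4 enabled you must use incompressible data, otherwise zeros compress away and the numbers mean nothing):

```
# Create an incompressible test file first
dd if=/dev/urandom of=/root/rand.bin bs=1M count=4096

zfs set sync=disabled tank/test
dd if=/root/rand.bin of=/tank/test/seq.bin bs=1M    # ~ pure pool sequential write speed

zfs set sync=always tank/test
dd if=/root/rand.bin of=/tank/test/sync.bin bs=1M   # ~ capped by the Slog (or the on-pool ZIL)

zfs inherit sync tank/test                          # back to the default afterwards
```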
 
  • Like
Reactions: pricklypunter

pricklypunter

Well-Known Member
Nov 10, 2015
1,708
515
113
Canada
@gea Am I understanding this correctly? I will need a Slog that is capable of around 500MB/s sequential before I would see the pool write performance get back up to where it would be with sync=off?