I found this interesting tidbit
http://oss.cumulusnetworks.com/CumulusLinux-2.5.3/patches/kernel/platform-quanta-lb8.patch
It looks like Cumulus did implement LB8 compatibility. 2.5.3 was the latest version I found that included the LB8 patch.
Hi, has anyone played with these switches?
I've been looking at Cumulus-compatible switches lately. I originally had my sights set on the QCT stuff, but this came across my radar and seems enticing because it is a bit cheaper than QCT.
Server 2016 supports device passthrough. In my honest opinion, for a single-node setup, if you can get Windows Server for free through Dreamspark, Windows Server is a top-notch choice for running a hypervisor.
Full disclosure: I work for an SPLA partner.
In my opinion, buying multiple Exchange Server licences at small scale is not financially sensible, unless you really need a one-off up-front cost.
Especially for Exchange, where you might want multiple servers for HA, you are far better off at small...
Exchange is not supported with Hyper-V Replica. This is because Exchange has its own native high availability.
In my own testing it works fine, though. Just understand that if it goes wrong you won't get support from Microsoft.
If you can afford it, then just get two Exchange Server licences and...
For such a small setup, I would recommend Hyper-V Replica and call it a day. It isn't as nice as actual live migration/vMotion, but I think it will be just fine. If you really insist on that capability, then KVM is your best bet. Go Linux and sort out some other network shared storage (I recommend...
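In case it's useful, here is a rough sketch of what setting up Hyper-V Replica looks like in PowerShell. The VM name, replica host name, and storage path are placeholders I've made up, and you may also need to enable the Hyper-V Replica listener firewall rule on the replica host.

```powershell
# On the replica host: accept inbound replication over Kerberos/HTTP (port 80)
Set-VMReplicationServer -ReplicationEnabled $true `
    -AllowedAuthenticationType Kerberos `
    -ReplicationAllowedFromAnyServer $true `
    -DefaultStorageLocation "D:\Replica"

# On the primary host: enable replication for the VM and kick off the initial copy
Enable-VMReplication -VMName "EXCH01" `
    -ReplicaServerName "hv02.example.local" `
    -ReplicaServerPort 80 `
    -AuthenticationType Kerberos
Start-VMInitialReplication -VMName "EXCH01"
```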
Really liking the new change in TP5 where you can now have three tiers of storage.
All I need to take advantage of that in my design is to add a single-controller JBOD and fill it up with 3.5" drives.
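For anyone who hasn't tried it yet, this is roughly what it looks like in PowerShell once the pool contains all three media types; my understanding is that the NVMe devices get claimed as cache and the SSDs/HDDs become the two on-disk tiers. The volume name, tier names and sizes below are just placeholders.

```powershell
# Enable S2D across the cluster; the fastest media type (NVMe here) is used as cache
Enable-ClusterStorageSpacesDirect

# Carve out a volume spanning the SSD and HDD tiers (names and sizes are placeholders)
New-Volume -StoragePoolFriendlyName "S2D*" `
    -FriendlyName "Volume01" `
    -FileSystem CSVFS_ReFS `
    -StorageTierFriendlyNames Performance, Capacity `
    -StorageTierSizes 500GB, 4TB
```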
I just want to add that I never considered the cost of Windows licensing, because I'm on SPLA and Datacentre edition is something I am already paying for. It costs nothing extra to go to S2D.
I do not agree that S2D is not scalable. To me the cluster becomes the building block. 16 nodes in a...
PM863 drives push the project out of budget, unfortunately. Using the 850 EVO is a bit of a gamble, but one I am willing to take. Taking such risks has paid off before, like when I bought HP 3PAR Optimus SSDs for SOFS. I will indeed be using QDR InfiniBand. I've got my eye on those 4036E...
I might be rehashing information here.
The shared VHDX thing just mimics a SAS shared-disk topology: it presents the virtual disk to both virtual machines. It is what you do with the shared disk afterwards that matters.
The most common usage I can think of is to create failover clusters...
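To make that concrete, attaching the same VHDX to two guest nodes looks roughly like this in PowerShell (the VM names and CSV path are placeholders); the guests then treat it like a shared SAS disk and you build the failover cluster inside them.

```powershell
# Create a fixed VHDX on cluster shared storage (path and size are placeholders)
New-VHD -Path "C:\ClusterStorage\Volume1\shared.vhdx" -SizeBytes 200GB -Fixed

# Attach it to both guest cluster nodes with sharing (persistent reservations) enabled
Add-VMHardDiskDrive -VMName "NODE1" -Path "C:\ClusterStorage\Volume1\shared.vhdx" -SupportPersistentReservations
Add-VMHardDiskDrive -VMName "NODE2" -Path "C:\ClusterStorage\Volume1\shared.vhdx" -SupportPersistentReservations
```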
I'm looking to create a new hyper-converged Storage Spaces Direct cluster and would like a second opinion on hardware choices.
This is what I am thinking of at the moment.
Base platform of Supermicro 1028R-WTNRT; it has 2x NVMe slots and 8 hot-swap drive bays for SSDs
E5-2620 v4 CPUs - they...
What's the best way to get a second NVMe drive onto this board?
Is it possible to convert the mini-PCIe slot to M.2 with PCIe support? Or would I be better off sacrificing a PCIe x8 slot?
I managed to work things out myself. For anyone wondering how to over-provision a SAS SSD, you can use the sg3-utils package. The sg_format command has a resize option whereby you specify the number of blocks you want to short-stroke the drive to.
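As a rough example (the device name and block count below are placeholders, and resizing is destructive to data, so triple-check the target drive), it looks something like this; from memory, running the resize again with --count=-1 should restore the full native capacity.

```bash
# Check the drive's current logical block count and block size
sg_readcap /dev/sdX

# Resize the drive so it only exposes the first N logical blocks (count is a placeholder);
# the hidden remainder effectively becomes extra over-provisioning
sg_format --resize --count=400000000 /dev/sdX
```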
Sorry MiniKnight, your link doesn't work.
I did some more research, though, and if I don't go for JBODs with SAS expanders and instead just go for DAS enclosures, I have found two potential short-depth enclosures:
Raid Machines 2.5" 8 bay 1U 6Gbps enclosure
and
Serial Cables 8 bay 12 Gbps JBOD - two...
I know the platform is probably overkill for just an Exchange server, but I want to leave the ability to run other VMs in the future if I need to.
I'm just looking at storage expansion at the moment. The smallest JBOD I can find is 20" deep, which I think could be too deep for the rack.
Any...