Nutanix ties up with Google Cloud Platform: The Future

  • Thread starter Patrick Kennedy
  • Start date

OBasel

Active Member
Dec 28, 2010
494
62
28
I just think it's stupid that there's 50 different APIs and ways to do the same thing. Provision a machine. Provision storage.

The next Nutanix competitor is going to just be a provisioning interface for whatever servers with KVM or containers. Like OpenStack interface for everything.
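To make the "one provisioning interface" idea concrete, here's a minimal sketch using the openstacksdk Python library; the cloud, image, flavor, and network names are placeholders for whatever your environment actually has, not a recommendation of any specific stack.

# Minimal sketch: provision a machine and provision storage through one API.
# Assumes an OpenStack(-style) endpoint defined as "my-cloud" in clouds.yaml;
# all names below are placeholders.
import openstack

conn = openstack.connect(cloud="my-cloud")

image = conn.compute.find_image("ubuntu-20.04")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("private")

# Provision a machine
server = conn.compute.create_server(
    name="demo-vm",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)

# Provision storage
volume = conn.block_storage.create_volume(name="demo-data", size=20)

print(server.status, volume.id)

Whether the backend is KVM, containers, or something else, the point is that the calls above stay the same.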

Virtualization has meh value. Basic storage has meh value. It's getting commoditized and those layers are being blurred if you're moving private to public.

What they're missing compared to VMware is NSX networking.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,516
5,809
113
Why is that?
You aren't "thrilled" with your Proxmox VE setup?
It works great but I originally wanted to go Nutanix. There is clearly a lot more development on the Nutanix side.
 

capn_pineapple

Active Member
Aug 28, 2013
356
80
28
Gotta agree too btw.

Nutanix CE doesn't quite do enough for an intelligent and growing SMB use case, but the full version is way too $$$ for the same requirements, which is a pain (even that new Dell Xpress setup cuts out a fair amount of the smarts). It's a real pity. It's a massive market that they're just unable to capture due to the buy-in price.
 
  • Like
Reactions: _alex

capn_pineapple

Active Member
Aug 28, 2013
356
80
28
Split the costs and have a cheaper license fee, with support as an additional paid service like other vendors offer. It's the support that's expensive anyway, both for Nutanix to provide and for companies to pay for.

It would allow more organic growth for small companies moving into their ecosystem, then BAM, vendor lock-in, just like they want.
 

msvirtualguy

Active Member
Jan 23, 2013
494
244
43
msvirtualguy.com
Hmm, I wonder if there are folks out there using CE in production...
No. Community Edition = non-production. It's not built for prod; it's built to get into as many hands as possible and give you an idea of the platform without having to manage an HCL, hence the difference between CE and the production release.

I just think it's stupid that there's 50 different APIs and ways to do the same thing. Provision a machine. Provision storage.

The next Nutanix competitor is going to just be a provisioning interface for whatever servers with KVM or containers. Like OpenStack interface for everything.

Virtualization has meh value. Basic storage has meh value. It's getting commoditized and those layers are being blurred if you're moving private to public.

EXACTLY!

What they're missing compared to VMware is NSX networking.
Missing NSX without the NSX price tag :-O
Ouch !!
Network Chaining/Insertion and Native Micro-segmentation are included with the next release. You can fast-forward to the demos.


@Patrick, btw, we do certify our software on commodity platforms; however, it's for larger SP, "Webscale" customers. I have reached out internally, though, and I'll let you know, but we would need some details on the current hardware. Typically it comes down to SSD/HDD or SSD type/size and HBA, along with firmware, IPMI, etc.

As for the actual announcement around GCP, there is a big differentiation between it and our competitors. We will actually allow GCP services such as TensorFlow, etc. to run on Xi on GCP. When you compare that to, say, VMware on AWS, you don't get any of the AWS goodness and services, and this is just the start: native one-click DR to begin with, and additional capabilities coming quickly, Kubernetes, etc.

BTW, our SMB and ROBO offerings are strong and have been price-comparable with others, but now we have 1-node and 2-node options coming (also covered in the video above) which will provide better options for much smaller environments.
 
Last edited:

awedio

Active Member
Feb 24, 2012
776
225
43
@Patrick, btw, we do certify our software on commodity platforms; however, it's for larger SP, "Webscale" customers. I have reached out internally, though, and I'll let you know, but we would need some details on the current hardware. Typically it comes down to SSD/HDD or SSD type/size and HBA, along with firmware, IPMI, etc.
You mean STH could be running a Nutanix backend? That would be cool.
 

cheezehead

Active Member
Sep 23, 2012
730
176
43
Midwest, US
Looked at them and continue to monitor them but haven't bought yet.

Hyper-converged solutions (in general) try to claim their solution works for everyone with all workloads. Really, in any datacenter you run up against three issues: cost, performance, and power/cooling. For those of us with existing datacenters originally designed around 100% physical infrastructure, after moving to 100% virtual we land ourselves with large, nearly empty rooms. If there is an established performance agreement with business units (excluding the 99% usage scenario), most vendor solutions will work in this day and age. That leaves the last and most hindering part for many hyper-converged solutions: cost. Let's ignore list cost, as anyone doing any sort of large PO knows it's not about list, it's about what the discounted price will be. The discounted price is never published and has multiple factors in play, but every time I've received real pricing, the net result was that the hyper-converged* solution would lose due to a significant price premium.

* = for clarification, I'm referring to COTS solutions where multiple arrays are involved across long-distance replication with 4-hour SLAs.

There is something to be said for simplified stack management, which, depending on staffing levels, can definitely sway an organization to justify the extra cost of a hyper-converged array (CAPEX vs. staffing OPEX).
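As a rough back-of-the-envelope of that CAPEX-vs-staffing-OPEX trade-off, here's a tiny sketch; every number in it is hypothetical (made up for illustration, not from any vendor quote), and the whole point is that the answer flips depending on what discounts and staffing levels you actually plug in.

# Hypothetical 5-year TCO comparison: 3-tier vs hyper-converged.
# All figures are invented for illustration; real pricing is never list.
YEARS = 5

# 3-tier: discounted hardware + support + a dedicated storage admin
three_tier_capex   = 400_000 * 0.55           # assume 45% off list
three_tier_support = 40_000 * YEARS
three_tier_staff   = 120_000 * YEARS          # one fully-loaded storage FTE

# HCI: higher discounted capex, less specialist staffing
hci_capex   = 550_000 * 0.70                  # assume 30% off list
hci_support = 55_000 * YEARS
hci_staff   = 0.5 * 120_000 * YEARS           # half an FTE of admin time

three_tier_total = three_tier_capex + three_tier_support + three_tier_staff
hci_total = hci_capex + hci_support + hci_staff

print(f"3-tier 5yr TCO: ${three_tier_total:,.0f}")
print(f"HCI    5yr TCO: ${hci_total:,.0f}")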
 
  • Like
Reactions: T_Minus and Evan

Evan

Well-Known Member
Jan 6, 2016
3,346
598
113
Cost also depends on whether you have FC SAN infrastructure already in place and needed going forward for other workloads like big RDBMS (Oracle, SAP, etc.). If going hyper-converged means you can do away with the SAN network, then especially where inter-DC dark fiber is involved there can be a capex benefit, and if you don't need a dedicated SAN admin there is some staff saving as well.
If you don't get the savings above, then I would have to say it's at best probably price neutral.
 
  • Like
Reactions: T_Minus

msvirtualguy

Active Member
Jan 23, 2013
494
244
43
msvirtualguy.com
There is something to be said for simplified stack management, which, depending on staffing levels, can definitely sway an organization to justify the extra cost of a hyper-converged array (CAPEX vs. staffing OPEX).
The majority (99%) of the time, when we do a TCO analysis with YOUR numbers, and I stress YOUR numbers, we come out quite favorably against 3-tier.

Also, you can displace the SAN over time. We rarely go into a new customer and tell them, hey, throw away your investment. It starts with a use case or a business problem that needs to be solved. Again, the beauty of starting small and scaling over time. TBH, I'd rather have you do it that way, because the re-buy rate in my territory is over 90%. There's a reason for that. ;)

I agree, most can provide the performance that most would require. The question is whether you can simplify that stack while still providing performance, and by "stack" I don't mean HCI, I mean a scalable management plane for both on-prem and off (cloud or otherwise). A ton of our customers see the full value in the stack, not just the storage piece. We have to move away from looking at HCI as a SAN replacement. To us, it's an enabler that allows us to deliver that scalable management plane for on-prem and off.

The bottom line is that the conversation should be elevated beyond data center infrastructure; it's more about a single extensible control plane (one OS for both on-prem and cloud) where real, tangible benefits can be and have been seen.
 

cheezehead

Active Member
Sep 23, 2012
730
176
43
Midwest, US
Cost also depends on whether you have FC SAN infrastructure already in place and needed going forward for other workloads like big RDBMS (Oracle, SAP, etc.). If going hyper-converged means you can do away with the SAN network, then especially where inter-DC dark fiber is involved there can be a capex benefit, and if you don't need a dedicated SAN admin there is some staff saving as well.
If you don't get the savings above, then I would have to say it's at best probably price neutral.
Large FC deployments will take a long time to move away from; for those already using iSCSI/NFS/SMB it's a bit easier of a play. The replication scenario can actually benefit from some of the HCI gear, where you can set prioritization on VM-level replication, and VM-level deduplication could prove very powerful in the right scenarios.

We have to move away from looking at HCI as a SAN replacement.
At least from the VMUGs I've been at, this has been a hard pill to swallow for some orgs. Budgeting-wise, some places plan on alternating SAN, compute, and network refreshes over a multi-year rotation. Bringing in a solution slowly is fine, but on the back end, marrying the org to a single-vendor solution is also a tough pill for some to swallow. Not saying it shouldn't be done, but sometimes the easy button doesn't always make sense *coughs SQL Server installs*.
 
Dec 30, 2016
106
24
18
44
Large FC deployments will take a long time to move away from; for those already using iSCSI/NFS/SMB it's a bit easier of a play. The replication scenario can actually benefit from some of the HCI gear, where you can set prioritization on VM-level replication, and VM-level deduplication could prove very powerful in the right scenarios.

At least from the VMUGs I've been at, this has been a hard pill to swallow for some orgs. Budgeting-wise, some places plan on alternating SAN, compute, and network refreshes over a multi-year rotation. Bringing in a solution slowly is fine, but on the back end, marrying the org to a single-vendor solution is also a tough pill for some to swallow. Not saying it shouldn't be done, but sometimes the easy button doesn't always make sense *coughs SQL Server installs*.
I'd also add that not only do compute, SAN, and network have their own life cycles due to changes in technology, usable part duration, and increasing maintenance costs, but the shared storage business is beginning to move away from the "pay a maintenance premium as years go by or replace your entire array" business model.

SSDs, when properly cared for, can last over a decade, and some AFA manufacturers warranty the drives for 7+ years or even offer a lifetime warranty. If you're no longer stuck repurchasing your storage every ~5 years, then HCI makes even less sense, since you'll not only be paying higher maintenance costs on the entire stack as the years go by, but your only recourse is to replace all of your compute and storage simultaneously to move onto new nodes. Ripping and replacing a 5-year-old HCI infrastructure might be administratively easy, but extremely costly.

Also, HCI doesn't solve the issue of storage complexity that really started the HCI movement. When I first heard about HCI I was managing a VMware datacenter with HP EVA storage arrays, and it was maddening. HCI seemed like a great alternative to complex storage platforms; however, HCI still makes you decide on redundancy factors, replication factors, dedupe on/off, compression on/off, encryption on/off, erasure coding on/off, CVM CPU and RAM, etc. Storage shouldn't be hard; it should just work.
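Just to illustrate how many knobs are still left, a storage container definition on a typical HCI stack ends up looking something like the following (purely illustrative pseudo-config, not any vendor's actual API or defaults):

# Hypothetical HCI storage-container settings: the decisions you still own.
storage_container = {
    "replication_factor": 2,        # RF2 vs RF3: capacity vs resilience
    "erasure_coding": False,        # space efficiency vs rebuild overhead
    "deduplication": "inline",      # off / inline / post-process
    "compression": True,
    "encryption_at_rest": False,
    "cvm_resources": {"vcpus": 8, "ram_gb": 32},   # controller VM sizing
}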

Nutanix has a great software platform and software suite around private and public cloud management, consumption, and monitoring but HCI as an infrastructure doesn't really make anyone's life easier.
 

Evan

Well-Known Member
Jan 6, 2016
3,346
598
113
Hmm, I don't think we've ever kept a SAN or the switch infra more than 4 years (sometimes the lifetime is 3, but usually 4). Even flash has a short lifetime from a financial standpoint.
HCI I see no differently; I'm sure 4 years is it. I won't say never, but it's highly unlikely anybody keeps it beyond 4 years.
 

msvirtualguy

Active Member
Jan 23, 2013
494
244
43
msvirtualguy.com
5 years is common; what's not is rip-and-replace HCI. It just doesn't happen. I have customers that have been on our product over 5 years and have replaced only those nodes that needed replacement, non-disruptively and with no data migration. Once you're all in, it's a cyclical, non-disruptive, no-data-migration replacement, not a forklift upgrade like a SAN. I also have customers that are 100% Nutanix running all workloads, and they have never had to rip and replace, only replace the nodes that have aged out (typically 5 years).