Cloud infrastructure software company Nutanix kicked off its annual .Next conference, held in Chicago last week, with a slew of announcements, chief among them a new platform to optimise and govern agentic AI use cases.
CEO Rajiv Ramaswami said in his keynote that he believes agentic AI is going to have a profound impact.
He added that every company and CIO he talks to wants to know about AI, how they can integrate it into their IT stack, and how they can get a return on their AI investments.
But these are “uncertain times”, said Ramaswami, and digital sovereignty has become increasingly important. Sovereignty, he said, extends to a company’s data and its infrastructure, which should be managed by the company’s own staff. There are also supply chain and hardware constraints to contend with.
He said the world is moving from an era of prompting “your favourite bot or model” to an era that will empower agents with autonomy. “Your competitive edge is no longer just about the data that you have, but the autonomy you can enable with that data. It’s a massive opportunity.”
A foundational building block for agentic AI is the AI factory, he noted. This has moved beyond being a specialised niche – its use has “exploded” as everyone started using AI services.
Ramaswami said a typical deployment model would see the AI factory’s underlying hardware, built on chips from vendors such as Nvidia and AMD, among others, delivered to customers with Nutanix’s software running on top. Some companies have stitched this layer together themselves from open source components, but most organisations, barring the very largest and best resourced, have found this a challenge.
Nutanix announced its agentic AI stack at Nvidia’s GTC conference in San Jose in March this year, and Ramaswami said it is now making its “AI Catalogue” of pre-built open source components available for developers to bring AI services to market.
It has also announced an AI gateway service to govern policies for access to cloud-hosted and private large language models (LLMs).
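Nutanix has not published API details for the gateway, but the general idea of policy governance in front of LLM endpoints can be sketched in a few lines. The sketch below is purely illustrative: every name, endpoint and policy field is a hypothetical assumption, not Nutanix’s actual interface.

```python
# Illustrative sketch of an LLM policy gateway: route requests to approved
# model endpoints and enforce per-model policies before forwarding.
# All names and endpoints here are hypothetical, not Nutanix's API.
from dataclasses import dataclass

@dataclass
class ModelPolicy:
    endpoint: str      # cloud-hosted or private LLM endpoint
    allow_pii: bool    # whether prompts may contain personal data
    max_tokens: int    # token budget passed to the model

# Hypothetical registry mapping model names to governance policies.
POLICIES = {
    "private-llama": ModelPolicy("https://llm.internal/v1", allow_pii=True, max_tokens=4096),
    "cloud-gpt": ModelPolicy("https://api.example.com/v1", allow_pii=False, max_tokens=2048),
}

def route(model: str, prompt: str, contains_pii: bool) -> tuple[str, int]:
    """Return (endpoint, token budget) if the request satisfies policy."""
    policy = POLICIES.get(model)
    if policy is None:
        raise ValueError(f"unknown model: {model}")
    if contains_pii and not policy.allow_pii:
        raise PermissionError(f"{model} may not receive personal data")
    return policy.endpoint, policy.max_tokens
```

In this sketch, a request carrying personal data is allowed through to the private model but rejected for the cloud-hosted one, reflecting the kind of sovereignty-driven policy a gateway of this sort would enforce.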
Bare metal
The company also announced it will be offering a bare-metal Kubernetes service, called NKP Metal. It joins Red Hat (OpenShift Bare Metal), AWS (EKS Anywhere), Google (GKE on Bare Metal) and Suse’s Rancher, among other vendors.
Running the Kubernetes container orchestrator on bare-metal infrastructure achieves low latency, as workloads don’t have to pass through a hypervisor, the software layer that creates and manages virtual machines. This suits applications in high-frequency trading, gaming and real-time AI inference.
NKP Metal, the company said, supports a dual-native architecture that runs containers and virtual machines side by side. It said this unified operating model could be used to run AI and other intensive workloads on bare metal.
The NKP Metal deployment option is now available in early access and will go into general availability in the second half of 2026.
Lee Caswell, SVP of product and solutions marketing at Nutanix, said its dual-native architecture is unique. “There are companies that provide ‘always virtualised’ or ‘always bare metal’, but no one else gives you the flexibility to have bare metal or containers and operate them as if they’re a single networked environment.”
He said this would help meet organisations’ sovereignty requirements and would let security policies follow workloads from a bare-metal edge instance into a public cloud instance, or into a virtual private data centre.
“That’s a very difficult problem, and similarly, to be able to do data restores from a stateful container running on the edge in bare metal to a virtual private data centre. These use cases are going to be more pronounced in the world of AI, because AI will be distributing data. The cloud was a centralising factor; AI is a distribution influence for data. As you have containers with portability, now the opportunity is to have a common operating model from bare metal into virtualised environments.”