
Microsoft taps into Oracle’s computing power for Bing conversational search

By Christopher Tredger, Portals editor
Johannesburg, 08 Nov 2023
Microsoft and Oracle look to apply AI infrastructure to bolster user engagement with Microsoft Bing.

Microsoft and Oracle have entered into a multi-year agreement to support the growth of AI services, specifically AI models optimised to power Microsoft’s Bing conversational search.

According to a statement issued by Oracle, Microsoft will use Oracle Cloud Infrastructure (OCI) Supercluster and its own Azure AI infrastructure to run live data through trained AI models that are being optimised to enhance user engagement with Microsoft’s AI-powered search engine.

By leveraging Oracle Interconnect for Microsoft Azure, Microsoft can use managed services like Azure Kubernetes Service (AKS) to orchestrate OCI Compute at massive scale.

Bing conversational search requires powerful clusters of computing infrastructure to support the evaluation and analysis of search results conducted by Bing’s inference model, the companies stated.

Inference models require thousands of compute and storage instances and tens of thousands of GPUs that can operate in parallel as a single supercomputer over a multi-terabit network.

Oracle said its OCI and AI infrastructure, or the Oracle OCI Supercluster, is supported by “up to tens of thousands of NVIDIA GPUs”. More precisely, OCI Superclusters can scale up to 4 096 OCI Compute Bare Metal instances with 32 768 A100 GPUs or 16 384 H100 GPUs, and petabytes of high-performance clustered file system storage to process massively parallel applications.

Oracle’s Interconnect for Microsoft Azure simplifies the path to a multicloud environment, offering interoperability between OCI and Azure without the need for complex re-architecture or re-platforming.
