UPDATED 15:27 EDT / JUNE 28 2023

AI

Enterprises reimagined: Nvidia and Snowflake’s partnership aims to transform enterprise AI

Imagine a world where enterprises can harness the hidden intelligence locked within their vast repositories of data, unleashing its transformative potential.

This vision has taken a significant step toward reality with the partnership between Nvidia Corp. and Snowflake Inc. The convergence of their expertise in artificial intelligence and data management has created a powerful alliance that promises to revolutionize enterprise AI using large language models, or LLMs.

“Everybody’s got the sense of what generative AI can do,” said Manuvir Das (pictured, left), vice president of enterprise computing at Nvidia. “But for enterprise companies, every company is sitting on a corpus of data in which is hidden all of the intelligence of that company.”

Das, along with Christian Kleinerman (right), senior vice president of product at Snowflake, spoke with industry analysts Dave Vellante and George Gilbert at Snowflake Summit, during an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. They discussed the details of the Nvidia and Snowflake collaboration and how it’s reshaping the future of AI in the enterprise realm. (* Disclosure below.)

Letting loose the power of LLMs for enterprise intelligence

The combined expertise of Nvidia and Snowflake is aimed squarely at generative AI, particularly the use of LLMs. The implications of this collaboration are significant, because it addresses a pressing challenge for enterprise companies: extracting intelligence from the vast troves of data that hold the hidden potential to transform their operations.

“It’s taking LLMs to the next level,” Das explained. “When you have a foundation model … it’s like somebody who just graduated from college, a super smart new hire, but now you want to take that person, and what if they’d been at your company for 20 years and they had all the experience and knowledge of working in your company for 20 years? Wouldn’t they be a much more productive employee? That’s the difference between just a regular foundation model.”

Snowflake, with its robust data infrastructure, serves as the repository for this valuable corporate data, while Nvidia provides the intelligence engine to extract insights and develop customized models. Together, they bring this power to Snowflake’s cloud platform, allowing enterprises to unlock the untapped potential of their data.
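
To make the idea concrete, here is a minimal, hypothetical sketch of the pattern Das describes: take a pretrained foundation model and continue training it on a company’s own documents. It uses the open-source Hugging Face Transformers and Datasets libraries, a generic base model and a placeholder corpus file purely for illustration; it is not the actual Nvidia/Snowflake workflow.

```python
# Hypothetical sketch: adapt a pretrained foundation model to a company's own
# text corpus. Model name and file path are placeholders for illustration.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_model = "gpt2"  # placeholder foundation model
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Placeholder corpus of internal company documents, one example per line.
corpus = load_dataset("text", data_files={"train": "company_docs.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = corpus["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tuned-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # the "20 years of company experience" step, in miniature
```

The point of the sketch is the division of labor the partnership describes: the data stays in one place, and the customization step brings the model to it.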

Fear and skepticism over AI and LLMs

In the rapidly evolving landscape of AI, three distinct reactions have emerged: excitement, skepticism and fear. The excitement is justified, as demos and transformative experiences have showcased the potential of AI. However, skepticism is natural, given the history of overhyped technologies that failed to deliver, according to the duo.

Simultaneously, the fear stems from the disruptive nature of AI, as it has the potential to reshape industries and impact various use cases. To address these reactions, it is crucial to allow the dust to settle and let the real value of AI and LLMs shine through, convincing skeptics and alleviating fears, according to Kleinerman.

“The skepticism, I get it. There have been many other technologies emerging in the last few years that they’re like, oh, everything’s gonna change,” he said. “And the fear is real, because this technology is going to disrupt every industry, every use case — not completely replace. But in some way it’s going to affect the types of results and the experiences you have.”

Seamless integration and enterprise-grade experience: Nvidia and Snowflake join forces

The integration between Snowflake and Nvidia has been seamless, facilitated by the containerized nature of Nvidia’s software stack, according to Das. Snowflake’s containerized environment complements this, enabling the swift deployment of applications within the platform. However, the partnership’s true value lies in ensuring an enterprise-grade experience for customers. This requires addressing concerns related to data security, training data sources, model efficiency and optimization.
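
As a rough illustration of what a containerized AI application can look like, the sketch below wraps a placeholder model in a small HTTP service that could be built into a container image and scheduled next to the data. The library choices (FastAPI, Transformers), model name and endpoint are assumptions made for illustration, not details of the actual integration.

```python
# Hypothetical sketch of a containerized LLM endpoint: a small HTTP service
# wrapping a pretrained model, suitable for packaging into a container image.
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()
generator = pipeline("text-generation", model="gpt2")  # placeholder model

class Prompt(BaseModel):
    text: str
    max_new_tokens: int = 64

@app.post("/generate")
def generate(prompt: Prompt) -> dict:
    # Run generation inside the container, close to where the data lives.
    out = generator(prompt.text, max_new_tokens=prompt.max_new_tokens)
    return {"completion": out[0]["generated_text"]}

# Run locally (or inside the container) with, for example:
#   uvicorn service:app --host 0.0.0.0 --port 8080
```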

Both engineering teams are working to refine the containerized model, optimizing it for efficient data processing and ensuring it meets the high standards enterprises expect. The integration also includes Nvidia NeMo, an end-to-end, cloud-native enterprise framework for building, customizing and deploying generative AI models with billions of parameters.

“The first thing that Nvidia NeMo brings to the table, which is part of this integration, is we have pre-trained a suite of foundation models where a lot of the training, the months and millions of dollars of training to get a general knowledge and generic skills has already been done,” Das said.

Here’s the complete video interview, part of SiliconANGLE’s and theCUBE’s coverage of Snowflake Summit:

(* Disclosure: TheCUBE is a paid media partner for Snowflake Summit. Neither Snowflake, the sponsor of theCUBE’s event coverage, nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)

Photo: SiliconANGLE
