UPDATED 14:00 EDT / JULY 21 2023


AI is at an inflection point: Lamini provides LLM infrastructure for seamless onboarding

Since ChatGPT opened the world's eyes to the potential of artificial intelligence, large language models have been gaining steam.

LLMs offer developers convenience, speed, agility and intelligence by taking on the toil and undifferentiated heavy lifting, and Lamini Inc. provides the infrastructure needed to onboard them easily, according to Sharon Zhou (pictured), co-founder and chief executive officer of Lamini.

“AI [is] very hot right now and what Lamini does is we offer infrastructure for any company, any enterprise. Their engineering team can train their own large language models, their own LLMs using their own data and their own secure infrastructure,” Zhou said. “It’s making that process much simpler such that everyone can have their own large language model, their own LLM, just as powerful as those models on their own data and infrastructure. That’s what we’re building and making it possible with our tool.”

Zhou spoke with theCUBE industry analyst John Furrier at the Databricks Data + AI Summit, during an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. They discussed why LLMs are proving to be a game-changer in the current era, and how Lamini offers the necessary infrastructure and engineering in this sector.

Data is key in LLM engineering

In LLM engineering, data is of utmost importance because it forms the model's backbone. Understanding the end use case is just as fundamental, since it shapes the final product, much as the chat interface shaped ChatGPT, according to Zhou.

“The most important thing about LLM engineering is understanding the data so those who understand the data will be able to handle and be able to build their own LLMs the best,” she stated. “Those who are going to be at the forefront of the space are those with the best data. I think the second piece that is really important is understanding the product. ChatGPT, I know it took the world by storm last November … and what differentiated it was largely just the interface.”
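
In practice, much of that "understanding the data" work is unglamorous curation: deciding which proprietary records matter and reshaping them into prompt-and-response pairs a model can learn from. The short Python sketch below shows one way such a step might look; the support-ticket fields, wording and output file are hypothetical placeholders rather than a prescribed format.

# A minimal sketch of the data-preparation step Zhou alludes to: turning raw
# domain records into prompt/response pairs for tuning. The ticket fields and
# the output file name are hypothetical placeholders.
import json

tickets = [  # stand-in for a company's proprietary records
    {"question": "How do I reset my password?",
     "resolution": "Send the user a reset link from the admin console."},
    {"question": "Why was my invoice charged twice?",
     "resolution": "Refund the duplicate charge and flag the billing job."},
]

with open("train.jsonl", "w") as f:
    for t in tickets:
        pair = {
            "prompt": f"Customer question: {t['question']}\nAgent answer:",
            "response": " " + t["resolution"],
        }
        f.write(json.dumps(pair) + "\n")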

LLM engineering will be a crucial stepping stone in software engineering. LLM foundation models have set the ball rolling in generative AI, as evidenced by ChatGPT and GitHub Copilot, Zhou pointed out.

“I think the future of software engineering is LLM engineering,” she noted. “I think there are two very popular LLMs now in production that people are familiar with. One is ChatGPT and the other one is GitHub Copilot. These models were trained on top of what’s known as these LLM foundation models that have been out for actually a couple of years.”
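
That layering is easiest to see in code. The sketch below specializes an open foundation model on a handful of private documents with LoRA adapters, using the open-source Hugging Face transformers, peft and datasets libraries as a generic stand-in rather than Lamini's own client; the base model name, sample texts and hyperparameters are illustrative assumptions, and the tuned weights never need to leave the company's own infrastructure.

# A minimal sketch (not Lamini's API): specialize an open foundation model
# on private text with LoRA adapters via Hugging Face transformers + peft.
# Model name, documents and hyperparameters below are illustrative.
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "meta-llama/Llama-2-7b-hf"          # any open causal LM could stand in here
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # Llama ships without a pad token

model = AutoModelForCausalLM.from_pretrained(base)
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32,
                                         task_type="CAUSAL_LM"))

# "Their own data": two toy internal documents stand in for it here.
texts = ["Internal runbook: rotating production API keys ...",
         "Support macro: walking a customer through a password reset ..."]
dataset = Dataset.from_dict({"text": texts}).map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    remove_columns=["text"])

Trainer(
    model=model,
    args=TrainingArguments(output_dir="my-private-llm",
                           per_device_train_batch_size=1,
                           num_train_epochs=1),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()

model.save_pretrained("my-private-llm")    # adapters stay on your own machines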

Here’s the complete video interview, part of SiliconANGLE’s and theCUBE’s coverage of the Databricks Data + AI Summit:

Photo: SiliconANGLE
