UPDATED 16:31 EDT / JULY 20 2023

BIG DATA

Compute.ai aims to power up supercloud by going beyond the data warehouse

Driven by growing demand for deep, value-driven insights from complex data, the supercloud concept is surging ahead, capturing the pulse of the business intelligence market.

Even though data warehouses and data lakehouses have been a great step forward in building the repositories that data analysis requires, connecting the data is paramount for in-depth, real-time business analytics. Compute.ai makes this a reality through a semantic layer that is referenceable, according to Chief Executive Officer Joel Inman (pictured).

“What we’re doing, our vision for the future, is really separating compute from data management,” Inman said. “Our view of the semantic layer is simply metadata that connects the different data silos so that you can put it all together. The use case for that would be … the ephemeral data that you don’t have time to put into a data warehouse, you don’t have time to run the analytics and the statistics on that, because if you take the time to actually put it into that data warehouse it’s gone.”
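Inman's description of the semantic layer as "simply metadata that connects the different data silos" can be pictured as a small mapping from logical fields to physical locations. The sketch below is purely illustrative, with hypothetical silo, table and column names; it is not Compute.ai's implementation.

```python
# Hypothetical metadata: each logical field maps to the physical
# columns that back it, spread across different data silos.
SEMANTIC_LAYER = {
    "customer_id": [
        ("warehouse", "crm.customers", "id"),
        ("lakehouse", "events.clickstream", "user_id"),
    ],
    "order_total": [
        ("warehouse", "sales.orders", "total_usd"),
    ],
}

def resolve(field):
    """Return every physical location that backs a logical field."""
    return SEMANTIC_LAYER.get(field, [])

# One logical field, referenceable across two silos at once.
print(resolve("customer_id"))
```

Because the layer is only metadata, it can be consulted at query time without first copying the underlying rows into a warehouse, which is the point Inman makes about ephemeral data.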

Inman spoke with theCUBE industry analysts John Furrier and Dave Vellante at the Supercloud 3: Security, AI and the Supercloud event, during an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. They discussed how Compute.ai is accelerating the supercloud narrative through data connectivity, artificial intelligence and machine learning, and a semantic layer.

How AI/ML comes into the picture

AI and machine learning are setting the supercloud ball rolling by acting as the accelerator on top. Compute.ai, in turn, uses AI/ML to tame memory workloads, Inman pointed out.

“We use AI/ML in our product, and we use it to page to disk elegantly,” he said. “Within our code, we use these algorithms in order to garner memory efficiency that has never been seen before. So, bypassing the memory bus throttle that prevents CPUs from being 100% utilized. We can run CPUs at 100% utilization, even over-commit CPUs, over-commit memory and have a spill to disk that is very elegant.”
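The "spill to disk" behavior Inman describes can be illustrated with a toy buffer that pages rows out to temporary files once a memory budget is exceeded. This is a simplified stand-in under assumed thresholds, not Compute.ai's algorithm, which Inman says is guided by AI/ML.

```python
import os
import pickle
import tempfile

class SpillBuffer:
    """Toy spill-to-disk buffer: holds rows in memory up to a budget,
    then pages the batch out to a temporary file."""

    def __init__(self, max_items=3):
        self.max_items = max_items   # assumed in-memory budget
        self.in_memory = []
        self.spill_files = []

    def add(self, row):
        self.in_memory.append(row)
        if len(self.in_memory) > self.max_items:
            # Budget exceeded: spill the current batch to disk.
            f = tempfile.NamedTemporaryFile(delete=False)
            pickle.dump(self.in_memory, f)
            f.close()
            self.spill_files.append(f.name)
            self.in_memory = []

    def read_all(self):
        """Stream every row back, spilled batches first, in order."""
        rows = []
        for name in self.spill_files:
            with open(name, "rb") as f:
                rows.extend(pickle.load(f))
            os.unlink(name)
        self.spill_files = []
        rows.extend(self.in_memory)
        return rows
```

The idea is that because excess rows land on disk rather than forcing the process to stall on memory, the CPU can keep running at full utilization even when memory is over-committed.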

Running AI and machine learning workloads cost-efficiently will require new infrastructure. That's why out-of-the-box approaches to compute are required so that the benefits outweigh the costs. Compute.ai is helping toward this goal as an empowering relational engine.

“If you put an AI workload on top of the current infrastructure where compute is trapped in database silos, you’re going to get costs that go through the roof,” Inman said. “There was a study that I read recently from CSAT that showed the biggest LLM model is going to cost $25 trillion in compute by 2026. Something has to be done about the compute and the infrastructure to support AI/ML, and that’s where we play.”

By supporting different AI and machine learning workloads, Compute.ai keeps its focus on compute. This approach raises productivity because the company's infrastructure accommodates various models.

“I was talking to a customer yesterday who said, ‘We have these models that we’ve been building, and they’re proprietary to us. And we need to be able to run them within our platform,’” Inman said. “They were using BigQuery from Google — a perfect example of generating and building these models and then putting them on our infrastructure.”

Compute as a new category

Over the years, compute has been part of the relational database architecture for convenience. Nevertheless, the narrative is changing as demand for compute continues to skyrocket, and having it trapped in data silos is detrimental, according to Inman.

“We believe compute is a new category,” he said. “People have never really separated it out before. I think that we’re seeing the early days, we’re seeing the need and we’re seeing people have data warehouses and data lakehouses both in their environment. We need to make use of it with a relational structure, a semantic layer and then the infrastructure to support that. The infrastructure to support that is going to be the supercloud.”

With data everywhere and spilling over, the burning question becomes how best to use it. Compute.ai's vision is to make compute just as omnipresent, which Inman sees as crucial to answering that question.

“Think of us as a coprocessor or a compute engine,” he said. “We go into that system, and maybe it is a Snowflake, and you point dbt directly at our engine so you offload the compute. Compute is going to the compute engine, dbt is pointed at our product, and then you load it back into your data warehouse.”
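The offload pattern Inman sketches, where a tool such as dbt points at the compute engine and only results flow back into the warehouse, can be mocked with two stubs. Both classes and the table name below are hypothetical stand-ins, not real Compute.ai, dbt or warehouse APIs.

```python
class ComputeEngine:
    """Stand-in for an external compute engine that runs the SQL."""

    def run(self, sql):
        # Pretend the heavy join/aggregation work happens here,
        # off the warehouse; the result set is canned for this sketch.
        return [("acme", 42)]

class Warehouse:
    """Stand-in for a data warehouse that only receives results."""

    def __init__(self):
        self.tables = {}

    def load(self, table, rows):
        self.tables[table] = rows

engine = ComputeEngine()
warehouse = Warehouse()

# Compute goes to the compute engine...
result = engine.run(
    "SELECT account, SUM(total) FROM orders GROUP BY account"
)
# ...and only the finished result is loaded back into the warehouse.
warehouse.load("order_summaries", result)
```

The design point is the separation: the warehouse stores and serves data, while the expensive query execution happens in a detachable compute tier.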

When it comes to real-time data, the lag to value is usually costly. That's where the semantic layer comes in handy as a real-time digital representation of the data, Inman pointed out.

“When you’re creating the semantic layer, what you’re doing is you’re actually executing thousands and hundreds of thousands of joins,” he said. “You’re taking tables and formats and rows and columns from all the disparate areas of your business, and you’re putting them together into one semantic layer that is referenceable.”
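At its smallest scale, the join work Inman describes looks like the sketch below: records from two made-up silos are matched on a shared key and merged into one referenceable view. The table names and data are invented for illustration.

```python
# Two hypothetical silos holding different facts about the same customers.
crm = {"c1": {"name": "Acme"}, "c2": {"name": "Globex"}}
billing = {"c1": {"balance": 120.0}, "c2": {"balance": 0.0}}

# Join on the shared customer key: each record in the semantic view
# merges the fields from both silos.
semantic_view = {
    key: {**crm[key], **billing[key]}
    for key in crm.keys() & billing.keys()
}

print(semantic_view["c1"])  # one record combining both silos
```

A real semantic layer repeats this across thousands of tables and keys, which is why Inman frames it as executing enormous numbers of joins.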

Here’s the complete video interview, part of SiliconANGLE’s and theCUBE’s coverage of the Supercloud 3: Security, AI and the Supercloud event:

Photo: SiliconANGLE
