UPDATED 13:47 EDT / JULY 27 2023

BIG DATA

Revamping data storage to harness AI for an information supply chain

Artificial intelligence has created an acute appreciation for data within the enterprise. However, it has also created a data sprawl that places unprecedented strain on today’s storage systems.

So, how does the industry tackle the mounting technical challenges of scale to create a new class of AI applications and manage data flow through the information supply chain?

“We’ve started to see changes in workloads from media and entertainment, healthcare, life sciences [and] financial services sectors,” said Christopher Maestas (pictured), worldwide executive solutions architect at IBM Corp. “AI really has changed it, because it picked the middle of the road — not the itty-bitty files that you see or the large streaming data that you’ve been doing. We’re really seeing that data size change and, again, having to adapt to a different data size that we’ve not traditionally handled in the past.”

Maestas spoke with theCUBE industry analyst Dave Vellante at IBM Storage Summit, during an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. They discussed the need to refactor data systems to power tomorrow’s AI applications. (* Disclosure below.)

A multifaceted approach to the scale problem

Equipping data platforms to scale out and absorb the AI explosion goes beyond creating a high-performance environment. IBM’s approach goes further by providing multiple ways to interface with data, accommodating emerging workload categories, according to Maestas.

“[It gives] you different ways of interfacing with the data, whether you wanna explore containers in the future, whether you want to look at pushing it out to an object store — we really are adapting to how you’re accessing the data,” he explained.

Another area that IBM is addressing is that of cross-platform integrations. The company’s storage platform now “plays nice with others” to connect and cache from other object storage providers, according to Maestas.

“What we can do is actually connect and cache from other data sources that are non-IBM, that are object storage, NFS, NetApp kind of filers … and bring that data in where we can catalog it, we can manage the data for you,” he said. “You can actually create new decisions based on a workflow that you’re doing in AI and have that data be brought in from one data source and be pushed out to another data source.”
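The "connect and cache" pattern Maestas describes — registering external, non-IBM data sources, pulling objects through a cache on first access, cataloging where they came from, and pushing them out to another destination — can be sketched in simplified form. The sketch below is a hypothetical illustration of that general pattern, not IBM's API: every class and method name is invented for this example, and a real object store or NFS filer is stood in for by an in-memory class.

```python
class ObjectStore:
    """Stand-in for any object store or filer (S3-compatible, NFS, NetApp, etc.)."""
    def __init__(self, name):
        self.name = name
        self._objects = {}

    def put(self, key, data):
        self._objects[key] = data

    def get(self, key):
        return self._objects[key]

    def contains(self, key):
        return key in self._objects


class CachingCatalog:
    """Caches objects from registered external sources and tracks their origin."""
    def __init__(self):
        self.sources = {}   # name -> external store
        self.cache = {}     # key -> cached data
        self.catalog = {}   # key -> name of the store it came from

    def register_source(self, store):
        # "Connect" to a non-IBM data source so its objects can be cached.
        self.sources[store.name] = store

    def fetch(self, key):
        # Serve from the local cache if present; otherwise pull the object
        # from the first registered source that has it, and catalog it.
        if key in self.cache:
            return self.cache[key]
        for store in self.sources.values():
            if store.contains(key):
                data = store.get(key)
                self.cache[key] = data
                self.catalog[key] = store.name
                return data
        raise KeyError(key)

    def push(self, key, destination):
        # Write a cached object out to a different store — e.g., after an
        # AI workflow has decided where that data should live next.
        destination.put(key, self.fetch(key))
```

Under this sketch, a workflow might register a filer as a source, fetch an object (which caches and catalogs it), then push it to a separate object store, mirroring the bring-in-from-one-source, push-out-to-another flow described above.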

Here’s the complete video interview, part of SiliconANGLE’s and theCUBE’s coverage of the IBM Storage Summit:

(* Disclosure: TheCUBE is a paid media partner for the IBM Storage Summit. Neither IBM Corp., the sponsor of theCUBE’s event coverage, nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)

Photo: SiliconANGLE
