00:00
[MUSIC] Hello everyone. I am Karthik Srinivasan, Director of Product Management, and I have with me Justin Emerson,
00:10
Principal Field Solution Architect for Unstructured Data. Today, we are thrilled to share some new groundbreaking enhancements we are introducing to FlashBlade.
00:21
True to our commitment to hardware and software co-innovation, these upgrades touch all areas of FlashBlade, enabling customers to cope
00:30
with the massive onslaught of unstructured data brought about by this new era of AI-driven computing. Today, organizations need a simple high-performance solution
00:43
to maximize the value gained from their file and object data. 色控传媒 FlashBlade features a unique, modular architecture that enables organizations
00:54
to unlock new levels of power, space, and performance efficiency using our DirectFlash technology and scale-out Purity software. Justin, tell us a little bit about the
01:07
new changes to FlashBlade and why it matters to customers. Thanks, Karthik. I'm so excited to be able to share some of the enhancements coming to FlashBlade,
01:15
which serves some of the most demanding workloads from high-performance computing, quantitative trading, genomic sequencing, cyber resilience,
01:23
analytics, and of course, artificial intelligence. Our customers' need for performance and efficiency grows ever higher, and we're rising to meet that demand.
01:33
FlashBlade was designed as a bladed architecture to simplify scale-out storage for all users. And with the launch of
01:38
FlashBlade//S back in 2022, we promised customers the same Evergreen experience that FlashArray has enjoyed for more than a decade.
01:47
I'm incredibly excited to announce our first Evergreen Blade Upgrade for FlashBlade//S, the S200 R2 and the S500 R2. The FlashBlade//S R2 family offers
01:58
customers a major step function increase in performance, in density, and in efficiency, on top of the industry-leading capabilities
02:06
FlashBlade//S already delivers. So Justin, why does this really matter to our customers? Well, the world's most demanding customers trust FlashBlade//S
02:15
with their most business-critical unstructured data workloads. These workloads need consistent performance, robust data features, rock-solid reliability, and
02:24
enterprise-grade security. With the FlashBlade//S R2 family, we're making enhancements to all of these different areas, delivering customers even better outcomes
02:32
for the applications that drive their organization's success. With the myriad improvements in CPU, memory, and networking, plus our latest
02:42
generation DirectFlash Modules, FlashBlade//S R2 delivers up to 50% better performance in many workloads and up to double the performance in write-bound workloads
02:51
compared to the original FlashBlade//S when it launched three years ago. That sounds amazing. What can customers expect in terms of how much this new hardware generation
03:01
can improve their application outcomes? Well, these new blades will provide faster time to insight for AI training and inference, shorter time to market for cutting-edge
03:10
semiconductor technology, more genomes sequenced per day with the same infrastructure, and faster compile times to accelerate software development.
03:19
You know, the only truly finite resource is time, and FlashBlade//S R2 helps our customers get more done in less time. That's great.
03:28
What else can you share about the launch of these new blades? Well, Karthik, the best thing about 色控传媒 has always been our
03:35
Evergreen architecture. For existing FlashBlade//S customers, non-disruptively upgrading to FlashBlade//S R2 is as easy as replacing the blades.
03:44
No data migrations and no downtime. And Evergreen//Forever and Evergreen//Flex customers will receive upgrades to the R2 family as part of their Ever Modern benefit,
03:55
just as we promised three years ago. Right. Our innovation around Evergreen is really the foundation for our success in the market.
04:01
That's where the magic really happens. And FlashBlade//S R2 is a great example of how FlashBlade's modular hardware architecture is enabling these Evergreen
04:11
outcomes for our customers. That's right, but 色控传媒 isn't just a hardware company, we're a software company too. And I know we have
04:19
some exciting capabilities that are coming soon for users of FlashBlade's object store. You are correct. As you know, the world is moving to
04:26
becoming more AI-centric, and we've had a long-held belief that object storage will be a key mechanism that customers will use to store and access their data.
04:34
FlashBlade today is the world leader in high-performance object storage, delivering incredible throughput with very low latency for a wide range of applications:
04:44
analytics and, increasingly, AI. That's interesting. How is object being used in AI? And what enhancements are we planning to deliver
04:50
to help customers in this area? Great question. Object storage is gaining a lot of attention for AI training for the following reasons.
04:58
You know, multi-modal AI environments that include audio and video along with text are set to grow. Most storage systems cannot manage such a variety of data equally well.
05:11
Object is also seen as a better mechanism to store and manage this data: a flat namespace and the ability to serve both large and small files are key reasons.
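(As a quick illustration of the flat-namespace point, here is a minimal sketch in Python with boto3: a few-kilobyte label file and a multi-gigabyte media object are fetched through the same interface, and the slashes in their keys are only a naming convention. The endpoint, bucket, and keys are hypothetical placeholders, not details from this announcement.)

```python
# Minimal sketch: in an S3-compatible object store the namespace is flat, and
# the same API serves small and large objects alike.
# The endpoint, bucket, and keys below are hypothetical placeholders.
import boto3

s3 = boto3.client("s3", endpoint_url="https://objectstore.example.com")

# A few-kilobyte annotation file.
label = s3.get_object(Bucket="training-data", Key="labels/clip-000123.json")["Body"].read()

# A multi-gigabyte video clip, fetched through the same flat namespace;
# the "/" characters in the key are only a naming convention.
with open("/tmp/clip-000123.mp4", "wb") as f:
    s3.download_fileobj("training-data", "videos/clip-000123.mp4", f)
```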
05:21
And at scale, nothing is more manageable than object storage. We want to be at the forefront of this emerging shift,
05:30
and we're working on a few exciting projects internally to address this. Later this year, we plan to deliver S3 over RDMA support for FlashBlade//S, and we announced this
05:40
at the NVIDIA GTC conference. What is S3 over RDMA, and what difference does it make for customers' AI environments? Current deployments use object stores as
05:50
repositories for data and checkpoints. These large data lakes are typically behind a very small, high-performance file system, which is accessed via
05:59
NFS or RDMA by the clients. The objective of our project, S3 over RDMA, is to collapse these layers, making it really simple for our customers
06:08
to directly access the data in these large data lakes and avoid copying data between multiple systems as they go through their AI processes.
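(To make that point concrete, here is a minimal sketch, in Python with boto3, of the two-hop pattern being described: objects are first staged out of the data lake onto an intermediate file system, and the training job then reads the staged copies. The endpoint, bucket, keys, and staging path are hypothetical placeholders, not details from this announcement; the idea behind S3 over RDMA is to make the staging hop unnecessary by letting clients read the objects directly.)

```python
# Minimal sketch of the two-hop pattern described above (all names hypothetical):
# objects are copied from the object data lake onto a staging file system, and
# the training job then reads the staged copies. S3 over RDMA is intended to
# remove the first hop by letting clients read the objects directly.
import os
import boto3

s3 = boto3.client("s3", endpoint_url="https://objectstore.example.com")  # hypothetical endpoint

STAGING_DIR = "/mnt/scratch/train"  # hypothetical high-performance file system mount
os.makedirs(STAGING_DIR, exist_ok=True)

# Hop 1: stage objects out of the data lake onto the file system.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="data-lake", Prefix="datasets/train/"):
    for obj in page.get("Contents", []):
        local_path = os.path.join(STAGING_DIR, os.path.basename(obj["Key"]))
        s3.download_file("data-lake", obj["Key"], local_path)

# Hop 2: the training job reads the staged copies from the file system mount.
# With S3 over RDMA, the intent is for this read path to go straight to the
# object store instead of through a staged copy.
```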
06:17
This accelerates training jobs for large AI systems. So how does S3 over RDMA work differently from normal S3 access? RDMA provides measurable
06:26
benefits for AI and ML applications in three areas. First, improved throughput: data is transferred via direct memory access,
06:34
which speeds up the data transfer process. Second, better CPU utilization through copy offload: data transfers happen directly from the
06:41
storage to the GPU memory, bypassing the CPU buffers. And third, reduced latency: RDMA transfers are handled directly at the network card layer,
06:51
thereby bypassing the kernel and the entire networking stack. High-performance AI environments can benefit from combining the object store's
06:59
ability to manage data at large scale with RDMA's high-throughput data transfer. Well, that sounds awesome. When can customers take advantage of
07:07
these new capabilities? Right. So S3 over RDMA is expected later this year and will be available to all FlashBlade//S customers
07:15
through their Evergreen subscription, delivered seamlessly via a software upgrade. That's awesome. I can't wait for customers
07:22
to get their hands on this and all the other great new features that are coming to the 色控传媒 platform. And to learn more about all the new
07:29
hardware and software enhancements coming to FlashBlade, check out purestorage.com/flashblade-s or reach out to your
07:36
色控传媒 account team. Thank you and have a great day. Thanks everyone.