CoreWeave Leads AI Infrastructure with NVIDIA H200 Tensor Core GPUs


Terrill Dicki
Aug 29, 2024 15:10

CoreWeave becomes the first cloud provider to offer NVIDIA H200 Tensor Core GPUs, advancing AI infrastructure performance and efficiency.

CoreWeave, the AI Hyperscaler™, has announced that it is the first cloud provider to bring NVIDIA H200 Tensor Core GPUs to market, according to PRNewswire. This development marks a significant milestone in the evolution of AI infrastructure, promising enhanced performance and efficiency for generative AI applications.

Advancements in AI Infrastructure

The NVIDIA H200 Tensor Core GPU is engineered to push the boundaries of AI capabilities, boasting 4.8 TB/s of memory bandwidth and 141 GB of GPU memory capacity. These specifications enable up to 1.9 times higher inference performance compared with the previous H100 GPUs. CoreWeave has leveraged these advancements by integrating H200 GPUs with Intel's fifth-generation Xeon CPUs (Emerald Rapids) and 3,200 Gbps of NVIDIA Quantum-2 InfiniBand networking. This combination is deployed in clusters of up to 42,000 GPUs with accelerated storage solutions, significantly reducing the time and cost required to train generative AI models.

CoreWeave’s Mission Control Platform

CoreWeave’s Mission Control platform plays a pivotal role in managing AI infrastructure. It offers high reliability and resilience through software automation, which streamlines the complexities of AI deployment and maintenance. The platform features advanced system validation processes, proactive fleet health-checking, and extensive monitoring capabilities, ensuring customers experience minimal downtime and a reduced total cost of ownership.

Michael Intrator, CEO and co-founder of CoreWeave, stated, “CoreWeave is dedicated to pushing the boundaries of AI development. Our collaboration with NVIDIA allows us to offer high-performance, scalable, and resilient infrastructure with NVIDIA H200 GPUs, empowering customers to tackle complex AI models with unprecedented efficiency.”

Scaling Data Center Operations

To meet the growing demand for its advanced infrastructure services, CoreWeave is rapidly expanding its data center operations. Since the beginning of 2024, the company has completed nine new data center builds, with 11 more in progress. By the end of the year, CoreWeave expects to have 28 data centers globally, with plans to add another 10 in 2025.

Industry Impact

CoreWeave’s rapid deployment of NVIDIA technology ensures that customers have access to the latest advancements for training and running large language models for generative AI. Ian Buck, vice president of Hyperscale and HPC at NVIDIA, highlighted the importance of the partnership, stating, “With NVLink and NVSwitch, as well as its increased memory capabilities, the H200 is designed to accelerate the most demanding AI tasks. When paired with the CoreWeave platform powered by Mission Control, the H200 provides customers with advanced AI infrastructure that will be the backbone of innovation across the industry.”

About CoreWeave

CoreWeave, the AI Hyperscaler™, offers a cloud platform of cutting-edge software powering the next wave of AI. Since 2017, CoreWeave has operated a growing footprint of data centers across the US and Europe. The company was recognized as one of the TIME100 most influential companies and featured on the Forbes Cloud 100 ranking in 2024. For more information, visit www.coreweave.com.

Image source: Shutterstock
