NVIDIA to Showcase Data Center Innovations at Hot Chips 2024

Ted Hisokawa

Aug 23, 2024 14:53
NVIDIA engineers to unveil advancements in the Blackwell platform, liquid cooling, and AI-driven chip design at Hot Chips 2024.

At the upcoming Hot Chips 2024 conference, NVIDIA engineers are set to present groundbreaking innovations aimed at enhancing data center performance and energy efficiency. The event, scheduled for August 25-27 at Stanford University, will feature four talks by senior NVIDIA engineers, according to the NVIDIA Blog.

NVIDIA Blackwell Platform

NVIDIA’s presentations will highlight the NVIDIA Blackwell platform, which integrates multiple chips, systems, and NVIDIA CUDA software to drive the next generation of AI applications. The NVIDIA GB200 NVL72, a multi-node, liquid-cooled solution, will also be showcased for its ability to connect 72 Blackwell GPUs and 36 Grace CPUs, setting a new benchmark for AI system design. Additionally, the NVLink interconnect technology will be discussed for its high-throughput, low-latency capabilities, which are crucial for generative AI.

Liquid Cooling Advancements

A significant portion of the conference will focus on liquid cooling technologies. Ali Heydari, director of data center cooling and infrastructure at NVIDIA, will present designs for hybrid-cooled data centers. These designs aim to retrofit existing air-cooled data centers with liquid-cooling units, offering a more efficient and space-saving solution. Heydari’s team is also collaborating with the U.S. Department of Energy on the COOLERCHIPS program to develop advanced cooling technologies. Using the NVIDIA Omniverse platform, they are creating digital twins to model energy consumption and cooling efficiency.
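To illustrate the kind of question such a digital twin answers, here is a deliberately toy sketch comparing facility power under air versus hybrid liquid cooling. The cooling-overhead fractions and loads are illustrative assumptions, not NVIDIA or COOLERCHIPS figures, and a real Omniverse twin models far more physics than this.

```python
# Toy "digital twin" energy model: compare air vs. hybrid liquid cooling.
# Overhead fractions below are assumptions for illustration only.

def facility_power_kw(it_load_kw: float, cooling_overhead: float) -> float:
    """Total facility power = IT load plus cooling power, where
    cooling_overhead is cooling power as a fraction of the IT load."""
    return it_load_kw * (1.0 + cooling_overhead)

def pue(it_load_kw: float, cooling_overhead: float) -> float:
    """Power Usage Effectiveness: total facility power / IT power."""
    return facility_power_kw(it_load_kw, cooling_overhead) / it_load_kw

it_load = 1_000.0  # assume 1 MW of IT load
air_pue = pue(it_load, 0.40)      # assumed air-cooled overhead
hybrid_pue = pue(it_load, 0.15)   # assumed hybrid liquid-cooled overhead
savings_kw = (facility_power_kw(it_load, 0.40)
              - facility_power_kw(it_load, 0.15))
print(f"PUE air={air_pue:.2f}, hybrid={hybrid_pue:.2f}, "
      f"saving {savings_kw:.0f} kW")
```

Even this crude model shows why retrofitting matters: at a fixed IT load, lowering cooling overhead translates directly into facility-level power savings.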

AI-Driven Chip Design

NVIDIA is also leveraging AI to enhance semiconductor design. Mark Ren, director of design automation research at NVIDIA, will discuss how AI models improve design quality and productivity by automating time-consuming tasks. These models include prediction and optimization tools that help engineers analyze and improve designs rapidly. AI agents powered by large language models (LLMs) are being developed to autonomously complete tasks, interact with designers, and learn from a database of human and agent experiences. Ren will share examples of AI applications in timing report analysis, cell cluster optimization, and code generation.
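Timing report analysis is a concrete example of the kind of mechanical task such agents automate. The sketch below shows only the first step an agent would take, extracting failing paths from a report, using a simplified, made-up report format; real EDA tools produce far richer reports, and the AI work described in the talk sits on top of this parsing, not in it.

```python
# Hypothetical sketch: extract timing violations from a simplified,
# made-up timing report. A design agent would feed data like this to
# prediction models or an LLM for analysis and fix suggestions.
import re

REPORT = """\
Path: u_core/alu/add_0  slack: -0.12 ns
Path: u_core/alu/mul_3  slack: 0.45 ns
Path: u_mem/ctrl/fifo_1 slack: -0.03 ns
"""

PATH_RE = re.compile(r"Path:\s+(\S+)\s+slack:\s+(-?\d+\.\d+)\s+ns")

def failing_paths(report: str) -> list[tuple[str, float]]:
    """Return (path, slack) pairs with negative slack, worst first."""
    paths = [(m.group(1), float(m.group(2)))
             for m in PATH_RE.finditer(report)]
    return sorted((p for p in paths if p[1] < 0), key=lambda p: p[1])

for path, slack in failing_paths(REPORT):
    print(f"VIOLATION {path}: {slack:+.2f} ns")
```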

These presentations underscore NVIDIA’s commitment to pushing the boundaries of data center computing through innovative technologies in AI and cooling solutions. The company’s efforts aim to deliver unparalleled performance, efficiency, and optimization across various sectors.

Image source: Shutterstock
