Llama 3.1 Now Optimized for AMD Platforms from Data Center to AI PCs


Joerg Hiller
Jul 23, 2024 18:13

Meta’s Llama 3.1 now brings enhanced AI capabilities to AMD platforms, including Instinct MI300X GPUs and EPYC CPUs.


In a significant development for the artificial intelligence (AI) ecosystem, AMD has announced that Meta’s latest Llama 3.1 large language model (LLM) is now optimized for AMD platforms. This includes everything from high-performance data center solutions to edge computing and AI-enabled personal computers, according to AMD.com.

AMD Instinct™ MI300X GPU Accelerators and Llama 3.1

The Llama 3.1 model, developed by Meta, introduces enhanced capabilities, including a context length of up to 128K tokens, support for eight languages, and Llama 3.1 405B, the largest openly available foundation model. AMD has confirmed that its Instinct MI300X GPUs can run this model efficiently, leveraging their industry-leading memory capacity and bandwidth. A single AMD Instinct MI300X can handle up to eight parallel instances of the Llama 3 model, providing significant cost savings and performance efficiency for organizations.
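To see why memory capacity matters here, a rough back-of-envelope sketch helps: at FP16, each parameter takes two bytes, so the 405B model's weights alone occupy roughly 810 GB. The figures below are illustrative estimates (using the MI300X's published 192 GB of HBM3), not AMD's official sizing, and ignore activation and KV-cache overhead.

```python
# Back-of-envelope memory estimate for serving Llama 3.1 405B in FP16.
# Illustrative only: ignores activations, KV cache, and runtime overhead.

def fp16_weight_footprint_gb(num_params: float) -> float:
    """Approximate GB needed to hold model weights at 2 bytes/parameter."""
    return num_params * 2 / 1e9

LLAMA_31_405B_PARAMS = 405e9
MI300X_HBM_GB = 192  # published HBM3 capacity of one Instinct MI300X

weights_gb = fp16_weight_footprint_gb(LLAMA_31_405B_PARAMS)
gpus_needed = int(-(-weights_gb // MI300X_HBM_GB))  # ceiling division

print(f"FP16 weights: ~{weights_gb:.0f} GB")          # ~810 GB
print(f"MI300X GPUs just for weights: {gpus_needed}")  # 5
```

By this estimate the weights alone fit on a handful of MI300X accelerators, which is what makes single-server deployment of very large models plausible.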

Meta utilized AMD’s ROCm™ Open Ecosystem and Instinct MI300X GPUs during the development of Llama 3.1, further solidifying the collaborative efforts between the two tech giants.

AMD EPYC™ CPUs and Llama 3.1

AMD EPYC CPUs offer high performance and energy efficiency for data center workloads, making them well suited to running AI workloads and LLMs. The Llama 3.1 model serves as a benchmark that helps data center customers assess technology performance, latency, and scalability. For CPU-only environments, AMD’s 4th Gen EPYC processors provide compelling performance and efficiency, making them suitable for smaller models such as Llama 3 8B without requiring GPU acceleration.

AMD AI PCs and Llama 3.1

AMD is also focused on democratizing AI through its Ryzen AI™ series of processors, allowing users to harness the power of Llama 3.1 without advanced coding skills. Through a partnership with LM Studio, AMD offers customers the ability to use Llama 3.1 models for tasks such as drafting emails, proofreading documents, and generating code.
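For readers who do want to script against a locally hosted model, LM Studio exposes an OpenAI-compatible HTTP server. The sketch below builds and sends a chat request to it; the model name and default port (`localhost:1234`) are assumptions that should be matched to whatever model is actually loaded in LM Studio.

```python
import json
import urllib.request

# Sketch of calling a local Llama 3.1 model via LM Studio's
# OpenAI-compatible server. Model name and URL are assumptions;
# adjust them to your local LM Studio configuration.

def build_chat_request(prompt: str, model: str = "llama-3.1-8b-instruct") -> dict:
    """Assemble an OpenAI-style chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

def ask_local_llama(prompt: str,
                    url: str = "http://localhost:1234/v1/chat/completions") -> str:
    """Send the prompt to a running LM Studio server and return its reply."""
    payload = json.dumps(build_chat_request(prompt)).encode()
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # Only builds the payload; ask_local_llama() requires LM Studio running.
    print(build_chat_request("Proofread: Their going to the store."))
```

The same payload shape works for any OpenAI-compatible endpoint, which is why LM Studio's local server slots into existing tooling with little change.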

AMD Radeon™ GPUs and Llama 3.1

For users interested in running generative AI locally, AMD Radeon™ GPUs offer on-device AI processing capabilities. The combination of AMD Radeon desktop GPUs and ROCm software enables even small businesses to run customized AI tools on standard desktop PCs or workstations. AMD AI desktop systems equipped with Radeon PRO W7900 GPUs and Ryzen™ Threadripper™ PRO processors represent a new solution for fine-tuning and running inference on LLMs with high precision.

Conclusion

The collaboration between AMD and Meta to optimize Llama 3.1 for AMD platforms marks a significant milestone in the AI ecosystem. Llama 3.1’s compatibility with AMD’s diverse hardware and software portfolio delivers strong performance and efficiency, empowering innovation across a range of sectors.

Image source: Shutterstock
