Understanding Decoding Strategies in Large Language Models (LLMs)


Darius Baruo

Aug 22, 2024 04:58

Explore how Large Language Models (LLMs) choose the next word using decoding strategies. Learn about different methods like greedy search, beam search, and more.

Large Language Models (LLMs) are trained to predict the next word in a text sequence. However, the method by which they generate text involves a combination of their probability estimates and algorithms known as decoding strategies. These strategies are crucial in determining how LLMs choose the next word, according to AssemblyAI.

Next-Word Predictors vs. Text Generators

LLMs are often described as “next-word predictors” in non-scientific literature, but this can lead to misconceptions. During the decoding phase, LLMs employ various strategies to generate text, not just outputting the most probable next word at every step. These strategies are known as decoding strategies, and they fundamentally determine how LLMs generate text.

Decoding Strategies

Decoding strategies can be divided into deterministic and stochastic methods. Deterministic methods produce the same output for the same input, while stochastic methods introduce randomness, leading to varied outputs even with the same input.

Deterministic Methods

Greedy Search

Greedy search is the simplest decoding strategy: at each step, the most probable next token is chosen. While efficient, it often produces repetitive and dull text.
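
To make this concrete, here is a minimal sketch in Python. The next_token_probs function is a hypothetical stand-in for an LLM forward pass, not any real model or library API; only the argmax selection rule is the point.

VOCAB = ["the", "cat", "sat", "on", "a", "mat", "."]

def next_token_probs(context):
    # Hypothetical toy "model": down-weights tokens already in the context.
    weights = [0.5 if tok in context else 1.0 for tok in VOCAB]
    total = sum(weights)
    return dict(zip(VOCAB, [w / total for w in weights]))

def greedy_decode(prompt, max_new_tokens=5):
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        probs = next_token_probs(tokens)
        tokens.append(max(probs, key=probs.get))  # always pick the single most probable token
    return tokens

print(greedy_decode(["the"]))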

Beam Search

Beam search generalizes greedy search by maintaining a set of the top K most probable sequences at each step. While it improves text quality, it can still produce repetitive and unnatural text.
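
The same toy setup can be extended into a sketch of beam search. The beam width k and the next_token_probs stand-in are illustrative assumptions, not a real model; each beam carries its cumulative log-probability so sequences of the same length can be compared.

import math

VOCAB = ["the", "cat", "sat", "on", "a", "mat", "."]

def next_token_probs(context):
    # Hypothetical toy "model": down-weights tokens already in the context.
    weights = [0.5 if tok in context else 1.0 for tok in VOCAB]
    total = sum(weights)
    return dict(zip(VOCAB, [w / total for w in weights]))

def beam_search(prompt, max_new_tokens=5, k=3):
    beams = [(list(prompt), 0.0)]  # (tokens, cumulative log-probability)
    for _ in range(max_new_tokens):
        candidates = []
        for tokens, score in beams:
            for tok, p in next_token_probs(tokens).items():
                candidates.append((tokens + [tok], score + math.log(p)))
        # Keep only the K highest-scoring partial sequences.
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:k]
    return beams[0][0]  # best complete sequence

print(beam_search(["the"], k=3))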

Stochastic Methods

Top-k Sampling

Top-k sampling introduces randomness by sampling the next token from the top k most probable choices. However, choosing an optimal k value can be challenging.
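
A sketch of the selection rule, applied to an illustrative next-token distribution rather than real model output:

import random

def top_k_sample(probs, k=3):
    # Keep only the k most probable tokens, renormalize, then sample among them.
    top = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:k]
    tokens, weights = zip(*top)
    total = sum(weights)
    return random.choices(tokens, weights=[w / total for w in weights])[0]

# Illustrative next-token distribution (not real model output):
probs = {"mat": 0.45, "rug": 0.25, "sofa": 0.15, "moon": 0.10, "table": 0.05}
print(top_k_sample(probs, k=3))  # samples only among "mat", "rug", "sofa"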

Top-p Sampling (Nucleus Sampling)

Top-p sampling dynamically selects tokens based on a cumulative probability threshold, adapting to the distribution shape at each step and preserving diversity in generated text.
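
In code, the rule looks roughly like this sketch; the threshold p and the example distribution are illustrative assumptions:

import random

def top_p_sample(probs, p=0.8):
    # Keep the smallest set of top tokens whose cumulative probability reaches p.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    nucleus, cumulative = [], 0.0
    for tok, prob in ranked:
        nucleus.append((tok, prob))
        cumulative += prob
        if cumulative >= p:  # the nucleus size adapts to the distribution's shape
            break
    tokens, weights = zip(*nucleus)
    return random.choices(tokens, weights=weights)[0]

probs = {"mat": 0.45, "rug": 0.25, "sofa": 0.15, "moon": 0.10, "table": 0.05}
print(top_p_sample(probs, p=0.8))  # here the nucleus is "mat", "rug", "sofa"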

Temperature Sampling

Temperature sampling adjusts the sharpness of the probability distribution using a temperature parameter. Lower temperatures produce more deterministic text, while higher temperatures increase randomness.
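
A sketch of the scaling step, applied to illustrative logits rather than real model output:

import math
import random

def temperature_sample(logits, temperature=1.0):
    # Divide the logits by the temperature before the softmax:
    # T < 1 sharpens the distribution, T > 1 flattens it.
    scaled = {tok: l / temperature for tok, l in logits.items()}
    max_l = max(scaled.values())  # subtract the max for numerical stability
    exp = {tok: math.exp(l - max_l) for tok, l in scaled.items()}
    total = sum(exp.values())
    probs = {tok: e / total for tok, e in exp.items()}
    return random.choices(list(probs), weights=list(probs.values()))[0]

logits = {"mat": 2.0, "rug": 1.2, "sofa": 0.5, "moon": -1.0}
print(temperature_sample(logits, temperature=0.5))  # close to greedy
print(temperature_sample(logits, temperature=1.5))  # noticeably more random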

Optimizing Information Content via Typical Sampling

Typical sampling introduces principles from information theory to balance predictability and surprise in generated text. It restricts sampling to tokens whose information content is close to the expected information (the entropy) of the model's distribution at that step, keeping the text coherent without becoming overly predictable.
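
A rough sketch of the idea, using an illustrative distribution and cumulative-mass cutoff:

import math
import random

def typical_sample(probs, mass=0.9):
    # Expected information content of the distribution (its entropy).
    entropy = -sum(p * math.log(p) for p in probs.values())
    # Rank tokens by how far their surprisal (-log p) is from that expectation.
    ranked = sorted(probs.items(), key=lambda kv: abs(-math.log(kv[1]) - entropy))
    kept, cumulative = [], 0.0
    for tok, p in ranked:
        kept.append((tok, p))
        cumulative += p
        if cumulative >= mass:
            break
    tokens, weights = zip(*kept)
    return random.choices(tokens, weights=weights)[0]

probs = {"mat": 0.45, "rug": 0.25, "sofa": 0.15, "moon": 0.10, "table": 0.05}
print(typical_sample(probs, mass=0.9))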

Boosting Inference Speed via Speculative Sampling

Speculative sampling, recently proposed by researchers at Google Research and DeepMind, improves inference speed by generating multiple tokens per pass of the large model. A smaller draft model proposes tokens, and the larger target model then verifies and corrects them, leading to significant speedups.
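
A simplified sketch of the accept/reject step follows. The draft_probs and target_probs functions are hypothetical stand-ins for the two models, and for clarity the target model is queried token by token here, whereas a real implementation scores all draft tokens in one parallel pass.

import random

VOCAB = ["the", "cat", "sat", "on", "a", "mat", "."]

def draft_probs(context):   # hypothetical small, cheap draft model
    return {tok: 1.0 / len(VOCAB) for tok in VOCAB}

def target_probs(context):  # hypothetical large target model
    weights = [2.0 if tok == "cat" else 1.0 for tok in VOCAB]
    total = sum(weights)
    return dict(zip(VOCAB, [w / total for w in weights]))

def speculative_step(context, num_draft_tokens=4):
    accepted = []
    for _ in range(num_draft_tokens):
        q = draft_probs(context + accepted)
        draft_tok = random.choices(list(q), weights=list(q.values()))[0]
        p = target_probs(context + accepted)
        if random.random() < min(1.0, p[draft_tok] / q[draft_tok]):
            accepted.append(draft_tok)  # draft token accepted by the target model
        else:
            # On rejection, resample from the residual distribution max(0, p - q)
            # and stop accepting further draft tokens.
            residual = {t: max(0.0, p[t] - q[t]) for t in VOCAB}
            total = sum(residual.values())
            accepted.append(random.choices(list(residual),
                                           weights=[w / total for w in residual.values()])[0])
            break
    return accepted

print(speculative_step(["the"]))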

Conclusion

Understanding decoding strategies is crucial for optimizing the performance of LLMs in text generation tasks. While deterministic methods like greedy search and beam search provide efficiency, stochastic methods like top-k, top-p, and temperature sampling introduce the randomness necessary for more natural outputs. Novel approaches like typical sampling and speculative sampling offer further improvements in text quality and inference speed, respectively.

Image source: Shutterstock
