AI Brain Implant Enables Bilingual Communication for Stroke Survivor

In a groundbreaking advancement, scientists have successfully enabled a stroke survivor to communicate in both Spanish and English using a neuroprosthesis implant. This development, detailed in a study from the lab of Dr. Edward Chang at the University of California, San Francisco, represents a significant leap in medical technology, as reported by the NVIDIA Technical Blog.

Research Highlights

The research, published in Nature Biomedical Engineering, builds on Dr. Chang’s earlier work from 2021, which demonstrated the ability to translate brain activity into words for individuals with severe paralysis. The latest study focuses on a patient named Pancho, who has been unable to speak since his stroke. By employing a bilingual AI model, the neuroprosthesis decodes Pancho’s brain activity and translates it into words in both Spanish and English, which are then displayed on a computer screen.

Technological Implementation

To achieve this, researchers trained a large neural network model on Pancho’s brain activity using the NVIDIA cuDNN-accelerated PyTorch framework and NVIDIA V100 GPUs. The neuroprosthesis, which is implanted on the surface of Pancho’s brain, differentiates between brain activity intended for Spanish or English communication. This differentiation is crucial for accurately translating his thoughts into the desired language.
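
The study’s code and architecture are not described in detail here, so the following PyTorch sketch is purely illustrative: it shows how a classifier might, in principle, separate English-intended from Spanish-intended neural activity. Every identifier, layer size, and input shape below is an assumption made for illustration, not a detail from the paper.

    # Illustrative sketch only; the study's actual architecture is not public.
    import torch
    import torch.nn as nn

    class BilingualIntentClassifier(nn.Module):
        """Toy model: predicts whether a window of neural features reflects
        an attempt to speak English (class 0) or Spanish (class 1)."""
        def __init__(self, n_channels=128, hidden=256):
            super().__init__()
            self.encoder = nn.GRU(input_size=n_channels, hidden_size=hidden,
                                  batch_first=True, bidirectional=True)
            self.head = nn.Linear(2 * hidden, 2)  # two language classes

        def forward(self, x):
            # x: (batch, time, channels) feature windows from the implant
            _, h = self.encoder(x)
            h = torch.cat([h[-2], h[-1]], dim=-1)  # final fwd/bwd hidden states
            return self.head(h)                    # logits over {English, Spanish}

    model = BilingualIntentClassifier()
    window = torch.randn(8, 200, 128)  # 8 windows, 200 time steps, 128 channels
    logits = model(window)             # shape (8, 2)

The published system decodes full words in each language rather than only a language label; this sketch captures just the language-differentiation step described above.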

During the study, Pancho read and attempted to articulate words in both languages. Scientists recorded his brain activity during these attempts and trained the AI model to map the recordings to the corresponding words. Remarkably, the model achieved a 75% accuracy rate in decoding Pancho’s sentences.
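
Again as a hypothetical sketch rather than the study’s actual pipeline, a supervised training and evaluation loop for such a word decoder might look like this in PyTorch; the function and variable names are invented for illustration.

    # Hypothetical supervised loop for a word decoder; not the study's code.
    import torch
    import torch.nn as nn

    def train_decoder(model, loader, epochs=10, lr=1e-3):
        """Fit a decoder mapping neural feature windows to word labels."""
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(epochs):
            for features, word_ids in loader:  # recorded activity, target words
                opt.zero_grad()
                loss = loss_fn(model(features), word_ids)
                loss.backward()
                opt.step()

    def word_accuracy(model, loader):
        """Fraction of attempts decoded to the correct word."""
        correct = total = 0
        with torch.no_grad():
            for features, word_ids in loader:
                pred = model(features).argmax(dim=-1)
                correct += (pred == word_ids).sum().item()
                total += word_ids.numel()
        return correct / total  # the article reports roughly 75% on sentences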

Implications and Future Prospects

This research holds promise for significantly improving communication for individuals who cannot speak or who rely on alternative communication devices. The longevity of Pancho’s neuroprosthesis, implanted four years ago, underscores the technology’s potential long-term impact.

One of the study’s key contributions is what it reveals about how the brain manages language. Contrary to earlier neuroscience studies suggesting that different languages are processed in separate brain regions, this research indicates that speech production in different languages may originate from the same brain area. This insight could pave the way for more advanced neuroprosthetic devices capable of assisting bilingual individuals.

Furthermore, the study highlights the adaptability of generative AI models, which can learn and improve over time and thus play a critical role in translating brain activity into spoken words. Alexander Silva, the lead author of the study, expressed optimism about the technology’s future, noting the profound impact it could have on patients like Pancho.

For those interested in delving deeper into the study, the full research paper is available in Nature Biomedical Engineering. Additional information about Dr. Chang’s previous research on transforming brain waves into words can be found on the NVIDIA Technical Blog.

Image source: Shutterstock
