NVIDIA Showcases AI Security Innovations at Major Cybersecurity Conferences


Luisa Crawford
Sep 19, 2024 10:04

NVIDIA highlights AI security advancements at Black Hat USA and DEF CON 32, emphasizing adversarial machine learning and LLM security.


NVIDIA recently demonstrated its AI security expertise at two of the most prestigious cybersecurity conferences, Black Hat USA and DEF CON 32, according to the NVIDIA Technical Blog. The events provided a platform for NVIDIA to showcase its latest advancements in AI security and share insights with the broader cybersecurity community.

NVIDIA at Black Hat USA 2024

The Black Hat USA conference is a globally recognized event that features cutting-edge security research. This year, discussions highlighted the applications of generative AI tools in security and the security of AI deployments. Bartley Richardson, NVIDIA’s Director of Cybersecurity AI, delivered a keynote alongside WWT CEO Jim Kavanaugh, focusing on how AI and automation are transforming cybersecurity strategies.

Other sessions featured experts from NVIDIA and its partners discussing the revolutionary impact of AI on security postures and techniques for securing AI systems. A panel on AI safety included Nikki Pope, NVIDIA’s Senior Director of AI and Legal Ethics, who discussed the complexities of AI safety with practitioners from Microsoft and Google.

Daniel Rohrer, NVIDIA’s VP of Software Product Security, addressed the unique challenges of securing AI data centers in a session hosted by Trend Micro. The consensus at Black Hat was clear: deploying AI tools necessitates a robust approach to security, emphasizing trust boundaries and access controls.

NVIDIA at DEF CON 32

DEF CON, the world’s largest hacker conference, featured numerous villages where attendees engaged in real-time hacking challenges. NVIDIA researchers supported the AI Village, hosting popular live red-teaming events focused on large language models (LLMs). This year’s events included a Generative Red Team challenge, which led to real-time improvements in model safety guardrails.

Nikki Pope delivered a keynote on algorithmic fairness and safety in AI systems. The AI Cyber Challenge (AIxCC), hosted by DARPA, saw red and blue teams building autonomous agents to identify and exploit code vulnerabilities. This initiative underscored the potential of AI-powered tools to accelerate security research.

Adversarial Machine Learning Training

At Black Hat, NVIDIA and Dreadnode conducted a two-day training on adversarial machine learning (ML), covering techniques to assess security risks against ML models and to implement specific attacks, including evasion, extraction, inversion, poisoning, and attacks on LLMs. Participants practiced executing these attacks in self-paced labs, gaining hands-on experience critical for shaping effective defensive strategies.
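To make one of those techniques concrete, an evasion attack nudges an input just enough to flip a model's prediction. Below is a minimal, self-contained sketch in the fast-gradient-sign style against a toy linear classifier; the weights and input values are made up for illustration and are not material from the training.

```python
import numpy as np

# Toy linear classifier standing in for a deployed ML model
# (hypothetical weights, chosen only for this example).
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    """Return class 1 if the linear score is positive, else 0."""
    return int(x @ w + b > 0)

def fgsm_evasion(x, epsilon=0.5):
    """Evasion sketch: step each feature against the direction that
    supports the current prediction. For a linear model, the gradient
    of the score with respect to the input is simply w."""
    step = -1 if predict(x) == 1 else 1
    return x + step * epsilon * np.sign(w)

x = np.array([2.0, 0.5, 1.0])
x_adv = fgsm_evasion(x)
print(predict(x), predict(x_adv))  # the small perturbation flips the label
```

Real attacks work the same way against much larger models, estimating gradients (or querying the model) instead of reading the weights directly.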

Focus on LLM Security

NVIDIA Principal Security Architect Rich Harang presented on LLM security at Black Hat, emphasizing the importance of grounding LLM security in a familiar application security framework. The talk focused on the security issues associated with retrieval-augmented generation (RAG) LLM architectures, which significantly expand the attack surface of AI models.

Attendees were advised to identify and analyze trust and security boundaries, trace data flows, and apply the principles of least privilege and output minimization to ensure robust security.
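As a rough illustration of least privilege and output minimization at a RAG trust boundary, the sketch below treats retrieved documents as untrusted input and strips them down before they cross into the LLM prompt. The field names and size limit are hypothetical, not details from the talk.

```python
# Hypothetical retrieval schema for this sketch only.
ALLOWED_FIELDS = {"title", "body"}   # least privilege: only what the prompt needs
MAX_SNIPPET_CHARS = 200              # output minimization: cap what crosses over

def minimize(doc: dict) -> dict:
    """Keep only allowed fields and truncate their values before the
    retrieved document is interpolated into an LLM prompt."""
    return {
        key: str(value)[:MAX_SNIPPET_CHARS]
        for key, value in doc.items()
        if key in ALLOWED_FIELDS
    }

retrieved = {
    "title": "Quarterly report",
    "body": "Revenue grew 10% year over year...",
    "internal_notes": "do not share",  # must never reach the model
}
clean = minimize(retrieved)
print(sorted(clean))  # ['body', 'title']
```

The same idea generalizes: every data flow into and out of the model is a boundary where content should be filtered to the minimum required.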

Democratizing LLM Security Assessments

At DEF CON, NVIDIA AI Security Researchers Leon Derczynski and Erick Galinkin introduced garak, an open-source tool for LLM security probing. Garak allows practitioners to test potential LLM exploits quickly, automating a portion of LLM red-teaming. The tool supports nearly 120 unique attack probes, including XSS attacks, prompt injection, and safety jailbreaks.

Garak’s presentation and demo lab were well-attended, marking a significant step forward in standardizing security definitions for LLMs. The tool is available on GitHub, enabling researchers and developers to quantify and compare model security against various attacks.

Summary

NVIDIA’s participation in Black Hat USA and DEF CON 32 highlighted its commitment to advancing AI security. The company’s contributions provided the security community with valuable knowledge for deploying AI systems with a security mindset. For those interested in adversarial machine learning, NVIDIA offers a self-paced online course through its Deep Learning Institute.

For more insights into NVIDIA’s ongoing work in AI and cybersecurity, visit the NVIDIA Technical Blog.

Image source: Shutterstock
