OpenAI’s Cybersecurity Grant Program Highlights Pioneering Projects


OpenAI's Cybersecurity Grant Program Highlights Pioneering Projects

OpenAI’s
Cybersecurity
Grant
Program
has
been
instrumental
in
supporting
a
diverse
array
of
projects
aimed
at
enhancing
AI
and
cybersecurity
defenses.
Since
its
inception,
the
program
has
funded
several
groundbreaking
initiatives,
each
contributing
significantly
to
the
field
of
cybersecurity.

Wagner
Lab
from
UC
Berkeley

Professor
David
Wagner’s
security
research
lab
at
UC
Berkeley
is
at
the
forefront
of
developing
techniques
to
defend
against
prompt-injection
attacks
in
large
language
models
(LLMs).
By
collaborating
with
OpenAI,
Wagner’s
team
aims
to
enhance
the
trustworthiness
and
security
of
these
models,
making
them
more
resilient
to
cybersecurity
threats.

Coguard

Albert
Heinle,
co-founder
and
CTO
at

Coguard
,
is
leveraging
AI
to
mitigate
software
misconfiguration,
a
prevalent
cause
of
security
incidents.
Heinle’s
approach
uses
AI
to
automate
the
detection
and
updating
of
software
configurations,
improving
security
and
reducing
reliance
on
outdated
rules-based
policies.

Mithril
Security

Mithril
Security
has
developed
a
proof-of-concept
to
enhance
the
security
of
inference
infrastructure
for
LLMs.
Their
project
includes
open-source
tools
for
deploying
AI
models
on
GPUs
with
secure
enclaves
based
on
Trusted
Platform
Modules
(TPMs).
This
work
ensures
data
privacy
by
preventing
data
exposure,
even
to
administrators.
Their
findings
are
publicly
available
on

GitHub

and
detailed
in
a
comprehensive

whitepaper
.

Gabriel
Bernadett-Shapiro

Individual
grantee
Gabriel
Bernadett-Shapiro
has
created
the
AI
OSINT
workshop
and
AI
Security
Starter
Kit,
providing
technical
training
and
free
tools
for
students,
journalists,
investigators,
and
information-security
professionals.
His
work
is
particularly
impactful
for
international
atrocity
crime
investigators
and
intelligence
studies
students
at
Johns
Hopkins
University,
equipping
them
with
advanced
AI
tools
for
critical
environments.

Breuer
Lab
at
Dartmouth

Professor
Adam
Breuer’s
Lab
at
Dartmouth
is
focused
on
developing
defense
techniques
to
protect
neural
networks
from
attacks
that
reconstruct
private
training
data.
Their
approach
aims
to
prevent
these
attacks
without
sacrificing
model
accuracy
or
efficiency,
addressing
a
significant
challenge
in
the
field
of
AI
security.

Security
Lab
Boston
University
(SeclaBU)

At
Boston
University,
Ph.D.
candidate
Saad
Ullah,
Professor
Gianluca
Stringhini,
and
Professor
Ayse
Coskun
are
working
to
improve
the
ability
of
LLMs
to
detect
and
fix
code
vulnerabilities.
Their
research
could
enable
cyber
defenders
to
identify
and
prevent
code
exploits
before
they
can
be
maliciously
used.

CY-PHY
Security
Lab
from
the
University
of
Santa
Cruz
(UCSC)

Professor
Alvaro
Cardenas’
Research
Group
at
UCSC
is
investigating
the
use
of
foundation
models
to
design
autonomous
cyber
defense
agents.
Their
project
compares
the
effectiveness
of
foundation
models
and
reinforcement
learning
(RL)
trained
counterparts
in
improving
network
security
and
threat
information
triage.

MIT
Computer
Science
Artificial
Intelligence
Laboratory
(MIT
CSAIL)

Researchers
Stephen
Moskal,
Erik
Hemberg,
and
Una-May
O’Reilly
from
MIT
CSAIL
are
exploring
the
automation
of
decision
processes
and
actionable
responses
using
prompt
engineering
in
a
plan-act-report
loop
for
red-teaming.
Additionally,
they
are
examining
LLM-Agent
capabilities
in
Capture-the-Flag
(CTF)
challenges,
which
are
exercises
designed
to
identify
vulnerabilities
in
a
controlled
environment.

Image
source:
Shutterstock

Comments are closed.