Amir Sadovnik - Oak Ridge National Laboratory
Wednesday, Mar 12, 2025 04:30pm - 05:30pm ET
What do we mean when we talk about AI Safety and Security?
Abstract
In February 2024, Gladstone AI produced a report for the Department of State, which opens by stating that "The recent explosion of progress in advanced artificial intelligence … is creating entirely new categories of weapons of mass destruction-like and weapons of mass destruction-enabling catastrophic risk." To clarify further, they define catastrophic risk as "catastrophic events up to and including events that would lead to human extinction." This strong yet controversial statement has caused much debate in the AI research community and in public discourse. One can imagine scenarios in which this may be true, perhaps in some national security-related scenarios, but how can we judge the merit of these types of statements? To do so, it is essential to first truly understand the different risks AI adoption poses and how those risks are novel. That is, when we talk about AI safety and security, do we truly have clarity about the meaning of these terms? In this talk, we will examine the characteristics that make AI vulnerable to attacks and misuse in different ways and how they introduce novel risks. These risks may be to the system in which AI is employed, to the environment around it, or even to society as a whole. Gaining a better understanding of AI characteristics and vulnerabilities will allow us to evaluate how realistic and pressing the different AI risks are, and to better realize the current state of AI, its limitations, and what breakthroughs are still needed to advance its capabilities and safety.
About the Speaker

Dr. Sadovnik is a senior research scientist and the Research Lead for the Center for AI Security Research (CAISER) at Oak Ridge National Laboratory. In this role, Dr. Sadovnik leads multiple research projects related to AI risk, adversarial AI, and large language model vulnerabilities. As one of the founders of CAISER, he is helping to shape its strategy and operations through program leadership, partnership development, workshop organization, teaching, and outreach.
Prior to joining the lab, he served as an assistant professor in the Department of Electrical Engineering and Computer Science at the University of Tennessee, Knoxville, and as an assistant professor in the Department of Computer Science at Lafayette College. He received his PhD from the School of Electrical and Computer Engineering at Cornell University, advised by Prof. Tsuhan Chen as a member of the Advanced Multimedia Processing Lab. Prior to arriving at Cornell, he received his bachelor's degree in electrical and computer engineering from The Cooper Union.
In addition to his work and publications in AI and AI security, Dr. Sadovnik has a deep interest in workforce development and computer science education. He continues to teach graduate courses related to machine learning and artificial intelligence at the University of Tennessee, Knoxville.