Should the AI Safety Community Prioritize Safety Cases?
I recently wrote an Introduction to AI Safety Cases. It left me wondering whether they are actually an impactful intervention that should be prioritized by the AI Safety Community.
Safety Cases are a promising approach in AI Governance, inspired by other safety-critical industries. They are structured, evidence-based arguments that a system is safe in a specific context. I will introduce what Safety Cases are, how they can be used, and what work is currently being done on them. This explainer leans on Buhl et al. 2024. At the end, I survey expert opinions on the promise and weaknesses of Safety Cases.
TL;DR: The EU’s Code of Practice (CoP) requires AI companies to conduct state-of-the-art Risk Modelling. However, the current state of the art has severe flaws. By creating risk models and improving methodology, we can enhance the quality of risk management performed by AI companies. This is a neglected area, so we encourage more people in AI Safety to work on it. Work on Risk Modelling is urgent because the CoP will be enforced starting in nine months (August 2, 2026).
The Luddites were a social movement of English textile workers in the early 19th century, famous for smashing the machines that were replacing their jobs. The term Luddite is now used to describe opponents of new technologies (often in a derogatory way). However, I believe many people using the term misunderstand what the Luddites did and wanted. Indeed, the Luddites and their ultimate failure can teach a modern AI-labour movement valuable lessons.
The sleep pod opens with a soft hiss. Five hours—that’s all I need anymore. The pod regulated everything through the night: temperature shifting through optimal sleep cycles, gentle massages during deep sleep, binaural tones guiding my brain through REM. I step out feeling completely rested.
You might think a horrible catastrophe is imminent due to climate change, plummeting birth rates, democratic backsliding or whatever your doom-of-choice is. In the face of such catastrophes it might seem unimportant, distracting or downright offensive to consider what beautiful futures might look like. However, I believe that thinking about the end goal is essential for steering towards positive outcomes.
The development of AGI could be the most important event of our lifetimes. Ensuring that AI is developed and deployed safely could be the most impactful thing many of us can work on. However, the development of Frontier AI systems is happening in only a handful of companies. This leaves the rest of us to wonder: How can we influence Frontier AI Companies?
The AI 2027 Tournament on Metaculus poses 16 questions about the near-term future of AI. They are derived from the AI 2027 scenario and cover predictions about technological, economic, political, and societal developments. I made predictions for 6 of the questions and want to share my reasoning. I believe there is virtue in making public predictions, opening my reasoning up to criticism, and contributing to the discourse.
This post summarizes the taxonomy, challenges, and opportunities from a survey paper on Representation Engineering that I’ve written with Sahar Abdelnabi, David Krueger, and Mario Fritz. Cross-posted from the AI Alignment Forum.