Safety Cases Explained: How to Argue an AI is Safe
Safety Cases are a promising approach in AI governance, inspired by other safety-critical industries. They are structured, evidence-based arguments that a system is safe in a specific context. In this explainer, I introduce what Safety Cases are, how they can be used, and what work is currently being done on them, drawing heavily on Buhl et al. (2024). At the end, I survey expert opinions on the promise and weaknesses of Safety Cases.
