AI Safety
Module: ethics
What it is
AI safety is the field concerned with ensuring AI systems behave as intended and don't cause harm. It spans technical work on alignment (getting systems to pursue the goals their designers intend), robustness (keeping behaviour reliable under unusual or adversarial conditions), and control (retaining the ability to monitor, correct, or shut down a system), as well as governance questions about deployment, monitoring, and accountability.
Why it matters
As AI becomes more capable, safety becomes more critical. Unsafe AI could cause harm through errors (such as confidently giving wrong advice), misuse (such as generating scams or disinformation at scale), or unexpected behaviours that only surface after deployment. Understanding AI safety helps you appreciate why developers build in restrictions and why debates about AI capabilities and risks matter.