Safety, security and risk


AI poses or intersects with a range of safety and security challenges, some of which are immediate and some of which may only manifest in the future as more powerful systems are developed and deployed across a wider range of societal settings. Many of the most transformative impacts of AI, in terms of both potential benefits and risks, remain decades away. However, there is work to be done now on these future challenges: exploring safety questions in fundamental AI research and understanding the risks associated with future AI development scenarios. Furthermore, safety norms and practices put in place now will leave us better prepared for the challenges of more capable future systems. AI:FAR explores near-term risks associated with the role of AI in synthetic media, manipulation and information security; defense and military use; and critical processes such as agriculture. It also explores longer-term challenges associated with potential future developments in AI.

This strand includes the Future of Life Institute-funded Paradigms of AGI and their Associated Risks project, which explores the safety challenges that may emerge as AI systems become increasingly general and capable.


Recent papers include: