Recommendations


We aim to identify interventions and actions that are likely to be robustly beneficial across a range of potential futures, given uncertainties about how AI systems and their impacts will develop over time. We work with partners across academia, policy, industry and civil society to support the implementation of high-priority policy recommendations in practice.

Responsible research and innovation

We work with the AI research community to develop norms and best practices around responsible research, in order to ensure that AI development is safe, beneficial and inclusive. In particular we highlight that:

  • The AI community must take the dual-use nature of its work seriously, including by developing norms that account for potential misuse in research priorities and publication decisions.

  • Community impact assessments based on participatory research should form part of assessment protocols like Algorithmic Impact Assessments.

Risk mitigation

We identify and recommend strategies to mitigate risk in different application areas and sectors, particularly highlighting that:

  • More robust processes for assuring the safety and behaviour of AI systems are needed as AI is increasingly being deployed in safety-critical domains.

  • Nuclear Weapon States should, either unilaterally or by mutual agreement, avoid incorporating machine learning and autonomy into nuclear weapons and command-and-control systems (NC4ISR).

The landscape of AI governance

Based on our work to understand the AI governance landscape, we aim to identify key areas where it can be strengthened. In particular we suggest that:

  • Given the wide variety of AI techniques and applications, strengthening domain-specific regulatory bodies is, at present, likely to be a more effective approach to regulating AI than creating a new international AI governance organisation.

  • Deploying AI systems ethically in crisis scenarios will require new governance models with (i) the ability to anticipate problems rather than respond to them reactively, (ii) more robust procedures for assuring the safety and behaviour of AI systems, and (iii) independent oversight to build public trust.


Recent work

Summary of recommendations:

  • Develop cross-cultural research agendas for AI ethics and governance

  • Translate key papers and reports

  • Alternate continents for key research conferences

  • Establish exchange programs for PhD students

Summary of recommendations:

  • Conduct and fund third party auditing of AI systems.

  • Run red teaming exercises within AI development organisations.

  • Pilot bias and safety bounties for AI systems.

  • Share more information about AI incidents.

  • Develop audit trail requirements for AI systems.

  • Support research into the interpretability of AI systems.

  • Develop and use privacy-preserving machine learning.

  • Develop hardware security features for AI accelerators.

  • Perform and publish a detailed, high-precision measurement of the compute used by a single AI project.

  • Substantially increase funding of computing power resources for researchers in academia.