Governance, ethics and responsible innovation

 

Ensuring that the impacts of AI are beneficial requires engaging with the principles and practices that underpin the development and deployment of AI. It requires shaping governance at the national and international level. And it requires the development of a global, cooperative community working together to ensure that AI benefits everyone.

Research within this theme focuses on:

· Norms and principles: What kind of publication norms, collaboration norms and practices, and ethical principles are needed within AI research communities, and in the broader application and governance of AI? How do we move beyond high-level principles to practical implementation, and how can tensions between principles be navigated? What norms and principles could most positively influence the trajectory of AI’s global impacts as more powerful technologies are developed in higher-stakes contexts?

· Cross-cultural cooperation: Wherever AI is developed, its impacts will be global. And global ethics and principles must be shaped by a diversity of global voices. AI:FAR, in collaboration with colleagues in CSER and CFI, has been working to build a global cooperative community, with a particular focus on building links to leading Asian thinkers, technologists and institutions.

· International governance: Recent years have seen a proliferation of proposals for the international governance of AI, and the establishment of new bodies and fora. Our research examines the current state of international law and governance for AI, the strengths and weaknesses of different models of future governance, and governance priorities for achieving meaningful and inclusive stewardship of AI globally.

· Responsible innovation: Responsible innovation means “taking care of the future through collective stewardship of science and innovation in the present”. Collective stewardship of AI requires meaningful collaboration between academia, industry, policymakers, civil society and affected communities. Our research examines ethical activism in the AI community, methods for trustworthy collaboration and oversight of AI research, and the role of participatory methods to achieve a better understanding of cross-societal concerns and priorities for AI. We work closely with technology-leading research groups, governments and civil society organisations.

Relevant papers include:

Sastry, G., Heim, L., Belfield, H., et al. including Avin, S. (2024). Computing Power and the Governance of Artificial Intelligence. arXiv preprint arXiv:2402.08797.

Corsi, G., Seger, E., & Ó hÉigeartaigh, S. (2024). Crowdsourcing the mitigation of disinformation and misinformation: The case of spontaneous community-based moderation on Reddit. Online Social Networks and Media, 43, 100291.

Chiodo, M., Müller, D., & Sienknecht, M. (2024). Educating AI developers to prevent harmful path dependency in AI resort-to-force decision making. Australian Journal of International Affairs, 78(2), 210-219.

Corsi, G. (2024). Evaluating Twitter’s algorithmic amplification of low-credibility content: an observational study. EPJ Data Science, 13(1), 18.

Hernandes, R., & Corsi, G. (2024). LLMs left, right, and center: Assessing GPT's capabilities to label political bias from web domains. arXiv preprint arXiv:2407.14344.

Hua, S. S., & Belfield, H. (2023). Effective Enforceability of EU Competition Law Under AI Development Scenarios: a Framework for Anticipatory Governance. In Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society (pp. 596-605).

Trager, R., Harack, B., Reuel, A., Carnegie, A., Heim, L., Ho, L., Kreps, S., Lall, R., Larter, O., Ó hÉigeartaigh, S., & Staffell, S. (2023). International governance of civilian AI: A jurisdictional certification approach. arXiv preprint arXiv:2308.15514.

Seger, E. et al. including Ó hÉigeartaigh, S.S. (2023). Open-sourcing highly capable foundation models: An evaluation of risks, benefits, and alternative methods for pursuing open-source objectives. arXiv preprint arXiv:2311.09227.

Janjeva, A., Mulani, N., Powell, R., Whittlestone, J., & Avin, S. (2023). Strengthening Resilience to AI Risk: A Guide for UK Policymakers. Centre for Emerging Technology and Security.

Chiodo, M., & Müller, D. (2023). Manifesto for the Responsible Development of Mathematical Works: A Tool for Practitioners and for Management. arXiv preprint arXiv:2306.09131.

Tzachor, A., Devare, M., King, B., Avin, S., & Ó hÉigeartaigh, S. (2022). Responsible artificial intelligence in agriculture requires systemic understanding of risks and externalities. Nature Machine Intelligence, 4(2), 104-109.

Ó hÉigeartaigh, S. S., Whittlestone, J., Liu, Y., Zeng, Y., & Liu, Z. (2020). Overcoming barriers to cross-cultural cooperation in AI ethics and governance. Philosophy & Technology, 33, 571-593.

Kunz, M., & Ó hÉigeartaigh, S. (2020). Artificial Intelligence and Robotization. In Robin Geiß and Nils Melzer (eds.), Oxford Handbook on the International Law of Global Security (Oxford University Press, Forthcoming).

Brundage, M., Avin, S., Wang, J., Belfield, H., Krueger, G., Hadfield, G., Ó hÉigeartaigh, S.S. ... & Maharaj, T. (2020). Toward trustworthy AI development: mechanisms for supporting verifiable claims. arXiv preprint arXiv:2004.07213.