The Problem

Key decision-makers seeking to govern AI often lack sufficient information to identify the need for intervention and to assess the efficacy of different governance options. Furthermore, the technical tools necessary for successfully implementing governance proposals are often lacking [1], leaving uncertainty about how policies are to be implemented. For example, while the concept of watermarking AI-generated content has gained traction among policymakers [2] [3] [4] [5], it is unclear whether current methods are sufficient for achieving policymakers' desired outcomes, or how robust such methods will be to future improvements in AI capabilities [6] [7]. Addressing these and similar issues will require further targeted technical advances.
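To make the watermarking example concrete, the sketch below shows a toy detector for a "green-list" statistical watermark of the kind proposed in the recent literature: a watermarked generator is assumed to bias its token choices towards a pseudorandom "green" subset of the vocabulary, and a detector tests whether a suspect text contains more green tokens than chance would predict. This is an illustrative sketch only, with function names and parameters of our own choosing; it is not an implementation of the specific methods discussed in the cited work.

import hashlib
import math

def is_green(prev_token: str, token: str, green_fraction: float = 0.5) -> bool:
    # Pseudorandomly assign `token` to the green list, seeded by the preceding token,
    # so that roughly `green_fraction` of candidate tokens are "green" at each step.
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] / 256.0 < green_fraction

def watermark_z_score(tokens: list[str], green_fraction: float = 0.5) -> float:
    # z-score of the observed number of green tokens against the rate expected
    # from an unwatermarked source; large positive values suggest watermarked text.
    pairs = list(zip(tokens, tokens[1:]))
    if not pairs:
        return 0.0
    hits = sum(is_green(prev, tok, green_fraction) for prev, tok in pairs)
    n = len(pairs)
    expected = green_fraction * n
    std = math.sqrt(n * green_fraction * (1.0 - green_fraction))
    return (hits - expected) / std

# A sufficiently long text from a generator that favoured green tokens yields a large
# positive z-score, while ordinary text stays near zero. Paraphrasing or editing the
# text erodes this signal, which is one reason the robustness of such schemes remains
# an open technical question.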

Figure 1: An overview of the open problem areas covered in this report, organized according to our taxonomy.

How We're Contributing To A Solution

To help close this gap, in this paper we aim to provide an overview of technical AI governance (TAIG), defined as technical analysis and tools for supporting the effective governance of AI. Under this definition, TAIG can contribute to AI governance in a number of ways, such as by identifying opportunities for governance intervention, informing key decisions, and enhancing options for implementation. For example, deployment evaluations that assess the downstream impacts of a system could help identify the need for policy interventions to address those impacts. Alternatively, the ability to design models that are robust to malicious modifications could expand the menu of governance options available for preventing downstream misuse.

In particular, we introduce a taxonomy of technical AI governance and identify open problems within each of its areas. Figure 1 provides an overview of these open problem areas, organized according to the taxonomy. We hope that this paper serves as a resource and inspiration for technical researchers aiming to direct their expertise towards policy-relevant topics.

Where to Start

You can find our searchable repository of open problems here.

If you have a resource that we haven't mentioned or if you think a problem has been solved, please reach out here.

How To Cite Us

As with any research project, a lot of time and passion went into this initiative. If you found our work useful, we'd appreciate it if you'd cite us. A standard citation can be found below:

@misc{reuelbucknall2024open,
title={Open Problems in Technical AI Governance},
author={Reuel, Anka and Bucknall, Ben and Casper, Stephen and Fist, Tim and
Soder, Lisa and Aarne, Onni and Hammond, Lewis and Ibrahim, Lujain and
Chan, Alan and Wills, Peter and Anderljung, Markus and Garfinkel, Ben and
Heim, Lennart and Trask, Andrew and Mukobi, Gabriel and Schaeffer, Rylan and
Baker, Mauricio and Hooker, Sara and Solaiman, Irene and Luccioni, Alexandra Sasha and
Rajkumar, Nitarshan and Mo{\"e}s, Nicolas and Ladish, Jeffrey and Guha, Neel and
Newman, Jessica and Bengio, Yoshua and South, Tobin and Pentland, Alex and
Koyejo, Sanmi and Kochenderfer, Mykel J and Trager, Robert},
eprint={2407.14981},
archivePrefix={arXiv},
url={https://taig.stanford.edu},
year={2024}
}

Contact

ben.bucknall@governance.ai
anka.reuel@stanford.edu