Artificial Intelligence and the challenge of designing new public institutions

When new tasks come along, new institutions often have to be created too. Modern government can be thought of as an assembly of such institutions: some old, like prisons, police, schools and welfare; some newer, like utility regulators and digital agencies.

Powerful technologies tend to require a lot of institutional innovation. A good example is the car, which in most countries is now governed through a lattice of institutions responsible for vehicle licensing, highway management and construction, driver testing, safety and emissions regulation, parking and taxation, alongside a body of rules, laws, taxes and social norms.

So as AI becomes ever more central to our societies and economies, it is not surprising that attention is belatedly turning to new institutions that can help ensure we get the best of the opportunities while minimising the risks.

The opportunities are vast, from efficiency, speed and growth to more targeted learning and healthcare. But so are the risks, from bias and discrimination to breaches of privacy.

The current makeup of AI institutions

In relation to AI, some institutions already exist. China’s Cyberspace Administration was set up in 2014 and has become increasingly involved in implementing new AI laws. In 2018 the UK set up a Centre for Data Ethics and Innovation within government to think through the dilemmas of AI tools such as facial recognition and targeted advertising. Various countries are creating new bodies, such as Spain’s Artificial Intelligence Supervision Agency (AESIA). As the EU’s AI Act comes into force in 2025, member states are likely to have to create regulators too.

Some believe that existing institutions for competition, consumer protection or personal data can handle everything. This has been the approach of many governments, including the US, where responsibility is split between existing organisations such as the Federal Trade Commission and the National Institute of Standards and Technology (which focuses on standards for AI), though work recently began on a new AI Safety Institute.

But many of the issues raised by AI are cross-cutting, such as facial recognition or liability, and the history of past technologies suggests that relying on existing institutions alone is very unlikely to be enough.

So, what should governments be considering?

What designs are most likely to work well? There are now many sources of ideas and policy observation, from newsletters and centres to international bodies like the OECD. But the work on institutional design has been thin, often drawing on consultancies, and much less imaginative than the work on AI itself.

Here we briefly sketch some of the key issues and considerations that will face all national governments, and that will also be relevant to some cities and regions. TIAL is already working with many governments and organisations worldwide, from the governments of Germany, Finland, Bangladesh and India to the European Commission and the United Nations.

Our founders have been involved in the design of many AI-related institutions, from machine intelligence commissions and national regulators to global observatories and new models for public procurement.

Our aim is for TIAL to help with the next stage of these many strands of institutional design, drawing on the best available ideas and evidence.

New regulators

The regulatory challenges of AI are vast and will face almost every existing regulator, in fields including finance, autonomous vehicles, employment, law and the media, but they are also likely to require new regulators.

Some of the issues will include: transparency and explanation (will companies have to make their algorithms visible, and if so, how and to whom?); tort and harm (who will be liable when things go wrong, for example when a driverless car crashes?); and misinformation (should there be rules to stop AI-supported misinformation?).

Laws are now being passed across the world, with a mix of principles, applications and institutional responsibilities, and new institutions are being created to handle the many cross-cutting issues. As governments consider new institutions they will need to think about what could be called the ‘thousand-cell matrix’: the cross-product of challenges (such as bias or liability), domains (such as health or finance) and possible responses (such as standards, licensing or audits). Even ten of each yields a thousand distinct cells, which will require a wide variety of actions.
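As a rough illustration of how quickly the matrix grows, here is a minimal sketch; the dimension values below are hypothetical examples, not a settled taxonomy:

```python
from itertools import product

# Hypothetical example dimensions; a real taxonomy would be richer and contested.
challenges = ["bias", "privacy", "liability", "misinformation", "safety",
              "transparency", "IP in training data", "security", "labour impact", "competition"]
domains = ["health", "finance", "policing", "education", "transport",
           "welfare", "media", "employment", "defence", "local government"]
responses = ["standards", "licensing", "audits", "disclosure rules", "bans",
             "liability rules", "procurement rules", "sandboxes", "registries", "funding"]

# Each (challenge, domain, response) triple is one cell a government may need to consider.
cells = list(product(challenges, domains, responses))
print(len(cells))   # 10 x 10 x 10 = 1000 cells
print(cells[0])     # ('bias', 'health', 'standards')
```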

They can also be helped by drawing on current actions, from the EU’s AI Act to the US Executive Order on AI of late 2023 and many draft laws.

The key insight is that nations will almost certainly require a lattice, or mesh, of regulatory capacities: some new, some adaptations of existing ones, and some deliberately working across multiple tiers of government.

Procurement/commissioning

The second big area is how governments themselves can use AI well. There are many uses of AI, and so many potential priorities for procurement and commissioning:

  • Algorithms for prediction – already widely used in probation, healthcare and social services to predict risk and enable preventive action
  • Algorithms for decisions – for example to determine eligibility for welfare or pensions
  • AI for coordination – e.g., traffic management
  • AI for security – e.g., facial recognition for use in policing, or spotting child abuse online
  • AI to spot anomalies – e.g., already widely used in spotting fraud
  • AI to assist negotiation of contracts
  • AI for warfare – to guide and control drones or missiles

In all of these cases direct responsibility may continue to rest with individual ministries and agencies, but there may also be big gains from pooling expertise and capability in new institutions, as turned out to be the case for the digitisation of services more generally.

There will also be a need for a range of testing environments, often with secure handling of data and algorithms: tests of technical function, and simulated games to assess real-world effects and behavioural impacts.
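To make that distinction concrete, here is a purely illustrative sketch; the risk model, synthetic data and metrics are hypothetical, not a recommended test suite:

```python
import random

def technical_function_test(predict, cases):
    """Measure raw accuracy on held-out cases: does the algorithm work as specified?"""
    correct = sum(1 for features, label in cases if predict(features) == label)
    return correct / len(cases)

def simulated_game(predict, population, rounds=1000):
    """A toy behavioural simulation: people respond to the algorithm's decisions,
    so effects in the world can diverge from accuracy measured in isolation."""
    interventions = 0
    for _ in range(rounds):
        person = random.choice(population)
        if predict(person):
            interventions += 1
            person["risk"] = max(0.0, person["risk"] - 0.1)  # behaviour shifts after intervention
    return interventions

# Hypothetical threshold model and synthetic, non-sensitive data
model = lambda person: person["risk"] > 0.5
population = [{"risk": random.random()} for _ in range(1000)]
cases = [(p, p["risk"] > 0.5) for p in population]

print(technical_function_test(model, cases))  # trivially perfect here: labels come from the model itself
print(simulated_game(model, population))      # total interventions across the simulated rounds
```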

TIAL is now running a special programme on options for procurement and commissioning of AI, with results to be published later in 2024.

A big issue to consider here is how to orchestrate the data that fuels AI. In many fields data is proprietary or closed, and therefore hard to use for public challenges. Here too there are many possible institutional options, from data trusts to shared repositories. TIAL’s members have been closely involved in institutional design in this space, including data trusts, personal data stores and data regulation.

Steering AI R&D

A third priority is directing research and development towards current or prospective public purposes. In the past much AI R&D was driven by the military, and in the last twenty years there has been huge investment in commercial uses such as search engines, recommendation engines and targeted advertising. But governments have been much slower to direct R&D towards their own priorities, even though much of it was subsidised out of public funds.

Some countries have major programmes aiming to fill this gap, including the Canadian Institute for Advanced Research (CIFAR) and the UK’s Alan Turing Institute. China’s programmes are very substantial, with a focus on city management, transport and education as well as security.

But most would agree that these are modest compared to the resources devoted to military and business uses. There is a wide range of functions where AI can help, whether as assistant, coach or even manager, but getting these roles right will require intensive R&D and experimentation.

There are many ways to organise ‘whole-of-government innovation’ (some recently surveyed with the European Commission), which together provide a menu of options for governments.

These are just a few of the new tasks that may require new institutions. Others include questions of law, such as the ownership of intellectual property and the obligations attached to it, particularly for the training of LLMs, which may require new institutions to regulate and redirect flows of money. Still others may have to be organised at a transnational scale, including potential proposals for a global observatory to track developments in AI and share options for legislation.

Several transnational bodies, including the OECD and the IEEE, have been actively involved in sharing experiences and proposing new options, such as a potential ‘IPCC for AI’.

The TIAL team has been involved in this field for many years: proposing and helping to design regulators for AI, engaging with many governments, and developing options for transnational governance. We have written in the past about the wasted decade, when governments were far too slow to catch up with the impact of algorithms on daily life, from social media to welfare systems, probation to healthcare. Our hope is that the pace of imaginative thinking can now accelerate, focused on the many specific aspects of AI governance where gaps remain.

What’s needed – and TIAL’s role

We see the specific TIAL contribution as happening at three levels:

  • First, analysis and description of new institutional designs around AI – complementing work we are already doing with the UNDP and others as an observatory, linking into the various surveys and research centres that have been set up over the last few years.
  • Second, developing maquettes or options: specific proposals for the tasks described above (e.g., AI in schools, governance of key risks). This remains a glaring gap, and not an obvious competence of the research centres and consultancies involved in AI.
  • Third, working with partners, primarily national governments, to guide them through a live design process where there is a commitment to create a new institution.