After lengthy preparation, the United Kingdom has officially published its national strategy for artificial intelligence. The Office for Artificial Intelligence, a special government office jointly coordinated by the Department for Business, Energy & Industrial Strategy and the Department for Digital, Culture, Media & Sport, did not mince its words in presenting the document: for Her Majesty’s Government, artificial intelligence has ‘huge potential to rewrite the rules of entire industries, drive substantial economic growth and transform all areas of life’.
London makes no secret of the fact that it wants to position itself at the global forefront of this innovation: ‘The UK is a global superpower in AI and is well placed to lead the world over the next decade as a genuine research and innovation powerhouse’. This is a bold statement, considering its proximity to both Europe, which aims to impose its rules worldwide, and the United States, which is still – de facto – at the apex of the industry.
With these premises, it is not surprising how confidently the official text begins: ‘Our ten-year plan to make Britain a global AI superpower’. This is how ministers Kwasi Kwarteng and Nadine Dorries introduce the document, which we will analyse here in its most significant contents.
The strategy is based on three pillars:
- investing and planning for the long-term needs of the AI ecosystem in order to maintain the UK’s leadership as an AI and science superpower;
- supporting the transition to an AI-enabled economy, harnessing the benefits of innovation in the UK and ensuring that the benefits of AI reach all sectors and regions;
- ensuring that the UK achieves the right national and international governance of AI technologies, to encourage innovation and investment and to protect the public and its core values.
Reading the text, one immediately notices that the experts who drew up the strategy understand that artificial intelligence, while bringing undeniable benefits to a country’s entire economy, is not on its own sufficient to ensure that those benefits reach all sectors. Undoubtedly, some companies will find it difficult to keep up, either because it will not be easy to find staff who know how to work with AI, or because certain technology transfers will not seem straightforward to many entrepreneurs. Perhaps this is why the text addresses them directly at certain crucial points: “If you run a business – whether it is a startup, SME or a large corporate – the government wants you to have access to the people, knowledge and infrastructure you need to get your business ahead of the transformational change AI will bring, making the UK a globally-competitive, AI-first economy which benefits every region and sector.”
The document then includes a timeline of actions, divided into immediate actions (within three months), medium-term actions (6 to 12 months) and long-term actions (after one year), ordered according to the three priorities listed above. Among the immediate actions are interventions in education, a focus on data governance, and a three-pronged effort to explore AI in key areas such as health, defence and patents.
In the medium term, the government intends to set in motion initiatives that we might familiarly describe as a ‘shopping list’ for AI: clarifying what skills workers need to acquire to work with AI, figuring out how much private funding the sector should receive, establishing how much overall computing power is needed to sustain the domestic AI industry. In addition to this, also within a year, the UK intends to publish a regulation for AI governance – in direct competition with the European one – and initiatives for algorithmic transparency, standards and AI security.
In the long term, starting no earlier than a year from now (perhaps later), there will be initiatives to review the national approach to semiconductor procurement, to make government datasets available, to exchange technologies with third countries and – with a delay that has caused controversy – to address social and ethical issues in the development of AI technologies.
In the long term, there is also an interesting passage on AGI (Artificial General Intelligence), the type of artificial intelligence that, for the moment, is only to be found in films and science fiction novels. In reality, we still do not know for sure whether AGI is really achievable (leaving aside the fanciful predictions of the techno-gurus), and at the moment the mere notion of ‘thinking machines’ bothers those who are trying to keep their feet on the ground. But since an uncontrolled arrival at AGI could indeed be devastating for humanity, the UK government wants to be ready: “While the emergence of Artificial General Intelligence (AGI) may seem like a science fiction concept, concern about AI safety and non-human-aligned systems is by no means restricted to the fringes of the field. […] [W]e take the firm stance that it is critical to watch the evolution of the technology, to take seriously the possibility of AGI and ‘more general AI’, and to actively direct the technology in a peaceful, human-aligned direction.”
Returning to the three pillars, each of them is preceded by several chapters that summarise what has been done so far, explain the opportunities and the difficulties, and conclude with a list of actions that the Government is committed to undertaking.
Pillar 1: Investing in the long-term needs of the AI ecosystem
Among the actions that the UK will take to address long-term needs, the following stand out in particular:
- launch of a new national AI research and innovation programme to stimulate new investment in fundamental AI research;
- plans to attract expertise from abroad (‘the brightest and the best’) to the UK to develop AI;
- programmes to teach artificial intelligence to children, reaching all demographics, through the National Centre for Computing Education (NCCE); and
- producing structured open data, easily managed by automated procedures, for the benefit of both the public and businesses.
Pillar 2: Ensuring that AI benefits all sectors and regions
This is the point we commented on earlier, namely ensuring that the benefits of AI reach everyone. To achieve this, the UK government is committed to:
- launch a programme to stimulate the development and adoption of AI technologies in high potential and low AI maturity sectors;
- publish, in early 2022, a draft National Strategy for AI in Health and Social Care, a document that will set the direction for AI in health and social care until 2030;
- pursue bilateral and multilateral agreements to promote the UK’s strategic advantages in areas such as energy, through the extension of aid to support local AI ecosystems in developing nations;
- publish research on the drivers influencing AI deployment in the economy; and
- publish the MoD’s AI strategy, explaining how to achieve and sustain technological advantage and be a scientific superpower in defence; the strategy should also outline the establishment of a new Defence AI Centre, a centre for military AI.
Pillar 3: Governing AI effectively
The third and final pillar can be understood as a response to other international partners, first and foremost the European Union, which have already published – at least in draft form – their ideas on how to regulate artificial intelligence. The UK does not intend to passively accept the rules of others, although – as is natural to expect – it states that it will work to harmonise its governance with those “still in development”, such as the European Commission’s proposal and that of the Council of Europe: “We will work to reflect the UK’s views on international AI governance and prevent divergence and friction between partners, and guard against abuse of this critical technology.”
The government’s plans for governance are as follows:
- develop, in a white paper to be published in early 2022, a pro-innovation national position on the governance and regulation of artificial intelligence;
- create an AI Standards Hub to coordinate the UK’s efforts in AI standardisation globally;
- support the continued development of new capabilities in the area of trustworthiness, acceptability, adoptability and transparency of AI technologies through the National Research and Innovation Programme;
- make public, when adopted, the MoD’s approach to AI technologies;
- develop an intergovernmental standard for algorithmic transparency;
- collaborate with the Alan Turing Institute to update guidance on AI ethics and safety in the public sector; and
- work with national security, defence and leading researchers to understand how to anticipate and prevent catastrophic risks.
What is missing?
There are significant weaknesses in the strategy, elements we expected to find but which either never arrived or seem to have been included just for show. Let us briefly list them.
Funding
This is perhaps the most serious absence. Usually, strategic documents of this kind are accompanied by an equally important announcement of the money the government will make available, in the form of substantial new investment, which also serves to emphasise the importance of the policy and to signal its place among the country’s priorities. The document lists past funding and funding provided to other initiatives, but one number is missing: the number that was supposed to crown the national strategy. It is possible that there has been an unfortunate overlap of dates, given that Chancellor of the Exchequer Rishi Sunak plans to present the spending review on 27 October, but it is nevertheless a missed opportunity.
Governance
It is odd that one of the three pillars is substantially incomplete. Governance is considered one of the priorities of the national strategy, yet we have read only vague outlines and a string of “we will think about it”. The first useful document will only come out in ‘early 2022’ and will be a white paper. The feeling is that the UK could no longer delay a response to the proposed European regulation on artificial intelligence, but in fact had nothing concrete to show. The result is that London has upped the ante (pointing to regulation as one of the key strategic pillars) while at the same time putting everything off until next year.
Ethics
We did a test. First, we counted how many times the document mentions “ethics” or “ethical” principles. Together, the two terms appear 19 times in the document.
Then we counted how many times “defence” is mentioned, whether in the sense of military defence or of “national security”. Those terms appear 46 times, more than twice as often as the ethical ones.
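For readers who want to reproduce this admittedly crude count, here is a minimal sketch in Python; the file name strategy.txt, the exact term lists and the whole-word matching are our assumptions for illustration, not necessarily the method behind the figures above.

```python
import re

# Hypothetical input: a plain-text export of the strategy document.
with open("strategy.txt", encoding="utf-8") as f:
    text = f.read().lower()

# Assumed term groups; whole-word, case-insensitive matches.
ethics_terms = [r"\bethics\b", r"\bethical\b"]
defence_terms = [r"\bdefence\b", r"\bnational security\b"]

ethics_count = sum(len(re.findall(p, text)) for p in ethics_terms)
defence_count = sum(len(re.findall(p, text)) for p in defence_terms)

print(f"ethics/ethical: {ethics_count}")               # the article reports 19
print(f"defence/national security: {defence_count}")   # the article reports 46
```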
This crude count is in line with the feeling one gets after reading the strategy, namely that ethical issues are not given due weight, while the document shows far more interest in military and national-defence matters. Those topics must certainly find their rightful place within a national strategy, but that is not the point. The problem is that ethics is repeatedly relegated to a secondary position, included almost as if it would have been scandalous not to mention it. Knowing some of the people who worked on this strategy, the scant consideration given to AI ethics surprised us, but we assume they were the victims of cutbacks and compromises between the various parties involved.
Too bad for the UK: a true AI superpower cannot avoid giving ethical issues a prominent place. We therefore note that the European Union remains the leader – for now unchallenged – in promoting the ethical principles of artificial intelligence.