Artificial Intelligence – what laws do we need?

18 October 2019

Many applications of AI are still in their infancy, but they are growing up fast. Global spending on AI is predicted to reach $77 billion by 2022, with a majority of companies now prioritising investment in machine learning and cognitive applications such as chatbots. Social, economic, ethical and legal questions must be examined now so that future regulation can ensure AI systems work for the social good.

What is the AI Global Framework?

This is not an article on the laws of artificial intelligence (AI). Specific legislation does not yet exist. But a global policy framework is being developed and some of its outcomes will give rise to the AI laws of the future.

There is presently a variety of initiatives, such as the Declaration of Cooperation on AI signed by European Union member states last year, under which they agree to work together to meet the moral and legal challenges posed by machine learning and AI.

What are the eight principles of Responsible AI?

This year, ITechLaw, the leading global membership organisation for technology lawyers, created Responsible AI, a document setting out eight ethical principles to be reflected in future legislation:

  1. The first principle demands that AI systems be deployed and used in a way that is compatible with human agency and respect for fundamental human rights. They must operate within social, political and environmental constraints, enhancing positive outcomes for people while mitigating environmental impacts (such as energy consumption) and the spread of false information on the internet. Society should avoid the use of AI-enabled military technology without close human intervention and oversight.

  2. Humans must be accountable for the acts and omissions of AI systems. Every organisation that makes significant use of them should nominate a person who is accountable for compliance with all eight principles; the position of an “AI ethics officer” would be analogous to that of a data protection officer under the GDPR. This principle also recommends that legislators should not grant legal personality to robots that may act independently of their human creators. In 2017, Saudi Arabia granted citizenship to a robotic system named “Sophia”, but this was largely seen as a symbolic gesture with no legal implications.

  3. Decisions and processes carried out by AI systems must be transparent and explicable to the persons affected by them.

  4. AI systems must be non-discriminatory and fair in terms of what they achieve, judged by the same standards applied to human decision-making. There may be difficult issues here: we know the inputs we feed into AI systems and the outcomes we expect, but we will increasingly rely on algorithms that are “black boxes” whose inner workings are effectively unknown.

  5. As well as operating ethically, AI systems must be as safe and error-free as possible. There will be issues for any system in which AI interacts with the Internet of Things, such as an autonomous vehicle whose AI calculates a least-worst safety outcome in a catastrophic situation, for example where the safety of passengers may be prioritised over that of other road users.

  6. The sixth principle requires open access to data sets for research and non-commercial use for public benefit, where the operation of AI systems could otherwise create anti-competitive “datapolies”.

  7. AI systems must comply with data privacy rules. The vast data-collecting potential of AI technologies such as diagnostic telemedicine tools and facial recognition may already be incompatible with data protection law; facial recognition has been banned in San Francisco, and the Information Commissioner’s Office is presently investigating its use in public spaces within a private estate near King’s Cross in London. The ICO has recently published an analysis of the trade-offs that may be needed between AI technologies and the data protection principles, and of how organisations might balance the two.

  8. Intellectual property (IP) rights protect only the results of human creativity. Rather like Naruto, the monkey who took his own selfie but enjoyed no copyright in the photograph, no IP should attach to the random creation of an AI-generated work itself. IP in AI-authored or AI-enabled works created under direct human authorship, or in the means by which they are created, would however belong to individuals or corporations.

The Responsible AI principles will help to frame the laws governing the use of AI-based systems for positive purposes. It is important that a line in the sand is drawn now, before the myriad possibilities arising from AI turn it from a superhuman into an antihuman invention.

By Laurie Heizler

For further advice on IP and Technology Media, please call us on 01483 543210 or alternatively email enquiries@barlowrobbins.com