Do we need a regulator for new AI life forms?

Artificial intelligence (AI) is everywhere, and I can say with honesty that I was there near the beginning.

Long ago, in the 1980s, I was working with a colleague, Dr Robin Gostick, trying to integrate AI into the systems we were developing to help companies improve their decision making. Of course, we were only marginally successful. We didn’t have the software or computing power now available, but it was pioneering work, and a credit to Coopers & Lybrand (now PwC) and Peter Burnham that we were given the chance.

With the huge increase in data available to all businesses, I know that the work we started is being carried forward, and today AI, in its many forms, affects our lives in ways most of us probably don’t realise.

There is general agreement that driverless cars will soon be on the roads in much larger numbers. Meanwhile, many companies already use AI far more widely than you might expect. It may be a chatbot, or the system controlling a large warehouse picking and packing your groceries for home delivery. AI is already pervasive.

I remembered that Professor Stephen Hawking had once spoken about AI, so I went back to the BBC News website to remind myself. The quote was already three years old:

Prof Stephen Hawking, one of Britain’s pre-eminent scientists, has said that efforts to create thinking machines pose a threat to our very existence. He told the BBC: “The development of full artificial intelligence could spell the end of the human race.”

Prof Hawking says the primitive forms of artificial intelligence developed so far have already proved very useful, but he fears the consequences of creating something that can match or surpass humans. Kubrick’s film 2001 and its murderous computer HAL encapsulate many people’s fears of how AI could pose a threat to human life. “It would take off on its own, and re-design itself at an ever-increasing rate,” he said. “Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

He was not finished and in 2016 said:

“The potential benefits of creating intelligence are huge,” he said. “We cannot predict what we might achieve when our own minds are amplified by AI. Perhaps with the tools of this new technological revolution, we will be able to undo some of the damage done to the natural world by the last one – industrialisation. And surely, we will aim to finally eradicate disease and poverty.

“Every aspect of our lives will be transformed. In short, success in creating AI could be the biggest event in the history of our civilisation.”

But computer scientists now have a new goal in the development of AI: sentience.

When I was involved in AI we didn’t have any pretensions that our work would lead to a ‘sentient’ machine; we simply hoped to improve data analysis. On reflection, I am not sure that I even knew the definition of the word sentient back then. It wasn’t in my vocabulary, but today it is the page at which every commentator’s dictionary falls open.

Last week there was a minor Parliamentary tiff when some Conservative Members of Parliament were accused of denying that dogs, cats and many other animals were sentient. The accusation was not true, and that debate has quickly passed.

This weekend I watched the frightening vision of artificial intelligence getting out of control in the film Ex Machina. A highly realistic, sensual, sentient and female manifestation of a human, Ava, manipulates and kills her inventor to escape into the ‘real world’.

The film is disturbing in its vision and demands that we consider where we are going.

With a sentient capability, will a driverless car make moral decisions? In a potential accident, will it preserve the lives of its passengers, or those of a woman pushing a pram who has suddenly stepped onto a crossing?

Do we want machines that have any moral capacity? If machines do acquire a set of moral codes, then that capacity will develop through machine learning, and that learning will have to come from a fixed and exclusive clique. That clique may be the scientists themselves, or a selected group of white, middle-class consumers. Whatever the group, it can never be the accumulated experience of the totality of world history and its people.

As a country we need to keep working on AI, and it is rightly at the centre of any industrial strategy that is developed. We should never be afraid of its impact on jobs and employment. History, from the Luddites and the industrial revolution onwards, shows that new and different jobs are created.

However, we still need to be concerned.

In the UK we have the Human Fertilisation and Embryology Authority (HFEA) which, to quote its website, is ‘the UK’s independent regulator of fertility treatment and research using human embryos. A world-class expert organisation in the fertility sector, we were the first statutory body of our type in the world.’

No one denies that we need rules and ethics around how we develop new life forms from embryos and DNA.

Is it now time to have a similar regulator for how we develop a new sentient life form through AI?