AI is ‘biggest change for society since printing revolution’, top expert says

Artificial Intelligence has taken huge strides in the last decade. It can now recognise faces, coordinate self-driving cars or even compose music. W. Brian Arthur talked to EURACTIV Germany about why AI is “an enormous change for our society” and why it is important to regulate it carefully.


W. Brian Arthur, one of the most influential thinkers of the Santa Fe Institute – the cradle of complexity science – became famous 30 years ago for his economic theory of the lock-in effect, which explains the growth of today’s tech giants. He is now a leading expert on AI. EURACTIV Germany interviewed him during his visit to the Complexity Science Hub in Vienna.

EURACTIV: On Wednesday, the European Commission presented its next steps for building trust in artificial intelligence and boosting its development. What do we need to prepare ourselves for? How will AI change our society?

Arthur: Many think that AI is just another technology. But I believe that something deep and fundamental has happened here. This is not just another industrial revolution, this is the biggest change for our society since the printing revolution. Before the printing revolution, knowledge wasn’t easily publicly available. You had to have access to the books of a monastery or a similar institution. But afterwards, suddenly anybody who could afford to buy a book had access to all that previously private knowledge, which greatly accelerated the Renaissance, the Reformation and modern science. Modern society was brought about by the public availability of knowledge in the form of books. Now, with AI, we have the public availability of intelligence. This is going to be an enormous change for our society. And it’s just starting.

Can you give us some examples of what that might mean?


Human beings are very good at recognising patterns. We’re very good at making judgments of all sorts. And we always thought computers would never get good at that. Computers were good at arithmetic and accounting, at dealing with numbers and sorting telephone books. But suddenly, during the last ten years, computers have got good at seeing patterns, for example recognising faces, coordinating self-driving cars, making sense of huge data sets. As another example, very few humans can compose music, but now I could instruct an artificial intelligence program to do it for me.

So we are discovering that a lot of the things that have made us human are now publicly available on the Internet, or in the virtual part of the economy.

‘Adverse impacts’ of Artificial Intelligence could pave way for regulation, EU report says


The EU should consider the need for new regulation to “ensure adequate protection from adverse impacts” in the field of Artificial Intelligence, a report published on Wednesday (26 June) by the Commission’s High-Level Group on AI says.


It is a shift that poses many ethical questions about how to make use of it and how we can make sure society at large benefits…

No matter what new technology comes along, people are suspicious of it. And it is right to be suspicious, because we don’t know what sort of world it will bring us. Artificial intelligence can bring us a world where we can automatically scan someone’s brain and find a tumour, but it can – and will – also be misused.

In human societies, up until recently, there has always been an idea of what is acceptable in your social community. It could be dangerous to drive down a street at 200 kilometres an hour, so we regulated it. But for some reason we haven’t had the same process in technology. There is nobody deciding that, by community standards, Facebook should not allow things to be posted that are simply untruthful.

There are legal rules saying that outside countries shouldn’t manipulate elections, but they can do it surreptitiously, and it seems that large companies like Facebook aren’t doing enough to stop them. Especially in the US, many high-tech companies monopolise their part of the market, and people shrug their shoulders and say: ‘Well, I don’t mind’.

Europe is much better in this regard because, as a whole, Europeans don’t put up with a lot of nonsense from companies.


Still, it doesn’t seem as if Europe has been very successful in regulating big technological developments. Do you find it scary that it is largely companies who set the tone?

Absolutely. What really counts is whether these large companies can behave in a manner that benefits society rather than just their own profits. And we’re finding out that, at least in America, they will typically choose their own profits. But slowly, people are waking up to the idea that these tech giants aren’t just charging more money than they would if they were smaller. They’re not behaving well.

Artificial Intelligence presents ‘black swan’ ethical issues, Commission report says


A series of ‘critical concerns’ in the development of Artificial Intelligence may have unforeseen, “high-impact” ramifications in the future, a European Commission-led project has suggested.


What does this mean for regulators?


It means that we have to be clear about what sort of society we want and not shy away from regulating or discouraging the things that we don’t want. Otherwise, we might get societies that we simply don’t want to live in, and by then particular algorithms or methods used by artificial intelligence may be deeply embedded in society and very hard to get rid of.

A key question in this regard is how we deal with data and privacy. Should regulators push for more transparency of both data and algorithms?

Yes, but there are always two sides to that argument. You still want to give an incentive for innovation. Typically, you do that by allowing patents on algorithms, so that for 20 years or so nobody can use your algorithm unless you license it to them. On the other hand, you want the algorithms – just like powerful medicines, for example – to become publicly available. So there is a trade-off.

At the same time, pushing for more transparency might open the market for newcomers and make it possible for them to compete against the tech giants. You came up with the economic theory of the lock-in effect, meaning that as a tech company grows in scope, customers find it increasingly hard to switch between services, as in the case of Google and Facebook. How do you think these companies will adapt? Will they continue to grow in the next decades?

They will continue to grow and to gather adopters for the next year, or two, or five. But I wouldn’t say in the next decades, because the technology changes and other companies will find different ways of using it. What people need and want shifts, and that might mean that a large company like Microsoft can get partially shut out. So this lock-in effect doesn’t last forever.
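
Arthur’s lock-in dynamic is easy to see in a small simulation. The following is a minimal Python sketch in the spirit of his classic increasing-returns adoption model; the agent types, payoff values and returns parameter are illustrative assumptions, not figures from the interview.

```python
import random

# A minimal sketch in the spirit of W. Brian Arthur's increasing-returns
# adoption model (Arthur, 1989). Two technologies, A and B, compete for
# adopters who arrive one at a time. All numbers are illustrative.

BASE = {  # each agent type's intrinsic preference for each technology
    "R": {"A": 0.8, "B": 0.2},  # R-agents naturally lean towards A
    "S": {"A": 0.2, "B": 0.8},  # S-agents naturally lean towards B
}
RETURNS = 0.05  # payoff bonus per prior adopter: increasing returns

def simulate(n_agents=1000, seed=None):
    rng = random.Random(seed)
    adopters = {"A": 0, "B": 0}
    for _ in range(n_agents):
        agent = rng.choice("RS")  # agent types arrive in random order
        # Payoff = intrinsic preference + benefit of the installed base.
        # The bigger a technology's lead, the more attractive it gets,
        # so chance early fluctuations become self-reinforcing.
        payoff = {t: BASE[agent][t] + RETURNS * adopters[t] for t in "AB"}
        adopters[max(payoff, key=payoff.get)] += 1
    return adopters

# Identical rules, different random histories: each run locks in,
# but which technology wins is decided by early chance events.
for seed in range(5):
    print(seed, simulate(seed=seed))
```

Once one technology’s installed-base advantage outweighs any intrinsic preference, every new adopter joins the leader and the market is locked in – until, as Arthur notes, the technology itself changes.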


France and Germany prepare plan to create European champions


Following the rejection of the Alstom-Siemens merger, France and Germany are working on a review of the EU’s competition rules to create European champions capable of competing with US and Chinese multinationals.


You work at the Santa Fe Institute, a pioneering centre in the field of complexity science. This branch of research looks at the world as an interacting system of all kinds of actors and processes. The toolkit it offers is increasingly used around the world to explain and predict social and economic processes – for example via agent-based modelling, a type of computer simulation of real-world developments. Is that hype, or will the trend continue?

I would say it will continue to grow. We are discovering that we can test ideas with computer models and computer algorithms. You could test what would happen in the case of a financial crash, for example. You wouldn’t want to try that out in real life, but you can test it on a computer.
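
A flavour of what such a test looks like in practice: the sketch below is a toy agent-based model of a fire-sale cascade in Python. The thresholds, price-impact rule and agent numbers are illustrative assumptions, not a model from the interview or the Santa Fe Institute.

```python
import random

# A toy agent-based "what if": how far does a small price shock
# propagate when holders are forced to sell? Every number here is an
# illustrative assumption, not an estimate of any real market.

def crash_experiment(n_agents=1000, shock=0.05, impact=0.4, seed=0):
    rng = random.Random(seed)
    # Each agent sells (once) if the price falls below a personal pain
    # threshold, drawn at random below the starting price of 1.0.
    thresholds = [rng.uniform(0.5, 1.0) for _ in range(n_agents)]
    holding = [True] * n_agents
    price = 1.0 - shock  # the initial shock knocks the price down

    while True:  # iterate waves of selling until no one else is forced out
        wave = [i for i in range(n_agents)
                if holding[i] and price < thresholds[i]]
        if not wave:
            break
        for i in wave:
            holding[i] = False
        # Linear price impact: selling a fraction of the market pushes
        # the price down proportionally, which can trigger the next wave.
        price -= impact * len(wave) / n_agents
    return price, n_agents - sum(holding)

# Sweep the initial shock: each wave of forced selling triggers the
# next, so the market ends up falling several times further than the
# shock itself - an experiment nobody could run on a real market.
for shock in (0.01, 0.03, 0.05, 0.10):
    price, sold = crash_experiment(shock=shock)
    print(f"shock={shock:.2f} -> final price {price:.2f}, {sold} forced sellers")
```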


In science in general, we have had two tools so far: mathematics, where you could logically work out the implications of the assumptions you were making, and experiments, where you could set something up in a lab and observe how it worked.

Now we have a third tool, and that’s the computer. What we’re doing is setting up artificial worlds and using the computer as a kind of lab to test what the implications of changes in that world might be.

Still, these models are only as good as their assumptions and can easily be misleading. Is there a danger that we put too much trust in data and computer simulations?


Yes, if you put in nonsense assumptions, you’re going to get nonsense results. It’s the same in the lab or with mathematics. So using computers is an art form. We tend to believe that our assumptions are the truth, but there are many versions of the truth.
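
That sensitivity is easy to demonstrate: the same simulated “what if” question gives very different answers under different assumptions. In the sketch below, both threshold distributions are illustrative choices, and that choice alone separates a contained sell-off from a full crash.

```python
import random

# The same "what if" question under two different assumptions about how
# sellers' pain thresholds are distributed. Both distributions are
# illustrative choices; only that choice separates the two answers.

def sellers_after_shock(thresholds, shock=0.02, impact=0.4):
    price, holding = 1.0 - shock, [True] * len(thresholds)
    while True:  # same cascade rule as above: waves of forced selling
        wave = [i for i, t in enumerate(thresholds)
                if holding[i] and price < t]
        if not wave:
            return len(thresholds) - sum(holding)
        for i in wave:
            holding[i] = False
        price -= impact * len(wave) / len(thresholds)

rng = random.Random(1)
spread_out = [rng.uniform(0.5, 1.0) for _ in range(1000)]  # robust market
clustered = [rng.uniform(0.9, 1.0) for _ in range(1000)]   # fragile market

# An identical 2% shock, two very different "truths":
print("spread-out thresholds:", sellers_after_shock(spread_out), "sellers")
print("clustered thresholds:", sellers_after_shock(clustered), "sellers")
```

Neither answer is wrong in itself; each is only as truthful as the assumptions behind it.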


We are just asking “what if” questions. It’s like a flight simulator for the mind.


[Edited by Zoran Radosavljevic, Benjamin Fox and Samuel Stolton]