Experts share what scares them the most about AI

Sophisticated AI could make the world a better place. It might let us fight cancer and improve healthcare around the world, or simply free us from the menial tasks that dominate our lives.

That was the primary topic of conversation last month when engineers, investors, researchers, and policymakers got together at The Joint Multi-Conference on Human-Level Artificial Intelligence.

But there was an undercurrent of fear that ran through some of the talks, too. Some people are anxious about losing their jobs to a robot or line of code; others fear a robot uprising. Where’s the line between fearmongering and legitimate concern?

In an effort to separate the two, Futurism asked five AI experts at the conference about what they fear most about a future with advanced artificial intelligence. Their responses, below, have been lightly edited.

Hopefully, with their concerns in mind, we'll be able to steer society in a better direction: one in which we use AI for the good stuff, like fighting global epidemics or expanding access to education, while avoiding the bad.

Q: When you think of what we can do — and what we will be able to do — with AI, what do you find the most unsettling?

Kenneth Stanley, Professor at University of Central Florida, Senior Engineering Manager and Staff Scientist at Uber AI Labs

I think that the most obvious concern is when AI is used to hurt people. There are a lot of different applications where you can imagine that happening. We have to be really careful about letting that bad side get out. [Sorting out how to keep AI responsible is] a very tricky question; it has many more dimensions than just the scientific. That means all of society does need to be involved in answering it.

On how to develop safe AI:

All technology can be used for bad, and I think AI is just another example of that. Humans have always struggled with not letting new technologies be used for nefarious purposes. I believe we can do this: we can put the right checks and balances in place to be safer.

I don't think I know exactly what we should do about it, but I can caution us to approach [our response to the impacts of AI] very carefully and gradually, and to learn as we go.

Irakli Beridze, Head of the Centre for Artificial Intelligence and Robotics at UNICRI, United Nations

I think the most dangerous thing about AI is its pace of development. The risk depends on how quickly it develops and how quickly we are able to adapt to it. If we lose that balance, we might get in trouble.

On terrorism, crime, and other sources of risk:

The most dangerous applications of AI, from my point of view, would be criminals or large terrorist organizations using it to disrupt large processes or simply to do pure harm. [Terrorists could cause harm] via digital warfare, or through a combination of robotics, drones, and AI, along with other things that could be really dangerous.

And, of course, other risks come from things like job losses. If we have massive numbers of people losing jobs and don't find a solution, it will be extremely dangerous. Things like lethal autonomous weapons systems should be properly governed; otherwise, there's massive potential for misuse.

On how to move forward:

But this is the duality of the technology. Certainly, my conviction is that AI is not a weapon; AI is a tool. It is a powerful tool, and this powerful tool could be used for good or bad things. Our mission is to make sure it is used for the good things, that the greatest benefits are extracted from it, and that the risks are understood and mitigated.

John Langford, Principal Researcher at Microsoft

I think we should watch out for drones. Automated drones are potentially dangerous in a lot of ways. The computation on board unmanned weapons isn't efficient enough to do something useful right now. But in five or ten years, I can imagine a drone having sufficient onboard computation that it could actually be useful. You can see that drones are already being used in warfare, but they're [still human-controlled]. There's no reason why they couldn't be carrying some kind of learning system and be reasonably effective. So that's something I worry about a fair bit.

Hava Siegelmann, Microsystems Technology Office Programs Manager at DARPA

Every technology can be used for bad. I think it’s in the hands of the ones that use it. I don’t think there is a bad technology, but there will be bad people. It comes down to who has access to the technology and how we use it.

Tomas Mikolov, Research Scientist at Facebook AI

When there’s a lot of interest and funding around something, there are also people who are abusing it. I find it unsettling that some people are selling AI even before we make it, and are pretending to know what [problem it will solve].

These strange startups are also promising things as great examples of AI when their systems are basically over-optimizing a single metric that maybe nobody even cared about before [such as a chatbot that's just a little better than the last version]. And maybe after spending tens of thousands of hours of work over-optimizing that single value, some of these startups come in with big claims that they achieved something nobody could do before.

But come on, let's be honest: many of the recent breakthroughs from these groups, which I don't want to name, are things nobody cared about before, and they are not generating any money. They are more like magicians' tricks, especially the ones that treat AI as just over-optimizing a single task so narrow that there's no way it can scale to anything beyond very simple problems.

Someone who's even a little bit critical of these systems would quickly encounter problems that go against these companies' lofty claims.

By Dan Robitzski / Futurism Reporter

(Source: futurism.com; September 5, 2018; https://tinyurl.com/y8kstnla)