AI: a blessing or curse for humanity? | FT Tech
Artificial intelligence is playing an ever-increasing role in our lives. But will this prove to be a blessing for humanity, or have we created a monster? We talk to leading futurists and experts to find out the impact they believe AI will have on our personal potential, jobs, and even safety
Produced by Alpha Grid
Transcript
AI is already here.
We're now able to get machines to do things that look like what humans can do.
You can ask them anything and they'll give you a very intelligent answer.
It might be that just a few more changes to the AI systems would mean they suddenly surpass us in all aspects. And we'd then see, oh, we weren't quite as complicated and sophisticated and special as we thought we were.
We don't know if it's going to be amazing or problematic.
People call it a wicked problem. That's an actual technical term. You try and interfere with bits of it and you find there are unexpected consequences.
Everybody's a little late to the party when it comes to artificial intelligence. We've got a lot of people having meetings and not a lot of things happening on the other side of those meetings. So what is the singularity? Well, it depends on who you ask.
The term singularity comes from physics. It's a point in time we can't see beyond.
Think of a black hole. We don't know what's happening inside the black hole.
The first time I heard about the technological singularity was from Vernor Vinge, a science fiction author. He said, look, singularities are these things through which we cannot predict the future. And once AI becomes more intelligent than humans, I don't know what's going to happen.
Certainly, if you go out to 2029, these computers will know everything that human beings know. We're going to expand that at an unprecedented rate in terms of scientific progress. So by 2045, we'll expand what we know at least a million times, which is quite hard to understand. That's why we call it the singularity.
Is AI going to spin out of control and take all of our jobs and then murder us in our sleep? Probably not. But AI doesn't have to kill us in order to make life really, really difficult.
There is no simple solution to the challenges of AI. We've already seen many bad examples. The systems running the National Health Service were infected by a piece of ransomware, and it ended up causing havoc. In the future, there may be similar parts of our infrastructure that are governed by IT. And if we lose control of that, then it could be goodbye.
I guess the concern is that if it sees us as a threat in some way or other, then it will prevent us from being a threat. If it sees us as unimportant and needs to use the resources of the planet or the Sun, then it will probably do that without, really, any concern for humanity.
So it's basically our job to make sure that we have machinery that we can keep control of. And any society that doesn't do that is not going to be the society that wins.
You need to agree on the frameworks. What kinds of things are you allowed to do with these AIs? We have lots of rules in other parts of life. So we need more of the big tech companies, with more support from academia and independent analysts, to prioritise these safety issues, including - here's a significant one - an off switch. If we decide that a system is behaving strangely, can we terminate it?
It's not sort of an alien invasion of intelligent machines from Mars that's come to compete with us. We've created them. It's not like we don't have regulation. Take medicine. It's filled with regulation. You can't just put something out and expect people to use it. It's got to go through all kinds of tests and so on. So I can't just take a computer and create something and then expect people to use it. It has to go through regulation. We're constantly having to protect ourselves. It's true in medicine. It's true in every different area.
I am really hopeful about humanity. And I think that if we apply AI in the right way across our organisations, we can free people up from doing mundane tasks to get them to do more interesting creative things.
So the thing that gives me hope is that we can choose to make better decisions going forward. But that also means that the onus is on us. We're the ones that have to make the good decisions. Or we're all going to suffer the consequences.
Well, I believe in the concept - it's another kind of singularity. It's called the economic singularity. Fairly soon, there will be very few jobs that people will be paid to do, because anything that we might be paid to do will be done better and more effectively by robots, AI, and automation of various sorts. More and more of us will be in this situation where we can't earn a living by our labour, our intelligence, our creativity. And that's going to require a big transformation of the social contract.
A lot of people will find they have skills that are more valuable than they were before. And a lot of people will find that the skills they spent decades acquiring are no longer very valuable. So there will be disruption at the family level. And then the question is, do we have sufficient social will to make sure that our societies and our governments are supporting those families and individuals so that they can get through those hurdles?
It's not us versus the computers. It's us with the computers. We've already replaced all human employment several times. A couple of hundred years ago we had the Luddite movement. If you were spinning cotton, you lost your job because a machine could do it. And ultimately, the machines came along and could do everything those people could do. And all those jobs went away. But then we created new jobs with the machines. We're going to continue to do that. It's going to be disruptive. You're going to be able to do things that human beings have never done before, at a very high speed. And that's going to be thrilling. And it's really going to be part of who we are.
I actually think that creativity will be at a premium. The way that we differentiate ourselves is through creativity. I think that we should be focusing our energies on creating what is called a world of abundance, a world where all of the goods that we need to survive and thrive - our food, our healthcare, our education, et cetera - are essentially free.
The vision of people like Ray Kurzweil and others is that the new species that takes over will be us, enhanced. We will, in some sense, merge with some aspects of AI. And we will become transhuman: smarter, kinder, more collaborative, less prone to egotism, less prone to cognitive stupidity. There are a lot of clever but stupid people in the world. We all do stupid things despite being clever. So let's have that fixed in our brains.
We're going to do things we never could do before. It's going to have a tremendous impact on the way we live. I believe we'll overcome cancer and other diseases. Take the Moderna vaccine. That was created by a computer in two days. People say, oh, that's artificial intelligence, it's not real. It's definitely real.
When we look at the singularities, humans tend to have a negative perspective. But again, the whole point is that we don't know what happens beyond it. And I'd like to think that, over the next 10 or 20 years, AI is going to free more and more people up to contribute in ways that they haven't been able to before. And I suspect that will take humanity to another level.