Global Ethics Forum: The Pros, Cons, and Ethical Dilemmas of Artificial Intelligence

(synth music) – Welcome to Ethics Matter. I’m Stephanie Sy. We are talking about
artificial intelligence. It is a vast topic that
brings up monumental questions about existence, about humanity, but we are going to focus,
of course, on the ethics. In fact, some of the
world’s top scientists have said recent
breakthroughs in AI have made some of the
moral questions surrounding the research urgent
for the human race. For this we’re joined
by Wendell Wallach, a bioethicist at Yale
University and the author of A Dangerous Master:
How to Keep Technology from Slipping
Beyond Our Control. Wendell was recently
named co-chair of the Council on Technology, Values and Policy for the World Economic Forum. So Wendell, you are perfect
for this discussion. Before we dive into
the many hot topics when it comes to AI, I know that the term is
sort of an umbrella term that encompasses a lot of
different technologies, so will you just give us
sort of the beginner’s explanation of AI
to start us off? – Well, originally
artificial intelligence, which was a term
coined in the 1950s at a conference at Dartmouth, it largely meant just
achieving kind of a human level intelligence
within machines, and the people at that
conference thought that that would happen
within a decade or so. They were very optimistic. They thought the challenge
was relatively easy. It’s now become a
bit more confusing what the term actually
does and doesn’t mean, largely because every
time a goal is achieved, such as beating
a human at chess, the bar gets raised. Somebody says, well,
that wasn’t really artificial intelligence
in the way it beat the human at chess in this case. It didn’t really play
the way a human chess player would play. But even the folks in
the more advanced fields of artificial intelligence
feel today that we are just beginning to have
true artificial intelligence, and a lot of what
we’ve done so far is largely automating systems, largely programming
them to follow through procedures that humans have
thought about in advance. – So that’s a great place
to start because, in fact, one of the trending
hot topics in AI is that Google reached
what has been called by some the holy grail of AI. They programmed a computer
to beat the best Go player in the world. That was a game that I
played with my grandmother when I was a kid. I was never very good at it. It’s very complicated. Why was that so significant? Why was it a bigger
deal than when the IBM computer beat Kasparov in ’97? – It’s just that it’s
a more complicated game and a game that takes
more imagination in terms of how you play
it, how you execute it. So it had played a
European champion, and then months later
they have this million dollar challenge with Lee Sedol, who is one of the
great Asian players, is considered by many to
be the greatest player in the world. The program was developed
by Google DeepMind. DeepMind is a very interesting company in the field of deep learning. Deep learning is this
great breakthrough that has opened up this
doorway for people to feel that artificial
intelligence is finally coming into being. And while the term
seems to suggest that these computers have
very sophisticated learning capabilities, it’s really a narrow
form of learning that they’re engaged in. But what they have
been able to do is, they’ve been able to solve
some of these problems that have bedeviled computer
scientists for decades now, problems in simple perception. So a deep learning
computer can actually label all the objects in
a photograph accurately. That was something that had
not been done heretofore. – But doesn’t a lot of what deep learning involves come down to basically statistical learning? In other words, a sort
of number crunching? How much of deep learning
involves imagination, involves creativity and
adaptation in the way we would define
human intelligence? – Well, whether it really
involves human intelligence, that’s a very different story, but it does involve what
are called neural nets, which is a form of
computer programming which tries to
emulate a little bit what’s going on in the brain. Or at least a computerized
model of what’s going on in the brain. So your desktop computer
has one central processor that manages all
the computations. A neural net can have many different processors. The other thing about a neural net is that it is built in layers, and we don’t necessarily know what the computer is doing in those middle layers. So in this case what deep learning computers are doing, they are analyzing a massive data flow, and they are discovering patterns, many of which humans perhaps don’t even know about or can’t recognize. And therefore they are finding new ways of proceeding, or at least understanding some of the existing strategies.
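
[Editor's note: a rough, hedged illustration of the "built in layers" idea — this is not AlphaGo's or DeepMind's code. The toy two-layer network below is a generic sketch in Python with NumPy; the weights are random stand-ins, and the only point is that the values computed in the middle layer are not something a human writes or directly interprets.]

```python
# Toy two-layer neural network forward pass (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Input: a small "board position" flattened into 9 numbers.
x = rng.standard_normal(9)

# Layer 1: 9 inputs -> 16 hidden units. These hidden activations are the
# "middle layer" values the interview refers to; no one hand-codes them.
W1 = rng.standard_normal((16, 9))
hidden = np.maximum(0, W1 @ x)        # ReLU non-linearity

# Layer 2: 16 hidden units -> 3 output scores (say, candidate moves).
W2 = rng.standard_normal((3, 16))
scores = W2 @ hidden

print("middle-layer pattern:", hidden.round(2))
print("output scores:", scores.round(2))
```

In a real deep-learning system the weights are not random: they are adjusted automatically against massive amounts of data, which is the narrow but powerful form of "learning" Wallach describes.
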
– Okay, so starting from that, did that victory of AlphaGo over Lee Sedol, as an ethicist, seeing that victory of machine over human in that context, did that cross a threshold for you that concerns you as an ethicist? – No, this one doesn’t. This one doesn’t at all. I think it only does in the
sense that people get caught up in these human/machine
comparisons. The only comparison that really
should be made here is that we humans have something that
we call winning the game. And yes, AlphaGo, the software
program, won the game. It figured out how to
play against Lee Sedol. It won four games to one, and in one of the
games it did a move that surprised everyone, a move where perhaps no human player would ever have seen the connection between that possible move and that board. So that was fascinating. But I actually sat on
a panel with Lee Sedol, and one of the things that
I noted was, first of all, the computer did
not play the game that Lee Sedol was playing. Other than the winning, other than that measurement
we make at the end of the game that this was winning, what the computer was engaged
in was very different than what Lee Sedol was engaged in. Secondly, this computer
had actually played or studied millions
of games of Go. A human being, like Lee Sedol, he’s maybe played
tens of thousands. I don’t know exactly how
many games he’s played. But perhaps what’s remarkable
is that without all these resources and programmers and
teams trying to figure out how to make the
machine so smart, this human was
able to win one game. And he believes he
could have won more. He thinks there was another
game that revealed a flaw in AlphaGo’s programming
but he didn’t exploit it effectively. – I think the reason
why people like that man versus machine, and
they talk about it in that way, is because humans are
worried about being replaced to some degree by machines. That’s already happening, right? According to a study out
of Oxford a few years ago, 40% of jobs will be
automated by machine learning and mobile robotics. – They actually said 47%
of existing American jobs could be replaced. And when their form of
analysis was applied to China and India, it was 69% and 75%. – Which is concerning. – [Wendell] Mind-blowing. – Yeah, mind-blowing, by 2034. So that certainly must bring
up ethical issues around the people that are
creating these machines and adopting these technologies. – Right. This is a very difficult issue. It’s the long-running concern, the Luddite concern
going back 200 years, that each new form of
technology will rob more jobs than it creates. Up to now we haven’t seen that. Each new technology
eventually creates more secondary jobs
than it eliminates. But I am among those who believe
that increasing automation is already putting
downward pressure on wage growth and job growth,
not only downward pressure, but as we get more and
more intelligent machines, we’re going to see
this rapid pickup. It doesn’t take a lot of
jobs to be replaced in a year to create a panic mode. But the difficulty is, is this an artificial
intelligence problem or is this a societal challenge? I actually talked with the
president of a company that was building robots that basically
take boxes off of racks and move them and
put them on trucks. And his concern was that
there were millions, perhaps 25 to 100 million
or more workers in the world who do this kind of work. On one level it’s inhuman work. But he recognized we’re
going to take jobs. I looked at it and I said, well, I don’t know that this
is really his problem. Societies are going to
have to adopt technologies that improve productivity, and it’s not bad that we’re
taking away inhuman jobs. On the other hand, we have a
major societal challenge here. If wages are no longer the
main way in which people get their incomes, have
their needs provided, then how will they get
those needs provided? – And this is when
conversations about universal basic income come up. I guess it also leads to
the question of whether, even if we can automate more, and even if we have
the capability to
replace professionals with machines and with
robots, should we? – That’s a difficult question. And I think every society
has to have that debate. Societies are going
to differ on that. Unfortunately, as with
most ethical issues, there’s not a simple yes and no. There are different values, there are different
prioritizations of values. So it becomes an ethical
issue in terms of what are the options and what are the ways of... Ethics is often not about
do this or don’t do that. But it’s if you
go down this road, then that has those
ramifications, and how are you going to deal
with those ramifications? Or if you go down an
alternative pathway, then how will you deal
with those considerations? – They’re such important
questions, I think, to be asked because this
technology is happening now. Truck drivers and cab
drivers could be some of the workers replaced. And that brings me to
the next hot topic in AI, which is self-driving vehicles. Proponents say that self-driving
cars could reduce the some 40,000 traffic fatalities that we have a year just in this country. So there have been a couple of accidents already with these self-driving cars. And they seem to make
people so uncomfortable with the technology,
it gives us pause. And it must bring up sort of the interesting questions about the value
of the individual versus the overall good that may
come with self-driving cars. Interesting questions
for an ethicist. – Real interesting questions, and they’ve gotten posed
recently in the form of what are sometimes known as
trolley car problems. – [Stephanie] Explain
the trolley car problem. – Well, the trolley
car problems have been around since 1967, first
proposed by the philosopher Philippa Foot, but they have proliferated into hundreds of different problems. And basically, they’re problems
where you are asked whether to take an action that could
perhaps save five lives but it would cost another life. So traditionally, you throw a
switch that redirects a train to a different track, but
one person still dies. Suddenly, these are getting
applied to self-driving cars and questions are being asked. Well what should
the self-driving
car do if it’s gonna hit a bunch of children? But it could, rather
than hit those children, drive off the bridge and kill
you, the passenger in the car? – Right. – There has been recent
research done on this. There were some articles in
Science Magazine in June, where generally the public felt
that the car should do what would kill the least
number of people. On the other hand, most of those people also
said they would not buy a car that would kill them
or the other occupants. So now, in order to
save a few lives when a once-in-many-trillion-time
incident occurs, millions of people do not
buy self-driving cars, and then we have thousands
of losses of lives that would not have
occurred otherwise. So it’s one of these interesting
things where the short-term ethical calculation may
actually come into conflict with the long-term
ethical calculation. Now, I’ve made the
proposal that there is no right answer to this problem. It’s a totally new situation. You need to have many stakeholders sitting down and establishing new societal norms. And frankly, I think, you don’t program the cars to make a decision, even if we could get them to make a decision.
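
[Editor's note: one way to see the short-term versus long-term tension Wallach describes is a back-of-the-envelope expected-fatalities comparison. The sketch below is purely illustrative: apart from the roughly 40,000 annual U.S. traffic deaths cited in the interview, every number, adoption rate, and variable name is an invented assumption, not data from the Science study or any other source.]

```python
# Toy expected-value sketch of the short-term vs. long-term ethical calculation.
# All figures below are hypothetical, chosen only to make the arithmetic visible.

BASELINE_DEATHS = 40_000        # annual US traffic fatalities cited in the interview
REDUCTION_IF_ADOPTED = 0.9      # assumed fatality reduction for self-driving miles
ADOPTION_LOW = 0.1              # assumed adoption if cars may sacrifice their buyer
ADOPTION_HIGH = 0.6             # assumed adoption otherwise
DILEMMA_DEATHS_PER_YEAR = 5     # assumed deaths/year in genuine trolley-style dilemmas

def expected_deaths(adoption, dilemma_deaths):
    # Deaths avoided scale with how many people actually use the technology.
    prevented = BASELINE_DEATHS * adoption * REDUCTION_IF_ADOPTED
    return BASELINE_DEATHS - prevented + dilemma_deaths

print("Sacrifice-the-passenger rule, low adoption: ",
      expected_deaths(ADOPTION_LOW, 0))
print("No such rule, higher adoption:              ",
      expected_deaths(ADOPTION_HIGH, DILEMMA_DEATHS_PER_YEAR))
```

Under these made-up assumptions the "ethically programmed" car leaves far more people dead overall, which is exactly the conflict between the short-term and long-term calculation described above.
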
– Even if there was an ethical dial on which you could program the car to reflect your own personal values, you don’t think we should have that? – Well, no. That’s a possible
option, and to be honest, I proposed that a few years ago. But if it’s going to discourage
people from buying a car, or people are going to be
afraid to do anything because they would think that even
though I’m not driving the car, am I responsible if
I’ve totaled the car, to kill other people
rather than kill me? It sets up a very interesting
ethical quandary for the individual who has
to make the choice. But my concern is
there is, in this case, a utilitarian calculation, meaning the greatest good
for the greatest number, but the tension throughout
history has been through utilitarian calculations that
violate individual rights. And so this would be a real
tough thing for a society as a whole to make a decision
that maybe we aren’t going to make that individual rights
decision because in the long run it really has other
ramifications. – Individual rights and
maybe national rights, because let’s talk
about another big topic when we talk about AI, which is lethal
autonomous weapons. Does that question of the value
we place on ourselves or our own country versus
the greater good, humanity’s greater good, does that also play
into a discussion about autonomous weapons? – Certainly. I mean, so there is
this movement to ban lethal autonomous weapons. And that basically means
any weapon that can select its own targets and kill them, or at least decide who is
the target among groups of people that it is looking at. And there are all kinds of
moral goods that come from this. The obvious moral
good is it could save some of your soldiers. On the other hand, if it saves
some of your own soldiers, it could lower barriers
to entering new wars. All kinds of aggressive
countries will be aggressive if they don’t feel like their
citizenry is gonna rise up against them because
there will be a lot of soldiers’ lives lost. When we invaded Iraq, we had almost no loss
of life at all during the invasion itself. But here we are– – In an insurgency in
which soldiers’ lives are still being lost, yeah. – And it has concatenated into
perhaps as many as 300,000 lives lost in Syria alone. So sometimes you can enter
into a conflict because you think there’s a moral good, or at least you can save
your own soldiers’ lives, but if you are lowering the
barriers to starting new wars, or if you have machines
that could unintentionally start a war, or that you
start to have flash wars the way we have flash crashes
on Wall Street because machines are suddenly
reacting to each other, my lethal autonomous
weapons versus your lethal autonomous weapons, in ways that we don’t even know
what they’re responding to, but in that flash war
1000 lives were lost, you have some real
concerns going on. So a number of us, actually
a pretty large community, have begun to support this
Ban Killer Robot movement, and the UN has been meeting
for three years already in Geneva with expert meetings, and those will continue
over the next few years, to see if it is possible to
forge an arms-control agreement that many countries
would sign onto, that nearly all countries
would sign onto, to ban this kind of weaponry. Now it does not mean banning advanced artificial
intelligence, but it means banning weaponry
in which humans are not in the loop of
decision-making in real time, that they are there when the
crucial decision is made, not just that they delegated
a decision to the machine hours or weeks beforehand. – Could I posit that there
might be a day in the future where we can program
machines to be more moral and ethical than human beings? – There might be. – A day where they’re
more consistent. Because certainly, human
beings don’t always act in the interest of humanity
and for the greater good. – Well, that’s sort of the
self-driving car dilemma in one form, but I mean, I’m also known for a book
that I co-authored with Colin Allen of Indiana
University eight years ago, called Moral Machines: Teaching
Robots Right from Wrong. And it looked at
precisely this question. It looked at this question
of how can we design machines that are sensitive to human
values and implement them within the decision
processes they engaged in? We were largely looking at
it from the perspective of philosophers with a bit
of computer science. But suddenly, this has
become a real challenge for engineers, largely because
of these deep learning breakthroughs and these fears
being raised about whether we are now going
to make advances in
artificial intelligence that will eventually
have superintelligence. So because of that, some of
the leading responsible members of the AI community, particularly people
like Stuart Russell, have instituted this
new approach to building artificial intelligence that
they call values alignment. Now up to now, machines
are largely being built to achieve certain goals. And the point here was, no, we don’t need to be just
looking at building our machines so they will fulfill a goal, because artificial intelligence
may actually be able to fulfill its goal in a
stupid or dangerous manner, but we need them to be
sensitive, in the ways in which they fulfill their goals, the ways in which they
take certain actions, to human values. So their concern is how do
we implement this within engineering practices
within the very forms of artificial intelligence
that will be built, not only that they
implement values, but how do we ensure that
they are controllable, that they’re safe, that they will be sensitive to the concerns we have?
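
[Editor's note: a toy sketch of the values-alignment point, not Stuart Russell's actual formulation or any real system's objective. The plans, scores, and weights below are all invented: the point is only that an agent optimizing a bare goal can satisfy it in an undesirable way, and that folding value-sensitive terms into the objective changes which plan wins.]

```python
# Toy illustration of "values alignment" with an invented household-robot example.
plans = [
    {"name": "walk around the table",   "seconds": 40, "damage": 0, "intrusion": 0},
    {"name": "shove the table aside",   "seconds": 25, "damage": 1, "intrusion": 0},
    {"name": "cut through the bedroom", "seconds": 20, "damage": 0, "intrusion": 1},
]

def goal_only(plan):
    # Pure goal: fetch the coffee as fast as possible (higher score = better).
    return -plan["seconds"]

def goal_plus_values(plan, damage_weight=100, intrusion_weight=100):
    # Same goal, but penalize ways of achieving it that violate other values.
    return (-plan["seconds"]
            - damage_weight * plan["damage"]
            - intrusion_weight * plan["intrusion"])

print("goal only:        ", max(plans, key=goal_only)["name"])
print("goal plus values: ", max(plans, key=goal_plus_values)["name"])
```

With the bare goal the agent barges through the bedroom; with value terms added it takes the slower, harmless route — a miniature version of being sensitive to how a goal is fulfilled, not just whether it is fulfilled.
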
– Is there a way, and maybe this is beyond your purview, but is there a way to make sure that we can continue to control the machines versus them controlling us? I mean, if they reach a level of superintelligence, or singularity as some have called it? – Right. It was called the
singularity for many years, and now this term
superintelligence
has superseded, only because the singularity
got a little bit confusing. No one knows. And it’s not an
immediate concern, in spite of the fact that the
newspapers put a picture of Arnold Schwarzenegger
as the Terminator on every conceivable article
where anybody even talks about artificial
intelligence. I’m just amazed at how
many pages I’ve shared with him over the years. This is all a very
distant prospect, and many of us even question
whether it is achievable. But to laud the engineers,
they’re saying, “Yes, but by the time we know whether or not it’s achievable, it may be too late to address whether it’s manageable. So let’s start dealing with the control problem, with the values alignment problem, now.” And I applaud this because
it also will be appliable to the near-term challenges. People like me, who have
been talking about near-term challenges with emerging
technologies and artificial intelligence in general, we’ve suddenly been
getting a lot of attention over the last year or two. And it’s largely because
people like Stephen Hawking and Elon Musk and Bill Gates
are raising longer-term scare issues around superintelligence. So it’s a two-edged sword. – But it’s also because
of the actual leaps in the technology and
the fact that we do have self-driving cars,
we do have AlphaGo. Elon Musk, who has
invested in DeepMind, is one of the signatories to
that letter that basically said, let’s be careful
of the pitfalls of AI, which is encouraging,
but then I would– – Not only that. He put his money
where his mouth is. He gave $10 million to
the Future of Life Institute for projects that
would work on safe AI, robust and beneficial AI is
kind of the more generous way of putting it, and people like me received
grants to move forward some research in those areas. So that’s uh– – That’s encouraging. – [Wendell] That
speaks to him. – What about government’s role? – Governments are just beginning
to look at these concerns. The White House has held some
meetings over the past year where they have been
looking at issues around AI and whether there
are initiatives that
should be taken. Many of the international
standard-setting boards are looking at whether there
are international standards that should be set either
in electrical engineering or in other kinds of fields. But this is all very new. On the one hand, it’s opened the door for
overly zealous ethicists, like myself, to maybe
call for things that aren’t fully needed, but may be needed and we
need to give attention to it. On the other hand, there’s always a concern that
that will stifle innovation, and you don’t want to put
in place bureaucracies based on laws that look
like they are good today but the technology will
make obsolete tomorrow. So I personally have called
for the development of global infrastructure to
now begin to think about how to ensure that
AI and robotics will
be truly beneficial. And that infrastructure
isn’t just hard law and hard
governance, if anything, we should limit that, but
look more at soft governance, industry standards,
professional codes of conduct, insurance policies, all
kinds of other mechanisms. But also look at how we can
engineer AI more safely, so what are the
engineering challenges, how we can make values, such as a responsible agent
or concern for privacy, part of the design process
that engineers think through when they implement
new technologies. So there is that,
beginning to think through. Well what really can
international bodies do, what can the engineers
do, what can industry do, what can be handled in other
more adaptive evolutionary mechanisms than the ones we
tend to rely upon today in terms of hard law and
regulatory agencies? We tend to think about those, those are kind of
the first response. But perhaps we need to
think through a new, more parsimonious
approach to the governance of emerging technologies, where we can certainly
penalize wrongdoers, but we don’t expect to come
up with regulations and laws for any conceivable
thing that can go wrong. – Well, and part of that
is that there’s a potential net benefit to humanity with
a lot of the technologies we’re talking about. We’ve been focused on the
potential negative effects of AI in this conversation, but there’s these net
positives in sort of robust AI research that could
really benefit us. – Tremendous benefits
in health care. I mean, we’ve talked
about driverless cars. I think that the overall
moral good is clear with driverless cars. All of these technologies
are being developed because somebody believes
that there is some tremendous worldwide
good that can derive from their research. The difficulty is, can we
maximize those benefits while minimizing the downsides,
minimizing the risks? And that’s what requires
much more attention. – Absolutely fascinating. Wendell Wallach,
thank you so much for joining Ethics Matter. – Thank you ever so much. This has been great fun. (synth music) – [Speaker] For more on
this program and other Carnegie Ethics Studio productions, visit carnegiecouncil.org. There you can find video
highlights, transcripts, audio recordings, and
other multimedia resources on global ethics. This program is made possible by the Carnegie Ethics Studio
and viewers like you. (synth music)

5 thoughts on “Global Ethics Forum: The Pros, Cons, and Ethical Dilemmas of Artificial Intelligence”

  1. Very good overview of the issues, without panicking and inflating the dangers beyond their reality.

  2. Thank you so much for such a wonderfully insightful discussion on perhaps the most crucial ethics topic of our time that nobody is talking about. This needs more views, and other conversations like it. Ethics should always be at the forefront of technology, not the other way around!!

  3. Enable a computer to adjudicate international law. The computer will be open source. When international conflict occurs, the computer will amalgamate all the peace proposals from around the world and provide for conflict resolution. All those implementing the computer’s amalgamation will wear the latest lie-detector and mind-reading devices so as to ensure the best choice of personnel. The computer will use the data from the amalgamated peace proposals to expand the law and/or provide avenues by which rogue states such as the US can participate. If we can submit into a computer all the human information (wisdom based on experience) such that it becomes our collective wisdom, then it will on most occasions outdo any one of us, as socially brilliant as we think we are.

    Our collective wisdom is not sufficiently wise to be up to every challenge in our very complex world, but it is the best we have, and if we apply it we will have more knowledge about our human species; as such, the next attempts to mediate disputes will be better.

  4. One day AI might be able to learn all the moral and social rules, but AI will never be able to notice those legs the way a human does.
