Global Ethics Forum: Homo Deus with Yuval Noah Harari

(soft music) – It is a sincere
pleasure to welcome the celebrated
international sensation, Yuval Noah Harari
to this podium. He is the author of the
critically acclaimed New York Times best-selling book entitled Sapiens: A Brief
History of Humankind. Fans of his earlier
work, and there are many, have been waiting
anxiously for the sequel. Their patience
has been rewarded. Homo Deus: A Brief
History of Tomorrow is the title of his newest work, and it is just as riveting
as it is provocative. – What I really want
to discuss is the next big revolution in history. Of course there have
been many different revolutions in the last
thousands of years. We had revolutions
in technology, in economics, in
society, and in politics. But one thing remained
constant for thousands, even tens of thousands of years, and this is humanity itself. Humanity has not
really changed since the Stone Age. We are still the
same people that we were in the Roman Empire, or in Biblical times,
or in the Stone Age. We still have the same
bodies, the same brains, the same minds as the
people who painted the cave art at
Lascaux and Altamira. The next revolution
will change that. The really big revolution
of the 21st century will not be in our
tools, in our vehicles, in our society, in our economy. The really big revolution will
be in ourselves, in humanity. The main products of
the 21st century economy will be bodies, and
brains and minds. We are now learning how
to hack not just computers but how to hack organisms and in particular,
how to hack humans. We are learning how
to engineer them and how to manufacture them. So it is very likely
that within a century or two homo sapiens
as we have known it for thousands of
years will disappear. Not because, like in some
Hollywood science fiction movie, the robots will
come and kill us, but rather because we
will use technology to upgrade ourselves,
or at least some of us, into something different,
something which is far more different from us than we are
different from Neanderthals. Medicine is shifting,
or will shift, from healing the sick to
upgrading the healthy. You can say that the really
big project of humankind will be to start manipulating or gaining control of
the world inside us. For thousands of years, humanity
has been gaining control of the world outside us,
gaining control of the animals, the forests, the rivers,
but with very little control of the world inside us. We knew how to stop a
river from flowing. We did not know how to
stop a body from aging. We knew how to kill
mosquitoes if they buzzed in our ears and
interrupted our sleep. We did not know how to
stop buzzing thoughts in our own minds that
interrupt our sleep. When you go to sleep,
you want to fall asleep, and suddenly a thought comes up. What to do? You don’t know. It’s not a mosquito
that you can kill. This will change in
the 21st century. We will try to gain
the same control that we had over the world outside. We will try to gain
the same control over the world inside us. And if this really succeeds, it will be not just the
greatest revolution in history. It will actually be
the greatest revolution in biology since the
very beginning of life. Life first appeared,
as far as we know, on planet Earth around
four billion years ago, and since its appearance
nothing much changed in the fundamental laws of life. Yes, you had all the
dinosaurs, and all the mammals, and all these things, but
the basic rules of the game of life did not change at
all for four billion years. For four billion years
you had natural selection. Everything, dinosaurs,
amoebae, coconuts, humans, evolved by natural selection,
and for four billion years all of life was confined
to the organic realm. Again, it doesn’t matter if
you’re a Tyrannosaurus rex or a tomato, you are
made of organic compounds. In the coming century humankind
may have the ability to, first of all, replace
natural selection with intelligent design
as the fundamental principle of the
evolution of life. Not the intelligent design
of some God above the clouds, but our intelligent
design will be the main driving force of the
evolution of life. And secondly, we may gain
the ability to create the first inorganic
lifeforms after four billion years of organic evolution. And if this succeeds, then
this is really the greatest revolution since the
very beginning of life. Of course there are many
dangers involved in this. One danger is that we’ll
do to the world inside us what we have done to
the world outside us, which is not very nice things. Yes, over thousands of years
humans have gained control of the world outside,
of the animals, of the forests, of the rivers. But they didn’t really
understand the complexity of the ecological system
and they weren’t very
responsible in how they, how we, behaved, which is why now the
ecological system is on the brink of collapse. We used our control to
completely destabilize the ecological system, to
completely unbalance it, largely due to ignorance. And we are very ignorant of
the world inside us also. We know very little
about the complexity not just of the body and brain, but above all about the
complexity of the human mind. We don’t really understand
how it functions and what keeps it in balance. The danger is that we
will start manipulating the world inside us in such
a way that will completely unbalance our internal
ecological system, and we may face a kind
of internal ecological disaster similar to the external ecological disaster
that we face today. Another danger is
on the social level. We may end up, due to
these new technologies, with the most unequal
society in human history, because for the first time in
history it will be possible to translate economic inequality
into biological inequality. For thousands of years there
were differences between rich and poor,
nobility and commoners, but they were just economic,
and legal, and political. The kings may have imagined
that they are superior to everybody else,
they are more capable, they are more creative,
they are more courageous, whatever, and this is why
they are kings and nobles. But this wasn’t true. There was no real
difference in ability between the king
and the peasant. In the 21st century
this may change. Using the new technologies, it will be possible
basically to split humankind into
biological castes. And then once you open
such a gap it becomes almost impossible to close it. Another related danger is that, even without all this
new bioengineering and things like that,
we will see an extremely unequal society as elites and
states lose their interest, lose their incentive to
invest in the health, and education, and
welfare of the masses. The 19th and 20th centuries
were the ages of the masses. The masses were
the central force in politics and in society. Almost all advanced countries, regardless of political regime, invested heavily in the health, and education, and
welfare of the masses. Even dictatorships
like Nazi Germany or like the Soviet Union built
massive systems of education, and welfare, and
health for the masses, hospitals, and schools, and
paying teachers and nurses, and vaccinations, and sewage
systems, and all that. Why did they do it? Not because Stalin and
Hitler were very nice people, but because they knew that
they needed the masses. Hitler and the Nazis
knew perfectly well that if they wanted Germany
to be a strong nation with a strong army
and a strong economy, they needed millions of poor
Germans to serve as soldiers in the army and as workers
in the factories and offices, which is why they had
a very good incentive to invest in their
education and health. But we may be leaving
the age of the masses. We may be entering a new
era in which the masses are just not useful
for anything. They’ll be transformed
from the working class into the useless class. In the military it
has already happened. Very often in history
armies march a few decades ahead of the civilian economy. And if you look at armies today, you see that the transition
has already happened. In the 20th century the best
armies in the world relied on recruiting millions
of common people to serve as common
soldiers in the army. But today the best
armies in the world rely on fewer and fewer humans and these humans are not your
ordinary common soldiers. They tend to be highly
professional soldiers, all the elite special
forces and super-warriors and the armies rely
increasingly on sophisticated and autonomous
technologies like drones, and cyber warfare,
and things like that. So in the military
field most humans already in 2017 are useless. There is nothing
to do with them, they are not needed to
build a strong army. The same thing may happen
in the civilian economy. We hear more and more
talk about the danger of artificial intelligence and machine learning pushing
hundreds of millions, maybe even billions, of
people out of the job market. Self-driving cars that
10 years ago sounded like complete science fiction,
today the only argument is whether it will take
five years, or 10 years, or 15 years until we see more and more self-driving
vehicles on the road, and they will push all the taxi
drivers, and truck drivers, and bus drivers out
of the job market. You won’t need these jobs. Same things may happen in
other fields like in medicine. Of course new jobs might appear. People say, “Okay, you
don’t need truck drivers, “but there will
be many new jobs, “let’s say in
software engineering. “Who will program all
these new AI programs? “And there will be
lots of jobs designing “virtual worlds and
things like that.” This is a possible scenario. One problem with this
scenario is that as AI becomes better and better, we have
really no guarantee that even programming software
is something that humans will do better than computers. The problem is not in
having new jobs in 2050, the problem is having new
jobs that humans do better. Just having new jobs
that computers do better won’t help in terms
of the job market. Another problem is
that people will have to reinvent themselves
again and again and again in order
to stay relevant, in order to stay
in the job market. This may not be easy. If you think about an
unemployed taxi driver or an unemployed cashier
from Walmart who at age 50 loses his or her job
to a new machine, new artificial intelligence. So at age 50, to reinvent
yourself as a software engineer, this is going to be
very, very difficult. The worst problem, of course,
is not in the developed countries but in the
developing countries. The really big question
is what will happen to the Nigerians, to the
Bangladeshis, to the Brazilians. If millions of textile
workers in Bangladesh lose their jobs because of
automation, what will they do? We are not teaching the
children of Bangladesh today to be software engineers. So we may be facing in the
21st century a completely new kind of inequality which
we have never seen before in human history,
on the one hand, the emergence of a new
upgraded elite of super-humans enhanced by bioengineering
and brain-computer interfaces and things like that,
and on the other hand a new massive useless class,
a class that has no military or economic usefulness
and therefore, also no political power. Finally, there is the political
question of authority. What we may see in
the 21st century, alongside the processes I just
discussed is a fundamental shift in authority from
humans to algorithms. There have been a
few previous shifts in authority in history. Hundreds of years ago, say in the European Middle Ages, authority came down from
the clouds, from God. You wanted to know who
should rule the country or what to do, whether in
terms of national policy or in terms of
your personal life, authority to answer these
questions came from God. So you asked God, and if
you had no direct connection to God you read the Bible
or you asked a priest or a rabbi who knew
what God wanted, and this was the
source of authority. Then in the last two
or three centuries there arose a new worldview, a new
ideology, called humanism, and humanism said no, the source of authority
is not above the clouds. The source of authority
is inside you. Human feelings and
human free choices, these are the ultimate
source of authority. You want to know who
should rule the country, you don’t read the Bible,
you don’t ask the pope, you don’t ask the chief
rabbi, you go to every person, every human being, and
ask, what do you feel? What do you think? And based on that, we know
who should rule the country. Similarly in the economy,
what’s the highest authority? In the economy
it’s the customer, the customer is always right. You want to know whether
a car is good or not, who do you ask? You ask the customer. If the customers like
it, if customers buy it, it means this is a good car. There is no higher
authority than the customer in a modern
humanistic economy. It’s the same thing in ethics,
what’s good and what’s bad? So in the Middle Ages, it’s what God says, it’s
what the Bible says. For example, if you think
about homosexuality, why was it considered
a terrible sin? Because God said so,
because the Bible said so, and these were the highest
authority in the ethical field. Then came humanism, and said no, the highest authority
is human feelings, whether it makes humans
feel good or bad. If two men are in love and
they don’t harm anybody else, both of them feel very good
about their relationship, they don’t harm anybody, what could possibly
be wrong with it? We don’t care what’s
written in the Bible or what the pope says. We care only about
human feelings. So this was the ethical
revolution of humanism, placing human feelings at the
top of the ethical pyramid. And this is also why
humanist education, the main ambition of
humanist education, was very different from the
education in the Middle Ages. In the Middle Ages, the
main aim of education was to teach people what God
wants, or what the Bible says, or what the great, wise people
of the past have written. The main aim of a
humanist education is to teach people to
think for themselves. You go to a humanist
educational establishment, whether it’s kindergarten
or university, and you ask the
teacher, the professor, “What do you try to teach
the kids, the students?” So the professor would say, “Oh, I try to teach history
or economics or physics, “but above all I try to teach
them to think for themselves.” Because this is the
highest authority. What do you think? What do you feel? So this is humanism, the
big revolution in authority of the last two or
three centuries. And now we are on the
verge of a new revolution. Authority is shifting
back to the clouds, (audience laughing) to the Microsoft Cloud,
to the Google Cloud, to the Amazon Cloud. Data and data processing is
the new source of authority. Don’t listen to the Bible and don’t listen
to your feelings. Listen to Amazon, listen to
Google, they know how you feel, they know you better
than you know yourself, and they can make
better decisions on your behalf than you can. The central idea of
this new worldview, which you can call
dataism because it invests authority in data, is
that given enough data, especially biometric
data about a person, and given enough
computing power, Google or Facebook can
create an algorithm that knows you better
than you know yourself. Now how does it
work in practice? Let’s give an example so it
doesn’t stay too abstract. Let’s say you want
to buy a book, I’m in the book business, so
it’s very close to my heart, how do you choose
which books to buy, or which books to read? In the Middle Ages
you go to the priest, you go to the rabbi,
and they tell you, “Read the Bible. “It’s the best
book in the world. “All the answers are there. “You don’t need anything else.” And then comes
humanism and says, yeah, the Bible, there are
some nice chapters there, but there are many other
good books in the world. Don’t let anybody tell
you what books to buy. You just go and the
customer is always right and all that, you just
go to a bookstore, you wander between the aisles, you take this book, you
take that book, you flip, you look inside, you have
some gut instinct that, oh, this is an interesting
book, take it and read it. You follow your own
instinct and feelings. Now you go online to the
Amazon virtual bookshop, and the moment you enter
an algorithm pops up: “Ah, I know you. I’ve been following you
“and following millions of people like you, and
“based on everything I know about your previous
“likes and dislikes, I recommend that
“you read this book. You’ll find it very interesting.” But this is really just
the first baby step. The next step, if there
are people here who read books on Kindle,
then you probably know, you should know, that
as you read the book, the book is reading you. For the first time in
history books are reading people rather than vice versa. As you read a book on Kindle,
Kindle is following you, and Kindle, which means Amazon, knows which pages you read slow, which pages you read fast, and on which page you
stopped reading the book. And based on that, Amazon
has quite a good idea of what you like or dislike. But it is still very primitive. The next stage, which is
technically feasible today, is to connect Kindle to
face-recognition software, which already exists, and then
Kindle knows when you laugh, when you cry, when you’re
bored, when you’re angry. The final step, which probably
will be possible in five, 10 years, is to connect
Kindle to biometric sensors on or inside your body
which constantly monitor your blood pressure,
your heart rate, your sugar level,
your brain activity. And then Kindle,
which means Amazon, knows the exact emotional
impact of every sentence you read in the book,
you read a sentence, what happened to
your blood pressure. This is the kind of information
that Amazon could have. By the time you finish the book, let’s say you read
Tolstoy’s War and Peace, by the time you finish the book, you’ve forgotten most of it. But Amazon will never
forget anything. (audience laughs) By the time you
finish War and Peace, Amazon knows
exactly who you are, what is your personality type and how to press your
emotional buttons. And based on such information, it can not only
recommend books to you, it can do far more spectacular
and frightening things, like recommend to you what
to study, or whom to date, or whom to vote for in elections. In order for authority
to shift from you to the Amazon algorithm, the Amazon algorithm will
not have to be perfect. It will just have to be
better than the average human, which is not so very
difficult because people make terrible mistakes in
the most important decisions of their lives. You make a decision of what
to study, or whom to marry, or whatever, and after 10 years, oh no, this was such
a stupid decision. So Amazon will just have to
be better than that in order for people to trust
it more and more, and for authority to shift
from the human feelings and choices to these
external algorithms that not only understand
how we feel but even understand why we feel
the way that we feel. It’s very important to
emphasize that nothing is really deterministic
about all that. What I’ve outlined in this
talk are not forecasts. Nobody really knows what
the future will be like. They are more possibilities
that we need to take into account, and we
can still do something about these possibilities. Technology is never
deterministic; it
gives us options. If you again look back
to the 20th century, so the technologies
of the 20th century, the trains, and electricity, and radio, and
television, and all that, you could used these
technologies to create a communist dictatorship,
or a fascist regime, or a liberal democracy. The trains did not tell
you what to do with them. Electricity did not
come with a political manual of what to do with it. You have here a very
famous picture taken from outer space of
East Asia at night. What you see at the bottom
right corner is South Korea, what you see at the upper
left corner is China, and in the middle, it’s not
the sea; that black hole there
is North Korea. Now why is North Korea
dark while South Korea is so full of light? Not because the North
Koreans have not encountered electricity
before, they’ve heard of it, they have some use for it, but they chose to do with
electricity very different things than what the South
Koreans chose to do with it. So the same people,
same geography, same climate, same
history, same technology, but different choices lead
to such a big difference that you can actually
see from outer space. And it will be the same
with the 21st century. Bioengineering and
artificial intelligence are definitely going
to change our world, but we still have some options. And if you don’t like some
of the future possibilities that I’ve outlined in this talk, you can still do
something about it. Thank you. (audience applauding) – [Narrator] For
more on this program and other Carnegie Ethics
Studio productions, visit There you can find
video highlights, transcripts, audio recordings, and other multimedia
resources on global ethics. This program is made possible by the Carnegie Ethics
Studio and viewers like you.
