Faculty Forum Online, Alumni Edition: The World According to Deep Learning and Second-Wave AI

>>Hi, I’m Whitney Espich, the CEO of the MIT
Alumni Association. And I hope you enjoy this digital production created for alumni and
friends like you.>> Good afternoon. Welcome to the Massachusetts
Institute of Technology. My name is Rod McCullom. A little bit about me. I’m a science and technology
writer for publications such as Nature, Scientific American, The Atlantic, The Nation and
I also write about artificial intelligence, metrics, infectious disease and the science
of violence. I was a Knight Science Journalism Fellow from 2015 to 2016. As a reminder, we welcome your questions.
We have a Q&A feature found on your toolbar. For those of you on YouTube, you may add your
questions or comments next to the stream. We encourage you to tweet using #MITBetterWorld.
We will try to get to as many questions as we can. I’m so glad today to introduce our
feature presenter, Brian Cantwell Smith, who holds the Reid Hoffman Chair in Artificial Intelligence
and the Human at the University of Toronto. He’s a professor of information, philosophy, cognitive
science and the history and philosophy of science and technology. Professor Cantwell
Smith holds a bachelor of science, master of science, and Ph.D. from MIT. From 1981 to 1996, he was a principal
scientist at the Xerox Palo Alto Research Center and professor of philosophy at Stanford
University. Professor Smith was a founder of the Center for the Study of Language and Information
at Stanford University, and president of the Society for Philosophy
and Psychology from 1998 to 1999. From 1996 to 2001, Professor Smith was a professor of cognitive
science and computer science at Yale University. From 2001 to 2003, he was at Duke University
as distinguished professor of philosophy and new technologies and in the departments of
philosophy and computer science. He moved to the University of Toronto in 2003 and served
as the Dean of the Faculty of Information from 2003 to 2008. And we’re very pleased
to welcome him to the Alumni this afternoon. Welcome and good afternoon, Professor Cantwell
Smith.>> Well, thank you very much. I hope you
can hear me. It’s really kind of you and the whole Alumni Association to host this event.
And I’m just thrilled to be here. I’m going to try to keep my remarks as short as I can
in order to leave time for lots of questions. So I look forward to that.>> Thank you.>> Should I go ahead and just plunge into
the discussion?>> Yes, go, please.>> OK. You know, one of the things that I
wanted to talk about was the sort of background of this project. I wrote my first A.I. program
back then, and I was very interested in whether computation would be capable of understanding
issues that mattered about us. Whether it could do justice to the depth and complexity
of the human condition. That dialectic between computation and the human condition has been
in focus for me my entire life. So it’s funny because when I started in 1972, I joined a
social inquiry major but then, quickly, I moved over to the A.I. lab but I’ve always
had the kind of wonder about the adequacy of A.I. but also about what A.I. is. I’ve
been struggling for now 30, 40 years to understand what computing is. And I actually don’t think
current theories of computing do justice to the notion of computing. So it’s kind of like
I was playing Dungeons and Dragons and I always found stairways, and I was just sort of always
probing, probing the philosophical conditions of everything. So that’s kind of a background
to this whole project. The talk I’m going to
give actually arises, basically, out of this book which is coming out this summer,
“The Promise of Artificial Intelligence: Reckoning and Judgment.” I think there are very
serious things at stake. What I want to talk about today is what I think matters about
all the recent stuff about A.I., the deep learning and second-wave A.I. stuff, so there’s
lots of different discussions of it and I think the most important insight in a way
that’s behind deep learning and A.I. has to do with the sort of glimpse it
gives us into the nature of the world. So rather than talk about the technology itself,
I’m interested in the structure of the world: it is the world’s being a certain
way that is allowing deep learning and A.I. to do justice to it, and that’s where in fact
deep learning and second-wave A.I. are getting their power. So let me just say a few things
about deep learning in that regard. Sorry, the technology here is maybe less than perfect.
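Before the slides, a toy sketch from the editor (the weights are hand-picked and purely illustrative, not anything from the talk): a minimal two-layer network of the kind that deep learning systems stack many of. Even two tiny layers compute XOR, something no single linear unit can do on its own.

```python
import numpy as np

def relu(z):
    """Standard rectified-linear activation applied between layers."""
    return np.maximum(z, 0.0)

def two_layer_net(x):
    """Hand-weighted two-layer network computing XOR of two 0/1 inputs."""
    # Layer 1: two hidden units, each a simple weighted sum of the inputs.
    h = relu(np.array([x[0] + x[1],          # >= 1 when either input is on
                       x[0] + x[1] - 1.0]))  # >= 1 only when both are on
    # Layer 2: a linear readout of the hidden units.
    return h[0] - 2.0 * h[1]

for a in (0.0, 1.0):
    for b in (0.0, 1.0):
        print(int(a), int(b), int(two_layer_net(np.array([a, b]))))
```

No single linear layer can produce the 0/1/1/0 pattern; stacking even two layers changes what is computable, which is the minimal version of the point about layered architectures.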
So here is the kind of picture of how deep learning systems and neural networks
work. Here is a slightly better characterization of deep
learning, which is that deep learning is essentially a statistical method, a method for statistical
classification and prediction of patterns based on sample data. Now often quite a lot
of it. Using an interconnected fabric of processors arranged in multiple layers.
So the pictures are examples of that, but the fact that this is doing statistical classification
is very important. But still, I don’t think we’ve gotten to the heart of the issue of
deep learning and why it is that matters. Here is a yet better but I don’t think yet
perfect characterization of deep learning. So GOFAI, the stuff that I was born and
bred on when it was just in full flight, is what was called good old-fashioned A.I. by my dear friend
John Haugeland. One way to characterize it: people say it’s based on logic, and that was
true, but logic wasn’t as important as this fact about it, which is the conception of
intelligence, which to some extent arose out of introspection on our part: intelligence
was deep, many-step inference by a serial processor using modest amounts of information
involving a relatively small number of strongly correlated variables. If you think about
“Socrates is a man and all men are mortal” and so on and so forth, formal logic is strong correlation.
Proofs in third-order logic can also be quite deep, and some of the systems got fairly involved
in pretty deep chains of reasoning. Deep learning is in some ways the opposite in five different
dimensions. And this is interesting. It is typically shallow: a few steps of inference,
basically, by a parallel process, not a serial one, based on massive amounts of information,
not modest amounts of information, involving weakly correlated variables. And that stuff
matters in terms of what it tells us about the world. So those two things in particular
are facts that I want to talk about more. So what was true back then was that we
kind of thought that the world was such that the GOFAI forms of reasoning would actually
work: the world we imagined is the sort of world that would be amenable
to the sorts of inferences that were described on the left, the sorts of inferences
relied on in logic. This is the Stanford Research Institute in California, when I went out there.
This is from like 1980. This is Shakey, one of the first mobile robots. What I want to
draw attention to is this is the world they built for Shakey. They built
a world that actually was like what it was that those of us in GOFAI thought the world
was like. Namely, they projected our idea of the world on to the world and made a world
out of easily describable objects. This is not actually what the world is like and
I think one of the things I tell students is that I think A.I. is essentially a history
of enormous humility as we encounter the inadequacies of a lot of our assumptions. Here’s a picture.
Now, this is a pretty ordinary picture. I just picked it randomly on the web. It’s basically
an empty room. And you know, it’s got two people in it. But it’s a tremendous amount more complicated
than the world we just saw. But not only is it more complicated, what you’re seeing in
this world and you are actually not seeing what it’s like as it were directly, you’re
not even seeing the two dimensional projection of what it’s like. You’re processing that
image with your brain, which is a neurodevice comprising 100 billion elements with 10 to
the 14th interconnections between and among them, honed for the purpose of dealing with
human vision over 500 million years of evolution. And what arises in your consciousness when
you see this picture is something that has been processed by a processor of immense complexity.
So that makes me wonder, and in fact I’ve talked to this artist friend of mine about it: what’s
the world like, that we process it and it delivers to our consciousness a sense of a room? And
so this friend, Adam Lowe, he painted a picture of what the world looks like before all that
processing, and here’s his picture. Now, this picture is from a chapter in a book of mine from a long
time ago. I don’t know if you can see my cursor. I can see my cursor but I hope you can. This
is what he thinks arrives at our perceptual processes. I can parse it easily,
but not everybody can parse it right away. The head’s up near the top. They’ve got two legs. They’re
walking. Their arm is here and they’re carrying a pail. A pail is here with a bunch of things
around it. It’s in the basement of his house. I’ve been to this doorway. Inside is a wooden
box with scraps of wood and so on and so forth, and he’s just walking by this door. And it’s
sort of a feat. I mean, this is a feat in a way. We take images of this kind of complexity
and turn them into things that are parsed for our minds and for our concepts. Perception
is computing a function f from a scene to a percept. He has applied f to the minus one: where
we apply perception to the scene, he has run it backwards to what arrives before perception. And
that’s basically what he comes to. Now, I’m going to stop this for
a minute. OK. So the world is not the way we imagined in the
days of GOFAI. The bottom line of this: the world is a mess. Our ideas were clean and
sharp, but the world itself was a mess. Is a mess. Will always be a mess. How can
an A.I. system deal with that mess? The conceptual structures of GOFAI were inadequate to that
mess, and that was the reason for the defeat
of GOFAI: the conceptual representations we used were inadequate to the world in which
we deployed these systems. Now there was a reaction to this and it started pretty early
and one of the first reactions to it was in fact from Rod Brooks, who
was at the A.I. lab. So I’m going to put up a slide of some of his early creatures. Rod
took a room on the ninth floor of Tech Square, where the A.I. lab was, and he put sand over
the floor and he let some robots loose. Some of those were pretty small and they clambered
around and climbed over rocks, and it was a revolution with respect to robotics and it
was successful. These robots were able to clamber around in rooms that were not clear
and distinct in the way GOFAI assumes. These robots went to the moon and so on as you can
see. These are bigger than the ones he had on the ninth floor but it was the same idea
based on Rod’s idea, and these things were the first robots that could deal with a world
that was not a clean and distinct world of objects. But the thing about this was they
dealt with this world in a rather striking way. They didn’t reason in terms of clear
and distinct categories. So in that way, they made an advance in terms of the assumptions
over GOFAI. But the thing is they didn’t reason at all. That’s because they didn’t represent the
world. They were reactive, behavioral robots. So Rod Brooks won
the award, and it made him famous and led to him being appointed director of the A.I. Lab, and
other people like David Kirsh talked about the sea change in A.I.: “Today the earwig,
tomorrow man.” Look at the first sentence of the abstract there: a startling amount of
intelligent activity can be controlled without reasoning or thought. Basically, the attention
in A.I. moved downward from conceptual representation towards navigating the world which was a mess.
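An editor’s sketch of what purely reactive, non-representational control can do (the vehicle, the sensor model, and every number here are assumptions for illustration, not anything from the talk): two light sensors feed the steering directly, with no map, no reasoning, and no representation of the world, yet light-seeking behavior emerges.

```python
import math

def clamp(v, lo, hi):
    return max(lo, min(hi, v))

def step(x, y, heading, light=(10.0, 0.0), speed=0.2, gain=20.0):
    """One reactive control step; returns the new (x, y, heading)."""
    readings = []
    for offset in (0.5, -0.5):  # left sensor, then right sensor
        sx = x + 0.5 * math.cos(heading + offset)
        sy = y + 0.5 * math.sin(heading + offset)
        d = math.hypot(light[0] - sx, light[1] - sy)
        readings.append(1.0 / (0.1 + d * d))  # brightness falls off with distance
    il, ir = readings
    # Steer toward the brighter side; the clamp keeps the turn rate physical.
    heading += clamp(gain * (il - ir) / (il + ir), -0.3, 0.3)
    return x + speed * math.cos(heading), y + speed * math.sin(heading), heading

# Drive for a while and watch the light-seeking behavior emerge.
x, y, h = 0.0, 3.0, 0.0
closest = math.hypot(10.0 - x, y)
for _ in range(300):
    x, y, h = step(x, y, h)
    closest = min(closest, math.hypot(10.0 - x, y))
print(f"closest approach to the light: {closest:.2f}")
```

There is no world model anywhere in this loop: the sensor difference drives the wheels, and the "goal-directed" behavior is entirely in the coupling between the device and its environment.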
And so we got this wonderful book, if any of you teach about this kind of robot. I
mean, it’s from the same era, I think. But you can see from the picture on the upper
right what he showed: if you equip vehicles with sensors that are attracted to light and
you hook them up to the wheels, the thing will steer around and produce remarkably intelligent-seeming
behavior. But it’s still just behavior. Now behavior is not enough. And one thing
that in fact led, I think, to the successes of deep learning and second-wave A.I. was
sort of this question: if we’re going to get past behavior and get back to thinking, how can we actually
get to thinking that is not based on clear and distinct objects? So let’s go back
to these neural networks, which we saw before. Now here’s a metaphor for what I think we
learned. And I’m going to show you a picture. This is a picture of islands
in Georgian Bay, and they’re pretty messy. You might think that these islands per se
illustrate the inadequacy of clear and distinct GOFAI categories for doing justice to the
world. But this picture is in fact still a little bit GOFAI-like in that regard,
which is that it’s still essentially parsed into objects.
What I want to show you next is in fact the
photograph that I made this photograph out of, without the boundary of the water demarcated.
Look at the transition between this and the next picture. What this picture shows is the
subterranean texture of what is under the water, and the islands are revealed for
what they are: essentially outcroppings above the surface of the water that are in fact
connected in a world of stunning complexity. And I’m going to end that. So the world I
believe is like the picture of the submarine structure: the complexity of the world out
there is like what’s under the water there, and the concepts are just like islands above the
threshold of consciousness, the things for which we have words, which are articulable
in the sense that we can articulate them. But in order to think, we have to deal with
that, with the full complexity of that subterranean structure. And it’s really that, I think,
that is the demand on A.I. systems: how do you deal with that subterranean structure.
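A toy numerical illustration from the editor of the point about weakly correlated variables (the feature count and the 52% figure are made up for the demo): each individual feature is barely better than a coin flip at indicating the class, yet pooling a couple of thousand of them classifies nearly perfectly.

```python
import random

random.seed(0)                # deterministic toy demo
N_FEATURES = 2000             # many, many weak "subterranean" features
AGREEMENT = 0.52              # each feature matches the true class only 52% of the time

def sample_features(true_class):
    """Each feature is an almost-useless, weakly correlated vote for the class."""
    return [true_class if random.random() < AGREEMENT else 1 - true_class
            for _ in range(N_FEATURES)]

def classify(features):
    """Pool the weak evidence with a simple majority vote."""
    return 1 if sum(features) > len(features) / 2 else 0

trials = [0, 1] * 100
correct = sum(classify(sample_features(c)) == c for c in trials)
accuracy = correct / len(trials)
print(f"pooled accuracy from 52% features: {accuracy:.2f}")
```

Any one feature alone would score about 0.52; the pooled vote typically lands well above 0.9, which is the statistical heart of why shallow inference over massive numbers of weakly correlated variables works.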
If I recognize somebody, like Rod Brooks or, you know, my partner or a
student or something like that, this is what’s said
about deep learning: it’s not that we recognize that this person has cheeks
of this width or eyes of this kind or whatever. We recognize
thousands and thousands and thousands of microscopic features, basically, which we process, which
gets us up to a higher-level concept like this is Rod or this is Trump or whatever.
It’s only when it rises above the surface of the water that it actually emerges into
anything that we can conceptually describe. But in fact our navigation of the world
is dealing, in a non-articulable way, in a non-conceptual way, with
that underlying structure. The deep learning architecture is capable of dealing with that
underwater complexity. Millions and millions of features under water, millions and millions
of extremely weak correlations between and among them that actually allow it to generate
sort of the conceptual representations. And this pushes one into a kind of concern about
what the role of conceptualization is, what’s the role of language? What’s the role
of what we can articulate? It’s, as it were, only late in the process, as we arrive in the
realm of the articulable, that we have introspection, but a huge amount of our thought processes are
dealing with this underwater structure, and it’s because of that structure that the features
of the deep learning architectures are as they are. I’m going to actually show one more thing
that I think is actually a little description I once wrote about object properties
and relations, which we thought were what the world was like when we were doing GOFAI. I
think of those as the long-distance carriers of normal conscious, you know, articulate
life. They’re essential because in fact they allow us, given finite resources, to do the stuff
you learned at MIT and so on and so forth; they’re critical for communication, critical for dealing
with situations that are along the way and so on and so forth. But the cost of packing
them up into those atoms, into those discrete objects for portability and long-distance
travel is that they’re insulated from the fine-grained richness of underwater life, of the particular
indigenous life, the richness of the very lives they sustain and of the world that we
are all actually a part of. So, here’s the summary so far. GOFAI tried to implement intelligence
in terms of clear and distinct conceptual categories. That didn’t work. Rod Brooks in
what I think of as wave 1.5 got over that assumption, but the way he got over it was by
being reactive. But you can’t get terribly far with that. You can’t think about what’s
distant, what’s absent. You need reasoning. And so what deep learning and second-wave
A.I. have done is they’ve allowed us to realize that realistic reasoning requires vast amounts
of submarine complexity underneath and among the concepts in terms of which we think. All
right. So I’m actually going to stop at this moment. Because it’s 12:30. This is only part
one of this talk. I was going to go on and talk about what this book says and what the
consequences are of this being the structure of learning, and talk about where I think A.I.
is and what it is that it doesn’t do. In the book, I make a big distinction between reckoning
and judgment, which is a much more serious thing. We can talk about that. But how about
I stop and see how we’re doing? Hopefully, there will be people who have questions. I remember
from MIT that people with questions were not hard to find.>> Thank you so much, Professor Smith, for
that very engaging and informative presentation. A reminder to viewers to ask questions that
are for our guest today using the Q&A feature or the comments panel in YouTube Live. You’ve written a bunch
of books, Professor Smith. Can you tell us about this one?>> Yeah. Well, so a problem I have is that
things that I start out thinking are going to be short tend not to stay short. I went to
MIT Press with what I thought was a book I’d written.
It turned into this book, so it’s not a very long book, especially for
me. It’s only like 140 pages, but it’s a book. It’s done. It’s off to MIT Press. It’s going
to be officially published on October 9. I think there are going to be copies sent to
reviewers and stuff before that, and I was just visiting some friends at MIT and they
said if you go online, it is listed and you can preorder it.>> I just saw it. It’s already there. So
just a couple of questions. A reminder to our viewers to ask questions about our faculty guest today,
Brian Cantwell Smith, using the Q&A feature of Zoom. There are some really interesting
comments so far.>> Yeah, great.>> I want to ask you about your background.
I see that all your degrees are in computer science at MIT. You have appointments and
you teach philosophy. Can you talk about that? It’s not the normal career path, I would say.>> No, it’s not the normal career path. I don’t
know if there are people who are in their 30’s or 40’s. But my dad was a theologian,
and as a kid, he knew 38 languages. So I just ran from there. I don’t know any language
other than English, and I went to MIT. I did this technical stuff, as far from where
I grew up as I could find, basically. But I think I’ve always been a little bit of a congenital
philosopher, in a certain sense. Philosophy was the hallway; it was
just the natural sciences, it was natural knowledge. And out of it grew natural philosophy,
and then out of natural philosophy grew physics and stuff, and then fields got their identities
and built rooms off this hallway. And analytic philosophy
became a room of its own. I like the hallway. I’m interested in these big deep questions.
So that’s the kind of philosopher I am. I went to some meetings with my friends,
and gradually, I got my Ph.D. and I started talking to these people in cognitive science,
and philosophers were very pleased, so I got them interested,
and at one point I said, you know, not only am I interested, I might like to be a
philosopher, and they went, oh, that’s different. So imagine, you know, you had good friends on
the Blue Jays or something. They’re a great team,
but, like, I can’t play baseball. It was like a complete gulf to anticipate. So I
probably spent 10 years learning how to go to talks, to understand what the questions
were, what the history was, and so on and so forth. I’ve never taken a philosophy course
in my life but I marinated for a long time and you can do that and you can cross between
and among different fields. At first, you know, I was just an adjunct
professor, and they didn’t want me as a professor, and I was there for a while,
and gradually people realized I was a person
who thinks about real questions, and so gradually I actually crossed these divisions. So it’s
been very slow and very gratifying. The faculty I’m in is a social science. I’ve taught in humanities,
in engineering, and science and stuff. Which is kind of what universities were for: there
is this root “unity” at the beginning of “university,” which is supposed to be a single
sense of knowledge. The university has gotten fractured in a way that careers often require
a disciplinary narrowness. But I actually think the moment is serious enough
that we absolutely need people who can cross this. So, OK, here’s a
metaphor and then I’ll shut up. One way to think about the discussion about A.I. at the
moment is that there are people... think of a graph, if you can see my hands. Think of
a graph in which this is the dimension of technical expertise and this is the dimension of depth
of understanding of the human condition. There are people who have a lot of technical expertise
out here, and their sense of the human condition is about a millimeter deep. I don’t know.
There are people who have a tremendously deep sense of what constitutes humanity and what’s
mattered about civilization and history, but their understanding of technology
is a millimeter deep. And I want to put a stake in the ground. I mean, we should be
out there at .8, .8. I can’t get to .8. I’m not quite either of them anymore, but at least
I want to put a stake in the ground around .5 or .6. That’s where the debate is needed,
I think. That’s the consequence of ending up in philosophy and the other kinds of disciplines.>> Sorry, I’m actually not hearing you.>> Can you hear me now?>> Now, I can. Yeah.>> Sorry about that. I want to go to questions
because there are a lot of questions.>> Sure.>> Two questions on the third wave of A.I.:
what do you think the third wave of A.I. would look like?>> I talk a little bit about that
in the book. Briefly, the way people are using the term third wave, for having a
context-aware, you know, model of the world: I don’t think it’s anything like different
enough from second-wave A.I. to get to what I think we need. What I
think about is a person with good judgment. I think, you know, John, with the phrase GOFAI,
was very good in regard to this stuff. And also the discussions, if people are
aware of them, about what would constitute genuine intentionality rather than just... well,
people talked about simulations and stuff. I don’t think computers are simulations. I
think they’re real. But I don’t think they have anything like the depths of the understanding
of the world that I think real human judgment needs. So here’s a couple of properties that
I think real judgment would require; actually, maybe I can put this up.
Here are some properties, hopefully people can see this, that I think a system would
need. It’s got to be directed towards the world, not directed just at representations,
not just to have representations. So I think being directed to the world, not to the representation,
is a demanding skill. If I click on a button on my computer and it says
eject this disc or eject this U.S.B. key or something, I don’t think the computer is
directed towards the U.S.B. key. It’s just directed at that which is in the drive, because
it doesn’t have any understanding of what’s in the drive, whereas we
understand the difference between being in the U.S.B. slot and the thing
that is in the U.S.B. slot, and the thing that is in the U.S.B. slot might not be the thing
that we expected. I don’t think computers can distinguish appearance from reality. I
don’t think, oh, I don’t know, AlphaGo or AlphaGo Zero does. I think you have to care
about the difference between your representations and the world; I think you’ve got to be existentially
involved in the world in order to know the difference between the world and your representations
of it. You have to be able to distinguish what’s possible and what’s actual, and you have
to know that that towards which you’re directed is here in the world and you’re in the world;
you have to know that both you and the object are in the world. These are existential
things that I think have been built into thousands and thousands of years of human culture, which
didn’t require changes in our D.N.A. or architecture, but real, real standards that we hold ourselves accountable to.
OK, so I don’t think the third wave is dealing with that. But pursuant to what I
was saying in terms of the nature of the world, it requires an ability to recognize that
how you take the world to be what I call how you register the world, be it in terms of
objects, be it in terms of fantasy, be it in terms of differential equations, whatever.
You have to hold those registrations accountable to the world, and no addition of representation
is actually going to give you a sense of the world that the model is a model of, because
it is going to be just one more model. I think we know the difference between models and
the world, and always, at every step of our life, we hold models accountable to the world.
And what it is to hold a model accountable is to hold it accountable to the world itself, not just to
the world as described in yet another model. That’s a profoundly different thing from anything
I think has been in A.I. ever since I was there in 1972, all the way up through GOFAI and second-wave
A.I., and nothing that I’ve seen about third-wave A.I. addresses that at all. We’re
a long way away from understanding how to get to actual judgment.>> Thank you. There are a lot of questions
we’re going to try to get to as many as we can.>> OK. I’ll try to be quicker.>> That’s all right. You’re doing great.
Another question, and this is on quantum computing. What impact do you think quantum
computing will have on deep learning?>> I realize that. And a lot of things are
quantum phenomena. For example, take a whiskey with an ice cube: the ice cube floats,
and everything appears ordinary. But the thing about the glass and the whiskey
is that ice floats because the solid form of water is less dense than the liquid form,
and that is at bottom a quantum phenomenon. So it’s not that nothing is quantum. But personally, I doubt it matters here. I have my own ideas. But
I don’t think quantum mechanics is going to get to any of the hard issues of judgment
or any of the hard issues of consciousness. I just don’t think that. That was not your
question. Your question was how is it going to impact deep learning? There is no doubt
that quantum superposition and things like this can do a lot of exploration of alternatives,
especially in problems which are sort of semi-neatly decomposable, but
how many things will submit to that kind of algorithm? You can break credit card security
more easily. It is not going to have a conceptual impact on the problems that A.I. is going
to deal with. I don’t think it touches the kinds of things I’m talking about, the
stakes and judgment and stuff. Of course we’re going to use quantum...>> There’s several questions on ethics. That’s
a major issue in the news right now. I know we only have a little bit of time. What are
some of the more pressing concerns or challenges that your research has as far as ethics
is concerned?>> I have a ton of respect for ethical issues.
It’s not that I don’t think they’re serious. But two things. I don’t think the ethics
of A.I. gets into the interesting issues about A.I. I’m interested in what
A.I. is, what it can do and what it cannot do, and what we might be able to frame with respect
to it. Given that I don’t think we have the cartography of the landscape of issues very
well, who knows? A lot of ethical issues... I don’t find
the ethics of A.I. gets into what I think are the serious issues of A.I. It’s good because it
does a little bit of bridging between the purely technical, calculative
kinds of work that are leading to a lot of powerful systems and people who consider the human side. It’s
just, I’m afraid, more popular than it is a deep issue. That’s one thing. You’ve heard
of the trolley problem: the trolley is out of control and there are two tracks and you can steer
down one or the other, and on one of the tracks there are a bunch of people. Lots of people are talking about
the trolley problem. I don’t think it is the best way to go. I don’t think that kind of
ethical discussion is the right way to get at it, with cars doing it and stuff. My question
about cars is: does driving a car require judgment? Could it be something for which reckoning would be
enough?>> You’re talking about self-driving cars
which deep learning is the foundation of.>> I mean, yeah, it is about driverless cars.>> We have two questions around that. Thank you
so much for your time.>> No, it’s great.>> There are two questions, including one
from Eva, on how we as humans can interact with A.I. and what your feelings are about that.>>
It’s a fraught distinction. Because, first of all, there are many human things which are
not my favorite. People do automatic things. They do silly things, and there are people
who do terrible things; there are people we elect who do terrible things. I don’t want to valorize
the humans in all circumstances or anything like that. And on the machine side, I think
we’re probably machines in some sense. Unless you’re a dualist, you think we’re arrangements
of atoms, and machines are too. I don’t want to define the thing in terms of people
and machines. I would like to understand what intelligence is and what kinds of tasks require
what kinds of intelligence; intelligence is too broad a word. You have to have a map of it. So how
do we interact with these things? Well, the question is: what are they? Are they things
capable of friendship? One thing I don’t think we should do is interact
with them without an adequate conceptual grip on what they are, and basically take the fact
that they’ve demonstrated some behavior which in a person would mean that this person has
these properties, and assume these things have those properties. And not only interact
with them but give them responsibility for deciding prison sentences or educations for our children
or something like that. In other words, we would misunderstand them. And,
which is something I worry about, we shouldn’t take what deep learning can do
as a normative standard, so that people should act more like A.I.s. If these machines are good at
reckoning, what would it be to raise children knowing reckoning is being handled by machines, just
like we think long division is being handled by machines now? What would it be to raise
the standards on what it is to be human, so that we lift the human condition up to a higher
level in virtue of that which the A.I.s can do? That’s the kind of question that I think
distinguishes it. Sorry, I need to hear you again.>> Can you hear me now?>> Yes.>> This is a really fascinating discussion.
We have almost 90 questions so far. What developments in deep learning do you see beyond an increase
in the number of elements or nodes?>> OK. So if the question is how do I think
they are going to advance past where we currently are?>> Right.>> I don’t think that’s the most important
issue, basically, even with respect to deep learning itself — any more than increasing
the megahertz will change things. It would be nice to have the computer running at 50 megahertz
instead of five, but the real issues are elsewhere. Think about my picture of the underwater terrain — the structure
of the underwater terrain underneath those islands. One of the things that GOFAI was
really good at was compositional reasoning: you could deal with negation
and implication and all that kind of stuff. But those concepts were, you know,
laughably discrete and clean compared to the underwater structure. I think figuring
out how to deal with compositionality — what if this were true but that weren’t true — in ways
in which a concept is treated not as an atom but as the tip
of a hugely complex underwater structure that has emerged above the surface,
so that you can reason without losing that richness: the processing cost is staggering if you do it
in certain ways. Maybe we could do it in ways that would lose some but not all of it.
There are huge issues there — lots of issues, I think, way more interesting than
just the number of layers. We may need more layers, we might need more processors, but that’s not
the conceptual part.>> You just mentioned Geoff Hinton, who was
your colleague at the University of Toronto. He is described as the father of deep learning.>> Right.>> There’s another question from Robin, and
this was — hello? Hello? Hi. Hello? Can you hear me?>> Yeah, I can hear you.>> Sorry about that.>> Yeah.>> Several more questions have
also come up about your book. Can you talk more about that?>> About Facebook?>> No, about your book.>> I wouldn’t say — well, I would say this.
I wouldn’t say anything terribly direct, because I think it’s at a different level, but to the
challenge — so he wrote that intelligence, or whatever — years ago he changed it to
reasoning — and he said you should use the world as its own best model when you can. Originally
it was: the world is great if you can have it as the model, but an enormous amount of stuff is not
in front of me. If I turn around, there’s a door there, but it’s shut, so I have no access
to what is outside the door. I’m sure if I open the door, there’s going to be a hallway
there, because there was the last time I opened it. I have to represent that floor and that
hallway because I don’t have access to them. And what about tomorrow? I don’t have any
access to the future or the past, because in fact physics prohibits your connection with
either the future or the past. I think the locality of physics is a staggering limitation,
and the fact that we get around the world as well as we do is a testament to us — to being able
to deal with the distal in that way, with what’s not right here and now.
Think about that wonderful claim: we think so that our hypotheses can die in our stead. You
think you’re going to walk across the highway here, but there’s a semi-truck coming at 80 miles
an hour — if I walk out there, I’m only going to get halfway across. So that’s the great challenge from Rod. What do we
need — what are the situations in which we can’t use the world as its own model? That’s
actually what underlies my account of what representation is — I talk about what representation is and why
you need it.>> I want to ask — once again, let me congratulate
you first for your appointment as the inaugural Reid Hoffman Chair at the University
of Toronto. Can you talk with me a little bit about the appointment and the backstory on
Reid Hoffman? I would appreciate that.>> It’s interesting. My partner has a son
and I have a stepson who is wonderful. But I have put my heart into teaching for many,
many, many years. That’s one reason I haven’t published as much as I should. As it happens,
in 1989, I was teaching this course in a program a bunch of us had started. It’s called Symbolic
Systems, but it’s the cognitive science program, and a bunch of students were in there, and there
was this guy called Reid Hoffman. He was great, affable, very smart. I have an e-mail from
him from that class saying he was not going to get his assignment in on time because he
had to do something else, and so on and so forth. I don’t think I can blackmail him
with that. So anyway, you know, I wrote him a reference to go to Oxford to study philosophy,
but after a year he said, this is enough for me. He became the C.O.O. of PayPal, and your
audience knows more about that than I do. I sent him a note: let’s get together and talk —
I just wanted to congratulate him on all this. And he said, what about all those
books you were writing? You were supposed to have them all done by now. And I said I would have,
but, you know, life is time consuming. And he said, maybe I can do something about
that. So I’m enormously grateful to him. It’s like being a farmer: you cast
a lot of seeds out there, and they don’t have to turn into anything you’re aware
of. But he came back, and he said, look, he was grateful and he would like to make good
on that, and I believed him. So he gave the funds for essentially what at Stanford we used
to call a folding chair — it’s a chair that lasts six years, and then the chair
folds up. I have a ton of thanks and respect and so on and so forth for him for doing that.
No one deserves such a thing; it’s a magnificent act of generosity on his part. People ask what
is required for the chair, and it’s that he wants those books I was talking about 30 years ago
to get published. This book is a first little step towards that. Sorry, I’m not — I’m not
hearing you again.>> Thank you so much. Can you hear me now?>> Yeah.>> Thank you so much, Professor Brian Cantwell
Smith. We had more than 90 questions and obviously we didn’t have time to get to them all — maybe
we’ll do a part two or something. But you were a really fantastic guest and we appreciate
that. On behalf of the MIT Alumni Association, thank you for tuning in to the Faculty Forum Online,
and thank you to Professor Brian Cantwell Smith of the University of Toronto for joining
us.>> Thank you to everybody who came along.>> And to the alumni — thank you so much. The
staff will tweet about today’s event using #MITBetterWorld; send any questions to [email protected].
And thank you so much for watching.>> Thank you.>> Thanks for joining us. And for more information
on how to connect with the MIT Alumni Association, please visit our website.
