Faculty Forum Online: Ron Rivest


Hi, I’m Judy Cole, the
executive vice president and CEO of the MIT Alumni Association. And I’m delighted to welcome
you to this web production of the MIT Alumni Association. Hello, and welcome, everyone,
to the Faculty Forum Online. This is a program of the
MIT Alumni Association. My name is Nate Nickerson. I’m the vice president
for communications at MIT, and I’ll serve as moderator. In today’s program,
we’ll go from now until 12:45, which is
45 minutes from now. A technical note– for alumni
who wish to ask a question, you should enter your
first name, location, and your question in the
form that’s on your screen. We’ll try to get to as
many of those as possible. Our guest today is Institute
Professor Ron Rivest. And, Professor Rivest, thanks
for taking time to do this. Thank you. So Professor Rivest is a member
of MIT’s Computer Science and Artificial Intelligence
Lab known as CSAIL. And specifically, its
Theory of Computation group. He is a founder of CSAIL’s
Cryptography and Information Security group, and he is
acclaimed for his contributions to the RSA public-key
cryptosystem. This system secures
communications between computers using
products of large prime numbers. And as many of you know,
Rivest is the R in RSA. Among many honors,
Professor Rivest received the Turing Award
in 2002, which, as you know, is the highest
honor in the field. And in 2010, Professor Rivest
earned MIT’s James R. Killian Jr. Faculty Achievement Award. And it was this summer
that Professor Rivest was named an Institute Professor. So in preparing for
today’s discussion, I asked Chancellor
Eric Grimson to tell me what he would say to
all of you if he bumped into you at a cocktail party and
you said, oh, I’m going to have a conversation with Ron Rivest. So I want to tell you
what Eric told me. And I will embarrass Professor
Rivest by saying this. Eric said to me, that’s easy. I would tell them that Ron
is the most humble, brilliant thinker you will ever meet. Now, I recognize
the irony in saying that: all your humility is
now gone once Eric Grimson says that about you. So, look, we’re going to
jump into your questions in a moment, but I’ve got
a couple right off the bat. So let’s start with this one. Professor, if you can walk us
through briefly what RSA does and why it’s important, I
think that would be helpful. Thanks, Nate. Thanks for the kind words. And thanks to Eric, I guess,
too, for his kind words. It’s a pleasure to be here. I’ve never done one
of these before. And to all of my
friends and colleagues out in cyberspace
who are watching– I wish I could see you as
well as you can see me. So the question was about RSA. RSA was a cryptosystem that
we developed here at MIT back in the ’70s. S is for Adi Shamir, and
A is for Len Adleman. And they were members of
the mathematics department at the time. And the stimulus for the
work was a beautiful paper written by Diffie and Hellman
on public-key cryptography– the idea of public-key
cryptography, which they didn’t know how to implement. And so our proposal
was the first. And it turned out
to be an enduring proposal for how to implement
public-key cryptography. The idea being that everybody
would have a public key. So you would have
a public key, Nate. And I could encrypt email to
you using your public key. And then you could decrypt
it using your knowledge of the secret key. And you mentioned
prime numbers before. Prime numbers play a
key role in RSA. And they play a key role in
lots of cryptographic schemes. Although, in fact,
modern cryptography has been moving
away from number-theoretic bases. And so the idea is you would
publish a product of two large prime numbers. And I can use that to
encrypt a message to you. And you can use the knowledge
of the prime factors themselves to decrypt. And so the details– I can get into it if you like. But that’s the
essence of the idea. The difficulty for the
adversary is essentially that of factoring
the product that you publish into the prime numbers. And if the adversary
can’t do that, then they can’t read your email. OK, good. I think if folks out there
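The scheme Rivest describes can be sketched with textbook-sized numbers. This is a toy illustration only – the primes here are tiny and utterly insecure, and real RSA uses primes hundreds of digits long plus careful padding:

```python
# Toy RSA with tiny primes -- illustration only, never secure at this size.
p, q = 61, 53                  # the two secret primes
n = p * q                      # public modulus: 3233
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent, coprime to phi
d = pow(e, -1, phi)            # private exponent: modular inverse of e mod phi

m = 42                         # the message, encoded as a number < n
c = pow(m, e, n)               # encrypt with the public key (n, e)
assert pow(c, d, n) == m       # decrypt with the private exponent d
```

An adversary who can factor n = 3233 back into 61 × 53 can recompute d and read the message; with moduli thousands of bits long, that factoring problem is exactly the difficulty Rivest describes.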
have more specific questions about RSA, send them in. We can ask them. But I think that’s a
good overview for now. So, professor, tell
us a little bit about what we’re seeing
in the news about kind of notions from the government
about having backdoors. That’s a great question, Nate. Yeah. So there’s always
been this tension between some of the
missions that the government has and some of the
academic research. Even back to the early days of
RSA, one of the first things that happened after Diffie and
Hellman published their paper and proposed presenting it at
an IEEE conference in October of 1977, I think it was– they received a letter from
a fellow who worked at the NSA, saying you’d be in violation
of the export regulations if you publish this work. It’s certainly been clear that
the ability of private citizens and individuals
to encrypt things could interfere with both
intelligence gathering and law enforcement. And that debate has been
going on since then. I think recently it’s
bubbled up again. We thought it was sort
of over in the ’90s. The government had
proposed at that time a Clipper chip to be installed
in all computers and devices that send encrypted email. That was abandoned finally
as being unworkable. More recently the FBI has
been pushing for the ability to access devices that
have encrypted content, or access the plaintext of
encrypted communications. And that makes sense
from their viewpoint. They’ve got a mission to do. And getting access to
information could help them do what they do. However, it’s my
personal opinion and that of many
colleagues that trying to implement such a scheme
just opens Pandora’s box into a whole host
of vulnerabilities and difficulties, and that’s
not really a feasible thing. So tell us. If we were to open
Pandora’s box, what are two or three
things inside of that box that are really bad? Well, for example–
I mean, the goal here was for US agencies to have
access to encrypted content. But once you put a backdoor
in, these are corporations. Google, Facebook, and so on
are international corporations. Every other government
in the world is going to want similar access. So you’re not only
going to have the US government with a backdoor key. You’re going to have the
British with a backdoor key. You’re going to have the
Israelis with a backdoor key. You’re going to have the
Iranians with a backdoor key and eventually, have the North
Koreans with a backdoor key. And we don’t want to go there. So that’s one issue. Encryption is a fundamental
tool of technologies these days. Every communication that
goes over the internet should be encrypted. Many are. Not all are, but more should be. But you’re getting
into the innards of every device and
every communication when you start to talk
about policy like that. The complexity is mind-boggling. It just sort of
says, we take all of our digital infrastructure,
and we redesign it all to put backdoors in everywhere. And it just is infeasible from
a technical point of view. So it’s just way
too complicated. Security comes from
simplicity, and this would not be a simple thing. It would be probably the most
complicated security mechanism man has ever conceived of. Well, OK. So let me ask you
a really basic– but, I think, common
question and see if you have any
insight on this at all. And it’s going to be
kind of vague, OK? But for folks out there,
what about cloud computing is on your mind for
the average user? And I’m not talking
about governments. I’m not talking about
heavy-duty corporate security. I’m talking about my
family photos in the cloud. Is this just great
and I shouldn’t worry? Or should I be terribly
worried, or somewhere between? I think it depends
on your personal choice about how much you care
about the privacy of your pet and family photos and
so on. And I think I use some
of the cloud services. I put stuff there that
is moderately private. But it wouldn’t be a
disaster if some of that became public either. So it’s a risk
choice for everybody. There’s certainly a risk anytime
you use a computer, anytime you connect to the internet. And all of us live with
a certain amount of risk in what we do. And I think people making
individual decisions for themselves and
the family probably could be more comfortable
with a certain amount of risk than a government agency
like OPM or whatever, who have certainly had a
recent disaster with all
the personnel records they manage. So I think it’s a question
of, how valuable is the data? I think for most individuals
the primary question is not confidentiality. It’s reliability. Do you back up your data? Are your photos backed up? You don’t want to
lose all of them. Right. Interesting. That’d be a different– that’s
a much more serious problem than having them
posted somewhere. Got it. So I want to get to– before we
start getting to questions from out there, can you tell us–
because I have only a dim understanding of this– what your work has been
and is around voting? Yeah, thank you. Yeah, that’s been a
very interesting area of research for
me since 2000 when we had the presidential
election that nearly melted down the country. And I’m hoping we won’t
have that again in 2016, but we never know. There was a nice report that
came out of the Brennan Center– Verified Voting
worked on it, too– that says a lot of
the machines now are pretty old and subject
to catastrophic failure of various sorts. So I’m hoping the election will
go smoothly, but we’ll see. It’ll be very interesting
both politically and technically there. The work I do on voting is
concerned about basically that– trying to ensure that
the election outcome that is announced is, in
fact, the correct one, that the winner is really the
winner, that the voters really elected the winner, and that
you’ve got evidence to prove it. I mean, I think an
election should not just produce the outcome. It should produce evidence
that outcome is correct. So you have, for
example, paper ballots that have been
verified by the voter, saying that this is
really how I voted. And they can check it
before they deposit it. You’ve got ways of auditing the
collection of paper ballots, or the various
electronic records that get produced to do this. So there’s a variety
of tools that we now have for doing
post-election audits and doing verification of
the outcome that are really quite interesting. And voting is challenging. It’s hard because you need
to separate the identity of the voter from his vote. You don’t want to
have it posted anywhere: this is how you voted, Nate. Otherwise, you could start
selling your vote and things like this. So you need to– on the one hand,
separate the voter from his vote, which makes
it different than banking. A lot of people
say, why can’t I– why isn’t it just like banking? We can sort of solve all
this with some online system. It’s very different. And you need to make
it disconnected. On the other hand,
you need to make sure the disconnection
operation maintains the integrity of the vote. And then you have ability
to check the votes, that they’re really
what you cast, even though you can’t
prove it to anybody else how you cast them. So we use cryptography for
some of these techniques. We use statistics for others. There’s a variety
of tools that allow us to confirm what the
outcome is– the correct one. That’s really what
the voters chose. And there’s a variety of very
interesting things happening in the world of voting now. And I point out, for example,
the work in Austin, Texas, which has got a new system
design out called STAR-Vote. It’s a combination of some of
these cryptographic techniques, plus paper ballots and
statistical techniques– really quite interesting. And I wish them success
in moving forward with that project. So I would imagine
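The post-election auditing Rivest mentions can be miniaturized into a sketch. The candidate names, counts, and fixed sample size below are all made up for illustration; real risk-limiting audits derive the sample size from the reported margin and a chosen risk limit, but the shape is the same – sample paper ballots at random and compare each one to its electronic record:

```python
import random

# Hypothetical miniature of a ballot-comparison audit. Candidate names,
# vote counts, and the sample size are invented for illustration.
electronic = ["Alice"] * 540 + ["Bob"] * 460   # records as scanned
paper = list(electronic)                        # voter-verified paper trail

random.seed(1)                                  # reproducible sample
sample = random.sample(range(len(paper)), k=100)
mismatches = sum(1 for i in sample if paper[i] != electronic[i])

if mismatches == 0:
    print("sample consistent with the reported outcome")
else:
    print(f"{mismatches} discrepancies: escalate to a larger sample or a full hand count")
```

A clean sample raises confidence that the announced outcome matches the paper trail; any discrepancy triggers a larger sample or a full hand count, which is the evidence-producing property Rivest argues elections should have.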
with work on voting– so you’re looking at it from
the point of view of a computer scientist. But it has to be that within a
couple hours of any discussion, it starts to get into social
issues of what you’re really trying to do, right? Because by making
voting digital, you massively increase
participation, or the possibility
of that, right? Most of– that’s a
common hypothesis that improving technology will
improve turnout or something like that. By and large, it
seems to be false. Oh, no. Right. I mean, so there’s
little evidence that supports the idea that going
digital improves things. In fact, there was
some evidence that it goes the other way. People assume that if voting went online
or something like that, it would improve turnout. Mostly– I mean, it seems
that voters– most voters are one of three types. Either they’re
committed voters– they go and vote no matter
what the technology is. Or they’re committed
people who don’t care, and they’re never going to show
up no matter what the thing is. And there’s 1 or 2% who will
show up if the weather is sunny and they don’t have
something else to do. So the amount of benefit
you get in terms of turnout from changing the
technology is not large. And it’s not a major reason why
one should change technologies, I think, and dive into the tar
pit of complexity and internet voting and high-tech
stuff, which looks glitzy. But as we see every day, there’s
so many security problems. You don’t need to go
there with voting. Paper ballots work fine. Got it. So I’m going to plow ahead with
my questions until some of you out there– and there are
many of you out there– have a question. So I’m interested to know what
fields your PhD students go into and whether there has
been a change in this over the years. Or what’s interesting about
what you’re seeing from alumni? So I think the theory group
as a whole produces students who are good at
security, at algorithms, at analysis of efficiency. You’re looking at different
computational models, or producing models
of various situations, a lot of probability-based
models, a lot of– these days, big data models,
working with distributed computing, and so on, too. So the theory group has a
broad spectrum of interest. And our students go off into
either academia or industry, working at Google or
something like that. I’ve had students do PhD
theses on encryption techniques and voting techniques and
other security techniques. And I think many of them
end up seeking academia. It’s common. Has there been a shift over
years in what folks are really interested in doing
when they leave here, or is it just that range? I think more for
the undergraduates than for the graduate students. I think for the
graduate students the models have stayed
more stable and– although, postdocs are
becoming a little more common than they were. But graduate student academia
versus an industry research lab. I mean, the character of
the industry research labs has changed. You’ve got AT&T, Bell
Labs in the old days. And nowadays, you’ve
got Google or something like that or academia. But undergraduates see
a much richer spectrum of opportunities, I
think, than they used to. And the industry
is really booming. And many of them are
exploring startups or looking at things that were
recently startups like Google or Facebook. OK, so you are– thank you, folks. Your digital questions
are filtering in, in the analog way. It’s just as good
as paper voting. So Richard in Singapore
asks, “How secure is HTTPS or SSL in keeping
snoopers away or governments?” So HTTPS, SSL is a
complicated protocol. And like anything
that’s complicated, it has bugs that are
being found as we speak and continually being improved. So it provides a
reasonably high fence. Whether it’s absolute–
probably not, right? I mean, bugs are found
every few months it seems. But I think the right
attitude is security is a matter of building fences,
and there’s always a question as to how high the fence is. It’s not a question about
building a fence that goes up to infinity. You just make it more
difficult for the adversary. And somebody will–
with enough effort find a way around
the fence, or find a way of corrupting
the fence builder, or something like this. So it’s certainly
a trustworthy tool. I recommend it to use, but don’t
expect perfection, necessarily. You keep an eye out for patches
and updates and improvements. OK, so SC in Washington
asks the following. “So how do we secure
the data generated by the Internet of Things?” That’s a great question. And if you want to
give folks, who may not be familiar with
that term, a kind of down-and-dirty
description of what the understanding of the
Internet of Things is. So my understanding of the
Internet of Things is that you’re wiring everything
to the internet and even small gadgets
and devices– your toaster or whatever. Why you want to do this
is up to you, I suppose. But everybody seems to be
jumping on that bandwagon. Maybe it’ll be fun. Maybe it’ll be useful. But you have the issue
that the computers that you’re– the chips that you’re
attaching to these devices may be fairly underpowered and not
capable of much cryptography. That’s changing. The chips are getting better,
and we’re able to do more. But one of the issues that you
face is the technical question as to, how do you
secure it– because the cryptography
that you can do is pretty minimal
on these chips. And my personal take on this is
that every such device– and we talked about this back when we
had Project Oxygen at our lab back in the ’90s, even before a
lot of this became quite real. The idea was that the chip had a proxy
server somewhere. So each chip is pretty minimal. But it’s got a secure
channel to a server run by the owner or the
manufacturer somewhere. And so then that more
powerful computer can act on the chip’s
behalf, so that you don’t need to put all the smarts. You don’t need to put
all the cryptography. You don’t need to put everything
into the device itself into your toaster. Your toaster has a
software agent somewhere that speaks for
it on the internet and will filter requests,
or authenticate commands, and so on. So I think that’s the
right architecture for most of these things. So that would be my approach. So don’t try to put everything
into the device itself. Your chip is not going to
be able to handle what’s needed if it’s got to do some
fancy public-key crypto, maybe. But have it just have a
secure channel to a proxy. OK, here’s a big question, and
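That proxy architecture can be sketched in a few lines. Everything here is hypothetical – the key, the command format, and the function names are invented – but the point it illustrates is the one Rivest makes: the constrained device only needs a shared symmetric key and a cheap HMAC check, while the proxy does the heavy public-key work on the internet side:

```python
import hashlib
import hmac

# Hypothetical sketch: the device shares one symmetric key with its proxy.
# The proxy authenticates and filters requests from the internet, then
# forwards approved commands tagged with an HMAC the small chip can verify.
device_key = b"per-device secret provisioned at manufacture"

def proxy_forward(command: bytes) -> tuple[bytes, bytes]:
    """Proxy tags an approved command for the device."""
    tag = hmac.new(device_key, command, hashlib.sha256).digest()
    return command, tag

def device_accept(command: bytes, tag: bytes) -> bool:
    """Device recomputes the HMAC; constant-time comparison."""
    expected = hmac.new(device_key, command, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

cmd, tag = proxy_forward(b"toaster: start, darkness=3")
assert device_accept(cmd, tag)             # genuine command accepted
assert not device_accept(b"evil cmd", tag) # forged command rejected
```

A symmetric MAC like this is far cheaper for an underpowered chip than the fancy public-key cryptography Rivest says such devices can’t handle.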
it comes from James in Seattle. So I’ll be fascinated to see
how you even approach this one. So the question is that the good
guys these days in cryptography– or data
security– always seem to be behind the bad guys. So first of all, do
you agree with that? And what do you make of
kind of good guy, bad guy? But then to the degree that
you find something in that, James asks, “When
will we catch up?” And, of course, maybe
the answer is never. Yeah, a great question. So that’s one of the nice
things about cryptography. There always is this sort of
game theoretic aspect to it. And there’s the good
guys and the bad guys, and you can view it as a game. And some of the
formalizations actually take that a bit
further and really look at it more carefully from
a game theoretic viewpoint. So we’re building systems
at such a horrendous rate. I mean, every day
there’s millions of lines of code being
written across the country and around the world. And every 1,000 lines of
code you can expect there’s going to be a
critical bug or so. And so we’re introducing bugs
at a terrific rate as well. So as long as we’re doing
that, the game is not over. And, in fact, the
game keeps continuing. And the bad guys will say,
well, there’s a new system that was announced last month. And let’s start poking at it
and running fuzz tests on it and doing things. And they’ll start
finding problems with it. So I think that this game
will continue for a long time, and people will keep poking
holes in new systems. We’re getting better
at designing the tools for doing the crypto for– the cryptographic algorithms
are getting better. We’re getting better
at the software tools for developing secure codes. And these add expense to
the development process. But you’ve got an expense
if you get broken into, too. So it’s probably better
upfront to design your system with a lot of care. But I don’t expect this game
to be over for a long time. And the question
of playing catch-up is often that the good
guys are in reactive mode. They’re just saying,
oh, we got broken into. And they find out how
they were broken into. Then they
patch it. So that’s a game that’s
definitely reactive. And you’re always playing
catch-up in that mode. So this question
actually reminds me a lot of a conversation. I don’t remember
with whom I had it, but it was someone in CSAIL. And I was asking something
really similar to what James is asking, which is, how do you– what do you make of this
kind of cat and mouse game? And the person I was talking to
said something, which I guess is sort of obvious. But it kind of struck me
as being interesting– was, well, you got to
take a step back and ask, what’s really going
on in that scenario? You’ve got sort of a
target, if you will, and just a lot of really
motivated people banging on it, and banging on it,
and banging on it. So it’s not– the right
metaphor isn’t like a sporting competition where each
side is trying to do the same thing to the other side. It’s that they’re each playing
completely different games. They have different motivations. Is that part of this? Yeah, that’s
absolutely part of it. Yeah, yeah. And you have a worldwide
playground, if you will, where you can pick targets
anywhere on the planet and try to log
into their systems, or break what they’re
doing, and so on, too. So there are people
everywhere who are trying to break
into Google or Facebook or any US-based corporations,
and some of them succeed. So it’s a planetary game. There are many
adversaries out there. And we just need
to keep beefing up what we do both for
good system design as well as monitoring to see
when things get broken into. Because things will break. Things will get broken into. And you have to keep an eye
as to– on to what’s going on. And you could be– have all of your intellectual
property stolen out from under you without knowing
what’s going on, unless you have good monitoring. Right. Right. OK, so let’s take a question
from someone so close to home that he’s basically here. This is Aaron in Cambridge
who asks the following. I think I understand
the question, but I think you certainly
will, professor. So the question is,
“What techniques are being used to combat against
increasing computing power that can more easily crack
traditional security?” Great question. And can you reframe it? Yeah, yeah. So most cryptographic
techniques are based on computational difficulty. So the adversary who’s
trying to break in needs to search a key space,
or solve some hard problem to figure out how
to read the traffic, or to break in to
authenticate themselves. And most of these
schemes have what’s called a security parameter,
which may be a key length. And it’s 128 bits
or 256 bits, which sets the difficulty
level for the adversary– that and the design of
the algorithm itself. So, typically, you
have a situation where you can control the
difficulty for the adversary, assuming that your
assumptions are correct, and your algorithm doesn’t
have any unforeseen backdoors or cracks in it, merely by
increasing the size of things. Now, take RSA, for example. When we first published RSA back
in the ’70s, we thought that maybe a 300-bit key
would be enough– the product of two 150-bit primes. These days the NSA
is recommending keys that are 10 times
that as a protection against possible advances
in quantum computation, which may be on the
horizon, so 3000-bit keys. And so by just increasing
the size of the keys used, you can provide very
substantial protection against these kinds
of attacks that come with additional
computing power. Computing power is sort of
linear in how much you spend. The difficulty for
the adversary is exponential in the size of
these key lengths, typically. So you can easily
scale things up, so that you can outrun his
advances in computing power. So the challenge for these
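That linear-versus-exponential asymmetry is easy to make concrete with a back-of-the-envelope sketch (brute-force key search is the simplest attack model, not the only one):

```python
# Each bit added to a key doubles a brute-force search, so the defender's
# cost grows roughly linearly in key length while the attacker's grows
# exponentially. (Brute force is the simplest attack model, not the only one.)
def worst_case_keys(bits: int) -> int:
    """Number of keys an attacker must try in the worst case."""
    return 2 ** bits

# One extra bit doubles the attacker's work.
assert worst_case_keys(129) == 2 * worst_case_keys(128)

# Doubling the key length from 128 to 256 bits does not double the attack:
# it multiplies the search space by 2**128.
assert worst_case_keys(256) == worst_case_keys(128) * 2 ** 128
```

This is why the defender can outrun predictable hardware advances simply by scaling up the security parameter; what breaks that picture is new algorithms, not faster machines.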
cryptographic designs is not so much that
you need to defeat predictable advances
in hardware capability, or scaling up of the
hardware, but, rather, new algorithmic techniques,
maybe with the exception of quantum computing,
which is not only a new kind of computing, but
enables new kinds of algorithms as well. And should quantum computing
really come into reality as a feasible engineering
possibility, things like RSA would become unusable. Because there are good
algorithms on quantum computers for breaking RSA, due
to people like Peter Shor, who was here at MIT. So I’m going to return to the
questions that are coming in, but I want to ask one. And this is– I think I can promise this
will be the kind of broadest, vaguest question that
you get today, OK? And it’s a question I
have for a lot of faculty at the institute. It goes like this. A lot of work in general
is just reactive. The day comes upon you. And you’re just trying to
kind of keep your day moving. But there come days– and maybe there are few
and far between where you have the kind
of mental equivalent of a blank sheet of
paper and some time and a good cup of
coffee, where you can kind of answer for yourself
the following question, which is– this is you talking
to yourself, right? It’s now, look, the main
question in cryptography right now, going forward–
the thing I’m really thinking about most generally
speaking is along this avenue. Is there something that broad
that keeps your attention? I think that looking at security
frameworks of importance is what drives the field a lot. People in cryptography
often say, what are the applications out
there that people care about? Because there’s lots and lots
of mathematical questions you could look at,
too, in investigation of old, historical
schemes or variants. Sometimes those are interesting
and pay off well, too. But a lot of the cryptography
papers that you’ll see are motivated. They have a storyline with them. They say, Alice has
a medical database. She wants to store that
medical database on the cloud. She wants to do searches
on that database. She doesn’t trust
a cloud provider. She wants– can you
solve problems like that? And so the application
scenarios are what drive the
formulation of a lot of the cryptographic problems. And it’s given rise to lots of
wonderful recent developments. There’s lots of work now by
Vinod Vaikuntanathan and Shafi Goldwasser and many others on
computing on encrypted data. So you’re able to give
somebody encrypted data. You’re able to ask
them to compute on it, which is kind
of a strange thing without decrypting. So they can take encrypted data
and produce encrypted answers without ever knowing
what they’re doing. And this turns out to be just
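A glimpse of computing on encrypted data is visible even in textbook RSA, which happens to be multiplicatively homomorphic: multiplying two ciphertexts yields a ciphertext of the product. This is a toy sketch with tiny, insecure parameters; the fully homomorphic schemes in the line of work mentioned here support arbitrary computation, not just multiplication:

```python
# Textbook RSA is multiplicatively homomorphic: Enc(a) * Enc(b) = Enc(a*b) mod n.
# Toy parameters only -- real deployments use large primes and padding,
# which deliberately destroys this property.
p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))

def enc(m: int) -> int: return pow(m, e, n)
def dec(c: int) -> int: return pow(c, d, n)

a, b = 6, 7
product_ct = (enc(a) * enc(b)) % n   # server multiplies ciphertexts, never decrypts
assert dec(product_ct) == a * b      # key holder recovers 6 * 7 = 42
```

The server computed an encrypted answer without ever seeing a plaintext, which is the shape of the cloud scenario described above.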
the right thing for these kinds of cloud computing scenarios. So people look at applications. They say, what am
I trying to do? What can’t I do? Why can’t I get this done? And you find new
formulations of problems happening by the applications. And so that’s– sometimes there
are more abstract questions about computability and
computing with different models. I mean, we have, for example,
the interesting question as to, what can you really
do with a quantum computer? Should you be able to build one? And people like
Scott Aaronson are looking at the computational
complexity questions arising from quantum computation. That sort of a more
abstract formulation doesn’t have a particular
application in mind. But many of the
crypto things really do arise from applications. Yeah, got it. OK, great. All right. So we have a question. I think this is a
really interesting one from Bill in Maryland. So Bill asks, “In
your opinion, are the most secretive
American initiatives absolutely secure from
adversarial spying?” I don’t have any
security clearances. So if you’re asking me
what the government really is capable of protecting well
and so on, too, all I know is what I read in
the newspapers. But what you can read
in the newspapers is telling, too, right? The State Department
gets broken into. The head of the NSA
says, we assume routinely that our networks
are compromised and things like this. So I do presume that with
a lot of effort and care that our most carefully
guarded secrets are in fact kept secure. But there’s lots of places
where things fall apart, too. Yeah, got it. A question about whether
you can talk about what’s called multi-tenant
cloud security. So, I mean, the
issue there is you’ve got several tenants running
on the same CPU, even. And they may be adversarial. And you may have leakage of
information between one process and another that
are running there. So you may be able
to, for example, extract a cryptographic
key from process A by process B, who is able
to measure, say, the time that a certain
cache fetch takes, and so on, or other technical
sensors that they have available on the chip itself. And those are real concerns. Those are difficult
to protect against. If you really have very
high-value data that you want to protect, you want
to know that your process is running on a dedicated CPU, and
there’s nobody else sharing it that might be adversarial with
you, or just bring it in-house. Don’t run on the cloud at all. OK, got it. How about a Bitcoin question? Is that all right? Yeah. And I’ve been actually spending
time on digital currency stuff these days. I have in the past, too. OK, good. So John in New York
asks the following. John says, “Wouldn’t the
adoption of blockchains for global finance create
a real risk of a future of financial scandal?” And if you can
kind of break apart that question and help folks
understand what John is asking. I don’t know what
he has in mind. Bitcoin is a very
interesting development. It has as a key feature
this blockchain idea, which basically is a
decentralized public ledger. So anybody can
publish transactions if they have Bitcoins. They can publish their Bitcoin
transactions onto this ledger. And it’s maintained
by a collection of miners working
around the world to extend this blockchain. So it’s a public ledger. Having a public ledger– I don’t see that as being
something which is that risky. I mean, if the crypto breaks,
you have a problem, I suppose. But I’m not terribly
worried about that. It’s a consensus
protocol, so there’s time for things to settle down. So I’m not quite sure
what he’s referring to. There’s lots of
fascinating questions about digital currencies. The blockchains– you
get into the gaming issues about– if you
mine a block, should you release it right away? There’s some nice work
by some fellas at Cornell that talk about those
kinds of game aspects. I think just the whole
field is bubbling up with interesting ideas
and possibilities there. So I expect we’ll see
lots more in that arena, and there’ll be scandals. I think most of
the scandals that happen have to do
with mismanagement of the cryptographic keys that
are used to either protect their own Bitcoins, or protect
those of their customers, or just straight-out theft. So there’s a trust
issue involved there, even though the
decentralized character of the Bitcoin blockchain
says, you don’t necessarily need to trust any third parties
to maintain these records. So I’m not sure what
John is concerned about in terms of scandals. There could be. There are always going to be
scandals once you got people managing other people’s
money, I suppose, and that’s a lot of what we see. Yeah. It’s not going to go away? Who knows? We may live in a
moneyless society someday. So we’ve talked a
bunch about security. And so this next question
is about privacy, so I want to ask it. It’s a specific question,
but I would also invite you to talk about
privacy as opposed to security if any thoughts occur to you. So the question from
Sonora Needham is this. “How do you think Europe’s
battles with Facebook over privacy are
going to pan out?” So first, let me– so the context. When you talk about
security, I think the first thing to
talk about is what your security objectives are. And it could be
confidentiality or it could be integrity of various sorts. And privacy could certainly be
one of your security goals. Obviously, privacy is a piece
of a larger security framework. The European concern
with privacy– what’s been happening
with Google and Microsoft and Facebook recently is– it’s fascinating to
watch. I can’t predict how it’s going to come out. I really don’t know. I think that there’s a
definite back and forth. Negotiation is going
to have to take place. The US has been very lax
about allowing companies to push data around–
personal data in a way that the Europeans are
increasingly concerned about, particularly given the
Snowden revelations. So I don’t know what to predict. I think that there’s a lot
of interesting discussions to have there. I think one of the
organizations whose advisory board I serve on
is the Electronic Privacy Information Center. They’ve got a great
website, which talks about a lot of
these kinds of issues and how the privacy
landscape keeps evolving. And I would recommend people
interested in that to go look there for their information. Got it. Now, this is– I’m really glad this question
came in from Los Angeles. I’m not seeing a name
here, but– great. So the person
writing in asks you to talk about your own
intellectual property, what you do with it. And if you could sort
of answer the question, why should young innovators
trust the public domain over filing patents? How do you think that through? And what’s been your
history with that? I think it depends. So RSA was patented by
MIT, and then the patent expired quite a while ago now. And that was helpful in getting
RSA the company up and going. So that’s one data point
I have in terms of that. I think these days
things are moving fast. Keeping things as
trade secrets and just moving quickly to get
stuff on the market is a way to go if you’re
developing software. It depends on the technology. If you’re trying to do something
which has network effects, digital currency is one
of those good examples. People aren’t going
to trust typically closed-source solutions. So you’ll need to have
various software that everybody’s going to be using on
their own computers to do something for them. They may very much prefer
an open-source situation where they can look
at the code, at least, and install it themselves. I would say it
should be free, too– that you’re not
having to pay somebody for the software. But I think that openness is
much more the thing today. It’s hard to tell. I mean, you can
see it both ways. Facebook has essentially
a closed-source, not an open-source, product. It’s a tremendous success. They keep working on it. On the other hand, things like
Bitcoin, Linux, and so on– [INAUDIBLE] they have
succeeded with open source. So we see it a couple of ways. If you’re an entrepreneur
with an idea that you want to
push out there, you could try to encapsulate it
in some intellectual property. Maybe your investors would
demand that you do so. Maybe they would demand
that you do the opposite. I think it really depends
on the kind of market you’re trying to address. OK, so this– we have a question
from Rene in Austin, Texas. And it’s a big one and a
good one. So here it goes. “At what point do
existing approaches to defending private property
become prohibitively costly? Does the internet
ever get to a point where it is too costly
to do business safely?” So I don’t know what kind of
cost you’re talking about. There’s cost in defending
yourself against– so I’m not sure what
intellectual property you mean, if you’re talking about
private data sets, or if you’ve got IP
that’s in a research lab that you’re trying to protect
against Chinese hackers getting in, or something like this. Well, let’s take the case of– let’s take a possible example
of this line of questioning, right? You’re a retailer, and you’ve
got millions and millions of customer records. And, oh, boy, isn’t that
a big responsibility? And how do you deal with that? So [INAUDIBLE]
that’s not my forte. I don’t deal with
retail, I think, so much. But I think that there
are best practices there. I mean, the banks are the
ones who have spent the most. I mean, I think about
15% of their IT budget is spent on security. So if you take that
as a benchmark, a typical retailer
would probably expect to be spending a
little less than that– 5 or 10% on security throughout. So if those numbers were
getting up to 30, 40%, you’d probably say it was
too much or something. But most– a lot
of business– they have to do their
business online. Many of them outsource
their sales effort to companies like Amazon that
can put the staff together to maintain a
highly secure site. So I think many of
the mom and pop stores are just terrible at security,
and it’s too much for them. OK, so I’m going to go back
to a couple of my questions while we get a few more. So this is going
back to the backdoor, but it’s going to quote some
remarks you made about that. So at this spring’s
RSA Conference and in the report
published this summer, you warned against allowing
the government a regular access route into Americans’
data and personal devices. You said that it
would create, quote, “A house with many,
many doors with keys held by many, many
parties and that it would stifle innovation.” And I think we covered the kind
of first part of that question. But the stifling of innovation– I wonder if you could say
a couple words about that. Yeah. I think that you’re
talking about a situation prospectively where you’ve
established a legal requirement that companies doing any
sort of communication, or storing any kind
of digital data would have to get their data
protection techniques approved or certified by the government
before they could sell them. Because you’d have to say, well,
now, can the FBI get into this, or can the FBI get into that? So you invent a new
menu app for looking at the menus of your
local restaurants. And all of a sudden, you’re
having encrypted communication for that, perhaps. You’re storing a
password for a user who wants to log in frequently. So now– but now, the software
needs to be certified somehow or approved. What’s the approval
process going to be? There are three million
apps on the Apple iPhone and the Android app stores– something like that.
would prospectively have to be approved somehow
by some government agent– Right, right. Got it. –who would have to
say, yeah, this is OK. It meets our backdoor
access for law enforcement. That’s just unworkable
in my viewpoint. Yeah, got it. OK, so a couple of folks have
asked about the RC4 cipher. So tell us about
what that is and what has been changed about it. That’s been an
interesting story, too. RC4– there’s a whole series of
RC things that I’ve worked on– Ron’s Code– one
of which, I was telling them, never got published. So RC4 was one I did in 1987. It’s a stream cipher. It was proprietary for a while,
part of that product line that RSA had as a company
and eventually became public. It was ARC4 for a while– originally RC4. But it’s about
six lines of code. It’s very simple and quite fast. And it’s been used widely
in lots of products now. There’s no patent on it. So once it was out there, it
was analyzed by other people and seemed pretty robust
and usable for a long time. Recently, however,
people have continued to look at the security of it. It’s had known statistical
irregularities for a while. And those have become
better studied. And attacks on RC4 have
gotten more powerful. Partly because we know
of the statistics better. Partly because the computing
power of the adversaries is getting better. Partly because the kinds
of protocols RC4 was used in allow
repeated attacks quickly over a period of time. So it’s an interesting cipher. I would now say it’s
time to retire RC4. I agree with those
who are saying it’s showing enough weakness now. As they say, cryptosystems
never get stronger. They only get weaker. So what’s next? Well, I had a proposal
with Jacob Schuldt, who was one of the attackers on
RC4, for a redesign of RC4, taking account
of what we did. The new design is called Spritz. It’s on my website if you want
to look at the details there. It’s like RC4. Basically, we ask the questions. If we had undertaken the
RC4 type design today with what we know today
about the statistical issues and so on, what
would it be like? And so that’s what we designed. It’s a bit slower. It’s much more secure,
and it’s out there for people to look at now. There’s many, many good crypto
algorithms out there, too. So it’s in a landscape where
there’s lots of good standards. In particular, the
US government has been doing a very
interesting job of pushing– having contests
for encryption standards and adopting algorithms,
which may not even be designed by Americans
as national standards. It’s been an interesting
development to watch. So can we shift? We mentioned quantum
computing earlier. And I want to return to
it if that’s all right. OK, so Bill from Maryland
asks how important it is to be the first nation
to utilize quantum computing. So there’s quantum
computing for the purpose
of general computation itself. And for that I think
the question is still open as to what
kind of problems you can solve quickly on
a quantum computer. What kinds of things
can we really do faster with a quantum computer
than a classical computer? Ironically, it seems
that really the best thing that a quantum computer
could be put to use on is breaking RSA. And once you’ve got
a quantum computer, nobody is going to
be using RSA anymore. So what’s the point? So we have that
kind of situation. But it’s a fascinating
theoretical question as to what quantum
computation is good for. I wouldn’t be
surprised if 40 years from now quantum
computation is quite feasible. I think it’s a long process. It’s going to take a long time
to figure out the engineering aspects of that. It’s a challenge like that
of fusion power or something like that. So we have two minutes
left, professor. So I’m going to ask
an old journalist question, which is, what did
you wish I was going to ask you? I had no idea what you
were going to ask me. I think you’ve asked a
lot of great questions here, talking about– we talked about
encryption policy. We talked about RC4 and RSA. We’ve talked about voting. So there’s a lot of things
that are dear to my heart. And I think those
are great questions. Things that you didn’t ask– I don’t know. Those were great questions. All right. Thanks to our alumni. OK, so we’ll stop there. So to all of you out there,
on behalf of the MIT Alumni Association, I want to
thank you for being with us. And, professor, I want to thank
you for your insights today. Thank you, Nate. And so to those
of you out there, we encourage you to continue
discussing these topics on the blog, Slice of MIT and
on Twitter using the hashtag #MITFaculty. You can also view an archive
of this and past Faculty Forums Online by visiting the
learn section on the Alumni Association website. So please join us next month for
another Faculty Forum Online. Goodbye. Bye-bye. Thank you. Thanks again for joining us. For more information on
future MIT Alumni Association productions, please
visit our website.
