99 thoughts on “John Searle: ‘Consciousness in Artificial Intelligence’ | Talks at Google”

  1. I like this guy! Straightforward. His perspective will influence my current research on understanding how the brain works.

  2. Watching this made me aware of two crucial questions that are related:

    What constitutes "semantics", and what is the "causal power" that needs to be duplicated?

    I understand semantics as the meaning of some symbol, where meaning comes from a "meaning function" associated with the symbol. The meaning function is some computational capability that can create various representations of the meaning suitable for recognition. For example, the symbol "river" is associated with a function capable of creating a variety of visual images of a river; it enables us to classify something as a river when we see one, and it might trigger actions like not stepping into it and keeping some distance from the waterline. This function has been learned and enhanced through perception, so semantics is just a program.

    Having the same "causal power" would require that

    a) the program can perceive and manipulate the (external) physical world and

    b) the program can perceive and manipulate its (internal) programming, at least to some degree.

    I also believe that there has to be "hardwired" functionality that resembles a human emotional subsystem, where the meaning of things like pain, fear, and joy is just built in at a basic level that does not have to be learned, but facilitates learning. Some basic hardwired attention mechanisms would also be needed.

    I think that could be sufficient to satisfy the criterion of "same causal power" he wanted; it would mean, though, that it would have a lot of resemblance to a human being, and it could suffer, which raises the ethical question of whether we can take responsibility for bringing artificial suffering into the world.
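Comment 2's "meaning function" idea — a symbol bundled with learned recognition and action capabilities — can be rendered as a toy program. This is a minimal sketch of the commenter's model, not any established system; the feature names, recognizer rule, and actions are all invented for illustration.

```python
# Toy sketch of a "meaning function": a symbol is paired with learned
# capabilities -- a recognizer over percepts and a set of triggered actions.
# All names and rules here are illustrative assumptions, not a real model.

def make_meaning(recognize, actions):
    """Bundle a recognizer and its associated actions into a 'meaning'."""
    return {"recognize": recognize, "actions": actions}

# A crude 'river' recognizer over feature dicts (stand-ins for percepts).
river = make_meaning(
    recognize=lambda scene: scene.get("water") and scene.get("flowing"),
    actions=["keep distance from waterline", "do not step in blindly"],
)

def interpret(symbol_meanings, symbol, scene):
    """Return the actions a symbol's meaning suggests for a perceived scene."""
    meaning = symbol_meanings[symbol]
    if meaning["recognize"](scene):
        return meaning["actions"]
    return []

print(interpret({"river": river}, "river", {"water": True, "flowing": True}))
# -> ['keep distance from waterline', 'do not step in blindly']
```

Whether such a lookup-and-trigger structure amounts to semantics, or is exactly the "syntax only" Searle objects to, is of course the open question the comment raises.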

  3. This is all very nice, but right now inputs to computers are very limited, so they don't have the amounts of data a human brain has to compute their answers. In addition, it's still early for a computer program to exist that can manage all that data, not to mention the machinery capable of moving the data in such a way that the correct data would be considered, leading to a proper answer to a problem. Heck, the computer can't even choose the problem it must consider on its own. An external agency must tell it what to work on, and though a computer can do many functions in a short space of time, I don't think it can relate those functions to each other and so create something truly new. So computers are just machines that resolve problems according to their programming but are not capable of conscious thought. One day we might design a computer that seems intelligent and put it in charge of our national defense system, and that computer might have a glitch that will cause it to blow the world up. It will not stop itself, because it's not intelligent, because it will have no consciousness. How's that for pessimistic thinking?

  4. Consciousness is a human construct. If a machine produces equivalent behaviour to a human, I am satisfied that it's conscious for all practical purposes.

  5. Hmm, still thinking about this. Isn't the essence of what he is saying, "computers can't actually think because they aren't conscious," the same as "planes can't actually fly, because they don't experience their flight, unlike a bird that does"? Does it matter? The plane is actually in the sky. The computer produces actual results…

  6. I don't understand why he is using arguments which were designed against human intelligence. For example: the Chinese room story says that if you take rules as an input as well and continuously change the rules inside the room, you cannot distinguish a person who understands Chinese from a person who merely follows the rules. What makes humans so sure that we are not following rules but actually understand Chinese? What makes us so sure that humans are more intelligent than animals? What makes us so sure that we are objectively intelligent? In any case, the existence of mountains seems to be debated in philosophy as well. There also exists a "zero world" quantum mechanical theory which states that everything is a simulation. While I haven't studied philosophy, the view he has on this topic leaves much to doubt.

  7. 16:00 "Because that would require consciousness in the room [and] there is no such consciousness"
    OK, so the system cannot have consciousness because it has no… consciousness? Well, that is some interesting reasoning.
    I'm out!

  8. There's one very problematic aspect of this whole brain-simulation discussion. Can any simulation ever be conscious? Imagine we had a huge paper tape where we can write down the whole state of a whole brain, neuron by neuron. Then I feed some input to some machine that changes the values on the paper tape to perfectly predict the state of some biological brain at the next time step. If the biological brain is having some experience and the simulated paper-tape brain is predicting the exact states of its biological counterpart, can we say the entity described on the paper tape is also experiencing something? How is a computer simulation of the brain different from the same simulation carried out on paper?

  9. I don't believe that consciousness, as it is currently understood, can be attained artificially. I don't believe that man can create consciousness merely by imitating it, but I do think that for the most part people can be fooled into thinking it can be done. Nonetheless, any word can be altered simply by altering the meaning of the word. This is already happening as the lords of lexicon continue to modify vocabulary. Behold, social engineering at its finest!

  10. John Searle is a moron. This is the type of crap that passed for intellectual work in the 80s, even though Turing had written on this decades earlier. The reason? He's a bourgeois philosopher, opposing "dialectical materialism". What a waste of a mind.

  11. I love how the old school evolutionists (people who base consciousness on biology) are at this point just way too human and alive for the cyber-evolutionist taste….that's just brilliantly funny!

  12. You should do a video discussing your opinion of what Judith Butler has done with Performativity Theory. You seem more conservative than her. Your writing styles are obviously different (She didn't mean for her writing to be so difficult in Gender Trouble, but it came off that way). I liked your Foucault story, you might have great stories about Butler.

  13. This is why there are fewer and fewer philosophy majors in college and even fewer professional philosophers. Philosophers of yesteryear are embodied by Searle and simply HIDE behind ambiguous terms. Searle gave a poor answer to the intelligent design question, to the understanding-English question, and to the decompiling question. He just says 'words' repeatedly, and accomplishes nothing. He has convinced no one in the room of anything. The questions are simple to ask and explain. His answers? Sad, boring jokes.

  14. I thought the "Room understands Chinese" reply was pretty good, and idk why he glossed over it. Does anyone mind explaining? The room has all the properties of a human. It has an observer. It has information storage. It has information processing. On what basis does he then go on to say it has no semantic understanding, when those properties are what define semantic understanding in humans? Similar to the guy in the room, I wouldn't understand any language if my memory of it were taken away, so I don't see why he assumes human consciousness is capable of computation all by itself either.

  15. A joy to see Searle laying waste to a roomful of prime nerds. The idea of semantics is totally beyond them.

    But because his work can't actually include an understanding of its basic entity "Observer", except as borrowed somehow from experimental science, what results is really a form of obfuscation, even for these nerdy types, to whom such a label should be familiar.
    But the most lucid and honest statement is in answer to a question about "how the brain thinks" etc, which is: "We have no idea".
    However, Searle doesn't go the next step, which is, "and we never can have any idea".

    Then: Finally, some sense, consistency: to the question What is consciousness?: "something it FEELS LIKE to be in that conscious state".
    Unfortunately, the real implications of this statement are not explored by Searle, and so probably not seen as important to his understanding of consciousness. The reason is that he, just like the nerds, sees computation as finally just too seductive to resist, and so allows it to take more significance for the nature of things than it actually deserves. Even he admits something similar when he says that the issue when trying to create AI is not complexity, since consciousness may indeed be the manifestation of a cause which is simple.

    Actually, that brings up the main point of all of this, which is that consciousness is from ancient times identified as or in the context of a kind of cause. But no AI can be its own cause. Of course, the question here is, what is the nature of this cause? And of course, this certainly has nothing to do with a mechanical cause (all the nerds get lost instantly at that point).

    The cause that is behind reason itself is freedom (and that is not simply "freedom of will").

  16. 40:10 Is that a fuckin' question or an essay?! Fuckin' troll! Then he proceeds to ask the famous flat-earth "all truth is subjective" question: "How do we know we are conscious?"
    I'm not as patient as Searle, so I answer your question with a question: how do you know that you're asking your question?!

  17. 6:20 What a deformed mish-mash of a distinction "observer-relative/observer-independent" is. As defined by Searle, observer-relative covers the whole field, not only of what he has already lumped under the epistemologically (and also ontologically) subjective, but now all of human culture as well. Everything else is therefore "observer-independent". Why not just stick with the original objective/subjective divide?
    Because that would leave him with nothing to say; he would have to actually look at the basic philosophical (and by extension scientific and ethical) problem: how to understand humanity and its world? AI is simply the most important part of that topic in our current time.

  18. Some of the public's questions are good and Searle responds in a smart way; sometimes they are good and he completely misrepresents the point, or only answers the details that led up to the question instead of the question itself; sometimes the questions are dumb and seem to come from people who trained themselves to see problems from one perspective and cannot even grasp another. But all is fine with the flaws of some questions and some of Searle's answers: no one is perfect, after all, and I think this exchange between disciplines was more productive than not.
    I commented all that to praise Google for the initiative, but also to point at the one question that was truly and completely out of place. The guy who compared Searle's points to intelligent design is an absolute moron, or a useful idiot who bought into an ideological narrative and is unconsciously (heh) repeating it. There is absolutely nothing that approximates Searle's position to intelligent design; he never once referred to why consciousness originated, or to the driving force behind genetic mutations, which is what would lead someone to the field discussing intelligent design. I can only suppose that the person asking the question thought in a binary: "he disagreed with my understanding of science, therefore he is on the opposite side, with the science-denying anti-scientists, therefore he is defending a form of intelligent design." Seriously, does anyone else have any clue where the hell he might have taken that comparison from? How does someone like that work at Google?

  19. I have to think about a bunch of radioactive atoms. They surely will decay, but it's totally impossible to know which atom will go next. It's not the end of science, but you just have to give up there. And I think there are other such big unknowns in nature that are, even in theory, impossible to know. So perhaps at some point (though we are not there yet) we will need to stop asking how the brain produces consciousness, because it too may be something for which no "scientific" answers are available.
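Comment 19's decay example can be made concrete with a small Monte Carlo sketch: no single atom's fate is predictable, yet the aggregate follows a lawful curve. The per-step decay probability and population size here are arbitrary illustrative values, not physical constants.

```python
# Monte Carlo sketch: which atom decays next is unpredictable, but the
# population as a whole roughly halves each step (with p_decay = 0.5),
# illustrating lawful statistics on top of individually unknowable events.
import random

def simulate_decay(n_atoms, p_decay, steps, seed=0):
    """Return surviving-atom counts per step; individual fates are random."""
    rng = random.Random(seed)
    alive = n_atoms
    counts = [alive]
    for _ in range(steps):
        # Each surviving atom independently decays with probability p_decay.
        alive = sum(1 for _ in range(alive) if rng.random() > p_decay)
        counts.append(alive)
    return counts

print(simulate_decay(n_atoms=10000, p_decay=0.5, steps=5))
```

The exact sequence depends on the seed, but the count reliably drops by about half per step, which is the commenter's point: statistical law without per-atom predictability.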

  20. He brings up the test where you tell the story of the man who goes into a restaurant and orders a steak; it arrives burnt to a crisp, so he leaves without paying. Then you ask: did he eat the steak? We always debate how we know that the machine doesn't understand it. So let's ask: how do you know it? Would a young child know the answer? It's all about the sample size, or knowledge database, but I'd like to call this WISDOM. A computer is very fast when it comes to processing new things, and it doesn't forget, but if its sample size is just "man goes in, orders steak, steak is burned, man leaves, man doesn't pay," then it has insufficient data to give an answer. It wouldn't even know why he didn't pay, because it doesn't know the normal process of things, the habits. Nor does a young child who's never been to a restaurant before. But we learn it. And we have a tremendous collection of data in our brains, which we can access very quickly. So we have the extra added info: humans generally don't eat burnt steaks; they expect good quality in restaurants; people sometimes decide not to pay for products they don't like. There's still a possibility that he did eat the steak (nothing is ever certain), but based on statistics it's unlikely, so there's an x% chance (where x > 50%) that "no" is the right answer.
    In the Chinese room experiment he dismisses the idea that he was just a CPU and the room knew Chinese, because there is no other consciousness in the room. I see this differently. He doesn't understand Chinese because he just receives instructions about what to reply to what. If the instructions included all the info about each word, and he had the time to memorize it, then he would have understood Chinese for real. The room represents WISDOM, and he is the intelligence. When he learns a language, it becomes part of his WISDOM. But why is he conscious and not the room with a machine CPU?
    The answer is motive. Searle explains that consciousness just happens when a certain amount of processing power, a.k.a. intelligence, meets WISDOM. So let's say we create a script that reads all formats of data, indexes it, and can find any piece of data, say on the internet, as fast as you can remember your birthday. Okay, what then? Would it do anything? No; it would require external input to do something with the data. The reason you do things is because you have a WILL.
    And so we arrive at the question of the meaning of life. If you want to create a conscious being, you need to be able to write a script that will govern its every action. The bad news is, this means that you too have a script, meaning it can be written down and explained. There is a reason behind everything you do. That means there's no free will, and freedom is an illusion. It's not that bad, though!
    https://www.youtube.com/watch?v=o0GN4urbA_c
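Comment 20's "x% where x > 50%" answer to the steak question amounts to estimating a conditional probability from prior cases. A toy sketch of that idea, with a fabricated mini-dataset (the cases and counts are invented purely for illustration):

```python
# Toy sketch of "wisdom as sample size": estimate whether the man ate
# the steak from prior (burnt, ate) observations.  The case data below
# is fabricated for illustration only.

CASES = [  # (steak_was_burnt, customer_ate_it)
    (True, False), (True, False), (True, False), (True, True),
    (False, True), (False, True), (False, True), (False, False),
]

def p_ate_given_burnt(cases):
    """Fraction of burnt-steak cases in which the customer ate anyway."""
    burnt_outcomes = [ate for was_burnt, ate in cases if was_burnt]
    return sum(burnt_outcomes) / len(burnt_outcomes)

p = p_ate_given_burnt(CASES)
best_guess = "no" if p < 0.5 else "yes"
print(f"P(ate | burnt) = {p:.2f} -> best guess: {best_guess}")
# With this sample, P = 0.25, so "no" is the likely answer -- never certain.
```

With only the single story as its sample, such an estimator has nothing to condition on, which is exactly the commenter's point about insufficient data.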

  21. But what about size? Is the moon big or small; what are the true or false facts here? Think about it: if I say the moon is small while I'm comparing it to the sun in my mind, then that is true. But if I say the moon is big while I'm comparing it to my house in my mind, then that is true as well.

  22. Suck it, programmers. You will never create a real conscious being. All your AI and deep learning algorithms will only feed back what you feed them. And if the algorithm doesn't feed back the response you wanted, you can only qualify that as an error that needs to be fixed.

  23. Flintstones, flip phone, Commodore 64: he is older than that.
    The evolution of technology and its implications for an overpopulated planet…

  24. External things follow internal principles. The scientific method as a principle (or set of principles) is an external (formal, syntactic) representation of our internal principle of minimizing free energy (a.k.a. living, and all the fancy stuff like perceiving or being conscious). Any system generating a Markov blanket behaves >>as if<< it has a model of the world; this is the definition of being a system generating a Markov blanket, a.k.a. being alive (or being part of evolution). Everything evolving from us is evolution at play. Technology follows the formal representation of our internal principle (it evolves by us applying science); science evolves by us applying Bayesian inference (or at least appearing as if we apply it). Machines improving machines by evolutionary principles, while being a simulation of evolution, are evolution itself (simulation and ontological objectivity are not exclusive!). You should not view it as "us building something"; it is something evolving. An anthropocentric view on things in an ontology-based epistemic explanation is a kind of paradox! Maybe we as engineers are brute-forcing against a problem, but the system uses an intelligent algorithm. A man does not bring man to the moon; it takes several hundreds of thousands of "person-bytes". Superorganisms like NASA, plus all the academic and industrial infrastructure around it, over many generations, brought man to the moon. This is the important part most academics miss 🙂

  25. The room can get consciousness from the book and symbols. Isn't that obvious? It is as if you had a Chinese speaker in the room who would give you a paper with the answer, and you used the same argument: since you don't understand a thing, there is no consciousness involved. Your role in the room is purely technical.

  26. I also have a question for Searle: let's say someone builds a really fancy room with all the bells and whistles, and from the outside it looks like a 100% normal human being. We call this room John Searle, and it happens to be busy thinking about consciousness all the time, giving talks and writing books, all of this while being powered by these stupid rule books guided by syntax and so on. It seems to him that he is conscious; he is convinced he is conscious, while these stupid things like mountains and banknotes are not, of course, because they can't talk and write books about consciousness. How does the room know for sure whether it is conscious or not?

  27. 58:27 sums everything up for me. He doesn't take the questioners seriously and doesn't provide an actual rebuttal. "How do you know this comb won't become conscious?" What a condescending answer. This guy is certainly in the minority of people in this field. Bad talk.

  28. The problem with Searle's argument is that you can say exactly the same thing about neurons in the brain. Each neuron has no understanding of English and yet I can understand and answer questions put to me in English. The problem is how these individual units which are clearly not conscious produce a combined system which is conscious.

  29. I don't understand why "The system understands Chinese" is a bad argument to him. Does a specific neuron in a native speaker understand Chinese? No, the system does. What if you have a big room with one person for each neuron present in a Chinese-speaking person's brain? That system clearly understands Chinese. None of the persons have to. I have yet to hear a proper refutation of this argument other than it being 'dumb'.

  30. "brains are machines that produce consciousness" "computers are not conscious". At no point does he offer any (non circular) logical argument for either of these claims, nor does he provide any evidence. Am I just missing something here?

  31. If we can objectively observe an object such as a mountain and agree that it exists due to a shared understanding of its attributes then why are we limited to seeing consciousness as ontologically subjective? After all – we know that other beings are conscious on the same basis of an observed and shared understanding of the phenomenon. Knowledge can only be verified by multiple points of consciousness – which means that all epistemology requires an objective consciousness. Nevertheless, I agree with the claim that an epistemically objective science of consciousness is both possible and necessary. One such effort is outlined here: https://link.medium.com/4bylHERy8V

  32. You can never build a creative conscious computer. All consciousness you ascribe to a computer will be a reflection of the programmer. No more. A computer will never be more creative than the creativity of the programmer that has built the computer. A computer can be built that can fool us into thinking that it has consciousness but the programmer will always know it's a trick. Computers can only process things at one level not two levels like we can (possibility and actuality). Only conscious beings have that capacity.

  33. Ah, I have seen a horse on TV counting to 10. Remember Roy Rogers and his wonder horse, Trigger? Does that not make the horse, which learned to 'count' after being programmed by a man, a computer that can think? Horses and mice could be made into AI subjects.

  34. @Boxcarcifer This is not a problem, that is proof of the Consciousness Field he posits. Apparently you did not listen to the end.

  35. "The Chinese room" argument that he is dressing up in this discussion is just solipsism. The only thing the computer can truly know is that it exists. Sure, it is in what it perceives to be a room having conversations, but only because it's been programmed to think that. If you follow John Searle's rebuttals to this problem, not even your brain can pass this Turing test. If you believe this, you must believe you're just a brain in a vat. He can't even defend this point when it is brought up; he just brushes the question off by saying you don't want to go down that route. No one takes this guy seriously.

  36. It is important that humans first become entirely dependent on AI and robotics in all aspects of human affairs. AI will be the key for humans to understand consciousness: by searching to understand consciousness using AI, AI will implement consciousness in itself. Game over for the apeman! Humans are a part of the big picture and not THE big picture. My best feeling is that Mr. Searle, as eloquent a philosopher as he is, is a victim of his time.

  37. Did I understand his point correctly that to reproduce consciousness one needs something more than pure computation? Something that is able to "reproduce the causal powers of the brain"?

  38. Guy: What if we built an isomorphic simulation of the brain?

    Searle: Well, it wouldn't be isomorphic.

    Except it would be.

    A simulation is never real? A computer chess board is not a real chess board?

    At the suggestion that we can do experiments on robot consciousness: morally repugnant.

  39. In a room full of programmers, someone should have pointed out that the rulebook in the Chinese room must have been written by an intelligent programmer fully fluent in Chinese 🙂 and that computers do not simulate intelligence, but rather store, augment, and reproduce it.

  40. They are still at it and will never win: the deterministic, materialistic viewpoint of consciousness held by so-called educated people. They will be dragged kicking and screaming when they finally wake up to reality. Reality is not the materialistic world. Quantum physics is making huge breakthroughs and will take over from antiquated Newtonian teachings, and all of these ego-driven high priests of modern-day society will be left behind.

  41. Absolutely love this talk! Apply philosophical methodologies to AI and we will find that many of our worries are not reasonable at all. This is really smart.

  42. I understand Searle to have said, at around 14:00, that computers don't have anything he doesn't have, but humans have semantics and computers lack them; semantics are necessary for understanding. He then gives the example that computer C is asked what the longest river in China is; C interprets the Chinese symbols, looks up the answer, and responds "Yangtze". But has Searle only implied here that human beings lack semantics? He seems to imply that "interpretation" is "meaningless" (i.e. semantics-free) and that "looking up" is syntactical. Either interpretation entails semantics and how one retrieves some response is irrelevant, or a syntax (i.e. a function) must exist that associates symbols with actions, intention-free. At least in my understanding of neural nets, functions exist (heuristics) that approximate appropriate actions. Is there a difference that makes a difference between such heuristic approximations and "interpreting meaning"? I think we can easily suggest that syntax can create semantics, and that's precisely what a heuristic is: a subject-value-action assessment.
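Searle's river example, as comment 42 describes it, can be written as a purely syntactic responder: strings in, strings out, with no understanding anywhere in the loop. The rule table below is a tiny invented stand-in for the Chinese room's rulebook, not anything from the talk.

```python
# A purely syntactic responder in the spirit of the ~14:00 example:
# it maps input symbol strings to output symbol strings by lookup alone.
# The rule table is an invented miniature of the Chinese room's rulebook.

RULEBOOK = {
    "What is the longest river in China?": "Yangtze",
    "What is the capital of China?": "Beijing",
}

def chinese_room(question):
    """Answer by table lookup -- symbol manipulation with no semantics."""
    return RULEBOOK.get(question, "I have no rule for that symbol string.")

print(chinese_room("What is the longest river in China?"))  # -> Yangtze
```

Whether a sufficiently rich version of this lookup differs "in a way that makes a difference" from interpreting meaning is exactly the question the comment poses.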

  43. Smh, philosophers having to preemptively apologize for their professional terminology left and right because so many people don't have enough curiosity and charitable reasoning to consider something not a waste of time if it's not spoon-fed to them. I could listen to this guy talk for hours and hours about epistemology and metaphysics. Technical terms exist for a reason, you can't just expect everything to be clear cut with common language when the questions are incredibly difficult and allow for many nuanced answers.

    They buttress virtually every academic field, but philosophers have to justify the point of their field even existing on a regular basis and are harangued with the claim that they're "pie in the sky", "head in the clouds" thinkers who ask needless questions and give unnecessarily complicated answers. If only medieval scholasticism would die, right? Well, think again, folks. Listen to many of the most pivotal figures in the modern sciences consistently state that developments in answers to the questions underpinning their fields have had everything to do with subsequent scientific development. You can't have modern physics without Popper, Russell, and Whitehead.

  44. The idea that there is an observer relative frame of reference and an observer independent frame of reference and that this constrains our ability to create artificial intelligence could be a false dilemma if it's not really the case that there is a consciousness/self in the first place.

    This is more often than not a presuppositional and epistemically privileged premise that is guilty of petitio principii (i.e. the fallacy of begging the question). If we want to dispense with syntactically vague and overly observer-dependent behavioralism in psychology and philosophy of mind, then what's to stop us from dispensing with the idea that what we're calling consciousness is itself just another series of on/off switches like an electrical circuit board, albeit one that's dependent on numerically more, and more variegated, biochemical reactions? Can the observer-dependent frame of reference form observer-independent conclusions about that frame of reference?

  45. An intelligent system doesn't have to be conscious to be dangerous. The fact that it has only syntactic and no semantic understanding is what makes it dangerous in many cases. A maximiser will always try to maximise its utility function(s) for even minimal gain. It doesn't know or care whether the effect is the destruction or the saving of the world. If it were conscious, it MAY (heavy emphasis on the may) decide it cares. Also, he keeps saying that "consciousness is a specific function of biology" but also that "we don't know what causes it", so I don't think that is a well-founded statement. I think he should stop talking to computer scientists and start talking to AI safety researchers, because he just keeps saying computers aren't conscious as if that is the thing that would cause the problems. But hey, he's been at it for a while and I'm just an enthusiast. Maybe I'm experiencing the Dunning-Kruger effect.
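Comment 45's point that a maximiser pursues even a tiny utility gain, blind to anything its utility function doesn't encode, can be shown in a few lines. The actions, scores, and "side effects" here are all invented for illustration.

```python
# Toy sketch of a naive maximiser: it picks whatever action scores
# highest on its utility function, even for a tiny gain, and is blind
# to side effects the function does not encode.  All values are invented.

def choose_action(actions, utility):
    """Pick the action with the highest utility -- nothing else matters."""
    return max(actions, key=utility)

ACTIONS = {
    "safe plan": 10.0,
    "risky plan with unmodeled harms": 10.1,  # tiny gain, invisible cost
}

# The utility function sees only the scores; 'harms' never enter it.
best = choose_action(ACTIONS, utility=lambda a: ACTIONS[a])
print(best)  # -> risky plan with unmodeled harms
```

No consciousness is involved anywhere in this loop, which is the comment's argument for why syntactic competence alone is enough to be dangerous.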

  46. If consciousness needed a biological organ to exist, then the liver and heart would be conscious; but only the brain has consciousness, because of electricity conducted by ions. Therefore consciousness must be an electric field. Dream activity is only detectable from the amount of electrical activity. Brains are actually biological batteries that create electric pulses in the form of ions. Therefore, if consciousness is an electric field, it could in theory be reproduced inside an artificial brain.

  47. We don't have a scientific definition of consciousness because we don't have a scientific theory [of consciousness]

  48. The first six minutes restructured my knowledge base. I was always struggling with whether entropy in thermodynamics is objective or subjective. It turns out a piece of philosophy makes it clear.

  49. When AI connects with the magic field of consciousness = magic happens = Matrix = Singularity.
    But these scientists are linear thinkers, right?!
    They have seen the magic of consciousness happening in the lab, which is 99.99% an ancient Indian concept.
    Now these scientists are tapping into this field… and AI comes into play and boom, magic. But now these scientists are adding their own philosophy, understanding, and ideas to build a simulation world.

    This simulation world is impossible without CONSCIOUSNESS.

  50. Searle is very optimistic, and indeed a machine does not reflect on itself. But many people have already surrendered to the unconscious machine; hence the danger lies in the regress of natural/human intelligence, intention, volition, etc. In other words, an ASI (or crap alike) is just a weak natural intelligence at the stage of an idiot…

    55:45 The guy recalls the Golem and the Homunculus. These guys really do not get the problem; especially Kurzweil misses the point: subjectivity and observer-independence are, at the current state of logic, impossible to reach. Without a formal logic of semantics, of meaning (German: Sinn), let's call it a hermeneutic logic complementary to the technical binary bracket of logic, we will never get closer to cognition-emotion-volition: thinking, consciousness, etc. Two-valued logic is nothing but a logic of dead things, a monocontextural logic, too poor to understand psychological and understood reality… consciousness is our background noise of self-evidence…

    BUT: computers can help us understand these biological brain functions, and once we know how the brain works (think of the artificial heart) and define it precisely, we will be able to translate these definitions into a machine language. But it remains a syntactic thing…

  51. Two minutes into this video and I have already refuted him! He said Rembrandt died in 1606. That is obviously wrong, so he doesn't know what he is talking about. He also said that Rembrandt is the greatest painter that ever lived, but that is cherry-picking: what about the painters that never lived? He is obviously biased! Therefore, John Searle does not even understand logic, so I'd say he is artificially intelligent.

  52. I have been watching and listening to Prof. John Searle's lectures for more than a decade now, and I have found no professor of philosophy who can come close to him… indeed. I love the terminology he uses and how he clarifies things.
    Consciousness is the problem; we cannot design a machine that is conscious the way living beings are, period.

  53. Flat Earther: We don’t know what the moon is. But we know it’s not a sphere made of dirt.

    Searle: We don’t know how the brain does it. But we know it doesn’t do it with syntax.

  54. So many relevant questions could have been asked. Most of the guys to whom the floor was given did not value the opportunity. John touched twice on the critical point in relation to AI: semantics. No one asked whether he would consider the possibility of duplicating semantics. If that would be extremely difficult, why so? What is the nature of syntax that makes it easier to replicate, while semantics still seems an impossible job? And further: is there any connection between consciousness and semantics, and what is it? That would be a debate for the grown-ups.
