Comments on: RIP John Searle
http://www.metafilter.com/210505/RIP-John-Searle/
Comments on MetaFilter post "RIP John Searle"
Mon, 29 Sep 2025 11:48:02 -0800

RIP John Searle
http://www.metafilter.com/210505/RIP-John-Searle
<a href="https://dailynous.com/2025/09/28/john-searle-1932-2025/">American philosopher John Searle, widely known for his famous "Chinese room" argument produced in 1980, has died aged 93.</a> <br /><br />Perhaps best known for <a href="https://plato.stanford.edu/entries/chinese-room/">The Chinese Room Argument</a> or his <a href="https://en.wikipedia.org/wiki/Searle%E2%80%93Derrida_debate?wprov=sfla1">debate with Derrida</a>, Searle was more generally famed for his work on philosophy of mind and philosophy of language. Over the course of his career, Searle was the recipient of several awards and honors, including the Jean Nicod Prize, the National Humanities Medal, and the Mind & Brain Prize.
In 2019, he was stripped of his emeritus status at Berkeley, where he had worked as a professor from 1959 to 2019, after the University of California determined that he had violated its sexual harassment policies. This followed prior complaints from students and workers at the university, including accusations that he made inappropriate advances on students and fired a research assistant who rejected his advances.
Searle died in a nursing home on September 17, per an email from his secretary of 40 years.
posted by deeker at Mon, 29 Sep 2025 11:38:02 -0800
tags: Philosophy, John Searle

http://www.metafilter.com/210505/RIP-John-Searle#8770362
An interesting philosopher, but the worst sort of academic. No ".".
posted by GenjiandProust at Mon, 29 Sep 2025 11:48:02 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770364
Jackson Kernion: <a href="https://jacksonkernion.com/posts/I-Blew-The-Whistle-On-John-Searle.html">I blew the whistle on John Searle.</a>
posted by mittens at Mon, 29 Sep 2025 11:51:58 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770367
A philosophy prof introduced the class to Searle in a junior Philosophy of Mind course.
In hindsight I think the prof was quite taken by Searle's arguments at the time, and he fostered this attitude in the class (mid- to late 1990s). So for a period of time I was convinced Searle got it right in a number of ways. Now, that feels like a lifetime ago and "The Social Construction of Reality" feels like a galaxy away.
posted by Didymus at Mon, 29 Sep 2025 11:57:05 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770369
(also the chinese room thought experiment is dumb, there, i said it)
posted by mittens at Mon, 29 Sep 2025 12:02:14 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770377
<em>the chinese room thought experiment is dumb</em>
I never found it the slightest bit convincing. It was my introduction to Searle, and resulted in my filing him under Men Who Think They're Much Cleverer Than They Show Any Sign Of Being, and nothing of his I've read since has persuaded me that I got that wrong.
posted by flabdablet at Mon, 29 Sep 2025 12:19:28 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770378
It is, kind of, but it's also a decent argument for why GenAI is neither conscious nor a path toward consciousness.
posted by GenjiandProust at Mon, 29 Sep 2025 12:20:15 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770380
If you train an AI on Searle's writings and it produces similar output to what he did, and doesn't sexually proposition a student, is it conscious?
posted by ocschwar at Mon, 29 Sep 2025 12:21:51 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770385
No.
posted by GenjiandProust at Mon, 29 Sep 2025 12:33:37 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770387
<em>(also the chinese room thought experiment is dumb, there, i said it)</em>
i thought this too, but LLMs are literally chinese rooms. I'm mildly embarrassed by how wrong i was.
posted by Sebmojo at Mon, 29 Sep 2025 12:34:10 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770388
<i>I'm mildly embarrassed by how wrong i was.</i>
You weren't wrong and you don't have to be embarrassed. An analogy can accurately describe a system and still be dumb. The Chinese Room is that.
posted by The Bellman at Mon, 29 Sep 2025 12:40:47 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770390
Every 'AI' chatbot company owes his estate royalties for chinese-rooms-as-a-service.
posted by edselford at Mon, 29 Sep 2025 12:45:12 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770394
The Chinese Room argument is dumb because the thing described as a "human" in this case is just part of a wire between the outside world and the program, which is the real "Chinese speaker". The trick is that the wire has been anthropomorphized while the program has been deanthropomorphized.
posted by Slothrup at Mon, 29 Sep 2025 12:47:44 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770411
I first read the Chinese Room argument in <em>Scientific American</em> many years ago. The article stumped me for a while, because I couldn't quite grasp Searle's point. Re-reading the article later, I realized that his argument confused me because <em>it makes no sense</em>.
He seemed to be claiming that the Room can't understand Chinese because there is no component within the room that understands Chinese. By that logic, no human being can understand Chinese either, because if you divide the brain into small enough pieces, you won't have anything left that understands anything. But somehow, collectively, it works!
posted by SPrintF at Mon, 29 Sep 2025 13:43:00 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770415
LLMs are Chinese Rooms. I'm sorry this guy evidently sucked ass as a human being, but as someone who regularly works with everyday people who seem to believe AI is Data from Star Trek, a demon who lives in the phone, or somehow both, this is too important an analogy to just blow off because we don't like this individual.
posted by kittens for breakfast at Mon, 29 Sep 2025 13:50:07 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770416
<i>He seemed to be claiming that the Room can't understand Chinese because there is no component within the room that understands Chinese.</i>
That's how I read it too. Basically he's asserting that the reason I'm conscious is that there's a <a href="https://en.wikipedia.org/wiki/Herman%27s_Head">tiny little man inside</a> me who embodies the consciousness.
posted by eraserbones at Mon, 29 Sep 2025 13:52:25 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770417
<i>He seemed to be claiming that the Room can't understand Chinese because there is no component within the room that understands Chinese. </i>
There is a component within the room that can <i>produce</i> Chinese. It can break Chinese down into patterns and match the patterns. It cannot understand Chinese. The Chinese writer on the other side of the door is fooled into believing the entity on the other side is communicating with them in a meaningful way. This is not the case.
posted by kittens for breakfast at Mon, 29 Sep 2025 13:53:28 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770426
If people think LLMs are the same thing as the Chinese Room then I must not understand the Room at all. I thought part of the premise of the Chinese Room was that it actually worked?
posted by eraserbones at Mon, 29 Sep 2025 14:11:48 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770427
LLMs are one way to implement Chinese Room-like behaviour. A genuine engineered consciousness could be another.
The thought experiment deliberately specifies <em>no</em> details about the operation of the room's proposed algorithm, then simply <em>asserts</em> that no such algorithm could be conscious <em>because</em> it's merely an algorithm.
This is nothing more than a classic question-beg, festooned with enough red herrings for Searle to convince himself that his reader won't spot what he's done.
Well, I <em>did</em> spot it, and it's a stupid argument. Like an awful lot of philosophical thought experiments, it's heavy on thought, too bloody light on experiment, and far too keen to lay down the law on what is or isn't possible "in principle". I've heard more defensibly rigorous work from friends as the joint gets passed around.
posted by flabdablet at Mon, 29 Sep 2025 14:13:08 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770428
<i>this is too important an analogy to just blow off because we don't like this individual.</i>
No, no, the essay was problematic long before our culture got around to caring about the welfare of his victims. (Have we talked about why it's <em>Chinese</em> in particular that's part of the example, and not, like, Linear A or French or something? Like it really leans <em>hard</em> into "why, this language is nought but meaningless squiggles to me!")
posted by mittens at Mon, 29 Sep 2025 14:15:23 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770430
Chinese was clearly all Greek to Searle.
posted by flabdablet at Mon, 29 Sep 2025 14:16:34 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770442
Another way to build a Chinese room is to take a Chinese-speaking brain and one by one start to replace neurons with the obviously-unconscious algorithm until you have replaced the whole brain with no change in behavior. When did the consciousness snuff out?
posted by BungaDunga at Mon, 29 Sep 2025 15:02:24 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770443
that said yes it is remarkable how much LLMs do prove that you can build a system that very convincingly mimics language but definitely isn't conscious
posted by BungaDunga at Mon, 29 Sep 2025 15:03:29 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770444
<em>one by one start to replace neurons with the obviously-unconscious algorithm</em>
This being 2025, you'd one by one replace them with subscriptions to spiking-as-a-service.
posted by flabdablet at Mon, 29 Sep 2025 15:06:42 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770450
the chinese room is like the one after the room that has the turing test in it. we passed the first, and hit the second.
<em>that said yes it is remarkable how much LLMs do prove that you can build a system that very convincingly mimics language but definitely isn't conscious</em>
exactly.
posted by Sebmojo at Mon, 29 Sep 2025 15:36:13 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770452
Oh god people. The Chinese room is dumb in the same way Schroedinger's Cat is dumb. Although Schroedinger's Cat was actually supposed to show that quantum superposition is stupid. (It seems superficially stupid, but has turned out to be true)
It's an analogy. Analogies break down if you look too closely. Searle may have been a horrible person, but the analogy is still useful. Ultimately, we don't have a good definition of consciousness, so we have no way of measuring how conscious anything is. The Chinese Room is just saying that we can't use language to measure consciousness, because it should be possible to create a non-living machine/algorithm/whatever that can hold a conversation. WHICH WE HAVE.
(shakes head, rolls eyes)
posted by fnerg at Mon, 29 Sep 2025 15:44:03 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770455
The Chinese Room is about adherence to syntax without understanding semantic relationships. LLMs are based on proximity of embeddings (words-in-context) within the complex vectorspace of the middle layers. They are nothing <i>but</i> semantic comprehension.
The problem is that the Chinese Room is a garbage thought experiment much in the way Searle was a garbage human. There is a difference between semantic comprehension within the language itself - how words and even concepts relate to each other - and language as a tool in its originating context: the world it describes and the coupling between the language-as-model and the material reality it represents. The problem with the Chinese Room is that it elides this distinction.
LLMs are as proficient as humans, or more, at the purely internal semantic comprehension side of language, and utterly, irredeemably hapless at the latter; we've built The Ultimate Poet. The only thing useful about the Chinese Room is that it is yet one more illustration of how fuzzy most human thinking on cognition and language really is at heart. But the need for such illustrations is long past when we live with the consequences of that fuzzy thinking every day. Cognitive Science, Machine Learning, Artificial Intelligence: whatever you want to call the field, it's better off without men like Searle in it. For multiple reasons.
posted by Ryvar at Mon, 29 Sep 2025 15:50:02 -0800

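(A toy sketch of what "proximity of embeddings" means here: words as vectors, with relatedness measured by cosine similarity. The three-dimensional vectors below are invented purely for illustration; real models learn embeddings in thousands of dimensions, so treat every name and number as an assumption.)
<pre><code>import numpy as np

# Invented toy embeddings; real models learn these from data.
embeddings = {
    "horse":   np.array([0.9, 0.1, 0.0]),
    "pony":    np.array([0.8, 0.2, 0.1]),
    "tractor": np.array([0.1, 0.9, 0.2]),
}

def cosine(u, v):
    # Cosine similarity: near 1.0 for vectors pointing the same way.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(embeddings["horse"], embeddings["pony"]))     # ~0.98: nearby concepts
print(cosine(embeddings["horse"], embeddings["tractor"]))  # ~0.21: distant concepts
</code></pre>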
http://www.metafilter.com/210505/RIP-John-Searle#8770456
fnerg, it turns out your last two comments have started "Jesus people" and "Oh god people"
posted by Didymus at Mon, 29 Sep 2025 15:51:55 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770458
So, in conclusion, the Chinese Room is a good analogy for LLMs.
posted by kittens for breakfast at Mon, 29 Sep 2025 15:53:53 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770462
What I took from his classes (<i>Mind</i> ca. 1998 and <i>Language</i> the following year) is that we can't talk about consciousness without intent (wanting things) and belief (about the outside world). I think that's what the Chinese Room was trying to illustrate? Today we have talking machines that don't want anything and don't believe anything, but they can give answer-shaped utterances to prompts so people get confused about whether they have some kind of subjective interiority. The CR is a <em>great</em> analogy for LLMs for the same reasons <a href="https://www.programmablemutter.com/p/cultural-theory-was-right-about-the">the critical theorists would have had a field day with ChatGPT</a>.
He was an interesting professor. Took and enforced attendance in a 200 person lecture. Stories of his creepiness were starting to come out around that time.
posted by migurski at Mon, 29 Sep 2025 16:05:18 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770463
<i>It's an analogy. Analogies break down if you look too closely.</i>
I encourage you to read his <em>The Mystery of Consciousness</em> and ask yourself if he meant this thought experiment to be merely an analogy.
posted by mittens at Mon, 29 Sep 2025 16:08:05 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770466
<em>difference between semantic comprehension within the language itself... and language as a tool</em>
This makes me recall one of the issues I had with the <em>Scientific American</em> article. Searle frequently references the "semantic meaning" of something without ever defining what "semantic meaning" <em>is</em>. This is key, because the ideas of "meaning" and "comprehension" are bound up in the concepts of "understanding" and "consciousness".
I also recall a cartoon that illustrated the article. It showed a "computer" looking at <a href="https://en.wikipedia.org/wiki/Radical_187">the Chinese character for "horse"</a>. The computer just sees the same character. A human looking at the character has a "thought balloon" that depicts a cartoon horse. But (1) the character for "horse" is <em>itself</em> a simple cartoon of a horse, and (2) the human doesn't actually have a literal horse in his head; he has an abstracted <em>notion</em> of a horse, a cartoon, if you will. Are the computer and the human really different, then?
I'll go a step further: if two humans look at the character for "horse," do they really conceptualize the same thing? I see "horse" and think of Secretariat, BoJack Horseman and My Little Pony. You see "horse" and think of carousel horses, sawhorses and a pickup basketball game. Neither of us thinks of "large-bodied domestic quadruped," yet somehow our definitions overlap enough that we reckon we understand each other when talking about horses. I suggest that the human "large world model" from which "semantic meaning" arises is unique to every individual. All of our heads contain "cartoons," not the things themselves. Do any of us fully understand "horse"?
posted by SPrintF at Mon, 29 Sep 2025 16:11:33 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770468
I mean, I know what a horse is, so.
posted by kittens for breakfast at Mon, 29 Sep 2025 16:16:22 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770469
<i>The problem with the Chinese Room is that it elides this distinction.</i>
Cool. That makes perfect sense. Now talk to me about consciousness.
posted by fnerg at Mon, 29 Sep 2025 16:19:19 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770470
The real mystery is why one would bother talking about the chinese room when we could be talking about <a href="https://en.wikipedia.org/wiki/Blindsight_(Watts_novel)#Consciousness">Blindsight</a>.
posted by kaibutsu at Mon, 29 Sep 2025 16:23:34 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770472
<i>fnerg, it turns out your last two comments have started "Jesus people" and "Oh god people"</i>
I think this is a sign I need to delete my account, because I'm rolling my eyes more and more at the conversation here in general. Not just this thread.
posted by fnerg at Mon, 29 Sep 2025 16:28:44 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770478
<em>Have we talked about why it's Chinese in particular that's part of the example, and not, like, Linear A or French or something?</em>
<a href="https://www.nplusonemag.com/issue-41/essays/china-brain/">Great essay that does ask this question</a>comment:www.metafilter.com,2025:site.210505-8770478Mon, 29 Sep 2025 16:39:41 -0800sickos haha yes dot jpgBy: sickos haha yes dot jpg
http://www.metafilter.com/210505/RIP-John-Searle#8770480
<em>I think this is a sign I need to delete my account, because I'm rolling my eyes more and more at the conversation here in general. Not just this thread.</em>
good luck out there!
posted by sickos haha yes dot jpg at Mon, 29 Sep 2025 16:41:22 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770482
> By that logic, no human being can understand Chinese either, because if you divide the brain into small enough pieces, you won't have anything left that understands anything. But somehow, collectively, it works!
that's the rub though. that's the whole thing. no part of my brain "understands" language but all the same there is something inside me, which all of us agree (or tend to agree) that we have, which does understand language: our minds. the paradox is what makes this philosophically interesting
posted by dis_integration at Mon, 29 Sep 2025 16:42:46 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770483
<em>I mean, I know what a horse is, so.</em>
A horse is a horse, of course, of course!
posted by notoriety public at Mon, 29 Sep 2025 16:46:24 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770487
SPrintF: That's kind of my point. We all understand a horse is an animal, a large mammalian quadruped, often domesticated, frequently used for transport, etc.
But I have ridden horses. And smelled horses (christ). I have fed and petted horses.
LLMs get the first part perfectly. They don't have coupling between "horse" and the experiences in the second. They <i>also</i> don't have coupling between any of the other terms used to describe and classify horses and real world experiences, meaning their comprehension exists purely within the language. That's the critical distinction, and I think the Chinese Room obfuscates this by attempting to draw a hypothetical fault at semantic comprehension in general rather than diving into what actually constitutes comprehension: the abstract and material duality of our existence.
<i>Now talk to me about consciousness.</i>
Which part? The one where we're generalized predictive modeling systems with a deep specialization in agentic prediction, or the one where we're all self-authored narratives attempting to continuously paper over the fact that we're basically deterministic processes lightly modulated by random quantum noise? I can do either. Still kind of working through the human mind as an energy conservation mechanism attempting to self-regulate its rate of change in order to maintain coherence, as a survival trait. Basically: why do we instinctively treat new ideas as a threat? Does this represent a sort of meat-based, behavioral analogue to gradient clipping? Is that why we reencode all our long term memories, are we solving for minimum-topological-delta write operations? And what does that imply about the merits of debating this in a John Searle obit thread on Metafilter?
posted by Ryvar at Mon, 29 Sep 2025 16:51:54 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770493
The threat is that grown-up people who should know better think that LLMs are people -- or demons -- or superintelligences that will skinamarink us all in a fit of pique one day like in that one Harlan Ellison story. <i>That's</i> the problem; you and I can do some bong rips and be all like, "Sooo what if we're all just LLMs ourselves, man," and it's cute, but what's not cute is the staggering number of people who believe we have created sentient beings.
posted by kittens for breakfast at Mon, 29 Sep 2025 17:03:55 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770494
<em>I think this is a sign I need to delete my account, because I'm rolling my eyes more and more at the conversation here in general. Not just this thread.</em>
stick around, who the hell wants to read everyone lined up saying how positively useless the CR analogy is, how wrong Searle was (and is) and generally agreeing on all of it
oh and you earned a nasty from sickos and that right there should tell you to stay a while
posted by Didymus at Mon, 29 Sep 2025 17:04:30 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770495
What's amazing to me is how many experts on philosophy of consciousness and language we have in this thread!
When we see posts about influential findings in science (or the people who did the work), for some reason we don't see all that many people here with the years of training in the specific subject necessary to tear down ideas that have already been vetted and deemed valuable by qualified peers in the real world.
But apparently there are a lot more Mefites with graduate degrees in philosophy and it's a real experience to watch them at work.
posted by SaltySalticid at Mon, 29 Sep 2025 17:14:41 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770497
<i>What's amazing to me is how many experts on philosophy of consciousness and language we have in this thread!</i>
If he didn't want us to talk about his ideas, then he probably should have stopped publishing them.
posted by mittens at Mon, 29 Sep 2025 17:17:52 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770500
<em>oh and you earned a nasty from sickos and that right there should tell you to stay a while</em>
best to you in your endeavors, dog. but if you want more of my attention, you shall have it!
posted by sickos haha yes dot jpg at Mon, 29 Sep 2025 17:30:18 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770501
Oh, I don't think Searle would care that people with little relevant training think his work is dumb and like to say so.
That's true for most any influential scholar.
posted by SaltySalticid at Mon, 29 Sep 2025 17:33:39 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770504
I mean, maybe, but, then, he was an abuser and now he's dead. We can care about his ideas, but we do not have to care about him as a person.
posted by GenjiandProust at Mon, 29 Sep 2025 17:38:56 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770509
<em>The real mystery is why one would bother talking about the chinese room when we could be talking about Blindsight.</em>
<strong>kaibutsu</strong>
<em>Blindsight</em> itself discusses the Chinese Room concept explicitly (as well as implicitly being the whole point of the book).
posted by star gentle uterus at Mon, 29 Sep 2025 17:46:44 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770510
<em>The Chinese Room is about adherence to syntax without understanding semantic relationships. LLMs are based on proximity of embeddings (words-in-context) within the complex vectorspace of the middle layers. They are nothing but semantic comprehension.</em>
Calling it "comprehension" is somewhat begging the question.
I absolutely agree that LLMs operate free of formal syntax and I really do think that undermines some of the fundamentals of Searle's argument. These models show that decoupling syntax and semantics is not as simple as some would have us believe it is. It's almost as if syntax is an emergent behavior of a semantic system.
But I don't think what they do (optimizing distance metrics between embedded tokens or whatever) is "semantic" in any meaningful way. It's something else.
posted by mr_roboto at Mon, 29 Sep 2025 17:47:34 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770512
Oh yeah, Searle was a total scumbag. It must have taken a mountain of evidence to get him stripped of emeritus status at Cal.
I don't give a shit about the guy, it's not out of respect for him that I'm annoyed with the facile critiques above.
I just thought I might see some interesting and informed discussion of the ideas here, since I've only ever studied it briefly and talked with a few established philosophers about it.
But instead the tone was set with "it's dumb" and that tends to drive away people who actually know about stuff.
posted by SaltySalticid at Mon, 29 Sep 2025 17:52:38 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770513
<i>people with little relevant training would think his work is dumb and like to say so</i>
What an odd thing to say. For all the faults of this thought experiment, Searle was a very clear and entertaining writer who took care to explain his point of view and objections to it, in a way that any reader could understand. Are you saying the Chinese Room would seem like a <em>better</em> thought experiment if we were all PhDs? That's certainly not the impression you get reading other philosophers talk about the flaws in the argument.
The whole reason we're talking about this argument <em>today</em> is because of the many, many objections people have raised to it. The life the argument has is due to those objections and his constant reworking of it to try to forestall them. It's a bad argument and it fails, but it's <em>interesting</em>, and people's reactions to it are <em>interesting</em>, the conversation is enlightening, and that's all we can require of an argument about consciousness.
posted by mittens at Mon, 29 Sep 2025 17:55:05 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770514
I mean...I'm gonna try to let it go after this, but I think it's important to recognize that this dumb, ridiculous, totally unworkable idea now literally exists in the real world and is rapidly changing our society.
posted by kittens for breakfast at Mon, 29 Sep 2025 17:58:03 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770515
searle's argument is interesting. it's not a slam dunk and it's derivative of leibniz (it's also possible to find a reading of spinoza that says similar things). it's historically very important to the discussion in philosophy of mind and there's enough said about it that it's silly to just reject it out of hand. the core intuition is that there's something missing from a completely linguistic account of "understanding" because we cannot separate our own experience of understanding from our having a mind. the thought experiment just makes that clear, similar to the argument from Chalmers about "philosophical zombies". the problem of the mind is something you <em>have</em> to take into account in any theory of what it means to be an "intelligent" being. i think you can say: it's just an epiphenomenon that reflects what operations the brain has already taken (with all the consequences for free will entailed), but you can't just ignore it.
i've heard stories of how the man was a piece of shit from people with firsthand knowledge, but he was still an incredibly important figure in analytic philosophy for whatever that's worth. he also got his ass handed to him by derrida, that "debate" in <em>Limited Inc</em> is one of my favorite books. he just doesn't get that he's being dunked on, it's very funny.
posted by dis_integration at Mon, 29 Sep 2025 17:59:27 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770518
We should all get so much mileage out of a thought experiment so absurd that it can show that people cannot understand language.
posted by zippy at Mon, 29 Sep 2025 18:12:54 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770519
<em> Are you saying the Chinese Room would seem like a better thought experiment if we were all PhDs?</em>
No, I'm saying I'd rather read criticisms that give the impression the writer has spent more than 5 minutes thinking about CR, or maybe even more than 15 minutes reading about it.
<em>and people's reactions to it are interesting, the conversation is enlightening,</em>
Yeah, I'm with you, that's what I was hoping to see some of here!
But instead we get:
<em>(also the chinese room thought experiment is dumb, there, i said it)</em>
And I don't think that's interesting or enlightening. And that by itself is fine I suppose, lots of comments here are neither, mine included. But it set the tone, and set off a chain of similarly dismissive comments without any real substance.
TLDR, and more sincerely: it would be nice to see fewer low-effort hot takes and more informed discussion.
That's what I used to like about MeFi. I still do, but I used to, too :)
posted by SaltySalticid at Mon, 29 Sep 2025 18:25:27 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770521
<a href="https://iep.utm.edu/chinese-room-argument/">Here's a good primer.</a>comment:www.metafilter.com,2025:site.210505-8770521Mon, 29 Sep 2025 18:33:54 -0800SebmojoBy: jamjam
http://www.metafilter.com/210505/RIP-John-Searle#8770525
I don't quite get why people think language is the key to consciousness, and that to understand consciousness, it's both necessary and sufficient to understand language.
Consciousness has been around for more than a hundred million years, and language for maybe a hundred thousand or so, and language might well never have arisen, whether you think it evolved or didn't. And human beings seem entirely conscious to me before they learn language and even if they never do.
I personally cannot imagine understanding consciousness without language, even though understanding certain things, such as some geometrical and other mathematical truths, does not seem to depend on language. But I can't entirely foreclose the possibility.
posted by jamjam at Mon, 29 Sep 2025 18:45:32 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770526
<i>I'm saying I'd rather read criticisms that give the impression the writer has spent more than 5 minutes thinking about CR, or maybe even more than 15 minutes reading about it.</i>
I appreciate how patiently you made your point, and on reflection, I agree and apologize. So here's someone who spent a LOT of time thinking about Searle's experiment, <a href="https://www.nybooks.com/articles/1995/12/21/the-mystery-of-consciousness-an-exchange/">Daniel Dennett</a>!
"For his part, he has one argument, the Chinese Room, and he has been trotting it out, basically unchanged, for fifteen years. It has proven to be an amazingly popular number among the non-experts, in spite of the fact that just about everyone who knows anything about the field dismissed it long ago. It is full of well-concealed fallacies. By Searle's own count, there are over a hundred published attacks on it. He can count them, but I guess he can't read them, for in all those years he has never to my knowledge responded in detail to the dozens of devastating criticisms they contain; he has just presented the basic thought experiment over and over again. I just went back and counted: I am dismayed to discover that no less than seven of those published criticisms are by me. Searle debated me furiously in the pages of the <em>NYRB</em> back in 1982, when Douglas Hofstadter and I first exposed the cute tricks that make the Chinese Room "work." That was the last time Searle addressed any of my specific criticisms until now. Now he trots out the Chinese Room yet one more time and has the audacity to ask "Now why does Dennett not face the actual argument as I have stated it? Why does he not tell us which of the three premises he rejects in the Chinese Room Argument?" Well, because I have already done so, in great detail, in several of the articles he has never deigned to answer. For instance, in "Fast Thinking" (way back in The Intentional Stance, 1987) I explicitly quoted his entire three premise argument and showed exactly why all three of them are false, when given the interpretation they need for the argument to go through!"comment:www.metafilter.com,2025:site.210505-8770526Mon, 29 Sep 2025 18:46:21 -0800mittensBy: BungaDunga
http://www.metafilter.com/210505/RIP-John-Searle#8770537
The IEP entry makes a good point that if you buy the Chinese Room, you imagine that there could be entities walking around, talking, and behaving in all ways like they are conscious beings, and yet insist that they are just automatons once you learn <em>what stuff they're made of</em>. Even if they have brains that are organized like our own except each neuron is a nano-sized Turing machine.
Now, what stops you from abusing these entities? You have established that they <em>don't think</em>, they have no subjective experience, no matter how much they insist they do. So it's probably fine.
This seems like a bad state of affairs to me, so I'm pretty skeptical of a philosophical idea that would lead you in that direction.
posted by BungaDunga at Mon, 29 Sep 2025 19:30:10 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770538
and the clockwork neurons have the same issue where you replace each of your neurons one at a time with a clockwork neuron. when does your consciousness snuff out?
posted by BungaDunga at Mon, 29 Sep 2025 19:32:11 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770539
it's endlessly attackable! but it's also a perfect way of thinking about why LLMs feel like they're thinking, but aren't. I don't have any problem holding those two thoughts in my head.
posted by Sebmojo at Mon, 29 Sep 2025 19:33:50 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770542
The way I would put that, Sebmojo, is that we already know consciousness does not require language.
If AIs can actually talk, I think that would show that language does not require consciousness either.
Then with that out of the way, we might be able to finally get down to the business of understanding consciousness.
posted by jamjam at Mon, 29 Sep 2025 19:48:01 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770544
I've long said that being the person inside the Chinese room would be my dream job.
posted by Faint of Butt at Mon, 29 Sep 2025 20:00:13 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770545
Thanks for that bit from Dennett, I didn't realize he was one of the big haters of CR. I guess I'll read some of his seven published criticisms if I want more detail.
I think CR is a nice little story to tell to illustrate why the fact that something seems to somewhat convincingly talk like a human isn't any kind of evidence that it understands or has consciousness, and as mentioned above in <strong>dis_integration</strong>'s good comment, others have built up similar and useful arguments along these lines too.
I've never been sure why anyone, Searle included, would think that such a state of affairs would entail that such a non-human consciousness or understanding is impossible, as described by the SEP article. It seems to me that there are smaller lessons we can take from CR that are good, and giant leaps to big and wrong conclusions that are not good.
posted by SaltySalticid at Mon, 29 Sep 2025 20:04:03 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770547
I mean, this is kind of getting into the weeds on a basically flawless illustration of something that really exists, right now, that a lot of people dramatically do not comprehend, and could comprehend -- maybe -- if they were introduced to this concept. It's ironic that the concept wound up explaining something that didn't really exist yet, and that Searle never intended to explain, and it's too bad that he was an asshole, but holy shit I cannot tell you how many people need to be introduced to this idea anyway. Like, it's really bad. All this other stuff about "he didn't presuppose the existence of a monkey that knew five human words! What a lummox!" or whatever just seems so utterly irrelevant to me. Not for the first time, a conversation here is making me feel like <a href="https://www.reddit.com/r/tumblr/comments/10dfjtn/unless_theres_a_crossover_event_youre_on_your_own/">Blade talking to the Avengers.</a>
posted by kittens for breakfast at Mon, 29 Sep 2025 20:18:58 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770549
<i>I've long said that being the person inside the Chinese room would be my dream job.</i>
you_guys_are_getting_paid.gif
posted by axiom at Mon, 29 Sep 2025 20:23:06 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770551
The Chinese Room is incoherent and was well answered decades ago. Here's Searle himself explaining what the metaphors are:
<blockquote>Now, the rule book is the "computer program." The people who wrote it are "programmers," and I am the "computer." The baskets full of symbols are the "data base," the small bunches that are handed in to me are "questions" and the bunches I then hand out are "answers." [SciAm p. 26]</blockquote>
So the man in the CR represents the CPU, not the program. Querying if the CPU "understands" is supposed to be a test of whether the system "understands." One, it's not, any more than a single neuron in your head "understands." And two, he's narrowly but uninterestingly right about the CPU: the CPU understands nothing, learns nothing; it's the same bit of the computer whether it's running an AI program or Word or Minesweeper.
Yet a page later he says "Programs are neither constitutive of nor sufficient for minds". He is trying to pass off an observation about the CPU as a fact about the program.
Later on he attempts to hide this blunder by inflating the role of the CPU (the man in the room) even more, by having him memorize the rules, and deflating the program to "a few scraps of paper". The sole test he can conceive of is whether the man in the room can "understand Chinese". But a program that is actually intelligent is too big for the man in the room to memorize.
His definition of syntax does not apply to the way computers work... or the way symbols work. E.g. what if one of the rules in the program is this:
<blockquote>If you see 马, write down "horse".</blockquote>
That isn't even how a potential AI works, but it completely destroys Searle's argument. Can he really maintain, if there are enough such rules, that the man memorizing rules "can't understand Chinese"?
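(Taking that rule literally as code makes the point concrete; a minimal sketch with a made-up three-entry rule book, not anything drawn from Searle's paper:)
<pre><code># A "rule book" mapping Chinese characters to English glosses is
# just a lookup table. These entries are illustrative, not a dictionary.
RULE_BOOK = {
    "马": "horse",
    "猫": "cat",
    "书": "book",
}

def man_in_the_room(symbol: str) -> str:
    # Applies a rule, with no claim either way about whether doing so
    # constitutes "understanding" -- which is the whole dispute.
    return RULE_BOOK.get(symbol, "squiggle")

print(man_in_the_room("马"))  # -> horse
</code></pre>
Scale that table up far enough and it gets hard to insist that the man who memorized it all "can't understand Chinese."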
(Also, yes, he is totally relying on xenophobia to shore up his argument. He talks about "squiggle squiggles", as if the computer is as baffled by Chinese characters as he is.)
Searle never bothers to explain how "semantics" can arise in a human mind-- which is a mass of tofu-like cells in a bony prison-- and why it can't in a silicon mind. He just focuses on the CPU understanding or not; it's exactly like someone expecting a water molecule to be wet.
(All this is expanded <a href="https://zompist.com/searle.html">on my site</a>.)
Since the Chinese Room proves nothing, it also proves nothing about LLMs. (Searle's argument was devised in the time of algorithmic AI hand-written by humans, in particular story analyzers like Roger Schank's. But it was supposed to apply to any possible computer program.)
What LLMs do show is that the Turing Test sucks. It's way too easy to fool human beings, at least those of the CEO level.
posted by zompist at Mon, 29 Sep 2025 20:26:35 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770556
I think Searle is wrong - I hold that the universe obeys computable physical laws and therefore there must be some program that understands Chinese, in the sense he meant it. Maybe the one produced by scanning someone's brain or replacing their neurons one by one. But, I think I've been engaging with a weak version of his argument.
Suppose you take the program from before, run it through homomorphic encryption, and destroy the key. Before, the symbols were opaque to the operator but held some relationship to the world and the simulated mind's state. Now, their meaning is opaque to all but God. Does that matter to the simulation's experience?
A more interesting variant holds that the room is unnecessary; you could memorize the symbol manipulation rules but still wouldn't understand Chinese. I think this implies that I don't understand arithmetic. Maybe I don't?
posted by dustletter at Mon, 29 Sep 2025 21:00:26 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770559
Old SMBC comic: "<a href="https://www.smbc-comics.com/comic/john-searle39s-last-words">John Searle's last words</a>"
posted by BungaDunga at Mon, 29 Sep 2025 21:10:21 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770561
.
posted by cotton dress sock at Mon, 29 Sep 2025 21:28:24 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770563
> What LLMs do show is that the Turing Test sucks.
Yes, this -- at least for consciousness. An LLM can clearly fool many people into thinking it's human and I believe fairly strongly that it's not conscious in any meaningful sense of the word. My cats could never pass the test and yet I believe equally strongly that they *are*.
posted by Slothrup at Mon, 29 Sep 2025 21:41:07 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770566
Wow, I had no idea he was such a pig. I read the CR piece over 20 years ago, found it silly on its face, but it did illustrate the concept that lower level processes can give rise to a kind of experience, defined by an observer's imposition of a narrative around it. It's interesting that we resist this idea so strongly & insist on an essential, unitary, willing force, like a homunculus. We know it's there, emerging from a complex of synapses, bacteria, electricity, etc. It's a good question I think, in line with what I know of the current scientific understanding. I have nostalgia for the time of my life when I threw myself into these ideas, and that piece was part of it, I guess.
What a pig, though. Wild.
Where are the decent people who achieve notoriety? I think they're outnumbered.
posted by cotton dress sock at Mon, 29 Sep 2025 21:53:14 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770575
Searle's Room is wrong because it assumes that Chinese, or any language, is bounded by grammar. It isn't: the rules are in principle infinite, because all human language is embodied in our societies existing across time in the universe, with contingent exceptions and linguistic changes across this space-time.
In order for a Searle Room to in principle completely contain a human language it would have to be infinitely large, therefore it would collapse into a black hole.
Conversely a Searle's Room with the grammar rules for C++ does not understand C++ either, and in that case the grammar is finite.
Human minds are connected to the rest of the universe (or Earth) so we grow and change with it. So SR does not apply.
posted by polymodus at Mon, 29 Sep 2025 22:57:55 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770590
<em>basically flawless</em>
lol this seems like a different order of commentary than your other remarks about it
posted by sickos haha yes dot jpg at Tue, 30 Sep 2025 02:53:19 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770595
I'm not fond of the Chinese room (incidentally one example of a weird Sinophobic tendency in academic philosophy) but I take Searle's main point to have been that you can't get semantics from syntax. I think that's right - his view of where we do get it from (qualia, basically) is less convincing.
posted by Phanx at Tue, 30 Sep 2025 03:43:56 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770614
<em>So here's someone who spent a LOT of time thinking about Searle's experiment, Daniel Dennett!</em>
Pass me the brush to tar ya
Make your choice then live your life
Come on pal, <a href="https://www.youtube.com/watch?v=GiHdpAVIHgo">what are ya</a>
posted by flabdablet at Tue, 30 Sep 2025 05:39:06 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770626
<a href="https://www.nybooks.com/articles/1984/02/02/an-exchange-on-deconstruction/">*</a> [nybooks]comment:www.metafilter.com,2025:site.210505-8770626Tue, 30 Sep 2025 05:56:01 -0800HearHereBy: eraserbones
http://www.metafilter.com/210505/RIP-John-Searle#8770641
I was hoping to find a defense of the Room here because I assume that my take is overly shallow and/or missing the point. I don't really see that here, though -- is anyone willing to take a stab?
My takes are twofold:
1) The 'systems' argument. Specifically, I think that Searle believes that consciousness is magic and only 'real' if it runs on a platform that he regards as human. I don't think that. If we opened up my mother's skull and found nothing but relays and microfiche, I would still regard my mother as a conscious, real, understanding person. If we found the same thing in Searle's mother, he would conclude that she had just been a convincing simulacrum all along.
2) The 'assuming the conclusion' argument. "Imagine a room that speaks Chinese, but doesn't understand it. By imagining this, you have demonstrated that speaking Chinese isn't the same thing as understanding it." That's just a tautology, and tells us nothing about the question of whether it is /necessary/ to understand a language in order to speak it (which, for certain values of 'speak', I am sure that it is.)
posted by eraserbones at Tue, 30 Sep 2025 06:40:12 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770676
Take what you can from what you read, is what I always say
posted by Didymus at Tue, 30 Sep 2025 07:37:16 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770684
Using LLMs to code is a lot like a Chinese room, though: neither <em>I</em> nor the LLM <em>nor the combination</em> actually understands what we're producing (unless I go back and look at it), even when it's perfectly cogent code. There's no way there's a third entity being formed by the system that's understanding anything; that seems unreasonable.
posted by BungaDunga at Tue, 30 Sep 2025 07:57:31 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770693
>In order for a Searle Room to in principle completely contain a human language it would have to be infinitely large, therefore it would collapse into a black hole.
Actually it seems like it would have to be about 10-100 gigabytes or so
posted by dis_integration at Tue, 30 Sep 2025 08:18:58 -0800

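(Presumably that figure is just weight-count arithmetic; a sketch assuming models of 7 to 70 billion parameters stored at two bytes per weight, both round-number assumptions rather than any particular model's specs:)
<pre><code>bytes_per_weight = 2  # fp16/bf16 storage (assumption)

for params in (7e9, 70e9):  # assumed parameter counts
    print(f"{params:.0e} params -> ~{params * bytes_per_weight / 1e9:.0f} GB")
# 7e+09 params -> ~14 GB; 7e+10 params -> ~140 GB
</code></pre>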
http://www.metafilter.com/210505/RIP-John-Searle#8770697
I've been trained to manipulate the syntactic and semantic symbols in and surrounding propositions like "The Chinese Room," "The Trolley Problem," "The Turing Test," and "The Simulation Argument," but a moment's examination will show that I don't really understand any of them, and that's how you can tell I'm just a machine and not a living, breathing human being.
posted by Western Infidels at Tue, 30 Sep 2025 08:27:02 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770702
It's funny. In grad school I took a very strong stand against Searle (because he's very dumb on some things) and his Chinese room. Primarily because I was, I dunno maybe still am, a materialist, and Searle is a sneaky dualist. He's implying that minds cannot be completely explained as the outcome of the causal relations of the bodies that "have" them. But reading these criticisms I have never wanted to defend him more. Like it doesn't matter that it's Chinese, or that he wasn't an expert in computers, or that he talks about syntactic rules when that's inadequate since coherent language requires more than syntax. It's a thought experiment, or (in a phrase I hate), an "intuition pump" intended to highlight a problem with functional-materialist conceptions of intelligence. And the thing is, as others have pointed out, we have real life Chinese rooms right now. You can deny it all you want, but up until the very day that ChatGPT 4 came out, just about everyone would have agreed that if we had a computer that could do what it does it would count as intelligent in some sense, and certainly that it would pass the Turing test. But his insight still stands. The transformer does not understand anything. It simply predicts the next most probable byte given the input bytes and the current set of answer bytes. What it has is a giant, almost unfathomably massive table of byte pairs, and a fairly straightforward if computationally intensive process for doing a lookup in that table. But does it understand? Does it have a mind? It seems obvious that doing table lookups is not understanding. That's my intuition. There, I made the Chinese room argument.
posted by dis_integration at Tue, 30 Sep 2025 08:34:13 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770718
<em>What it has is a giant, almost unfathomably massive table of byte pairs</em>
Strictly speaking, what it has is a machine-learning-derived <em>approximation</em> of the function for which a too-large-to-implement table of input/output pairs <em>would</em> be a complete piecewise specification.
That specification is not a table of byte <em>pairs</em>, nor even token pairs; those <em>are</em> implementable because they require only N<sup>2</sup> entries where N is the number of distinct bytes or tokens. Rather, it's a key->value mapping where the keys are derived from an entire context window's worth of tokens and the value is a probability distribution over tokens.
The bigger the context window gets, the sparser becomes the population of keys with any actual existence within the combinatorial universe of possible keys, so the more specific and less generalizable the function specification becomes, so the closer any ML-derived approximation of it becomes to just making shit up.
posted by flabdablet at Tue, 30 Sep 2025 09:13:46 -0800

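(Rough arithmetic for the scale gap described above, using assumed round numbers -- a 50,000-token vocabulary and a 1,000-token context window, not the specs of any particular model:)
<pre><code>import math

V = 50_000  # distinct tokens (assumed round number)
W = 1_000   # context window length in tokens (assumed round number)

# A token-pair table needs V**2 entries: large but implementable.
print(f"pair table: {V**2:.1e} entries")

# One key per possible context window needs V**W keys; print the
# exponent, since the number itself has about 4,700 digits.
print(f"full context keyspace: 10^{W * math.log10(V):.0f} keys")
</code></pre>
Every context that ever actually occurs is a single point in that 10^4699-key space, which is the sparsity being pointed at.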
http://www.metafilter.com/210505/RIP-John-Searle#8770728
<em>Using LLMs to code is a lot like a Chinese room though, neither I nor the LLM nor the combination actually understand what we're producing</em>
Don't your prompts provide semantic meaning, though? Unpredictability isn't the same as randomness.
posted by SPrintF at Tue, 30 Sep 2025 09:44:20 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770737
It probably does, and the whole thing definitely wouldn't work without it. But the LLM doesn't really understand the inputs or outputs, and if I never read the outputs and just vibe, nobody ever understands the output code at all. And then you can shove that output code back into the input and ask for updates. The ratio of "semantic meaning that a human provides" gets pretty small.
Yes, obviously, there's a nub of meaning that is coming from the human director, but you can produce a whole small app without <em>anyone</em> being consciously aware of the architecture or even any significant fraction of the code. That's weird!
posted by BungaDunga at Tue, 30 Sep 2025 09:56:30 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770739
humans have a goal when they're directing LLMs, that's the big difference from trapping a human in a room and making them push symbols around. It's sort of the opposite of Searle's room. But it means a human can do things that <em>look like</em> cognitive tasks without knowing what they're doing at all, and just pulling levers on a sort of <em>cognition engine</em> that <em>also</em> doesn't know what it's doing. And yet the outputs look like the outputs of cognition, at least sometimes.
posted by BungaDunga at Tue, 30 Sep 2025 10:00:33 -0800

http://www.metafilter.com/210505/RIP-John-Searle#8770756
<i>Another way to build a Chinese room is to take a Chinese-speaking brain and one by one start to replace neurons with the obviously-unconscious algorithm until you have replaced the whole brain with no change in behavior. When did the consciousness snuff out?</i>
Your Chinese Room of Theseus is an excellent thought experiment, but could we work a trolley into it somewhere?
(Somewhere nice, please. I'm certain the philosophical role of trolleys as hypothetical manslaughter devices has done untold harm to the widespread adoption of public transit infrastructure.)comment:www.metafilter.com,2025:site.210505-8770756Tue, 30 Sep 2025 10:34:50 -0800jackbishopBy: NotAYakk
http://www.metafilter.com/210505/RIP-John-Searle#8770757
<em>> It is, kind of, but it's also a decent argument for why GenAI is not conscious nor a path toward consciousness.</em>
The obvious thing from it is that we, our brains, are Chinese rooms. But apparently this isn't what most people get out of it?
So I see it as actually an argument for why GenAI is a path towards consciousness. Not a huge step, but an impressive one.
We are a bunch of (fuzzily separate) subsystems in our brain. We have started to understand how some of them work (at a very vague level). GPT was based on study of organic brains (at a very removed level), and it is slightly plausible some of what it is doing is similar to what our language-processing part of our brain is doing. I mean, not that high a probability, but definitely non-zero.
Like, Wernicke's and Broca's areas, coming to a dozen or so grams of grey matter, might be doing something similar to GPT; this is roughly 1% of our brain mass. Processing language. We don't think they (alone) are conscious, but we also don't think our vision center is, or our hippocampus, or...
---
The thing to me about the Chinese Room as (originally) described is that it is a library the size of a galaxy (maybe larger than the visible universe) with superluminal delivery of the books the person does lookups in: lookup tables are *not very compact*.
"That universe-sized library has consciousness within it" (or some kind of complete recording of a consciousness) is a lot less silly than the implied "the library is not that big, how can it be conscious?"
---
<em>
> But does it understand? Does it have a mind? It seems obvious that doing table lookups is not understanding. That's my intuition.</em>
Based on this, I'd recommend reading Permutation City by Greg Egan, which is a kind of materialist-maximalist approach to reality, where reality doesn't even have to exist to exist. What if patterns are reality?
It does this in small steps as a science fiction novel. He's a fun author; he writes stuff like "what does a universe without a Minkowski metric look like?" without ever saying the word.comment:www.metafilter.com,2025:site.210505-8770757Tue, 30 Sep 2025 10:36:34 -0800NotAYakkBy: deeker
http://www.metafilter.com/210505/RIP-John-Searle#8770805
A trolley is being driven by something that is physically identical to a normal human being but does not have conscious experience. Tied to the tracks ahead is a person whose neurons have been totally replaced with an algorithm with no change in behaviour. Does the Artificial Intelligence intervene to divert the trolley? Now assume the same scenario is a projection on the wall of a cave witnessed by people in the state of nature behind a veil of ignorance...comment:www.metafilter.com,2025:site.210505-8770805Tue, 30 Sep 2025 12:22:47 -0800deekerBy: deeker
http://www.metafilter.com/210505/RIP-John-Searle#8770819
I apologise. I wouldn't normally do this, especially because it arises from laughing at my own joke, and I promise not to do it again.
A trolley is being driven by something that is physically identical to a normal human being but does not have conscious experience. Tied to the tracks ahead is a person whose neurons have been totally replaced with an algorithm with no change in behaviour. Tied to the other track is baby Hitler. Does the paperclip-maximising Artificial Intelligence intervene to divert the trolley? Now assume the same scenario is a projection on the wall of a cave witnessed by people in the state of nature behind a veil of ignorance...comment:www.metafilter.com,2025:site.210505-8770819Tue, 30 Sep 2025 12:41:41 -0800deekerBy: Didymus
http://www.metafilter.com/210505/RIP-John-Searle#8770828
You left out brain in a vat? Coward!comment:www.metafilter.com,2025:site.210505-8770828Tue, 30 Sep 2025 13:08:14 -0800DidymusBy: Salvor Hardin
http://www.metafilter.com/210505/RIP-John-Searle#8770833
It's amusing when people explain that LLMs can't be "thinking" because they're just following a set of rules. How exactly do you think a brain works? It doesn't follow any rules?
I don't know if current LLMs "think" or "understand" in the same sense a brain does (however the heck that works), but I haven't heard a convincing explanation (including the "Chinese Room") of why it's impossible.
Why couldn't a system involving a secretary scurrying around following a complex set of algorithms passing notes back and forth give rise to thought or understanding or consciousness? Doesn't seem any more unlikely than a big electrical meatball.comment:www.metafilter.com,2025:site.210505-8770833Tue, 30 Sep 2025 13:17:46 -0800Salvor HardinBy: mittens
http://www.metafilter.com/210505/RIP-John-Searle#8770835
<i>Like it doesn't matter [...] that he talks about syntactic rules when that's inadequate since coherent language requires more than syntax.</i>
But that's literally his first premise!comment:www.metafilter.com,2025:site.210505-8770835Tue, 30 Sep 2025 13:19:16 -0800mittensBy: dis_integration
http://www.metafilter.com/210505/RIP-John-Searle#8770885
The Chinese Room doesn't say it's impossible for a computer to think. It says: if you consider this thought experiment, doesn't it seem like something is missing from the computational model of cognition? Some philosophers of mind want to argue that the brain is just a biological computer that processes information according to rules which we don't yet understand but could understand and model in a sufficiently large and complicated computer. For example, you could just do what we've <a href="https://news.berkeley.edu/2024/10/02/researchers-simulate-an-entire-fly-brain-on-a-laptop-is-a-human-brain-next/">done</a> for a fly's brain to model a human brain, and then what you'd get is more or less a human mind but in a computer. And then maybe you could talk to it and it would say "jesus christ where's my body kill me it's so horrible oh it's so terrible to be in a computer, i can't feel anything but i feel everything all at once and i hate it so much" or something, and we'd know that human beings are just really wet and slimy computers.
But then you have to say: well, everything is determined completely by rules, and nobody makes any choices or has strokes of genius or anything, and nobody is responsible for anything (since consequentialism is incoherent, imo). Or maybe there's something else there, more than the operations of physical law, that we can very secularly call a mind, and it intervenes through a magical swerve, a clinamen, to interrupt the causal order with the spontaneity of a spirit. Or something.
The Chinese Room says: it seems like there's something else here! Since if I did all the linear algebra by hand to compute the answer to the question of what a hamburger tastes like, working entirely with real numbers alone, and then you mapped the output to the lookup table that pairs those numbers with words, and the result was an eloquent description of the perfect combination of salt, fat, acid, and breadiness that is a cheeseburger, you might be fooled into thinking this is a person who has had a first-person *experience* and called to mind eating a burger because they *understood* what a hamburger *is*. Except they didn't; it was just me following some rules, doing calculations without knowing what any of them meant, producing a likely-sounding output. And there's a difference there, right? There's something there that distinguishes the two? Since isn't that all that a computer would be doing when it modeled all the neurons of a brain to produce what seems like the thoughts of a person? I'm a person with "mental states". A computer is just a computer. How would it get a mind? I'm not saying I agree, but it's an unsolved question. Nobody knows where a mind comes from.comment:www.metafilter.com,2025:site.210505-8770885Tue, 30 Sep 2025 15:20:04 -0800dis_integrationBy: zompist
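The hand-computed hamburger description above amounts to arithmetic followed by a table lookup; here is a toy Python sketch of that final step, in which every number and vocabulary word is invented purely for illustration:
<pre><code>
import math

# The "lookup table that pairs those numbers with words", in miniature.
vocab = ["salt", "fat", "acid", "breadiness"]
logits = [2.0, 1.5, 0.3, 1.9]   # invented raw scores from the hand algebra

# Softmax then argmax: operations a patient human could do by hand
# without knowing what any of the numbers mean.
exps = [math.exp(z) for z in logits]
probs = [e / sum(exps) for e in exps]
best = max(range(len(vocab)), key=probs.__getitem__)

print(vocab[best], round(probs[best], 3))  # "salt" -- rule-following, no tasting
</code></pre>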
http://www.metafilter.com/210505/RIP-John-Searle#8770908
If we go on we're going to repeat everything from comp.ai.philosophy.
<em>The chinese room says: it seems like there's something else here!</em>
I'm afraid not, because Searle explicitly denies being a dualist (see his <em>Scientific American</em> article, Jan. 1990). He claims that he has understanding but thinks he doesn't have to explain how.
Dualism does at least provide a place to shove anything about the mind we don't understand. That's more a bug than a feature, since that provides no actual explanation of those things, nor can we imagine how the soul tells the meat what to do. (If your soul orders your muscles to move, the brain has to receive that order somehow, and we should be able to detect that. If you respond that it's too small to detect... well, philosophy of science tells us to be suspicious of theories that cannot be observed or tested.)
A common response (one I agree with) is that brains can have mental states because they are closely connected to the world— they have sensorimotor experience. You don't just know the word for horse, you can see and touch and ride horses. Cool. But your brain can't ride a horse; it's a blind chunk of neurons inside a prison of bone, merely connected to the outside world by electrical signals (i.e. nerves). You can see, but it's not via light shining in your skull. Is that a different thing from a robot, also connected to the outside world by wires, sensors, and effectors? Is it a different thing from a computer controlling a CAD system or a 3-D printer?
Searle's claims about "syntax" vs. "semantics" are, again, based on reacting against all-verbal programs like Schank's in the 1970s. His arguments get a lot less plausible when we talk about computers with sensorimotor capabilities. (Actually, well before that. Somewhere in your bank is a computer system with a number representing your bank balance. Is that not real money? It's affected by the money you put in or take out. If a glitch halves the number, suddenly you have less money. It makes little sense to call the banking system "just syntax.")comment:www.metafilter.com,2025:site.210505-8770908Tue, 30 Sep 2025 16:18:02 -0800zompistBy: dis_integration
http://www.metafilter.com/210505/RIP-John-Searle#8770909
searle is dumb about dualism. he wants his cake and to eat it too. that's ok, he's allowed to be wrong even about his own arguments. well, he's dead now so it's even more permittedcomment:www.metafilter.com,2025:site.210505-8770909Tue, 30 Sep 2025 16:23:28 -0800dis_integrationBy: judgement day
http://www.metafilter.com/210505/RIP-John-Searle#8770938
I first came across Searle's Chinese Room concept as a college freshman, and never understood how it was supposed to work in practice. Maybe someone more familiar with it and the criticisms of it can explain...
If a Chinese-speaker outside the room passes in a simple question like "what day of the week is it?" how is the instruction set that the English-speaker inside is using supposed to handle that? Short of being provided some additional information (i.e., what day of the week it actually is), I don't see how it could ever consistently provide answers to such questions correctly. And it seems to me that a great many questions the Chinese-speaker outside could ask would fall into this category: any question about current events, many questions about the future, even lots of questions about the past ("how long ago was WWII?" depends on knowing what year it is now).
If the instruction book has "if...then" statements in it and/or lookup tables that rely on additional information known by the English-speaker in the room (e.g. "if the question looks like these characters and today is Monday, answer with this character") then how is the room any different from a particularly extensive and cumbersome-to-use Chinese-English dictionary? The English-speaker might not be able to read Chinese (at least at first), but they must understand <em>something</em> about what is going on in order to follow the instructions to get the correct answer. Maybe they can't (initially) tell whether a question reads "what day of the week is it today?" or "what day of the week is it tomorrow?" and correspondingly don't know which day of the week the response they write refers to, but they can at least immediately figure out that if they need to provide the day of the week in order to look up the response, then the question must have been something to do with days of the week. I bet with enough effort they could even learn to read Chinese, by keeping track of what <em>other</em> information is needed to answer questions besides the questions themselves.
But whether or not they can learn to understand Chinese, at the very least they are providing an understanding of the state of the world that is not present in the instructions they are using. They can't be reduced to an unthinking "wire." I don't see how the Room works if the person inside is supposed to do no thinking, provide no information, and only look up the characters they receive and write down corresponding answers. Unless I'm missing something, if the person inside can't provide additional understanding of the current state of the world, then the Room can only be used to answer the types of questions an encyclopedia or calculator could answer (i.e. those whose answers don't depend on any information beyond the question itself) and can't be used to answer the broader class of questions that humans can answer.comment:www.metafilter.com,2025:site.210505-8770938Tue, 30 Sep 2025 19:03:16 -0800judgement dayBy: SPrintF
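That objection can be put in code: a rulebook that is a pure function of the question text cannot answer time-dependent questions, and the moment the operator supplies world-state, the operator is contributing more than symbol-shuffling. A minimal Python sketch, with the rulebook entry invented for illustration:
<pre><code>
from datetime import date

# A rulebook that keys only on the question: a pure function of its input.
RULEBOOK = {"What day of the week is it?": "I don't know."}  # invented entry

def room_without_state(question: str) -> str:
    # No matter how large the book grows, the answer can't depend on today.
    return RULEBOOK.get(question, "...")

def room_with_operator(question: str) -> str:
    # The operator smuggles in knowledge of the world: the system's
    # competence no longer lives in the book alone.
    if question == "What day of the week is it?":
        return date.today().strftime("%A")
    return RULEBOOK.get(question, "...")

print(room_without_state("What day of the week is it?"))  # I don't know.
print(room_with_operator("What day of the week is it?"))  # e.g. Tuesday
</code></pre>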
http://www.metafilter.com/210505/RIP-John-Searle#8770939
<em>if the person inside can't provide additional understanding of the current state of the world</em>
"I don't know" is an acceptable response. It's interesting that LLMs try to avoid stating that.comment:www.metafilter.com,2025:site.210505-8770939Tue, 30 Sep 2025 19:07:57 -0800SPrintFBy: judgement day
http://www.metafilter.com/210505/RIP-John-Searle#8770942
<em>"I don't know" is an acceptable response.</em>
If I'm the Chinese-speaker and I ask the Room "what day of the week is it?" and it responds "I don't know" -- well, I'm pretty sure I'm not going to conclude that the Room can understand Chinese...comment:www.metafilter.com,2025:site.210505-8770942Tue, 30 Sep 2025 19:15:24 -0800judgement dayBy: flabdablet
http://www.metafilter.com/210505/RIP-John-Searle#8770948
<em>we'd know that human beings are just really wet and slimy computers</em>
People willing to countenance the idea that a human being is <em>just</em> or <em>merely</em> an instance of some other category are probably worth actively excluding from positions of power, especially if they're all "essentially" about it.comment:www.metafilter.com,2025:site.210505-8770948Tue, 30 Sep 2025 20:08:00 -0800flabdabletBy: kittens for breakfast
http://www.metafilter.com/210505/RIP-John-Searle#8770969
Yeah, the people who think people are really just machines always seem to be the very least functional people, don't they? And yet they're also the first to tell you how much better they are than everyone else. It's a conundrum for sure, hell if I can figure it out.comment:www.metafilter.com,2025:site.210505-8770969Tue, 30 Sep 2025 21:25:51 -0800kittens for breakfastBy: flabdablet
http://www.metafilter.com/210505/RIP-John-Searle#8770979
<em>hell if I can figure it out</em>
For me it comes down to how much <em>detail</em> folks are willing to ignore and/or dismiss as irrelevant.
The single most irritating misfeature of the Chinese Room argument, from my perspective, was Searle's eagerness to use dismissive language like "scraps of paper" to characterize the mind-punishingly massive amount of infrastructure that the argument just imagines into existence.
Whether or not it's reasonable to include both people and machines in the same category seems to me to hinge on why you're keen to do that. If it comes down to some kind of conviction that causal analysis rooted in physics will at some point be sufficient to account for all of human behaviour in practice, I think you're barking up the wrong tree and shouldn't bother.
If it's "in principle" rather than "in practice" then I'm going to need you to start with a careful exposition of exactly what principles you're invoking and exactly why you think that applying them in this instance might illuminate a useful path toward clarity.
"In principle" is the self-aggrandizing handwaver's best rhetorical friend. I don't recall ever <em>once</em> in the 63 years I've been on this planet encountering an "in principle" argument that's actually any good; when it comes right down to it, there <em>is only</em> practice.
Whether a thing be machine or whether it be biological or whether it be both, causal analysis of it rooted in physics <em>has severe practical limitations</em> and it pays never to lose sight of that.
LLMs are a case in point. Even though those <em>are</em> uncontroversially machines, it takes <em>way</em> longer to predict any LLM's output based on any analysis of its inputs and present internal state than it would simply to run it and see what it does. Even getting <em>close</em> to making any such prediction arrive ahead of the observed behaviour requires abandoning strictly physical concepts entirely, instead reasoning solely in terms of much more heavily chunked abstractions.
Physics is excellent for telling me why I ought to wear a seatbelt when I'm driving. It's utterly useless for telling me why I just typed out this lot and hit Post. But that says much more about physics than it does about any real-world system that one might consider applying its concepts to.
Much the same applies to free will, intentionality, consciousness, determinism, qualia, gods, demons and all the rest of the philosophical gamut. Expecting <em>any</em> of these thinking tools to provide a fundamental and/or definitive account of the real world of which we're all parts strikes me as somewhere between touchingly naive and massively arrogant.
Searle never really struck me as touchingly naive.comment:www.metafilter.com,2025:site.210505-8770979Tue, 30 Sep 2025 22:28:43 -0800flabdabletBy: adrienneleigh
http://www.metafilter.com/210505/RIP-John-Searle#8770988
<a href="/210505/RIP-John-Searle#8770979">flabdablet</a>: "<i>"In principle" is the self-aggrandizing handwaver's best rhetorical friend. I don't recall ever once in the 63 years I've been on this planet encountering an "in principle" argument that's actually any good; when it comes right down to it, there is only practice.</i>"
"In theory there is no difference between theory and practice, while in practice there is." —<a href="https://quoteinvestigator.com/2018/04/14/theory/">Benjamin Brewster</a> (often erroneously cited as by Yogi Berra)comment:www.metafilter.com,2025:site.210505-8770988Tue, 30 Sep 2025 23:28:42 -0800adrienneleighBy: flabdablet
http://www.metafilter.com/210505/RIP-John-Searle#8770997
<em>When we see posts about influential findings in science (or the people who did the work), for some reason we don't see all that many people here with the years of training in the specific subject necessary to tear down ideas that have already been vetted and deemed valuable by qualified peers in the real world.</em>
Philosophy benefits from scholasticism much, much less than science does because its domain of applicable subject matter is so much narrower.
It <em>looks</em> wider, but that's just parochialism.comment:www.metafilter.com,2025:site.210505-8770997Wed, 01 Oct 2025 00:58:54 -0800flabdabletBy: polymodus
http://www.metafilter.com/210505/RIP-John-Searle#8770999
There's nothing wrong with saying "in principle"; it just means first-principles reasoning, and it declares a logical argument, which is often useful. And big-O notation is a good example of "in principle" reasoning about computation, like exponential growth arguments about algorithms or COVID.comment:www.metafilter.com,2025:site.210505-8770999Wed, 01 Oct 2025 01:13:53 -0800polymodusBy: flabdablet
http://www.metafilter.com/210505/RIP-John-Searle#8771001
Big-O actually provides a really good example of the kind of abuses of "in principle" that I so frequently find myself annoyed by.
It is <em>often</em> the case that an algorithm with worse big-O behaviour proves to be better in practice, exactly because of details completely extraneous to big-O considerations such as simplicity of implementation or inner-loop speed. Ignoring those details purely for the sake of achieving better big-O behaviour is doing software engineering wrong.
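A concrete instance of that point, for anyone who wants one: an O(n&sup2;) insertion sort routinely beats an O(n log n) merge sort on small inputs, which is exactly why production sorts such as CPython's Timsort fall back to insertion sort for short runs. A runnable Python sketch:
<pre><code>
import random
import timeit

def insertion_sort(a):
    # O(n^2), but tiny constants and great cache behaviour.
    a = a[:]
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

def merge_sort(a):
    # O(n log n), but with allocation and bookkeeping overhead.
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

small = [random.random() for _ in range(16)]
print("insertion:", timeit.timeit(lambda: insertion_sort(small), number=20_000))
print("merge:    ", timeit.timeit(lambda: merge_sort(small), number=20_000))
# On 16 elements the "worse" algorithm usually wins comfortably.
</code></pre>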
The history of philosophy is <em>littered</em> with dubious principles that have all been used at some time or other as "in-principle" justification for policy that works to the severe detriment of ordinary people.comment:www.metafilter.com,2025:site.210505-8771001Wed, 01 Oct 2025 01:58:39 -0800flabdabletBy: rum-soaked space hobo
http://www.metafilter.com/210505/RIP-John-Searle#8771008
Okay, new goal unlocked:
Live a life in such a way that none of the comments on your MeFi obit thread are more than one line.comment:www.metafilter.com,2025:site.210505-8771008Wed, 01 Oct 2025 03:22:11 -0800rum-soaked space hoboBy: Didymus
http://www.metafilter.com/210505/RIP-John-Searle#8771037
<a href="https://www.nybooks.com/articles/1995/12/21/the-mystery-of-consciousness-an-exchange/">Here is a Dennett/Searle exchange in case you missed it at the time</a>
It is good to have a rivalcomment:www.metafilter.com,2025:site.210505-8771037Wed, 01 Oct 2025 06:32:44 -0800DidymusBy: Brian B.
http://www.metafilter.com/210505/RIP-John-Searle#8771051
<a href="https://www.youtube.com/watch?v=973akk1q5Ws">Searle interview on free will.</a>comment:www.metafilter.com,2025:site.210505-8771051Wed, 01 Oct 2025 07:41:30 -0800Brian B.By: L.P. Hatecraft
http://www.metafilter.com/210505/RIP-John-Searle#8771320
Here's another exchange between <a href="https://www.nybooks.com/articles/1982/06/24/the-myth-of-the-computer-an-exchange/">Searle and Dennett</a>, from the early 80s.
I could have sworn that Douglas Hofstadter addressed Searle's Chinese Room argument in Gödel Escher Bach, but it seems like I'm misremembering and it was in The Mind's I (co-written with Dennett).
See also: <a href="https://en.wikipedia.org/wiki/China_brain">China brain</a>.
Personally I think this reply from Searle to the "systems reply" was a bit of a cop out:
<blockquote>The idea is that while a person doesn't understand Chinese, somehow the conjunction of that person and bits of paper might understand Chinese. It is not easy for me to imagine how someone who was not in the grip of an ideology would find that idea at all plausible.</blockquote>
The whole thought experiment is contrived as an "intuition pump" to make the systems reply as implausible as possible, but I think if such a system really was possible (it would most likely take millions of years to output one sentence) then it really would understand Chinese. Play language games, get language prizes. Just saying in effect "lol that's ridiculous, you're brainwashed" isn't a convincing counter-argument.comment:www.metafilter.com,2025:site.210505-8771320Wed, 01 Oct 2025 18:41:20 -0800L.P. HatecraftBy: mittens
http://www.metafilter.com/210505/RIP-John-Searle#8771390
<i>somehow the conjunction of that person and bits of paper might understand Chinese</i>
"Here at our translation office, we have one person who speaks French and another that speaks Chinese. However, there is <em>no way whatsoever</em> that you could say our translation office understands Chinese; I need you to strike that from the marketing materials immediately."comment:www.metafilter.com,2025:site.210505-8771390Thu, 02 Oct 2025 05:08:40 -0800mittensBy: flabdablet
http://www.metafilter.com/210505/RIP-John-Searle#8771570
Shorter Dennett: Eyeballs are a thing, and here are some clues we've gathered about how they might work for seeing with.
Shorter Searle: I cannot see my own eyeballs, therefore whatever Dennett says they're for is irrelevant, wrong, and argued in bad faith, and also why does he keep insisting we're all actually blind?comment:www.metafilter.com,2025:site.210505-8771570Thu, 02 Oct 2025 10:19:05 -0800flabdabletBy: mscibing
http://www.metafilter.com/210505/RIP-John-Searle#8772057
I feel that Aunt Hillary the intelligent ant colony in Gödel Escher Bach is a bit of a response to Searle's Chinese Room. Aunt Hillary is fanciful sure, but ant colonies very much do have emergent behavior. And really, basic metazoan biology is a problem for the Chinese Room argument; there are people who manifestly do speak Mandarin or Cantonese but you would search in vain for a neuron in their brains that understands these languages.comment:www.metafilter.com,2025:site.210505-8772057Fri, 03 Oct 2025 09:51:15 -0800mscibingBy: L.P. Hatecraft
http://www.metafilter.com/210505/RIP-John-Searle#8772225
Gödel Escher Bach was published in 1979 and Searle published the Chinese Room paper in 1980, so it wasn't a direct response, but I agree it does address the same issues; the Chinese Room is just a particularly memorable and catchy way of framing them.
It's honestly a bit strange to me how these arguments and counter-arguments seem to have been forgotten - there are things people say about LLMs that seem to map directly to the Chinese Room argument. For example, when people say "LLMs don't understand anything, they're just doing a bunch of linear algebra", you can think of the man in the Chinese Room doing a bunch of large matrix multiplications by hand. Can't people just make the "systems reply" to this? It's not even like they are saying that AI may be possible but LLMs ain't it; "it's just linear algebra" style arguments foreclose <i>any</i> possibility of artificial intelligence, because there will always be some underlying mechanism, be it linear algebra or even just basic Turing machine operations. We can't just make an irreducible intelligence, we have to make it <i>out of something</i>.
Maybe some people aren't familiar with the concept of emergence, never read GEB or similar or heard of Conway's Game of Life or whatever, but it seems there's plenty of people who are - and who might accept the idea of emergent intelligence in the context of an abstract argument or thought experiment set in the far future, but instinctively reject it when there's a suggestion of it being an actual thing people might create in real life. It's like they default to these "common sense" intuition-based arguments, and this is what Searle specialises in (his free will arguments are also like this).comment:www.metafilter.com,2025:site.210505-8772225Fri, 03 Oct 2025 17:03:58 -0800L.P. HatecraftBy: flabdablet
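For readers who haven't met it, Conway's Game of Life makes the emergence point runnable in a dozen lines of Python: the rules mention nothing but neighbour counts, yet gliders (and, famously, universal computation) fall out of the system as a whole:
<pre><code>
from collections import Counter

def step(live):
    """One generation: a cell is alive next tick iff it has exactly 3 live
    neighbours, or has exactly 2 and is currently alive."""
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))  # the same shape, shifted one cell diagonally
</code></pre>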
http://www.metafilter.com/210505/RIP-John-Searle#8772286
<em>It's not even like they are saying that AI may be possible but LLMs ain't it</em>
For what it's worth, that's been my own position for a <em>very</em> long time now. Main reason I maintain the opinion that LLMs ain't it is that I see them as such a <em>comically</em> crude Plato's Cave sketch of what <em>it</em> would actually need to be; there is so much more to us, in which group I include large numbers of non-primates, than meets the I.comment:www.metafilter.com,2025:site.210505-8772286Fri, 03 Oct 2025 23:44:41 -0800flabdabletBy: flabdablet
http://www.metafilter.com/210505/RIP-John-Searle#8772287
<em>they default to these "common sense" intuition-based arguments, and this is what Searle specialises in (his free will arguments are also like this)</em>
Yeah, that's mainly why I have him filed under Men Who Think They're Much Cleverer Than They Actually Are.
<a href="https://crookedtimber.org/2018/03/21/liberals-against-progressives/#comment-729288">Wilhoit on conservatism</a> includes a remark apposite to Searle on the Hard Problem: <blockquote>As the core proposition of conservatism is indefensible if stated baldly, it has always been surrounded by an elaborate backwash of pseudophilosophy, amounting over time to millions of pages. All such is axiomatically dishonest and undeserving of serious scrutiny.</blockquote>comment:www.metafilter.com,2025:site.210505-8772287Fri, 03 Oct 2025 23:49:20 -0800flabdabletBy: mscibing
http://www.metafilter.com/210505/RIP-John-Searle#8772476
Ah, thank you L. P. Hatecraft for the correction on the order of Aunt Hillary and the Chinese Room.
<i>It's not even like they are saying that AI may be possible but LLMs ain't it,...</i>
I think that LLMs have been badly oversold, and while I wouldn't say they aren't intelligent at all, that intelligence is very limited, fragmentary, and alien, obscured by their impressive ability to imitate human conversation. My workplace is pushing hard for us to use A.I., and one of the chatbots comes off as a human that would rather make shit up than reveal that they don't know something. That's not at all what's going on, but it's very easy to read human-like thinking into it. And even knowing that, it's still infuriating when your plastic pal who's fun to be with lies to your face.
I don't think there is going to be an "it", not because A.I. is impossible, but because intelligence is a bit ill-defined and the old sci-fi idea of the machines suddenly "waking up" and thinking the way humans think is unrealistic. It's going to be a slog, with a lot of grift along the way.comment:www.metafilter.com,2025:site.210505-8772476Sat, 04 Oct 2025 16:41:46 -0800mscibing
"Yes. Something that interested us yesterday when we saw it." "Where is she?" His lodgings were situated at the lower end of the town. The accommodation consisted[Pg 64] of a small bedroom, which he shared with a fellow clerk, and a place at table with the other inmates of the house. The street was very dirty, and Mrs. Flack's house alone presented some sign of decency and respectability. It was a two-storied red brick cottage. There was no front garden, and you entered directly into a living room through a door, upon which a brass plate was fixed that bore the following announcement:¡ª The woman by her side was slowly recovering herself. A minute later and she was her cold calm self again. As a rule, ornament should never be carried further than graceful proportions; the arrangement of framing should follow as nearly as possible the lines of strain. Extraneous decoration, such as detached filagree work of iron, or painting in colours, is [159] so repulsive to the taste of the true engineer and mechanic that it is unnecessary to speak against it. Dear Daddy, Schopenhauer for tomorrow. The professor doesn't seem to realize Down the middle of the Ganges a white bundle is being borne, and on it a crow pecking the body of a child wrapped in its winding-sheet. 53 The attention of the public was now again drawn to those unnatural feuds which disturbed the Royal Family. The exhibition of domestic discord and hatred in the House of Hanover had, from its first ascension of the throne, been most odious and revolting. The quarrels of the king and his son, like those of the first two Georges, had begun in Hanover, and had been imported along with them only to assume greater malignancy in foreign and richer soil. The Prince of Wales, whilst still in Germany, had formed a strong attachment to the Princess Royal of Prussia. George forbade the connection. The prince was instantly summoned to England, where he duly arrived in 1728. "But they've been arrested without due process of law. They've been arrested in violation of the Constitution and laws of the State of Indiana, which provide¡ª" "I know of Marvor and will take you to him. It is not far to where he stays." Reuben did not go to the Fair that autumn¡ªthere being no reason why he should and several why he shouldn't. He went instead to see Richard, who was down for a week's rest after a tiring case. Reuben thought a dignified aloofness the best attitude to maintain towards his son¡ªthere was no need for them to be on bad terms, but he did not want anyone to imagine that he approved of Richard or thought his success worth while. Richard, for his part, felt kindly disposed towards his father, and a little sorry for him in his isolation. He invited him to dinner once or twice, and, realising his picturesqueness, was not ashamed to show him to his friends. Stephen Holgrave ascended the marble steps, and proceeded on till he stood at the baron's feet. He then unclasped the belt of his waist, and having his head uncovered, knelt down, and holding up both his hands. De Boteler took them within his own, and the yeoman said in a loud, distinct voice¡ª HoME²¨¶àÒ°´²Ï·ÊÓÆµ ѸÀ×ÏÂÔØ ѸÀ×ÏÂÔØ
ENTER NUMBET 0016hlchain.com.cn www.letubd.com.cn www.haonongmin.com.cn www.ilynn.com.cn www.icsngr.com.cn njfi.com.cn www.sxsilin.com.cn www.mlsuiu.com.cn otjejf.com.cn www.wnygbx.com.cn