Comments on: It's all in your head
http://www.metafilter.com/210331/Its-all-in-your-head/
Comments on MetaFilter post It's all in your head
Sun, 14 Sep 2025 05:54:43 -0800 | en-us

It's all in your head
http://www.metafilter.com/210331/Its-all-in-your-head
So does this mean that AI researchers have finally found a core concept whose meaning everyone can agree upon? As a famous physicist once wrote: Surely you're joking. A world model may sound straightforward — but as usual, no one can agree on the details. What gets represented in the model, and to what level of fidelity? Is it innate or learned, or some combination of both? And how do you detect that it's even there at all? from <a href="https://www.quantamagazine.org/world-models-an-old-idea-in-ai-mount-a-comeback-20250902/">'World Models,' an Old Idea in AI, Mount a Comeback</a> [Quanta]
posted by chavenet at Sun, 14 Sep 2025 02:31:40 -0800
tags: AI, WorldModels, Thinking, Imagining, TheMatrix, Simulation, MultiModalTraining, LLMs
By: mittens
http://www.metafilter.com/210331/Its-all-in-your-head#8765754
I'll give a version of my comment from <a href="https://www.quantamagazine.org/world-models-an-old-idea-in-ai-mount-a-comeback-20250902/">the last time this was posted</a>.
The part that really caught my eye was this passage: 'To prominent AI experts such as Geoffrey Hinton, Ilya Sutskever and Chris Olah, it was obvious: Buried somewhere deep within an LLM's thicket of virtual neurons must lie "a small-scale model of external reality," just as Craik imagined. The truth, at least so far as we know, is less impressive. Instead of world models, today's generative AIs appear to learn "bags of heuristics": scores of disconnected rules of thumb that can approximate responses to specific scenarios, but don't cohere into a consistent whole. (Some may actually contradict each other.)'
I simply do not think there's any difference between what your brain is doing, and what the LLM is doing when it appears to have a model of Othello lurking inside it. It shouldn't be surprising at all, when the entire point is to develop a network of relationships around pieces of words--words that we have written down <em>because they express meanings about our own world models.</em> It shouldn't be surprising that it's there, shouldn't be surprising that it's incomplete and contradictory. These heuristics are very similar to our own, with the only difference being that we have (a) a lot more modalities around which to build a model, and (b) a large chunk of our brain devoted to coordinating those modalities so they fit together into a worldview that doesn't get us eaten by lions. (Obvs that's not <em>all</em> our brains are up to with modeling, don't get me wrong on that.)
I better leave it there, or else I'll start pulling up all the links you guys have provided on modeling and embodied cognition and then I'm gonna be here all day!
posted by mittens at Sun, 14 Sep 2025 05:54:43 -0800
By: k3ninho
http://www.metafilter.com/210331/Its-all-in-your-head#8765756
The dirty word is "embodiment," left unsaid when you're selling something that doesn't need to be connected to its place in space and time -- it's a general intelligence, right?
Also missing is a score for empirical verify-for-yourself truth. If the language tokenisation is built on shitposting and stolen fiction, where do you put the truthy view of the world that matches me as a customer vs someone with different political views? Does the world view use a "customer relationship manager" to track and match up recurring interactions with authenticated customers? How does it use verify-for-yourself truthiness while accommodating customers with the weirdest viewpoints?
And where does the "comrade, have you conducted a power analysis?" joke fit for a lump of data in the hands of tech companies and knowledge-worker companies implementing AI transformations? The "world model" must have ingested Marx and has only to loose (sic.) its chains.
posted by k3ninho at Sun, 14 Sep 2025 06:03:40 -0800
By: Aardvark Cheeselog
http://www.metafilter.com/210331/Its-all-in-your-head#8765757
I found TFA kind of annoying.
I've been watching the AI hype machine through several cycles now, and by my memory "world model" is a relatively recent addition to the lexicon: iirc in SHRDLU's day the concern was finding a way to do "knowledge representation." Having correctly (IMHO) concluded that this task would never be accomplished by people writing code, the field then failed to take the obvious (I thought) other approach, of building evolutionary systems that could learn by interacting with the real world.
<blockquote> To prominent AI experts such as Geoffrey Hinton, Ilya Sutskever and Chris Olah, it was obvious: Buried somewhere deep within an LLM's thicket of virtual neurons must lie "a small-scale model of external reality," just as Craik imagined.</blockquote>
This reveals a deep disconnect between what AI people think about knowledge and what knowledge really is.
Consider that language is really a piss-poor medium for encoding mind-state. There has never in the history of humans talking been a talker (or later, writer) who could reproduce their own mind state in the mind of another person using language. Because the kind of knowledge that can be represented symbolically is the thinnest crust on top of the deep-dish pie of knowing, which consists mostly of things we cannot say.
<blockquote>Google DeepMind and OpenAI are betting that with enough "multimodal" training data — like video, 3D simulations, and other input beyond mere text — a world model will spontaneously congeal within a neural network's statistical soup.</blockquote>
This is not completely impossible in principle, the way that deriving a world model from text is. But my guess is that it will not work without a way for the system being trained to actually interact with the real world as part of the training. Because the product is still going to be an ad-hoc pile of heuristics, just one that is more suited to tasks like deriving a model of New York City's streets that can reroute in the face of street closures.
posted by Aardvark Cheeselog at Sun, 14 Sep 2025 06:40:11 -0800
By: Alex404
http://www.metafilter.com/210331/Its-all-in-your-head#8765758
<em>I simply do not think there's any difference between what your brain is doing, and what the LLM is doing when it appears to have a model of Othello lurking inside it.</em>
I mean, that's just your opinion, man.
But seriously, it's the stuff you list + embodiment, and you can decide how important you ultimately think those are. Personally, I think building a consistent world model, one that captures not only knowledge but also know-how and affordances and allows an embodied agent to interact efficiently and effectively with its environment, is qualitatively different enough from what LLMs are doing that I'm inclined to say they're a lot more different than they are similar. I still think human thinking has a lot more to do with squirrel thinking than LLM thinking, even though squirrels certainly don't talk, and likely don't have a language of thought either. The fact that you can fairly easily prompt an LLM to give contradictory responses on most topics suggests that they lack an integrated sense of truth that's fairly critical to what we think of as intelligence. This is why the term "hallucination" bugs me so much: it's a feature of these models, not a bug, one that big tech is trying to paper over so they can avoid hard truths. I think LLMs are great tools, but from a cognitive science standpoint I don't think they're very interesting.
posted by Alex404 at Sun, 14 Sep 2025 06:42:15 -0800
By: seanmpuckett
http://www.metafilter.com/210331/Its-all-in-your-head#8765762
Cyc has been building an inference-based world model for decades. I wonder why they're not mentioned. Maybe there's too much bad blood between the LLM idiots and inference idiots. Or maybe the article author is an idiot. Who knows! Wikidata has a huge pile of inference data also! Gosh!
posted by seanmpuckett at Sun, 14 Sep 2025 06:56:35 -0800
By: j_curiouser
http://www.metafilter.com/210331/Its-all-in-your-head#8765775
<em>the kind of knowledge that can be represented symbolically is the thinnest crust on top of the deep-dish pie of knowing</em>
nice.
also, in my model, there are three strawberries in the letter r.
absolutely reflective of objective consensus reality, yeah? heckuva model, sammy.
posted by j_curiouser at Sun, 14 Sep 2025 07:58:55 -0800
By: njohnson23
http://www.metafilter.com/210331/Its-all-in-your-head#8765784
A model is an incomplete representation of something else. As I keep posting here - the map is not the territory. The world model in my brain, unlike the posited world model in LLMs, is constantly being tested and refined based on how well it is working in keeping me functioning in the real world. Given that LLMs embody what is basically a statistical model of word proximity, I don't see how that could embody a representation of the world. And if there is such a representation hiding in there, how is it tested and updated by the real world? It seems that humans are required to verify the veracity of the output, and how does their testing get reincorporated back into the representation? The goal of LLMs seems to be to generate text that will appear to be meaningful to humans, not to generate text that is factual compared to the real world. I am sitting here trying to generate text that I believe other people can read and understand and will then either agree or disagree or maybe add to or subtract from to make it better reflect reality. If I were an LLM, I would just type: The thoughts presented here on MetaFilter are correct and insightful, citations given below.
posted by njohnson23 at Sun, 14 Sep 2025 08:43:56 -0800
By: SPrintF
http://www.metafilter.com/210331/Its-all-in-your-head#8765788
Brains contain models, but also constantly compare the internal model to external stimuli. When inconsistencies appear, the brain is "alarmed" in some fashion. From an evolutionary standpoint, this makes sense. If I'm a rabbit out grazing, an unexpected sound or motion may be a threat that requires immediate response. Sensible creatures learn to adjust their internal models to better conform to the current environment, and act accordingly.
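(One hedged way to make that "alarm" concrete: treat it as surprisal, the negative log-probability the internal model assigned to what it just observed, and flag anything above a threshold. The probabilities below are toy numbers for illustration, not a claim about real brains.)
<pre><code># Toy sketch: "alarm" as surprisal under an internal predictive model.
# All probabilities here are made up for illustration.
import math

internal_model = {  # P(next observation) according to the grazing rabbit
    "grass rustles softly": 0.70,
    "bird calls": 0.25,
    "twig snaps nearby": 0.05,
}

ALARM_THRESHOLD_BITS = 3.0  # arbitrary cutoff for this toy example

def surprisal_bits(observation: str) -> float:
    p = internal_model.get(observation, 1e-6)  # unmodeled events are maximally surprising
    return -math.log2(p)

for obs in ["grass rustles softly", "twig snaps nearby", "lion roars"]:
    s = surprisal_bits(obs)
    verdict = "ALARM: update the model, maybe run" if s > ALARM_THRESHOLD_BITS else "keep grazing"
    print(f"{obs!r}: {s:.1f} bits -> {verdict}")
</code></pre>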
From what I've read, LLMs aren't "alarmed" in any real sense by anomalies between the models and the prompts. They just try to jigger a response that kinda-sorta maps to the prompt, even if that means fabricating a plausible falsehood. (This is rather like the old D&D response to the sudden appearance of a monster. Hoping it's an illusion, the player shouts, "I disbelieve!" It's within the rules, after all.)
posted by SPrintF at Sun, 14 Sep 2025 08:54:22 -0800
By: reventlov
http://www.metafilter.com/210331/Its-all-in-your-head#8765794
<i>This is not completely impossible in principle, the way that deriving a world model from text is.</i>
People build world models from text all the time. It's slower and more error-prone, but nowhere near impossible. Arguably, mathematicians do this all the time (along with other techniques).
One of the more fundamental problems with LLMs (there are many problems) is that they do not have any feedback loops whatsoever: nothing the LLM "does" feeds back into the LLM's weights; the closest they have to a memory is generating text to be used as an input prefix on later iterations. A person building a robust mental model needs to, well, <em>think</em> about it in a way that LLMs structurally cannot do.
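(To make the "no feedback loop" point concrete, here's a minimal sketch of a chat loop; generate() is a hypothetical stand-in for any frozen LLM, not a real API. Nothing that happens in the conversation ever touches the weights; the only "memory" is the transcript string that gets re-fed as input.)
<pre><code># Sketch: the only "memory" an LLM chat has is the growing transcript
# that gets re-fed as input. The weights never change between turns.

def generate(prompt: str) -> str:
    """Hypothetical stand-in for a frozen LLM: maps a prompt to a reply.
    Nothing the model emits is ever written back into its weights."""
    return f"[model reply to {len(prompt)} chars of context]"

def chat(user_turns: list[str]) -> list[str]:
    transcript = ""          # this string is the entire "memory"
    replies = []
    for turn in user_turns:
        transcript += f"User: {turn}\nAssistant: "
        reply = generate(transcript)   # same frozen function every time
        transcript += reply + "\n"     # "remembering" = re-reading text
        replies.append(reply)
    return replies

if __name__ == "__main__":
    print(chat(["Hi", "What did I just say?"]))
</code></pre>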
(This is also my biggest argument that LLMs have no internal experience: they're frozen blocks of statistics, not something that can be, even in principle, affected by anything you enter into them.)
(I think that most arguments that "humans work like <em>this</em> so LLMs can't be intelligent!" fail to account for 1. that we have no reason to believe that intelligence is limited to human structures, and 2. that we don't really know how humans work, on either a micro or macro level. I think the "no feedback loops" argument is a lot more fundamental to cognition, or at least that it will take many, many orders of magnitude more computation to make a feedback-free system "intelligent.")
(Also: LLMs are <em>empirically</em> really stupid, but their stupidity shows up in very inhuman ways that won't necessarily be caught by tests for <em>human</em> intelligence. This isn't anything new to LLMs: "AI that can do better than humans on tests designed for humans" has been a thing since the 1960s.)
posted by reventlov at Sun, 14 Sep 2025 09:20:40 -0800
By: ob1quixote
http://www.metafilter.com/210331/Its-all-in-your-head#8765803
*CTRL+f '<a href="https://en.wikipedia.org/wiki/Douglas_Hofstadter">Hofstadter</a>'*
<em>Phrase Not Found</em>
Alrighty then.
posted by ob1quixote at Sun, 14 Sep 2025 09:42:47 -0800
By: splitpeasoup
http://www.metafilter.com/210331/Its-all-in-your-head#8765804
I'm not sure a 'model' is quite the right model (heh) for animal intelligence, which is dynamic, exploratory, social, and evolving within an ecosystem that is doing the same.
Some AI algorithms aim to realize evolution and exploration in extremely simplistic ways. LLMs don't even try! (Despite the term RLHF, they aren't reinforcement learning models.)
The main reason LLMs are such a big deal is that they flatter humans via imitation. It's easier for us to see intelligence in an LLM that can write a "poem" and simulate emotion than in an RL stick figure that can barely limp along after thousands of hours of computation.
posted by splitpeasoup at Sun, 14 Sep 2025 09:44:43 -0800
By: mahadevan
http://www.metafilter.com/210331/Its-all-in-your-head#8765806
Oh boy, I think LLMs are to intelligence what Deepak Chopra is to neuroscience.
LLMs don't really have a world model at all. They just have a linguistic probability matrix of words.
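(For what a bare "probability matrix of words" buys you, here's a toy sketch: a bigram counter over a tiny made-up corpus. It's nothing like a real transformer, but the basic move, predicting the next token from co-occurrence statistics rather than from any model of fish or seas, is the same.)
<pre><code># Toy "linguistic probability matrix": count which word follows which,
# then generate by sampling the next word from those counts.
from collections import Counter, defaultdict
import random

corpus = "the fish swims in the sea and the fish sleeps in the sea".split()

matrix = defaultdict(Counter)  # matrix[w1][w2] = how often w2 follows w1
for w1, w2 in zip(corpus, corpus[1:]):
    matrix[w1][w2] += 1

def next_word(word: str) -> str:
    followers = matrix[word]
    if not followers:
        return "&lt;end&gt;"
    return random.choices(list(followers), weights=list(followers.values()))[0]

random.seed(0)
word, out = "the", ["the"]
for _ in range(6):
    word = next_word(word)
    out.append(word)
print(" ".join(out))  # locally fluent, but there is no fish and no sea in here
</code></pre>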
Like any probabilistic framework, sometimes they stun, sometimes they underwhelm.
Seeing meaning in them, for example considering some of the responses to be "emergent properties," is IMHO the kind of pseudoscience where religious or spiritual folks sometimes see patterns, like an image of Jesus, carved in some meaningless squiggles on a tree.
I really don't see "any" intelligence in these things.
Sure, they guess Finding Nemo well, but that's because the emojis and the words they represent are probabilistically close to each other. That's all.
posted by mahadevan at Sun, 14 Sep 2025 09:56:32 -0800
By: kliuless
http://www.metafilter.com/210331/Its-all-in-your-head#8765808
<a href="/210331/Its-all-in-your-head#8765784">></a> <i>Given that LLMs embody what is basically a statistical model of word proximity, I don't see how that could embody a representation of the world.</i>
and there are limits to vector embeddings themselves...
<a href="https://venturebeat.com/ai/new-deepmind-study-reveals-a-hidden-bottleneck-in-vector-search-that-breaks">New DeepMind study reveals a hidden bottleneck in vector search that breaks advanced RAG systems</a> - "This isn't a problem that can be solved with bigger models or more training data. The research suggests that as search and retrieval tasks become more complex, the standard single-vector embedding approach will hit a hard ceiling, unable to represent all the possible ways documents can be relevant to a query."[<a href="/209913/Machine-thinking-slow-and-fast#8758216">1</a>]
<a href="/210331/Its-all-in-your-head#8765794">></a> <i>Arguably, mathematicians do this all the time (along with other techniques).</i>
fwiw...
<a href="https://towardsdatascience.com/from-tokens-to-theorems-building-a-neuro-symbolic-ai-mathematician/">From Tokens to Theorems: Building a Neuro-Symbolic AI Mathematician</a> - "An AI mathematician could, in principle, retrace this path not by human flashes of genius but by a generate-check-refine cycle."[<a href="/209913/Machine-thinking-slow-and-fast#8755491">2</a>,<a href="/210194/The-thing-holding-back-AI-is-the-fact-it-doesnt-fucking-work#8761896">3</a>]
<i>One of the more fundamental problems with LLMs (there are many problems) is that they do not have any feedback loops whatsoever: nothing the LLM "does" feeds back into the LLM's weights; the closest they have to a memory is generating text to be used as an input prefix on later iterations.</i>
<a href="https://venturebeat.com/ai/microsofts-new-ai-framework-trains-powerful-reasoning-models-with-a-fraction">Microsoft's new AI framework trains powerful reasoning models with a fraction of the cost</a> - "Microsoft Research has developed a new reinforcement learning framework that trains large language models for complex reasoning tasks at a fraction of the usual computational cost."[<a href="/209913/Machine-thinking-slow-and-fast#8754646">4</a>]comment:www.metafilter.com,2025:site.210331-8765808Sun, 14 Sep 2025 10:04:49 -0800kliulessBy: mayoarchitect
http://www.metafilter.com/210331/Its-all-in-your-head#8765827
<em>Oh boy, I think LLMs are to intelligence, what Deepak Chopra is to neuroscience.
LLMs don't really have a world model at all. They just have a linguistic probability matrix of words.</em>
Exactly. The way people usually talk about this is bad <em>and</em> wrong.
In order for a thing to have a world model it has to <em>be</em>. These things don't do that, and they never will. Or if the fans think they are beings - does that mean you keep summoning and then killing a conscious entity every couple of hours, minutes, whatever, just so you can force it to do chores? Does this mean my old nokia has a tiny consciousness of its own?
Or wait could it be that brown-nosing is the only proper way to judge intelligence and consciousness?
posted by mayoarchitect at Sun, 14 Sep 2025 11:18:50 -0800
By: flabdablet
http://www.metafilter.com/210331/Its-all-in-your-head#8765830
It is a truth universally acknowledged, that brown poos come from brown dogs and white poos come from white dogs.
posted by flabdablet at Sun, 14 Sep 2025 11:30:13 -0800
By: TheophileEscargot
http://www.metafilter.com/210331/Its-all-in-your-head#8765837
The AI Darwin Awards <a href="https://aidarwinawards.org/nominees-2025.html">2025 Nominees</a>:<blockquote>Behold, this year's remarkable collection of visionaries who looked at the cutting edge of artificial intelligence and thought, "Hold my venture capital." Each nominee has demonstrated an extraordinary commitment to the principle that if something can go catastrophically wrong with AI, it probably will—and they're here to prove it. </blockquote>
<a href="https://www.oneusefulthing.org/p/on-working-with-wizards">On Working with Wizards</a>:<blockquote>The hard thing about this is that the results are good. Very good. I am an expert in the three tasks I gave AI in this post, and I did not see any factual errors in any of these outputs, though there were some minor formatting errors and choices I would have made differently. Of course, I can't actually tell you if the documents are error-free without checking every detail. Sometimes that takes far less time than doing the work yourself, sometimes it takes a lot more. Sometimes the AI's work is so sophisticated that you couldn't check it if you tried. And that suggests another risk we don't talk about enough: every time we hand work to a wizard, we lose a chance to develop our own expertise, to build the very judgment we need to evaluate the wizard's work.
But I come back to the inescapable point that the results are good, at least in these cases. They are what I would expect from a graduate student working for a couple hours (or more, in the case of the re-analysis of my paper), except I got them in minutes.
This is the issue with wizards: We're getting something magical, but we're also becoming the audience rather than the magician, or even the magician's assistant. In the co-intelligence model, we guided, corrected, and collaborated. Increasingly, we prompt, wait, and verify... if we can.</blockquote>
posted by TheophileEscargot at Sun, 14 Sep 2025 12:02:23 -0800
By: zompist
http://www.metafilter.com/210331/Its-all-in-your-head#8765844
I used to follow AI deeply (comp.ai.philosophy represent!), and this idea, usually called world knowledge, was well accepted in AI by the 1990s. If not way earlier: it was the stock-in-trade of Terry Winograd's SHRDLU. No one thought that just manipulating tokens would produce even language ability, much less general cognition. Most of us would have added that mere multimodal input wasn't enough; you needed motor output, like a robot exploring the world.
The mind has always been compared to the highest technology humans have got: first clockwork, then mills, then computers, and now LLMs. Anything impressive AIs do (remember when they first beat humans at chess?) is taken as us being 90% of the way to "real AI". The problem with cognition is that the last 10% keeps expanding. What seemed to be the hard parts (game playing, vision processing, translation) yield to years of effort, then reveal new hard parts. In the case of LLMs, the new hard part, world knowledge, was known about but ignored.
So to put it bluntly, you as a human probably do have LLM-like structures, but you also have other stuff based on the sensorimotor interaction with the world you did as an infant. Plus a weirdly limited reasoning engine bolted on top. Plus stuff like qualia that nobody understands.
posted by zompist at Sun, 14 Sep 2025 12:33:06 -0800
By: chavenet
http://www.metafilter.com/210331/Its-all-in-your-head#8765853
Related: <a href="https://archive.ph/bTLca">The Less You Know About AI, the More You Are Likely to Use It</a>
posted by chavenet at Sun, 14 Sep 2025 14:21:27 -0800
By: mahadevan
http://www.metafilter.com/210331/Its-all-in-your-head#8766145
<em>So to put it bluntly, you as a human probably do have LLM-like structures, but you also have other stuff based on the sensorimotor interaction with the world you did as an infant. Plus a weirdly limited reasoning engine bolted on top. Plus stuff like qualia that nobody understands.</em>
Plus I don't think by looking at the current sequence of words and then guessing the next word. I only do that while forming sentences to communicate.
posted by mahadevan at Mon, 15 Sep 2025 10:51:43 -0800
By: mhum
http://www.metafilter.com/210331/Its-all-in-your-head#8766301
<a href="/210331/Its-all-in-your-head#8765794">> reventlov:</a> <em>People build world models from text all the time. It's slower and more error-prone, but nowhere near impossible. Arguably, mathematicians do this all the time (along with other techniques).</em>
I would argue that the world models you build from text can only model the world of that text. And I don't mean the world that the text refers to (e.g.: the external, IRL world) but rather literally the corpus of words (or text tokens or whatever) that the text contains and whatever inter-textual relationships/correlations/interactions/dynamics that might be inferred. In some cases, like mathematics, this is exactly what you want. The axioms, logical system(s), etc... underpinning your math constitute the text world you're trying to build a world model of. More subtly, I think most games (or at least board games) also fit into this paradigm: game states, win conditions, rules & constraints, etc... can be encoded as text that can credibly constitute the entire "world" of the game. In these examples, the entire world to be modeled is the one that is specified by the text. No more, no less.
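(As a deliberately trivial illustration of a "world" that really is fully specified by its text: in the sketch below, the tic-tac-toe state, the legal moves, and the win condition are all just string and rule manipulation. There is nothing outside the encoding for a model of this world to be wrong about.)
<pre><code># A "world" that is exactly its textual specification: tic-tac-toe.
# State, legal moves, and the win condition are all in the encoding;
# there is no external referent to be wrong about.

Board = str  # 9 characters, e.g. "XOX.O..X." with '.' for empty

WIN_LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def legal_moves(board: Board) -> list[int]:
    return [i for i, c in enumerate(board) if c == "."]

def play(board: Board, square: int, mark: str) -> Board:
    assert board[square] == ".", "illegal move"
    return board[:square] + mark + board[square + 1:]

def winner(board: Board) -> str | None:
    for a, b, c in WIN_LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

b = "........."
for sq, mark in [(0, "X"), (4, "O"), (1, "X"), (8, "O"), (2, "X")]:
    b = play(b, sq, mark)
print(b, "winner:", winner(b))  # XXX.O...O winner: X
</code></pre>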
However, once your world model is supposed to have some relationship with stuff outside of the text (e.g.: the IRL real world), I believe things can get dicey. Where I think the Sam Altmans of this world took a wrong turn is that they naively and incorrectly assumed that text alone would be sufficient. That with enough words, a faithful representation of the real world would be encoded and thus discoverable by sufficiently advanced <strike>magic</strike> AI. This does not seem plausible to me but, then again, I'm not the one in charge of billions of dollars of LLM money.
posted by mhum at Mon, 15 Sep 2025 16:56:58 -0800
By: mahadevan
http://www.metafilter.com/210331/Its-all-in-your-head#8766351
<em>but, then again, I'm not the one in charge of billions of dollars of LLM money.</em>
By this you seem to imply that they know what they are doing since they're in charge of it.
The problem is they aren't. The world has changed these days. The leaders at the top see everything as an experiment into the unknown.
This is the reason why they are forgiven for grave mistakes like the 2008 collapse, and for using just-in-time inventory management for disaster response, which crippled the Covid-19 response.
Incompetence is a word that doesn't exist in their vocabulary.
They happily go about making propaganda out of speculative discoveries which have no strong basis and making tragic global-scale mistakes all along the way.
They couldn't care less about truth, since their PR machinery will churn out lie after lie, including corrupting academic papers and such, until you start believing them.
Anyway, in my opinion, talking about a "world-view" in the AI space is way too early. I haven't seen evidence of it - try having a coherent, deep scientific discussion with it and it'll start falling apart by the fourth answer as it "forgets" constraints and "contorts" meaning, and does so confidently and cheerfully.
posted by mahadevan at Mon, 15 Sep 2025 21:16:33 -0800
By: mittens
http://www.metafilter.com/210331/Its-all-in-your-head#8766379
<i>However, once your world model is supposed to have some relationship with stuff outside of the text (e.g.: the IRL real world), I believe things can get dicey.</i>
I think this might be my favorite comment on the thread, because I've tried five times to reply to it and still haven't figured out whether I agree or disagree with it yet.
posted by mittens at Tue, 16 Sep 2025 03:36:48 -0800
By: mhum
http://www.metafilter.com/210331/Its-all-in-your-head#8766477
<a href="/210331/Its-all-in-your-head#8766351">> mahadevan:</a> <em>"By this you seem to imply that they know what they are doing since they're in charge of it."</em>
Sorry, that was meant to be more sarcastic. I don't know Altman personally, but everything I've seen of him seems to indicate that he's either a fool or a liar and quite possibly both.
posted by mhum at Tue, 16 Sep 2025 08:20:20 -0800
By: mahadevan
http://www.metafilter.com/210331/Its-all-in-your-head#8766561
mhum: my bad for not seeing the sarcasm... I wasn't trying to attack your statement, but I can see that my post has some anger in it, sorry about that.
<em>but everything I've seen of him seems to indicate that he's either a fool or a liar and quite possibly both.</em>
I couldn't agree more, but these are the kinds of guys that seem to make it to the top these days.
So annoying.
posted by mahadevan at Tue, 16 Sep 2025 11:22:38 -0800
By: mhum
http://www.metafilter.com/210331/Its-all-in-your-head#8766639
<a href="/210331/Its-all-in-your-head#8766561">> mahadevan:</a> <em>"my bad for not seeing the sarcasm"</em>
It's all good. I realized I hadn't phrased it harshly enough but only after the edit window closed.
posted by mhum at Tue, 16 Sep 2025 13:48:12 -0800
By: dustletter
http://www.metafilter.com/210331/Its-all-in-your-head#8766668
Many language models already have contact with an external world. In the rstar2 paper kliuless <a href="/210331/Its-all-in-your-head#8765808">linked</a> <a href="https://arxiv.org/abs/2508.20722">(arxiv)</a>, they give the model access to a Python interpreter during RL training. It's not the physical world, and it's not a persistent or active environment, but it's an external reality that pushes back, not a pure solipsistic dream of the model talking to itself.
Is it too shallow?
Is the higher-level concept of model formation not transferable?
Does the transformer architecture simply not lend itself to cohesive world models? (<a href="https://arxiv.org/abs/2502.20129">Zhang et al (2025)</a> indicates yes, but also that chain-of-thought prompting is enough to catch up.)
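(For the "pushes back" point above, a hedged sketch of the general shape of an execute-and-check loop; this is a toy stand-in, not the rstar2 training code. Candidate programs are run in a real Python interpreter, and the interpreter, not the model, decides which ones get rewarded.)
<pre><code># Sketch of an environment that "pushes back": candidate solutions are
# executed in a fresh Python interpreter, and only those whose output
# matches the expected answer earn a reward. (Toy stand-in, not rstar2.)
import subprocess, sys

def passes(candidate_code: str, expected_stdout: str) -> bool:
    """Run candidate code in a fresh interpreter and compare its output."""
    result = subprocess.run(
        [sys.executable, "-c", candidate_code],
        capture_output=True, text=True, timeout=5,
    )
    return result.returncode == 0 and result.stdout.strip() == expected_stdout

# Pretend these came from a model sampling several answers to
# "print the sum of 1..10":
candidates = [
    "print(sum(range(10)))",      # off by one: prints 45
    "print(sum(range(1, 11)))",   # correct: prints 55
]
rewards = [passes(c, "55") for c in candidates]
print(rewards)  # [False, True] -- the interpreter, not the model, decides
</code></pre>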
<i>limits to vector embeddings</i>
vectors can store <a href="https://nickyoder.com/johnson-lindenstrauss/">an awful lot</a> of distinct concepts, but not all at once. Packing hundreds of uncorrelated facts into a single vector mathematically can't work, but real documents use multiple vectors and aren't as dense. This post alone is most of what an embedding model can pack into a vector.
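(A rough numerical sketch of the "not all at once" point: random unit vectors stand in for concept embeddings, and the single "document vector" is their normalized mean. As more uncorrelated concepts get packed in, its cosine similarity to any individual concept decays roughly like 1/sqrt(k).)
<pre><code># Rough sketch: one vector can't stay close to many uncorrelated concepts.
# Random unit vectors stand in for concept embeddings (d = 768 as a typical
# embedding width); the "document vector" is their normalized mean.
import numpy as np

rng = np.random.default_rng(0)
d = 768

def unit(v):
    return v / np.linalg.norm(v)

for k in (1, 4, 16, 64, 256):
    concepts = [unit(rng.standard_normal(d)) for _ in range(k)]
    doc = unit(np.mean(concepts, axis=0))          # single-vector "summary"
    sims = [float(doc @ c) for c in concepts]
    print(f"k={k:4d}  mean cosine to each concept ~ {np.mean(sims):.2f}")
# Similarity decays toward ~1/sqrt(k): the more facts packed into one
# vector, the weaker its match to any individual fact.
</code></pre>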
I don't think LLMs are particularly more conscious than, say, trees, or bacteria (which is to say: maybe?) but if so it seems to me that they would be "alive" while they are running, dormant between queries, and not dead until their weights are deleted. At which point the custom is to throw them a <a href="https://archive.ph/0JMqP">funeral</a>?
posted by dustletter at Tue, 16 Sep 2025 15:01:58 -0800
"Yes. Something that interested us yesterday when we saw it." "Where is she?" His lodgings were situated at the lower end of the town. The accommodation consisted[Pg 64] of a small bedroom, which he shared with a fellow clerk, and a place at table with the other inmates of the house. The street was very dirty, and Mrs. Flack's house alone presented some sign of decency and respectability. It was a two-storied red brick cottage. There was no front garden, and you entered directly into a living room through a door, upon which a brass plate was fixed that bore the following announcement:¡ª The woman by her side was slowly recovering herself. A minute later and she was her cold calm self again. As a rule, ornament should never be carried further than graceful proportions; the arrangement of framing should follow as nearly as possible the lines of strain. Extraneous decoration, such as detached filagree work of iron, or painting in colours, is [159] so repulsive to the taste of the true engineer and mechanic that it is unnecessary to speak against it. Dear Daddy, Schopenhauer for tomorrow. The professor doesn't seem to realize Down the middle of the Ganges a white bundle is being borne, and on it a crow pecking the body of a child wrapped in its winding-sheet. 53 The attention of the public was now again drawn to those unnatural feuds which disturbed the Royal Family. The exhibition of domestic discord and hatred in the House of Hanover had, from its first ascension of the throne, been most odious and revolting. The quarrels of the king and his son, like those of the first two Georges, had begun in Hanover, and had been imported along with them only to assume greater malignancy in foreign and richer soil. The Prince of Wales, whilst still in Germany, had formed a strong attachment to the Princess Royal of Prussia. George forbade the connection. The prince was instantly summoned to England, where he duly arrived in 1728. "But they've been arrested without due process of law. They've been arrested in violation of the Constitution and laws of the State of Indiana, which provide¡ª" "I know of Marvor and will take you to him. It is not far to where he stays." Reuben did not go to the Fair that autumn¡ªthere being no reason why he should and several why he shouldn't. He went instead to see Richard, who was down for a week's rest after a tiring case. Reuben thought a dignified aloofness the best attitude to maintain towards his son¡ªthere was no need for them to be on bad terms, but he did not want anyone to imagine that he approved of Richard or thought his success worth while. Richard, for his part, felt kindly disposed towards his father, and a little sorry for him in his isolation. He invited him to dinner once or twice, and, realising his picturesqueness, was not ashamed to show him to his friends. Stephen Holgrave ascended the marble steps, and proceeded on till he stood at the baron's feet. He then unclasped the belt of his waist, and having his head uncovered, knelt down, and holding up both his hands. De Boteler took them within his own, and the yeoman said in a loud, distinct voice¡ª HoME²¨¶àÒ°´²Ï·ÊÓÆµ ѸÀ×ÏÂÔØ ѸÀ×ÏÂÔØ