Freedom is the right to tell people what they do not want to hear.

  • George Orwell
  • 0 Posts
  • 33 Comments
Joined 2 months ago
Cake day: July 17th, 2025

  • I think comparing an LLM to a brain is a category mistake. LLMs aren’t designed to simulate how the brain works - they’re just statistical engines trained on language. Trying to mimic the human brain is a whole different tradition of AI research.

    An LLM gives the kind of answers you’d expect from something that understands - but that doesn’t mean it actually does. The danger is sliding from “it acts like” to “it is.” I’m sure it has some kind of world model and is intelligent to an extent, but I think “understands” is too charitable when we’re talking about an LLM.

    As for the idea that “if it’s just statistics, we should be able to see how it works” - I think that’s backwards. The reason it’s so hard to follow is that it’s nothing but raw statistics spread across billions of connections. If it were built on clean, human-readable rules, you could trace them step by step. But with this kind of system, it’s more like staring into noise that just happens to produce meaning when you ask the right question.
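
    Here’s a toy sketch of that contrast (hypothetical rules and random stand-in weights, nothing from any real system): in the rule-based version every decision is a step you can read, while in the statistical version the “decision” is just arithmetic over opaque numbers.

    ```python
    import numpy as np

    # Rule-based: every decision is a human-readable step you can trace.
    def route_ticket(text: str) -> str:
        if "refund" in text:      # rule 1
            return "billing"
        if "crash" in text:       # rule 2
            return "engineering"
        return "general"          # fallback rule

    # Statistical: the same kind of "decision" is arithmetic over learned
    # weights. These are random stand-ins; real models have billions.
    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(8, 4))
    W2 = rng.normal(size=(3, 8))

    def route_ticket_neural(features: np.ndarray) -> int:
        hidden = np.maximum(0.0, W1 @ features)  # ReLU: no rules, just numbers
        logits = W2 @ hidden
        return int(np.argmax(logits))            # whichever number came out biggest

    # You can print W1 and W2, but no single weight "means" anything on its
    # own. Scale that to billions of weights and "just read how it works"
    # stops being an option.
    ```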

    I also can’t help laughing a bit at myself for once being the “anti-AI” guy here. Usually I’m the one sticking up for it.


  • You’re right - in the NLP field, LLMs are described as doing “language understanding,” and that’s fine as long as we’re clear what that means. They process natural language input and can generate coherent output, which in a technical sense is a kind of understanding (there’s a toy sketch at the end of this comment showing what I mean).

    But that shouldn’t be confused with human-like understanding. LLMs simulate it statistically, without any grounding in meaning, concepts, or reference to the world. That’s why earlier GPT models could produce paragraphs of flawless grammar that, once you read them closely, were complete nonsense. They looked like understanding, but nothing underneath was actually tied to reality.

    So I’d say both are true: LLMs “understand” in the NLP sense, but it’s not the same thing as human understanding. Mixing those two senses of the word is where people start talking past each other.
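
    Here’s that toy sketch (made-up word lists, not a real model): it “understands” input in the narrow NLP sense - it maps text to a sensible output - without anything resembling a concept underneath.

    ```python
    # "Understanding" in the narrow NLP sense: text in, coherent output out,
    # driven purely by surface statistics. The word lists are invented.
    POSITIVE = {"good", "great", "love", "excellent"}
    NEGATIVE = {"bad", "awful", "hate", "terrible"}

    def classify(text: str) -> str:
        words = text.lower().split()
        score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
        return "positive" if score > 0 else "negative" if score < 0 else "neutral"

    print(classify("I love this, it's great"))  # -> positive
    # It "understood" the review in the technical sense, but there is no
    # concept of loving or greatness in there - just set membership and counting.
    ```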




  • It’s not really about “AI stonks”. If one genuinely believes that AGI will be the end of us, then any form of retirement savings is just a waste of time.

    I really think that most investors aren’t as hyped about AI stocks as the anti-AI crowd online wants us to believe. They may have increased the weight of the tech sector in their portfolios, but the vast majority of investors are aware of the risk of not diversifying, and anyone with actual wealth to invest is probably not putting it all on OpenAI.

    The recent drop in AI stocks that was in the news a week or two back doesn’t even register in the value of my portfolio, even though nearly all of its top holdings are tech companies.
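
    Rough numbers make the point (all figures invented): even a sharp drop in AI names barely moves a diversified portfolio.

    ```python
    # Invented figures, purely to show the arithmetic of diversification.
    portfolio_value = 100_000       # total portfolio
    ai_weight = 0.10                # 10% of it sits in AI stocks
    ai_drop = 0.08                  # those AI stocks fall 8%

    portfolio_hit = ai_weight * ai_drop
    print(f"Portfolio impact: {portfolio_hit:.2%}")               # -> 0.80%
    print(f"Value lost: {portfolio_value * portfolio_hit:,.0f}")  # -> 800
    # An 8% AI sell-off costs under 1% of a portfolio where AI is a 10%
    # slice - noise next to ordinary market movement.
    ```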



  • LLMs, as the name suggests, are language models - not knowledge machines. Answering questions correctly isn’t what they’re designed to do. The fact that they get anything right isn’t because they “know” things, but because they’ve been trained on a lot of correct information. That’s why they come off as more intelligent than they really are. At the end of the day, they were built to generate natural-sounding language - and that’s all. Just because something can speak doesn’t mean it knows what it’s talking about.
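
    A bigram toy model shows what “built to generate natural-sounding language” means (tiny invented corpus, nothing like real-LLM scale): it produces fluent-looking text from token statistics alone, with no knowledge behind any of it.

    ```python
    import random
    from collections import defaultdict

    # A tiny bigram "language model": all it knows is which word tends to
    # follow which. The corpus is invented; a real LLM does the same kind of
    # thing at vastly larger scale with far richer statistics.
    corpus = ("the cat sat on the mat . the dog sat on the rug . "
              "the cat chased the dog .").split()

    follows = defaultdict(list)
    for a, b in zip(corpus, corpus[1:]):
        follows[a].append(b)

    random.seed(1)
    word, out = "the", ["the"]
    for _ in range(10):
        word = random.choice(follows[word])  # pick a statistically plausible next word
        out.append(word)
    print(" ".join(out))
    # The output reads like English because the statistics are English-shaped -
    # but nothing in `follows` knows what a cat or a mat is.
    ```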


  • It is a big part of the issue, but as Lemmy clearly demonstrates, that issue doesn’t go away even when you remove the algorithm entirely.

    I see it a lot like driving cars - no matter how much better and safer we make them, accidents will still happen as long as there’s an ape behind the wheel, and probably even after that. That’s not to say things can’t be improved - they definitely can - but I don’t think it can ever be “fixed,” because the problem isn’t the platform - it’s us. You can’t fix humans by tweaking the code on social media.