• FaceDeer@fedia.io · 5 months ago

    > They absolutely cannot reliably summarize the result of searches, like this post is about.

    The problem is that it did summarize the results of this search, and those results included one of those “if the Earth were the size of a grain of sand, Alpha Centauri would be X kilometers away” analogies. It did exactly the thing you’re saying it can’t do.
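    For the curious, here’s a rough sketch of where a figure like the “13 km” mentioned downthread can come from. The specific sizes are assumptions on my part (the classic version of this analogy scales the Sun, not the Earth, down to a half-millimetre grain of sand):

    ```python
    # Back-of-the-envelope: shrink the Sun to a grain of sand and see
    # how far away Alpha Centauri lands at the same scale.
    SUN_DIAMETER_KM = 1.39e6            # actual solar diameter
    GRAIN_DIAMETER_KM = 0.5e-6          # assumed ~0.5 mm grain, in km
    ALPHA_CENTAURI_KM = 4.37 * 9.46e12  # ~4.37 light-years, in km

    scale = GRAIN_DIAMETER_KM / SUN_DIAMETER_KM
    print(f"{ALPHA_CENTAURI_KM * scale:.0f} km")  # ~15 km at this scale
    ```

    A number like that is an artifact of the scale model, which the search summary then served up as a literal distance.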

    > Any meaningful rate of failures at all makes them massively, catastrophically damaging to humanity as a whole.

    Nothing is perfect. Does that make everything a massive catastrophic threat to humanity? How have we managed to survive for this long?

    You’re ridiculously overblowing this. It’s a “ha ha, looks like the AI made a whoopsie because I didn’t understand what I actually asked it to do” situation. It’s not Skynet coming to convince us to eat cyanide.

    > And this is all completely ignoring the obscene energy costs associated with making web searches complete and utter dogshit.

    Of course it’s ignoring that. It’s not real.

    You realize that energy costs money? If each web search cost an “obscene” amount, how is Microsoft managing to pay for it all? Why are they paying for it? Do you think they’ll continue paying for it indefinitely? It’d be a completely self-solving problem.
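    To put a rough number on it (the per-query energy figure is an assumption, using an oft-cited ballpark of a few watt-hours per LLM query and a typical industrial electricity price):

    ```python
    # Rough cost of the energy behind a single LLM-assisted search,
    # using assumed ballpark figures rather than any measured data.
    WH_PER_QUERY = 3.0   # assumed; published estimates vary widely
    USD_PER_KWH = 0.10   # assumed typical industrial rate

    cost = (WH_PER_QUERY / 1000) * USD_PER_KWH
    print(f"~${cost:.4f} per query")  # ~$0.0003: real money in aggregate, trivial per search
    ```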

    • conciselyverbose@sh.itjust.works · 5 months ago

      Summaries distinguish substance from nonsense. It cannot be described as a summary of a piece of content if it does not accurately portray the substance of that content.

      LLMs aren’t just imperfect. They’re dumpster-fire misinformation machines with no redeeming qualities. Of course it’s not Skynet. Skynet was intelligent. This isn’t within 100 orders of magnitude of intelligence.

      Companies burn obscene amounts of money on moonshots all the time, even ones that have no possibility of success. Their willingness to lose billions burning energy to degrade every single search is not evidence that it isn’t a nightmare for the environment (and again, for literally no purpose, because every single search with an LLM is worse than one without it).

      • FaceDeer@fedia.io · 5 months ago

        No, a summary is just a condensed version of some larger work. If the larger work contains bullshit then so can the summary; that doesn’t stop it from being a summary. As you say, a summary accurately portrays the substance of that content. In this case there was content that said Alpha Centauri was 13 km from Earth, so the summary said that too.

        This is really not complicated.

        > Companies burn obscene amounts of money on moonshots all the time, even ones that have no possibility of success.

        If you think it has no possibility of success, sit back and relax as AI goes away.

        • ipkpjersi@lemmy.ml · 5 months ago

          > If you think it has no possibility of success, sit back and relax as AI goes away.

          Yep. This is exactly it, and it’s what people don’t seem to understand. AI is not going away, because it is actually useful: it has real uses, and people are actively using it. It’s not pointless, fluff-based technology like blockchain; real people use AI/LLMs for real work.

      • ipkpjersi@lemmy.ml · 5 months ago (edited)

        This is just not true.

        Just because you don’t like LLMs doesn’t mean they have no purpose. If they were really entirely useless and never did anything, they wouldn’t be the talk of the town and OpenAI wouldn’t be a multi-billion-dollar company. If they were useless, nobody would use them, but people absolutely do use them.

        I literally use ChatGPT daily to automate writing code for me, and it honestly does a good job. I used it to write an entire Laravel project called ArigatouAnimeTracker, over 600 commits including documentation, all written using ChatGPT, and tbh my project is awesome. It easily would have taken me 5x as long to write without ChatGPT, and honestly it might not exist at all given how long it would have taken without LLMs doing the heavy lifting.
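        That workflow is nothing exotic. Here’s a minimal sketch of the same idea via the openai Python SDK rather than the chat UI; the model name and prompts are placeholders, not what I actually used:

        ```python
        # Minimal sketch: have an LLM draft code, then review it by hand.
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[
                {"role": "system", "content": "You are a senior Laravel developer."},
                {"role": "user", "content": "Write an Eloquent model for an anime watchlist entry."},
            ],
        )

        # The essential human step: read the draft before committing it.
        print(response.choices[0].message.content)
        ```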

        Sure, you have to verify the output, but you know what? That’s the case for any code that gets written, regardless; code review is essential, completely normal, and existed long before LLMs did. That doesn’t mean LLMs have no purpose, or that nobody actually uses them. People do use them; it’s a multi-billion-dollar industry for a reason, and people are going to continue using them even if you say they have no redeeming qualities. There are definitely ethical concerns about LLMs, but to say they have no redeeming qualities is just not correct.

        Regardless, anything I say about AI/LLMs short of “it’s terrible, useless, and nobody should ever use it” is going to be met with criticism.