• sugar_in_your_tea@sh.itjust.works
    15 hours ago

    the bubble bursting might very well be a good thing for the technology into the future

    I absolutely agree. It worked wonders for the Internet (dotcom boom in the 90s), and I imagine we’ll see the same w/ AI sometime in the next 10 years or so. I do believe we’re seeing a bubble here, and we’re also seeing a significant shift in how we interact w/ technology, but it’s neither as massive nor as useless as proponents and opponents claim.

    I’m excited for the future, but not as excited for the transition period.

    • ArchRecord@lemm.ee
      14 hours ago

      I’m excited for the future, but not as excited for the transition period.

      I have similar feelings.

      I discovered LLMs before the hype ever began (I used GPT-2 well before ChatGPT even existed), and the same goes for image generation models, just before that hype took off (I was an early closed-beta tester of DALL-E).

      And as my initial fascination grew, along with the interest of my peers, the hype began to take off, and suddenly, instead of being an interesting technology with some novel use cases, it became yet another technology for companies to show to investors (after slapping it in a product in a way no user would ever enjoy) to increase stock prices.

      Just as you mentioned with the dotcom bubble, I think this will definitely do a lot of good. LLMs have been great for asking specialized questions about things where I need a better explanation, or rewording/reformatting my notes, but I’ve never once felt the need to have my email client generate every email for me, as Google seems to think I’d want.

      If we can just get all the over-hyped corporate garbage out, and replace it with more common-sense development, maybe we’ll actually see it being used in a way that’s beneficial for us.

      • sugar_in_your_tea@sh.itjust.works
        12 hours ago

        I initially started with natural language processing (small language models?) in school, which is a much simpler form of text generation that operates on whole words instead of the subword tokens modern LLMs use. So when modern LLMs came out, I basically registered them as, “oh, a better version of NLP,” with all the associated limitations and issues, and that seems to be what they are.
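        As a toy illustration of that word-vs-token difference: classic word-level NLP treats each whitespace-separated word as one symbol, while modern LLM tokenizers break words into smaller learned pieces. The tiny vocabulary and greedy splitter below are made up for illustration; real tokenizers learn their merges from data (e.g. byte-pair encoding).

```python
# Word-level tokenization (classic NLP) vs. a crude subword split
# (a stand-in for the tokens modern LLMs operate on).
text = "unbelievably fast tokenizers"

# Word-level: one symbol per whitespace-separated word.
word_tokens = text.split()

# Hypothetical toy vocabulary of known subword pieces.
vocab = {"un", "believ", "ably", "fast", "token", "izers"}

def subword_split(word):
    """Greedily take the longest known piece at each position."""
    pieces, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in vocab:
                pieces.append(word[i:j])
                i = j
                break
        else:
            pieces.append(word[i])  # fall back to single characters
            i += 1
    return pieces

subword_tokens = [p for w in word_tokens for p in subword_split(w)]
print(word_tokens)     # ['unbelievably', 'fast', 'tokenizers']
print(subword_tokens)  # ['un', 'believ', 'ably', 'fast', 'token', 'izers']
```

        The subword view is why LLMs can handle words they have never seen whole: an unknown word still decomposes into familiar pieces.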

        So yeah, I think it’s pretty neat, and I can certainly see some interesting use cases, but it’s really not how I want to interface with computers. I like searching with keywords, and I enjoy the process of creation more than the product of creation, so image and text generation aren’t particularly interesting to me. I’ll certainly use them if I need to, but as a software engineer, I find LLMs in all forms (so far) annoying to use. I don’t even like full-text search in many cases and prefer regex searches, so I guess I’m old-school like that.

        I’ll eventually give in and adopt it into my workflow, and I’ll probably do so before the average person does, but what I see and what the media hypes it up to be really don’t match. I’m planning to set up a local Llama model, if only because I have the spare hardware for it and it’s an interesting novelty.