• glitchdx@lemmy.world · 1 point · 34 minutes ago

      is there an easy way to do this that doesn’t require me to understand how github works?

    • llama@lemmy.dbzer0.com · 1 point · 22 minutes ago

        I think that in that case, YouTube is your friend. There are a few pretty straight forward videos that can help you out.

  • rottingleaf@lemmy.world · 41 points · 9 hours ago

    No problem, after they release all the data collected under the excuse of public good and progress.

    • doomcanoe@sh.itjust.works · 22 points · 9 hours ago

      Weird. I said this shit for years, and I was upvoted into the heavens, agreed with, called a hero, and acknowledged as a result.

      Maybe that's not what was being said?

    • cm0002@lemmy.worldOP · +16 / −3 · 12 hours ago

      Heh, I warn about Mozilla/Firefox all the time and get the same. I hope I’m wrong though :(

      • rottingleaf@lemmy.world · +1 / −1 · 6 hours ago

        Everything was clear about Mozilla the moment they started fighting the ecosystem around Gecko, with alternative browsers, useful extensions and so on. And, of course, the old usable UI.

        People just forget what they don’t know how to process.

        • scratchee@feddit.uk · 8 points · edited · 6 hours ago

          Disagree. XUL was a dead end that either needed shooting behind the bike shed or it would inevitably have taken Mozilla down with it. It froze their internal architecture to a design that didn't care about multicore or modern security. Switching to a proper extension API was the right call (it didn't matter whether it was Chrome's or their own, only that they were willing to make their own decisions, as with Manifest V3).

          That said, I suspect the real death blow was when they killed Servo; that project was their distant salvation, a chance to genuinely outcompete technologically and direct where browsers need to go next. I too hope I'm wrong and they can figure out a path forward, but they've shown little ambition from the top, so I'm not holding my breath.

          Edit: you could argue that the solution to XUL should have been an upgrade to modern design rather than death, but that would have just been an expensive temporary reprieve, the world doesn’t stop changing, it was always going to be slow to correct to whatever direction they needed to go next (and meanwhile every extension dev would be screaming murder every time they killed some braindead api designed 20 years ago).

          • rottingleaf@lemmy.world · 1 point · 6 hours ago

            XUL itself - of course.

            Edit: you could argue that the solution to XUL should have been an upgrade to modern design rather than death, but that would have just been an expensive temporary reprieve, the world doesn’t stop changing,

            I’m not sure what you mean by that. No deep customization at all is, of course, easier to support than some.

            I don’t care about preserving the feel of XUL, or any aesthetics, but I do care about its role.

            It’s not about specific extensions or a specific language. It’s about the “before” allowing things like Conkeror and any kind of appearance change conceivable, and the “after” allowing none of that, unless we count fragile CSS hacks that break with every update.

      • expr@programming.dev · +10 / −1 · 11 hours ago

        I don’t think you’ve paid enough attention. Back when ChatGPT first launched, they were treated as saints.

        The negative opinions have corresponded with public sentiment souring towards them in general (this did happen quite quickly, however).

        • Communist@lemmy.frozeninferno.xyz · +8 / −6 · edited · 11 hours ago

          Can you provide even one example? AI is my autistic obsession, and I never saw anything like that on lemmy even once.

          I was even regularly searching for “AI” using the search feature daily.

          I have never once seen this, and I don’t find it believable at all, honestly.

          • MouldyCat@feddit.uk · 1 point · 2 hours ago

            I think it’s another example of “internet bubbles” - people with similar views tend to congregate together and this is particularly true on the internet, when going elsewhere is always just a mouse-click away.

            When ChatGPT first launched, Lemmy was still pretty much a ghost town, and it did cause a lot of optimistic excitement e.g. on reddit. Lemmy got a big surge in numbers when reddit did its infamous API changes - enshittification driven by spez’s and other reddit executives’ insatiable lust to exploit the site for more and more money.

            Perhaps for this reason, people on Lemmy are more averse to the enshittification trend and the generally exploitative nature of large tech companies. I think this is what people on Lemmy object to - tech companies’ concentration of power and profits by ripping off the general public - not so much the concept of LLMs themselves, but the fact that they could easily be used to further inequality in society.

              • MouldyCat@feddit.uk · 1 point · 4 minutes ago

                Yes, you’re right - sorry, I went off on a tangent about the reasons for the intense negativity in the Lemmyverse about LLMs. I’ve been using Lemmy for four years, and I definitely don’t think there have ever been any positive feelings towards LLMs here, especially as ChatGPT’s arrival predates the first surge of users on Lemmy (and the subsequent appearance of all the instances we see today). On reddit, yes, and there are many people there who still think OpenAI is great.

  • BB84@mander.xyz · +56 / −1 · 16 hours ago

    Stop depending on these proprietary LLMs. Go to [email protected].

    There are open-source LLMs you can run on your own computer if you have a powerful GPU. Models like OLMo and Falcon are made by true non-profits and universities, and they reach GPT-3.5 level of capability.

    There are also open-weight models that you can run locally and fine-tune to your liking (although these don’t have open-source training data or code). The best of these (Alibaba’s Qwen, Meta’s llama, Mistral, Deepseek, etc.) match and sometimes exceed GPT 4o capabilities.
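    If you want to actually try this, a common route is Ollama, which wraps llama.cpp and serves pulled models over a local HTTP API. A minimal sketch in Python against Ollama's default endpoint (the model name `llama3` is illustrative; any model you have pulled works):

```python
import json
import urllib.request

# Ollama serves a local HTTP API on this port by default.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    """Minimal payload for Ollama's /api/generate endpoint.

    stream=False asks for one JSON object instead of a token stream.
    """
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """Send a prompt to the locally running model and return its text."""
    data = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage (requires a running Ollama server and a pulled model):
#   ollama pull llama3
#   print(generate("llama3", "Open-source vs open-weight models, in one sentence."))
```

    The same endpoint works for any of the models mentioned above, provided you have pulled a quantized build that fits your GPU (or CPU) memory.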

    • llama@lemmy.dbzer0.com · 1 point · 3 minutes ago

      The issue with that method, as you’ve noted, is that it prevents people with less powerful computers from running local LLMs. There are a few models that would be able to run on an underpowered machine, such as TinyLlama; but most users want a model that can do a plethora of tasks efficiently like ChatGPT can, I daresay. For people who have such hardware limitations, I believe the only option is relying on models that can be accessed online.

      For that, I would recommend Mistral’s Mixtral models (https://chat.mistral.ai/) and the surfeit of models available on Poe AI’s platform (https://poe.com/). Particularly, I use Poe for interacting with the surprising diversity of Llama models they have available on the website.
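      For the hosted route, Mistral also offers a developer API with an OpenAI-style schema. A minimal sketch, assuming you have an API key in the `MISTRAL_API_KEY` environment variable (the model name `mistral-small-latest` is illustrative):

```python
import json
import os
import urllib.request

# Mistral's hosted chat-completions endpoint (OpenAI-style request schema).
MISTRAL_URL = "https://api.mistral.ai/v1/chat/completions"

def build_chat_request(model: str, user_message: str) -> dict:
    # One user turn, no system prompt; extend the messages list as needed.
    return {"model": model, "messages": [{"role": "user", "content": user_message}]}

def chat(model: str, user_message: str) -> str:
    """POST a single-turn chat request and return the assistant's reply text."""
    req = urllib.request.Request(
        MISTRAL_URL,
        data=json.dumps(build_chat_request(model, user_message)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]

# chat("mistral-small-latest", "Hello!")  # requires MISTRAL_API_KEY to be set
```

      Because the request schema mirrors OpenAI's, switching between hosted providers is mostly a matter of changing the URL, key, and model name.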

    • Kbobabob@lemmy.world · 1 point · 2 hours ago

      There are open-source LLMs you can run on your own computer if you have a powerful GPU.

      What defines powerful? What if you don’t have the necessary hardware?

    • 0x01@lemmy.ml · 1 point · 2 hours ago

      llama is good and I’m looking forward to trying DeepSeek 3, but the big issue is that those are the frontier open-source models, while 4o is no longer OpenAI’s best-performing model; they just dropped o3 (god, they are literally as bad as Microsoft at naming), which shows tremendous progress on reasoning benchmarks.

      When running llama locally I appreciate the matched capabilities like structured output, but it is objectively significantly worse than openai’s models. I would like to support open source models and use them exclusively but dang it’s hard to give up the results

      I suppose one way to start for me would be dropping cursor and copilot in favor of their open source equivalents, but switching my business to use llama is a hard pill to swallow
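      On the structured-output point: local setups can approximate it. For example, Ollama's generate endpoint accepts a `format: "json"` flag that constrains the model to emit valid JSON (no schema enforcement; the prompt still has to describe the shape you want). A sketch of building such a request, with a hypothetical sentiment-classification prompt:

```python
import json

def build_structured_request(model: str, prompt: str) -> dict:
    """Payload for Ollama's /api/generate asking for JSON-only output.

    format="json" constrains decoding to valid JSON; the expected keys
    still need to be spelled out in the prompt itself.
    """
    return {
        "model": model,
        "prompt": prompt + '\nRespond only with JSON like {"sentiment": "positive"}.',
        "format": "json",
        "stream": False,
    }

def parse_response(raw: str) -> dict:
    # With format="json", the response field should parse directly.
    return json.loads(raw)
```

      It is weaker than a schema-enforced API, but for many pipelines "guaranteed-parseable JSON plus a well-specified prompt" is close enough.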

    • ArchRecord@lemm.ee · +13 / −1 · 14 hours ago

      And there are also free, online hosted instances of those same LLMs in a (relatively speaking) privacy-protecting format from DuckDuckGo, for anyone who doesn’t have a powerful GPU :)

      • BB84@mander.xyz · 10 points · 13 hours ago

        Interesting. So they mix the requests between all DDG users before sending them to “underlying model providers”. The providers like OAI and Anthropic will likely log the requests, but mixing is still a big step forward. My question is what do they do with the open-weight models? Do they also use some external inference provider that may log the requests? Or does DDG control the inference process?

        • ArchRecord@lemm.ee · +6 / −1 · 13 hours ago

          All requests are proxied through DuckDuckGo, and all personalized user metadata is removed. (e.g. IPs, any sort of user/session ID, etc)

          They have direct agreements to not train on or store user data, (the training part is specifically relevant to OpenAI & Anthropic) with a requirement they delete all information once no longer necessary (specifically for providing responses) within 30 days.

          For the Llama & Mixtral models, they host them on together.ai (an LLM-focused cloud platform) but that has the same data privacy requirements as OpenAI and Anthropic.

          Recent chats that are saved for later are stored locally (instead of on their servers) and after 30 conversations, the last chat before that is automatically purged from your device.

          Obviously there’s less technical privacy guarantees than a local model, but for when it’s not practical or possible, I’ve found it’s a good option.

    • taladar@sh.itjust.works · 27 points · 16 hours ago

      Well, apart from the people like me who thought they had always been one because they acted exactly like one.

      • bss03@infosec.pub · 2 points · 10 hours ago

        I thought they had successfully converted around the time they got the infusion of funds from MS. I thought they were started as a not-for-profit, but were already shady-as-shit when they stopped publishing stuff under open licenses.

  • Boiglenoight@lemmy.world · 10 points · 14 hours ago

    This is going to sound weird, but so is the internet: their icon suggests a chain of bodies eating out the ass of the one in front of them, which to me seems apt for the product.

  • seven_phone@lemmy.world · +145 / −1 · 22 hours ago

    So the development of inorganic intelligence, considered by many to be an inflection point in human civilisation, is to be handed to business graduates, who are historically proven to be capable of any level of atrocity in the name of corporate greed. America, fuck yeah.

    • taladar@sh.itjust.works · 11 points · 16 hours ago

      Actually, corporations themselves already embody 99% of what people fear about AGI, with their inhuman decision-making to the detriment of humanity.

    • Jo Miran@lemmy.ml · +70 / −2 · 22 hours ago

      America Greed, fuck yeah.

      Don’t fool yourself. The USA lost the exclusivity deal on unchecked corpo greed a long time ago. This is a global issue now.

        • seven_phone@lemmy.world · 8 points · 20 hours ago

          Yeah, the American tag was just a throwaway line; greed - unchecked, insane, and self-harming - has always been with us. We let it sit with us around our campfires like wolves, but unlike wolves, we never tamed it.

          • Hackworth@lemmy.world · +2 / −1 · 19 hours ago

            Then again, the US and China are basically the only players in this “game” atm. Hugging Face is trying hard to get the EU on-boarded, and I’m sure we’ll see more contenders. But right now it’s a 2-player game.

    • Ajen@sh.itjust.works · 2 points · 15 hours ago

      What do you mean by “inorganic intelligence,” exactly? Do you think openai has already achieved it?

    • Eheran@lemmy.world · +1 / −1 · 18 hours ago

      As if any other group did not prove that just as much, if not more so.

    • cybergazer@sh.itjust.works · +0 / −19 · 19 hours ago

      Don’t see the issue, man. People are hard at work at OpenAI to make the best quality AI on the market. Why would you not give it the best economic system on the planet as well? It’s literally the best of the best of the best.