Google apologizes for ‘missing the mark’ after Gemini generated racially diverse Nazis::Google says it’s aware of historically inaccurate results for its Gemini AI image generator, following criticism that it depicted historically white groups as people of color.

  • xantoxis@lemmy.world · 7 months ago

    I don’t know how you’d solve the problem of making a generative AI accurately create a slate of images that both a) inclusively produces people with diverse characteristics and b) understands the context of what characteristics could feasibly be generated.

    But that’s because the AI doesn’t know how to solve the problem.

    Because the AI doesn’t know anything.

    Real intelligence simply doesn’t work like this, and every time you point it out someone shouts “but it’ll get better”. It still won’t understand anything unless you teach it exactly what the solution to a prompt is. It won’t, for example, interpolate its knowledge of what US senators look like with the knowledge that all of them were white men for a long period of American history.

    • Jojo@lemm.ee · 7 months ago

      Real intelligence simply doesn’t work like this

      There’s a certain point where this just feels like the Chinese room. And, yeah, it’s hard to argue that a room can speak Chinese, or that the weird prediction rules that an LLM is built on can constitute intelligence, but that doesn’t mean it can’t be. Essentially boiled down, every brain we know of is just following weird rules that happen to produce intelligent results.

      Obviously we’re nowhere near that with models like this now, and it isn’t something we have the ability to work directly toward with these tools, but I would still contend that intelligence is emergent, and arguing whether something “knows” the answer to a question is infinitely less valuable than asking whether it can produce the right answer when asked.

      • fidodo@lemmy.world · 7 months ago

        I really don’t think LLMs can be considered intelligent any more than a book can be intelligent. LLMs are basically search engines at the word level of granularity; they have no world model or world simulation, they’re just using a shit ton of relations to pick highly relevant words based on the probabilities of the text they were trained on. That doesn’t mean LLMs can’t produce intelligent results. A book contains intelligent language because it was written by a human who transcribed their intelligence into an encoded artifact. LLMs produce intelligent results because they were trained on a ton of text that has intelligence encoded into it, since it was written by intelligent humans. If you break a book down into its sentences, those sentences will have intelligent content, and if you start measuring the relationships between the order of words in that book, you can produce new sentences that still have intelligent content. That doesn’t make the book intelligent.

        • Jojo@lemm.ee · 7 months ago

          But you don’t really “know” anything either. You just have a network of relations stored in the fatty juice inside your skull that gets excited just the right way when I ask it a question, and it wasn’t set up that way by any “intelligence”, the links were just randomly assembled based on weighted reactions to the training data (i.e. all the stimuli you’ve received over your life).

          Thinking about how a thing works is, imo, the wrong way to think about if something is “intelligent” or “knows stuff”. The mechanism is neat to learn about, but it’s not what ultimately decides if you know something. It’s much more useful to think about whether it can produce answers, especially given novel inquiries, which is where an LLM distinguishes itself from a book or even a typical search engine.

          And again, I’m not trying to argue that an LLM is intelligent, just that whether it is or not won’t be decided by talking about the mechanism of its “thinking”

          • intensely_human@lemm.ee · 7 months ago

            We can’t determine whether something is intelligent by looking at its mechanism, because we don’t know anything about the mechanism of intelligence.

            I agree, and I formalize it like this:

            Those who claim LLMs and AGI are distinct categories should present a text processing task, i.e. text input and text output, that an AGI can do but an LLM cannot.

            So far I have not seen any reason not to consider these LLMs to be generally intelligent.

            • GiveMemes@jlai.lu · 7 months ago

              Literally anything based on opinion or creating new info. An AI cannot produce a new argument. A human can.

              It took me 2 seconds to think of something LLMs can’t do that AGI could.

    • FooBarrington@lemmy.world · edited · 7 months ago

      I’ll get the usual downvotes for this, but:

      Because the AI doesn’t know anything.

      is untrue, because current AI fundamentally is knowledge. Intelligence fundamentally is compression, and that’s what the training process does - it compresses large amounts of data into a smaller size (and of course loses many details in the process).

      But there’s no way to argue that AI doesn’t know anything if you look at its ability to recreate a great number of facts etc. from a small amount of activations. Yes, not everything is accurate, and it might never be perfect. I’m not trying to argue that “it will necessarily get better”. But there’s no argument that labels current AI technology as “not understanding” without resorting to a “special human sauce” argument, because the fundamental compression mechanisms behind it are the same as behind our intelligence.

      Edit: yeah, this went about as expected. I don’t know why the Lemmy community has so many weird opinions on AI topics.

      • kromem@lemmy.world · edited · 7 months ago

        Lemmy hasn’t met a pitchfork it doesn’t pick up.

        You are correct. The most cited researcher in the space agrees with you. There’s been a half dozen papers over the past year replicating the finding that LLMs generate world models from the training data.

        But that doesn’t matter. People love their confirmation bias.

        Just look at how many people think it only predicts what word comes next, thinking it’s a Markov chain and completely unaware of how self-attention works in transformers.

        The wisdom of the crowd is often idiocy.
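The Markov-chain point can be made concrete. Below is a toy sketch (made-up data and embeddings, no real model weights) contrasting a bigram Markov predictor, which sees only the previous token, with scaled dot-product self-attention, where every output position mixes information from the entire context:

```python
import numpy as np

# A bigram Markov chain: the next word depends ONLY on the previous word.
bigram_counts = {"the": {"cat": 2, "dog": 1}}  # toy counts, not real data

def markov_next(prev):
    counts = bigram_counts[prev]
    return max(counts, key=counts.get)  # one token of context, nothing more

# Scaled dot-product self-attention: each output row is a weighted mix of
# ALL input positions, so every prediction can draw on the full context.
def self_attention(X):
    scores = X @ X.T / np.sqrt(X.shape[1])   # all-pairs similarity
    w = np.exp(scores)
    w /= w.sum(axis=1, keepdims=True)        # softmax over the whole sequence
    return w @ X                             # context-aware representations

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # 3 toy token embeddings
print(markov_next("the"))        # cat
print(self_attention(X).shape)   # (3, 2): each row depends on all 3 inputs
```

This is of course a cartoon (real transformers use learned query/key/value projections and many layers), but it shows why "it just predicts the next word from the last one" mischaracterizes the mechanism.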

        • FooBarrington@lemmy.world · 7 months ago

          Thank you very much. The confirmation bias is crazy - one guy is literally trying to tell me that AI generators don’t have knowledge because, when asking it for a picture of racially diverse Nazis, you get a picture of racially diverse Nazis. The facts don’t matter as long as you get to be angry about stupid AIs.

          It’s hard to tell a difference between these people and Trump supporters sometimes.

          • kromem@lemmy.world · edited · 7 months ago

            It’s hard to tell a difference between these people and Trump supporters sometimes.

            To me it feels a lot like when I was arguing against antivaxxers.

            It’s the same pattern: linking and explaining research, only to have it dismissed because it doesn’t line up with their gut feelings and with whatever they read while “doing their own research” guided by that very confirmation bias.

            The field is moving faster than any I’ve seen before, and even people working in it seem to be out of touch with the research side of things over the past year since GPT-4 was released.

            A lot of outstanding assumptions have been proven wrong.

            It’s a bit like the early 20th century in physics, when everyone assumed things that turned out to be wrong over a very short period in which it all turned upside down.

            • FooBarrington@lemmy.world · 7 months ago

              Exactly. They have very strong feelings that they are right, and won’t be moved - not by arguments, research, evidence or anything else.

              Just look at the guy telling me “they can’t reason!”. I asked whether they’d accept they are wrong if I provide a counter example, and they literally can’t say yes. Their world view won’t allow it. If I’m sure I’m right that no counter examples exist to my point, I’d gladly say “yes, a counter example would sway me”.

              • GiveMemes@jlai.lu · 7 months ago

                Yall actually have any research to share or just gonna talk about it?


    • TORFdot0@lemmy.world · edited · 7 months ago

      Edit: further discussion on the topic has changed my viewpoint on this. It’s not that it’s been trained wrong on purpose and is now confused; it’s that everything it’s being asked is secretly being changed. It’s like a child being told to make up a story by their teacher when the principal asked for the right answer.

      Original comment below


      They’ve purposefully overridden its training to make it create more PoCs. It’s a noble goal to have more inclusivity, but we purposely trained it wrong and now it’s confused; it’s the same as if you lied to a child during their education and then asked them for real answers. They’ll tell you the lies they were taught instead.

      • TwilightVulpine@lemmy.world · 7 months ago

        This result is clearly wrong, but it’s a little more complicated than saying that adding inclusivity is purposely training it wrong.

        Say, if “entrepreneur” only generated images of white men, and “nurse” only generated images of white women, that wouldn’t be right either; it would just be reproducing and magnifying human biases. Yet this is the sort of thing AI does a lot, because AI is a pattern recognition tool inherently inclined to collapse data into an average, and data sets seldom have equal or proportional samples of every single thing. Human biases affect how many images we have of each group of people.

        It’s not even limited to image generation AIs. Black people often bring up how facial recognition technology is much spottier for them, because the training data and even the camera technology were tuned and tested mainly on white people. Usually that’s not even done deliberately, but it happens because of who gets to work on it and where it gets tested.

        Of course, secretly adding “diverse” to every prompt is also a poor solution. The real solution here is providing more contextual data. Unfortunately, clearly, the AI is not able to determine these things by itself.
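The “collapse into an average” effect is easy to demonstrate with a toy sketch (the 70/30 label skew is hypothetical, not any real dataset): a generator that always emits the most probable option turns a 70% majority in the training data into 100% of the output, while proportional sampling at least preserves the original skew:

```python
import random
from collections import Counter

# Hypothetical skewed training data: 70% of the examples carry label "A".
training_labels = ["A"] * 70 + ["B"] * 30

def generate_mode():
    # Always emit the single most likely label: the 30% minority vanishes.
    return Counter(training_labels).most_common(1)[0][0]

def generate_sampled():
    # Sample proportionally: output roughly preserves the 70/30 skew.
    return random.choice(training_labels)

mode_outputs = Counter(generate_mode() for _ in range(1000))
print(mode_outputs)  # Counter({'A': 1000}): the bias is amplified to 100%
```

Real generative models are far more complex, but the same pressure toward the dataset’s modes is what makes under-represented groups disappear from outputs entirely rather than appear at their true rate.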

        • TORFdot0@lemmy.world · 7 months ago

          I agree with your comment. As you say, I doubt the training sets are reflective of reality either. I guess that leaves tampering with the prompts to gaslight the AI into providing results it wasn’t asked for as the method we’ve chosen to fight this bias.

          We expect the AI to give us text or image generation that is based in reality, but the AI can’t experience reality and only has the knowledge of the training data we provide it, which is just an approximation of reality, not the reality we exist in. I think maybe the answer is training users of the tool to understand that the AI is doing the best it can with the data it has. It isn’t racist, it is just ignorant. Let the user add “diverse” to the prompt if they wish, rather than tampering with the request to hide the insufficiencies in the training data.

        • cheese_greater@lemmy.world · edited · 7 months ago

          Why couldn’t it be tuned to simply randomize the skin tone where it’s not otherwise specified? Like, if it’s all completely arbitrary, just randomize stuff. Problem solved?
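That suggestion could be sketched roughly like this (a hypothetical `augment_prompt` helper, not Google’s actual implementation; the descriptor list and the keyword check are illustrative only):

```python
import random

# Illustrative descriptor list; a real system would need something far more
# careful than simple keyword matching against the prompt text.
DESCRIPTORS = ["Black", "white", "Asian", "Hispanic"]
SPECIFIED = {d.lower() for d in DESCRIPTORS}

def augment_prompt(prompt: str) -> str:
    """Append a random descriptor only if the user didn't specify one."""
    words = {w.lower().strip(".,") for w in prompt.split()}
    if words & SPECIFIED:
        return prompt  # the user already chose; leave the prompt alone
    return f"{prompt}, {random.choice(DESCRIPTORS)}"

print(augment_prompt("a white nurse"))  # unchanged: already specified
print(augment_prompt("a nurse"))        # gets one random descriptor
```

Note that this still fails the historical-context problem from the article: “a 1943 German soldier” specifies no skin tone, so naive randomization would still produce the inaccurate images being criticized.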