• RedditWanderer@lemmy.world (+37/-4) · edited · 7 months ago

      At first, all companies were afraid of giving access to these models, for trade-secret and security reasons. But then they basically all met at the White House and agreed that they would make way more fucking money stealing it than they would ever pay in restitution or damages to people and small businesses.

      Suddenly everybody had a chatbot and generated art ready for commercial sale. They also had to make the shift quickly, before official laws and protections (mostly from the EU) came in.

      Now AI is plateauing a bit, so they must hurry to get valued at 10 trillion dollars, get their energy needs subsidized, and have taxpayers invest in the nation’s energy requirements on their behalf.

    • Wrench@lemmy.world (+1/-35) · edited · 7 months ago

      I doubt that most corporations would even consider allowing Slack as a trusted app if they weren’t hosting their own instances themselves.

      I have to assume that this training is exclusively on instances hosted on Slack’s servers. So probably lots of smaller businesses that don’t know any better. And this was probably agreed to in the ToS as part of utilizing free, easy-to-set-up cloud servers.

        • Wrench@lemmy.world (+3/-16) · 7 months ago

          Ahh, looked at it and you’re right. They have an “Enterprise” version that seems security-conscious.

          Still, I stand by my original assertion. I have worked for FAANG companies with completely locked down security that allowed us to use Slack. I would be extremely surprised if their contract with Slack didn’t ensure complete data privacy.

          We’re talking about companies where a product leak makes international news. There is zero chance Slack employees have access to communications.

          • Kilgore Trout@feddit.it (+7) · 7 months ago

            > We’re talking about companies where a product leak makes international news. There is zero chance Slack employees have access to communications.

            Sure, even though Slack itself admits as much in its privacy policy.

  • RidcullyTheBrown@lemmy.world (+70/-2) · 7 months ago

    It’s funny how the conventional wisdom at the end of the last decade was that Slack was preferred over other simpler/free alternatives because of its UX. People were hailing it for how simple and intuitive it was to use, etc.

    Five or six years later, it has become a bloated piece of crap riddled with bugs. And the UI changes that come unannounced… it should be a criminal offense to change a UI through automated updates.

    Anyway, here we are: companies have handed their data to this monster, and we’ll see how they react when the data gets misused. Hopefully that will be the beginning of the end for it.

    • pearsaltchocolatebar@discuss.online (+10/-3) · 7 months ago

      I fucking hate Slack. I very rarely get any notification of new messages, and if I do, I have to restart the app to get them to actually show up.

      • Evotech@lemmy.world (+14/-1) · 7 months ago

        I love Slack. But the only thing I can compare it with for corporate use is Teams. So of course it’s amazing.

    • ____@infosec.pub (+3) · 7 months ago

      Teams is bloated garbage.

      I miss Slack, though, circa several years back. It “just worked” on most any platform, without the BS or “help”.

      Wouldn’t like it now, I’m sure, but I haven’t had a chance to use it since I started working for a co that’s “all in” on MS, including foisting AI on us.

      I am capable of drafting an email or message, bitches. If I am concerned about tone, etc., I’d prefer to employ an actual human I have a close relationship with to review the same.

      I have zero desire to be constantly corrected, and there are certain niche scenarios where very minor errors are actually endearing, and indicate enthusiasm.

      “Bob, I saw the posting for your role, can you tell me about your avg day?” is effective because it’s honest, coherent, and just excited enough that you made a minor error that slipped through.

      When Bob gets 25 of those emails and they all look the same because AI, it’s much harder to make the connection.

      • corsicanguppy@lemmy.ca (+2) · 7 months ago

        > minor error

        It was the comma splice, wasn’t it? Depending on Bob’s cohort, he may never notice.

        … and if I were receiving notes and questions about a role, an error like “emails” would earn relegation for sure; so be careful which error you leave in.

      • RidcullyTheBrown@lemmy.world (+1) · 7 months ago

        I never had the “pleasure” of using Teams. Is it also replacing Outlook? And is it somehow worse than fucking Outlook?!

      • kamenLady.@lemmy.world (+17) · edited · 7 months ago

        At this point, you should be able to ask whether you missed something important in the last few years. Is there any open conversation waiting for a reply somewhere?

        Edit: if they use our data, they should at least give us some useful tools so we can see what personal information of ours is out there …

        • Passerby6497@lemmy.world (+3) · 7 months ago

          > At this point, you should be able to ask whether you missed something important in the last few years. Is there any open conversation waiting for a reply somewhere?

          Not sure if you’ve ever used Copilot (I have it at work), but it offers the ability to summarize conversations and tell you what you’ve missed. I’ve used that a lot for high-chatter conversations when I don’t feel like catching up or I’ve been out. Pretty nice.

  • iAmTheTot@kbin.social (+29/-1) · 7 months ago

    It’s a safe bet that if you’ve put something on the internet, it’s been scraped by a bot by now for training. I don’t like that, for the record; I’m just saying I’m not surprised at this point. Companies are morally bankrupt.

    • cm0002@lemmy.world (+12/-4) · 7 months ago

      I don’t know why everyone is all shocked all of a sudden. There have been various scraper bots collecting text info for… many years now, LONG before LLMs came onto the scene.

      • QuadratureSurfer@lemmy.world (+6) · 7 months ago

        I agree, but it’s one thing if I post to public places like Lemmy or Reddit and it gets scraped.

        It’s another thing if my private DMs or private channels are being scraped and put into a database that will most likely get outsourced for prepping the data for training.

        Not only that, but the trained model will have internal knowledge of things that are sure to give any cybersecurity expert anxiety. If users know how to manipulate the AI model, they could cause it to divulge some of that information.

  • Endorkend@kbin.social (+21) · 7 months ago

    The more they push to train AI on our shitpostings on social networks, the more I’m certain we’re fucking doomed if their AI ever reaches consciousness.

    • Thorny_Insight@lemm.ee (+2) · edited · 7 months ago

      We may very well be doomed if AI reaches consciousness, but I’m not quite convinced LLMs are the way to get there. Even if they were, and one was trained solely on social media content, I still wouldn’t expect it to adopt the behaviour of your typical social media commenter.

      The toxic behaviour on social media is, in my view, driven almost solely by human ego and pettiness. It’s not obvious to me that an AI would care about things like winning arguments or coming up with snide remarks. What I see as the most likely outcome is an endlessly patient, rather autistic-like being that’s balanced in its views and would most likely be pretty difficult to argue against.

      I doubt humans are anywhere near the far end of the intelligence spectrum, and something with information-processing capability orders of magnitude greater than ours would more than likely not get caught up in stuff like confirmation bias, partisan thinking, motivated reasoning, being tossed around by emotions, cognitive dissonance, etc. Those are by definition human features.

  • Ghostalmedia@lemmy.world (+12/-3) · edited · 7 months ago

    Sounds like a lot of this is for non-generative AI. It’s for dumb things like that frequently-used-emoji feature.

    Knowing how the legal teams at my tech companies have worked, I’d bet that a lawyer updated the terms language to comply with privacy legislation, but they did a shit job and didn’t clarify what specifically was covered in the ToS. They were lazy and crafted something broad so they wouldn’t have to actually talk to the product or marketing people in their org.

  • Hobo@lemmy.world (+2) · 7 months ago

    Is anyone aware whether they’re also getting data from their Slack for Government offering? I was looking at the GovSlack site and I can’t tell one way or the other. While they claim to meet most of the big compliance regs, I don’t see anything about AI training being included or excluded.

    I know stealing trade secrets is a concern, but it seems like stealing state secrets might have some other implications. I know you’re not supposed to talk about classified info on Slack, but that doesn’t mean sensitive info isn’t shared, which also has some rather profound implications.

  • Andromxda 🇺🇦🇵🇸🇹🇼@lemmy.dbzer0.com (+2/-1) · 7 months ago

    Stay away from proprietary crap like Discord, Slack, WhatsApp and Facebook Messenger. There are enough FOSS alternatives out there:

    • You just want to message a friend/family member?
    • You need strong privacy/security/anonymity?
      • SimpleX
      • Session
      • Briar
      • I can’t really tell you which one is best, since I’ve never used any of these (except Session) for an extended period of time. Briar seems to be the best for anonymity, because it routes everything through the Tor network. SimpleX lets you host your own node, which is pretty cool.
    • You want to host an online chatroom/community?
    • You need to message your team at work?
    • You want a Zoom alternative?