• DreamlandLividity@lemmy.world · 2 months ago

    The worst part is that, once again, Proton is trying to convince its users that it’s more secure than it really is. You have to wonder what else they are lying or being deceptive about.

    • hansolo@lemmy.today · 2 months ago

      Both you and the author seem to not understand how LLMs work. At all.

      At some point, yes, an LLM has to process cleartext tokens. There’s no getting around that. Anyone who builds an LLM that can run inference over encrypted data at the 30-billion-parameter scale will become an overnight billionaire from military contracts alone. If you want absolute privacy, process locally. Lumo has limitations, but it goes further than duck.ai at respecting privacy. Your threat model and your equipment mean YOU make a decision for YOUR needs. This is an option; it’s not trying to be one-size-fits-all. You don’t HAVE to use it. It’s not being forced down your throat like Gemini or Copilot.
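
      For anyone curious what “process locally” means in practice, here’s a minimal sketch using the llama-cpp-python bindings (the model path is a placeholder for whatever GGUF checkpoint you’ve downloaded):

      ```python
      # Local inference: the model runs entirely on your own hardware,
      # so no third party ever sees your cleartext prompt.
      # Requires `pip install llama-cpp-python` and a downloaded GGUF file.
      from llama_cpp import Llama

      llm = Llama(model_path="./mistral-7b-instruct.gguf", n_ctx=2048)

      out = llm("Why do e2ee and server-side LLMs conflict?", max_tokens=128)
      print(out["choices"][0]["text"])
      ```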

      And their LLM? It’s Mistral, OpenHands, and OLMo, all open source. It’s in their documentation. So this article is straight-up lying about that. Like… did Google write this article? It’s simply propaganda.

      Also, Proton does have some circumstances where it lets you decrypt your own email locally. Otherwise it’s basically impossible to search your email for text in the message body. That already existed as an option, and if users want AI assistants, that’s obviously their bridge. But it’s not the default; it’s an option you have to set up. Some users want that, and it’s not forced on anyone. Chill TF out.

      • DreamlandLividity@lemmy.world · 2 months ago

        Their AI is not local, so adding it to your email means breaking e2ee. That’s fine, to some extent. You can make an informed decision about it.

        But Proton is not putting warning labels on this. They are trying to confuse people into thinking it has the same security as their e2ee mail. Just look at the “zero trust” bullshit on Proton’s own page.

        • youmaynotknow@lemmy.zip · 2 months ago

          Where does it say “zero trust” on Proton’s own page? It does not say “zero-trust” anywhere; it says “zero-access”. The data is encrypted at rest, so it is not e2ee. They never mention end-to-end encryption for Lumo except for ghost mode, and there they are talking about the stored chat once it’s complete and you choose to leave it there for later, not about the prompts you send in.

          > Zero-access encryption
          >
          > Your chats are stored using our battle-tested zero-access encryption, so even we can’t read them, similar to other Proton services such as Proton Mail, Proton Drive, and Proton Pass. Our encryption is open source and trusted by over 100 million people to secure their data.

          Which means that they are not advertising anything they are not doing or cannot do.

          By posting this disinformation, all you’re achieving is driving people back to all the shit services out there for “free”, because many will start believing that privacy is way harder than it actually is (“so what’s the point?”) or, even worse, that no alternative will make them more private, so they might as well stop trying.

        • hansolo@lemmy.today · 2 months ago

          My friend, I think the confusion stems from you thinking you have a deep technical understanding of this, when everything you say demonstrates that you don’t.

          First off, you don’t even know the terminology. A local LLM is one YOU run on YOUR machine.

          Lumo apparently runs on Proton servers, where their email and docs all live as well. So I’m not sure what “Their AI is not local!” even means, other than that you don’t know what LLMs do or what they actually are. Do you expect a 32B LLM that would need about a 32GB video card to get downloaded and run in a browser? Buddy… just… no.

          Look, Proton can at any time MITM attack your email, or, if you use them as a VPN, MITM your VPN traffic if it feels like it. Any VPN or secure email provider can actually do that: Mullvad can, Nord can, take your pick. That’s just a fact. Google’s business model is to MITM attack your life, so we already have the counterfactual. So your threat model needs to include how much you trust the entity handling your data not to do that, whether intentionally or by letting others in through negligence.

          There is no such thing as e2ee LLMs. That’s not how any of this works. Encrypting the chat in transit to get what you type into the LLM context window, letting the LLM process tokens the only way it can, sending your response back, and keeping no logs or data is about as good as it gets without a local LLM - which, remember, means on YOUR machine. If that’s unacceptable for you, then don’t use it. But don’t brandish your ignorance like you’re some expert, insisting that everyone on earth adhere to whatever ill-informed “standards” you think up.

          Also, clearly you aren’t using Proton anyway, because if you want to search the text of your emails, you have to process that locally, and you have to click through two separate warnings that tell you, in bold text, “This breaks the e2ee! Are you REALLY sure you want to do this?” So your complaint about missing warnings is just a flag saying you don’t actually know and are just guessing.

          • DreamlandLividity@lemmy.world · 2 months ago

            > A local LLM is one YOU run on YOUR machine.

            Yes, that is exactly what I am saying. You seem to be confused by basic English.

            > Look, Proton can at any time MITM attack your email

            They are not supposed to be able to, and well-designed e2ee services can’t. That’s the whole point of e2ee.

            > There is no such thing as e2ee LLMs. That’s not how any of this works.

            I know. When did I say there is?

      • DreamlandLividity@lemmy.world · 2 months ago

        > Zero-access encryption
        >
        > Your chats are stored using our battle-tested zero-access encryption, so even we can’t read them, similar to other Proton services such as Proton Mail, Proton Drive, and Proton Pass.

        From Proton’s own website.

        And why this is not true is explained in the article from the main post, and it’s also easy to figure out with a little common sense: the AI can’t respond to messages it can’t understand, so the AI must decrypt them.
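
        To make that common-sense point concrete, here’s a toy sketch with Python’s cryptography library (the keys and messages are purely illustrative):

        ```python
        # Toy illustration: if only the user holds the key (true e2ee),
        # the server has nothing it can tokenize. For the LLM to answer,
        # the server must be handed the key or the plaintext - and that
        # hand-off is exactly what breaks end-to-end encryption.
        from cryptography.fernet import Fernet

        user_key = Fernet.generate_key()  # stays on the user's device
        ciphertext = Fernet(user_key).encrypt(b"Draft a reply to my landlord")

        # Server side: decryption is only possible WITH the user's key.
        plaintext = Fernet(user_key).decrypt(ciphertext)
        print(plaintext)  # the model can only work on this decrypted form
        ```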

    • wewbull@feddit.uk · 2 months ago

      It’s when the coffers of Microsoft, Amazon, Meta, and the investment banks dry up. All of them are losing billions every month, but it’s all driven by fewer than 10 companies. Nvidia is lapping up the money of course, but once the AI companies stop buying GPUs in crazy numbers it’s going to be a rocky ride down.

      • astanix@lemmy.world · 2 months ago

        Is it like crypto, where CPUs were good, then GPUs, then FPGAs, then ASICs? Or is this different?

        • wewbull@feddit.uk · 2 months ago

          I think it’s different. The fundamental operation of all these models is multiplying big matrices of numbers together. GPUs are already optimised for this. Crypto was trying to make the algorithm fit the GPU rather than it being a natural fit.
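
          To see what that fundamental operation looks like, here’s a tiny NumPy sketch (shapes are illustrative):

          ```python
          # The workhorse of transformer inference: dense matrix multiplies.
          # Shapes are illustrative: a batch of 8 tokens, 4096-wide hidden layer.
          import numpy as np

          x = np.random.randn(8, 4096).astype(np.float32)     # activations
          w = np.random.randn(4096, 4096).astype(np.float32)  # one weight matrix

          y = x @ w  # this multiply, repeated layer after layer, dominates runtime
          ```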

          With FPGAs you take a 10x loss in clock speed but can have precisely the algorithm you want. ASICs then give you the clock speed back.

          GPUs are already ASICs that implement the ideal operation for ML/AI, so FPGAs would be a backwards step.

  • brucethemoose@lemmy.world · 2 months ago

    First of all…

    Why does an email service need a chatbot, even for business? Is it an enhanced search over your emails or something? Like, what does it do that any old chatbot wouldn’t?

    EDIT: Apparently nothing. It’s just a generic Open WebUI frontend with Proton branding, a no-logs (but not E2E) promise, and kinda old 12B-32B-class models, possibly fine-tuned on Proton documentation (or maybe just given a branded system prompt). But they don’t use any kind of RAG as far as I can tell.

    There are about a bajillion of these, and one could host the same thing inside Docker in like 10 minutes.
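
    As a rough sketch of what “the same thing” amounts to once a local backend like Ollama is running (the endpoint and model name below are Ollama defaults, my assumption rather than anything Proton documents):

    ```python
    # Querying a self-hosted backend (Ollama's default local API) with the
    # stdlib only - this is the kind of call those branded frontends wrap.
    import json
    import urllib.request

    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({
            "model": "mistral",  # any model you've pulled locally
            "prompt": "Hello from my own hardware",
            "stream": False,
        }).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])
    ```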

    …On the other hand, it has no access to your email, I think?

  • Red_October@lemmy.world · 2 months ago

    Okay, but are any AI chatbots really open source? Isn’t half the headache with LLMs the fact that there comes a point where it’s basically impossible for even the authors to decode the tangled madness of their machine learning?

    • lefixxx@lemmy.world · 2 months ago

      Yeah, but you don’t open-source the LLM itself; you open-source the training code, the weights, and the specs/architecture.
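
      In practice, open weights are just downloadable artifacts, e.g. with Hugging Face transformers (using Mistral’s public 7B here, since that family came up above):

      ```python
      # Open weights in practice: the checkpoint, config, and tokenizer are
      # all downloadable artifacts, not a black-box API.
      # Note: this fetches many gigabytes on first run.
      from transformers import AutoModelForCausalLM, AutoTokenizer

      model_id = "mistralai/Mistral-7B-v0.1"  # openly published weights
      tok = AutoTokenizer.from_pretrained(model_id)
      model = AutoModelForCausalLM.from_pretrained(model_id)
      ```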

      • nymnympseudonym@lemmy.world · 2 months ago

        What do you think an LLM is? Once you’ve opened the weights, IMO it’s pretty open. Once they open the training data, that’s pretty damn open. What do you want, a Gitian reproducible build?

  • archchan@lemmy.ml · 2 months ago

    There’s some good discussion about the security in the comments, so I’m just going to say that Lumo’s Android app required the Play Store and Google Play Services. I uninstalled.

    It’s also quite censored. I gave Proton’s cute chatbot a chance, but I’m not impressed.