I believe pixart sigma is more open. The community hasn’t rallied around it though.
Edit: Fuck yes, pixart is AGPL!
I know people shit on those bots that repost content from Reddit and other places, but it’s a really simple way to bootstrap content on Lemmy. I think if we want to win we need to be open to that.
Probably not, unless it turns out to have more porn than non-porn. Which could definitely happen!
(And with a site name like X, probably should happen.)
It’s advertising more than AI for me. Everything you do in Windows is monetized by selling your preferences to advertisers. Shameful.
Locally run AI could be great. But sending all your data to an external server for processing is really, really bad.
It won’t fizzle out; it already has legitimate business use cases. (A lot fewer than the marketing bros want you to believe, but real use cases nonetheless.) Blockchain and Augmented Reality never reached this point, so they fizzled. We’ll see a huge AI winter soon just like we did in the dot com bust in 2000.
Honestly, dating apps are one of the cases for a federated system. Use whatever frontend you want, no one person owns the backend so they can’t sleazily monetize you. Probably would need to be a bit more cryptographic than something like lemmy or mastodon though.
I mean, Twitter is the dumbest shit on the internet. But Reddit gets close sometimes!
You posted a single blog post about ChatGPT not being deterministic, I posted a GitHub issue that explains exactly how to do it using the transformers library. Not sure we can see eye to eye on this one.
Yes, but seeding the random generator makes it deterministic. Because LLMs don’t use actual randomness, they use pseudorandom generators.
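The pseudorandom point is easy to demonstrate. A minimal sketch in plain Python (no LLM involved, just the standard library): two generators seeded with the same value produce byte-for-byte identical "random" sequences, which is exactly why a fixed seed makes sampling reproducible.

```python
import random

# Two independent generators seeded identically.
a = random.Random(42)
b = random.Random(42)

seq_a = [a.random() for _ in range(5)]
seq_b = [b.random() for _ in range(5)]

# Same seed, same algorithm -> same "random" outputs every time.
print(seq_a == seq_b)  # True
```

The same principle applies to the sampling step in an LLM: fix the seed (plus all the other inputs) and the token draws repeat exactly.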
For all the same inputs, you’ll get the same result, barring a hardware failure. But you have to give it exactly the same inputs. That includes the random seed and the system prompt (e.g., you can’t put the current time and date in the system prompt), as well as the prompt itself.
Please enlighten me then. Clearly people are doing it, as proved by the link I sent. Are you simply going to ignore that? Perhaps we have different definitions of determinism.
Again, you are wrong. ChatGPT specifically may not be fully deterministic since it’s a hosted service, but you absolutely can replay a prompt with the same random seed to get deterministic responses. Computer randomness isn’t truly random.
But if that’s not satisfying enough, you can also set the temperature to zero and check that the system fingerprint hasn’t changed between calls, and that makes it even more deterministic, since it will always pick the highest-probability token.
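Why temperature zero removes the randomness entirely: temperature divides the logits before the softmax, so as it approaches zero, essentially all of the probability mass collapses onto the highest-logit token and sampling degenerates into argmax. A minimal sketch with made-up logit values:

```python
import math

def softmax(logits, temperature):
    """Convert logits to probabilities at a given temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for a three-token vocabulary.
logits = [2.0, 1.0, 0.5]

# As temperature drops, the distribution sharpens toward the
# highest-logit token, so sampling becomes effectively argmax.
for t in (1.0, 0.1, 0.01):
    probs = softmax(logits, t)
    print(t, [round(p, 4) for p in probs])
```

At temperature 0.01 the first token already holds essentially 100% of the probability, so every replay picks the same token regardless of seed.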
For example, Llama can be fully deterministic. https://github.com/huggingface/transformers/issues/25507#issuecomment-1678498896
What do you mean? AI absolutely can be made deterministic. Do you have a source to back up your claim?
You know what’s not deterministic? Human content reviewers.
Besides, determinism isn’t important for utility. Even if AI classified an ad wrong 5% of the time, it’d still massively clean up the spammy advertisers. But they’re far, FAR more accurate than that.
Okay, this is going to be one of the genuinely good uses of the newer multimodal AI: it’ll be able to watch every submission and categorize it with a lot more nuance than older classifier systems.
We’re probably only a couple years away from seeing a system like that in production in most social media companies.
That’s a great idea, if someone can bring the software and enough advertising to make it successful. It’s really hard but possible.
Good. Let their stranglehold die.
Anyone I know who’s actually deep into cybersecurity avoids extra devices, including smartphones. If you’re not hyper paranoid, you’ve missed the majority of what the nation states are up to.
> the accuracy for lived-in cars is still far lower: between 10 and 15%
Sounds like the tech isn’t terribly useful
They’re not stranded, because the part of the capsule that isn’t working has multiple redundancies and is intended to burn up on reentry anyway.
Starliner is perfectly capable of leaving the ISS whenever they want, but they would be unable to continue collecting data on the thruster shutoff (again, because it would burn up in the atmosphere).