Honestly, that’s the main thing I was thinking.
This is so exciting. I worked in a lab where we were trying to do this, and so I was very aware what a gold rush we were in. I’m so glad to see that it’s actually happening.
This is truly a watershed moment in science. This is going to mark a major turning point in cellular medicine from theory to commonplace care. Eventually, this will end the pharma industry’s insulin cash cow.
But it’s even bigger than that. Because once we can engineer cells that produce a natural product, the next step is to engineer cells that produce synthetic medicines. Antidepressants, birth control, hormones, weight loss drugs, boner pills… The frontier is huge, lucrative, financially disruptive for pharma companies and life changing for patients. This is a big moment in history, and we all need to be fighting harder than ever to end for-profit healthcare. Otherwise we’re going to end up with subscription licenses to our own bodies.
This article doesn’t really answer most of my questions.
What subjects does the AI cover? Do they do all their learning independently? Does AI compose the entire lesson plan? What is the software platform? Who developed it? Is this just an LLM or is there more to it? How are students assessed? How long has the school been around, and what is their reputation? What is the fundamental goal of their approach?
Overall, this sounds quite dumb. Just incredibly and transparently stupid – like if they insisted that all learning would be done on the blockchain. I’m very open-minded, but I don’t understand what the student’s experience will be. Maybe they’ll learn in the same way one could learn by browsing Wikipedia for 7 hours a day. But will they enjoy it? Will it help them find career fulfillment, build confidence, or learn social skills? It just sounds so much like that Willy Wonka experience scam, but applied to an expensive private school instead of a pop-up attraction.
I was trying to explain what AI alignment is to my mom, and I ended up using the behavior of companies like OpenAI, and how it’s distorted by the profit motive, as an example of a misaligned decision-making system. And I realized that late-stage capitalism is basically the paperclip maximizer made real.
This is a very good article. I think AI models have more to teach us about epistemology than people want to believe right now.
That sounds like some very cool engineering. I hope it sees as little use as possible, but I’m glad you’re prepared.
I’m concerned that this would require a continuous supply of water at a flow rate that might not be realistic.
I don’t think it’s secret. A lot of OpenAI’s business strategy is to warn of the danger of their own project as a means of hyping it.
OpenAI, despite having produced a pretty novel product, doesn’t really have a sound business model. LLMs are actually expensive to run. The energy and processing is not cheap, and it’s really not clear that they produce something of value. It’s a cool party trick, but a lot of the use cases just aren’t cost effective at this point. That makes their innovation hard to commercialize. So OpenAI promotes itself like online clickbait games.
You know the ones that are like, ‘WARNING: This game is so sexy it is ADDICTIVE! Do NOT play our game if you don’t want to CUM TOO HARD!’
That’s OpenAI’s marketing strategy.
They start at $70k. And they are actually still losing money on each sale.
It’s largely marketed as a recreation/sport vehicle. It’s for going camping and off-roading.
That isn’t to say that it can’t also get you to and from work, or even serve more practical uses. But at that price and feature set, I think anyone would agree it’s designed as a fun luxury first and foremost rather than a practical tool.
Whew. I’m glad he’s happy with his purchase. I can’t ever imagine having enough money that I could drop that kind of cash on a toy, no matter how neat I think it looks.
Haha a bike.
I hold out hope, actually, that as the right-to-repair movement continues to grow, repairability and control will eventually become common consumer priorities, in the same way that vehicle safety wasn’t something people thought about before the ’70s and is now one of the main factors in buying a car.
Once people start caring – and again, I believe this is the direction we’re heading – it will become something manufacturers have to design for.
This is modestly interesting. My brother worked here before they had layoffs about two years ago, and had a generally favorable opinion of the company and leadership.
Fundamentally, while I think RJ seems like a sound businessman and technologist, and I like the company’s taste a bit, I will never be able to reconcile his views with mine. He very openly views cars as computers, software, and services that happen to move you around, whereas I want a machine, and as minimal a relationship as possible with the manufacturer after I acquire the product.
Still, I wish them luck.
This is actually a misrepresentation of the law.
The law bans school districts from requiring teachers to report if students start experimenting with different pronouns.
Teachers can still report this to parents. There is nothing barring them from doing so. The only change is that they aren’t policed by their school district.
Technically, this is actually the classically conservative position!
This whole thing is extremely stupid. Parents should take care of their own shit. You want to know what your kid is thinking? Talk to them. Demanding that the trusted adults in their lives who DO pay attention to them narc for you is a weak-ass move for parents who run to the nanny state to help them raise their kids because they don’t know how to manage their own damn family life.
I think maybe execs and investors might feel it’s all the same, but if you’re a project manager for cloud infrastructure for enterprise services or you’ve been working for years on releasing a new component of Bing search that you think is a real gamechanger and some muckity-muck at the top says, ‘Oh, don’t worry about that anymore: a property manager that’s owned by a private equity partner of one of our big investors wants the chatbot that schedules apartment viewings in Huntsville to be more flirty, so go massage the prompts to make it convincingly laugh at bad jokes,’ some of those folks are liable to start grumbling that this isn’t the role that they were pitched when they took this job.
This sucks. I was really holding out hope that they might chart a better path forward than most of the alternatives.
Yeah, but the training set is nowhere near clean. That’s my point. “Close” is nowhere near good enough in this context.
Why do you guarantee that? It seems obviously wrong, on a technical level.
The point I’m making is that even if we take it as a given that a shrewd enough AI could correctly distinguish sex at birth – which I think is obviously impossible given the appearances of many cis women and the nature of statistical prediction – you’d still need a training data set.
If the dataset has any erroneous input, that corrupts its ability, and the whole point of this exercise is trying to find passing transwomen. Why would anyone expect that training set of hundreds of thousands of supposed cis women wouldn’t have a few transwomen in it?
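To put rough numbers on the point about erroneous labels and low base rates – these figures are purely illustrative assumptions, not measurements from the app:

```python
# Back-of-envelope sketch: why a "mostly clean" training set and a
# highly accurate classifier still fail at this task.
# All numbers below are illustrative assumptions.

n_dataset = 500_000   # scraped photos labeled "cis woman"
label_error = 0.005   # assume even a 0.5% labeling error rate
mislabeled = n_dataset * label_error
print(f"Expected mislabeled examples in the training set: {mislabeled:,.0f}")

# Base-rate problem at inference time: suppose the classifier were
# 99% accurate in both directions (far better than plausible), and
# 1% of the users it screens are trans.
base_rate = 0.01
sensitivity = 0.99    # probability a trans user is flagged
specificity = 0.99    # probability a cis user is passed

# Bayes' rule: of everyone the model flags, what share is actually trans?
p_flag = base_rate * sensitivity + (1 - base_rate) * (1 - specificity)
precision = (base_rate * sensitivity) / p_flag
print(f"Share of flagged users who are actually trans: {precision:.0%}")
```

Even under those generous assumptions, about half of the people the model flags would be cis women, and the training set itself would already contain thousands of mislabeled examples.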
This is a great point.
The technology that excludes transwomen from the app is a clear warning that the app is populated exclusively by transphobes. It’s obviously wildly dangerous for a transwoman to be on the app.
The notion that AI is going to clock them is absurd AI hype. There’s no reason to expect AI to be capable of this kind of discernment, and that assumes you even had a training set. Where in the absolute fuck would someone find a training set like that?
Edit: I didn’t read the article. It seems it’s a lesbian dating app. Well, probably less dangerous for transwomen, but still not technically sound.
Yeah. And it’s so bad that I feel like the functionality barely goes down.
They should release the following:
‘Out of an abundance of caution, we advise against any user charging this device and attempting to rely on it for communications or regular assistance. Fortunately, we’ve found a workaround and suggest customers looking to continue enjoying the benefits of the Humane pin consider wearing it in an unpowered state. This will provide infinite battery life and a 100% reduction in unwanted heating while enabling users to continue to receive nearly all the same functionality to which they are accustomed.’
I think his intense commitment to getting Trump elected makes more sense when you consider this article.
His enormous wealth is largely stored in the form of Tesla stock, and that stock has been valued based on the belief that it isn’t a car company, it’s a robotaxi service currently selling the hardware to finance the software development. The value – and his wealth – can persist indefinitely as long as investors continue to accept that premise, no matter how long delayed. But if something tangibly undermines that premise, Musk could conceivably lose the majority of his wealth overnight.
The National Highway Traffic Safety Administration is probably the greatest threat to his wealth. He doesn’t worry about competitors or protestors or Twitter users or advertisers. They’re all just petty nuisances. But the federal regulator over roads… that is his proverbial killer snail. And I think fully capturing the entire federal regulatory state is his strategy to permanently confine that snail.
More than anything else, I think that’s what is motivating his radical embrace of fascism.