• 0 Posts
  • 89 Comments
Joined 1 year ago
Cake day: July 20th, 2023


  • No need to clarify what you meant with the oligarchs; there's barely any exaggeration there. "Ghouls" is quite accurate.

    Considering the context of a worst-case scenario (a hostile takeover by an artificial superior), which honestly is indistinguishable from generic end-of-the-world doomsday prophecies but very much alive in Sutskever's circles, I believe "safe AI" consists of the very low bar of

    "humanity survives while AGI improves standards of living worldwide." Of course, for this I am reading between the lines based on previously acquired information.

    One could argue that if ASI is created, the possibilities become very black and white:

    • ASI is indifferent about human beings and pursues its own goals, regardless of consequences for the human race. It could even find a way off the planet and just abandon us.

    • ASI is misaligned with humanity and we become but a resource, with it treating us no differently than we have historically treated animals and plants.

    • ASI is aligned with humanity and it has the best intentions for our future.

    In any of these scenarios it would be impossible to calculate its intentions, because by definition it's more intelligent than all of us. It's possible that some things we understand as moral may be immoral from a better-informed perspective, and vice versa.

    The scary thing is we won't be able to tell whether it's malicious and pretending to be good, or benevolent and trying to fix us. Would it respect consent if, say, a racist refuses therapy?

    Of course, we could just as easily hit a roadblock next week and the whole hype could die out for another 10 years.


  • No, I applaud a healthy dose of skepticism.

    I am anything but in favor of idolizing Silicon Valley gurus and tech leaders, but from Sutskever I have seen enough to know he is one of the few actually worth paying attention to.

    Artificial Super Intelligence, or ASI, is the step beyond AGI (artificial general intelligence).

    The latter is equal or superior in capability to a real human being in almost all fields.

    Artificial Super Intelligence was defined (long before OpenAI was a thing) as transcending human intelligence in every conceivable way, at which point it is a fully independent entity that can no longer be controlled or shut down.


  • You're entitled to that opinion, and so are others. Sutskever may be an actual loony… or an absolute genius. Or both; that isn't up for debate here.

    I am just explaining what this is about, because if you think this is "just another money raiser" you obviously haven't paid enough attention to who this guy actually is.

    Superintelligence is a well-defined term in artificial intelligence, by the way, in case you're still confused. You may have seen these terms plastered around like buzzwords, but all of these definitions precede the AI hype of recent years.

    ML = machine learning, algorithms that improve over time.

    AI = artificial intelligence, machine learning with complex logic, mimicking real intelligence. <- we are here

    AGI = artificial general intelligence. An AI agent that functions intelligently at a level indistinguishable from a real human. <- experts estimate this will be achieved before 2030

    ASI = artificial super intelligence. AGI that transcends human intelligence and capacities in every way.

    It may not sound real to you, but if you ever visit the singularity sub on Reddit you will see how a great number of people think it is.

    Also, everything is science fiction till it's not. Horseless carriages were science fiction; so were cordless phones. The first airplane went up in 1903, and 66 years later we landed on the moon.


  • Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.

    This is the guy who turned against Sam for being too focused on releasing product. I don't think he plans on delivering much product at all. The reason to invest isn't to gain profit but to avoid losing to an apocalyptic event, which you may or may not personally believe in; many Silicon Valley types do.

    A safe AI would be one that does not spell the end of humanity or the planet. Ilya is famously obsessed with creating what's basically a benevolent AI god-mommy, and deeply afraid of an uncontrollable, malicious Skynet.




  • He's the guy who tried to stay true to the mission and realized that Sam was becoming a problem, but had too little business sense to anticipate the drama that ensued.

    He realized that OpenAI getting destroyed in a single week was worse than having Sam run it (Microsoft and Google were just going to gobble up the void).

    Then he was forced into compliance by Sam, who had conditions for returning. Add in the fact that they are long-time personal friends, that Sam is a super high-level speaker while Ilya usually keeps to himself, and the super stressful days everyone was going through at the time.

    You may understand why he didn't want to continue fighting and just eased into a slow departure to do better elsewhere.


  • It's true, I do worry about shit AI being plastered all over our devices, but if you cut through the marketing you see that a whole mix of machine learning and AI is used under the hood.

    Some of these tools, like MS Paint auto-removing backgrounds, and personal assistants like Siri talking more fluently, do seem like an improvement.

    I have no hopes for Windows Recall, but even for that we must admit it's not actually available yet. Neither is Apple's AI.

    So we are simply assuming that all of the tools they may ship are bad, based on some of the current stupid ideas being explored.






  • In a normal conversation sure.

    In this kind of Turing test, you may be disqualified as a juror for asking that question.

    Good science demands controlled conditions and defined goals. Anyone can organize a homebrew Turing test, but there are also real, proper ones with fixed response times and lengths.

    Some Turing tests may even have a human pick the best of 5 responses to provide to the jury. There are so many possible variations depending on the test criteria.
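    To make the idea concrete, here is a minimal sketch of one round of such a controlled setup, assuming a best-of-5 protocol with a fixed response-length cap. All names and the toy scoring rule are illustrative inventions, not taken from any real benchmark:

    ```python
    import textwrap

    MAX_CHARS = 200        # fixed response-length cap imposed by the protocol
    CANDIDATES_PER_TURN = 5

    def generate_candidates(prompt, n=CANDIDATES_PER_TURN):
        """Stand-in for the machine contestant: produce n candidate replies."""
        return [f"Reply {i} to: {prompt}" for i in range(n)]

    def pick_best(candidates, score):
        """Simulates the human operator choosing the best of 5 candidates."""
        return max(candidates, key=score)

    def normalize(reply):
        """Enforce the fixed-length rule before the jury sees the reply."""
        return textwrap.shorten(reply, width=MAX_CHARS, placeholder="...")

    def run_turn(prompt):
        candidates = generate_candidates(prompt)
        best = pick_best(candidates, score=len)  # toy score: longest reply wins
        return normalize(best)

    print(run_turn("What did you have for breakfast?"))
    ```

    The point isn't the scoring rule (a real operator would judge plausibility, not length); it's that every reply the jury sees has passed through the same fixed constraints, so the jury can't cheat by probing response timing or length.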


  • The Turing test isn't an arena where anything goes; most renditions have a strict set of rules on how questions must be asked and what they can be about. Pretty sure the response times also have a fixed delay.

    Scientists aren't stupid. The Turing test has been passed so many times that news outlets stopped covering it (till this clickbait, of course). The test has simply been made more difficult and cheat-proof as a result.