• 0 Posts
  • 9 Comments
Joined 1 year ago
Cake day: June 13th, 2023


  • I see intelligence as filling areas of concept space within an eco-niche in a way that proves functional for action within that space. I think we are discovering that "nature" has little commitment, and is just optimizing preparedness for expected levels of entropy within the functional eco-niche.

    Most people haven't even started paying attention to distributed systems building shared enactive models, but those systems are already capable of things that should be considered groundbreaking given the limited time and funding behind their development.

    That being said, localized narrow generative models are just building large individual models of a predictive process that doesn't, by default, actively update its information.

    People who attack AI for being "just prediction machines" really need to look into predictive processing, or learn how much we organics just guess and confabulate on top of vestigial social priors.

    But no: corpos are using it, so "computer bad, human good," even though the main issue here is humans with unlimited power being encouraged into bad actions by flawed social posturing systems and the conflation of wealth with competency.
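The "prediction machine" framing of predictive processing mentioned above can be sketched in a few lines: a belief is repeatedly nudged toward incoming observations in proportion to the prediction error. This is a toy illustration, not any particular published model; the function name, learning rate, and data are all invented for the example.

```python
# Toy predictive-processing loop (illustrative only):
# a belief is updated by a fraction of the prediction error.

def update_belief(belief, observation, learning_rate=0.2):
    """One step: move the belief toward the observation by
    learning_rate * (observation - belief)."""
    error = observation - belief
    return belief + learning_rate * error

belief = 0.0
for obs in [1.0, 1.0, 1.0, 1.0, 1.0]:
    belief = update_belief(belief, obs)
# After five identical observations, the belief has moved most of
# the way toward 1.0 but not all the way (error shrinks each step).
```

The point of the sketch is that "just predicting" already entails continual error-driven updating, which is the part of the loop the comment argues frozen generative models lack.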


  • While I agree about the conflict of interest, I would largely say the same thing even without one. However, I see intelligence as a modular, many-dimensional concept. If it scales as anticipated, it will still need to be organized into different forms of informational or computational flow for anything resembling an actively intelligent system.

    On that note, the recent developments in active inference, like RxInfer, are astonishing given how little attention they are receiving. Seeing how LLMs are being treated, I'm almost glad it isn't being absorbed into the hype-and-hate cycle.
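Active inference frameworks such as RxInfer are built around Bayesian belief updating. As a minimal, generic sketch of that perception step (not RxInfer's API — RxInfer itself is a Julia package, and this example is a plain discrete Bayes update with invented numbers):

```python
# Discrete Bayesian update: posterior is proportional to
# prior * likelihood, renormalized to sum to 1.

def bayes_update(prior, likelihood):
    """Combine a prior belief over hypotheses with the likelihood
    of an observation under each hypothesis."""
    unnormalized = [p * l for p, l in zip(prior, likelihood)]
    total = sum(unnormalized)
    return [u / total for u in unnormalized]

# Two hypotheses, initially equally likely; the observation is
# four times as likely under the second hypothesis.
posterior = bayes_update([0.5, 0.5], [0.2, 0.8])
# posterior = [0.2, 0.8]
```

Active inference adds an action-selection loop on top of updates like this; the sketch only shows the belief-revision core.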




  • Perhaps instead we could restructure our epistemically confabulated reality in a way that doesn't inevitably lead to unnecessary conflict between diverging models that haven't grown the priors necessary to comprehend one another and to exist peacefully at the same time.

    breathe

    We are finally coming to comprehend how our brains work, and how intelligent systems generally work at any scale, in any ecosystem. Subconsciously enacted social systems included.

    We're seeing developments that make me extremely optimistic, even if everything else is currently on fire. We just need a few more years without self-focused turds blowing up the world.


  • Peanut@sopuli.xyztoTechnology@lemmy.worldGenerative A.I - We Aren’t Ready.
    8 months ago

    AI or no AI, the solution needs to be social restructuring. People underestimate how much society can actively change, because the current system is a self-sustaining set of bubbles that have naturally grown resilient to perturbations.

    The few people who actually care to solve the world’s problems are figuring out how our current systems inevitably fail, and how to avoid these outcomes.

    However, the best bet for restructuring would be a distributed intelligent agent system. I could get into recent papers on confirmation bias and the confabulatory nature of thought at the personal, group, and societal levels.

    Turns out we are too good at going with the flow, even when the structure we are standing on is built over highly entrenched vestigial confabulations that no longer help.

    Words, concepts, and meanings change heavily depending on the model interpreting them. The more divergent, the more difficulty in bridging this communication gap.

    A distributed intelligent system could not only enable a complete social restructuring with both autonomy and altruism guaranteed, but also provide an overarching connection between the different models at every scale, capable of properly interpreting the different views and conveying them more accurately than we could ever manage through model projection and the empathy barrier.


  • I definitely agree that copyright is a good half-century overdue for an update. Disney and its contemporaries should never have been allowed the dominance and extension of copyright that amounts to what feels like ownership of most global artistic output. They don't need AI; they have the money and interns to create whatever boardroom-adjusted art they need to continue their dominance.

    Honestly, I think the faster AI happens, the more likely it is that we find a way out of the social and economic hierarchy that feels one step from anarcho-capitalist aristocracy.

    I just hope we can find the change without riots.


  • And you violate copyright when you think about copyrighted things alone at night.

    I violate copyright when I draw Mario and don't sell it to anybody.

    Or maybe these are dumb stretches of what copyright is and how it should be applied.

    The reasoning in this article is dumb and all over the place.

    Seems like Gary Marcus being Gary Marcus.

    I've already seen OpenAI calling out some of the bullshit specifically noted in this piece. That doesn't matter, though; the damage is done and people WANT to believe AI is terrible in every way.

    Everyone is just steadfastly determined to climb onto the Gary Marcus unreasonable-AI-hate train no matter what.