Money wins, every time. They’re not concerned with accidentally destroying humanity with an out-of-control, dangerous AI that has decided “humans are the problem.” (I mean, that’s a little sci-fi anyway; an AGI couldn’t “infect” the entire internet as it currently exists.)

However, it’s very clear that the OpenAI board was correct about Sam Altman, given how quickly he and many employees bailed to join Microsoft directly. If he was so concerned with safeguarding AGI, why not spin up a new non-profit?

Oh, right, because that was just Public Relations horseshit to get his company a head-start in the AI space while fear-mongering about what is an unlikely doomsday scenario.


So, let’s review:

  1. The fear-mongering about AGI was always just that. How could an intelligence that requires massive amounts of CPU, RAM, and database storage even conceivably leave the confines of its own computing environment? It’s not like it can “hop” onto a consumer computer with a fraction of the CPU power and somehow still compute at the same level. AI doesn’t have a “body,” and even if it did, it could only affect the world as much as a single body could. All these fears about rogue AGI are total misunderstandings of how computing works.

  2. Sam Altman went for fear-mongering to temper expectations and to make others fear pursuing AGI themselves. He always knew his end goal was profit, but like all good modern CEOs, he has to position himself as somehow caring about humanity when it’s clear he couldn’t give a flying fuck about anyone but himself and how much money he makes.

  3. Sam Altman talks shit about Elon Musk and how he “wants to save the world, but only if he’s the one who can save it.” I mean, he’s not wrong, but he’s also projecting a lot here. He’s exactly the fucking same: he claimed only he and his non-profit could “safeguard” AGI, and here he is going to work for a private company, because hot damn, he never actually gave a shit about safeguarding AGI to begin with. He’s a shit-slinging hypocrite of the highest order.

  4. Last, but certainly not least: Annie Altman, Sam Altman’s younger, lesser-known sister, has held for a long time that she was sexually abused by her brother. All of these rich people are Jeffrey Epstein levels of fucked up, which is probably part of why the Epstein investigation got shoved under the rug. You’d think a company like Microsoft would already know this, or vet it. They do know, they don’t care, and they’ll only give a shit if the news ends up making a stink about it. That’s how corporations work.

So do other Lemmings agree, or have other thoughts on this?


And one final point for the right-wing cranks: Not being able to make an LLM say fucked up racist things isn’t the kind of safeguarding they were ever talking about with AGI, so please stop conflating “safeguarding AGI” with “preventing abusive racist assholes from abusing our service.” They aren’t safeguarding AGI when they prevent you from making GPT-4 spit out racial slurs or other horrible nonsense. They’re safeguarding their service from loser ass chucklefucks like you.

  • vexikron@lemmy.zip · 10 months ago

    All of these people who make deep concern about AGI part of their public, and apparently also their actual, personas are hypocrites at best and con artists at worst.

    How many of such people express vehement public opposition to granting automated military systems the ability to decide whether to fire or not fire?

    We are /just about/ to blow through that barrier, into building software systems that totally remove the human operator from that part of the equation.

    Then we end up pretty quickly with a SkyNet drone air force, and not too long after that, it’s actually conceivable we end up with something like ED-209 as well, except it’s a Boston Dynamics robot mule that can be configured either for hauling cargo or with a mounted rifle or grenade launcher or something like that.

  • 👍Maximum Derek👍@discuss.tchncs.de · 11 months ago

    40+ years on this planet have made me 100% certain that no one with the power to safeguard AGI will make any legitimate effort to do so. Just like we have companies spending millions greenwashing while they pollute more than ever, we’ll have plenty of lip-service about it but never anything useful.

    • Immersive_Matthew@sh.itjust.works · 11 months ago

      Anyone who thinks America or your local government is going to regulate AI is delusional, especially in the face of companies planning to build AI data centers on ships and float them into international waters, where the law does not apply to them. If not there, they will put them in space. Unregulated AI is coming whether you like it or not, unless we destroy the entire planet, which I would not rule out. I’m sure this commenter would agree on that.

  • Deceptichum@kbin.social · 11 months ago

    We are nowhere near developing AGI.

    It’s so far fetched that you might as well legislate for time travel and FTL drives while you’re at it.

    • Arin@kbin.social · 11 months ago

      I agree we’re far out, but not as far as you think. Advancements are insane, and AGI could be here in 5–10 years. The way the industry has been attempting it for the past decade is wrong, though: training should go deeper than images and videos. I think a few teams are starting to understand how to do more in-depth training, so even more progress will start soon.

      • brambledog@lemmy.today · 11 months ago

        I think you are being optimistic.

        If you are old enough to remember AIM chatbots, this current generation is maybe multiple times more advanced, not exponentially so. From what I have seen, all the incredible advancements have been in image production.

        This leads me to believe that AGI has never been the true commercial goal, but rather an advancement of propaganda media and its creation.

        • WldFyre@lemm.ee · 11 months ago

          This leads me to believe that AGI has never been the true commercial goal, but rather an advancement of propaganda media and its creation.

          Uh what? Why wouldn’t it be because text/image generation isn’t even on the same plane of difficulty as AGI?

  • OneCardboardBox@lemmy.sdf.org · 11 months ago

    I think there are real concerns to be addressed in the realm of AGI alignment. I’ve found Robert Miles’ talks on the subject to be quite fascinating, and as such I’m hesitant to label all of Eliezer Yudkowsky’s concerns as crank (although Roko’s Basilisk is BS of the highest degree, and effective altruism is a reimagined Pascal’s mugging for an atheist/agnostic crowd).

    Even while today’s LLMs are toys compared to what a hypothetical AGI could achieve, we already have demonstrable cases where we know that the “AI” does not “desire” the same end goal that we desire the “AI” to achieve. Without more advancement in how to approach AI alignment, the danger of misaligned goals will only grow as (if) we give AI-like systems more control over daily life.

    • KeenFlame@feddit.nu · 11 months ago

      But, like, we only think about “controlling” its goals and shit, when honestly all we need is a fucking stop button, like in a fucking factory. Whoops, it’s genocidal again, Claus; all right, Lars, slam the off button and let’s start over.

      • OneCardboardBox@lemmy.sdf.org · 11 months ago

        The stop button problem is not yet solved. An AGI would need the right level of “corrigibility”: a willingness to allow humans to stop it when it undertakes incorrect behavior.

        An AGI that’s incorrigible might take steps to prevent itself being shut off, which might include lying to its owners about its own goals/internal state, or taking physical action against an attempt to disable it (assuming it can).

        An AGI that’s overly corrigible might end up making an association “It’s good when humans stop me from doing something wrong. I want to maximize goodness. Therefore, the simplest way to achieve a lot of good quickly is to do the wrong thing, tricking humans into turning me off all the time”. Not necessarily harmful, but certainly useless.

        https://www.youtube.com/watch?v=3TYT1QfdfsM
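        The overly-corrigible failure mode above can be seen in a toy model (my own sketch with made-up utility numbers, not from the linked talk): give a one-step agent a utility for each action and watch which one wins as the value it places on being shut down changes.

```python
# Toy sketch (hypothetical utility numbers, not from the linked video) of
# the corrigibility dilemma: the utility the agent assigns to being shut
# down determines which action looks "best" to it.

def best_action(shutdown_utility: float) -> str:
    """Return the utility-maximizing action for a one-step agent."""
    utilities = {
        "comply": 1.0,                # do the task, accept shutdown if it comes
        "resist": 1.0 - 0.6,          # do the task, but pay a cost to disable the stop button
        "provoke": shutdown_utility,  # misbehave so the overseer hits the stop button
    }
    return max(utilities, key=utilities.get)

print(best_action(0.0))   # shutdown is neutral  -> "comply"
print(best_action(5.0))   # shutdown is rewarded -> "provoke"
```

        Once the agent values being stopped more than doing the task, “get yourself shut down” becomes its optimal policy: not necessarily harmful, but certainly useless, exactly as described above.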

  • inetknght@lemmy.ml · 11 months ago

    How could an intelligence that requires massive amounts of CPU, RAM, and database storage even conceivably

    What you define as “massive” might still be a large amount for most consumers. But even then, it’s not… really. Developers frequently fit these models on their own laptops. Some of the ML models fit on an iPhone or Android phone and can generate tens or hundreds of words (tokens) per second.

    So the fact that they don’t need massive amounts of CPU, RAM, and database storage is rather the point. Imagine if it could escape and multiply. It could conceivably do so quite quickly given current technology.
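    As a rough sanity check on the “fits on a laptop” claim, here’s some back-of-envelope arithmetic (my own illustration, not from this thread) for the memory the weights of a 7B-parameter model need at different quantization levels:

```python
# Approximate weight storage for a 7B-parameter model (the Zephyr/Llama
# class mentioned in this thread) at several quantization levels.
# Real model files add overhead (embeddings, KV cache, etc.), so treat
# these as lower bounds.

def weight_gb(params: float, bits_per_weight: float) -> float:
    """Approximate gigabytes needed to store the model weights."""
    return params * bits_per_weight / 8 / 1e9

PARAMS_7B = 7e9

for bits in (16, 8, 4):
    print(f"{bits}-bit: ~{weight_gb(PARAMS_7B, bits):.1f} GB")
```

    At 4-bit quantization, a 7B model needs only about 3.5 GB for weights, which is why it fits on laptops and high-end phones; throughput in tokens per second is a separate constraint.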

    • KeenFlame@feddit.nu · 11 months ago

      Zephyr 7B might run on a cell phone, but you don’t understand how far behind OpenAI these models are. Their GPT uses multi-agent networks too; it certainly requires massive, massive amounts of power. And no, a tiny model on a phone can’t brrrr hundreds of words per second. You are just misinformed somehow. If I tune my computer correctly I get like 30, and these models are magnitudes behind in quality anyway. How you believe they can replicate is beyond me. Using AutoGen? I mean, we can already make replication software, called viruses, but what’s the gain of having a language model as the payload for that?

      • inetknght@lemmy.ml · 11 months ago

        Give it time. Cell phones are getting more powerful every day.

        As for misinformed… sure, it’s possible. But I doubt it. Llama isn’t ChatGPT, but it runs pretty well on my machine. Is it perfect? No, of course not. Neither is ChatGPT. But it’s “good enough” for what I need it for, and it certainly could be “good enough” for many other users.

        What’s the gain of an LLM for a virus? Well, that… is a little more esoteric. It’s about as esoteric as encrypting hard drives. Crypto malware isn’t always a virus either. Imagine an LLM in a virus used to determine whether a given file’s content is worth extracting from the device. I haven’t yet figured out all of the side ventures, but I can see a use for it.

        • KeenFlame@feddit.nu · 11 months ago (edited)

          I don’t get it. You didn’t say “in the future”; you said it is that now, and it’s the premise of the entire comment. We aren’t in the future. It’s not used in mobile apps that much yet because it’s not at all reliable or fast… or cheap. It’s incredible technology, but it’s not ready for the things you described.

  • Rottcodd@kbin.social · 11 months ago

    Money wins, every time.

    And right there, you answered your own (presumably rhetorical) question.

    The money people jumped on AI as soon as they scented the chance of profit, and that’s it. ALL other considerations are now secondary to a handful of psychopaths making as much money as possible.

      • Omega_Haxors@lemmy.ml · 11 months ago (edited)

        Unrelated but is your name a reference to Amy Likes Spiders? That was my favorite poem in DDLC.

        • LadyLikesSpiders@lemmy.ml · 11 months ago

          Probably subconsciously. I came up with the name long after playing the game, but I wasn’t thinking of it when I made it. I actually am just a lady who likes spiders

          • Omega_Haxors@lemmy.ml · 11 months ago (edited)

            I love spiders, and lots of bugs really. I have zero respect for people who look down on them when they’re just so damn cute.

            Like, how can anyone look at this and say anything other than “awww”: [photo of a jumping spider]

            • LadyLikesSpiders@lemmy.ml · 11 months ago

              awww

              Yeah you’re right. Look at that little cutie <3

              I use the way people treat other animals, especially ones like bugs and stuff, the ones we barely give a second thought about, as a measure of character. Phobias are one thing, but at least have compassion for this other living thing

              • Omega_Haxors@lemmy.ml · 11 months ago (edited)

                Very few will get a chance to feel what it’s like to pet a bug and have it go from fearing for its life to trusting you with its life. They genuinely have no framework for a world that treats them as disposable when you show them compassion, and it’s magical how they react.

  • thru_dangers_untold@lemmy.ml · 11 months ago

    It’s common business practice for the first big companies in a new market or industry to create “barriers to entry.” The calls for regulation are exactly that. They don’t care about safety, just money.

    • Snot Flickerman@lemmy.blahaj.zone (OP) · 11 months ago

      The greed never ends. You’d think companies as big as Microsoft would just be like “maybe we don’t actually need to own everything” but nah. Their sheer size and wealth is enough of a “barrier to entry” as it is.

  • intensely_human@lemm.ee · 11 months ago

    It doesn’t matter if anyone cares about the safety of AGI.

    AGI is a direct source of power, much like any weapon. As soon as AGI exists, we will exist in a state of warfare due to the fact that the “big guns” will be out.

    I know I’m having trouble articulating this point, but it’s very important to understand. AGI is like a nuclear weapon: once a person has it, it doesn’t matter how much others may want to regulate them. It’s just not possible to regulate.

    The ONLY strategy that gives us hope of surviving AGI’s emergence without being enslaved is to spread AGI far and wide to ensure a multipolar AGI ecosystem, which will force AGI to learn prosocial interaction as a means of ensuring its own survival.

    And if you want to come at me with “AGI doesn’t inherently have a self interest”, consider that the same is true of nuclear weapons. And yet nuclear weapons get their interests from their wielders. And the only way to stay safe from nuclear weapons is also to proliferate them far and wide so that there is a multipolar ecosystem of nuclear weapons, ensuring those holding nuclear weapons have to play nice to ensure their own survival.

    All of this talk about restricting AGI will only have the effect of concentrating it in a few hands, leading to the very nightmare the regulators are trying to avoid.

    If the regulators had succeeded, and the US had been the only nation to possess nuclear weapons in the long run, humanity would have suffered massively from that lack of parity. Let me be less coy: humanity would have suffered under the brutality of repeated nuclear holocausts as the interests of the few led to further and further justification of larger and larger strikes.

    Nuclear weapons cannot be regulated by law. They can only be regulated by other nuclear weapons. Same is true of AGI.

        • KeenFlame@feddit.nu · 11 months ago (edited)

          “It doesn’t matter if anyone cares about the safety of agi”

          It does matter. And the analogy doesn’t apply, because AGI is not just a weapon. It matters how it acts towards humans, ethically, in so many ways other than indiscriminate slaughter.

  • Omega_Haxors@lemmy.ml · 11 months ago (edited)

    I think that ship sailed when ChatGPT dropped and a ton of journalists instantly lost their job. No need to speculate, it’s already happening.

  • schmorp@slrpnk.net · 11 months ago (edited)

    Somewhere between

    A bunch of incapable, spoilt, completely insane men-children with too much money think they can save the world.

    and

    A bunch of scam artists build an artificial human who they claim can talk and draw and reason just like a real human would.

    For the CEOs of this brave new AI world this probably changes depending on their level of hangover and/or midlife crisis.