• kromem@lemmy.world
      9 months ago

      Because that’s the version that gets posted and gets clicked on.

      A dry technical writeup looking at the name of the project and how it indicates this is a different approach more in line with DeepMind’s work and what that means in the context of doing high school level math is going to be interesting to only a handful of people.

      But an article that’s contentious and gets hundreds of comments about how “AI is BS” to “AI is dangerous” all arguing with each other drives engagement.

    • Flying Squid@lemmy.world
      9 months ago

      You mean OpenAI didn’t just create a superintelligent artificial brain that will surpass all human ability and knowledge and make our species obsolete?

      • kescusay@lemmy.world
        9 months ago

        The funny thing is, last year when ChatGPT was released, people freaked out about the same thing.

        Some of it was downright gleeful. Buncha people told me my job (I’m a software developer) was on the chopping block, because ChatGPT could do it all.

        Turns out, not so much.

        I swear, I think some people really want to see software developers lose their jobs, because they hate what they don’t understand, and they don’t understand what we do.

        • enkers@sh.itjust.works
          9 months ago

          As a software developer, I do want to see software developers lose their jobs to AI. This shouldn’t be surprising, as the purpose of a lot of software development is to put other people out of a job via automation, and that’s fundamentally a good thing. The alternative is like wanting a return to preindustrial society. Automation generally raises quality of life.

          The real problem is that we still haven’t figured out how to distribute the benefits of society’s automation efforts equitably so that they raise quality of life for everyone.

          • nicetriangle@kbin.social
            9 months ago

            Yeah, that would be all well and good if it meant we were on track for some post-work egalitarian utopia, but you and I both know that's not at all where this is heading.

            • FaceDeer@kbin.social
              9 months ago

              Unfortunately based on what I know of history it seems likely that humanity won’t ever be on track to build a post-work egalitarian utopia until we’ve got no other option left. So I support going ahead with this tech because that seems like a good way to force the issue. The transition period will be rough, but better than stagnation IMO.

    • boatswain@infosec.pub
      9 months ago

      self replicating the propaganda?

      You can’t self-replicate anything other than yourself. You replicate things; we use “self-replicating” because it’s shorthand for “thing that replicates itself.”

  • Tattorack@lemmy.world
    9 months ago

    Alright, so the article really doesn’t prove anything, just says OpenAI claims something and then fills it with words.

    Let’s be clear here: we don’t even have an AGI. That is to say, artificial general intelligence, a man-made intelligence that is at least as capable and general-purpose as human intelligence.

    That would be an intelligence that is self-aware and can actually think and understand. Data from Star Trek would be an AGI.

    THESE motherfuckers are now claiming they made a breakthrough toward creating an SI, a superintelligence: an artificial, man-made intelligence that not only has the self-awareness and understanding of an AGI, but is vastly more intelligent than a human, and likely has awareness that surpasses human awareness.

    I think not.

  • toothbrush@lemmy.blahaj.zone
    9 months ago

    Just BS. They are trying to come up with an explanation for why Altman was fired that is not “we caught him doing lots of illegal stuff.”

  • RiikkaTheIcePrincess@kbin.social
    9 months ago

    Why do I keep looking at these threads? The way people talk about this stuff on all sides is so asinine. Nearly every good point is accompanied by missing a big one, or just ricocheting off the good one, flying off into space, and hitting a fully automated luxury gay space communist. Hopes, dreams, assumptions, and ignorance all just headbutting each other and getting nowhere.

    Oh yeah, I wanted to know what “superintelligence” was and whether I should care. Welp.

    • Dadifer@lemmy.worldOP
      9 months ago

      I think the takeaway is that they’re trying to create an LLM that can answer questions it wasn’t trained on.

  • ZILtoid1991@kbin.social
    9 months ago

    The “superintelligence” in question: the same old tech, but with a larger context window, which will make it hallucinate a bit less often.

    • kromem@lemmy.world
      9 months ago

      Not really.

      If its name is Q*, then it’s likely a combination of Q-learning and A* search, which suggests this is an approach similar to DeepMind’s AlphaZero rather than a transformer-based LLM.

      In that context, getting it to be able to solve high school level math questions is pretty nuts.

      Though the details matter and right now all the articles discussing it are missing those, so we’ll have to wait and see.
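      To make the speculation above concrete: nothing about Q* is confirmed, but the two named ingredients are standard. Q-learning keeps a table of value estimates updated from rewards, and A* uses a cost-plus-heuristic priority to guide search. A minimal sketch of each (entirely hypothetical as far as OpenAI's system goes):

      ```python
      import heapq

      # Tabular Q-learning update:
      # Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))
      def q_update(q, state, action, reward, next_state, actions, alpha=0.5, gamma=0.9):
          best_next = max(q.get((next_state, a), 0.0) for a in actions)
          old = q.get((state, action), 0.0)
          q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

      # A* search over a weighted graph; h must be an admissible heuristic
      # (it never overestimates the remaining cost to the goal).
      def a_star(graph, h, start, goal):
          frontier = [(h(start), 0, start, [start])]  # (f = g + h, g, node, path)
          best_g = {}
          while frontier:
              f, g, node, path = heapq.heappop(frontier)
              if node == goal:
                  return path, g
              if best_g.get(node, float("inf")) <= g:
                  continue  # already reached this node more cheaply
              best_g[node] = g
              for nbr, cost in graph.get(node, []):
                  heapq.heappush(frontier, (g + cost + h(nbr), g + cost, nbr, path + [nbr]))
          return None, float("inf")
      ```

      For example, on the graph `{"A": [("B", 1), ("C", 4)], "B": [("C", 1)], "C": [("D", 1)]}` with a zero heuristic, `a_star` finds the path A→B→C→D at cost 3. How (or whether) these two pieces are actually glued together in Q* is exactly the detail the articles are missing.
      
      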

  • Melt@lemm.ee
    9 months ago

    Hope it replaces the most expensive job position: CEO

  • reflex@kbin.social
    9 months ago

    Yawn.

    Let me know when we get a real Terminator or Matrix or Space Odyssey situation.

  • Amir@lemmy.ml
    9 months ago

    The whole organizational structure, and how it functions, just isn’t so smart after all. Has the management team considered the Lean methodology for their business objectives?

    • FaceDeer@kbin.social
      9 months ago

      The problem that precipitated all this is that they don’t have business objectives. They have a “mission.” The board of directors of OpenAI aren’t beholden to shareholders, and though the staff mocked their statement that allowing the company to be destroyed “would be consistent with the mission,” it’s actually true.

  • ShaunaTheDead@kbin.social
    9 months ago

    Who the hell would have guessed that we’d have to deal with not one but two potentially civilization ending threats in our lifetimes? I want off this crazy ride please!