• ReallyActuallyFrankenstein@lemmynsfw.com · 9 months ago

    Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.

    That’s wild. Given the abruptness and his profile, I was thinking it must be an improper conduct investigation. But either way, I hope we get more details.

  • sculd@beehaw.org · 9 months ago

    Lol. They finally realized that Altman brings more bad press because of his association with crypto and his weird views on things.

    I doubt the successor will be a good person, but hopefully they’ll be a less creepy one.

  • Scrubbles@poptalk.scrubbles.tech · 9 months ago

    Damn, time for wild speculation as to why. I wonder who the bad guy is here. Did Sam want to focus on profit? Or is it the board that does, and they’re hiding behind that statement? I have no idea.

    Still, this is a scene straight out of Silicon Valley: the founder being voted out of their own company.

    • HeartyBeast@kbin.social · 9 months ago

      I think it’s more likely that Sam did not want to focus on profit as much as some investors did. I presume Microsoft has seats on the board (I haven’t checked).

      Edit - I seem to be wrong: “OpenAI’s board of directors consists of OpenAI chief scientist Ilya Sutskever, independent directors Quora CEO Adam D’Angelo, technology entrepreneur Tasha McCauley, and Georgetown Center for Security and Emerging Technology’s Helen Toner.” Sam Altman and Greg Brockman (President and Co-founder) both left the board today.

      • BolexForSoup@kbin.social · 9 months ago

        Their board is independent and as such does not have equity in the company - Microsoft is not part of this. It’s a very different dynamic.

        Based on the language, if I HAD to guess, I’d say he straight up lied to the board or acted on something without them when they were supposed to be involved. Serious charter-violating stuff.

        • t3rmit3@beehaw.org · 9 months ago

          My guess is he hid a security breach from the SEC and the board. That makes the most sense as to what would prevent the board from being able to execute on their legal duties.

  • sabreW4K3@lemmy.tf · 9 months ago

    They make it sound like all the for-profit stuff was Sam Altman and they just wanna make cool tech.

    • intensely_human@lemm.ee · 9 months ago

      OpenAI’s original mission was extremely serious: to ensure the wide proliferation of AI to ensure a multipolar ecosystem instead of a monopolar one, to force AI to learn to play nice via parity with other AIs.

      I was amazed that an organization existed which recognized this hard-to-swallow but ultra-important fact.

  • cwagner@beehaw.org · 9 months ago

    Crazy, the news almost took Hacker News down when it broke. MS was also taken by surprise, and today three lead researchers resigned. Right now there’s only speculation; no one really knows what’s going on.

          • ulkesh@beehaw.org · 9 months ago

            Seriously. Their neocon attitude toward regulation is leaking. If anything should be regulated, it’s AI advancements. The public good is still a thing, despite what conservatives want to dismiss.

          • bioemerl@kbin.social · 9 months ago

            Do you want AI to exclusively be in the hands of big companies and the government?

            Do you want the future of technology locked behind pay walls and censored so that you can’t use it to do anything they don’t want you to do?

            If you think AI regulation comes in the form of making sure big companies can’t do bad things to you, you haven’t been paying attention.

  • bedrooms@kbin.social · 9 months ago

    As a happy subscriber, the last thing I want is interference from the board.

    Monopoly established, the max profit phase about to start…

    • YeeHaw@beehaw.org · 9 months ago

      Except that’s definitely not the case, since unlike crypto shit, the latest wave of AI tech is already useful and found lots of applications. It may never reach AGI level, but that doesn’t mean it’s not immensely useful.

      • shiveyarbles@beehaw.org · 9 months ago

        Yeah, I guess. I just see a mishmash of consumed data presented where you have to tweak parameters and so forth. Gobbledygook nonsense presented as fact, three arms and six fingers in generated art, etc. It just seems like shit to me.

  • t3rmit3@beehaw.org · 9 months ago

    I’m gonna guess this was a security compromise that he failed to disclose to the board, and failed to report to the SEC.

    It makes the most sense given the board’s statement that his “repeated lack of candor” prevented the board from executing on its duties.

  • shiveyarbles@beehaw.org · 9 months ago

    I mean, I get it, but you can Google for answers as well… check Stack Overflow, etc., and get answers from true industry masters. At the end of the day, it seems like there’s not much added value… especially if you have to vet the answers for reliability.