• hendrik@palaver.p3x.de

    So it managed to type in ollama run llama3.1:70b into a Linux terminal, or what’s this about?

    • Lugh@futurology.todayOPM

      I think the issue is that although it’s early days, if nobody builds the guardrails in at this point in time, will they ever? Do the people in charge of building AI even care? Their leadership seems much more interested in deregulation and rolling back safety oversight.

      • hendrik@palaver.p3x.de

        They do. The field is called “AI safety”, and it’s been a topic of research for quite some time now. If you like YouTube, I can recommend Robert Miles and Computerphile; they have some videos about the science, the philosophy, and the groundwork.

        And since we’re talking about Llama… Meta released a whole framework to safeguard the input and output of language models, and to control and limit them. OpenAI does something similar when you try to talk about intimacy or other forbidden topics.

        I just don’t think they care about the robot apocalypse at this point. It’s still science fiction. And the large tech companies are mainly focused on profit. They always say they factor in safety and do AI responsibly, but I don’t think they care much, as long as they’re making money.

        If you ask me, a company like OpenAI or Meta could just as well turn into Skynet. Or simply turn the internet into a post-factual world of misinformation and AI spam… They just like doing business more than bothering with ethics. But that’s true of all big tech.

        This might change once the AI hype is over. But that’s just my speculation.