• sunbeam60@lemmy.one
    4 months ago

    The article makes the valid argument that LLMs simply predict the next token based on their training data and the query.
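    To make the "predict the next token" idea concrete, here is a toy sketch. This is a bigram counter, nothing remotely like a real transformer; it only illustrates the bare concept of choosing the most likely continuation from statistics gathered during training:

    ```python
    from collections import Counter, defaultdict

    # Toy "training data": a tiny token sequence.
    corpus = "the cat sat on the mat the cat ran".split()

    # Count which token follows each token in the training data.
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict_next(token):
        """Return the most frequent next token seen after `token`."""
        counts = following[token]
        return counts.most_common(1)[0][0] if counts else None

    print(predict_next("the"))  # "cat" follows "the" more often than "mat"
    ```

    Real models replace the count table with a learned neural network and operate over long contexts, but the interface is the same: context in, probability distribution over next tokens out.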

    But is that actually true of the latest models from OpenAI, Anthropic, etc.?

    And even if it is true, what solid proof do we have that humans aren’t doing the same? I’ve met plenty of people who can waffle for hours without appearing to do any reasoning.