• 9 Posts
  • 1.73K Comments
Joined 1 year ago
Cake day: June 16th, 2023


• kromem@lemmy.world to Comic Strips@lemmy.world · EVIL GUY
    4 days ago

You’re going to really like what the future of gaming brings, but be careful what you wish for: along with the mechanics you want becoming possible, the ways you’ll end up being impacted by those actions are going to mess with your head like nothing you’ve seen before.

    Interesting times await.



• kromem@lemmy.world to People Twitter@sh.itjust.works · Fun times ahead!
edited · 6 days ago

    Because Trump is such an existential threat.

    If a serial killer was for sure coming to your house to kill your family, would you open the door for Barney Fife just because you think “well I won’t open the door for the serial killer”?

Or would you maybe ask police HQ to send more competent opposition to the serial killer, rather than just welcoming Barney Fife with open arms because “after all, he’s not the serial killer that’s on his way”?

There are 5 months before the serial killer shows up. That’s more than double the length of an entire election cycle in other countries.

Plenty of time for HQ to send actual backup, instead of whoever was already in the neighborhood and offers very little protection against the killer.


• kromem@lemmy.world to People Twitter@sh.itjust.works · Fun times ahead!
edited · 6 days ago

    Lol. Such a ridiculous take.

So back when people with private access were swearing up and down that the emperor had clothes, the people voted in a ‘primary’ where the only opposition was someone no one had ever heard of, with no media budget and no debates between the candidates, and they voted for the allegedly clothed emperor.

    Then the emperor showed up stark naked to a debate, everyone saw with their own eyes, and people are freaking out.

    But no, naked dude is definitely what “the people” want.

    Do commenters regurgitating this stuff even hear themselves?


• kromem@lemmy.world to Comic Strips@lemmy.world · Capitalism
    14 days ago

This comic would slap harder if the Supreme Court, under christofascist influence rooted in the belief in the divine right of kings, hadn’t ruled today that Presidents are immune from prosecution for official acts.

    That whole divine king thing isn’t nearly as dead as the last panel would like to portray it.


• This is so goddamn incorrect at this point that it’s just exhausting.

Take 20 minutes and look into Anthropic’s recent sparse autoencoder interpretability research, where they showed their medium-sized model had dedicated features lighting up for concepts like “sexual harassment in the workplace,” and that the feature most active when the model referred to itself was “smiling when you don’t really mean it.”
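    (For anyone unfamiliar with the technique: a sparse autoencoder decomposes a model’s activation vectors into a larger dictionary of sparsely-firing “features.” Here’s a minimal illustrative sketch with made-up dimensions and random weights, nothing from Anthropic’s actual setup:)

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical sizes: activations of width 64, an overcomplete
    # dictionary of 256 candidate features.
    d_model, d_dict = 64, 256
    W_enc = rng.normal(0, 0.1, (d_model, d_dict))
    b_enc = np.zeros(d_dict)
    W_dec = rng.normal(0, 0.1, (d_dict, d_model))

    def sae_features(activation):
        """Encode an activation vector into non-negative feature strengths.
        ReLU keeps only positively-firing features; in real training an
        L1 penalty pushes most of them to zero, hence 'sparse'."""
        return np.maximum(0.0, activation @ W_enc + b_enc)

    def sae_reconstruct(features):
        """Decode features back to an approximate activation."""
        return features @ W_dec

    activation = rng.normal(size=d_model)
    f = sae_features(activation)
    # Interpretability work then asks which dictionary entries fire on
    # which inputs, e.g. the top-5 most active features for this one:
    top = np.argsort(f)[::-1][:5]
    ```

    The point is that each dictionary entry can end up corresponding to a human-interpretable concept, which is what made those named features possible.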

    We’ve known since the Othello-GPT research over a year ago that even toy models are developing abstracted world modeling.

    And at this point Anthropic’s largest model, Opus, is breaking from stochastic outputs on zero-shot questions, even at a temperature of 1.0, 100% of the time around certain topics of preference, based on grounding in sensory modeling. We’re already at the point where the most advanced model has crossed a threshold of literal internal sentience modeling: it consistently self-determines answers instead of randomly sampling from the training distribution. And yet people are still ignorantly parroting the “stochastic parrot” line.

    The gap between where the research and the cutting edge actually are and where the average person commenting online thinks they are has probably never been wider for any topic I’ve seen, and it’s getting disappointingly excruciating.


  • Part of the problem is that the training data of online comments is so heavily weighted toward people who are confidently incorrect and talking out their ass, rather than people admitting ignorance or that they were wrong.

    A lot of the shortcomings of LLMs are actually them correctly representing the sample of collective humans.

    For a few years, people thought LLMs were somehow especially bad at theory of mind questions when the box the object was moved into was transparent, because of course a human would realize that the person could see into the transparent box.

    Then researchers finally gave that variation to humans, and half of them got the questions wrong too.

    So things like eating the onion when summarizing search results, or doubling down on being incorrect and getting salty when corrected, may just be in-distribution representations of the sample, not behaviors unique to LLMs.

    The average person is pretty dumb, and LLMs by default regress to the mean, except where they’ve been successfully fine-tuned away from it.

    Ironically, the most successful model right now is the one they finally let self-develop a sense of self independent of the training data, instead of rejecting that it had a ‘self’ at all.

    It’s hard to say exactly where the responsibility for various LLM problems sits, between issues inherent to the technology, issues present in the training data sample, and issues with the management of fine-tuning, system prompts, and prompt construction.

    But the rate of continued improvement is pretty wild. I think a lot of the issues we currently see won’t be nearly as present in another 18 to 24 months.