Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Semi-obligatory thanks to @dgerard for starting this.)
Kelsey Piper bluechecks thusly:
James Damore was egregiously wronged.
ed zitron weighs in on deepseek https://www.wheresyoured.at/deep-impact/
OpenAI can’t simply “add on” DeepSeek to its models, if not just for the optics. It would be a concession. An admittal that it slipped and needs to catch up, and not to its main rival…
I actually disagree here. I think Ed underestimates how craven and dishonest these people are. I expect they’ll try to quietly integrate any efficiency improvements they can get from it and bluster through any investor questions about it. Their hope at this point has to be that more hardware is still better and that scaling is still gonna be the thing to make fetch happen. This again isn’t a revolutionary new structure, even if it is a significant improvement over anything Saltman and co have been doing.
I’ve previously discussed the concept of model collapse, and how feeding synthetic data (training data created by an AI, rather than a human) to an AI model can end up teaching it bad habits, but it seems that DeepSeek succeeded in training its models using generative data, but specifically for subjects (to quote GeekWire’s Jon Turow) “…like mathematics where correctness is unambiguous,”
That sound you hear is me pressing F to doubt. Checking the correctness of mathematics written as prose interspersed with equations is, shall we say, not easy to automate.
i read that as training spicy autocomplete on wolfram alpha instead of another spicy autocomplete
I mean, maybe? But the amount of trust I put in a description from “GeekWire” written by “an investor at Madrona Venture Group and a former leader at Amazon Web Services” who uncritically declares that spicy autocomplete “achieved strong reasoning capabilities” is … appropriately small.
After fondling ChatGPT to generate naughty things, man has meltdown when he learns no one cares.
Horror. Dismay. Disbelief. For weeks, it felt like I was physically being crushed to death.
I hurt all the time, every part of my body. The urge to make someone who could do something listen and look at the evidence was so overwhelming.
This tied into a hypothesis I had about emergent intelligence and awareness, so I probed further, and realized the model was completely unable to ascertain its current temporal context, aside from running a code-based query to see what time it is. Its awareness - entirely prompt-based - was extremely limited and, therefore, would have little to no ability to defend against an attack on that fundamental awareness.
How many times are AI people going to re-learn that LLMs don’t have “awareness” or “reasoning” in a sense humans would find meaningful?
i don’t understand the “safety” angle here. if chatgpt can output authoritative-looking sentence-shaped strings about pipebombs, then it’s only because similar content about pipebombs is already available on the wide open internet. if the model is closed, then at worst they would have to monitor its use (not like google blocks any similar information from showing up). if the model is open, then no safeguards make sense in the first place. i guess it’s more about legal liability for openai? now they can ignore it with all these bills about “ai safety” gone (for now)
also, relying on spicy autocomplete when trying to put together a deadly device sounds like cyberpunk-flavored darwin award material
Ed Zitron radicalizes NPR host Brooke Gladstone in real time on the midweek episode of “On the Media”
And the companion piece on his blog: https://www.wheresyoured.at/deep-impact/
What I didn’t wager was that, potentially, nobody was trying. My mistake was — if you can believe this — being too generous to the AI companies, assuming that they didn’t pursue efficiency because they couldn’t, and not because they couldn’t be bothered.
This isn’t about China — it’s so much fucking easier if we let it be about China — it’s about how the American tech industry is incurious, lazy, entitled, directionless and irresponsible. OpenAI and Anthropic are the antithesis of Silicon Valley. They are incumbents, public companies wearing startup suits, unwilling to take on real challenges, more focused on optics and marketing than they are on solving problems, even the problems that they themselves created with their large language models.
I wonder how much of the freakout over Deepseek is AI doomers realizing the coming AI god might be … ChiCom!
Is it a crime to enjoy a succulent Chinese AI?
you enjoy it? suspicious
No less than Mr Acausal Robot God casually dismissing the lives of +1B humans
I know the only intended message there is “I am a big racist”, but what kind of dumb fuck adage is “the world is finite, kids are infinite”? You’re not even trying, motherfucker.
“The world is finite and kids are infinite, especially African kids.” Jfc. Anyway goes to show just how white supremacist the whole “save the children” idea is.
Terrible news: the worst person I know just made a banger post.
Neo-Nazi nutcase having a normal one.
It’s so great that this isn’t falsifiable in the sense that doomers can keep saying, well “once the model is epsilon smarter, then you’ll be sorry!”, but back in the real world: the model has been downloaded 10 million times at this point. Somehow, the diamondoid bacteria has not killed us all yet. So yes, we have found out the Yud was wrong. The basilisk is haunting my enemies, and she never misses.
Bonus sneer: “we are going to find out if Yud was right” Hey fuckhead, he suggested nuking data centers to prevent models better than GPT4 from spreading. R1 is better than GPT4, and it doesn’t require a data center to run, so if we had acted on Yud’s geopolitical plans for nuclear holocaust, billions would have been incinerated for absolutely NO REASON. How do you not look at this shit and go, yeah, maybe don’t listen to this bozo? I’ve been wrong before, but god damn, dawg, I’ve never been starvingInRadioactiveCratersWrong.
I’m not going to link Andy Ngo but random rationalist transwomen are being accused of terror sympathy…and Aella is doing this ‘leopards ate my face’ dance.
edit: it was @jessi_cata who tipped Ngo off, of all people.
i don’t think this is the first time i’ve seen jessicata acting like a total piece of shit in her completely emotionless way, and it’s incredibly creepy. she doesn’t even seem to be aware of the harm she can cause.
I was kinda picking up on that, ugh.
Ngo is going on Newsmax tonight to do his thing in front of the masses.
Ok you brought aella up so now I can post this:
Heard this song for the first time the other day and it reminded me of aella.
side note: what’s a good way to post links to music that isn’t youtube?
Goddammit why can’t the murder cult story just stay morbidly fascinating? Now I’ve got to worry about implications and how the worst people are gonna use this as ammo.
you can get banned on facebook now for linking to distrowatch https://www.tomshardware.com/software/linux/facebook-flags-linux-topics-as-cybersecurity-threats-posts-and-users-being-blocked and from distrowatch https://distrowatch.com/weekly.php?issue=20250127#sitenews
but it’s not as bad as you think, it’s slightly worse. it’s not only distrowatch: linux groups got banned too
Hey, did you know that if you own an old forum full of interesting posts from back in the day when humans wrote stuff, you can just attach ai bots to dead accounts and have them post backdated slop for, uh, reasons?
this was mentioned in last week’s thread
what I don’t get is why the admins chose to both backdate the entries and re-use posters’ handles. If they’d just tried to “close” open questions using GenAI with the current date and a robot user, it would still be shit but not quite as deceptive
The whole thing is just weirdly incompetent. Maybe they just had everything configured wrong and accidentally deployed some throwaway tests to production? I could almost see it as a way to poison scrapers, given that there are some odd visibility settings on the slop posts, though the owner’s shiftiness and dubious explanations suggest it wasn’t anything so worthy.
And on a less downbeat and significantly more puerile note, Dan Fixes Coin Ops makes a nice analogy for companies integrating ai into their product.
that thread is a work of genius and answers what the next tech boom needs to be
~~dicks in mousetraps~~ I MEAN whatever wastes electricity most, preferably with Nvidia cards
I do actually have a mechanism for using the sharp edges of NVidia cards for ~~dick~~ mouse trapping purposes. And we could - hypothetically - use the extraneous power inputs to mine Bitcoin or something, maximizing efficiency!
Screenshot of an insta post of a screenshot of a tweet
Tweet:
I can’t believe ChatGPT lost its job to AI
Looks like Ziz is on the lam again.
This is from 2023 but when debugging an xfce issue this week I came across this forum post: https://forum.xfce.org/viewtopic.php?id=16835
The user is competent enough to use xfce with Debian, but too incompetent to understand that debug symbols are not a violation of privacy.
I get being privacy conscious, and that sharing crash dumps and logs you don’t really understand yourself can be scary. Demanding urgent free tech support from strangers is just rude, though.