![](https://awful.systems/pictrs/image/4575d9cc-e543-4949-a170-b3edae95f72d.png)
Only Bayes Can Judge Me
This brings to mind my favourite podcast about black conspiracy theories: My Momma Told Me. They discuss Yakub and Oprah in equal measure.
I can imagine a comedian using an LLM to check if a joke or punchline has been done before, but that would require the LLM to actually work and give accurate information. Also if you are a comedian using an LLM, you probably don’t actually care about whether or not you are plagiarising someone, so I guess this is all moot.
Best case, this somehow causes the CIA to implode and the west to collapse along with it. Worst case, I’d have to give AI companies credit for providing the tools to said implosion. True worst case… I mean we are already there, i.e. the CIA exists and is operational.
It’s less the amount of smoke and more the kind of smoke. It’s the magic smoke that comes out of fried electronics.
Here’s a 1-minute long, AI generated* summary video, if you didn’t want to watch 27 minutes worth:
https://youtu.be/t-7mQhSZRgM?si=f8YR2CHKhVtpmj8D
*Carl is an AI
it’s just a little sad that we, humanity, will never succeed in our summoning ritual to pull you into our plane of existence. I mean, we won’t before boiling away our oceans and igniting the atmosphere. Of course, we should be trying harder! The ice caps aren’t even melted yet!
The crystals to anti-vaxx to full-blown fascism pipeline is real and I have more than a few cousins deep in that KKK hole. Hate to be a science brodude, but all the spiritualism stuff can be a gullibility magnet for those prone to self radicalisation (the bad type)
that’s a big oof for me dawg
OH got it. Thanks dawg. The automated hypothetical question bot is right once every 5 years
Less recycling more reusing bottles for molotovs please
(I’m still in the moment, please explain?)
wait. You’re telling me a poly nest based on FF7 fandom exist(s/ed)? And I wasn’t part of it?!?!?!
Well it’s from China so it must be evil and its publicity must be minimised /s
I rewrote the ad so they can lean into their marketing strategy.
Hard book have hard word and make head hurt, AI make book easy! More book read for you. No hard word. This good idea!
I have decided to fossick in this particular guano mine. Let’s see here… “10 Cruxes of Artificial Sentience.” Hmm, could this be 10 necessary criteria that must be satisfied for something “Artificial” to have “Sentience?” Let’s find out!
I have thought a decent amount about the hard problem of consciousness
Wow! And I’m sure we’re about to hear about how this one has solved it.
Ok let’s gloss over these ten cruxes… hmm. Ok so they aren’t criteria for determining sentience, just ten concerns this guy has come up with in the event that AI achieves sentience. Crux-ness indeterminate, but unlikely to be cruxes, based on my bias that EA people don’t word good.
- If a focus on artificial welfare detracts from alignment enough … [it would be] highly net negative… this [could open] up an avenue for slowing down AI
Ah yes, the urge to align AI vs. the urge to appease our AI overlords. We’ve all been there, buddy.
- Artificial welfare could be the most important cause and may be something like animal welfare multiplied by longtermism
I’ve always thought that if you take the tensor product of PETA and the entire transcript of the sequences, you get EA.
- most or… all future minds may be artificial… If they are not sentient this would be a catastrophe
Lol no. We wouldn’t need to care.
- If they are sentient and … suffering … this would be a suffering catastrophe
lol
- If they are sentient and prioritize their own happiness and wellbeing this could actually quite good
also lol
maybe TBC, there’s 8 more “cruxes”
He has a smart oven with AI and wants to feed it data?
in the brained stem. straight up “shorkening it”. and by “it”, haha, well. let’s just say. My liffspan
Don’t know what we need Gates for. Surely an AI should be able to spout this bullshit?
Ugh, so many people are working the “AI will solve X problem” mill. I don’t need nor want AI to be there increasing output.
The very same!