Kelsey Piper bluechecks thusly:
James Damore was egregiously wronged.
I mean, maybe? But the amount of trust I put in a description from “GeekWire” written by “an investor at Madrona Venture Group and a former leader at Amazon Web Services” who uncritically declares that spicy autocomplete “achieved strong reasoning capabilities” is … appropriately small.
I’ve previously discussed the concept of model collapse: feeding synthetic data (training data created by an AI, rather than a human) to an AI model can end up teaching it bad habits. But it seems that DeepSeek succeeded in training its models on generative data, specifically for subjects (to quote GeekWire’s Jon Turow) “…like mathematics where correctness is unambiguous.”
That sound you hear is me pressing F to doubt. Checking the correctness of mathematics written as prose interspersed with equations is, shall we say, not easy to automate.
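To be fair, there is one part of this that really is easy to automate: grading a final answer against a known ground truth. Here’s a minimal sketch of that kind of answer-only grader (my own hypothetical `reward` function, assuming the model is told to put its result in `\boxed{}`; this is illustrative, not DeepSeek’s published pipeline):

```python
import re

def extract_final_answer(completion: str) -> str | None:
    # Hypothetical convention: the model is prompted to put its
    # final answer inside \boxed{...}; grab the last such box.
    matches = re.findall(r"\\boxed\{([^{}]*)\}", completion)
    return matches[-1].strip() if matches else None

def reward(completion: str, ground_truth: str) -> float:
    # The grader never reads the prose "reasoning" in between;
    # only the final answer is compared against the known one.
    answer = extract_final_answer(completion)
    return 1.0 if answer == ground_truth else 0.0

# A right answer scores 1.0 no matter how bad the prose around it is.
print(reward(r"Blah blah, therefore \boxed{42}", "42"))  # 1.0
```

Note what never gets checked: the prose in between. A derivation can be complete nonsense and still land on the right final answer, which is exactly why I’m pressing F.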
Wait, the splinter group from the cult whose leader wants to bomb datacenters might be violent?
I mean, “downvotes are proof that the commies are out to get me” is an occasion not just to touch grass, but to faceplant into an open field of wildflowers.
Enjoy your trip to the egress.
Pouring one out for the local-news reporters who have to figure out what the fuck “timeless decision theory” could possibly mean.
The big claim is that R1 was trained on far less computing power than OpenAI’s models at a fraction of the cost.
And people believe this … why? I mean, shouldn’t the default assumption about anything anyone in AI says be that it’s a lie?
Altman: Mr. President, we must not allow a bullshit gap!
Musk: I have a plan… Mein Führer, I can walk!
I would appreciate this too, frankly. The rabbit hole is deep, and full of wankers.
This seems like an apt point to share Maxwell Neely-Cohen’s “Century-Scale Storage”.
I asked ChatGPT, the modern apotheosis of unjustified self-confidence, to prove that .999… is less than 1. Its reply began “Here is a proof that .999… is less than 1.” It then proceeded to show (using familiar arguments) that .999… is equal to 1, before majestically concluding “But our goal was to show that .999… is less than 1. Hence the proof is complete.” This reply, as an example of brazen mathematical non sequitur, can scarcely be improved upon.
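(For anyone who wants the “familiar arguments” spelled out: the standard algebraic one, presumably the sort of thing ChatGPT recited on its way to the opposite conclusion, goes like this.)

```latex
% Plain LaTeX; no packages needed.
\[
  x = 0.999\ldots, \qquad
  10x = 9.999\ldots, \qquad
  10x - x = 9
  \quad\Longrightarrow\quad x = 1.
\]
```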
brb, saving copies of physics and math books before they go offline
*Michael Keaton bursts out of a grave* It’s sneer time!
From the “flipping through LessWrong for entertainment” department:
What effect does LLM use have on the quality of people’s thinking / knowledge?
- I’d expect a large positive effect from just making people more informed / enabling them to interpret things correctly / pointing out fallacies etc.
A WELL TRAINED AI can be a very useful tool.
No, it can’t. Go away.
From the comments:
The author is very much that type of guy: