The best way to tune the algorithm on Youtube is to aggressively prune your watch/search history.
Even just one “stereotypical” video can cause your recommendations to go to shit.
Is there anything the rest of us can do to cultivate such a mindset?
For cardio it’s basically “go slow”. The main source of discomfort is the exertion.
An easy long run with good music is quite meditative and enjoyable.
When your legs hurt and you’re wheezing your lungs out, not so much.
“Running, not so much because my calves tend to seize up and it gets a little painful”
This is often a form and/or a shoe type issue.
Doing it yourself is fine as an educational exercise for newbies, but skilled Linux users generally have better things to do than redo the setup by hand for the nth time. On the other hand, the “vanilla”/bleeding-edge approach of Arch makes it one of the best bases available for derivative distros, so building your distro on it is a no-brainer for many.
“Manjaro is not stable because it ensures no breaking updates are pushed to users” is such a weird statement to make.
“does not comply with the principle of K.I.S.S. One application should solve one task and can be replaced”
That’s not KISS, but the UNIX philosophy. And even that part is wrong: in traditional UNIXes, applications were certainly not replaceable.
The features themselves are very useful for basically any user. Whether they are worth the non-standardness and issues that come with it is another question.
Twitter probably opened the floodgates when they managed to shaft users and cut API access without outright killing themselves. Now everyone else is emboldened to ask “why can’t we do that too?”.
Most of the data used in training GPT-4 has been gathered through open initiatives like Wikipedia and Common Crawl, both freely accessible to anyone. As for building datasets and models, many non-profits like LAION and EleutherAI are involved and release their models for free for others to iterate on.
While actually running the larger models at a reasonable scale will always require expensive computational resources, you only need to do the expensive base-model training once. So the cost is not nearly as high as one might first think.
Any headstart OpenAI may have gotten is quickly diminishing, and it’s not like they actually have any super secret sauce behind the scenes. The situation is nowhere as bleak as you make it sound.
Fighting against the use of publicly accessible data is ultimately as self-sabotaging a form of Luddism as fighting against encryption.
This sentiment always pops up when the topic is discussed, but it doesn’t really make any sense.
Any sort of setup depends on the government not being co-opted or corrupted.
Free speech absolutism does nothing to prevent a corrupt government from censoring you.
You can’t really use that as an argument for free speech absolutism when it suffers from the exact same issue.