
  • My son has doubled in size every month for the last few months. At this rate he’ll be fifty feet tall by the time he’s seven years old.

    Yeah, it’s a stupid claim to make on the face of it. It also ignores practical realities. The first of those is training data, and the second is context windows. The idea that AI will successfully write a novel or code a large-scale piece of software like a video game would require it to hold that entire thing in its context window at once. Context window size is strongly tied to memory and compute costs, so scaling windows to the point where they’re big enough for an entire novel may never be feasible (at least from a cost/benefit perspective).
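    For a sense of scale, here’s a minimal sketch of what long contexts cost in memory alone. The model dimensions are assumptions (roughly the shape of a 70B-class open model), not any specific product’s real numbers:

    ```python
    # Rough sketch: KV-cache memory vs. context length.
    # Every model dimension below is an assumption, not a vendor's real figure.

    def kv_cache_bytes(context_tokens: int,
                       n_layers: int = 80,         # assumed layer count
                       n_kv_heads: int = 8,        # assumed KV heads (grouped-query attention)
                       head_dim: int = 128,        # assumed per-head dimension
                       bytes_per_value: int = 2):  # fp16
        # Keys and values are both cached for every layer and every token.
        return 2 * n_layers * n_kv_heads * head_dim * bytes_per_value * context_tokens

    # Short chat, novel-length, and "hold a whole project" contexts.
    for tokens in (8_000, 128_000, 1_000_000):
        gib = kv_cache_bytes(tokens) / 2**30
        print(f"{tokens:>9,} tokens -> ~{gib:.1f} GiB of KV cache per request")
    ```

    And that’s just the cache for a single request; the attention computation itself grows quadratically with context length, which is why long windows get expensive fast.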

    I think there’s also the issue of how you define “success” for the purpose of a study like this. The article claims that AI may one day write a novel, but how do you define “successfully” writing a novel? Is the goal here that one day we’ll have a machine that can produce algorithmically mediocre works of art? What’s the value in that?


  • The key difference being that AI is a much, much more expensive product to deliver than anything else on the web. Even compared to streaming video content, AI is orders of magnitude more expensive to deliver.

    What this means is that providing AI on the model you’re describing is impossible. You simply cannot pack in enough advertising to make ChatGPT profitable. You can’t make enough from user data to be worth the operating costs.

    AI fundamentally does not work as a “free” product. Users need to be willing to pony up serious amounts of money for it. OpenAI have straight up said that even their most expensive subscriber tier operates at a loss.

    Maybe that would work, if you could sell it as a boutique product, something for only a very exclusive club of wealthy buyers. Only that model is also an immediate dead end, because the training costs to build a model are the same whether you make that model for 10 people or 10 billion, and those training costs are astronomical. To get any kind of return on investment these companies need to sell a very, very expensive product to a market that is far too narrow to support it.

    There’s no way to square this circle. Their bet was that AI would be so vital, so essential to every facet of our lives that everyone would be paying for it. They thought they had the new cellphone here: a $40/month subscription plan from almost every adult in the developed world. What they have instead is a product with zero path to profitability.
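    To make the mismatch concrete, a quick back-of-envelope. Every number here is an assumption picked for illustration, not a reported figure from OpenAI or anyone else:

    ```python
    # Back-of-envelope only: all figures below are assumptions for illustration.

    queries_per_user_per_day = 10        # assumed usage
    cost_per_query_usd = 0.01            # assumed inference cost per query
    ad_revenue_per_user_per_year = 20.0  # assumed ad revenue per user, roughly web-typical

    inference_cost_per_year = queries_per_user_per_day * cost_per_query_usd * 365

    print(f"Assumed inference cost per user per year:  ${inference_cost_per_year:.2f}")
    print(f"Assumed ad revenue per user per year:      ${ad_revenue_per_user_per_year:.2f}")
    print(f"Shortfall per user, before training costs: ${inference_cost_per_year - ad_revenue_per_user_per_year:.2f}")
    ```

    Even with generous assumptions the ad model is underwater before a single dollar of training spend is amortized, which is the point: the usual web playbook doesn’t cover the bills here.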


  • In the comments section of another, a TikTok user responded to a thread outlining the current administration’s anti-LGBTQ actions by saying, “None of that has anything to do with us being gay.”

    Compliance will not save you, you craven fucking cowards. They have never been quiet about how much they hate gay people, and they’re focused so heavily on attacking trans people right now only because they want to pick off the weakest of the herd first.

    Bigotry is the enemy of all people. None of us is free until all of us are free.


  • I’m genuinely struggling to believe that you’re being anything other than intentionally disingenuous here, because it’s hard to imagine how anyone operating in good faith could manage to miss a point so completely and utterly.

    But on the off chance that you’re serious: the logic is that purpose has far more moral weight to it than means. Punching out a Nazi to save the black man he was trying to beat to death in the gutter is a morally good thing to do. Punching out a trans person because you’re a hateful bigot is a morally bad thing to do. Do I need to elaborate on that? I feel like I shouldn’t have to, but then it feels like I shouldn’t have to be explaining any of this.

    If you were in a sealed room with a thousand starving children, a padlocked shipping container full of food labelled “Property of Jeff Bezos”, and a set of bolt-cutters, what would you do? Because if the answer is anything other than “Break the lock open”, your entire moral system is completely and utterly fucked, and I do not know how to explain it to you any more plainly than that. If you actually believe that property rights are more important than human lives, then I honestly think you might need serious and extensive therapy to undo whatever damage has been done to you.


  • Yeah, I’m all for pro-Palestine activism; fully support this cause, and I have no moral objections to destroying some property to do it. Human lives are more important than inanimate objects. But I really wish they could have come up with a version of this plan that didn’t involve fucking over Ukraine.

    Unfortunately I suspect a lot of these people may be of the stripe of Leftist that treats anything relating to war - including arming the victims of unprovoked territorial aggression - as morally wrong, meaning they likely saw this as killing two birds with one stone.


  • It’s not the standard because it will likely have a LOT of unintended consequences.

    How do you share evidence of police brutality if they can use copyright to take down the video? How do newspapers print pictures of people if they have to get the rightsholder’s permission first? How do we share photos of Elon Musk doing a Nazi salute if he can just sue every site that posts it for unauthorized use of his likeness?

    Unless this has some extremely stringent and well written limitations, it has the potential to be a very bad idea.


  • There are, as I understand it, ways that you can train on AI-generated material without inviting model collapse, but that’s more to do with distilling the output of a model. What Musk is describing is wholesale confabulation being fed back into the next generation of their model, which would be very bad. It’s also a total pipe dream. Getting an AI to rewrite something like the total training data set to your exact requirements, and then verifying that it had done so satisfactorily, would be an absolutely monumental undertaking. The compute time alone would be staggering, and the human labour (to check the output) many times higher than that.
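    To put rough numbers on that, here’s a back-of-envelope sketch; every figure in it is an assumed placeholder, not anything xAI has published:

    ```python
    # Order-of-magnitude sketch only: all figures are assumptions for illustration.

    corpus_tokens = 15e12            # assumed training-set size (~15 trillion tokens)
    gen_tokens_per_gpu_hour = 3.6e6  # assumed ~1,000 tokens/sec per GPU, heavily batched
    review_tokens_per_hour = 9_000   # assumed careful human checking (~150 tokens/min)
    work_hours_per_year = 2_000      # one full-time reviewer-year

    gpu_hours = corpus_tokens / gen_tokens_per_gpu_hour
    person_years = corpus_tokens / review_tokens_per_hour / work_hours_per_year

    print(f"Generating the rewrite once: ~{gpu_hours / 1e6:.1f} million GPU-hours")
    print(f"Verifying it by hand:        ~{person_years:,.0f} person-years")
    ```

    The generation pass is merely expensive; it’s the verification step that is flatly impossible at human scale.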

    But the whiny little piss baby is mad that his own AI keeps fact-checking him, and his engineers have already explained that prompting it to lie doesn’t really work because the training data tends to outweigh the system prompt, so this is the best theory he can come up with for how he can “fix” his AI expressing reality’s well-known liberal bias.