Great for the fediverse! I suspect these changes to Twitter and Reddit are mainly a response to the growing hunger of generative AI companies, who are hoovering up data basically for free. Change is never easy, but I'm optimistic that this is the break open source and federated communities needed to start taking off. I hope people can see the value in decentralizing and help support these open source projects financially so that they can really start to scale. The reality is that scaling is expensive, and we all need to help where we can. These AI companies won't hesitate to suck up federated data either. If we want to live in an ad-free world, it's going to cost us.
Question: Can't AI companies just as easily hoover up content from the fediverse? Or is it something we just kind of accept but don't care about, since it isn't eating into fediverse finances?
I would assume it’s even worse for the fediverse considering the limited resources we have to run the servers. I wonder how the devs/server owners will handle this.
I don't think (completely wild guess here) AI content crawlers should have any more impact than the dozens and dozens of spiders that already make up most of my own site's traffic.
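If you want to check that claim for your own site or instance, here's a rough Python sketch that tallies crawler user agents from a combined-format access log. The log path and the list of bot markers are just placeholders to adapt; GPTBot and CCBot are the documented user agents for OpenAI's and Common Crawl's crawlers.

```python
# Rough sketch: estimate how much of an access log is known crawler traffic.
import re
from collections import Counter

# Common crawler user-agent substrings; extend to taste (hypothetical starter list).
BOT_MARKERS = ["Googlebot", "bingbot", "GPTBot", "CCBot", "AhrefsBot", "Bytespider"]

def tally(log_path="access.log"):
    hits = Counter()
    total = 0
    with open(log_path, encoding="utf-8", errors="replace") as f:
        for line in f:
            total += 1
            # Combined log format puts the user agent in the last quoted field.
            quoted = re.findall(r'"([^"]*)"', line)
            ua = quoted[-1].lower() if quoted else ""
            for marker in BOT_MARKERS:
                if marker.lower() in ua:
                    hits[marker] += 1
                    break
    return total, hits

if __name__ == "__main__":
    total, hits = tally()
    bot_total = sum(hits.values())
    print(f"{bot_total}/{total} requests ({bot_total / max(total, 1):.0%}) matched known crawlers")
    for name, count in hits.most_common():
        print(f"  {name}: {count}")
```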
The impact was magnified for Twitter because it generates so much new content every second. That wasn't an issue back when Twitter had a nice, properly cached API, and it shouldn't be an issue for fediverse instances either, because we have RSS and caching and we're not so stupid as to turn those off. Like, what kind of moron would do that?
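To make the caching point concrete, here's a minimal Python sketch of a conditional GET against an instance's RSS feed (the feed URL is just an example; Mastodon profiles expose RSS at /@user.rss). A properly cached feed lets the server answer the revalidation request with a cheap 304 instead of regenerating the page for every crawler hit.

```python
# Minimal sketch of why cached feeds keep crawler load cheap: a polite client
# sends the validators back (ETag / Last-Modified), and the server or a cache
# in front of it can answer "304 Not Modified" without rebuilding anything.
import urllib.request

FEED_URL = "https://mastodon.social/@Mastodon.rss"  # example feed

def fetch(url, etag=None, last_modified=None):
    req = urllib.request.Request(url, headers={"User-Agent": "polite-feed-reader/0.1"})
    if etag:
        req.add_header("If-None-Match", etag)
    if last_modified:
        req.add_header("If-Modified-Since", last_modified)
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status, resp.headers.get("ETag"), resp.headers.get("Last-Modified"), resp.read()
    except urllib.error.HTTPError as e:
        if e.code == 304:
            return 304, etag, last_modified, b""  # nothing new, nothing transferred
        raise

# First fetch pays full cost; the second should come back 304 if the server
# honours the validators it handed out.
status, etag, modified, body = fetch(FEED_URL)
print(status, len(body), "bytes")
status, *_ = fetch(FEED_URL, etag=etag, last_modified=modified)
print(status, "on revalidation")
```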
The issue comes when those AI bots start commenting and posting here. From what I understand, bots are a large reason why Beehaw keeps defederating from instances with open registration: they're difficult to moderate without good moderation tools.