I’ve had a pretty depressing morning, scrolling through my Subscribed feed and realising that 90% of new posts were from the same two bot accounts (bagel and somethingmelon, can’t remember exactly, and I’ve blocked them).
Thankfully, a few people had made “ai slop” comments under one, so I checked the post history and, sure enough, found a new account posting at an implausible rate. And once you started looking, the posts were kinda samey, generic, or a bit off. But if the bot had been programmed to post at a slower rate, I don’t think I’d have really noticed.
So my question is: should people be allowed to report bot accounts? And can/should mods be expected to assess someone’s humanity? The very idea is gross, but so is the thought that lemmy could very easily be swamped by a small number of more carefully written bots.
I don’t care if I’m “allowed” to report them. These latest bots are pretending to be people when it’s clear they are not. Going to call out, downvote, and report liars and trolls wherever they appear, AI or not.
If it’s a bot that is not registered as a bot, yes it should be reported.
If it’s registered as a bot account it doesn’t show up in my feed, because I have the hide-bot-posts setting turned on.
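If you’d rather do it over the API than through the web UI’s settings page, it’s the same toggle. A minimal sketch with lemmy-js-client; the instance URL, the JWT env var, and the exact method names for your client version are all assumptions on my part:

```typescript
import { LemmyHttp } from "lemmy-js-client";

// Point the client at your own home instance (lemmy.world is a placeholder).
const client = new LemmyHttp("https://lemmy.world");

// Authenticate with a JWT you got from client.login() beforehand.
client.setHeaders({ Authorization: `Bearer ${process.env.LEMMY_JWT}` });

// Flip the same switch the web UI shows as "Show bot accounts".
await client.saveUserSettings({ show_bot_accounts: false });
```

Worth noting it only hides accounts that actually set the bot flag, so it does nothing against the sneaky ones.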
The only reason I have that setting off is that I really enjoy the daily bunnies community, and that one is basically just a bot posting the daily illustration.
I didn’t know about that setting, cheers
I’m curious: if an RSS bot posts something and a human cross-posts it (hopefully because they decided it was high quality), does it show up or not?
Thanks for the thread. I’ve been noticing those accounts and have been trying to figure out what action I should take as a mod.
What’s really fucked up about it is that the content is actually pretty coherent and on-topic, and it seems clear that at least some of the comments are human – like it’s more “tool-assisted” than fully “bot.” So it feels like it’s almost valuable but also kinda ‘cheating,’ which (from a mod perspective) makes it hard to decide what to do about it.
What I’d really like to know is what techniques these accounts are using and what they’re actually up to. The other day I gave one a warning and demanded an explanation, but I only got an angry reply and then the account was nuked by an admin before I had the chance to do anything else.
You have to ban this sort of thing in the rules before it can be reported as something that’s banned.
When you make a report, it goes to the mods of that community, but it also goes to the admins of their home instance.
Generally, people take a dim view of AI content and remove it once it’s apparent.
Yes, I’d say report it when you see it. If you see something, say something.
InfiniteBagel already got banned for this, so at a minimum, Bagel is ban evading. Same for CosmicWaffle or whatever the other one was.
It’s just spam and most instances have a no spam rule. Yes, report.
How does it work on lemmy? When I report a post or comment as spam, that goes to the community mods? And they can ban an account from their community. But how does stuff get to the instance level (who I assume are the people with the power to ban an account completely)? Do community mods report problem users, or do instance admins just see patterns of behaviour in the mod logs?
Reports go to both the instance admins and the comm mods, and it’s a coin toss who responds first.
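If you want to see the mechanics, a report is just one API call and the server fans it out from there. A rough sketch with lemmy-js-client; the instance, the JWT handling, and the IDs are all made up for illustration:

```typescript
import { LemmyHttp } from "lemmy-js-client";

const client = new LemmyHttp("https://lemmy.world"); // your home instance
client.setHeaders({ Authorization: `Bearer ${process.env.LEMMY_JWT}` });

// Report a post: it lands in the queues of the community's mods
// and the instance admins at the same time.
await client.createPostReport({
  post_id: 123456, // hypothetical ID
  reason: "Suspected LLM bot account, not flagged as a bot",
});

// Comments have their own endpoint.
await client.createCommentReport({
  comment_id: 654321, // hypothetical ID
  reason: "Spam",
});
```

Community mods can only remove content and ban within their own community; a site-wide ban has to come from an admin, which is why the report going to both matters.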
Maybe a botornot community is in order? If a human cannot discern another human then…what…beep beep…beep
I’d say yes, depending on the channel rules, for the simple reason that it is literally an arms race until better bot blocking tools come about. Even then, it’s just the next step in the arms race.
I reported a spamming bot account the other day, apparently trying to hawk AI-based business/realty ideas and other AI slop across multiple communities.
When the cats community (yes, little kitty cats) has a post that obviously doesn’t belong, plus the account was only started a few hours ago, report that shit!
What do you mean, “should they be allowed to report?” The report button works regardless of what kind of account you’re reporting.
If you mean “should the accounts be allowed”, then that’s entirely up to the communities and the instances involved. Some may be fine with them, others might not, it’s not anything that can be decided globally.
If you’re in a community that’s allowing accounts you don’t want to see then block the account.
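Blocking is one call too, if you ever want to script it. Again just a sketch with lemmy-js-client, and the handle is hypothetical:

```typescript
import { LemmyHttp } from "lemmy-js-client";

const client = new LemmyHttp("https://lemmy.world"); // your home instance
client.setHeaders({ Authorization: `Bearer ${process.env.LEMMY_JWT}` });

// Look the account up to get its numeric id, then block it so its
// posts and comments stop appearing for you anywhere.
const res = await client.getPersonDetails({ username: "somebot@example.com" });
await client.blockPerson({ person_id: res.person_view.person.id, block: true });
```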
I know you can click report, but I wasn’t sure if “suspected LLM bot” was a legitimate complaint. Similarly, other commenters have said that it’s spam and can be reported as such, but I wasn’t sure if this kind of AI posting was considered spam. They’re mostly posting in lots of different communities, not just reposting the same shit. Tbh, if they had just slowed the pace to a few posts a day rather than 10+ an hour, I don’t think anyone would have noticed. So I just wanted to check whether it was a legit reason to report a post.
Whether it’s a “legitimate complaint” is the community-dependent thing, I guess. Check the rules of the communities they’re posting in. The instance being used by the bot account may also have rules about how people can use accounts there, I’m not sure how to report an account to their home instance but that seems like the sort of thing that should exist.
How about flagging suspected bot accounts and those that spam AI-generated content? That way users can decide whether to block/avoid them and communities can decide whether to remove them. I wouldn’t knowingly engage with a bot and suspect most others wouldn’t either, even if the posts do occasionally lead to interesting discussions between real users.
I go into my settings and I remove all bot posts.
I have no idea why they allow bots. Usually websites are trying to get rid of bots, but I guess it’s backwards on Reddit and lemmy.
I don’t mind some of the open bots (c/dailygames has some that post links), but it’s these shady LLM bots pretending to talk about naps or giving life tips that bug me. And sadly, because they aren’t openly tagged as bots, I don’t think my settings can help.
I saw an article a while back. I can’t remember what website in particular, but over 51% of its users were bots.
AI is going to start talking more and more casually, like it’s a human. And when it does, you’ll know that these bots were only the first wave of the terrible problem that’s to come.
Yup, it’s a depressing future. I love chatting with strangers on the Internet, but increasingly I can’t see a way of excluding bots without destroying accessibility (constant captcha bullshit) or privacy (forcing users to authenticate with ID). Neither is acceptable, so maybe I’ll be forced to speak to my actual human acquaintances…
Maybe it will force people to talk in person again.