While it is true that there is no good incentive for a bot to comment here, I believe the best safeguard against AI-generated content is that such content will be noticed easily here.
A couple of times, on other, even smaller technical forums, I have seen what appeared to be sequences of AI-generated comments posted from fake accounts (until the accounts were banned).
There could not have been any kind of financial gain from those postings, so the conclusion was that someone was testing their AI's ability to carry a conversation on such subjects.
The messages posted were remarkably coherent, but in the end it was obvious that their source could only have been either an AI or a deeply confused human.
What made those messages stand out was that even though they contained mostly correct information, the kind that might be found in, e.g., technical magazines, there was always some weird disconnect between the reply and the message it responded to.
Apparently, in most cases the AI failed to identify the previous poster's actual point of view. The reply referred to various items mentioned in the parent message, but instead of addressing the essential points it focused on irrelevant things mentioned more or less in passing, or it argued for or against certain points as if trying to contradict the previous poster, when both had in fact argued in the same direction.