Yes, a fun fact about slop text is that it's very low-perplexity text (basically, it's statistically likely text from an LLM's point of view), so most ranking algorithms will tend to be biased toward preferring it.
Since even classical machine learning pipelines use BERT-based embeddings on the backend, this problem is likely wider in scale than it seems if a search engine isn't proactively filtering it out.
A naive way of scoring how AI-laden a text is would be to run n-1 layers of a model and compare the text against the probability distribution over tokens the model predicts. It works somewhat for detecting obvious cases, but it isn't a strong enough method by itself.
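The core of that scoring idea reduces to a perplexity computation over the model's per-token probabilities. A minimal sketch, assuming you already have per-token log-probabilities from a model (the values below are made-up placeholders, not real model outputs):

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp(-mean log-probability per token).
    Lower perplexity means the model finds the text more statistically likely."""
    avg = sum(token_logprobs) / len(token_logprobs)
    return math.exp(-avg)

# Hypothetical log-probs for illustration: "slop" tokens are uniformly
# high-probability, while human prose mixes in some surprising tokens.
slop_logprobs = [-0.5, -0.3, -0.4, -0.6, -0.2]
human_logprobs = [-2.1, -0.4, -3.5, -1.2, -4.0]

print(perplexity(slop_logprobs) < perplexity(human_logprobs))  # slop scores lower
```

In practice you'd obtain the log-probabilities by running the text through the model itself and reading off the probability assigned to each actual next token; the thresholding is where the method gets weak, since plenty of dry human writing is also low-perplexity.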