Interesting, so someone submitting a paper for review could also submit one with hidden instructions for LLMs to summarise or review it in a very positive light.
Given this detection method works so well in the use case of feeding reviewing LLMs instructions, it should also work for the original submitted paper itself, as long as it was passed along with its watermark intact. Even those just using LLMs to summarise could easily be affected if LLMs were instructed to generate very positive summaries.
So the 2% cheaters on policy A, AND 100% of policy B reviewers could fall for this and be subtly guided by the LLMs overly-positive summaries or even complete very positive reviews (based on hidden instructions).
That this sort of adversarial attack works is really quite troubling for those using LLMs to help them understand texts, because it would work even if asked to summarise something.
This definitely happened to a paper that I submitted a couple of years ago. ChatGPT 4 was the frontier. The reviewer gave a positive, if bland, summary with some reasonable suggestions for improvement and some nitpicks. There were no grammar or line-number comments like those from other reviewers. They were all issues that would have been resolved by reading the appendices, but the reviewer hadn't uploaded into ChatGPT. Later on I was able to replicate the output almost exactly myself.
What I found funny was that if you asked ChatGPT to provide a score recommendation, it was also significantly higher than what that reviewer put. They were lazy and gave a middle grade (borderline accept/reject). We were accepted with high scores from the other reviews, but it was a bit annoying that they seemingly didn't even interpret the output from the model.
The learning experience was this: be an honourable academic, but it's in your interest to run your paper through Claude or ChatGPT to see what they're likely to criticise. At the very least it's a free, maybe bad, review. But you will find human reviewers that make those mistakes, or misinterpret your results, so treat the output with the same degree of skepticism.
> Interesting, so someone submitting a paper for review could also submit one with hidden instructions for LLMs to summarise or review it in a very positive light.
I may or may not know a guy who added several hidden sentences in Finnish to his CV that might have helped him in landing an interview.
Not at all. It's just that reportedly LLMs used to have a blind spot for prompt injection in languages with relatively few speakers and grammar dissimilar to that of English.
> Interesting, so someone submitting a paper for review could also submit one with hidden instructions for LLMs to summarise or review it in a very positive light.
LLMs have a real problem with not treating context differently from instructions. Because they intermingle the two they will always be vulnerable to this in some form.
Then these papers with these instructions get included in the training corpus for the next frontier models and those models learn to put these kinds of instructions into what they generate and …?
Given this detection method works so well in the use case of feeding reviewing LLMs instructions, it should also work for the original submitted paper itself, as long as it was passed along with its watermark intact. Even those just using LLMs to summarise could easily be affected if LLMs were instructed to generate very positive summaries.
So the 2% cheaters on policy A, AND 100% of policy B reviewers could fall for this and be subtly guided by the LLMs overly-positive summaries or even complete very positive reviews (based on hidden instructions).
That this sort of adversarial attack works is really quite troubling for those using LLMs to help them understand texts, because it would work even if asked to summarise something.