Yes, definitely. For every model I've used or tested, output quality degrades as the context grows, even well inside the advertised context window.
When I use chat UIs (which, admittedly, is less and less often), I never let a chat go beyond one message from me and one response from the LLM. If something is wrong with the response, I work out what to change in my prompt, then start a new chat (or edit the first message) and retry until it works. Any time I've tried "No, what I meant was ..." or "Great, now change ...", response quality drops sharply.
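For API use, the same habit is easy to script. Here's a minimal sketch of that edit-and-retry loop, assuming the OpenAI Python SDK (any chat-completion API works the same way; the model name and prompts are placeholders). The point is that every attempt sends a single, revised user message rather than the accumulated back-and-forth history:

```python
from openai import OpenAI

client = OpenAI()

def one_shot(prompt: str) -> str:
    # A fresh, single-message conversation on every call: no prior turns,
    # so the model never sees its own earlier (possibly wrong) answers.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

prompt = "Summarize RFC 2119 in three bullet points."
answer = one_shot(prompt)

# If the answer misses the mark, don't reply "No, what I meant was ...":
# fold the correction into the prompt itself and start over.
prompt += " Cover only the normative keywords; skip the boilerplate."
answer = one_shot(prompt)
```

Each retry costs one short prompt instead of an ever-growing transcript, which is exactly the context bloat the first paragraph complains about.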