Mixing artist names was by far the most effective way to create aesthetically pleasing images, so this is a huge change. DreamBooth can only fine-tune on a couple dozen images, and you can't train multiple new concepts into one model, but maybe someone will do a regular fine-tune or train a new model.
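For anyone who wants to try the mixing trick themselves, a rough sketch with Hugging Face diffusers (the checkpoint ID and the prompt are illustrative picks of mine, not anything from this thread):

```python
# Sketch: artist-name mixing in a prompt, via Hugging Face diffusers.
# The checkpoint and the artist pairing are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # any SD 1.x checkpoint works here
    torch_dtype=torch.float16,
).to("cuda")

# Stacking artist names is the "mixing": the text encoder blends
# whatever style associations each name carries from the training data.
prompt = ("a castle on a cliff at sunset, "
          "by Greg Rutkowski and Alphonse Mucha, highly detailed")
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("castle_mixed_style.png")
```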
That really depends on whether by 'aesthetically pleasing' you mean 'like artist X'. I was fooling around with furry diffusion and got to try a few different models. Yiffy understood artist names; Furry did not: it had further training, but on data stripped of artist tags.
All these models are pretty good, since that community is strong on art, style, skill, and tagging, which makes them a serious test case for what's possible. The model with artist names was indeed capable of invoking their styles (for instance, one artist's exceptional anatomy rendering carried over into the AI version). The more-trained model without artist names was much more intelligent: it was simply more capable of quality output, as long as your intention wasn't 'remind me of this artist'.
I think that's likely to be true in the general case, too. This tech is destined for artist/writer/creator enhancement, so it needs to get smarter at divining INTENT, not just blindly generating 'knock-offs' with little guidance.
What you want is better, more personalized tagging in the dataset. If I have a particular notion of an 'angry sky', this tech should be able to deliver that unfailingly, in any context I like. Greg Rutkowski not required or invoked :)
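For concreteness, a sketch of what that personalized tagging could look like on disk, using the metadata.jsonl convention that Hugging Face's ImageFolder loader and the diffusers text-to-image training scripts read (the filenames and captions are made up):

```python
# Hypothetical personal caption vocabulary ("angry sky", "calm sky")
# written in the metadata.jsonl format the ImageFolder loader understands.
# Fine-tuning on captions like these is what would make 'angry sky' land
# reliably without piggybacking on any artist's name.
import json

captions = {
    "sky_001.png": "angry sky, storm clouds boiling over a flat prairie",
    "sky_002.png": "angry sky, red dusk light under a wall of cumulonimbus",
    "sky_003.png": "calm sky, thin cirrus over open water",
}

with open("train/metadata.jsonl", "w") as f:
    for file_name, text in captions.items():
        f.write(json.dumps({"file_name": file_name, "text": text}) + "\n")
```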
I'd be curious how well the model still performs given such prompts. Disparate concepts, interpolation, n' all that. Surely it performs worse - but I bet it gets closer than you might think.
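One way to actually poke at that: encode two disparate prompts and interpolate between the embeddings before denoising, then see where the midpoints fall apart. A rough sketch with diffusers (checkpoint and prompts are my own placeholders):

```python
# Sketch: concept interpolation by lerping CLIP text embeddings.
# Assumes a standard SD 1.x checkpoint; the prompts are placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def embed(prompt: str) -> torch.Tensor:
    # Tokenize and run the CLIP text encoder, as the pipeline does internally.
    tokens = pipe.tokenizer(
        prompt,
        padding="max_length",
        max_length=pipe.tokenizer.model_max_length,
        truncation=True,
        return_tensors="pt",
    ).input_ids.to(pipe.device)
    with torch.no_grad():
        return pipe.text_encoder(tokens)[0]

a = embed("an angry sky over a calm sea, oil painting")
b = embed("a serene pastel sky over a calm sea, oil painting")

# Walk from concept A to concept B; the midpoints show how gracefully
# the model handles the in-between, underspecified territory.
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    mixed = torch.lerp(a, b, t)
    image = pipe(prompt_embeds=mixed, num_inference_steps=30).images[0]
    image.save(f"sky_mix_{t:.2f}.png")
```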