> I don't think any of the big boys are working on how to get an LLM to design a better LLM
Not sure if you count this as "working on it", but this is something Anthropic tests for in its safety evals on models: "If a model can independently conduct complex AI research tasks typically requiring human expertise—potentially significantly accelerating AI development in an unpredictable way—we require elevated security standards (potentially ASL-4 or higher standards)".
https://www.anthropic.com/news/announcing-our-updated-respon...