I've always been interested in constructed languages, but reading through this article just now made me wonder about curriculum learning for NLP models. Could more generalizable language models be achieved through curriculum learning of this sort, where simple mathematics and logic are introduced before anything else? The curriculum learning papers I've seen so far mostly target specific tasks, e.g. introducing simple questions before more complicated multi-hop reasoning in QA.
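To make the idea concrete, here's a minimal sketch of the simple-to-complex ordering that curriculum learning relies on. The difficulty heuristic (token count as a proxy for complexity) and the staging scheme are my own assumptions for illustration, not taken from any particular paper:

```python
def difficulty(example: str) -> int:
    """Crude proxy for difficulty: longer questions are treated as harder."""
    return len(example.split())

def build_curriculum(examples, n_stages=3):
    """Sort examples by difficulty and split them into stages,
    easiest stage first. A model would be trained on stage 0,
    then stage 1, and so on."""
    ordered = sorted(examples, key=difficulty)
    stage_size = -(-len(ordered) // n_stages)  # ceiling division
    return [ordered[i:i + stage_size]
            for i in range(0, len(ordered), stage_size)]

examples = [
    "What is 2 + 2?",
    "If Alice has 3 apples and gives Bob 1, how many remain?",
    "What city is the capital of France?",
    "A train leaves at 9am travelling 60 mph; when does it arrive 120 miles away?",
]

stages = build_curriculum(examples, n_stages=2)
```

The open question is whether ordering by *content* (math and logic first) rather than by surface difficulty, as above, would transfer to general language ability.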