Granted, I’m 62, so I’m from the old world. I attended college, and taught a couple of college classes, before the AI revolution. There was definitely a connection between learning and evaluation for most students. In fact, most students preferred more evaluation, not less: graded quizzes and homework rather than just one great big exam at the end. Among other things, the deadlines and feedback helped them budget their efforts. And the practice of getting something right by a deadline, while not an overt purpose of education, has a certain pragmatic value of its own.
Again, showing my age: in the pre-AI era, the technology of choice for circumventing evaluations was plain old cheating. And vanishingly few of the students who cheated their way past the evaluations actually learned anything from their courses.
If teaching and certifying could be separated, they would be. In fact, it has happened to some extent in computer programming, hence the “coding interview” and so forth. But programming is an unusual occupation in that it’s easy to be self-taught, and it’s questionable whether it needs to be taught at the college level at all.
Universities make money not by teaching, but by testing and certifying. That’s why AI is so disruptive in that space.