Hacker News

This story runs counter to the other story that was on the HN front page at the same time, which I enjoy:

https://news.ycombinator.com/item?id=12414746

https://www.insidehighered.com/news/2016/09/02/massachusetts...



Hey, Gradescope dev here. What detaro said is on the money -- we're able to group identical short-answer responses so that they can be graded in one shot. It's not necessary to analyze the answer content for this.
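Gradescope's actual implementation isn't public, but the idea of grouping identical responses so each group is graded once can be sketched in a few lines. This is a hypothetical illustration (the normalization rule and data shapes are assumptions, not Gradescope's):

```python
from collections import defaultdict

def group_answers(answers):
    """Group short-answer responses by normalized text so each distinct
    answer is graded once and the mark is applied to the whole group.
    `answers` is a list of (submission_id, raw_text) pairs."""
    groups = defaultdict(list)
    for submission_id, text in answers:
        # Case- and whitespace-insensitive key; no semantic analysis needed.
        key = " ".join(text.lower().split())
        groups[key].append(submission_id)
    return dict(groups)

answers = [
    (1, "O(n log n)"),
    (2, "o(n log n)"),
    (3, "O(n^2)"),
    (4, "O(n  log n) "),
]
groups = group_answers(answers)
# Submissions 1, 2, and 4 collapse into one group; 3 stands alone,
# so the grader marks two answers instead of four.
```

With 1000+ submissions and heavily repeated short answers, the number of groups to grade can be far smaller than the number of papers.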

Many (though certainly not all) of the instructors using Gradescope are teaching CS or Math courses with heavy enrollment. So each exam will have many submissions (even 1000+), and each submission will have a lot of short answers. Marking each one on its own is tedious, but until recently it was the state of the art for paper exams.

Instructors can and do grade essays on Gradescope, and are able to save time. But in that case the savings comes from being able to create rubrics on the fly, to change point values without re-adjusting every single marked paper, to grade across questions rather than across exams, to publish grades without having to type them all in, and so on.
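The rubric savings described above follow from storing which rubric items applied to each submission rather than a raw score, so a point-value change re-scores everything automatically. A minimal sketch, assuming a hypothetical data model (not Gradescope's actual schema):

```python
# Hypothetical rubric: each item has a description and a point value.
rubric = {
    "R1": {"desc": "correct recurrence", "points": 3.0},
    "R2": {"desc": "off-by-one error", "points": -1.0},
}

# Each submission records only which rubric items were applied.
submissions = {
    "alice": ["R1"],
    "bob": ["R1", "R2"],
}

def score(applied, rubric):
    """Scores are derived on demand from the rubric."""
    return sum(rubric[item]["points"] for item in applied)

before = {name: score(items, rubric) for name, items in submissions.items()}
# Mid-grading, the instructor decides the off-by-one should cost 2 points:
rubric["R2"]["points"] = -2.0
after = {name: score(items, rubric) for name, items in submissions.items()}
# Every affected submission's score updates without re-marking any paper.
```

Because the marks are references to rubric items, not numbers written on papers, adjusting a point value is one edit instead of a pass over every exam.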

There's a lot of grunt work that goes into grading, and it doesn't have to be that way :)


I may have misread it. Does that still count as AI?

Also, they've had a robot grading GMAT essays since 1999 (http://www.800score.com/content/essay.html)


The classification of answers into groups is the AI.


Does it? Essay questions are a special case, and this story doesn't claim they can solve them. But they offer improved tools, based on MOOC grading tools (from what I've seen of them), for paper exams. That would, for example, let instructors spend more time manually grading an essay instead of wasting time checking simple, short-form questions, which in many cases make up most of an exam.



