I’m coming to the end of week three of Coursera’s Statistics One course, and this week we’ve been looking at regression and hypothesis testing. Once again, the subject matter has been very good. I’ll need to take a closer look at deriving the regression coefficients using matrices to make sure I fully understand it, but apart from that it’s been a good week. I still have the quiz and assignment to do, but that’s a job for the weekend.
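For my own reference, here’s a minimal sketch of that matrix approach: the normal equations, where the coefficient vector is (XᵀX)⁻¹Xᵀy. This is my own illustration in Python with NumPy rather than anything from the course materials, and the simulated data and variable names are mine.

```python
import numpy as np

# Simulate data from a known model: y = 2 + 3x + noise,
# so we can check the recovered coefficients.
rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = 2.0 + 3.0 * x + rng.normal(scale=0.5, size=50)

# Design matrix X: a column of ones (the intercept) next to the predictor.
X = np.column_stack([np.ones_like(x), x])

# Normal equations: beta_hat = (X'X)^{-1} X'y.
# (np.linalg.solve is preferred numerically, but inv mirrors the formula.)
beta_hat = np.linalg.inv(X.T @ X) @ X.T @ y
print(beta_hat)  # roughly [2.0, 3.0]: intercept and slope
```

The nice thing about the matrix form is that it works unchanged for any number of predictors; you just add more columns to X.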
Last weekend I did the quiz and assignment for week two, and a rather critical error became apparent: the answers recorded by the system didn’t match the answers I submitted. Yes, that’s right, the online marking system (from an organisation top-heavy with computer scientists) doesn’t record a student’s submissions correctly. I became aware of this when looking back at previous attempts, trying to work out which questions I was getting wrong so I could revisit the data analysis. I posted to the forum to report the issue and received an email saying that a solution was being worked on and would be applied retrospectively to existing submissions. Surely this was tested before deployment? If so, how did such a fundamental bug get past testing? Can you imagine how frustrated and angry students would be if they were only allowed one attempt and a completion certificate were on offer? Last week I said:
- Is it too much to ask that something as well funded as Coursera, using video as the primary teaching method, could actually produce videos without errors in them?
to which I can add this week:
- Is it too much to ask that an automated online marking system marks what I actually submitted?
So that’s week three: once again, the quality of the subject matter is being let down by pedagogy and planning issues.