But … but … that’s not what I said

I’m coming to the end of week three of Coursera’s statistics one course, and this week we’ve been looking at regression and hypothesis testing. Once again, the subject matter has been very good. I’ll need to take a closer look at getting the regression coefficients using matrices to make sure I fully understand it, but apart from that it’s been a good week. I still have the quiz and assignment to do, but that’s a job for the weekend.

Last weekend, I did the quiz and assignment for week two, with a rather critical error becoming apparent: the answers recorded by the system didn’t match the answers I submitted. Yes, that’s right, the online marking system (from an organisation top-heavy with computer scientists) doesn’t record a student’s submissions correctly. I became aware of this when looking back at previous attempts to try and work out which questions I was getting wrong so I could go back and look at the data analysis again. I posted to the forum to report the issue and received an email saying that a solution was being worked on and would be applied to existing submissions retrospectively. Surely this was tested before deployment? In which case, how did such a fundamental bug get past testing? Can you imagine how frustrated and angry students would be getting if they were only allowed one attempt and a completion certificate was being offered? Last week I said:

  • Is it too much to ask that something as well funded as Coursera, using video as the primary teaching method, could actually produce videos without errors in them?

to which I can add this week:

  • Is it too much to ask that when using an automated online marking system it marks what I actually submitted?

Week three and again the quality of the subject matter is being let down by pedagogy and planning issues.


Coursera Statistics Week Two – Mr Grumpy Comes to Town

I’m coming to the end of week two of the Coursera Statistics One course, with just the quiz and assignment to do over the weekend. There have been a lot of forum postings because people are having difficulty using R; many say they’re dropping out because of the problems they’re having getting the software to run and produce results, particularly since there were some errors in the main lectures. For example, hist(someVar) was used when it should have been hist(someObject$someVar), because the variable lives inside a data frame. I’ve been posting to the forums and helping out where I can, which has fitted nicely with the eModerating course I’ve also been taking over the last two weeks.

In response, Coursera has posted a number of video tutorials on using R by a female staff member. She’s very good: the tutorials are detailed and comprehensive without being confusing. For example, she demos common mistakes and what the corresponding error messages look like. But this is where Mr Grumpy makes an appearance. This is week two, and these videos were created specifically to help people with R, yet there are mistakes in them. At one point, list.files() is shown without its dot, as all one word, which would give an error.
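For anyone hitting the same errors, here is a minimal R sketch of both mistakes and their fixes. The names someObject and someVar are the placeholders from the lectures, not real course data, and the data here is just randomly generated for illustration:

```r
# A column inside a data frame is not visible as a bare variable.
someObject <- data.frame(someVar = rnorm(100))

# hist(someVar)             # error: object 'someVar' not found
hist(someObject$someVar)    # works: access the column with $

# The tutorial typo is similar: the function name needs its dot.
# listfiles()               # error: could not find function "listfiles"
list.files()                # works: lists files in the working directory
```

An alternative to repeating someObject$ everywhere is with(someObject, hist(someVar)), which evaluates the call inside the data frame.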

  • Is it too much to ask that something as well funded as Coursera, using video as the primary teaching method, could actually produce videos without errors in them?
  • This is week two. Surely anyone who’s used R would see the need to support students who’ve never encountered it before (and who are probably strangers to the command line as well) from the beginning of the course, possibly as a week 0 activity.
  • There is no certificate of achievement (not an issue for me), but quiz and assignment submissions were initially restricted to one attempt only. If there’s no certificate, why not allow multiple attempts from the start so that students can master the materials and formatively assess their own progress?

Whatever happened to learning design? How does the initial course presentation meet Professor Conway’s aim of maximising retention? And just to make it clear, I’m criticising the pedagogy here, not the content or the presentation of the content, which I find to be very good.

I’d be interested in hearing perspectives from others on the course.

Statistical MOOCing

I’ve recently started yet another MOOC. This time it’s Coursera’s Statistics One. It’s early days yet (I’m only on lecture two), but there are some interesting contrasts with another statistics MOOC I recently did, Udacity’s ST101 Introduction to Statistics.

The Coursera offering consists of videos, typically about fifteen to twenty minutes long and totalling around three to four hours per week, plus one quiz and one assignment each week. The quiz and assignment allow only one attempt.

What I like is that the content looks more formal and rigorous than the Udacity offering, and critically, we’ll actually be doing meaningful calculations using the R statistical software, which we’re using with the first-year students at Leicester University. With Udacity, I felt their statistics course was more ‘Look how interesting statistics is’ than ‘This is how you use statistics’.

My concern is with the assessment. With Udacity, the videos were short, in some sections only a few seconds long, before an in-video quiz was used, often to take a student step by step through a process or the development of an idea rather than simply to recall information. With the Coursera MOOC, there are a couple of quiz slides at the end of each video, but the course notes specifically state:

‘The purpose of these “in-video quizzes” is to motivate you to engage in the material and to practice retrieving newly learned information. Your performance on these questions will be monitored for course evaluation purposes only.’

In other words, they are there to promote recall and aid course management, and by having the quizzes at the end the student simply sits and listens rather than learns by doing. It may be that once we get into actually calculating things the in-video quizzes will require more interaction, but at the moment I’m disappointed. Even at the scale of these MOOCs, Udacity shows that there are alternatives to the ultra-didactic route. What I’d like is the rigour of the Coursera content combined with the engagement of the Udacity formative assessment. What I’d really like is a stats MOOC that is more ‘task-based’, to use Lisa M Lane’s terminology.