Two fishy MOOCs

A few weeks ago, I completed two MOOCs that ran at the same time and covered similar subject areas (at least at first glance), so I thought I’d ‘compare and contrast’ the two. One was the University of Southampton’s Exploring Our Oceans course on Futurelearn; the other was Duke University’s Marine Megafauna course, which ran on Coursera. I do have a background in the subject – I did a degree in Marine Biology and Zoology at Bangor University – so my aim was to look at the courses from a professional (educational technology) viewpoint while refreshing my knowledge of a subject I love.

Photo credit: Strobilomyces

Although both courses involved the oceans, they focused on different disciplines. Southampton’s course was more of an oceanography course, while the Marine Megafauna course, as the name suggests, used the enigmatic big beasties to draw in and hold the students’ attention. Both courses could be described as xMOOCs, although, as Grainne Conole has pointed out recently, there are much more nuanced ways of describing and classifying MOOCs. Any comparison has to take the platform into account because it isn’t a neutral actor, as we can see in the way video is used on Coursera and assessment is done on Futurelearn.

Who are the students?

The Marine Megafauna course largely replicates a standard model of undergraduate education placed online, and doesn’t seem to assume any existing knowledge, although with a background in the subject I might be missing something. The Southampton course also doesn’t assume existing knowledge, but here the approach is different: the target demographic is what I’ll call the ‘curious amateur’. In other words, someone who comes to the subject with curiosity and passion, but who may have little experience of the subject, or of studying, in recent years. As well as not assuming existing knowledge, Exploring Our Oceans also had material explicitly marked as advanced and optional so that participants could explore a particular area in more depth.

Video. And more video.

Both courses make frequent use of video. Marine Megafauna, like many of the courses on Coursera, uses video as its primary way of delivering content. There were five to eight videos per week, mostly video lectures with other video clips, simulations, and audio embedded within them. Futurelearn delivers learning materials in a very linear manner, so, for example, in week three there will be items 3.1, 3.2, and so on. Some of these were videos (complete with pdf transcript), but some were text-based where that was more appropriate. And that’s as it should be – video, useful as it is, is not the one medium to ‘rule them all’. In fact, one way that I’ll catch up on a MOOC is to read the video transcript and skip to particular points if I need any graphics to help with my understanding. Video needs to be appropriate and offer something that the participant can’t get more easily or faster through different media, and for the majority of the time Exploring Our Oceans did that. Production values were high. We saw staff filmed on the quayside, on ships and in labs, explaining the issues and the science from authentic environments.

Related to this, here’s an example of poor practice with video. I’m enrolled on another Futurelearn MOOC with a single academic as the lead educator. At the start of every video the academic introduces themselves and their academic affiliation as though we’ve never met them before. It’s week five. There are multiple videos each week – it’s not like we’re going to forget who they are between step 5.2 and step 5.5.

What didn’t I like?

I felt Marine Megafauna was a little heavy on taxonomy initially, as we had introductions to each group of animals. Taxonomy is important. For example, the worms that live around hydrothermal vents (and which made appearances on both courses) have moved phylum since I did my degree, and major groupings within the gastropods have also been revised in 2005 and since. Even so, I would have preferred an introduction to group X (including taxonomy) followed by exploring that group’s ecology, conservation issues and adaptations to life in the ocean in more detail. You could compare with other groups at that point, or have a summary/compare-and-contrast section later in the course, which would serve as a good synthesis of the course so far. As it was, it felt like we were marking time until we got to the interesting parts, and course retention might have suffered at that point.

For the Southampton course, the parts I disliked were outside the control of the staff. Futurelearn uses a commenting system at the bottom of the page, similar to that of blogs, rather than the forums found on other platforms. In one way that’s good, in that it keeps the comments within context, but bad in that it prevents participants from starting their own discussions, and searching comments is a non-starter. The other thing I didn’t like about the Southampton course was the assessment, which I’ll come back to later.

What did I like?

In Exploring Our Oceans I liked the range of other activities that we were asked to do. We shared images, planned an expedition, and did a practical. Yes, a real-life, who-made-that-mess-in-the-kitchen practical on water masses and stratification using salt and food dye. In Marine Megafauna, I enjoyed the three peer assessments and the fact that scientific papers were an explicit part of each week’s activities. We would have between one and three PLoS ONE papers each week, and the material within them was assessed through the weekly quizzes. There were supporting materials for those unused to making sense of journal articles. Exploring Our Oceans did use some journal articles when discussing how new species were described and named, but not as an integral part of the course.


Assessment

This was the area in which I found the biggest difference between the two courses, partly, I think, due to the different target participants (‘undergraduate-ish’ versus ‘curious amateur’), but largely due to the restrictions of the platform. Marine Megafauna had weekly quizzes of between 20 and 25 multiple-choice questions, including questions that (unusually for MOOCs) went beyond factual recall. Three attempts were allowed per quiz, with the best result counting. Each quiz contributed 10% to the final course mark. There were also three peer assessments – a Google Earth assignment, a species profile, and a report on a conservation issue for a particular species. The Google Earth assignment was largely quantitative and functioned as the peer-marker training for the following two.

Exploring Our Oceans had quizzes of five to six multiple-choice questions, with three attempts per question and a sliding scale of marks (three marks for a correct answer on the first attempt, down to one mark for a correct answer on the last attempt). But this is a platform issue rather than a course one. At a recent conference, someone who had authored Futurelearn quizzes gave their opinion on the process, the polite version of which was “nightmare”. I have seen peer assessment used successfully on other Futurelearn courses, so it is possible, but it wasn’t used within this course.
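For concreteness, that sliding scale boils down to a tiny scoring rule. Here is my own sketch of it as a function (an illustration of the rule as described, not Futurelearn code; the function name is hypothetical):

```python
def question_marks(attempts_used: int) -> int:
    """Marks for a correct answer on an Exploring Our Oceans quiz question:
    3 on the first attempt, 2 on the second, 1 on the third."""
    if not 1 <= attempts_used <= 3:
        raise ValueError("a question allows between one and three attempts")
    return 4 - attempts_used
```

So a five-question quiz answered correctly first time every time scores 15, while scraping through with every answer on the third attempt scores 5.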

Personally, I preferred the longer assessment, for a number of reasons. First, it tests me and gives me a realistic idea of how I’m doing, rather than giving me a good mark for remembering something from lecture one and guessing the other four questions. Secondly, more questions means fewer marks per question, so one area of difficulty or confusion doesn’t drag my score down. Thirdly, and regardless of how it contributes to the final course mark, I see it as formative, something to help me. I want to be tested. I want to know that I ‘got it’; I also want to know that my result (formative or not) actually means something, and that means rigorous assessments. This may not be the same for everyone: a more rigorous assessment may discourage participants who only see assessment as summative, and lead them to believe that they are ‘failing’ rather than being shown what they need to work on.

Some final thoughts

If I didn’t already know the subject, what would I prefer? I think I’d prefer the approach of Exploring our Oceans but with the assessment of Marine Megafauna, with a clear explanation of why that form of assessment is being used. I really enjoyed both courses so if you’re interested in marine science, then I’d say keep an eye out for their next run.

P.S. Santa? Put one of these on my Christmas list please. Ta.


Plagiarism in MOOCs – whose loss?

I’m enrolled on a few MOOCs at the moment (no surprise there), some for work and some for personal interest. The two for personal interest are the Marine Megafauna course from Duke University on Coursera, and the University of Southampton’s Exploring Our Oceans course on Futurelearn, which has just finished. I’ll compare the two approaches and platforms in another post, but what I want to talk about here is the issue of plagiarism that was flagged up in an email for the Marine Megafauna course recently.

The Marine Megafauna course uses peer review on written assignments, with a workflow in which a student submits in the first week, marks five other assignments and self-assesses their own submission the following week, and then receives their marks the week after that. The assignment we had was to write a profile of a species for a general audience. There were a number of sections to the profile and the marking criteria were explicit, so it was relatively easy to get high marks provided you followed the criteria and didn’t pick an obscure species with little published research. I picked the leatherback turtle, partly because its range extends into UK waters, and partly because the largest leatherback ever recorded washed ashore at Harlech in North Wales in 1988.

While I hadn’t been concerned with whether the assignments I evaluated were plagiarised or not, a forum thread on plagiarism became quite animated and led to the course email. The position stated in the email was that “plagiarism is legally and morally a form of fraud”, but that “we wish to keep student evaluations focused on the substance of the assignment”. The email also states that “students are not required to evaluate the plagiarism status of the assignments they receive”, but then goes on to give advice about when it would be appropriate to award a zero mark if plagiarism is found. Initially, this made me feel uneasy, and I’ve yet to finalise my thoughts on the issue, so what follows is a little ‘thinking out loud’.

First of all, I’m talking specifically about plagiarism in MOOCs, not within higher education in general, where I have more conventional views. I have a number of questions:

  • If plagiarism is fraud, then who is being defrauded here and of what?
  • Is it appropriate to punish for plagiarism in a learning environment where there is no qualification or credential on offer (leaving aside the issue of signature track)?
  • Is it appropriate to punish for plagiarism with little or no training or guidance on what constitutes plagiarism?

The approach on Marine Megafauna mimics the processes of traditional higher education, but I would question whether that’s appropriate. In traditional HE, there is a clear power structure and demarcation of roles. Students cede authority to academics and receive rewards (grades and qualifications) in return for their academic labour. A useful (although imperfect) analogy would be that of employer and employee. The employee conforms to the demands of the employer in expectation of the reward (salary) that they will receive later. In a MOOC, that all goes out of the window, because the analogy is closer to that of someone doing voluntary work, and it becomes a lot more difficult (and ethically dubious) for the ‘employer’ to criticise the ‘worker’ for something such as turning up late. Likewise in MOOCs, the student is a free agent studying for reasons other than gaining a formal qualification.

In the academic–student scenario there is an implied contract, and breaking the terms of that contract by presenting the work of another as your own carries penalties and punishments. But where is the contract in the MOOC? The only thing I’m receiving is the knowledge and skills I gain from the course, and if I cheat, I only end up cheating myself (assuming I’m not signed up for something like a specialisation or signature track). True, there is the honour code and a declaration that the work is the student’s own, but still: if plagiarism is fraud, then who is being defrauded here, and of what? And what of the case where the plagiarism consists of content from Wikipedia, where the content is explicitly licensed for re-use?

There is also the issue that the students had not been given any guidance on what constitutes plagiarism, either as a submitting student or as a marker – probably, I suspect, because the course team weren’t expecting students to consider it. Student attitudes varied, with some unconcerned (“We’re not supposed to hunt for plagiarism”) while others were using online services to check for plagiarism. In fact, one of the reviewers of my submission gave the final feedback of “I’ve checked your text in … and had 90% originality.” But an originality score is meaningless without context, and there were some cases where students had very little idea of what was plagiarism and what was not. One student asked whether their work would show as plagiarised because they’d typed it up in a Word file beforehand. Another explicitly asked if finding a match to a source that gave the size and dimensions of the animal counted as plagiarism. In other words, was quoting the basic biological facts of the animal plagiarism or not? With this level of awareness amongst students, how can it be reasonable to use students to police plagiarism, however informally? And why should students have knowledge of the issue? They’re doing the course for fun or interest, perhaps with little recent experience of educational settings.

The third assignment is still to be marked. Personally, I won’t be checking for plagiarism – as one of the students on the forum said, “That’s not my call”. If a student wants to cheat themselves, that’s their loss. If the student is on signature track (which I won’t know), then they’ve paid a fee to the institution, and it’s the institution’s job to check for plagiarism. E-learning is not an offline course put online, and that applies to the culture as well as the learning materials themselves.

What’s special about specialisations?

Coursera has launched its ‘specialisations’ program. These are groups of existing courses in the same subject area with signature track options, followed by a two-week ‘capstone exam’ that reviews and then assesses the course materials. All the courses within a specialisation currently come from a single institution. The specialisation certificate does show the institution’s name, but also mentions that the program is non-credit-bearing. Specialisations can also involve a significant investment of time. The largest is the data science specialisation, consisting of ten courses (including the capstone exam), each around three to five hours’ work a week (assuming their estimates are correct) and running in blocks of three.

So my first question is why? What problem is this initiative attempting to solve? Suppose I enrolled as a student. I do the courses, take the capstone exam and get my certificate. Now what?

Educational accreditations can function as a token, a medium of information exchange. For example, a degree could be thought of as a ‘token’ because institutions, graduates and employers all understand its meaning and intrinsic value. Tokens don’t have to be qualifications. Martin Hall describes how Silicon Valley favours participation in online programming and developer communities over formal computer science qualifications, and that’s fine. You could argue that someone’s behaviour, code and problem-solving in those forums gives a better indication of their potential as a developer than a degree transcript. The community engagement functions as an unconventional token, but it’s transparent because all sides can see what it represents.

Which brings me back to my fictional specialisation certificate. I can’t see what it offers me other than an extra summative assessment and my results on a single certificate. How would an employer know what that represents? They may be able to see a syllabus on a course information page, but they’re unlikely to be able to see any detail of what the course entails or how rigorous the assessment is. True, they can’t do that with a conventional degree either, but they don’t need to, because they have that shared meaning of what the degree – the ‘token’ – represents, from the systems (such as quality assurance) already in place. That’s all missing with MOOCs.

I like the idea of showing potential students a pathway, a program that allows them to develop their knowledge and skills in an area. I’m just not sure I’d be willing to pay for the privilege, especially when there’s little indication that my investment of time and money would hold value for anyone else.

MOOCs – Distance Learning Done Badly?

Long ago, in the dim distant past, when mobile phones were the size of suitcases and browsers were just people looking around a shop, I studied using distance learning materials. It was the 1980s, and I was serving in the Army. I studied with the UK’s Open University and did a number of modules in science and maths. There were no entrance qualifications, and one of the OU’s goals was to make university-level education available to as many people as possible. The only ‘entrance’ restrictions, if we can call them that, were that you paid fees and that you had to study foundation modules before moving on to higher levels. The foundation courses assumed no prior knowledge and minimal or dormant study skills, and taught to that demographic, bringing everyone up to the level where they could tackle more advanced courses.

The materials were multimedia – they used TV, text, audio cassettes, workbooks, and practical kits – and their quality was excellent. The books the S330 Oceanography course team created were recommended reading when I studied marine biology at a conventional university. Lalli and Parsons’ Biological Oceanography from that course is on reading lists at the university where I now work. I still look at the materials I create or use now and ask myself ‘are these OU quality?’. The TV programmes were on Saturday mornings and at other, less sociable hours, and these both publicised the OU and drew people in. One of the reasons I enrolled with the OU was because I’d watched the TV programmes.

I loved studying with the OU. I measured the distance to the moon using a variant of this method. I explored the population dynamics of insects using holly leaves collected from a local wood. I calculated the valency of elements on the kitchen table using the experiment kits. I stared, fascinated, down a geology microscope at the beauty of thin rock sections under polarising light. I even enjoyed the assessments. There were questions within the texts and at the end of sections to check understanding. I submitted my TMAs (tutor-marked assessments), which came back not only with a mark, but with rich and detailed feedback. CMAs (computer-marked assessments) were multiple choice, which I answered by putting a line through the letter for the correct answer and posting the sheet for the computer to scan and mark. When the time came for my first exam, I was on active service. My unit pulled me out of the field and transported me to the assessment centre; I took the exam, then did the reverse journey and went back out on patrol. It’s certainly one way to put exam nerves in perspective, but not a technique I could really recommend :-). In fact, it was the attitude of my next unit to my studies that was a major factor in my decision to leave the army, but that’s a different story.

In later courses, I joined study groups and attended tutorials, and of course, the residential summer schools. These were a week at a conventional university using their labs and facilities, and where breakfasts could be livened up by ‘people watching’ to see who came down with who, or which pair, apparently inseparable during the rest of the summer school, made a point of coming down to breakfast separately.

So why this nostalgia? Well, it seems to me that the current crop of MOOC platforms, such as Coursera and Udacity, for all the hype they’ve received, are trying to achieve similar aims to the OU, only with different technology and a few decades later. They claim that they’re open and that anyone can study with them, which is true to an extent, but I question whether a MOOC student without well-developed learning skills would be able to study these courses effectively. There is no ramping up of study skills; partly that’s due to the length of these courses, but it’s also down to bad pedagogy, as I’ve explored in other posts. The forums quite often have a number of posts from students about to drop the course because they find themselves out of their depth, and those students are probably a minority compared to those who silently give up.

The learning design (in the MOOCs I’ve enrolled on) is mostly based on the ‘transmit content and test’ model. The test can usually be taken a number of times, but feedback tends to be minimal, especially if students need to achieve a particular mark to gain a certificate of completion. Udacity has a better approach to formative assessment, with in-video quizzes liberally scattered through the presentations. There’s no real collaboration around making sense of the content; there’s no real conversation, to use Laurillard’s model. It’s as if the OU, rather than creating its own materials, simply posted a textbook to the student, scheduled an exam for the end of the academic year, and left students to talk amongst themselves if they wanted to. We wouldn’t expect that approach to work offline, so why is it suddenly thought to be a viable model when moved online? And not just viable, but innovative and disruptive? Online learning has a huge role to play in the future of higher education, but not using the model of the Coursera-style MOOCs, which, although they have solved the problem of scale, have lost much of what we already knew worked well.


My last couple of posts may have come across as me being critical of MOOCs. I’m not, although I do have criticisms of how some MOOCs are implemented and whether they’re as disruptive and innovative as they claim to be, but I’ll save those thoughts for another post. I like MOOCs, and just to prove the point, I’ve started another three this week: Computing for Data Analysis, Social Network Analysis, and Writing in the Sciences, making a total of four courses. Actually, it would be five if I’d ever started the Introduction to Sustainability course as well. Luckily, the workload will drop over the next two or three weeks as courses end, but I’m feeling a little like the MOOC dial has been turned all the way up to eleven, so I may drop a course if it proves too much.

Writers are told that if they want to write well they need to read voraciously, so should I want to ‘write’ a MOOC in the future, ‘reading’ so many is a positive advantage. Coursera as a brand isn’t a monolithic whole – the way each course is presented varies quite a lot. In one course I’m doing, the videos are basically presentations with an audio soundtrack and a short ‘talking head’ sequence at the start. In Statistics One, the videos show the instructor (tablet and stylus in hand) with the slides in the background, which the video cuts away to when necessary. Some courses have certificates of completion while others don’t. In some, the discussions in the forums are a key pedagogical feature, while in others the forums function more like a helpdesk. The Social Network Analysis course steps outside the Coursera box by having its own Twitter account.

A new feature I’ve noticed this week is the concept of late days. Each student has a number of late days that they can spend on extending the deadline for an assignment, which means no more missed deadlines because Saturday is going to be taken up with Aunty Ethel’s birthday party. This is great because it’s such a simple concept, and shows how online learning should take account of a student as a human being, a person with a real life of normality enlivened with the occasional triumph and disaster. It takes account of the student beyond their existence as an educational entity and allows flexibility in the course to accommodate that. So this week, I’d like to finish on a positive note: a round of applause for late days please.
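The bookkeeping behind late days is simple enough to sketch. This is my own minimal illustration of how such a policy might work, not Coursera’s actual implementation; the function name and the whole-day granularity are assumptions:

```python
from datetime import date

def apply_late_days(deadline: date, submitted: date, balance: int):
    """Return (accepted, remaining_balance) for a submission.

    Each late day extends the deadline by one day, and days are only
    spent when the submission is actually late.
    """
    if submitted <= deadline:
        return True, balance               # on time: spend nothing
    days_late = (submitted - deadline).days
    if days_late <= balance:
        return True, balance - days_late   # cover the lateness with late days
    return False, balance                  # not enough late days left

# A student with 3 late days who submits 2 days late keeps 1 late day:
accepted, remaining = apply_late_days(date(2012, 10, 6), date(2012, 10, 8), 3)
# accepted is True, remaining is 1
```

The nice property is that the student decides when to spend the budget, so Aunty Ethel’s birthday party costs a late day or two rather than a zero.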

But … but … that’s not what I said

I’m coming to the end of week three of Coursera’s Statistics One course, and this week we’ve been looking at regression and hypothesis testing. Once again, the subject matter has been very good. I’ll need to take a closer look at deriving the regression coefficients using matrices to make sure I fully understand it, but apart from that it’s been a good week. I still have the quiz and assignment to do, but that’s a job for the weekend.

Last weekend, I did the quiz and assignment for week two, and a rather critical error became apparent: the answers recorded by the system didn’t match the answers I submitted. Yes, that’s right, the online marking system (from an organisation top-heavy with computer scientists) doesn’t record a student’s submissions correctly. I became aware of this when looking back at previous attempts to work out which questions I was getting wrong, so that I could revisit the data analysis. I posted to the forum to report the issue and received an email saying that a solution was being worked on and would be applied to existing submissions retrospectively. Surely this was tested before deployment? In which case, how did such a fundamental bug get past testing? Can you imagine how frustrated and angry students would be if they were only allowed one attempt and a completion certificate was being offered? Last week I said:

  • Is it too much to ask that something as well funded as Coursera, using video as the primary teaching method, could actually produce videos without errors in them?

to which I can add this week:

  • Is it too much to ask that when using an automated online marking system it marks what I actually submitted?

Week three and again the quality of the subject matter is being let down by pedagogy and planning issues.