Two fishy MOOCs

A few weeks ago, I completed two MOOCs that ran at the same time and covered similar subject areas (at least at first glance), so I thought I’d ‘compare and contrast’ the two. One was the University of Southampton’s Exploring Our Oceans course on Futurelearn; the other was Duke University’s Marine Megafauna course on Coursera. I do have a background in the subject – I did a degree in Marine Biology and Zoology at Bangor University – so my aim was to look at the courses from a professional (educational technology) viewpoint while refreshing my knowledge of a subject I love.

Photo credit: Strobilomyces

Although both courses involved the oceans, they focused on different disciplines. Southampton’s course was more of an oceanography course, while the Marine Megafauna course, as the name suggests, used the enigmatic big beasties to draw in and hold the students’ attention. Both courses could be described as xMOOCs, although, as Grainne Conole has pointed out recently, there are much more nuanced ways of describing and classifying MOOCs. Any comparison has to take the platform into account, because the platform isn’t a neutral actor, as we can see in the way video is used on Coursera and assessment is done on Futurelearn.

Who are the students?

The Marine Megafauna course largely replicates a standard model of undergraduate education placed online, and doesn’t seem to assume any existing knowledge, although with a background in the subject I might be missing something. The Southampton course also doesn’t assume existing knowledge, but the approach is different, with a target demographic of what I’ll call the ‘curious amateur’: someone who comes to the subject with curiosity and passion, but who may have little experience of the subject or of recent study. As well as not assuming existing knowledge, Exploring Our Oceans also had material explicitly marked as advanced and optional, so that participants could explore a particular area in more depth.

Video. And more video.

Both courses make frequent use of video. Marine Megafauna, like many of the courses on Coursera, uses video as its primary way of delivering content. There were five to eight videos per week, mostly video lectures with other video clips, simulations, and audio embedded within them. Futurelearn delivers learning materials in a very linear manner, so, for example, in week three there will be items 3.1, 3.2, and so on. Some of these were videos (complete with PDF transcript), but some were text-based where that was more appropriate. And that’s as it should be – video, useful as it is, is not the one medium to ‘rule them all’. In fact, one way that I’ll catch up on a MOOC is to read the video transcript and skip to particular points in the video if I need the graphics to help my understanding. Video needs to be appropriate and offer something that the participant can’t get more easily or faster through different media, and for the majority of the time Exploring Our Oceans did that. Production values were high. We saw staff filmed on the quayside, on ships and in labs, explaining the issues and the science from authentic environments.

Related to this, here’s an example of poor practice with video. I’m enrolled on another Futurelearn MOOC with a single academic as the lead educator. At the start of every video the academic introduces themselves and their academic affiliation as though we’ve never met them before. It’s week five. There are multiple videos each week – it’s not like we’re going to forget who they are between step 5.2 and step 5.5.

What didn’t I like?

I felt Marine Megafauna was a little heavy on taxonomy initially, as we had introductions to each group of animals. Taxonomy is important. For example, the worms that live around hydrothermal vents (which made appearances on both courses) have moved phylum since I did my degree, and major groupings within the gastropods were also revised in 2005 and later. But I would have preferred an introduction to group X (including taxonomy) followed by exploring that group’s ecology, conservation issues and adaptations to life in the ocean in more detail. You could compare with other groups at that point, or have a summary/compare-and-contrast section later in the course, which would serve as a good synthesis of the course so far. As it was, it felt like we were marking time until we got to the interesting parts, and course retention might have suffered at that point. For the Southampton course, the parts I disliked were outside the control of the staff. Futurelearn uses a commenting system at the bottom of the page, similar to that of blogs, rather than the forums found on other platforms. In one way that’s good, in that it keeps the comments within context, but bad in that it prevents participants from starting their own discussions, and searching comments is a non-starter. The other thing I didn’t like about the Southampton course was the assessment, which I’ll come back to later.

What did I like?

In Exploring Our Oceans I liked the range of other activities that we were asked to do. We shared images, planned an expedition, and did a practical. Yes, a real-life, ‘who made that mess in the kitchen?’ practical on water masses and stratification using salt and food dye. In Marine Megafauna, I enjoyed the three peer assessments and the fact that scientific papers were an explicit part of each week’s activities. We would have between one and three PLoS ONE papers each week, and the material within them was assessed through the weekly quizzes. There were supporting materials for those unused to making sense of journal articles. Exploring Our Oceans did use some journal articles when discussing how new species were described and named, but not as an integral part of the course.

Assessment

This was the area in which I found the biggest difference between the two courses, partly, I think, due to the different target participants (‘undergraduate-ish’ versus ‘curious amateur’), but largely due to the restrictions of the platform. Marine Megafauna had weekly quizzes with between 20 and 25 multiple-choice questions, including questions that (unusually for MOOCs) went beyond factual recall. Three attempts were allowed per quiz, with the best result counting, and each quiz contributed 10% to the final course mark. There were also three peer assessments – a Google Earth assignment, a species profile, and a report on a conservation issue for a particular species. The Google Earth assignment was largely quantitative and functioned as peer-marker training for the following two.

Exploring Our Oceans had quizzes of five to six multiple-choice questions, with three attempts per question and a sliding scale of marks (three marks for a correct answer on the first attempt, down to one mark for a correct answer on the last attempt). But this simpler assessment is largely a platform issue. At a recent conference, someone who had authored Futurelearn quizzes gave their opinion on the authoring process, the polite version of which was “nightmare”. I have seen peer assessment used successfully on other Futurelearn courses, so it is possible, but it wasn’t used within this course.
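To make the two marking schemes concrete, here’s a minimal sketch of each rule as described above (my own illustration in Python – not code from either platform, and the function names are mine):

```python
# Sketch only: the two quiz-marking rules as I understand them.

def marine_megafauna_quiz_mark(attempt_scores):
    """Coursera-style: up to three attempts per quiz, and the best
    result counts; each quiz then contributes 10% of the course mark."""
    return max(attempt_scores[:3])

def exploring_our_oceans_question_mark(correct_on_attempt):
    """Futurelearn-style sliding scale: 3 marks if correct on the
    first attempt, 2 on the second, 1 on the third, 0 if never."""
    return {1: 3, 2: 2, 3: 1}.get(correct_on_attempt, 0)

print(marine_megafauna_quiz_mark([0.60, 0.85, 0.80]))  # -> 0.85
print(exploring_our_oceans_question_mark(2))           # -> 2
```

Note the difference in granularity: on Coursera the unit of reward is the whole quiz attempt, while on Futurelearn marks are awarded question by question.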

Personally, I preferred the longer assessment for a number of reasons. Firstly, it tests me and gives me a realistic idea of how I’m doing, rather than a good mark for remembering something from lecture one and guessing the other four questions. Secondly, more questions mean fewer marks per question, so one area of difficulty or confusion doesn’t drag my score down. Thirdly, and regardless of how it contributes to the final course mark, I see assessment as formative, something to help me. I want to be tested. I want to know that I ‘got it’; I also want to know that my result (formative or not) actually means something, and that means rigorous assessments. This may not be the same for everyone, and a more rigorous assessment may discourage participants who see assessment only as summative, leading them to believe that they are ‘failing’ rather than being shown what they need to work on.

Some final thoughts

If I didn’t already know the subject, what would I prefer? I think I’d prefer the approach of Exploring our Oceans but with the assessment of Marine Megafauna, with a clear explanation of why that form of assessment is being used. I really enjoyed both courses so if you’re interested in marine science, then I’d say keep an eye out for their next run.

P.S. Santa? Put one of these on my Christmas list please. Ta.

Nice subject, shame about the platform

One thing that tends to get pushed to the sidelines when talking about MOOCs is the issue of what platform they run on. We talk about pedagogy, assessment, scalability, and grumble about particular features, but I think we don’t always acknowledge how big a role the platform plays. I’m in the middle of the Marine and Antarctic Science MOOC from the University of Tasmania on the Open2Study platform. It’s my first time on Open2Study and, although I’m planning a more in-depth post about platforms, I thought I’d give my initial impressions.

Emperor Penguin

Photo © Samuel Blanc

The courses consist of four modules (weeks), each made up of a number of topics and followed by a weekly assessment – and some of the assessment questions have been less than challenging, shall we say. Each topic has a short video, followed by a single quiz question. That’s right – we’re back on planet video. All the topics are video-based, predominantly talking-head, but with some visualiser work and embedded images or video clips. There is an interactive transcript that means I can jump to a particular point, which is a nice feature, but there are no text-based materials. At the moment, videos can’t be downloaded for offline viewing, so not only are they assuming video is the best pedagogical technique for their content, but they’re also tethering me to an active connection to study it.

In terms of engagement it is also something of a mixed bag. Students get badges for activities such as completing their profile, completing an assessment and so on, so there’s an attempt at gamification. There are forums, of course, but they are badly implemented. The forums are embedded within the page. There’s a search box and a drop-down to view by latest, highest voted, highest rated or most views, but no option to view all. There is an initial post to start things off, and students can create their own, but the facility is woefully underused. The staff member who makes the initial post is not one of the academics, and I’ve not seen a single comment from the academic staff so far in the course.

The last major failing of the platform is that of community. In the course I’m studying there simply isn’t any. The forums are ghost towns. One area of the screen tells me that there are 495 students studying the course and lists four of them whose profiles I can click on to view, but there is no option to see all the students on the course and connect with them. There is a suggested connections section on my profile page, but of the top eight listed in the ‘recommended connections’ not a single one was doing the same course.

So, each course lasts for four weeks and runs every five. It’s all video, with minimal staff involvement beyond one admin person, so a course can simply be run and re-run at relatively low cost. This is the type of online learning that makes my heart sink, where it seems the economics of the model take precedence over the effectiveness of the learning. The academics involved in this course are engaging and have some interesting perspectives on their subject – it’s such a shame their expertise and passion have been let down by such an uninspiring platform.

Plagiarism in MOOCs – whose loss?

I’m enrolled on a few MOOCs at the moment (no surprise there), some for work and some for personal interest. The two for personal interest are the Marine Megafauna course from Duke University on Coursera, and the University of Southampton’s Exploring Our Oceans course on Futurelearn, which has just finished. I’ll compare the two approaches and platforms in another post; what I want to talk about here is the issue of plagiarism that was flagged up in an email to the Marine Megafauna course recently.

The Marine Megafauna course uses peer review on written assignments, with a workflow in which a student submits in the first week, marks five other students’ assignments and self-assesses their own submission the following week, and then receives their marks the week after that. The assignment we had was to write a profile of a species for a general audience. There were a number of sections to the profile and the marking criteria were explicit, so it was relatively easy to get high marks provided you followed the criteria and didn’t pick an obscure species with little published research. I picked the leatherback turtle, partly because its range extends into UK waters, and partly because the largest leatherback ever recorded washed ashore at Harlech in North Wales in 1988.
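As an aside, the ‘mark five others’ allocation is easy to sketch. Here’s a minimal illustration (my own, assuming a simple circular allocation – I don’t know what Coursera actually does) that guarantees nobody reviews their own work and every submission receives exactly five reviews:

```python
import random

def allocate_reviews(student_ids, n_reviews=5, seed=None):
    """Shuffle the students, then have each one review the next
    n_reviews students around the circle. No one can land on
    themselves, and every submission gets exactly n_reviews reviews."""
    order = list(student_ids)
    random.Random(seed).shuffle(order)
    n = len(order)
    return {order[i]: [order[(i + k) % n] for k in range(1, n_reviews + 1)]
            for i in range(n)}

allocation = allocate_reviews(
    ["ann", "bob", "cai", "dee", "eli", "fay", "gus"], seed=42)
```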

While I hadn’t been concerned with whether the assignments I evaluated were plagiarised or not, a forum thread on plagiarism became quite animated and led to the course email. The position stated in the email was that “plagiarism is legally and morally a form of fraud”, but that “we wish to keep student evaluations focused on the substance of the assignment”. The email also states that “students are not required to evaluate the plagiarism status of the assignments they receive”, but then goes on to give advice about when it would be appropriate to award a zero mark if plagiarism is found. Initially, this made me feel uneasy, and I’ve yet to finalise my thoughts on the issue, so what follows is a little ‘thinking out loud’.

First of all, I’m talking specifically about plagiarism in MOOCs, not within higher education in general, where I have more conventional views. I have a number of questions:

  • If plagiarism is fraud, then who is being defrauded here and of what?
  • Is it appropriate to punish for plagiarism in a learning environment where there is no qualification or credential on offer (leaving aside the issue of signature track)?
  • Is it appropriate to punish for plagiarism with little or no training or guidance on what constitutes plagiarism?

The approach on Marine Megafauna mimics the processes of traditional higher education, but I would question whether that’s appropriate. In traditional HE, there is a clear power structure and demarcation of roles. Students cede authority to academics and receive rewards (grades and qualifications) in return for their academic labour. A useful (although imperfect) analogy would be that of employer and employee. The employee conforms to the demands of the employer in expectation of the reward (salary) that they will receive later. In a MOOC that all goes out of the window, because the analogy is closer to that of someone doing voluntary work, and it becomes a lot more difficult (and ethically dubious) for the ‘employer’ to criticise the ‘worker’ for something such as turning up late. Likewise in MOOCs, the student is a free agent studying for reasons other than gaining a formal qualification. In the academic–student scenario there is an implied contract, and breaking the terms of that contract by presenting the work of another as your own carries penalties and punishments. But where is the contract in the MOOC? The only thing I’m receiving is the knowledge and skills I gain from the course, and if I cheat, I only end up cheating myself (assuming I’m not signed up for something like specialisations or signature track). True, there is the honour code and a declaration that the work is the student’s own, but still: if plagiarism is fraud, then who is being defrauded here, and of what? And what of the case where the plagiarism consists of content from Wikipedia, where the content is explicitly licensed for re-use?

There is also the issue that the students had not been given any guidance on what constitutes plagiarism, either as submitting students or as markers – probably, I suspect, because the course team weren’t expecting students to consider it. Student attitudes varied, with some unconcerned (“We’re not supposed to hunt for plagiarism”) while others were using online services to check for plagiarism. In fact, one of the reviewers of my submission gave the final feedback “I’ve checked your text in … and had 90% originality.” But a raw originality score is meaningless without context, and there were some cases where students had very little idea of what was plagiarism and what was not. One student questioned whether their work would show as plagiarised because they’d typed it up in a Word file beforehand. Another explicitly asked if finding a match to a source that gave the size and dimensions of the animal counted as plagiarism. In other words, was quoting the basic biological facts of the animal plagiarism or not? With this level of awareness amongst students, how can it be reasonable to use students to police plagiarism, however informally? And why should students have knowledge of the issue? They’re doing the course for fun or interest, perhaps with little recent experience of educational settings.

The third assignment is still to be marked. Personally, I won’t be checking for plagiarism – as one of the students on the forum said: “That’s not my call”. If a student wants to cheat themselves, that’s their loss. If the student is on signature track (which I won’t know), then they’ve paid a fee to the institution and it’s the institution’s job to check for plagiarism. E-learning is not an offline course put online, and that applies to the culture as well as the learning materials themselves.

What’s special about specialisations?

Coursera has launched its ‘specialisations’ program. These are groups of existing courses in the same subject area with signature track options, followed by a two-week ‘capstone exam’ that reviews and then assesses the course materials. All the courses within a specialisation currently come from a single institution. The specialisation certificate does show the institution’s name, but also mentions that the program is non-credit-bearing. A specialisation can also involve a significant investment of time. The largest is the data science specialisation, consisting of ten courses (including the capstone exam), each around three to five hours’ work a week (assuming the estimates are correct) and running in blocks of three.

So my first question is why? What problem is this initiative attempting to solve? Suppose I enrolled as a student. I do the courses, take the capstone exam and get my certificate. Now what?

Educational accreditations can function as a token, a medium of information exchange. For example, a degree could be thought of as a ‘token’ because institutions, graduates and employers all understand its meaning and intrinsic value. Tokens don’t have to be qualifications. Martin Hall describes how Silicon Valley prefers participation in online programming and developer communities to formal computer science qualifications, and that’s fine. You could argue that someone’s behaviour, code and problem solving in those forums give a better indication of their potential as a developer than a degree transcript. The community engagement functions as an unconventional token, but it’s transparent, because all sides can see what it represents.

Which brings me back to my fictional specialisations certificate. I can’t see what it offers me other than an extra summative assessment and my results on a single certificate. How would an employer know what that represents? They may be able to see a syllabus on a course information page, but they’re unlikely to be able to see any detail of what the course entails or how rigorous the assessment is. True, they can’t do that with a conventional degree either, but they don’t need to, because they have that shared meaning of what the degree – the ‘token’ – represents, from the systems (such as quality assurance) already in place. That’s all missing with MOOCs.

I like the idea of showing potential students a pathway, a program that allows them to develop their knowledge and skills in an area. I’m just not sure I’d be willing to pay for the privilege, especially when there’s little indication that my investment of time and money would hold value for anyone else.

First Impressions of a Climate Change MOOC – Will the Climate Change?

I’ve just finished the second week of the University of Exeter’s climate change course on Futurelearn, and thought I’d post some reflections. First, the pedagogy. I’m still unsure about the linear nature of the materials. The contents links for the week are presented as a single page, and once you’ve clicked on one and marked it as complete you have the choice of ‘previous’ or ‘next’, with no obvious way back to the contents page. The contents page is broken into blocks by headings, and within each heading the blocks tend to follow a video–article–article–discussion pattern, or a close variant of it. Now, in one way that’s good, because the blocks let me plan how to break down the week’s work. On the other hand, it would be really useful if I could jump around the content a little more easily, so that I could revisit an underlying concept, or simply study the material in a different order. On a positive note, I’m impressed with the quality of the videos, both in terms of their educational content and their production values. Complex concepts are explained in simple terms with high-quality, appropriate animations and graphics.

In terms of content, so far we’ve covered the basics of the climate system, some of the feedbacks, and the origins of some of the variability in the climate system. I have noticed some sceptic viewpoints in the discussions, but no outright trolling as such. For example, so far I’ve spotted ‘no climate change since x’, ‘the climate has always changed’, and ‘humans are not changing the climate’, but I and others have then responded with analogies or further evidence. So far, the sceptics seem to be true sceptics, perhaps repeating misinformation they’ve heard from other sources, but open to examining the evidence for themselves, at zero cost except for time. And isn’t that one of the opportunities MOOCs offer – access to education and the opportunity to learn? Personally, I think MOOCs have a place, but I remain doubtful they’ll achieve even a fraction of what the hype has predicted for them. Will the climate change? Well, next week we move on to look at the man-made influence on the climate, so it’ll be interesting to see what happens then and how the course team manages.

New year, new MOOC

New year, and yet another MOOC. I’ve started ‘Sustainability, Society and You’ with Futurelearn. It’s not my first time with Futurelearn – I completed their web science MOOC in December – so it would seem an appropriate time to offer some reflections on the Futurelearn platform. Disclosure: the university I work for has also launched MOOCs on Futurelearn, although I am not involved in those in any way, and these are my thoughts from the student perspective.

Firstly (at least in my limited experience so far), the primary delivery medium is not video. Each week’s topic is split into a number of subtopics, and within each subtopic are the actual content items; some of these are video, and some are text-based. While this arrangement makes the learning process quite linear (each item has a ‘mark as complete’ button and a ‘next’ button), it does make it quite easy to plan how to arrange the various tasks throughout the week.

Secondly, the course team’s estimate of the time required has been much more accurate than on other MOOCs. If it says five hours’ work a week, then I know I can plan on needing around five hours’ work, plus or minus a margin of error. That’s unlike others I’ve done, where the course team’s estimate might have said eight to ten hours, but in reality doubling that estimate put me closer to the real workload. You could argue that perhaps those were courses I wasn’t particularly well prepared for, and yes, you would be right, but I should have been prepared, because according to the course teams I met the prerequisites. The Futurelearn MOOCs I’ve done have set realistic expectations of me before I started, and I think that really helps to keep students engaged and active. It would be interesting to compare the retention of Futurelearn MOOCs with that of other platforms such as Coursera and Udacity.

I do have some criticisms of the platform. The user interface was not entirely intuitive to me when I first started using it. Clicking on my profile picture at the top right pops up a mini-menu giving me access to a list of my courses and the usual account settings and profile editing options. No surprises there. When I’m in the content pages there’s a tab at the centre top, which pops out the main navigation in a header. Again, nothing unexpected. At the top left is the Futurelearn logo. I clicked on this expecting to go back to some sort of home page, as would normally be the case on a blog, for example, but no – I get a pop-out menu with three options: to-do, activity and progress. If I hadn’t wanted to jump back to the home page quickly, I could have gone through a large chunk of the course and never discovered that menu existed. There’s also a feedback tab around half-way down the left-hand side. This allows students to give feedback not only on the content but also on the platform, and by clicking through students can suggest and vote on potential improvements to the platform. Incidentally, not only can you vote, but you can give one, two or three votes to a feature request, and each feature request has a status: under review, planned, started, completed or declined.

The platform is orientated towards promoting discussion and the development of a learning community. Discussion is embedded in each content item, but it doesn’t really take the form of forums. In terms of promoting engagement I think that’s a great idea, but the functionality is closer to the basic commenting system of a blog, and that limits what you can do. You can choose to follow people, but it’s difficult to develop a network at the moment because of how the comments are displayed. There is no option to track comments or replies to comments (on the content page), and a threaded display, perhaps not to the extent of a full-blown forum on each page, would help greatly. You can see who’s following you (and who you’re following) and their comments either from the profile page, or via the ‘activity’ option in the menu that appears when you click on the Futurelearn logo. A quick check of the feature requests shows that the top two, both with a status of planned, are ‘break up discussion forums into smaller groups’ and ‘notification on comment’, so I’m not the only person to have spotted these issues. In some ways, the reduced amount of content compared to many other MOOCs means that you have to engage with the discussions to gain the most from the courses.

Overall, I like Futurelearn. They’ve obviously given a great deal of thought to the learning design of the courses (and I’d expect nothing less from a development that involved the Open University). They are managing the expectations of their students (e.g. the time estimates) and getting them actively involved in a meaningful way with developing the platform further. I’ll be posting more thoughts related to the content of the course as it progresses, using the #flsustain hashtag.

Maths and Mindset

A word-based maths problem

Dr Jenny Koenig from the University of Cambridge was the presenter at one of our regular PedR (pedagogical research group) meetings recently. Now, I actually like maths. One of the first Open University courses I did was ‘MS283 An Introduction to Calculus’, so it was interesting to look at maths from a different perspective. The title of the talk was ‘Teaching and Learning Maths in the Biosciences’, and it dealt with the challenges and issues surrounding quantitative skills in the biosciences, which fell into two main areas. First was content: the mathematical knowledge a student arrives at university with, which varies according to the subjects and level they studied and the grades they achieved. In practice this means a very wide range of knowledge and ability, from a bare pass at GCSE (the qualifications taken at the end of compulsory education, around the age of 16) to a top grade in A-level maths taken immediately before entry to university. The second area was attitude to maths, and the issues of maths phobia and maths anxiety. This led me on to the work of Dr Jo Boaler and her ‘How to Learn Maths’ MOOC. Unfortunately, by the time I became aware of it the course was due to finish, so I downloaded the videos and settled down for some offline viewing. Her book “The Elephant in the Classroom” goes into the ideas in more detail, and is my current reading on the commute home.
Her premise is that the typical teaching of maths is strongly counterproductive and doesn’t equip students to use maths in the way they need to in real life, because it relies on individual work using standardised methods, with little creativity or active problem solving. Also, the (predominantly) UK and US practice of grouping students by ability leads to fixed expectations in both student and teacher. Her solution is a problem-solving approach involving group work, active discussion and explicit demonstration that there are a variety of ways to reach the answer. She draws heavily on the work of Dr Carol Dweck, who distinguishes between fixed and growth mindsets. A fixed mindset is the belief that people possess a fixed amount of a certain trait or talent (like mathematical ability) and that there is little they can do to change it. This manifests itself as the self-fulfilling prophecy that there are those who are good at maths and those who aren’t. A person with a growth mindset believes that development comes through persistence and practice, and that anyone can improve their skill in a particular area. While these mindsets can apply to any area, I’d argue that maths is one of the areas where the fixed mindset is particularly common and openly stated – and not only that, but it’s culturally acceptable to be bad at maths. For example, while it’s not uncommon to hear people say that they’ve never been able to do maths, you’d never see anyone smiling, shrugging their shoulders and saying “Ah, that reading and writing stuff. Never could get the hang of it”. Dweck’s work on mindset really resonates with me, and while I’m largely in the growth mindset there are a few areas where my mindset is more fixed. Now that I’m aware of those, I can take steps to change them.
This concept of mindset links in to my earlier post on behaviour and reward, because in addition to cultural and institutional barriers to innovation we can now add internal barriers. A fixed mindset leads to risk-averse behaviour because self-worth becomes connected to success. Failure doesn’t present a learning opportunity but passes sentence on the person as the failure. The failure or success of the task becomes the embodiment of the worth of the individual.
Growth mindsets, on the other hand, allow ‘failures’ to be positive. A paper by Everingham et al. (2013) describes the introduction of quantitative skills teaching through a new interdisciplinary course, looks at its effectiveness over two years, and describes rescuing it “… from the ashes of disaster!” Evaluation at the end of the first year produced some worrying results: maths anxiety for all students had increased, female students were less confident in the computing areas of the course, and male students were less engaged with the course overall. Significant changes were made to student support and assessment practices, and the second evaluation produced much better results. This is a great example of the growth mindset in action – they tried something and it went wrong. Rather than playing the ‘bail out and blame’ game, they persisted. They redesigned and tried again, and then made their initial failure public through publication. When I worked as an IT trainer, someone asked me how I ran my training room. I replied that I aimed for an atmosphere where people could screw up completely, feel comfortable and relaxed about it, and then get the support to put it right. What works for students works equally well, if permitted :-), for institutions.

References

Everingham, Y., Gyuris, E. and Sexton, J. (2013). Using student feedback to improve student attitudes and mathematical confidence in a first year interdisciplinary quantitative course: from the ashes of disaster! International Journal of Mathematical Education in Science and Technology, 44(6), 877–892. DOI: http://dx.doi.org/10.1080/0020739X.2013.810786