Maths and Mindset

A word-based maths problem


Dr Jenny Koenig from the University of Cambridge was the presenter at one of our regular PedR (pedagogical research group) meetings recently. Now, I actually like maths. One of the first Open University courses I did was ‘MS283 An Introduction to Calculus’, so it was interesting to look at maths from a different perspective. The title was ‘Teaching and Learning Maths in the Biosciences’ and dealt with the challenges and issues surrounding quantitative skills in the biosciences, which fell into two main areas. The first was content: the mathematical knowledge that a student arrived at university with, which varied according to the subjects and level they studied to and the grades they achieved. What this meant in practice was a very wide range in knowledge and ability, from a bare pass at GCSE (the qualifications taken at the end of compulsory education around the age of 16) to a top grade in A-level maths immediately before entry into university. The second area was attitude to maths, and the issues of maths phobia and maths anxiety. This led me on to the work of Dr Jo Boaler and her ‘How to Learn Maths’ MOOC. Unfortunately, by the time I became aware of it the course was due to finish, so I downloaded the videos and settled down for some offline viewing. Her book ‘The Elephant in the Classroom’ is my current reading on the commute home, and goes into the ideas in more detail.
Her premise is that the typical teaching of maths is strongly counterproductive and doesn’t equip students to actually use maths in the way they need to in real life. This is because it relies on individual work using standardised methods with little creativity or active problem solving. Also, the (predominantly) UK and US practice of grouping students by ability leads to fixed expectations in both student and teacher. Her solution is to use a problem-solving approach, involving group work, active discussion and explicit demonstration that there are a variety of ways to reach the answer. She draws heavily on the work of Dr Carol Dweck around the concept of mindset. Dweck distinguishes between fixed mindsets and growth mindsets. A fixed mindset is where a person believes that people possess a fixed amount of a certain trait or talent (like mathematical ability) and that there is little they can do to change it. This manifests itself as the self-fulfilling prophecy that there are those who are good at maths and those who aren’t. A person with a growth mindset believes that development comes through persistence and practice, and that anyone can improve their skill in a particular area. While these mindsets can apply to any area, I’d argue that maths is one of the areas where the fixed mindset is particularly common and openly stated, and not only that, but that it’s culturally acceptable to be bad at maths. For example, while it’s not uncommon to hear people say that they’ve never been able to do maths, you’d never see anyone smiling, shrugging their shoulders and saying “Ah, that reading and writing stuff. Never could get the hang of it”. Dweck’s work on mindset really resonates with me, and while I’m largely in the growth mindset there are a few areas where my mindset is more fixed. Now that I’m aware of those I can take steps to change them.
This concept of mindset links in to my earlier post on behaviour and reward because in addition to cultural and institutional barriers to innovation we now can add internal barriers. A fixed mindset leads to risk-averse behaviour because self-worth becomes connected to success. Failure doesn’t present a learning opportunity but passes sentence on the person as the failure. The failure or success at the task is the embodiment of the worth of the individual.
Growth mindsets, on the other hand, allow ‘failures’ to be positive. A paper by Everingham et al. (2013) describes the introduction of teaching quantitative skills through a new interdisciplinary course, looks at its effectiveness over two years and describes rescuing it “… from the ashes of disaster!” Evaluation at the end of the first year produced some worrying results. Maths anxiety had increased for all students. Female students were less confident in the computing areas of the course and male students were less engaged with the course overall. Significant changes were made to student support and assessment practices for the course, and the second evaluation produced much better results. This is a great example of the growth mindset in action – they tried something and it went wrong. Rather than playing the ‘bail out and blame’ game, they persisted. They redesigned and tried again, and then made their initial failure public through publication. When I worked as an IT trainer someone asked me how I ran my training room. I replied that I aimed for an atmosphere where people could screw up completely, feel comfortable and relaxed about it, and then get the support to put it right. What works for students works equally well, if permitted :-), for institutions.

References

Everingham, Y., Gyuris, E. and Sexton, J. (2013). Using student feedback to improve student attitudes and mathematical confidence in a first year interdisciplinary quantitative course: from the ashes of disaster! International Journal of Mathematical Education in Science and Technology, 44(6), 877–892. DOI: http://dx.doi.org/10.1080/0020739X.2013.810786


Carrots and sticks – not good enough even for donkeys?

In my last post I looked at student feedback and talked about institutional inertia in implementing new practice. Over the last couple of days I’ve come across blog posts that have led me to consider how institutions (in their widest sense) actively work against the improvement of teaching and the educational experience.

One post that came through my RSS feeds was ‘25 ways to cultivate intrinsic motivation’. While an excellent article in itself, it contained a video of the talk Daniel Pink gave to the RSA, and that’s what provided the seed for this blog post. I’d seen this video before but it was a while ago and I’d forgotten the details. Daniel talked about what motivates and drives human beings and some of the research that has been done. He described research where people were offered monetary rewards for various tasks and their performance was measured. The reward system worked as expected (higher pay produced better performance) provided that the task only involved mechanical or rote skills. Once the task needed any sort of thinking or cognitive input, a larger reward actually led to poorer performance. As Daniel states: “When a task gets more complicated, when it requires some conceptual creative thinking those types of motivator demonstrably don’t work.” He then goes on to discuss how, for those types of task, a combination of autonomy (self-direction), mastery (the desire to get better at something), and having a sense of higher purpose produces performance increases. Money is only relevant (in cognitive tasks) if people are paid sufficiently that they’re thinking more about the task and less about the reward. I’d argue that these three traits are a pretty good description of what drives the best teachers.

So how does this link to teaching? My daughter has recently passed her teaching qualification, the PGCE (Postgraduate Certificate in Education) here in the UK, and has just started her first full year of teaching. The UK government, through the Department for Education, has introduced new pay policies for teachers. The press release states that “evidence shows that improving the quality of teaching is essential to raising standards in schools.” No argument there, but I have grave doubts that any aspect of what’s been announced would actually improve ‘the quality of teaching’ within schools as a whole. There are three main elements listed in the press release for the new national pay framework. First, pay increases based on length of service are stopped. I’d argue that rather than rewarding length of service these increases recognised increased experience, in much the same way that a person with a number of years of experience could expect to start a job on a higher salary than someone without. Second, all pay progression is linked to performance based on annual appraisals. I don’t have an issue with performance monitoring or annual appraisals, provided that the process is transparent, fair, and not used as a tool to divide staff. Unfortunately, I’ve had personal experience where that was not the case. Third, the new proposals scrap mandatory pay points, meaning that the pay scales remain for reference only “to guide career expectations”.

The press release then goes on to say: “It is up to each school to decide how to implement new pay arrangement for performance-related pay”, but there’s no mention of any extra funding to meet the additional salary costs (and if extra funds were available you can be sure they’d be shouting it from the rooftops). This means that funding the performance-related pay will have to come from elsewhere in the school budget. Schools are expected to do more with less, and the blame for any failure goes to those left to implement the policy (i.e. the school management) rather than those who set up an unworkable system in the first place.

Performance is assessed against the teachers’ standards framework and “if they meet all their objectives they might receive a pay rise” (my emphasis). So what happens if a majority of the teachers in a school meet (or exceed) their objectives? Do they all receive an increase, and if so, where does the money come from within a fixed budget? An analogy here is criterion- and norm-referenced assessment. In criterion-referenced assessment the entire class could theoretically get the top grade, provided their work met the standards that identify the top grade. In norm-referenced assessment only a certain percentage get the top grade, because what matters is not the work they produce but how that work compares to their cohort’s. It’s the same for teachers under these policies – there is no link between their performance and the reward they receive because there is no additional funding available. Even if financial regulations allow the headteacher some flexibility, the largest budget item in educational institutions by a big margin is staff costs. At one institution where I worked, staff costs accounted for around 70% of the total annual budget. A better approach would have been a chunk of money available to fund improved teacher performance, in a similar way to the pupil premium, where schools are given additional funds to “support their disadvantaged pupils and close the attainment gap between them and their peers.” They could even call it the ‘teacher premium’.
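The criterion/norm distinction can be made concrete with a small sketch. To be clear, the scores, the 70% cut-off and the 25% quota below are all invented for illustration; the point is only the structural difference between the two rules.

```python
# Hypothetical illustration of criterion- vs norm-referenced grading.
# The cohort scores, the 70-mark criterion and the 25% quota are invented.

def criterion_top_grades(scores, cutoff=70):
    """Criterion-referenced: everyone who meets the standard gets the top grade."""
    return [s for s in scores if s >= cutoff]

def norm_top_grades(scores, quota=0.25):
    """Norm-referenced: only a fixed fraction of the cohort gets the top grade,
    regardless of the absolute standard everyone reached."""
    n_top = max(1, round(len(scores) * quota))
    return sorted(scores, reverse=True)[:n_top]

cohort = [72, 85, 71, 90, 74, 78, 69, 88]

print(criterion_top_grades(cohort))  # seven of the eight clear the bar
print(norm_top_grades(cohort))       # only two can, however well all did
```

Under a fixed budget, performance-related pay behaves like the second function: the number of rewards is capped in advance, so meeting the standard guarantees nothing.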

Looking at the politics of this, and with an eye to the creeping agenda of privatisation within all sectors of education in the UK, I see this more as an attack on collective pay agreements and a tool for school management to reduce staff costs. Over time, the salary you would get as a teacher would become a lottery. How can you even call this a national pay framework if teachers doing the same job to the same standard with the same amount of experience could end up being paid different salaries within the same school? And what would that do to the collegiate, collaborative environment that enables educational institutions to increase their achievement through the synergy of their staff?

So, the government has introduced a ‘performance-related’ pay scheme that isn’t related to performance in any systematic way, is likely to reduce institutional effectiveness by setting up staff to compete against each other for limited resources, and actually contradicts the economic and psychological research that shows us that monetary reward as a motivator for creative and complex cognitive tasks doesn’t work.

What does work, as we saw earlier, is autonomy, mastery, and purpose. Lack of autonomy in teaching in the UK is a frequent complaint. Mastery (getting better over time) is possible but, as I’ve just explored, doesn’t necessarily result in any extrinsic reward. It seems the Department for Education is relying on teachers’ sense of purpose, abdicating its responsibility to reward and motivate them through effective and evidence-based policy. In effect, it’s using the old “it’s a vocation” excuse and hoping everything else will magically fall into place.

By coincidence and in contrast, I’ve recently started following a blog where an American teacher is writing about his experiences of teaching within the Finnish system. Finland is often held up as an example of excellence in teaching (including by the UK government), but the Finnish system is very different to the UK one. Pasi Sahlberg, the author of Finnish Lessons: What Can the World Learn from Educational Change in Finland?, put forward some interesting views when interviewed in The Atlantic. In the UK, ever more command-and-control management (and student testing) is put forward as the answer to teacher accountability. Sahlberg says “Accountability is something that is left when responsibility has been subtracted.” In other words, accountability becomes more necessary (and more complicated to administer and measure) once you start to remove autonomy. At a school reunion two years ago, one of my former teachers said that they were glad to have retired because the current system meant they “weren’t allowed to teach any more”.

Teachers and administrators in Finland are “given prestige, decent pay, and a lot of responsibility”. Teacher training institutions are highly selective, with a master’s degree the minimum qualification. There is also a designed lack of competition within the Finnish educational system, discussed in the same Atlantic article. Contrast this with the Education Secretary’s recent dismissal of those within education who disagreed with his curriculum reforms as ‘marxists’ and ‘the enemies of promise’.

Here’s an idea: if we really want to improve the quality of education by using performance-related pay, how about a teaching version of group assessment, tying the reward to the performance of a group on a criterion-referenced basis, i.e. if the group meets the criteria, the group gets the reward. The group could be those that teach a particular year, a department, or even the entire school. This would reduce the negative effects of competition because the groups are no longer in conflict over a limited resource. It’s similar to profit-sharing schemes within business, which should reassure those sections of the political spectrum who see any system where individuals are not in direct cut-throat competition with each other as fundamentally wrong. Of course, it would require the government to actually fund it rather than just trot out soundbites during a photo-opportunity at a school.

To come full circle back to my starting point, institutional inertia can be a significant block to educational innovation and improvement, but it’s even worse when the systems imposed on us seem designed to actively impede us. In politics, we might hear the phrase ‘evidence-based policy’. Unfortunately, this appears to be evidence-free policy.

The Culture of Student Feedback

Feedback is a big issue in education – how much students get, and how (or if) they use it. Feedback features in the annual National Student Survey, and no matter how good the institutional results, it remains one area that is frequently marked down.
David Boud and Elizabeth Molloy published a recent paper on rethinking models of feedback that I found quite interesting. First, I’ll explain a little about the paper itself and then give my thoughts on it, particularly as it relates to institutional culture.
Boud and Molloy distinguish between two models of feedback: the traditional model of feedback, which they call feedback mark one, and their model, which they call feedback mark two. They talk about traditional models by comparing them to the process in biological and engineering systems, i.e. that ‘information’ acts on a system in order to change the output. In educational terms, this is feedback as ‘telling’, feedback as a behaviourist pedagogy, and assumes that not only is the information given sufficient to actually produce a change, but that it’s unambiguous and that students will actually make use of it.
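The engineering comparison Boud and Molloy invoke can be sketched as a minimal control loop (all numbers here are invented for illustration): the ‘feedback’ is just an error signal acting on a system to push its output towards a target, with no interpretation required of the system at all.

```python
# A minimal sketch of the cybernetic feedback loop used as the comparison
# for 'feedback mark one': information about the gap between output and
# target acts on the system. Gain and step count are invented values.

def run_loop(output, target, gain=0.5, steps=10):
    """Repeatedly apply a correction proportional to the error."""
    for _ in range(steps):
        error = target - output      # the 'information' fed back
        output += gain * error       # the system adjusts; no interpretation
    return output

print(round(run_loop(output=0.0, target=100.0), 2))  # converges towards 100
```

The critique of feedback mark one is precisely that students are not thermostats: this loop only ‘works’ when the information is unambiguous and is automatically acted upon, neither of which holds for written comments on an assignment.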
Feedback mark two is more of a developmental process and less of an add-on extra to an assessment. The aim is to transition students into ‘agents of their own change’ so that they seek out information for improvement themselves rather than merely respond to it. Students become aware of what quality performance and feedback look like through dialogue, and through this process they develop the capacity to monitor and evaluate their own learning. Assessment tasks are designed so that students are engaged over a period of time, so the generation, processing and use of feedback happens over a number of cycles. In other words, it makes the receipt of feedback, reflection on it, and the development of the skills to do that, explicit.
Essentially, three elements work together: learners (and what they bring to learning), the curriculum (and what that promotes), and what Boud and Molloy call the ‘learning milieu’, which is the interplay of staff, students and the learning environment. Feedback shifts from being an act of teachers to an act of students (with teacher support), from a process involving a single source to multiple sources (with a corresponding shift from an individual to a collaborative act), and from an isolated event to a designed sequence of events.

My thoughts

Feedback mark one isn’t sustainable. First, it absorbs a huge amount of time and resources, but its effectiveness in actually influencing student behaviour and improving student outcomes is questionable, since students may simply look at the mark and ignore the carefully considered comments. Second, if there is a dual emphasis on student improvement and on improving NSS scores, then there is the problem that students often don’t recognise the feedback they receive as actually being feedback. Third, mark one feedback isn’t fit for purpose, since it doesn’t equip students for life post-graduation.
Feedback mark two is, to my mind, a desirable development. It’s better pedagogically because students actively use the feedback. It’s better for the students because they are developing (and using) skills that will help them in lifelong learning and in employment after graduation. It’s better for the institution because it reduces the workload of academics (allowing more time for research, professional development, or improving their teaching in other ways), and because it improves the quality and performance of their graduates. The NSS is actually a barrier to this because rather than acting as a measure of quality of the learning experience its focus is that of a customer satisfaction survey. Graham Gibbs has pointed out that there are much better ways of examining higher education from the perspective of quality in the excellent Dimensions of Quality, published by the Higher Education Academy.
But all those benefits will remain unrealised unless the process can be implemented, and the process represents a fundamental shift in practice for many academics in many institutions, which, of course, is the main obstacle. For the implementation to succeed it needs to reach a threshold. Implementing feedback mark two in one or two modules will probably fail to appreciably improve student achievement because it will be seen as a ‘one-off’, something new and novel, and will probably hit resistance (from staff and students) because of that. It’s a little like immunisation in a population – you need a certain proportion immunised in order for everyone to benefit. Once implementation gets over that minimum threshold it simply becomes the ‘way we do things here’, and the skills and benefits learned from the process in one module can be transferred to other modules. The problem is therefore one of institutional inertia. If the change required to get that level of benefit is so large, how are we ever going to achieve it?
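The immunisation analogy can be pushed one step further with the standard herd-immunity arithmetic, purely as an illustration of the threshold idea: the ‘spread factor’ R for a teaching practice is of course a fiction, not something anyone has measured, but if each adopting module would on its own influence R others, the practice only sustains itself once a fraction 1 − 1/R of modules have adopted it.

```python
# Illustrative only: borrowing the herd-immunity threshold 1 - 1/R to make
# the 'critical mass of modules' analogy concrete. R is a made-up spread
# factor for a practice, not a measured quantity.

def adoption_threshold(r):
    """Fraction of modules that must adopt before the practice sustains itself."""
    if r <= 1:
        return 0.0  # below this, the practice never takes hold on its own
    return 1 - 1 / r

for r in (1.5, 2, 4):
    print(f"R = {r}: threshold = {adoption_threshold(r):.0%}")
```

The shape of the curve matches the intuition in the text: the more ‘infectious’ a practice is (strong results, enthusiastic advocates), the smaller the initial critical mass needed before it becomes ‘the way we do things here’.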

Troops as teachers?

This post will be a little different from my usual posts in that it’s not about MOOCs, it’s UK-focussed, and also more political than usual.

The UK Government recently announced a new programme to fast track former military personnel into the teaching profession. I think it’s a bad proposal for a number of reasons. Before I go into those reasons I’ll give a little background. The scheme is aimed at those without degrees who have recently left or will shortly leave the military and they’re especially interested in those with ‘advanced technical skills’. When I was in the Army I worked repairing and maintaining air defence missile systems as a member of REME and I left without any higher education qualifications so, if this scheme had been in place then, I would have been precisely the type of person it would have been targeted towards. I went to university after leaving the forces and eventually gained a masters degree and a post-graduate teaching qualification. I’ve taught in further education, higher education and the commercial sector. My step-daughter is about to complete her PGCE and gain qualified teacher status (QTS), so the combination gives me a particular perspective on the proposal.

First of all, as a former soldier myself, I see no reason why former service personnel should not make excellent teachers. My technical training in the army was in Arborfield at what is now the School of Electronic and Aeronautical Engineering (SEAE). We were taught by senior NCOs, and the basic electronics portion of my training covered in nine months a BTEC syllabus that would have taken two years at a civilian college. The teaching was well resourced and consistently excellent throughout my time there. The military know how to train well, but training doesn’t automatically translate into education.

One problem I have with the proposal is ‘what problem is this the solution for?’ There is an issue around teacher numbers, with a projected shortfall of 15,000 by the next election in 2015, but I don’t see that as the major thrust here. The publicity surrounding the announcement talks about bringing a military ethos into schools. Education minister David Laws talks about bringing “leadership, discipline, motivation and teamwork” to the classroom. Defence Secretary Philip Hammond talks about “instil[ling] respect discipline and pride in the next generation”.

Is the lack of discipline in schools a real problem? Possibly, in limited cases, but how is that to be solved by troops becoming teachers, since they will have exactly the same disciplinary powers and discretion as those teachers already in post? Military discipline, in my experience, is not the same as that presented in the media or cultural stereotypes. The sergeant screaming into your face from six inches away is not a common occurrence in day-to-day military service. The cliché is not the reality. Military discipline, despite orders having the force of law (through the Armed Forces Act 2006, previously the Army Act 1955), is still a form of rule by consent through a strict hierarchy, and that requires all parties to acknowledge and accept the hierarchy. Military discipline functions because the answer to “jump” had better be “how high?” or else – and would that really be the case in schools? As Christine Blower, General Secretary of the National Union of Teachers, pointed out: military discipline does not equal the management of behaviour in the classroom.

Is the problem being solved that of redundant service personnel? Possibly. Perhaps someone in government looked at the shortfall in teacher numbers and the numbers being made redundant and decided to join the dots. I’m all for giving help to those leaving the forces, especially those who are leaving against their wishes, but I don’t think that this scheme does that. It looks at two problems and solves neither satisfactorily. The GI Bill in the USA showed a much more enlightened (socially and economically) approach to returning service personnel.

This particular government does not have a good track record with regard to the teaching profession, receiving four votes of no confidence from unions, including the union representing the head teachers. ‘Reforms’ have come thick and fast. Schools have been removed from local democratic scrutiny through an escalation of the academies programme and the introduction of free schools, and are now managed by central government. The justification narrative for all this change seems to be that schools (and especially teachers) are failing our children. Unfortunately, international league tables either show the UK performing well, sixth in one study, or are criticised as statistically flawed. The Times Educational Supplement reported: “The UK Statistics Authority has censured the Department for Education and Sir Michael Wilshaw – appointed by Mr Gove as Ofsted chief inspector – for using uncertain, weak and ‘problematic’ statistics to claim that England’s schools have tumbled down the global rankings.”
Also, if teachers are really the problem, then why allow academies to employ ‘teachers’ without requiring them to actually be qualified as teachers? Accompanying this is the dismissal of educational academics as a Marxist ‘blob’ and the “enemies of promise”. The emphasis seems to be on de-skilling the practice of teaching and reducing the status of the profession. Since social mobility is low in the UK compared to other developed countries and hasn’t improved in thirty years (according to the London School of Economics), isn’t it more likely that the “enemies of promise” are those that take away the programmes designed to increase social mobility, such as the Educational Maintenance Allowance and Aim Higher? Oh wait, that would be the Department for Education.

Troops could make good teachers but not because of their previous profession. They would make good teachers because that particular person is suited to the demands of teaching. This scheme, this idea, seems to be the latest in a long line of bad ideas. And that’s bad for all of us.

Learning Outcomes – What have they ever done for us?

We have a monthly pedagogical research group meeting, and the topic for December was learning outcomes (LOs). We based our discussion on three documents, and this was followed by a presentation by Dr Bob Norman on the research he’s doing into learning outcomes. The three documents were:

  1. A journal article by Trevor Hussey and Patrick Smith
  2. An article in the Times Higher Education Supplement by Professor Frank Furedi, and
  3. A guide to writing and using good learning outcomes by David Baume (Leeds Metropolitan University).

The Hussey and Smith article divides learning outcomes into three broad categories: session outcomes, module outcomes and outcomes for a degree programme as a whole, and argues that session outcomes are the most pedagogically useful while programme outcomes actually represent a misuse of the term learning outcome. A common criticism of learning outcomes is their misuse (potential or real) as auditing or management tools. Hussey and Smith state that learning outcomes have been hijacked by managers as an auditing tool. But hijacking can only occur if learning outcomes had a pedagogical purpose in the first place. So what purpose do learning outcomes serve? Their purposes fall into two categories – learning outcomes as audit tools and learning outcomes as teaching tools. I’d argue that these are two fundamentally different things rather than two categories of a single concept.

So, what purpose do they serve? For example, do learning outcomes drive student behaviour? Surveys of student attitudes to learning outcomes suggest not, finding that most use of LOs was at the summative assessment stage (i.e. as a revision aid) rather than using them to guide their studying throughout the module, and certainly not in any sense as an aid to reflective practice and their development as independent learners.

Alternatively, are they simply statements of curriculum or what the tutor is going to cover? This is obviously useful to the student while studying the module and also, if they are made more publicly available, when choosing a degree, but that brings us back to the role of LOs as being something other than a pedagogic tool. And do a series of learning outcomes really describe the richness of the educational experience we’re trying to give our students?

Early on in the discussion someone asked a pertinent question: what evidence is there that learning outcomes have any impact on educational outcomes? In other words: do they actually work? The question was followed by silence, since none of us knew of any research supporting the use of learning outcomes. In one way, that’s not surprising, since here in the UK the push for learning outcomes didn’t arise from any pedagogical need but was imposed as part of a quality assurance agenda, and consequently resistance to their use (as shown in the Furedi article) becomes part of the wider reluctance to accept the creeping managerialism of the higher education sector. And as Hussey points out in a comment replying to the Furedi article:

“First, few people dispute the need for a teacher to tell his/her students roughly what to expect from a teaching session, or what, broadly, will be the contents of a course of study. Secondly, doing this is not the same thing as stipulating supposedly precise learning outcomes, written to a strict formula.”

In some ways, this is similar to debates and policy around learning styles. For example, saying that people learn differently at different times or in different contexts isn’t all that contentious, but a simple, imprecise rule of thumb like this then gets extended to the stage where secondary school teachers place cards on little Freddy’s desk labelling him a kinaesthetic learner and are expected to tailor their teaching to that particular style. My standard response to that is ‘Great, now let’s see you teach them to code kinaesthetically – Yay! Dance that subroutine!’ The original (and useful) general awareness of personal differences has been subverted into an inflexible process that is actually counterproductive. In a similar way, the general usefulness of letting students know where they’re going has been subverted into a series of audit checkpoints better suited to accountability to management than usefulness to the students.

An important point about learning outcomes written beforehand is that they are intended learning outcomes. Another category, often neglected in discussions of learning outcomes from an audit perspective, is emergent outcomes: those outcomes that arise from the interaction of student and tutor and that (by definition) can’t be planned in advance. McAlpine et al. (1999) talk about a ‘corridor of tolerance’, where the corridor represents the level of diversion and divergence from the planned outcomes that is acceptable in a given context. Hussey and Smith (2008) argue that these should be teacher decisions, and I strongly agree. Isn’t that what teaching is all about? – the interaction that leads to the ‘aha’ moment, or the diversion that leads to an intense interest in something only tangentially related to the teaching session (and something that’s probably not going to be ‘on the test’). My first degree was in Marine Biology and Zoology. I forget what the trigger was, but I remember being fascinated by wolf ecology, and I’d check the journals room for the latest issues that were likely to have relevant research in them (this was in the days before electronic journals and abstract databases were widely available). I was like a child waiting for next week’s comic, with the result that wolves continue to fascinate me to this day.

Furedi talks about learning outcomes devaluing the art of teaching because in their most restrictive form they constrict what can be taught. Hussey and Smith, in an earlier paper (2002), make the point that all learning outcomes are subjective, and that trying to make them objective by adding more detail only makes them subjective at a different level. The more we try to make them objective the more restrictive they become, and the pedagogy undergoes a selective pressure towards didactic teaching rather than constructivist practices (but of course, the more objective they are the better they suit their role as auditing and management tools). Refreshingly, the Baume document, although written as an institutional guide to writing learning outcomes, explicitly acknowledges (p7):

“… learning isn’t the tidy process that the use of learning outcomes may sometimes lead us to think. Unexpected, serendipitous, learning happens. Such learning is also worth making space for, recording and valuing.”

Since I was expecting this document to approach learning outcomes from the management perspective (because it’s an institutional document) this was a welcome nod to teaching as the messy and creative process it often is.

Learning outcomes don’t have to be prescriptive though. Gabriel Egan, in the comments to the Furedi article, responds to Furedi’s arguments against learning outcomes and shows how learning outcomes can be written in such a way that they encompass the organic messiness of real teaching:

“1) Demonstrate that you have acquired knowledge of, and can articulate fluently (in forms as yet undecided), aspects of the topic that cannot be predicted beforehand and that are as yet just as unknown to us, the tutors, as they are to you the student.

2) Describe and explain your engagement with the uncertainty of academic exploration of this topic and show your skill in articulating the indeterminacy of the pedagogic outcomes that arise from the topic’s inherent complexity and the subtlety of the tutor-student relationship.”

Now these are my kind of learning outcomes, but whether these learning outcomes would make it past institutional validation processes is another matter 🙂

Another question is how do learning outcomes link to assessment? Constructive alignment (Biggs, 1999) matches the teaching and learning activities to the assessment to be used, but Hussey and Smith point out that at the session level the outcomes may be too small, too granular to be assessed directly, and that they may need to build upon each other and be subject to student practice before reaching a point where they can actually form something to be assessed. This shouldn’t be surprising. If we’re aiming to develop and then assess the higher order skills then it’s unlikely a tick-box approach to the outcomes of a session would be appropriate.

Constructive alignment brings us on to a function of learning outcomes that I haven’t really seen acknowledged much, that of outcomes as a design tool. The choice of outcome has a direct impact on the learning and teaching activities and subsequent assessment. For example ‘… will demonstrate the characteristics of …’ implies a very different course to ‘… can recall the principles of …’. That’s one way I’ve approached learning design in the past – matching learning outcomes to a grid of teaching activities/technologies and a grid of assessment techniques. I wouldn’t argue it’s the best approach to learning design, but it’s a quick and dirty way to get something up and running with a fair likelihood of being effective pedagogically, and it helps us to think a little more reflectively about our teaching practice. The downside to outcomes as a design tool is that we operate within institutional constraints that we may have little or no influence over e.g. we have X contact hours in room Y.
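To make the grid-matching idea above concrete, here is a minimal sketch of how outcome verbs might be mapped to candidate activities and assessments. All of the verbs, activities, and assessment techniques below are purely illustrative placeholders, not taken from any actual course design.

```python
# Illustrative sketch: match the verb in a learning outcome against a grid of
# teaching activities and a grid of assessment techniques (all entries hypothetical).
activity_grid = {
    "recall": ["lecture", "flashcard quiz"],
    "demonstrate": ["lab practical", "group project"],
    "evaluate": ["seminar debate", "case study"],
}
assessment_grid = {
    "recall": ["multiple-choice test"],
    "demonstrate": ["observed practical", "portfolio"],
    "evaluate": ["essay", "peer review"],
}

def design_options(outcome):
    """Return candidate activities and assessments for an outcome string."""
    for verb in activity_grid:
        if verb in outcome.lower():
            return {"activities": activity_grid[verb],
                    "assessments": assessment_grid[verb]}
    return {"activities": [], "assessments": []}
```

So, for example, an outcome containing ‘demonstrate’ would pull up practical, project-based activities and observed assessment, while one containing ‘recall’ would suggest a very different course – which is exactly the design pressure the outcome wording exerts.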

So where do I stand on this? Learning outcomes are useful as a pedagogical tool rather than an auditing tool. They work best at the session and module level, and as a rough and ready design tool, but only if they reflect the messy, creative and sometimes chaotic process that good teaching can be. The endpoint can be (flexibly) planned, but the journey should be a voyage of discovery in good company, not guided by a robotic sat nav that leads us over a cliff.

References

Biggs, J. (1999). Teaching for Quality Learning at University. Buckingham: Society for Research into Higher Education and Open University Press.

Hussey, T. and Smith, P. (2002). The trouble with learning outcomes. Active Learning in Higher Education, 3(3), 220–223.

Hussey, T. and Smith, P. (2008). Learning Outcomes: a Conceptual Analysis. Teaching in Higher Education, 13(1), 107–115.

McAlpine, L., Weston, C., Beauchamp, C., Wiseman, J. and Beauchamp, J. (1999). Building a metacognitive model of reflection. Higher Education, 37, 105–131.

MoocMooc – Looking Back

I’m taking part in MoocMooc, a mooc about moocs. It’s one of the shorter MOOCs at one week long and our final day’s task is to reflect on our experiences over the week. So what are my thoughts after a week of this ‘meta-mooc’?

Firstly, on the nature of MOOCs themselves: their strength (whatever type of MOOC we’re talking about) is that they offer opportunities for education to people who would not otherwise have access to them. They do, however, have a number of issues. For example, the definition of ‘open’ that they employ varies across the different types. Some MOOCs have proprietary content hosted within a proprietary platform, so that ‘open’ refers only to being able to access the content. Other MOOCs, such as David Wiley’s Introduction to Openness in Education, are open in the fullest sense – open access to open content on an open source platform. Their definition of ‘course’ is just as loose. In xMOOCs, such as those on Udacity, a course closely resembles traditional education, while in the connectivist MOOCs the course is whatever path the learner chooses to take, and success or failure largely depends on whether the participant learned what they needed from participation in a network of peers.

Secondly, assessment and credentialling are also issues, related to the concept of what it means to succeed or fail in a course. The massive element only causes issues when the designer of the MOOC tries to scale traditional practices of teaching or assessment to the MOOC, e.g. assessment via a submitted piece of writing. For this reason, I don’t think that xMOOCs will replace the traditional university experience, because they are trying to replicate the existing experience online and at a larger scale without having solved the problems and contradictions that approach brings. For example, if one group of participants is paying for credentialling, then is the MOOC really open, or is it just a distance education course with guest access? And if the course offers credentialling, then how can assessment be done at scale and with validity?

The more connectivist MOOCs could have a more disruptive effect on the traditional university experience because they are trying to do something different, and because they’re trying to do something different the methods of assessment need to be different too. Indeed, connectivist MOOCs in their purest form mean that we will need to examine not only what assessment is, but what the purposes of assessment are.

Have I enjoyed this week? Absolutely, despite not being as ‘connectivist’ as I would have liked. My previous experience with MOOCs had been with Coursera and Udacity and I was interested in experiencing a different approach, as well as starting to get my head around the various facets of MOOCs.

MOOCs and participant pedagogy

I’m taking part in MoocMooc, a mooc about moocs. Each day we’re given some articles and questions as a starting point and today the theme is participant pedagogy and the questions are:

  1. How does the rise of hybrid pedagogy, open education, and massive open online courses change the relationships between teachers, students and the technologies they share?
  2. What would happen if we extracted the teacher entirely from the classroom? Should we?
  3. What is the role of collaboration among peers and between teachers and students? What forms might that collaboration take? What role do institutions play?

I’m going to look at the first two.

For question one, it depends on what flavour of MOOC we are talking about. xMOOCs (such as Udacity and Coursera) change the relationship between student and teacher by making it more remote, both in a physical sense and in the sense of teaching presence, because they present even fewer opportunities for students to interact directly with the tutor. This is compensated to some extent by increased opportunities for students to interact with each other through forums. Paradoxically, from the student’s point of view it can appear to increase the teaching presence, because the videos and presentations are informal and made to feel more personal, more like the interaction of a face-to-face tutorial.

cMOOCs (such as ds106) fundamentally change the relationship between teachers and learners because the emergent skills and knowledge are constructed by active participation in a network where the participants are both learners and teachers.

For question two, I think teacher-less environments would not work, because they would put too much responsibility on the learner to be an effective independent learner from the start, neglecting the fact that these skills are learned behaviours, not innate. However, teacher-less does not necessarily mean directionless. Direction could be given by using technology to guide students and offer them appropriate opportunities to navigate adaptively through the materials. For example, answer X wrongly and the platform might suggest you revise A before you look at Z. I have two issues with this: firstly, it takes away the development of independent learning skills, since the software has a strong influence over what is learned and when. Secondly, it presumes that there is a body of knowledge to be learned, i.e. the focus returns to the mastery of content, not the development of skills and abilities. It becomes training rather than education. In essence, it makes all the demands of a student that a cMOOC does, but without any of the benefits.
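The ‘revise A before you look at Z’ style of platform guidance can be sketched as a simple prerequisite rule. The topic names and the prerequisite map below are entirely hypothetical, just to show the shape of the logic.

```python
# Hypothetical sketch of rule-based adaptive navigation: each topic lists its
# prerequisites, and a wrong answer sends the learner back to revise them.
prerequisites = {
    "Z": ["A", "B"],  # to attempt topic Z, topics A and B should be secure
    "B": ["A"],
    "A": [],
}

def next_step(topic, answered_correctly, mastered):
    """Suggest what to study next after an assessment question on `topic`."""
    if answered_correctly:
        mastered.add(topic)
        return f"Proceed beyond {topic}"
    # Wrong answer: revise the first prerequisite not yet mastered, if any.
    weak = [p for p in prerequisites.get(topic, []) if p not in mastered]
    return f"Revise {weak[0]} before retrying {topic}" if weak else f"Retry {topic}"
```

A learner who has mastered A but gets Z wrong would be routed back to B first – which is precisely my first objection above: the software, not the learner, decides what is learned and when.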

Participant pedagogy highlights the fundamental tension between xMOOCs and cMOOCs. In the former, the hierarchy is narrow and tall with the teacher at the top. In the latter, the hierarchy is wider and flatter with the ‘pinnacle’ occupied by different people at different times.