Saturday 30 July 2011

Assessment in ESP

A few weeks ago, I was asked an interesting question by an ESP teacher called Rosa in Algeria. The question related to an ESP course Rosa was running as part of her research for her doctorate. She'd conducted a careful needs analysis and designed her course around those needs. But then the course had stopped unexpectedly before its scheduled end, which meant that there was no way of assessing whether the course aims had been achieved. The question was: is the research still valid?

Well, let me start off by admitting that I’ve never been involved in the academic side of ESP. My experience of ESP courses has always been either in-company (where the employer pays) or with mixed groups of professionals and pre-experience students at the British Council (where the students themselves or, occasionally, their parents, pay).

The reason I bring up the grubby subject of who pays here is that it has a big impact on assessment. The person paying for the course has a large say in what the aims of the course should be, and therefore what constitutes a successful course. Whenever I ran an in-company ESP course, I (or my colleagues in sales) had to justify it to the customer in that company. In other words, in my experience, assessment is a service for the employer (usually represented by the HR department) and/or for the student (and his/her parents). The student is also paying in another way – in terms of time invested in attending the course and self-study.

Now, of course, when we get to the public university sector, we have a different customer: the taxpayer. Is the taxpayer getting good value for money out of the education they are paying for? It’s an interesting question, and I sometimes wonder if teachers in the public sector realise they’re doing a service for me as a taxpayer. Of course, individual taxpayers aren’t in a position to check the effectiveness of the courses they pay for, so responsibility is delegated to the government, the universities and ultimately the English teachers themselves, who are expected to provide evidence that they’re actually teaching and that the students are actually studying and even … learning something useful.

OK, so let’s look at Rosa's question first, and then we’ll step back to look at some broader issues. First of all, the problem of courses stopping before they’ve finished is, unfortunately, very common. If assessment is an important part of your course, don’t leave it all to the end. You can actually get a lot of assessment done during the course – not just in formal tests, but also by assessing role-plays, listening activities, written work, etc. You can conduct assessment as part of your regular teaching, to check how much your students have learnt during the lesson, but it’s perhaps more useful to assess in a later lesson, to check how much they remember. For example, if one of your course aims is to teach your nursing students to conduct a patient admission, you could teach and practise it in a role-play in one lesson and then repeat the role-play a few weeks later, under controlled conditions (so you can grade it properly) – perhaps without warning the students that there’s going to be a test.

It’s important to think carefully about your criteria for assessment – should everything be based on your course aims? A good way of planning aims-based assessment is to write your course aims in the form ‘By the end of the course, students will be able to …’. These statements then form the basis of your assessment: can they do it or can’t they? Ideally, break each aim down into several sub-aims, so you can give a detailed, objective assessment of how well each student can do things, rather than just an impression mark or a bare can/can’t judgement.

But this then raises the next question. Have they learnt this ability during the course, or could they already do it pretty well before the course? In other words, have they actually improved? To assess this, you’ll need to do some benchmarking at the beginning of the course to identify the starting point.

Of course, not all assessment needs to be aims-based. You can also assess their general level of skills – reading, writing, listening and speaking. For this, I’d recommend using professionally created assessment materials, such as practice tests for Cambridge exams (FCE, Advanced, BEC, etc.). This obviously won’t be connected with the topics you’ve studied in your course, but it can still be valuable to check for and measure improvement in, say, general listening comprehension skills.

In other words, assessment is a complicated business, and to do it properly, you’ll need to do lots of it. But then … will you have any time left to actually teach the poor students? Too much assessment can be demotivating for learners and a huge drain on your time, both in the classroom and away from it (marking!!!).

This was something that used to bother me when I was teaching at the British Council. We had a wonderfully sophisticated assessment system, with all sorts of assessment events scheduled throughout the semester, which generated a page full of statistics that could be combined into scores on a range of skills for each student. The problem was, there wasn’t really time both to complete the assessment regime and to teach, so some teachers (dare I admit I was one of them?) invented some of the figures in order to prioritise teaching.

You see, I guess I need to come clean about one of ESP’s guilty secrets. We claim to be very sophisticated with our detailed needs analyses and carefully designed courses, but we’ve actually got very little control over what our students take away from the course. For example, you may be doing an exercise with the aim of developing their reading subskills, but what they’ll actually get out of it is some new vocabulary (which you didn’t even notice) or a deeper understanding of some grammar rule, based on the way it’s used in the text. In another lesson, your garbled explanation of a grammar point may fail to teach your students much about the grammar point in question, but they’ll become aware of some nice expressions and idioms for giving explanations (or for apologising for failing to explain something). In yet another lesson, while you are arguing with your students over why answer X from the listening exercise is right, and answer Y is wrong, you may be helping their negotiation skills more than their listening skills. (Note that this only works if the class is conducted in English).

(This is connected with a concept I call the leaky pipeline, which I’ll have to explain in a separate post, along with the related concept of obliquity – achieving great things by not trying too hard to achieve them.)

That’s not to say that our needs analysis and course design are a waste of time – far from it. We need them as a starting point for our teaching, and the resulting course needs to appear relevant, interesting and useful in order to motivate our students to engage with it. We also need to include a wide range of language and skills work in our courses – they’ll certainly benefit from it, but perhaps not in exactly the way that we planned. A strong syllabus also provides a focus for their study: a list of good words or useful phrases to learn, for example, is surely beneficial. But we’re kidding ourselves if we think this is all they’ll get out of our lessons, or even the most important thing.

So what does this mean for assessment? Well, on the one hand, it means assessment is less important than we make it out to be. But on the other hand, assessment can be extremely useful in motivating students to learn. Customers (by which I mean parents, employers, taxpayers and others who pay for the courses) also have a right to expect some measurable results from the course. And we mustn’t forget that not all teachers are as competent or conscientious as those who read this blog – there are lazy, incompetent teachers out there (apparently), and assessment is perhaps the only way of keeping them on their toes.

So we definitely should assess, both formally (in mid-course and end-of-course tests – including writing, role-plays, etc.) and informally (during the course). But we also shouldn’t take the results too seriously. By the end of the course, you should know which students are good and hard-working and which are clueless or lazy, simply because you’ve spent time getting to know them and their English (including plenty of time hearing them speak and reading their writing). The formal assessment should simply confirm what you already know.

Does this answer Rosa’s question? Not really. If I were her professor assessing her research, I guess I’d just discuss with her whether she thought she’d achieved her aims, and I wouldn’t worry too much about the missing end-of-course tests. But perhaps that’s why I’m not a professor. I guess ultimately it comes down to each university’s policy on what counts as valid research, and my common-sense approach doesn’t really have much bearing on individual universities’ policies!

So sorry, Rosa, for not really answering your question – although you’ve given me lots to think about. Perhaps some readers of the blog can add their opinions.

Related posts:
Exams: Financial English ... if such a thing exists
Needs: English for Nursing
Syllabus design: ESP consultancy, Cyprus

4 comments:

  1. Should I be able to identify with the ambiguity in your point of view? I can, however, appreciate your honesty and frankness while you wear your different hats. So I guess I can agree with your blog: assessment is a statistical measure of numbers, not motivation and inspiration or real life.

  2. Hello, that was really interesting regarding how much attention I gave to assessment as a teacher, and I'm conducting MA research in the field of assessment in ESP, so if possible can I have your email ID?
    Thank you

  3. Hi Janice. I'm glad you found it useful. You can contact me at jeremy@dayelt.com.
