Formative assessment is all around us in almost every aspect of everyday teaching.
Time for Tea: or Testing, Evaluation and Assessment
I have been lucky enough to attend the IATEFL conference for several years now, and each year I like to choose themes for the sessions I attend. This year, for the first time, I have been involved in the TEASIG webinars, and so one of my main themes for this conference was assessment.

At the risk of seeming pedantic, I am going to explain briefly what I mean by assessment in this article. My interpretation is perhaps a little broader than the more orthodox understanding of the term in the world of education. Testing, evaluation and assessment are often used almost interchangeably when it comes to exams, but there is a difference. In a nutshell, a test is the practical method we use to measure our construct; assessment is the whole process of measuring, which may include tests or other tools; and evaluation, as the term is often applied, relates to judgements about something, often before we use it. We might evaluate a digital tool, for instance, to see whether it is what we need to help our learners engage with lexis.

The focus of assessment in our field is generally learners’ knowledge, skills and performance, but one thing that struck me at this conference was that teachers are actually surrounded by assessment, especially when it is a formative part of the learning process: they breathe it in every day in almost every interaction with learners, teaching tools and texts. In the PCE this year we touched on both formative and summative assessment, but the main focus was what teachers need to know about summative test production and application. That is why I decided to focus on the formative aspect when attending other presentations during the rest of the conference, and I would like to share a few impressions about this with you here.
The PCE: What teachers need to know about assessment
The TEASIG Pre-Conference Event provided us with a wonderful opportunity to discuss what teachers really need to know about assessment. Neil Bullock asked some searching questions that gave us all food for thought, such as “If learning is so important, why do assessment and teaching seem to dominate?” or “Why is so little attention paid to assessment on training courses?”, a point echoed on several occasions throughout the day. Neil works in the world of aviation, and he also asked who would like to fly in a plane with a pilot who could not communicate with the air traffic control tower! This definitely gives you a very practical reason to advocate effective summative assessment.
Evelina Galaczi and Nahal Khabbazbashi followed on with a rich but practical session which provided us with six key questions to ask when designing a test: “Why, who and what am I testing?”, “How am I testing and scoring?” and “How is my test benefitting learners?” A key concept here was that the test you design should be not only valid but also fit for the purpose you are designing it for. A perfectly valid test used in the wrong context, for learners who need to do something that that particular test does not assess, will be worse than useless for that purpose, even though it may be fine when used for the purpose it was designed for. Their focus was speaking, and to illustrate the point above: if your learners need to give formal presentations, then a general test that assesses conversation, interaction and giving opinions may not be the answer for you.
Later in the afternoon Vivien Berry and Barry O’Sullivan explored the concept of assessment literacy, which they said had been mistakenly interpreted to mean “testing for literacy”. Assessment literacy, however, in basic terms, means how literate you are when it comes to assessment. It is a very wide area, and few of those present considered themselves to be at all literate, but after the day’s discussions we felt that we were beginning to get an idea of what we didn’t know, which is, after all, the first step. Once again the feeling was that more training is needed for teachers, who are often required to develop tests of various types.
How educators feel
Speaking to various people at the conference, I got the same negative reaction fairly often, which confirms the initial PCE idea that there is little training in assessment on teacher training courses, even though everyone needs to be involved in the assessment process at some level. Tests, in fact, have negative associations for many of us, possibly because of our past experiences of stress related to high-stakes test-taking. Throughout the conference, however, my initial feeling that assessment is in the air we breathe every day was confirmed, albeit indirectly, by many of the discussions I took part in.
What about the rest of the conference?
When it comes to formative assessment, the separation between teaching and testing is by no means as clear as it is in summative testing, where, despite washback effects, the two processes are usually conceived of as being different. Formative assessment is inherent in methodologies where tools such as test-teach-test or contrastive analysis are used, and teachers assess learner performance on some level every time they interact with them. Everyone attends different events, with a different focus, in a conference like this, and my own focus tends to be related to lexis and corpora, so here is a taste of some of my impressions.
Looking at assessment in general and formative assessment in particular
On the first day David Crystal talked about language change, which led me to think that teachers need to assess the language they choose to focus on with their learners, or to consider acceptable or appropriate for their needs. If many Internet users write “who would of thought it?”, does that automatically mean that this is an acceptable form for our learners, or should we simply draw their attention to the fact that such changes are underway? Obviously teachers have to assess their learners and make sensible choices, but the question is: when does a language change become an acceptable norm? Should discrete elements be assessed analytically in learner production, or should discourse be looked at holistically? Perhaps the answer is “both”.
Marcel Lemmens, who comes from a translation background, advocated an interesting approach to the formative assessment of writing. He held up a standard translation that had been marked and was covered in red ink, saying that most of his learners would not even read the painstakingly detailed corrections but would go straight to the mark. He called for the need to familiarise learners with the assessment criteria before doing the test itself, and suggested a more holistic approach to marking an email: looking at stylistic features which would help learners to write more effectively, such as cohesion and register, and perhaps choosing one or two specific language areas to focus on, such as the use of articles. In this way the assessment would be recycled back into the learning process, and the corrections would actually help learners to achieve their aims. Whilst this was thought-provoking, learner expectations also need to be considered. My learners, for instance, expect their tests to be “corrected”, and any change which is introduced is generally better if it is gradual. So, for instance, this term, in one group, I introduced one assignment which was “corrected” in the traditional way, with a detailed correction code, followed by individual one-to-one interviews where the learners could discuss their self-corrections with me. I also did activities that were labelled as discussions, which were rated holistically, and others where a general analytical scale was used that rated task achievement, coherence and cohesion, and clarity of lexical and grammatical expression, with comments provided as feedback in these categories rather than detailed corrections. The beauty of work like this is that it can all be reintegrated into work being done in the classroom.
Diane Larsen-Freeman pointed out in her plenary that the aspects of learning that are becoming increasingly significant at this point in time are learner agency, relationships and interactions, and the patterns that emerge from such complex interactions. By integrating the aspects noticed in formative assessment and reintroducing them into the classroom, we are providing our learners, I feel, with exactly the sort of multiple affordances that will lead to different learners having the opportunity to exploit this work in different ways.
Corpora and lexis are always areas that interest me, so I attended several sessions on this topic, and once again assessment in various shapes and forms kept rearing its head. Jenny Wright, for instance, gave an introduction to the use of the Corpus of Contemporary American English (COCA) in the classroom, focusing on such areas as adverb + adjective collocations and providing a range of activities that teachers can produce very simply to sensitise learners to such areas and to practise them. Teachers may, for example:
1. Elicit learner intuitions about the adjective collocates that follow adverbs such as “bitterly”, “sincerely” or “deeply”;
2. Provide training to show learners how to do a corpus search for collocation frequency;
3. Cut and paste concordance lines to produce a concordance-line gap-fill where the key word is missing.
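For teachers comfortable with a little scripting, the gap-fill step can even be automated. The sketch below is purely illustrative and was not part of the presentation: the function name and the sample sentences are invented, and it simply masks the target word in concordance lines pasted from a corpus search.

```python
import re

def make_gap_fill(concordance_lines, keyword):
    """Turn concordance lines into a numbered gap-fill exercise
    by replacing every occurrence of the keyword with a blank."""
    exercises = []
    for i, line in enumerate(concordance_lines, start=1):
        # \b keeps the match to whole words; IGNORECASE catches "Bitterly" too.
        gapped = re.sub(rf"\b{re.escape(keyword)}\b", "_____",
                        line, flags=re.IGNORECASE)
        exercises.append(f"{i}. {gapped}")
    return exercises

# Example: lines a teacher might have copied from a concordancer.
lines = [
    "She was bitterly disappointed by the decision.",
    "The family was bitterly divided over the inheritance.",
]
for item in make_gap_fill(lines, "bitterly"):
    print(item)
```

Learners then supply the missing collocate from memory or intuition, which is precisely where the practice activity shades into assessment.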
It is this last activity which is interesting from the point of view of formative assessment: it is often seen as practice, but what it is actually doing is testing comprehension or recall, particularly if it is done in a later lesson or as part of a test-teach-test sequence. Activities such as this one are used in classrooms all over the world, where they are considered to be practice… but they are in fact part of assessment, since they provide teachers with knowledge about what learners can or cannot do.
To continue with the topic of corpus tools, Stephen Bax introduced his amazing tool Text Inspector (http://www.textinspector.com/workflow). This tool assesses the difficulty of a text, giving it a percentage score from zero to native speaker and a detailed analysis of the elements that make it so. It is freely available online and has been developed with links to the English Vocabulary Profile, which classifies lexical items with reference to the CEFR levels. It can also be used, of course, as a means of assessing learner written production, so once again assessment enters the picture. Although Bax advised the audience to err on the side of caution, this is a tool which can be used both by teachers as an initial assessment of learners’ work and by learners who want to assess their own levels. This is perhaps learner-oriented assessment in the purest interpretation of the term, in that the individual learner can take assessment into their own hands and use it to develop their own power of expression.
Let’s not forget technology
Technology also appears, of course, in the shape of corpora and many other tools, such as the wonderful English Vocabulary Profile and English Grammar Profile (EVP and EGP), which are being developed to measure the level of various items with reference to the Cambridge Learner Corpus (for the EGP) and various corpora (for the EVP). This means that when developing reading tests, for instance, we now have tools to help us gauge the level of difficulty of lexical items, which in turn can help us make more valid and reliable tests. It is not only teachers, of course, who can use these tools, but learners too, and in a world where user content reigns supreme, our learners can create their own “revision tests” or “progress tests” very easily and post them online to share with the rest of the class or even… with the rest of the world. So even though teachers do not seem to warm particularly to the notion of assessment, it seems clear to me that we are all talking about it, and at IATEFL in Birmingham, assessment was definitely in the air.