Word problems feature strongly on many high-stakes standardised maths tests. We found these kinds of test items in our 2010 research in the United States (on the New York State and Massachusetts grade 3-5 tests for children aged 8-11). The two reasoning papers on England’s new Mathematics Key Stage 2 test, sat by 10 to 11-year-olds in May 2016, also include a range of word problems. The item below is an example from the 2016 sample test:

Word problems require children to translate the words of the question into a workable maths problem and to decide which operations to carry out. They may draw on a range of contexts (e.g. time, length, weight) and often involve money – posing a scenario in which children go shopping, purchase more than one item and sometimes have to work out the change.

Word problems appear on the test for a variety of reasons, but seem particularly relevant to testing problem-solving skills, reasoning and the ‘using and applying’ component of the maths national curriculum. Presenting a calculation in context also has the potential to make it more meaningful for children and to support them in applying their calculation skills in real life. Ollerton (2007) explains how problem solving gives pupils opportunities to develop knowledge and to practise and consolidate that knowledge.

But how fair are word problems in testing children’s mathematical skills? This is an important question given the high-stakes nature of the Key Stage 2 test. Schools have to make sure their children make adequate progress and that 85% reach the national standard; failing to do so may lead to increased monitoring and inspection (by the Regional Schools Commissioner and Ofsted) and ultimately to forced takeover by an academy sponsor.

In a Nuffield-funded research project we asked 30 Year 6 teachers in schools below and above the floor standard to describe distinctive features of the Key Stage 2 mathematics test to us[1]. More than half of the teachers (n = 16) talk about word problems as a type of question that would always show up, while almost all of them (n = 23) describe specific two-step money problems in which children have to calculate the correct answer to a ‘shopping problem’.

Teachers talk about the difficulties children have in accessing and answering word problems. They explain how the ‘wordiness’ of items affects children’s performance, particularly for those with limited reading skills, those whose first language is not English, or those from disadvantaged backgrounds. These children may struggle to answer word problems correctly because they misread the question or fail to understand the wording.

As one teacher explained to us:

*‘for some of our children who are good mathematicians and good with number but poor with language, it disadvantages them quite a lot. We’ve got lots of sort of dyslexic children, and although you can read the question to them it’s not always easy for them to interpret that what they are being asked to do is 37 times 5 divided by 2, which they could probably manage to do if it was simply written in that format, and I think that’s a shame, because they are not being given the praise that they deserve for being able to work with the numbers.’*

Another teacher also explains that some children lack the general knowledge needed to understand what the question is asking them to do: *‘So they might talk about the theatre and the children are thinking I don’t actually know what that means, because I don’t know what that is’*. However, presenting a problem in a real-life context may also support some children in answering questions, according to another teacher: it allows children to understand and estimate the correct answer and *‘see when their answers are ridiculous’*.

These examples show the multidimensional nature of wordy test items and the range of skills these items are testing. Our interviews with teachers suggest that these items may work less well in testing the mathematical skills of children who are poor readers, are from disadvantaged backgrounds or have English as a second language. A range of studies supports this understanding, noting that mathematical language is semantically and syntactically specialised and that word problems may be culturally or linguistically biased. Langeness (2011), for example, describes the dense and concept-loaded nature of mathematical word problems, with comparative structures, passive voice and logical connectors that children need to understand. In addition, word problems often assume children have access to, or experience of, culturally specific information – frequently the type of experience typical of white middle-class families. Children from other backgrounds may be at a disadvantage when trying to answer such questions (see also Wilburn et al., 2011), and it is often unclear whether these problems are answered incorrectly because of a lack of understanding of the language used or because of errors in computation.

As many test developers would tell us, no test can provide a perfect measure of a child’s ability. In making decisions about a child’s school career, or about a teacher’s or school’s performance, we should therefore not assume that test scores represent an error-free measure of performance, but should always draw on a number of sources to inform those decisions.

[1] *Interviews took place in 2015 and teachers were talking about previous years’ tests, which measured the old 2000 curriculum. However, the 2016 test has similar word problems on the reasoning papers.*

*This is Melanie Ehren’s third post in a series on the maths SATs. See here and here.*
