
Credits: World Bank Photo Collection (Flickr)

Review: The Global Education Race, by Sellar, Thompson and Rutkowski. Brush Education Inc., 2017.

Published 3 May 2017, updated 5 June 2017

The number of countries taking part in the OECD’s Programme for International Student Assessment (PISA) is now well over double the number of OECD member countries. If one thing demonstrates just how far PISA has pulled ahead of other global assessments of education, it is that countries outside the OECD now see PISA as the ‘go to’ assessment for evaluating their education systems. Sellar, Thompson, and Rutkowski’s book is therefore not only a timely reflection on the advantages and pitfalls of global education assessments; it is also right to focus on PISA’s pre-eminent role.

Their book has a number of strong virtues. It is short, balanced, eminently readable, and, unlike some other academic critiques of PISA, the authors know what they are talking about. They are also not afraid to set out some clear policy proposals.

At the core of the authors’ critique is the argument that countries which treat advancement up the PISA rankings as a systemic objective in itself are engaged in toxic competition. Ironically, those countries also fail to get the most out of PISA’s data and policy conclusions.

So, according to Sellar et al., what should be the right approach to PISA? Firstly, they make it clear that their book is not anti-PISA. They do, however, believe that government ministers often use PISA as a high-stakes report card, which leads to either unjustified celebration or unjustified blame. The book seeks to shift policy makers away from this approach. Without summarising the book itself, there are some insights which stand out.

For example, according to the authors, PISA cannot be a comparative assessment of schools themselves, since it assesses fifteen-year-olds, and students of that age in some countries have received more years of schooling than in others. PISA can only be an age-related assessment.

In what is, in essence, a mini primer on the operation of the PISA tests themselves, the authors explain that PISA not only samples the student population in each participating country, but that each student completes only a sample of the test items. They describe this procedure as well-established and reliable but, in what is a continuing theme in the book, they urge the OECD to recognise PISA’s limitations. One good example is the PISA country performance tables themselves. They are the most high-profile aspect of PISA and surely the most controversial. Rather than call for the abolition of the tables, Sellar et al again urge governments to recognise their limitations instead of indulging in PISA envy.

There is a fascinating section on the inevitable measures of uncertainty, or statistical errors, in PISA scores. The authors conclude that PISA ‘rankings are best understood as ranges rather than exact places’ and that ‘reporting standard errors is more than simply good statistical practice…(they) remind all stakeholders that PISA scores are not exact, but rather estimations of what the OECD believes 15 year olds know and can do.’ And they make a particularly insightful proposal: it would be more useful to focus ‘attention on areas where students performed poorly, rather than lamenting…performance across the board’, which could lead to sweeping and possibly counterproductive changes for students performing well.
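To see why rankings are better read as ranges, here is a minimal sketch in Python. The scores and standard errors below are entirely hypothetical, not actual PISA results; the point is simply that two countries a few points apart can have overlapping confidence intervals, in which case their order in a ranking table is not statistically meaningful.

```python
# Illustrative sketch only: hypothetical (country, mean score, standard error)
# values, not real PISA data.
countries = [
    ("Country A", 523, 3.1),
    ("Country B", 519, 2.8),
    ("Country C", 497, 3.4),
]

Z_95 = 1.96  # multiplier for an approximate 95% confidence interval


def confidence_interval(mean, se, z=Z_95):
    """Return the (low, high) score range implied by a mean and its standard error."""
    return mean - z * se, mean + z * se


intervals = {name: confidence_interval(mean, se) for name, mean, se in countries}

for name, (low, high) in intervals.items():
    print(f"{name}: {low:.1f} to {high:.1f}")

# If two countries' intervals overlap, the gap between their 'ranks' may not be
# statistically meaningful -- which is the authors' point about reading
# rankings as ranges rather than exact places.
a_low, a_high = intervals["Country A"]
b_low, b_high = intervals["Country B"]
print("A and B overlap:", a_low <= b_high and b_low <= a_high)
```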

Education International itself has intervened in this area. It has consistently said that the PISA country rankings obscure PISA’s far more important policy inferences. In 2009, it commissioned Professor Peter Mortimore to develop alternative models for analysing and reporting countries’ performance in PISA (Mortimore, 2009). What is fascinating is that his proposals are remarkably similar to this book’s, including urging the OECD to involve teachers in the design and development of PISA and to broaden the areas which are assessed.

Which brings us to the limitations of the book itself. Curiously, while it contains much about test design, there is nothing on the principle of equity underpinning PISA (although equity is certainly mentioned as a policy outcome), or on the fact that the tests focus on using and applying knowledge rather than on students regurgitating it, which was a radical shift in assessment when PISA was first introduced. The book could also have focussed far more on the relationship of the contextual questionnaires to the PISA assessments. It is the correlation of the questionnaire results with those of the assessments which gives PISA its policy significance. Sellar et al rightly warn policy makers against assuming that PISA can identify the causes of policy success or failure, but they could have made it clearer that it is almost impossible for any study to identify causation at policy and system level. Correlation is the only route PISA could have taken.

Sellar et al criticise governments which ignore the implications of student background as a dominant influence on much of the data, while at the same time indulging in PISA panic or celebration over marginal fluctuations in the PISA rankings. Cases in point are the Australian and New Zealand governments: students from a Chinese cultural background in these countries actually perform comparably with students in Shanghai, China.

They argue, therefore, that PISA should be used to improve the outcomes of low-performing students, again a line that has consistently been held by Education International. The book says unequivocally that if PISA is put into the service of arguing for changes that can ‘make education systems fairer and better’, it is a powerful tool to enrich and broaden educational debate.

Sellar et al point out that how PISA data is used is in the hands of OECD member countries, and politicians in particular, not OECD officials. The book implies, but doesn’t make clear, that this also applies to the limitations on the ‘literacies’ which are assessed. It is OECD member countries which have stopped the OECD from introducing broader curriculum assessment, such as in ‘geography or social sciences’, as Mortimore put it.

These caveats, however, are relatively minor. Sellar et al are surely right when they say that while PISA is one of the best efforts to measure educational outcomes accurately, if it is coupled with standardised testing hitched to punitive regimes of educational accountability then there will be negative and perverse effects. They believe that teachers and principals have to be at the centre of the evaluation of PISA when it is published. They also argue that this should be coupled with an open and ongoing debate about the uses and misuses of standardised testing. After all, teachers own PISA as much as governments do.

The opinions expressed in this blog are those of the author and do not necessarily reflect any official policies or positions of Education International.