Authors: Chris Pascal, Tony Bertram, Liz Rouse of the Centre for Research in Early Childhood
This review is a significant piece of work in its own right, and influential because it was supported by a broad coalition of early years sector organisations in England. Given its importance, it merits a lengthy and detailed consideration.
Before moving on to that consideration, I'm going to explain my own position. In 2019, the Department for Education asked me to lead a review of Development Matters, the non-statutory guidance for practitioners, settings and schools working with children in the Early Years Foundation Stage. At the time, the DfE was consulting on changes to the Early Learning Goals and to the Statutory Framework for the EYFS. In response to that consultation, a large number of early years organisations jointly published Getting it right in the Early Years Foundation Stage: a review of the evidence in September 2019. The report is written by Professors Chris Pascal and Tony Bertram, and Dr Liz Rouse.
Since September 2019 I have been waiting for a discussion of this document in the detail it merits. After all, the document is backed by numerous sector organisations, representing many thousands of practitioners and academics. It’s important and influential. But there doesn’t seem to be any scrutiny of its findings. I am not sure whether people are afraid to challenge this orthodoxy, or just too busy to read the document carefully and share their thoughts. I know that my own experience of leading the revision of Development Matters has been very bruising. A lot of angry messages and documents have been directed my way. Perhaps it is simply too frightening to have this debate?
But detailed discussion, consideration and debate are the lifeblood of practice development and of building knowledge. In that spirit, I am sharing my reflections in this blog.
This is a long piece, so to make it easier to navigate I am breaking it down into the following sections.
1. What type of review is this?
2. What is the quality of the report’s commentary?
1. What type of review is this?
Different commentators and organisations have described the review using different terms. I have seen it described as a ‘systematic review’, ‘high quality research which references 388 recent peer-reviewed studies’ and ‘a literature review’.
The authors themselves describe it as both a ‘rapid evidence review’ (p. 6) and as a ‘literature review’ (on the front cover). As the methodology section describes the document as a ‘rapid evidence review’ (pp. 11-14) I am going to focus on this.
The archived UK Civil Service page on Rapid Evidence Assessment (REA) offers the following definition:
‘REAs provide a balanced assessment of what is already known about a policy or practice issue, by using systematic review methods to search and critically appraise existing research. They aim to be rigorous and explicit in method and thus systematic but make concessions to the breadth or depth of the process by limiting particular aspects of the systematic review process.’
This is the only source which Pascal, Bertram and Rouse cite in their discussion of REAs.
So, REAs are extremely useful when you don’t have the time, or funding, to conduct a full, systematic review of the evidence. But they need to be systematic, to be useful.
Pascal and others explain their methodology on pages 11-14 of the report. This includes stating their search terms, and their quality criteria and assessment process. Once selected through their search, research was given a quality rating, with a maximum score of 8.
However, no further detail is given about selection: the authors have hidden their approach. Evidence Based Approaches to Reducing Gang Violence (Butler, Hodgkinson and Holmes, 2004) provides a useful contrast. This is one of the REAs which is given as an example on the Civil Service website which Pascal and others draw on.
Butler and others show how they used a particular scale (the Maryland Scale of Scientific Methods) to classify the strength of the evidence in different studies (p. 42). They also share the Quality Assessment Tool which they devised and used to select the papers and studies included in their review (p. 43) and the guidance notes which they followed (pp. 44-45).
They present a clear case for their work being systematic. The case is made clearly enough for readers to make their own judgements: for example, we might disagree with the criteria in their Quality Assessment Tool.
However, Pascal and others don't give any such examples. They say on page 13 that the ‘quality of both quantitative and qualitative evidence was assessed using four key dimensions developed by CREC for RAE reviews (Pascal, Bertram and Peckham, 2018)’. However, this 2018 publication is not publicly available to read. Nor do they give examples of the scores awarded to any of the papers using their tool.
On page 13 they state that each dimension could gain a score of 1 if it is met sufficiently, with the maximum score being 8. They list four criteria on page 14. Much of the wording in this list is open to very wide interpretation. For example, how would one score this: ‘is there self-reflexivity about subjective values, biases, and inclinations of the researcher(s)?’
Furthermore, the search terms do not seem systematic. There were searches for ‘creativity’, for example, but none for ‘science’ – the researchers chose only ‘understanding the world/of the world’. This might risk excluding any English language studies of scientific learning in the early years. The search would only find studies which used the term ‘Understanding the World’ from the Statutory Framework for the Early Years Foundation Stage in England (2012).
This raises some questions, on the face of it, about how systematic the document is. The authors only cite one source, the Civil Service website, for the definition of an REA. It doesn’t seem that their document is consistent with what the source says. Nor does their document seem to be as systematic as the examples included on the Civil Service website.
The bigger question is: despite all these comments, do the authors make a sound selection of papers and studies to inform their review?
I would argue that some of their choices can be legitimately questioned.
For example, the document cites a guidance report from the Education Endowment Foundation, Preparing for Literacy: Improving Communication, Language and Literacy in the Early Years (2018). But it doesn’t cite from the extensive report which informed that guidance, Early Language Development: Needs, provision, and intervention for preschool children from socio-economically disadvantaged backgrounds (Law and others, 2017). This publication, jointly funded by the Education Endowment Foundation and Public Health England, strikes me as exactly the sort of detailed and high-quality report which should be considered in a review of the early years in England. It is in the bibliography, but its findings are not quoted or discussed.
It is also very surprising that the Early Intervention Foundation’s Teaching, pedagogy and practice in early years childcare: an evidence review (Sim and others, 2018) is not included. This is another review which is notably more specific and open about its methodology.
It is also noticeable that around one in ten of the journal articles included in the review are from the European Early Childhood Education Research Journal, which is edited by Professor Tony Bertram. Bertram is one of the review’s co-writers. The articles may have all been selected over other papers, like that of Law and others (2017) and Sim and others (2018), on merit. But surely the question of possible selection bias in this instance needed some discussion?
In conclusion, I would argue that Getting it Right in the EYFS is not a systematic review. It does not meet the UK Civil Service definition of an REA. It is, instead, more like a narrative review that comments on what a number of different papers say about early years education and care. A number of high quality reviews have been excluded, and there may be selection bias in respect to some which have been included.
Perhaps the most telling example of how this is a narrative review, and not a systematic one, is its treatment of other reviews. On several occasions, Getting it Right in the Early Years Foundation Stage paraphrases large chunks of text from other reviews. Here are some examples:
Getting it right in the Early Years Foundation Stage:

Nutbrown (2013) calls for a clearer conceptualization of arts-based learning in the early years curriculum, stating that young children’s engagement with the world is primarily sensory and aesthetic. This statement is based on an arts-based learning project with pre-school children which concludes that arts based activity enables children to learn in ways that are naturally suited to their stage of development and enables them to take part in cultural and artistic elements of life which can sustain them in the long term (Payler et al, 2017, pp.67-68).

BERA-TACTYC Early Childhood Research Review:

Nutbrown (2013) calls for a more robust and articulated conceptualization of arts-based learning in the early years curriculum, particularly given that young children’s engagement with the world is primarily sensory and aesthetic. The paper reports on an arts-based learning project with young children in preschool settings and concludes that children are able to learn in ways that are naturally suited to their human condition and therefore better equipped to take part in cultural and artistic elements of life.

Getting it right in the Early Years Foundation Stage:

Gifford (2014) also cites research which suggests that children’s understanding of each number being ‘one more than the one before and one less than the one after’ does not develop until around the age of six, indicating that engagement with formal mathematics may best be delayed with children who are not secure in this ability (Payler et al, 2017, p.67)

BERA-TACTYC Early Childhood Research Review:

Gifford (2014) cites research which suggests that children’s understanding of each number being ‘one more than the one before and one less than the one after’ does not develop until around the age of six, supporting the notion that, in this context, engagement with formal mathematics is best delayed with children who have yet to reach
Getting it right in the Early Years Foundation Stage:

(Zosh et al, 2007) reports that learning through play supports overall healthy development, acquisition of both content (e.g., mathematics) and learning-to-learn skills (e.g., planning, exploration, evaluating).

Learning through play:

Learning through play supports overall healthy development, acquisition of both content (e.g., math) and learning-to-learn skills (e.g., executive function)
There is also at least one instance when Getting it right draws heavily on the BERA-TACTYC review, but implies that it is directly discussing the original source.
Getting it right in the Early Years Foundation Stage:

Cremin et al, (2015) highlight the link between the teaching and learning of science and creativity, highlighting the pedagogic synergy between the two, and especially emphasising the value of play and exploration, motivation and affect, dialogue and collaboration, problem-solving and agency, questioning and curiosity, reflection and reasoning, and teacher scaffolding and involvement in teaching approaches.

BERA-TACTYC Early Childhood Research Review:

Some potential synergistic features have been identified between science, creativity and teaching and learning in the early years: play and exploration, motivation and affect, dialogue and collaboration, problem-solving and agency, questioning and curiosity, reflection and reasoning, and teacher scaffolding and involvement. These were found to exist in teaching practices in nine European countries (Cremin et al., 2015).
A systematic review of the evidence wouldn’t paraphrase large chunks of text like this. Instead, as Davies (2003, 2:3) comments: ‘systematic reviews differ from other types of research synthesis (e.g. narrative reviews and vote counting reviews) by … having explicit ways of establishing the comparability (or incomparability) of different studies and, thereby, of combining and establishing a cumulative view of what the existing evidence is telling us.’ In the sections above, we’re not getting that ‘cumulative view’ – we’re getting a narrative account of what another review found.
It is also noticeable that Getting it right in the Early Years Foundation Stage does not weigh up the different evidence strengths of the studies it draws on.
I'm going to move on to my second question now.
2. What is the quality of the report’s commentary?
One immediately notable aspect of the report is its inconsistent structure. Early on, it sets out ‘6 sub-questions’ (p. 10) for consideration:
- What areas/aspects of learning are particularly important for children to develop in the EYFS, and specifically at different stages/ages from birth to five years?
- What outcomes should a child be achieving by the end of the EYFS that will provide a basis for lifelong learning and long term wellbeing?
- Which outcomes in EYFS predict good levels of attainment through primary school and beyond?
- What are the implications of the ELGs (EYFS outcomes) for ensuring responsiveness to individual children’s characteristics and needs?
- What teaching content is particularly beneficial to supporting good levels of development in the prime and specific areas of learning?
- What pedagogic approaches best supports (sic) a child to achieve a good level of development throughout the EYFS and enables them to achieve their potential?
However, the report itself only addresses some of these questions. I have summarised this:
- Question 1 is addressed on pages 16-25
- Question 2 is addressed on pages 25-28
- Question 3 is not addressed at all
- Question 4 is altered and becomes ‘What are the implications of the ELGs (EYFS outcomes) for summer born children?’ on page 28. The change of questions is neither noted nor discussed.
- Question 5 is addressed on pages 29-36
- Question 6 is addressed on pages 37-50, corrected to read ‘What pedagogic approach best supports a child to achieve a good level of development throughout the EYFS and enables them to achieve their potential?’
The changes to the questions are not commented on. In my experience, it is very unusual for an academic paper to set out a series of questions and not answer them each, in turn, in a systematic way.
The report draws on a wide range of papers but it does not appraise their different qualities. For example, a number of rich research papers are cited which are case studies. I think that case studies can make an important contribution to research. The two papers which I am going to discuss now are both valuable and illuminating in their own right.
Arnold (2009b) is a case study of a child’s explorations in nursery which uses an experimental methodology for an educational study. Cath Arnold says clearly that she ‘acknowledges the novelty of this approach for her, as a teacher, although these kinds of interpretation are common within the fields of play therapy and psychotherapy and are used increasingly in education.’
Carruthers and Worthington (2005) discuss around 700 examples of mathematical graphics. They explain that all these examples come ‘from our own classes or those in which we have been invited to teach. Based on this large sample of original children’s marks from authentic teaching situations, our findings are therefore evidence based.’ In other words, this paper is a rich case study of their work as teachers. We do not know how representative the sample of 700 graphics is. We do not know how many are by boys, and how many are by girls. Did the children come from a diverse range of backgrounds?
As case studies, these papers are valuable. But it isn’t clear how they make a contribution to a review of the evidence. Case studies, by their very nature, are limited in terms of their wider applicability.
Getting it right also comments that 'Clark (2013; 2014) offers an evidenced-based critique of synthetic phonics.' Yet it is immediately noticeable that these two papers are not evidence-based, nor are they in a peer-reviewed journal. They are discussion papers that selectively consider evidence in a stimulating and thoughtful way.
Yet in Getting it right in the Early Years Foundation Stage, the findings of case studies and discussion papers seem to generate conclusions at the same level of strength as the findings of major, systematic reviews like Law and others (2017).
There are several points in the document where conclusions are drawn but no evidence is provided to substantiate them. For example, on page 16:
‘For example, key underpinning learning skills highlighted in The 100 Review (Pascal, Bertram et al, 2017), the bedrock of which are to be found in executive functioning, have been recognised as instrumental within sustained, long-term attainment in Mathematics and Literacy.’
We might ask: ‘been recognised’ by whom? The authors do not tell us.
Another striking aspect of the report is its unusual and inconsistent discussion of how children learn English as an additional language. In pages 20-21, references to learning English as an Additional Language are placed next to comments about disadvantages at home and being in receipt of pupil premium. This makes it seem as if being multilingual is analogous to being disadvantaged.
Considered in detail, the paragraph is extremely difficult to understand. It also seems to treat ‘language proficiency’ as synonymous with proficiency in English.
‘Whiteside et al’s (2016) study of 782 EAL and 6,485 monolingual Reception aged children, where teachers provided ratings of English language proficiency and social, emotional, and behavioural functioning, strongly suggests that where English is an additional language or disadvantages are experienced at home, as recognised within children with EAL or in receipt of pupil premium, a clear focus on oral language development for children who are socially deprived or with EAL may be necessary through Reception and into Year one (Pascal et al, 2018, p.25) indicating the importance of establishing language proficiency in the Foundation years.’
Multilingualism seems to be considered as akin to other disadvantages. Yet, tellingly, the findings of Whiteside and others (2016, page 11) are that ‘children with EAL show comparable, or better, academic attainment relative to monolingual peers with comparable English language proficiency.’
On the same page, Getting it right in the Early Years Foundation Stage also appears to confuse oral language with phonological awareness. The report considers the 2015 paper by Soto-Calvo and others, ‘Identifying the cognitive predictors of early counting and calculation skills: Evidence from a longitudinal study’. Pascal and others say that the report suggests that ‘oral language proficiency is also linked to the development of certain components of mathematical ability, such as counting and calculation’. Soto-Calvo and others, however, say that their study ‘aims to determine whether phonological awareness predicts growth in sequential counting, as predicted by Krajewski and Schneider (2009) and Koponen et al. (2013), as well as to clarify whether phonological awareness predicts growth in early formal calculation.’
Again on the same page, Getting it right comments on how a ‘Public Health Report by Brooks (2014) sets out research evidence which focuses on children in the school system, including four to five-year-olds’. Brooks’s report has no references to the early years.
This is already a long blog, so I will only briefly discuss two more areas of learning: maths, and expressive arts and design.
Getting it right comments on page 23 that ‘evidence from Soto-Calvo et al’s (2015) longitudinal study demonstrates how the development of counting and calculation are supported by different cognitive abilities which do not develop fully until around the age of six’. I was not able to find this conclusion in Soto-Calvo’s report. The authors continue with a discussion of Sue Gifford’s (2014) research, saying that it ‘suggests that children’s understanding of each number being “one more than the one before and one less than the one after” does not develop until around the age of six, indicating that engagement with formal mathematics may best be delayed with children who are not secure in this ability’. Again, I was not able to find this conclusion in Gifford’s report. The authors do not explain what they mean by ‘formal mathematics’.
Finally, in a discussion about Expressive Arts and Design on page 23, Getting it right comments that ‘Zachariou and Whitebread’s (2015) study investigated possible correlations between musical play and self-regulation. Although the study focused on six-year-olds, the findings suggest that engaging in musical play in the Foundation years facilitates self-regulatory behaviours.’ This is striking for two reasons. First of all, it’s surprising to see that a study of children aged six was selected in the first place. Secondly, the argument made is tenuous. How can a study focused on six-year-olds ‘suggest that engaging in musical play in the Foundation years facilitates self-regulatory behaviours’?
My conclusion is that even if we judge this as a narrative review, the quality of its commentary is questionable:
- It does not answer the questions it sets out.
- It does not comment on the merits and limitations of the studies it cites.
- It makes assertions which are not supported by evidence.
- It deals with important issues, like learning English as an additional language, in an inconsistent manner.
- It draws conclusions from studies, which I could not find in the source papers.
As David Wilkinson argues, 'finding evidence to support your argument does not make an evidence-based argument'.
I'd like to conclude with a few questions.
Firstly, the blunt use of the phrase 'getting it right' in the title of this review means its authors are making very big claims. Why have those claims not been scrutinised?
Are we too dependent on a narrow range of voices in the early years?
Is there a dangerous assumption that the early years workforce isn't able to read and evaluate research critically, and doesn't know what a systematic review is?
Is it time for us to lift our game in the early years, and value serious academic discussion over populism?