
The Optima Reading Programme by Dr Jonathan Solity: Does it Provide Optimal Results? A Paper by Dr Marlynne Grant

Recently Dr Jonathan Solity reported on his critical analysis of the Year 1 Phonics Screening Check (1). This has thrown a spotlight on his own programme for teaching reading, which was developed first as ERR (Early Reading Research), later known as KRM, and currently as Optima.

Having watched the video on the Optima website (2016) (2) (http://optimapsychology.com/find-out-more/introductory-video/), I was struck by the many similarities to the systematic synthetic phonics (SSP) teaching with which I am familiar:

  • Whole class teaching
  • Results showing no gender differences or vulnerable group differences
  • Key skills of synthesising and segmentation
  • All achieving success from the start, i.e. from Reception
  • Successful with low achievers and high achievers who are stretched
  • Pacy
  • Works on writing as well as reading
  • Use of modelling
  • Little and often teaching in daily doses
  • Very similar teaching each day with a little bit extra added (cumulative)
  • Establishes love of books and literature
  • Complete engagement
  • Increases confidence and enjoyment
  • Parents are impressed as they see structure and progress


On the video, Solity stresses that Optima is underpinned by psychological theory. But Optima does not have the monopoly on incorporating psychological learning theory into its teaching:

For example, one of the government approved SSP programmes* (3) lists the following:

  • Whole class teaching
  • Reinforcement and repetition are built in
  • Use of active recall – students are encouraged to make active attempts at recall
  • Oral work
  • Interactive teaching which engages the children rather than them having to work individually on work sheets
  • Lively teaching. The lesson has a good pace, which helps to manage behaviour and focus students’ attention
  • Multisensory teaching. It integrates what you see (letters) with what you hear (sounds) and what you do (articulating sounds and words, clicking fingers, phoneme fingers, robot arms, manipulating sound and syllable cards, writing sounds and words from dictation)
  • Frequent rehearsal on the ‘little and often’ principle
  • Develops fluency and mastery in learning which is vital and will ensure later success with reading comprehension and writing composition
  • Direct instruction (modelling) is used: I do, we do together, you do.

*Sound Discovery Manual, p4 (3)

The Optima video claims that its programme is ‘different from everything else around’ – so what are these differences? It appears to differ from government approved SSP programmes with regard to the following:

  • Lack of decodables
  • Teaching High Frequency Words (HFWs) as sight words
  • Teaching fewer grapheme-phoneme correspondences (GPCs)

Decodable Reading Books

It is not clear whether Solity has conducted research to justify his decision not to use the sort of decodable reading books now commonly used with SSP programmes. In these books a high percentage of words (an average of 81% in the SSP study reported below) embody only the GPCs taught so far, and the divergences in the few remaining words are explicitly taught.

One longitudinal study of SSP (4) compared outcomes with and without decodable reading books (http://bit.ly/2coAoKP). The study showed a gain of an extra 5 months of reading age for a cohort of 85 Reception children when decodable reading books were introduced for the first time and other teaching variables remained the same. In subsequent years, 433 Reception children used decodable reading books, alongside an SSP programme, as their first experience of reading books for themselves, using the phonics they had been taught. Over time these pupils, including those from vulnerable groups, demonstrated an impressive start and sustained attainment.

To give further information on this study: Grant reported on whole cohorts of Reception children being taught synthetic phonics over an eight-year period. At the beginning of the study, during one year, the whole cohort of Reception children was not given decodable reading books but continued to use the Oxford Reading Tree books employed by the school at that time, which were based largely on look-and-say and whole language principles. The 90 Reception children achieved an average of 12 months ahead of chronological age for reading and 17 months ahead for spelling at the end of Reception. The following year, decodable reading books were written and used for the first time; the 85 Reception children averaged 17 months above chronological age for reading, a gain of an extra 5 months of reading age compared with the previous year. Averaged over 6 years with SSP and decodable books, 433 children were 16 months ahead for reading and 17 months ahead for spelling (4, p18). Cohorts of these Reception children were tracked through to their KS1 and KS2 English SATs, where comprehension and writing as well as decoding skills were assessed.

There is other supportive evidence for the use of decodable reading books, for example, the work of Juel and Roper/Schneider, 1985 (5) as referenced in: http://www.iferi.org/iferi_forum/viewtopic.php?f=2&t=469 (6). In addition there are the summaries of research into decodable text discussed in: http://www.iferi.org/iferi_forum/viewtopic.php?f=2&t=554 (7).

Teaching HFWs as Sight Words

Optima proposes teaching the 100 most frequently occurring words in English as whole words by sight, whether or not these words can be read using the GPCs already taught. "These were taught as whole words with no reference to any phonemes within the words …" (8) (Shapiro and Solity, 2008). The 100 words were based on word lists and "determined as the optimal number" by Solity, McNab and Vousden, 2007 (9); Solity and Vousden, 2009 (10); Vousden, 2007 (11).

The practice of learning HFWs by sight is in direct contrast with the view of the psychologist and researcher Professor Diane McGuinness (12) and with the government’s advice (13):

"Research has shown, however, that even when words are recognised apparently by sight, this recognition is most efficient when it is underpinned by grapheme-phoneme knowledge". "What counts as 'decodable' depends on the grapheme-phoneme correspondences that have been taught up to any given point". "Even the core of high frequency words which are not transparently decodable using known GPCs usually contain at least one GPC that is familiar. Rather than approach these words as though they were unique entities, it is advisable to start from what is known and register the 'tricky bit' in the word." (13, pp 15-16).

Diane McGuinness (12) has pointed out that introducing multiple strategies (such as learning HFWs by sight) at an early stage of reading instruction will be “mutually contradictory and will confuse rather than assist young readers”. (http://www.rrf.org.uk/archive.php?n_ID=112&n_issueNumber=51)

The dual system of reading instruction proposed by Optima would not meet the DfE revised core criteria which defined the key features of an effective systematic synthetic phonics teaching programme (13). The Department strongly encouraged heads and teachers to consider the revised core criteria when making decisions about the quality of commercial programmes and the suitability of them for their particular schools and settings. The first of these criteria was ‘phonics first and fast’ as a programme’s prime approach to decoding print.

Optima follows an explicit dual route to reading instruction: teaching phonics along with learning HFWs by sight. Although SSP programmes teach phonics 'first and fast', they can also teach 'tricky' words, but again through phonics, as recommended in Letters and Sounds (L&S) (14). In my own programme, common 'tricky' words are introduced in a natural, drip-fed way through sentence work, right from the very beginning of instruction (3 and 17).

Shapiro and Solity (15) compared the effectiveness of L&S ("which teaches multiple letter-sound mappings"), implemented in Reception, with ERR (which "teaches only the most consistent mappings plus frequent words by sight"), and then followed up outcomes at the end of the second and third years of schooling. They found the two programmes equally effective in broad terms, with ERR being more effective with children with poor phonological awareness.

In her blog on sight words (http://readoxford.org/guest-blog-are-sight-words-unjustly-slighted) (16), Professor Anne Castles reviewed the Shapiro and Solity 2015 study (15) and agreed with them that teaching frequent words by sight did not appear to interfere with phonics learning in the ERR programme.

Shapiro and Solity (15) made some positive comparisons of ERR with L&S. However, would these comparisons hold up if ERR were compared with other SSP programmes? Shapiro and Solity suggested that in L&S some children may not have fully grasped the concept of segmentation and blending in the absence of print before moving on to segmentation and blending of letters. They pointed out, in contrast, that ERR continued to teach segmentation and blending in the absence of print in every whole class lesson. But is this also true of any government 'approved' SSP programme? In at least one other SSP programme (3 and 17), practice with oral segmentation and blending precedes segmentation and blending of letters in every lesson.

Shapiro and Solity (15) also maintained that ERR provided more practice with ‘tricky’ common words and that it might be more beneficial for L&S children simply to learn these by sight and gain regular practice with them, instead of attempting to analyse their sound structure. However, other SSP programmes may provide more explicit teaching and practice for HFWs. For example, in one other SSP programme there are resources (18) provided to support the analysis of sound structure in HFWs. Specially prepared common words, sentences and texts make it easier for children to practise sounding and blending them, so that the children begin to be able to read the words without overt sounding and blending. Thus, they start to experience what it feels like to read some words automatically. In addition, blending and segmenting of common ‘tricky’ words takes place in every lesson alongside more easily decodable words, providing the ongoing regular practice that Shapiro and Solity recommended.

At present there appears to be insufficient evidence to support the Shapiro and Solity view that only ERR has potential benefits for children at risk of developing reading difficulties. They suggest that an optimal programme should explicitly teach children to use two strategies: sight recognition and phonic decoding. However, to date, they have not published data which compares ERR with SSP programmes other than L&S, particularly where those programmes use reading books with a high level of decodability and which explicitly support teaching for HFWs.

The Grant longitudinal study (4) used a ‘phonics first and fast’ approach but also taught HFWs through sentence work and through a phonics route. Impressive results were achieved even with vulnerable groups of pupils who were at risk of developing reading difficulties and with higher achievers who were stretched (0% below Level 3B, 6% Level 3B, 94% Level 4+, 65% Level 5 at KS2 English SATs).

Number of GPCs Taught

I have some sympathy with the Solity view regarding the number of grapheme-phoneme correspondences (GPCs) which should be taught. How extensive should the advanced code be? Solity questions "whether it is worth teachers spending a great amount of time making sure pupils learn all 85, rather than concentrating on the most frequent ones and then building pupils' vocabulary." (1)

Solity is following the 'principle of optimal information from rational analysis' and aims to teach the "optimal number of GPCs" (8). However, these do seem to be quite limited. Children were taught 61 high frequency mappings between graphemes and phonemes (26 letters; 5 vowels modified by 'e'; 30 letter combinations) and, "where multiple mappings exist between phonemes and graphemes – only one GPC was taught" (8).

Teaching in this way would require more words to be taught explicitly whenever they contain a GPC not covered by the 61 high frequency ERR mappings. Would these words also be taught by sight? Such words would be in addition to the 'tricky' common words taught as "unique entities" by sight. In SSP programmes a greater number of words would be decodable, as more GPCs would be taught, and words with unusual GPCs would be blended from what is known and from registering the 'tricky bit' in the word, as recommended by the DfES (14, pp 15-16).

Learning a large number of words by sight in this way could place a strain on memory, which has its limits. Diane McGuinness reported the average visual-memory limit for whole word units as approximately 2,000 (19). A good English dictionary contains from 250,000 to 500,000 words, which sounds like a huge challenge for those individuals needing to memorise whole words, whereas those able to use a more comprehensive alphabetic code would be at an advantage, being better able to work out pronunciations using their existing phonics.

In her blog (16), Professor Castles suggested that some pupils in the Shapiro and Solity study (15) were possibly confused by being exposed to the multiple alternative sound mappings (GPCs) in L&S rather than to sight words in ERR.

What about the strain on memory of learning GPCs? Is there a memory limit to the number of GPCs that can be taught explicitly? According to the literature this limit is high. Victor Mair, Professor of Chinese Language and Literature, claims there is a natural upper limit of approximately 2,000-2,500 to the number of sound-symbol units (in our case GPCs) which most individuals can tolerate (20); see also the reference in http://www.rrf.org.uk/archive.php?n_ID=173&n_issueNumber=59 . I think we are safe in saying that this limit far exceeds the demands of all SSP programmes. Even those SSP programmes which teach a large and very comprehensive alphabetic code for English are unlikely to be teaching more than 100-200 GPCs, and many will be concentrating on far fewer. In Sound Discovery (3) every effort is made to teach the advanced code in the simplest and most straightforward way in order to decrease confusion and minimise overload.

From the perspective of limits to memory, one cannot assume that learning 100 sight words and a reduced set of GPCs, as in ERR, places less strain on memory than learning more GPCs and constantly rehearsing and practising common 'tricky' words through phonics in an SSP programme.

There may be value in teaching a more comprehensive alphabetic code than Optima does, in the systematic way recommended by the Year 1 Phonics Screening Check (21), even if, in practice, a greater emphasis is placed on the most frequently occurring GPCs. In the absence of comparative research, we simply do not know whether programmes teaching less code are more or less effective.

Reducing Difficulties

In the Optima video (2), Solity reported that his programme had reduced reading difficulties from 20-25% to 2-3%.

It is perhaps not surprising that such positive results and views have been reported, given the systematic way the Optima programme appears to have been introduced and delivered in schools, the commitment of all staff, including senior management, and the ongoing support from the Optima team. However, we do not know whether even better results could have been achieved if the schools had incorporated some of the features discussed above, which are found in good synthetic phonics teaching (viz. decodables, not teaching HFWs by sight, and teaching a more comprehensive alphabetic code – even if then concentrating on the most frequently occurring GPCs).

Dr Solity quoted the percentage of remaining 'difficulties' with Optima as 2-3%. In the Sound Discovery study (4) we were able to achieve just over 1% of moderate reading difficulties in 2004. Only one child out of the three-form entry at Year 6 achieved less than Level 4 English; 94% of pupils achieved Level 4+ and 65% achieved Level 5. It is not clear how Solity defines 'difficulties' as they relate to his quoted 2-3%. In the Grant study the only pupil who did not achieve Level 4 gained a Level 3B English, which is not a severe literacy difficulty. This pupil had complex and severe learning difficulties and was followed up into his secondary school. He was reported as "holding his own in mainstream classes in Year 7; he had made good gains in reading and spelling and could understand complex vocabulary in the curriculum. He was able to be de-statemented in Year 9" (4, p19).

Real Books

The Optima video stresses the importance of 'real' books in increasing the vocabulary and language comprehension of pupils. The Optima team did not use the sort of decodable reading books, matched to the order in which GPCs are taught, that are now often used by SSP programmes.

In contrast, SSP schools follow the Simple View of Reading (22), which identifies two distinct processes in learning to read: 'word recognition' and 'language comprehension'. Many SSP programmes have a strong language comprehension strand using structured, decodable reading materials which aim not only to give practice with decoding but also to develop vocabulary and comprehension.

In addition, in SSP schools, ‘real’ books and rich literature are used alongside decodable books for adults to read to children and to share with them. The aim is to establish a love of books and literature and to increase confidence and enjoyment. Children taught in this way pick up reading quickly. They become enthusiastic and confident about their reading. They are more able and willing to engage in the world of reading around them and take advantage of incidental phonics practice in the environment. They are also more able and willing to access a wide range of texts and literature themselves.

Conclusion

The Optima video was impressive and I am not surprised that the schools were achieving good results with a programme and teaching which the majority of us could endorse. However, in my view, similar outcomes could be achieved and even surpassed in schools which are committed to following rigorously a good quality systematic synthetic phonics programme, one which uses books that are decodable at a high level (in this instance at the 81% level mentioned above) and which teaches HFWs with attention to what is, and is not, decodable in them.

 

Dr Marlynne Grant

Registered Educational Psychologist

Author of the government ‘approved’ systematic synthetic phonics programme Sound Discovery

Committee member of RRF

October 2016

 

References

 

  1. Richardson, H. (2016). National phonics check 'too basic'. BBC News, Education and Family, 16th September 2016. Available online at: http://www.bbc.co.uk/news/education-37372542 ; and British Educational Research Association (BERA) press release, available online at: https://www.bera.ac.uk/bera-in-the-news/press-release-children-can-pass-phonics-test-without-extensive-phonic-knowledge .

  2. Optima Video (2016). Optima Psychology. Available online at: http://optimapsychology.com/find-out-more/introductory-video/ .

  3. Grant, M. (2000). Sound Discovery Manual. Synthetic Phonics Ltd. www.syntheticphonics.net .

  4. Grant, M. (2014). Longitudinal Study from Reception to Year 2 (2010-2013) and Summary of an earlier Longitudinal Study from Reception to Year 6 (1997-2004): The Effects of a Systematic Synthetic Phonics Programme on Reading, Writing and Spelling. Paper presented to ResearchEd Conference, London, 2014. Available online at: http://bit.ly/2coAoKP .

  5. Juel, C. & Roper/Schneider, D. (1985). Reading Research Quarterly, 18. Also in Adams, M.J., Beginning to Read: Thinking and Learning about Print, Bradford Books, pp 275-280.

  6. International Foundation for Effective Reading Instruction (IFERI) (2015). Is there a role for predictable texts in reading instruction? Available online at: http://www.iferi.org/iferi_forum/viewtopic.php?f=2&t=469 .

  7. International Foundation for Effective Reading Instruction (IFERI) (2015). The multi-cueing reading strategies and 'Is reading about getting meaning from print?'. Available online at: http://www.iferi.org/iferi_forum/viewtopic.php?f=2&t=554 .

  8. Shapiro, L. & Solity, J. (2008). Delivering Phonological and Phonics Training within Whole Class Teaching. British Journal of Educational Psychology, 78, part 4, pp 597-620.

  9. Solity, J., McNab, E. & Vousden, J. (2007). Is there an optimal level of sight vocabulary to teach beginning readers? Unpublished data.

  10. Solity, J. & Vousden, J. (2009). Reading schemes vs. real books: A new perspective from instructional psychology. Educational Psychology, Volume 29, Issue 4.

  11. Vousden, J. (2007). Units of English spelling-to-sound mapping: a rational approach to reading instruction. Applied Cognitive Psychology, Volume 22, Issue 2.

  12. McGuinness, D. (2004). A response to teaching phonics in the National Literacy Strategy. RRF (Reading Reform Foundation) Newsletter 51. Available online at: http://www.rrf.org.uk/archive.php?n_ID=112&n_issueNumber=51 .

  13. Department for Education (DfE) (2010). Phonics teaching materials: core criteria and the self-assessment process. Crown copyright. Available online at: https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/298420/phonics_core_criteria_and_the_self-assessment_process.pdf .

  14. Department for Education and Skills (DfES) (2007). Letters and Sounds: Principles and practice of high quality phonics. Notes of guidance for practitioners and teachers. DfES Publications, pp 15-16. Available online at: https://www.gov.uk/government/publications .

  15. Shapiro, L. & Solity, J. (2015). Differing effects of two synthetic phonics programmes on early reading development. British Journal of Educational Psychology, Volume 86, Issue 2, pp 182-203.

  16. Castles, A. (2016). Guest Blog: Are sight words unjustly slighted? Read Oxford, University of Oxford. Available online at: http://readoxford.org/guest-blog-are-sight-words-unjustly-slighted .

  17. Grant, M. (2007-2009). Sound Discovery Big Books of Snappy Lesson Plans at Step 1, Step 2, Step 3A, Step 3B and Steps 4-7. Synthetic Phonics Ltd. www.syntheticphonics.net .

  18. Grant, M. (2014). Sound Discovery High Frequency Words, Version 2. Synthetic Phonics Ltd. www.syntheticphonics.net .

  19. McGuinness, D. (2004). Growing a Reader from Birth. W.W. Norton.

  20. Daniels, P. & Bright, W. (Eds.) (1996). The World's Writing Systems. Oxford & New York: Oxford University Press, p 200. Also referenced in: http://www.rrf.org.uk/archive.php?n_ID=173&n_issueNumber=59 .

  21. Department for Education (2011). Year 1 Phonics Screening Check. Available online at: https://www.gov.uk/government/policies/reforming-qualifications-and-the-curriculum-to-better-prepare-pupils-for-life-after-school/supporting-pages/statutory-phonics-screening-check .

  22. National Primary Framework for Literacy and Mathematics (DfES) (2006). Crown Copyright.

Why we use the Phonics Screening Check in Australia

Students at the school at which I work learn to decode systematically and explicitly. We believe that, given the balance of evidence, a good grounding in phonics, taught systematically, will provide them with the best opportunity to improve their reading comprehension. A key part of our teaching strategy is using assessment evidence to pinpoint what a student can decode and what they still need to work on.

As an Australian school we don’t have access to an Australian national or state-wide assessment for decoding skills or early reading comprehension. In the absence of such an assessment we have decided to use the UK Phonics Screening Check to help inform our instruction. We use the Phonics Screening Check because:

It provides a standard

One of the most common questions students, staff and parents have is whether a child is "doing ok" – are they at the standard for their age? The Phonics Screening Check gives us a standard that we can measure between year levels and across years. We know that students are at standard for their age when they can pass the Phonics Screening Check. It gives a definitive anchor for our work and helps guide what we do. Instead of having an individual feel for what an appropriate level of decoding might be, we have an agreed standard. This aids conversation: we all know exactly what it means to say a student is above or below that standard, and we know what instruction and learning is required to get them there. We can also detect much earlier when a student is in danger of not reaching the required level, and can intervene sooner and with a clearer sense of what is required.

Another useful feature of the earlier Screening Checks is the published item difficulties for each of the words and non-words in the 2012 and 2013 pilots. These give a good indication of which words or non-words were more difficult than others for the UK students. We can then compare that to how difficult our students found those items and investigate when differences arise. Which items are we comparatively strong at? Are there aspects of our instruction around the use of that grapheme that we need to record and make sure we all include in our practice?

There may be other words or non-words our students unexpectedly find difficult to decode. Why can't our students decode the word? What part of the word is proving to be the stumbling block? What do we currently do to teach the decoding of that grapheme, and why is it not working? What parts of our instruction need to be revised in order for students to improve?
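To make this comparison concrete, here is a minimal sketch (in Python, and not part of the original post) of the kind of tally a school might keep. The item names, student scores and UK pilot figures below are entirely made-up placeholders; the idea is simply to set each item's success rate for our own students against the published UK pilot difficulty and flag large gaps for follow-up discussion.

```python
# A minimal sketch: compare our per-item success rates on the Phonics
# Screening Check with published UK pilot item difficulties.
# All items and figures below are made-up placeholders.

school_results = {
    # item -> 1 (decoded correctly) / 0 (not decoded) per student
    "chip":   [1, 1, 1, 0, 1, 1],
    "blurst": [1, 0, 0, 0, 1, 0],   # non-word
    "strom":  [1, 1, 0, 1, 1, 1],   # non-word
}

uk_pilot_difficulty = {
    # item -> proportion of UK pilot students who decoded it correctly
    "chip": 0.95,
    "blurst": 0.62,
    "strom": 0.78,
}

GAP_THRESHOLD = 0.15  # flag items where we differ from the UK figure by 15+ points

for item, scores in school_results.items():
    ours = sum(scores) / len(scores)      # our proportion correct for this item
    gap = ours - uk_pilot_difficulty[item]
    flag = "INVESTIGATE" if abs(gap) >= GAP_THRESHOLD else ""
    print(f"{item:8s} ours={ours:.2f} uk={uk_pilot_difficulty[item]:.2f} gap={gap:+.2f} {flag}")
```

Items flagged in the output are the ones worth talking about as a team: either something in our instruction around that grapheme is working unusually well and should be recorded, or it is not working and needs to be revised.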

It builds a bridge between classrooms

In our school the Prep (5 year old) classes are fluid – the groups are altered every six weeks and teachers change between classes. This results in a shared responsibility for the progress of all students in Prep. Fantastic conversations are had between teachers as they realise that kids who have been in one class are much better at something than students who have been in another class. It might be as simple as noticing that children from class A always construct sentences with a capital letter at the start and a full stop at the end, something that doesn't happen in class B. What is happening in that class that allows students to do this consistently, and how can I teach my kids to do the same?

Sometimes, though, the differences in student learning between classes are not so obvious and it takes a specific assessment to reveal them. On the UK Phonics Screening Check there are times when students who are notionally in an earlier phase of their phonics work have greater success in decoding a non-word than a class that should have done better. Why did that happen? What instruction around that grapheme-phoneme correspondence in that class was so effective, and how is it best implemented in the other classes? How can we learn from each other in order to improve the instruction for all students?

The sharing of demonstrably effective practice results in teaching that is more successful. It doesn’t necessarily mean that all classes are exactly the same but it does allow the gap in effectiveness of instruction to be decreased. This is an equity issue: a student’s progress should not be based on a lottery depending on whether or not they get an effective teacher when classes are allocated. When the instructional quality of the team is growing, both as a whole and as individuals, all students benefit.

The Phonics Screening Check is an important component of the process of instructional improvement and provides a sense of what an appropriate level of decoding looks like. As such, I would heartily recommend it to all schools teaching phonics.

IFERI would like to thank Reid Smith, who is a teacher in Australia, for allowing us to re-blog this post. You can subscribe to his blog here:

https://notquitetabularasa.wordpress.com/

And you’ll find him on Twitter here: @Smithre5

IFERI supports and promotes the use of the Phonics Screening Check internationally. It is a free, easy to use, light-touch assessment. For more information, or to download the check, click here.

The Reading Reform Foundation Conference, March 2015 *updated*

‘From the Rose Review to the New Curriculum. A growing number of schools successfully teach every child to read; the majority still don’t. Why?’

The theme of the Reading Reform Foundation conference (above) drew attention to the fact that some schools achieve very highly despite complex and challenging circumstances. Indeed, London schools, despite being 'inner city' schools, are gaining a reputation in England for nationally high standards, and some commentators attribute this to the rise in standards in primary schools in particular. Many primary headteachers would attribute their rise in standards to getting the foundations of literacy right by ensuring high-quality Systematic Synthetic Phonics provision within enriched language and literature settings.

The conference was very well-received and attendees included people from America, Spain, Ireland, Scotland and Australia.

Most of the talks were filmed and will be added to this blog posting as the footage becomes available.

Debbie Hepplewhite gave the opening talk, 'Does it really matter if teachers do not share a common understanding about phonics and reading instruction?' Having watched the talk via YouTube, a number of 'tweeters' recommended this video for INSET (in-service training), suggesting that 'all teachers' would benefit from watching it!

Debbie’s PowerPoint – click here

Next, Anne Glennie talked about the lack of ambition and lack of phonics training in Scotland with her talk, ‘The Attainment Gap? What about the Teaching Gap?’ – and this is despite the fact that England and other countries internationally paid heed to the Clackmannanshire research (Johnston and Watson) conducted in Scottish schools.

Following Anne was Josie Mingay with her talk, 'Phonics in the Secondary Classroom'. This talk generated a great deal of interest and Josie had more questions from the audience than anyone else. Clearly we still have weak literacy in many of our secondary schools – and this is surely why ALL teachers need to be trained in reading and spelling instruction, not just infant and primary teachers. In any event, a 'beginner' for whom English is a 'new' language isn't necessarily a five year old.

Sam Bailey was appointed headteacher of a struggling school with results well below national expectations. The theme of her talk was, 'Transforming the life chances of our children – simple methods, great results'. She described in detail the rapid improvements with the adoption of Systematic Synthetic Phonics programmes (Oxford Reading Tree Floppy's Phonics Sounds and Letters and Phonics International) in a climate of support, expectation, challenge, and rigour.

Gordon Askew brought his wealth of knowledge and experience to bear for his excellent talk, ‘Assessment, including the Phonics Screening Check and assessing reading at the end of Key Stage One’. To be honest, his talk was not what one might have expected and it turned out to be quite inspirational considering the topic!

Marj Newbury is a retired Early Years teacher with 37 years' experience. She has also delivered synthetic phonics training extensively in schools both in the UK and worldwide – including as a guest lecturer at her local universities. Marj's talk, 'Teacher Training', not only described her work but also voiced her concern about changes to the way we are training teachers in England.

Angela Westington HMI (Her Majesty's Inspector) was invited to talk about her very important Ofsted report, 'How a sample of schools in Stoke-on-Trent teach pupils to read'. Angela has considerable experience of leading and participating in national surveys, and what is so important about her report is its clear description of both strong and weak phonics and reading practice. Angela was not filmed but her 'Stoke-on-Trent' report is a must-read and you can find it via the link below:

Stoke-on-Trent report – click here

Finally, the RRF was very appreciative that Nick Gibb, Minister of State for School Reform, rounded off the conference with his final ministerial speech prior to the general election in the UK. Nick Gibb has been at the forefront of looking closely at the findings of international research to inform reading instruction and of championing changes in the statutory National Curriculum to incorporate Systematic Synthetic Phonics. The theme of his talk was, 'The Importance of Phonics':

Nick Gibb’s speech – click here

 

Phonics: An International Perspective

Phonics is not just a national issue for us here in the UK. Supporting the optimum number of children to read the English language fluently and confidently is an international issue. Although I thought I already had a fair picture of what is happening overseas, since being involved as a member of IFERI I have been amazed at the synergy between our situation and that of so many other countries worldwide. The growing evidence and support internationally for phonics as the key to reading success is so strong. However, many other countries also have almost exactly the same issues with ingrained but misinformed and, yes, prejudiced opposition. They too need so many more policy makers, head teachers, teachers and parents (perhaps especially parents) to start to question what they have been led to 'believe' about learning to read and to pay due regard to the actual evidence, not only of research, but of the demonstrated classroom efficacy of high quality phonics practice.

We are fortunate in this country that some key national policy makers understand what is needed to get all our children reading. This means that we are well ahead of many other countries in moving towards teaching based on the principle of using phonic decoding as the route to reading unknown words. There are many in other countries who are envious of our position and fully appreciate what is being done. That makes it all the more tragic that there are still large numbers of teaching professionals, and others, in our country who don't. Please start to look regularly at the IFERI website and see the whole picture for yourself.

Gordon Askew