About the Author(s)


Carisma Nel
Faculty of Education Sciences, North-West University, South Africa

Citation


Nel, C., 2018, ‘A blueprint for data-based English reading literacy instructional decision-making’, South African Journal of Childhood Education 8(1), a528. https://doi.org/10.4102/sajce.v8i1.528

Original Research

A blueprint for data-based English reading literacy instructional decision-making

Carisma Nel

Received: 06 Mar. 2017; Accepted: 15 Apr. 2018; Published: 25 June 2018

Copyright: © 2018. The Author(s). Licensee: AOSIS.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Making decisions about English reading instruction is as core a component of teaching as providing the instruction itself. When supporting learners at risk of poor reading outcomes, who currently make up a large percentage of learners in South Africa, it is especially important to ensure that the decisions made are as accurate as possible and that they lead to improved reading outcomes. The learners with the greatest needs require the most accurate and effective decisions. Now more than ever, effective use of reading literacy assessment data to plan and critically review instruction is a fundamental competency for good teaching. The purpose of this article is to provide districts, schools and teachers with a blueprint for data-based English reading literacy instructional decision-making at a system-wide level.

Introduction

South Africa is facing an issue of epic proportions and of critical importance. It is an issue that affects the economy, reduces the competitiveness of the workforce and challenges the highest ideals (Spaull 2016b). The issue is South Africa’s reading literacy crisis. According to Spaull (2016c), the majority of South African children are not learning to read in any language by the end of Grade 3. The 2016 Progress in International Reading Literacy Study (PIRLS) results indicate that learners who wrote the test in English scored 372 points, which means that they did not reach the lowest benchmark. In addition, of the Grade 4 learners who wrote the PIRLS in English, only 21% spoke the language at home. Learners who wrote the test in English and spoke the language at home achieved a score of 445, which was significantly higher than the score of 356 achieved by those who spoke a different language at home (Howie et al. 2017). The latter score was most probably achieved by learners who transitioned to English as the Language of Teaching and Learning in Grade 4. The English reading literacy situation in South Africa continues to be a major crisis, and educational stakeholders are not making effective reading literacy decisions. Spaull (2016c) regards reading as the binding constraint to improved educational outcomes for the poor.

The act of reading is a complex linguistic achievement (American Federation of Teachers 1999; Pretorius & Ribbens 2005). Effective reading instruction depends on sound instructional decisions made in conjunction with reliable data regarding learners’ strengths, weaknesses and progress in reading. The National Institute of Child Health and Human Development (2000) in the United States concluded that there are no easy answers or quick solutions for optimising reading achievement. Nor is there one assessment that will screen, diagnose, benchmark and monitor the progress of learners’ reading achievement. Multiple indicators from different types of assessments provide a more complete picture of learners’ reading processes and achievements (Edwards, Turner & Mokhtari 2008). Scientifically based research studies in education continue to acknowledge the value of frequently assessing learners’ reading progress to prevent the downward spiral of reading failure. Valid and reliable assessment data are key to providing early identification for intervention and to planning for meeting the needs of all learners identified at various levels of performance (Torgesen 2006).

The importance of addressing the needs of struggling readers cannot be overstated. Research confirms that the longer a learner moves through school with reading difficulties, the more entrenched those difficulties become and the more difficult they are to address (Torgesen 2006). An analysis of the PIRLS 2011 and 2016 results indicates that the performance of learners in English reading is not improving (Howie et al. 2017). Reading difficulties are predictive as well as cumulative:

A student who fails to learn to read adequately in the first grade has a 90 percent probability of remaining a poor reader by Grade 4 and a 75 percent probability of being a poor reader in high school. (Mathes 2015:1)

Assessment is an important part of successful teaching, because instruction needs to be calibrated according to learners’ knowledge, skills and interests (Paris, Paris & Carpenter 2001). Assessment results then guide the selection and use of supplementary supports, instruction and time to help the learner gain the skills that are weak or lacking (Helman 2005).

The purpose of this article is to present a blueprint for data-based English reading literacy instructional decision-making that can be utilised by districts, schools and teachers at a system-wide level.

Assessment and data-based instructional decision-making

The National Research Council (2001) of the United States defines a quality assessment system as one that is coherent, comprehensive and continuous. All components of a coherent system are aligned with the key goals for learners’ learning. A comprehensive assessment system should address the full range of knowledge and skills expected by the Curriculum and Assessment Policy Statement (CAPS). In addition, it should provide different users at different levels in the system (district, school and classroom) with the right kinds of data at the right level of detail to help with decision-making. A system that is continuous provides ongoing streams of information about learners’ learning throughout the year. Assessment data from a coherent, comprehensive and continuous system help teachers monitor learners’ learning by establishing a rich and productive foundation for understanding learner achievement (Herman, Osmundson & Dietel 2010).

However, an assessment system alone cannot ensure that all learners learn what they need to know to succeed. Teachers need curriculum and instructional tools to teach effectively. They also should possess the ability to use assessment data skilfully. A comprehensive school reading assessment system for English must be designed to take what is known from scientifically based reading research and translate it into effective reading practices. The overall goal of a school assessment system, specifically for the Foundation and Intermediate Phases, is to build the capacity, communication and commitment to ensure that all learners are readers by Grade 3, and that learners in the Intermediate Phase continue to progress as successful readers who can read for meaning and read-to-learn (Nel 2015).

Data-based instructional decision-making pertains to the systematic collection, analysis, examination and interpretation of data to inform practice and policy in education settings (Mandinach 2012; Shen & Cooley 2008). The South African Department of Education uses various assessment data to evaluate the effectiveness of the educational system, educational districts use assessment data to monitor the success of the implementation of CAPS and classroom teachers use assessment data to record scores for reporting and determining progression (Adam 2013). However, only a limited number of teachers use the assessment data to determine learners’ strengths and weaknesses, in particular their reading literacy skills (i.e. phonemic awareness, phonics, fluency, vocabulary and reading comprehension) (Nel et al. 2015), in order to make instructional adaptations or to differentiate their instructional practices.

The implementation of the Annual National Assessments in South Africa produced immense interest in the use of assessment to measure and improve children’s learning (Kanjee & Moloi 2014; Spaull 2016a). Assessments are an integral part of every teacher’s and administrator’s professional role; yet many teachers have not been trained in the how and why of assessments. Kanjee (2008) mentions that there is limited guidance, support and information for teachers on ‘how’ to use assessment to improve learning. In addition, the RSA DoHET (2011:53) states that one of the competencies that newly qualified teachers should have is the ability ‘to assess learners in reliable and varied ways, as well as being able to use the results of assessment to improve teaching and learning’. This knowledge would allow them to approach assessments with a critical eye regarding what the purpose of the assessment is and, maybe more importantly, how the data connect back to reading instruction in the classroom. However, in many schools, discussions about assessment are often met with resistance – teachers feel the fatigue of frequent assessment and the frustration of not understanding the purpose and goal of the seemingly unending series of assessment and administrative reporting requirements made by districts (cf. Adam 2013).

South African research studies indicate that the effective use of assessment for identifying and addressing specific learner needs is especially relevant during the foundation phase (Kanjee & Mthembu 2015). Kanjee and Mthembu (2015) state that their findings seem to indicate that teachers are unable to determine whether learners are learning what they (the teachers) are teaching, and thus unable to provide support to those learners who require additional assistance to attain the curriculum objectives. A number of studies in South Africa have reported that teachers’ assessment practices are seriously wanting in terms of supporting the learning needs of their learners (Kanjee & Croft 2012; Pryor & Lubisi 2002; Ramsuran 2006).

South African studies involving districts’, schools’ and teachers’ use of assessment data (cf. Adam 2013; O’Connor 2016) indicate an underlying tension between these stakeholders. Teachers and schools merely follow district directives and collect data on learner attendance, do quarterly analyses on the overall performance of learners (e.g. how many learners obtained a code 4, etc.), do item analyses on the results of the assessments they used and then plan interventions which seem to follow a generic, one-size-fits-all approach. The district reviews the data from the schools which, they mention, differ in quality (e.g. length of assessments, reading aspects addressed in the assessments, mark allocation, etc.). The districts seem to follow a checklist approach: quarterly analyses received, item analyses done and interventions put in place. Based on the data received, the districts then also tend to follow a one-size-fits-all approach to intervention (e.g. all schools are invited to attend phonics workshops).

When assessments are properly administered and integrated into instruction, the resulting data can provide valuable information about progress towards instructional goals, success of interventions and overall curriculum implementation. However, obstacles begin to emerge when the appropriate professional development is not provided, and teachers and district officials (e.g. subject advisors) are left trying to piece together the story from assessments (i.e. PIRLS, the previous Annual National Assessments and school-based assessments) that may not be designed to tell a cohesive story.

In order to be effective, district officials and teachers need to make the connection between the underlying story behind learner reading data and how the data must inform their instructional strategies. This requires district officials and teachers to be knowledgeable about the various kinds of assessments and what conclusions about school and learner performance can and should be drawn from the data. Ultimately, this level of understanding gives teachers better clarity of purpose and anticipated outcome in order to understand not only what instructional resources are available, but also why specific strategies and resources are necessary for each individual in the classroom.

Like a doctor trying to identify what treatment patients need to improve their health, teachers need to identify what teaching their learners need now to improve their reading in English. Good doctors use modern tests and procedures to understand their patients’ symptoms and identify the underlying causes. Similarly, teachers should use proven strategies to develop a precise, evidence-based understanding of what their learners already know, and of what they are ready to learn next. When patients have an ongoing condition, doctors follow up with them over time to assess symptoms, check on progress and adjust treatment if required. Similarly, teachers should observe and assess how learners respond to reading instruction, track their progress and adjust their teaching strategies accordingly. With appropriate analysis and interpretation of reading literacy data, teachers can make informed decisions that would positively affect learner reading outcomes (Wayman, Cho & Johnston 2007; Wohlstetter, Datnow & Park 2008).

A blueprint for data-based reading literacy instructional decision-making

Learners have problems reading in English because they lack specific skills necessary for proficient reading. Studies such as PIRLS and the Southern and Eastern Africa Consortium for Monitoring Educational Quality (SACMEQ) provide rich data about who can read and at what level. What they do not indicate is why the learners cannot read. When a learner has problems learning to read, it is crucial that teachers are able to identify what specific building blocks are missing. Individuals who struggle with reading vary greatly in the specific skills they are lacking (Moats & Hancock 2012). However, while each individual is unique, certain problems commonly occur (Spear-Swerling 2015). Assessment systems that can identify missing reading literacy building blocks early and prevent later reading failure need to be in place. Good, Simmons and Smith (1998) state that assessment procedures are needed to:

(a) identify children early who are experiencing difficulty acquiring early literacy skills, (b) contribute to the effectiveness of interventions by providing ongoing feedback to teachers, parents, and learners, (c) evaluate the effectiveness of interventions for individual learners, (d) determine when learner progress is adequate and further intervention is not necessary, and (e) evaluate the overall effectiveness of early intervention efforts. (p. 46)

Unfortunately, many South African teachers have little or no experience in using data systematically to inform decisions (Kanjee & Mthembu 2015).

A blueprint for data-based English reading instructional decision-making should provide districts, schools and teachers with data to meet their decision-making needs. Given that the English reading literacy of learners in South Africa is not improving and that only 20% of the learners reached the intermediate benchmark and 6% the high benchmark (Howie & Tshele 2017), the need for relevant data, instead of intuition, tradition and convenience, to guide reading literacy decisions has become increasingly important.

Knowledge about the causes, correlates and predictors of children’s reading success and reading failure in the early primary grades has expanded greatly in the past few decades (e.g. NICHD 2000; Snow, Burns & Griffin 1998). This knowledge has been incorporated into methods of identifying, monitoring and helping struggling readers in the primary school grades. Data from longitudinal studies reveal a high degree of continuity between the levels of reading-related skills displayed by preschool children, and the levels of reading-related and reading skills displayed by these children when they are in primary school (e.g. Lonigan, Burgess & Anthony 2000; Storch & Whitehurst 2002), indicating that the developmental antecedents that underlie the acquisition of reading are found early and prior to the onset of formal schooling. The fact that only 20.8% of the Grade 4 learners who wrote in English reached the intermediate benchmark and the majority did not reach the low benchmark indicates that these learners’ reading literacy skills were most probably not on track in Grade R.

Children arrive in Grade R with varying levels of early literacy skills. The data mentioned above seem to indicate that the instruction given and the support provided by teachers were not sufficient for them to acquire the well-developed early literacy skills needed to read for meaning by Grade 4. Consequently, a means of identifying those children who are either starting from a low level of skill or are not making sufficient gains in these skills to catch up, or both, is needed. This identification process is where the assessment of children’s early reading literacy skills fits into an integrated system of identification and intervention or instructional adaptation.

The blueprint, presented in this section, uses outcome-driven (i.e. towards the attainment of reading literacy targets) decision-making as the point of departure. An outcome-driven model incorporates decision-making steps designed to answer specific questions for specific purposes. Five basic steps are included in an outcome-driven decision-making model: identify the need for support, validate the need for support, plan and implement support, evaluate and modify support, and review outcomes (cf. Kaminski et al. 2008). The information presented in Table 1 should be used as a blueprint to guide comprehensive planning, decision-making and support related to English reading literacy development. The main features of the blueprint are discussed below.

TABLE 1: Blueprint for data-based English reading literacy planning, decision-making and support.
Identify the need for support

The first step requires the identification of learners who are ‘at risk’ for reading difficulties. Screening assessments can be used to obtain this information. Screening assessments are typically brief measures that allow a snapshot of children’s current skills. These measures are designed so that teachers who have minimal training in assessment can administer them. Results from screening assessments are often interpreted around one or more cut scores that indicate a child’s relative likelihood of needing additional assessment, more careful monitoring or additional instruction. A cut score is a score on a screening test that separates learners who are considered potentially at risk from those considered not at risk. Setting cut scores allows schools to identify an initial pool of learners who may require interventions or additional assessment. Most screening assessments provide recommended cut scores (cf. Good et al. 2012). Using consistent cut scores across schools within a district allows for comparisons across schools.
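As a minimal sketch of how a cut score works, the following example (in Python) classifies hypothetical learner screening scores against an assumed cut score of 40; both the cut score and the learner scores are invented for illustration and do not come from any published screening instrument.

```python
# Minimal sketch: classifying learners against a screening cut score.
# The cut score (40) and the scores below are hypothetical examples,
# not values taken from DIBELS or any other published instrument.

CUT_SCORE = 40  # hypothetical benchmark separating 'at risk' from 'not at risk'

screening_scores = {
    'Learner A': 52,
    'Learner B': 34,
    'Learner C': 41,
    'Learner D': 28,
}

def classify(score: int, cut_score: int = CUT_SCORE) -> str:
    """Return a risk status based on a single cut score."""
    return 'at risk' if score < cut_score else 'not at risk'

for learner, score in screening_scores.items():
    print(f'{learner}: {score} -> {classify(score)}')
```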

Screening measures are given to all learners at least three times per year (i.e. beginning, middle and end of school year). It is critical to have this information at the beginning of the year, but periodic checks throughout the year are also valuable. All assessments are conducted to answer a question. At classroom level, teachers need answers to questions such as ‘Which of my learners are at-risk for difficulty?’ and ‘Who needs help?’ At a district and school level, questions such as ‘Are there Grade 4 learners who might need support at School X?’ and ‘On what reading skills might they need support?’ need to be considered. District-level screening data can be used to ensure that resources are equitably allocated for services and support across schools as well as differentiated intervention that might be needed at schools. School-level screening data can be used to inform and set measurable school improvement goals, and grade-level data can help identify learners who might need additional instruction or assessment.

The following reading skills are typically assessed with screening assessments: phonemic awareness, alphabetic principle and basic phonics, advanced phonics and word attack skills, accurate and fluent reading of connected text and reading comprehension (Good et al. 2012). Once identified as having low early literacy skills, children will need additional assessment (e.g. diagnostic assessment or other assessment) to determine their specific patterns of strengths and weaknesses to allow effective application of instructional support. At least 80% of all learners in a class should be showing adequate progress in the reading literacy component being assessed. If this is not the case, diagnostic assessment should be used to identify the reading skills that need additional instruction or support. In South Africa, given the PIRLS results referred to earlier, the likelihood is very real that approximately 80% of the learners in a class are not showing progress in the early reading literacy skills needed to be able to read for meaning in Grade 4.
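The 80% guideline above can be expressed as a simple class-level check. The sketch below is illustrative only: the benchmark value and the class scores are assumptions, and a real screening tool would supply its own benchmarks.

```python
# Sketch of the class-level decision rule: if fewer than 80% of learners
# show adequate progress on a screened reading component, diagnostic
# assessment is indicated. Scores and the benchmark are illustrative.

BENCHMARK = 40          # hypothetical 'adequate progress' score
CLASS_TARGET = 0.80     # at least 80% of the class should be at or above benchmark

class_scores = [52, 34, 41, 28, 45, 39, 50, 31, 44, 36]

proportion_adequate = sum(s >= BENCHMARK for s in class_scores) / len(class_scores)

if proportion_adequate >= CLASS_TARGET:
    print(f'{proportion_adequate:.0%} at benchmark: continue core instruction.')
else:
    print(f'Only {proportion_adequate:.0%} at benchmark: '
          'use diagnostic assessment to identify skills needing support.')
```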

Validate the need for support

During this step, we need to be reasonably confident that the learner(s) needs instructional support. Teachers, school management teams and district officials should therefore rule out easy reasons for unexpected performance: a bad day, confusion about the directions or the task, illness, shyness or an error in assessment administration. This step will help to determine which reading skills are not in place and what skills the interventions or support should be targeting.

This step requires teachers, school management teams and district officials to look closely at the data they have available and decide whether they need additional diagnostic data in order to make decisions about the instruction and support to be provided to the learners and schools. The purpose is therefore to delve deeper into the learners’ profiles of strengths and needs in order to target specific areas of need (cf. Torgesen 2006). At a classroom level, diagnostic assessments answer the following questions: ‘Are we reasonably confident that the identified learners need support?’, ‘What are the learners’ skill strengths and needs?’ and ‘What building blocks are missing?’ At the district and/or school level, the following questions can be answered: ‘Are we reasonably confident in the accuracy of our data overall?’, ‘What reading skills are the learners missing?’ and ‘Which schools are experiencing what specific reading literacy skill needs?’

School management teams and district officials, tasked with monitoring the curriculum and assessment, should scan the system-level data and look for patterns in the data. Data of one classroom do not necessarily fit the pattern of other classrooms at a specific grade level. Similarly, the assessment results of the same grade may differ across schools. This could have implications for the level and type of support provided by districts to schools.

Diagnostic tests provide a deeper look at a broader set of skills, often with data that are more reliable than quick, informal tools and/or screening assessments. The word ‘diagnosis’ is derived from the Greek diagignōskein, meaning ‘to discern the nature and cause of anything’. The focus should be on determining what the learners’ skill strengths and needs are. The information obtained from diagnostic tests can be used for planning more effective instruction. It should be clear that ‘treatment’ without diagnosis is malpractice.

Recognising the underlying pattern of poor reading is particularly helpful in providing effective intervention and differentiation of classroom instruction. Research generally finds that there are three distinct subgroups of learners with reading problems: those with significant weaknesses in phonological processing and word-reading skills that depend on phonological processing; those with slow or dysfluent printed word recognition, most likely related to a specific problem with orthographic processing; and those with weaknesses in oral and written language comprehension (cf. Spear-Swerling 2015). The existence of these major types of reading problem areas indicates that the emphasis of instruction should vary according to the nature of a learner’s problem. No one programme or intervention will be appropriate for all learners who are below benchmark (e.g. PIRLS 2016). Literacy assessment teams should use diagnostic ‘digging’ to guide their decision-making (cf. Figures 1 and 2). If learners are reading at grade level, teachers should continue teaching reading as usual. If the learners’ reading comprehension is low, teachers should start by checking oral reading fluency. Learners should read text with sufficient speed, accuracy and expression to support comprehension. Accuracy means knowing the orthographic or spelling patterns of the words; automaticity refers to recognising and applying the patterns in words instantly (i.e. less than 1 s); phrasing refers to the grouping of words in grammatical entities (i.e. elaborated noun phrases, prepositional phrases, and verb and adverb phrases); and intonation means reading the text as though you are telling someone a story or conveying information. The digging should continue until the problem area has been identified (cf. Figure 1 and Figure 2).

FIGURE 1: Digging to diagnose target areas of instruction.

FIGURE 2: Reading literacy aspects to check.
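The diagnostic ‘digging’ sequence described above can be thought of as a decision tree: confirm comprehension, then check oral reading fluency, then dig into accuracy, automaticity, phrasing and intonation until the target area is found. The sketch below is one hypothetical rendering of that logic; the boolean flags stand in for judgements a teacher would make from diagnostic data and do not reproduce the actual flowcharts in Figures 1 and 2.

```python
# A hypothetical rendering of the diagnostic 'digging' logic as a decision tree.
# The flags below stand in for judgements a teacher would make from diagnostic
# data; they are illustrative and not part of the original figures.

def diagnose(comprehension_ok: bool, fluency_ok: bool,
             accuracy_ok: bool, automaticity_ok: bool,
             phrasing_ok: bool, intonation_ok: bool) -> str:
    """Return the next target area of instruction, digging one level at a time."""
    if comprehension_ok:
        return 'Reading at grade level: continue teaching reading as usual.'
    if not fluency_ok:
        # Fluency is weak: dig further into its components.
        if not accuracy_ok:
            return 'Target word accuracy (orthographic/spelling patterns, phonics).'
        if not automaticity_ok:
            return 'Target automaticity (instant word recognition, timed practice).'
        if not phrasing_ok:
            return 'Target phrasing (grouping words into grammatical units).'
        if not intonation_ok:
            return 'Target intonation (reading with expression).'
        return 'Target overall fluency with connected text.'
    # Fluency adequate but comprehension still low: look at language comprehension.
    return 'Target vocabulary and oral/written language comprehension.'

print(diagnose(comprehension_ok=False, fluency_ok=False,
               accuracy_ok=True, automaticity_ok=False,
               phrasing_ok=True, intonation_ok=True))
```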

Plan and implement support

This step focuses on the following questions: ‘What am I going to do about it?’, ‘Where do I need to focus intervention?’ and ‘What instructional differentiation or adaptation should I make?’ At a district and/or school level, the following questions can be considered: ‘At what grade level and what reading skill areas may support be needed?’ and ‘What is our district or school plan for support?’

Growing evidence suggests that high-quality reading instruction can be a powerful lever for preventing reading problems and significantly improving reading abilities of learners who are low performing (Allington 2006; Vellutino et al. 2004). One of the key features across effective instructional approaches and interventions is the teacher’s ability to differentiate instruction to meet the various needs of different learners as well as the particular strengths and needs of individual learners. A large body of research is emerging to confirm that the underlying roots of learners’ reading difficulties are diverse (cf. Valencia & Buly 2004). In addition, it is becoming quite clear that instruction focussed on the wrong thing not only does not help learners, but it may actually be harmful (Connor, Morrison & Katch 2004).

Research indicates that instruction for learners who have difficulties learning to read must be more focussed, explicit and comprehensive, more intensive and more supportive (Foorman & Torgesen 2001). In order to ensure that learners make progress, research indicates that the use of evidence-based materials and strategies is absolutely essential (Snow et al. 1998). Changing instruction usually means adjustments related to changing the intensity, the instructional approach or methodology and the group size or composition (Moats & Hancock 2012:118).

One of the most important questions may be the following: ‘What type of instruction?’ Even though ‘balanced literacy’ has become the mantra of early reading literacy education (cf. RSA DBE 2011), many classroom teachers are still not implementing instruction that is consistent with the evidence. In order to avoid the phonics versus whole language debate, I have aligned myself with the approach followed by Dale Willows, a literacy expert in Canada. She uses the metaphor of a balanced and flexible literacy diet to draw a parallel between the requirements of a healthy diet and important considerations in effective reading literacy instruction (Willows 2002). The notion underlying the literacy diet is that, in order to promote growth in reading literacy, it is important to provide the right amount and type of ‘food for reading literacy’, and teachers must ensure that every learner consumes enough of the right literacy foods on a daily basis. The literacy diet ‘components’ represent the equivalent of all the food groups (e.g. grains, fruit and vegetables, meat and alternatives, dairy products and alternatives). The key ‘food groups’ of the literacy diet are based on what is known from research (and practice) to be the essential components of effective reading instruction. These components are required in appropriate proportions, complementing each other in fulfilling all reading literacy nutritional requirements for growth. Classroom teachers need to understand what the components are and how, when and why they must be provided to ensure the literacy success of their learners. Flexibility is necessary to satisfy personal preferences. As in any other diet, not everyone enjoys all foods for reading literacy. In the reading literacy diet framework, it is appropriate to say ‘I don’t eat cauliflower!’, but it is not appropriate to say ‘I don’t eat vegetables!’ – for both teachers and learners. Teachers need to do a ‘reading literacy nutritional analysis’ and choose or create a balanced and appealing reading literacy diet for their learners (cf. Figure 3). There are many different ‘nutritious’ and motivating activities to provide each of the reading literacy diet components.

FIGURE 3: A reading literacy diet for Intermediate Phase learners reading at grade level.

Within the literacy diet metaphor, another useful concept is that human dietary requirements change at different stages. For example, when learners’ bones are growing, they require more foods from the dairy group, because these foods contain calcium. Similarly, learners at different stages of literacy development have different reading literacy nutritional needs. As learners progress through the stages, the components and activities in their literacy diet must change in order to promote growth. To be effective, teachers need to understand the requirements of the stages and provide their learners with stage-appropriate ‘foods for reading literacy’. Teachers who understand this complexity are well prepared to teach the vast majority of learners in their classrooms and to provide differentiated instruction for those who need ‘special literacy diets’. For example, if Grade 4 learners are not reading at grade level, the food groups given to these readers will have to change: they will need more from the fats and protein food groups (cf. Figure 3). In more serious cases, ‘iron’ supplements may be needed to ensure that they also progress on the trajectory to reading success (e.g. intensive support).

Evaluate and modify support

The purpose of assessment during this step is to monitor learners’ progress during the year to determine whether they are making adequate progress or whether they are falling behind. The frequency of monitoring is a reflection of risk: the higher the level of risk, the more frequent the monitoring. The aim of progress monitoring is to answer the following questions: ‘How much progress are my learners making?’ and ‘Is the support or intervention working?’ At a district or school level, the following questions need to be asked: ‘Are we making progress towards our district or provincial and/or national goals?’ and ‘Is our system of support effective for the schools in our district or for our school?’

Progress monitoring assessments are administered periodically (e.g. weekly, monthly, etc.) to determine whether learners are making progress. These assessments help teachers identify which learners have mastered specific skills and provide detail around the specific skills that learners have or have not mastered during that time period. The overarching purpose of progress monitoring tools is to provide teachers with information regarding learner progress in relation to the instruction or intervention they are currently receiving.
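One way to picture the principle that monitoring frequency reflects risk is a simple mapping from a learner’s screening status to a monitoring schedule, as in the hypothetical sketch below; the categories and intervals are illustrative assumptions rather than prescribed values.

```python
# Hypothetical mapping from risk status (from screening) to a
# progress-monitoring schedule. The categories and intervals are illustrative.

MONITORING_SCHEDULE = {
    'at risk':   'weekly progress monitoring',
    'some risk': 'monthly progress monitoring',
    'low risk':  'benchmark screening only (three times per year)',
}

def monitoring_plan(risk_status: str) -> str:
    """Return a monitoring frequency for a given risk status."""
    return MONITORING_SCHEDULE.get(risk_status, 'review screening data')

for status in ('at risk', 'some risk', 'low risk'):
    print(f'{status}: {monitoring_plan(status)}')
```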

Review outcomes

The purpose of assessment during this step is usually of a summative nature in order to assess whether the instruction provided in a unit, specific theme, during the term or across the year was successful in helping all learners meet departmental indicators or grade-level expectations. Classroom teachers should consider the following questions: ‘Have my learners learnt the material that has been taught?’ and ‘How successful was learning (i.e. acquiring the reading literacy building blocks at their developmental level)?’ At a district and school level, the following questions can be considered: ‘Have we met our district or school goals?’, ‘Is our system of support effective?’ and ‘How many of our schools, grades or learners may still need support?’

Outcome assessments are administered at the end of the term or year. They assess the extent to which the learner has learnt the skills or mastered the subject-specific requirements as set out in the CAPS curriculum throughout the term or year. These assessments are important, because they give district officials, school management teams and teachers feedback about the overall effectiveness of their curriculum and instructional practices.

The blueprint aims to ensure the achievement of crucial English reading literacy outcomes for both individual learners and systems at the classroom, school and district levels. The reading literacy outcomes (i.e. reading skills at benchmark) drive the decisions. If outcomes are adequate, then instruction and support are deemed adequate. However, if outcomes are not adequate, then a change is necessary. Changes that increase outcomes are maintained; changes that decrease outcomes are abandoned. Because reading literacy data are monitored so closely, instructional modifications can be made in a timely manner to ensure that all learners can achieve the goal of becoming established readers by the end of Grade 3 and that they remain on track for reading for meaning and learning in the Intermediate Phase.
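The outcome-review rule described above (maintain changes that improve outcomes, abandon those that do not) amounts to a small evaluate-and-modify loop. The following sketch illustrates the rule with hypothetical before-and-after class scores and an assumed benchmark; it is not a model of any particular intervention.

```python
# Illustrative sketch of the outcome-review rule: compare outcomes before and
# after an instructional change and keep the change only if outcomes improve.
# Scores are hypothetical class means on the same reading measure.

def review_change(outcome_before: float, outcome_after: float,
                  benchmark: float) -> str:
    """Decide whether to maintain or abandon an instructional change."""
    if outcome_after >= benchmark:
        return 'Outcomes adequate: maintain current instruction and support.'
    if outcome_after > outcome_before:
        return ('Outcomes improving but below benchmark: '
                'maintain the change and keep monitoring.')
    return 'Outcomes not improving: abandon the change and try a different adjustment.'

print(review_change(outcome_before=35.0, outcome_after=42.0, benchmark=40.0))
print(review_change(outcome_before=35.0, outcome_after=33.0, benchmark=40.0))
```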

Communicating assessment results

Snow, Griffin and Burns (2005:193) state that ‘a key use of assessment results is to communicate with learners about their work’. The purpose is to help learners gain insight into their own reading strengths and needs, and develop self-monitoring systems that lead to self-improvement. Engaging learners in critiquing their own work serves both cognitive and motivational purposes. The purpose of engaging learners in self-assessment is not to allocate a mark, but to gain insight that can be used to further learning (Darling-Hammond & Bransford 2005). Teachers who engage in regular classroom assessment can talk authoritatively about each learner’s strengths and weaknesses. They can provide parents with detailed evidence of their child’s progress or lack of progress and also make recommendations in terms of how parents can support their children (Snow et al. 2005).

Conclusion

Assessment is not an end in and of itself. It is one part of an identification, intervention and evaluation sequence. While accurate assessment can be a powerful tool for acquiring information, its value can only be realised in the context of a well-developed decision-making process that translates the information obtained from assessments into instructional differentiation, intervention and support that is matched to the individual needs of a child.

Today’s educational climate places immense pressure on teachers and education officials at all levels to collect and analyse learner reading literacy data. Most commonly, this burden takes the form of examining assessment scores with an eye focussed on reading skills that give the learners the most difficulty (e.g. Grade 4 learners’ reading comprehension). What is gleaned from such a practice only becomes meaningful if combined with purposeful actions that appropriately address targeted learner reading literacy outcomes. For many South African teachers, this process is often very intimidating. Furthermore, no classroom teacher or district official can act alone and expect any great measure of success. Thus, the data-based instructional decision-making blueprint highlights the need for a coordinated effort. All stakeholders have a vested interest in ensuring learner reading achievement, especially in English as a Home Language or as an Additional Language.

Acknowledgements

Competing interests

The author declares that she has no financial or personal relationships which may have inappropriately influenced her in writing this article.

References

Adam, A., 2013, ‘The development of a school-wide progress monitoring assessment system for early literacy skills’, PhD thesis, North-West University, Potchefstroom.

Allington, R.L., 2006, What really matters for struggling readers: Designing research-based programs, 2nd edn., Pearson Education, Boston, MA.

American Federation of Teachers, 1999, Teaching reading is rocket science. What expert teachers of reading should know and be able to do, report for the American Federation of Teachers, Item no. 39-0372, viewed 16 August 2016, from http://www.aft.org/pubs-reports/downloads/teachers/rocketsci.pdf

Connor, C.M., Morrison, F.J. & Katch, L.E., 2004, ‘Beyond the reading wars: Exploring the effect of child-instruction interactions on growth in early reading’, Scientific Studies of Reading 8(4), 305–336. https://doi.org/10.1207/s1532799xssr0804_1

Darling-Hammond, L. & Bransford, J., 2005, Preparing teachers for a changing world: What teachers should learn and be able to do, Jossey-Bass, San Francisco, CA.

Edwards, P.A., Turner, J.D. & Mokhtari, K., 2008, ‘Balancing the assessment of learning and for learning in support of student literacy achievement’, The Reading Teacher 61(8), 682–684. https://doi.org/10.1598/RT.61.8.12

Foorman, B.R. & Torgesen, J., 2001, ‘Critical elements of classroom and small-group instruction promote reading success in all children’, Learning Disabilities Research & Practice 16, 203–212. https://doi.org/10.1111/0938-8982.00020

Good, R.H., III, Kaminski, R.A., Cummings, K., Dufour, C.M., Petersen, K., Powell-Smith, K.A. et al., 2012, DIBELS next assessment manual, Dynamic Measurement Group, Eugene, OR.

Good, R.H., III, Simmons, D. C. & Smith, S., 1998, ‘Effective academic interventions in the United States: Evaluating and enhancing the acquisition of early reading skills’, School Psychology Review 27, 45–56.

Helman, L.A., 2005, ‘Using literacy assessment results to improve teaching for English-language learners’, The Reading Teacher 58(7), 668–677. https://doi.org/10.1598/RT.58.7.7

Herman, J. L., Osmundson, E. & Dietel, R., 2010, Benchmark assessment for improved learning (an AACC policy brief), University of California, Los Angeles, CA, viewed 12 July 2016, from http://www.cse.ucla.edu/policy/R1_benchmark.pdf

Howie, S.J., Combrinck, C., Roux, K., Tshele, M., Mokoena, G.M. & McLeod Palane, N., 2017, ‘PIRLS Literacy 2016 Progress in International Reading Literacy Study 2016: South African Children’s Reading Literacy Achievement’, Centre for Evaluation and Assessment, Pretoria.

Howie, S.J. & Tshele, M., 2017, ‘South African learner achievement in reading literacy in 2016’, in S.J. Howie, C. Combrinck, K. Roux, M. Tshele, G.M. Mokoena & N. McLeod Palane (eds.), PIRLS Literacy 2016 Progress in International Reading Literacy Study 2016: South African Children’s Reading Literacy Achievement, pp. 47–68, Centre for Evaluation and Assessment, Pretoria.

Kaminski, R.A., Cummings, K.D., Powell-Smith, K.A. & Good, R.H., 2008, ‘Best practices in using Dynamic Indicators of Basic Early Literacy Skills (DIBELS) in an Outcomes-Driven model’, in A. Thomas & J. Grimes (eds.), Best practices in school psychology, pp. 1181–1203, National Association of School Psychologists, Bethesda, MD.

Kanjee, A., 2008, ‘Assessment of and for learning: Supporting teachers to improve education quality’, paper presented at the Learning Counts: International Seminar on Assessing and Improving Quality Learning for All, UNESCO, Paris, 28–30th October.

Kanjee, A. & Croft, C., 2012, ‘Enhancing the use of assessment for learning: Addressing challenges facing South African teachers’, paper presented at the annual American Educational Research Conference, Vancouver, Canada, 13–17th April.

Kanjee, A. & Moloi, Q., 2014, ‘South African teachers’ use of national assessment data’, South African Journal of Childhood Education 4(2), 90–113. https://doi.org/10.4102/sajce.v4i2.206

Kanjee, A. & Mthembu, J., 2015, ‘Assessment literacy of foundation phase teachers: An exploratory study’, South African Journal of Childhood Education 5(1), 142–168. https://doi.org/10.4102/sajce.v5i1.354

Lonigan, C.J., Burgess, S.R. & Anthony, J.L., 2000, ‘Development of emergent literacy and early reading skills in preschool children: Evidence from a latent variable longitudinal study’, Developmental Psychology 36, 596–613. https://doi.org/10.1037/0012-1649.36.5.596

Mandinach, E.B., 2012, ‘A perfect time for data use: Using data-driven decision making to inform practice’, Educational Psychologist 47(2), 71–85. https://doi.org/10.1080/00461520.2012.667064

Mathes, P.G., 2015, ‘The case for early intervention in reading’, viewed 10 January 2017, from http://giftofliteracy.org/media/2015/10/Early_Intervention.pdf

Moats, L.C. & Hancock, C., 2012, Assessment for prevention and early intervention (K-3), 2nd edn., Cambium Learning Group, Longmont, CO.

National Institute of Child Health and Human Development (NICHD), 2000, Report of the National Reading Panel. Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction: Reports of the sub-groups (NIH publication No. 00-4754), U.S. Government Printing Office, Washington, DC.

National Research Council, 2001, Knowing what students know: The science and design of educational assessment, National Academy of Sciences, Washington, DC.

Nel, C., 2015, ‘Catch them before they fall: A focus on a school-wide reading assessment and support model’, LSSA/SAALA/SAALT joint annual conference, Potchefstroom.

Nel, C., Adam, A., Good, R.H. & Kaminski, R., 2015, ‘Assessment for prevention and early intervention’, in M. Nel (ed.), How to support English Second Language Learners – Foundation and intermediate phase, Van Schaik, Pretoria.

O’Connor, M., 2016, ‘Data-based instructional decision making related to basic early literacy skills in the intermediate phase’, PhD thesis, North-West University, Potchefstroom.

Paris, S.G., Paris, A.H. & Carpenter, R.D., 2001, Effective practices for assessing young readers, Center for the Improvement of Early Reading Achievement, Ann Arbor, MI.

Pretorius, E.J. & Ribbens, R., 2005, ‘Reading in a disadvantaged high school: Issues of accomplishment, assessment and accountability’, South African Journal of Education 25(3), 139–147.

Pryor, J. & Lubisi, C., 2002, ‘Reconceptualising educational assessment in South Africa: testing time for teachers’, International Journal of Educational Development 22(1), 673–686. https://doi.org/10.1016/S0738-0593(01)00034-7

Ramsuran, A., 2006, ‘How are teachers’ understandings and practices positioned in discourses of assessment?’, paper presented at the Proceedings of the 4th Sub-Regional Conference on Assessment in Education, Johannesburg, 26–30th June.

Republic of South Africa, Department of Basic Education (RSA DBE), 2011, Curriculum and assessment policy statements (CAPS). English Home Language. Foundation Phase Grade R-3, Government Printer, Pretoria.

Republic of South Africa, Department of Higher Education and Training (RSA DoHET), 2011, The minimum requirements for teacher education qualifications, Government Gazette No. 34467, Department of Education, Pretoria.

Shen, J. & Cooley, V.E., 2008, ‘Critical issues in using data for decision-making’, Leadership in Education 11(3), 319–329. https://doi.org/10.1080/13603120701721839

Snow, C.E., Burns, S.M. & Griffin, P., 1998, Preventing reading difficulties in young children, National Academy Press, Washington, DC.

Snow, C.E., Griffin, P. & Burns, M.S., 2005, Knowledge to support the teaching of reading: Preparing teachers for a changing world, Jossey-Bass, San Francisco, CA.

Spaull, N., 2016a, ‘Remodeling the annual national assessments. Thoughts and suggestions for the road ahead’, presentation made to the Department of Basic Education, 02 March, Pretoria.

Spaull, N., 2016b, ‘The biggest solvable problem in SA: Reading’, The Star, 29 March, pp. 16–17.

Spaull, N., 2016c, ‘What do we know about reading outcomes in South Africa?’, presentation to Bridge Forum, Johannesburg, 18 May.

Spear-Swerling, L., 2015, ‘Common types of reading problems and how to help children who have them’, The Reading Teacher 69(5), 513–522. https://doi.org/10.1002/trtr.1410

Storch, S.A. & Whitehurst, G.J., 2002, ‘Oral language and code-related precursors to reading: Evidence from a longitudinal structural model’, Developmental Psychology 38, 934–947. https://doi.org/10.1037/0012-1649.38.6.934

Torgesen, J.K., 2006, A comprehensive K-3 reading assessment plan: Guidance for school leaders, RMC Research Corporation, Center on Instruction, Portsmouth, NH.

Valencia, S.W. & Buly, M.R., 2004, ‘Behind test scores: What struggling readers really need’, The Reading Teacher 57(6), 520–531.

Vellutino, F.R., Fletcher, J.M., Snowling, M.J. & Scanlon, D.M., 2004, ‘Specific reading disability (dyslexia): What have we learned in the past four decades’, Journal of Child Psychology and Psychiatry 45(1), 2–40. https://doi.org/10.1046/j.0021-9630.2003.00305.x

Wayman, J.C., Cho, V. & Johnston, M.T., 2007, The data-informed district: A district-wide evaluation of data use in the Natrona County School District, The University of Texas, Austin.

Willows, D., 2002, ‘The balanced literacy diet’, School Administrator 59, 30–33.

Wohlstetter, P., Datnow, A. & Park, V., 2008, ‘Creating a system for data-driven decision-making: Applying the principal-agent framework’, School Effectiveness and School Improvement 19(3), 239–259. https://doi.org/10.1080/09243450802246376


 
