Much has already been written on the U.S. obsession with standardized test scores. Add to that obsession the passionate belief that policymakers who gather, digest, and use a vast array of numbers can reshape teaching practices.
I refer to data-driven instruction: a way of making teaching less subjective and more objective, less experience-based and more scientific; ultimately, a reform that will make teaching systematic and effective. Standardized test scores, dropout figures, and percentages of non-native speakers proficient in English are collected, disaggregated by ethnicity and school grade, and analyzed. Then, with access to data warehouses, staff can obtain electronic packets of student performance data to inform instructional decisions aimed at raising academic performance. Data-driven instruction, advocates say, is scientific and consistent with how successful businesses have used data for decades to make decisions that increased their productivity.
Not a new idea. Long before state- and district-designed tests, teachers assessed learning informally, accumulating information from pop quizzes, class discussions, observations of students in pairs and small groups, and individual conferences. Based on these data, teachers revised lessons, leaning heavily on their experience with students and the incremental learning they had accumulated from teaching 180 days a year, year after year.
In the 1990s, and especially after No Child Left Behind became law, electronically gathering data, disaggregating it by groups and individuals, and then applying lessons learned from the analysis to teaching became a top priority. Why? Because public reporting of low test scores and inadequate school performance brought stigma and high-stakes consequences (e.g., state-imposed penalties), up to and including a school's closure.
Now, principals and teachers are awash in data.
How do teachers use the mass of data available to them on student performance? Studies of teacher and administrator use reveal wide variation and different strategies. In one 2007 study of 36 instances of data use in two districts, researchers found 15 in which teachers used annual tests in basic ways, for example, to target weaknesses in professional development or to schedule double periods of language arts for English language learners. There were fewer instances of collective, sustained, and deeper inquiry by groups of teachers and administrators using multiple data sources (e.g., test scores, district surveys, and interviews) to, for example, reallocate funds for reading specialists or begin an overhaul of district high schools. Researchers pointed out how the timeliness of data, its perceived worth among teachers, and district support limited or expanded the quality of analysis. These researchers admitted, however, that they could not connect student achievement to the 36 instances of basic-to-complex data-driven decisions in these two districts.
Wait, it gets worse.
In 2009, the federal government published a report (IES Expert Panel) that examined 490 studies in which data were used by school staffs to make instructional decisions. Of these studies, the expert panel found 64 that used experimental or quasi-experimental designs, and only six (yes, six) met the Institute of Education Sciences standard for making causal claims about data-driven decisions improving student achievement. When reviewing these six studies, however, the panel found "low evidence" (rather than "moderate" or "strong" evidence) to support data-driven instruction. In short, the assumption that data-driven instructional decisions improve student test scores is, well, still an assumption, not a fact.
Another study offers little relief to advocates of data-driven school and classroom decisions.
In a 2014 study of three districts, researchers used the concept of "sensemaking" to understand why responses to data-driven instruction differed among district office administrators, principals, and teachers. They concluded that the roles these educators play, their ideas about what data mean, and the ends toward which they believe data should be used matter greatly in making decisions for the district, school, and classroom.
For example:
“In one district, there were clear divisions among roles … regarding perspectives on data. For example, central office members felt that data should be thought of holistically, with each form of data providing another dimension or “piece of the puzzle” about students…. Rarely did central office members discuss data in terms of specific educational practices. Rather, they emphasized understanding about the needs, motivations, and histories of students.”
Principals in this district, however, “saw data more specifically in terms of practice. They saw data as being important to meeting individual students’ needs. One described this as choosing “the right kids to work with on the right objectives at the right time.” They also saw data as supporting programmatic decisions, such as when designing interventions for struggling students or making course scheduling decisions.”
And when it came to teachers in this district, "the general sentiment … was that 'data' were about testing."
In another district, "teachers presented yet another view about data…. The general sentiment from teachers was that 'data' were about testing. These teachers, unlike [those in the other districts], did not focus on any particular test. Teachers at different levels named different tests, with the common thread being that teachers were required to give students assessments, but not to systematically reflect or act upon their results. In other words, … teachers [in this district] viewed 'data' as being about compliance and reporting information to central office, not necessarily 'use' [for altering practices]."
Thus far, then, not an enviable research record linking data-driven (or data-informed) decision-making to either classroom practices or student outcomes.
Numbers may be facts. Numbers may be objective. Numbers may smell scientific. But numbers have to be interpreted by those who do the daily work of classroom teaching. Data-driven instruction may be a worthwhile reform, but as a driver of evidence-based educational practice linked to student achievement, rhetoric notwithstanding, it is not there yet.
Data-Driven Teaching Practices: Rhetoric and Reality, by larrycuban (October 6, 2015)