Every single federal, state, and district policy decision aimed at improving student academic performance has a set of taken-for-granted assumptions that link the adopted policy to classroom lessons.
From the widespread adoption of Common Core standards, to federal "Race to the Top" funding that pushed states to adopt charters and pay-for-performance schemes, to a local school board and superintendent deciding to give tablets to each teacher and student, these policies contain crucial assumptions, not facts, about outcomes that supposedly will occur once those new policies enter classrooms.
And one of those key assumptions is that new policies aimed at the classroom will get teachers to change how they teach for the better. Or else why go through the elaborate process of shaping, adopting, and funding a policy? Unfortunately, serious questions are seldom asked about these assumptions before or after super-hyped policies are adopted, money is allocated, expectations are raised, and materials (or machines) enter classrooms.
Consider a few simple questions that, too often, go unasked of policies heralded as cure-alls for the ills of low-performing U.S. schools and urban dropout factories:
1. Did policies aimed at improving student achievement (e.g., Common Core standards, turning around failing schools, pay-for-performance plans, and expanded parental choice of schools) get fully implemented?
2. When implemented fully, did they change the content and practice of teaching?
3. Did changed classroom practices account for what students learned?
4. Did what students learn meet the goals set by policy makers?
These straightforward questions about reform-driven policies inspect the chain of policy-to-practice assumptions that federal, state, and local decision-makers take for granted when adopting their pet policies. They distinguish policy talk (e.g., "charter schools outstrip regular schools," "online instruction will disrupt brick-and-mortar schools") from policy action (e.g., actual adoption of policies aimed at changing teaching and learning), and follow the chain to classroom practice (e.g., how teachers actually teach every day as a result of new policies) and student learning (e.g., what students actually learn from teachers who teach differently as a result of adopted policies).
Let’s apply these simple (but not simple-minded) questions to a current favorite policy of local, state, and federal policymakers: buy and deploy tablets for every teacher and student in the schools.
1. Did policies aimed at improving student achievement get fully implemented?
For schools from Auburn (ME) to Chicago to the Los Angeles Unified School District, the answer is "yes" and "no." The "yes" refers to the actual deployment of devices to children and teachers, but, as anyone who has spent a day in a school observing classrooms knows, access to machines does not mean daily or even weekly use. In Auburn (ME), iPads for kindergartners were fully implemented. Not so in either Chicago or LAUSD.
2. When implemented fully, did they change the content and practice of teaching?
For Auburn (ME), LAUSD, and all districts in between those east and west coast locations, the answer is (and has been for decades): we do not know. Informed guesses abound, but hard evidence taken from actual classrooms is scarce. Classroom research on actual teaching practices before and after a policy aimed at teachers and students is adopted and implemented remains one of the least studied areas. To what degree teachers have altered how they teach daily as a result of new devices and software remains unanswered in most districts.
3. Did changed classroom practices account for what students learned?
The short answer is no one knows. Consider distributing tablets to teachers and students. Sure, there are success stories that pro-technology advocates beat the drums for and, sure, there are disasters, ones that anti-tech educators love to recount in gruesome detail. But beyond feel-good and feel-bad stories yawns an enormous gap in classroom evidence of "changed classroom practice," "what students learned," and why.
What makes it hard to know whether teachers using devices and software actually changed their lessons, or whether test score gains can be attributed to the tablets, is the fact that where such results occur, those schools have engaged in long-term efforts to improve, say, literacy and math (see here and here). Well before tablets, laptops, and desktops were deployed, serious curricular and instructional reforms with heavy teacher involvement had occurred.
4. Did what students learn meet the goals set by policy makers?
Determining what students learned, of course, is easier said than done. With the three-decade-long concentration on standardized tests, "learning" has been squished into students answering multiple-choice questions with the occasional writing of short essays. And when test scores do rise, great debate erupts over which factor accounts for the gains (e.g., teachers, curricula, high-tech devices and software, family background; add your favorite factor here). Here, again, policymaker assumptions about what exactly improves teaching and what gets students to learn more, faster, and better come into play.
Take-away for readers: Ask the right (and hard) questions about unspoken assumptions built into a policy aimed at changing how teachers teach and how students learn.