July 9, 2011

Briefing Paper on Assessment for Edusummit 2011 Draft4
Mary Webb (mary.webb@kcl.ac.uk), David Gibson (david.c.gibson@asu.edu)

Introduction

The Call to Action from Edusummit 2009 included two issues impacting the future of assessment that we would like to advance through discussions in the June 2011 meeting, with these goals:

•   To establish a clear view on the role of ICT in 21st century learning and its implications for formal and informal learning.
•   To develop new assessments designed to measure outcomes from technology-enriched learning experiences.

Since assessment exists in a complex, dynamic relationship with curriculum, pedagogy, and the needs and demands of the world outside of schools, a better understanding is needed of ICT’s role in 21st century learning, especially in relation to both the formal and informal settings where 21st century skills may be acquired. For example, beyond the world of formal education, ICT now sits at the leading edges of all the sciences and humanities as well as popular culture, where informal lifelong learning thrives. Computational thinking (imagination and problem solving aided by visualizations, algorithms, and software agents) is ubiquitous; it can be found when we shop online, plan a vacation, or find our way to a new address, as much as when scientists explore massive data sets looking for new patterns and natural laws, whether those be in language, arts, economics, or the physical sciences (Beinhocker, 2006; Davidson & Goldberg, 2009; Wolfram, 2002).

The major shift of the information age toward ICT in learning is mirrored by its integration into curricular frameworks in primary and secondary education. However, assessment frameworks have often not changed accordingly. There is therefore an urgent need for alternative assessment approaches and instruments, along with a better understanding of the impact of ICT on assessment. Our task in June 2011 is to explore the intersection of ICT in learning and the challenges of assessment.



In Edusummit 2009 the complex relationship between technology, curriculum and pedagogy was explained by considering the behaviour of a double pendulum. For example, if the two weights represent pedagogical content knowledge (PCK) (Shulman, 1986) and ICT, and are suspended from a single point - the aim of education - then the dynamics and motion created by this system are highly complex (Figure 1).

Figure 1. Chaotic behavior of a double pendulum

Which concept is prior or dependent (ICT or PCK) changes with each context and at different moments in time: at one moment one leads, and at the next instant the other does. The situation is complex: there are many potential trajectories and little chance of predictability.
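The sensitivity to initial conditions that makes the double pendulum a useful metaphor can be demonstrated numerically. The following Python sketch is our own illustration, not part of the original argument: the masses, lengths, starting angles and integration step are arbitrary assumptions. It integrates two frictionless double pendulums whose starting angles differ by one millionth of a radian and measures how far their trajectories drift apart.

```python
import math

G, L1, L2, M1, M2 = 9.81, 1.0, 1.0, 1.0, 1.0  # assumed parameters

def deriv(s):
    """Equations of motion for a frictionless double pendulum.
    State s = (th1, w1, th2, w2): angles and angular velocities."""
    th1, w1, th2, w2 = s
    d = th1 - th2
    den = 2 * M1 + M2 - M2 * math.cos(2 * d)
    a1 = (-G * (2 * M1 + M2) * math.sin(th1)
          - M2 * G * math.sin(th1 - 2 * th2)
          - 2 * math.sin(d) * M2 * (w2 ** 2 * L2 + w1 ** 2 * L1 * math.cos(d))
          ) / (L1 * den)
    a2 = (2 * math.sin(d) * (w1 ** 2 * L1 * (M1 + M2)
          + G * (M1 + M2) * math.cos(th1)
          + w2 ** 2 * L2 * M2 * math.cos(d))) / (L2 * den)
    return (w1, a1, w2, a2)

def rk4_step(s, dt):
    """One classical Runge-Kutta (RK4) integration step."""
    k1 = deriv(s)
    k2 = deriv(tuple(x + dt / 2 * k for x, k in zip(s, k1)))
    k3 = deriv(tuple(x + dt / 2 * k for x, k in zip(s, k2)))
    k4 = deriv(tuple(x + dt * k for x, k in zip(s, k3)))
    return tuple(x + dt / 6 * (p + 2 * q + 2 * r + w)
                 for x, p, q, r, w in zip(s, k1, k2, k3, k4))

def simulate(th0, seconds=20.0, dt=0.001):
    """Release both arms from rest at angle th0 and integrate forward."""
    s = (th0, 0.0, th0, 0.0)
    for _ in range(int(seconds / dt)):
        s = rk4_step(s, dt)
    return s

a = simulate(2.0)           # large-amplitude start: chaotic regime
b = simulate(2.0 + 1e-6)    # nearly identical start
separation = abs(a[0] - b[0])  # grows by orders of magnitude over 20 s
```

In the chaotic regime the two trajectories, initially indistinguishable, end up in entirely different configurations; this is the unpredictability that the analogy attributes to the interplay of ICT and PCK.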

Thus, when thinking of ICT and PCK the emphasis shifts from one to the other. While new technology facilitates novel PCK, new forms of PCK also enable and call for the implementation of ever more novel technologies. This new understanding brings us to conclude that both ICT and PCK are entwined in an inseparable co-evolutionary relationship, which is integral to the 21st Century view of teaching and learning. At the same time, ICT and PCK in the classroom are subservient to learning theories as much as to the aims of formal education. Furthermore, the relationship between pedagogy, assessment and curriculum has been represented in a dynamic triangular model (P. Black, 2000). So there is ample evidence that assessment rests in a complex evolving system with many influences that both attract and repel its course of development in formal and informal contexts (Figure 2).

The assessment of today’s curriculum needs to take account of the multiplicity of new literacies that have emerged as a result of ubiquitous technologies in the Digital Age, as well as our emerging understanding of the workforce needs of the 21st Century; for example, the literacies acquired and engaged through games, simulations and social media (Gee, 2004; Gibson, 2010; Jenkins, Purushotma, Clinton, Weigel, & Robison, 2006) and the increasing need for computational thinking and collaborative problem solving in real settings (Bransford, 2007; NAE, 2009). Innovative developments in assessment can support or even drive developments in both the curriculum and pedagogy. However, calcified assessment policies and practices can also act as barriers to change, as has often been identified, especially where assessments are high stakes for students or for ranking schools.

Figure 2. Dynamic triad of pedagogy, content and assessment

Research update
The nature of assessment
The nature and purpose of assessment require some clarification. The importance of formative assessment for pedagogy, and the relationship between formative and summative assessment, have recently been recognised by policy makers in many countries, but these issues are still subject to debate and ongoing research (Sebba, 2006). In particular, there are different understandings of formative assessment in different contexts (ibid). A key feature of formative assessment in any context is that learners and/or teachers use information obtained from assessment to understand learning needs and to adapt teaching and learning in order to meet those needs. The term assessment for learning (formative assessment) is often used to distinguish these practices from assessment of learning (summative assessment) (Black, Harrison, Lee, Marshall, & Wiliam, 2003). Therefore, whether any particular assessment instrument is formative or summative depends on the use to which it is put (Black & Wiliam, 2009). Some assessments designed for summative purposes may be used formatively, and perhaps vice versa.
A framework (see Figure 3) proposed by Black and Wiliam (2009), based on empirical studies of formative assessment over the last fifteen years, has proved useful for understanding formative assessment in classroom interactions and activities (Webb & Jones, 2009; Webb & Jones, 2011, in preparation).
Teacher:
•   Where the learner is going: 1. Clarifying learning intentions and criteria for success
•   Where the learner is right now: 2. Engineering effective classroom discussions and other learning tasks that elicit evidence of student understanding
•   How to get there: 3. Providing feedback that moves learners forward

Peer:
•   Where the learner is going: Understanding and sharing learning intentions and criteria for success
•   Where the learner is right now / How to get there: 4. Activating students as instructional resources for one another

Learner:
•   Where the learner is going: Understanding learning intentions and criteria for success
•   Where the learner is right now / How to get there: 5. Activating students as the owners of their own learning

Figure 3: Aspects of formative assessment (Black & Wiliam, 2009)

This framework integrates five key aspects of formative assessment and highlights the importance of learners taking ownership of their learning, as well as the role of peer interaction, including peer feedback (Black & Wiliam, 2009).
The use of peer assessment is increasing for both formative and summative purposes, and evidence suggests that in some contexts, especially in higher education, peer assessment provides reliability similar to tutor assessment (e.g. Davidson & Goldberg, 2009; Topping, 1998, 2008).
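To illustrate how such reliability comparisons are typically made, the sketch below computes a Pearson correlation between tutor and peer marks. The marks are invented for illustration; they do not come from the studies cited above, and a real study would use more raters and an agreement statistic appropriate to its design.

```python
# Hypothetical marks for ten essays, scored 0-100 (invented data).
tutor = [62, 71, 55, 80, 48, 90, 67, 74, 59, 83]
peer  = [60, 73, 52, 78, 50, 88, 70, 72, 61, 85]

def pearson(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson(tutor, peer)  # close to 1.0 when peers track the tutor
```

A high correlation of this kind is the sense in which peer marks can be said to show "similar reliability" to tutor marks.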
In designing e-assessments, a broader and more flexible view of the nature and purpose of assessment may be possible, both in design processes and in implementation contexts.


Effects of technological advances for assessment
New technologies can support both formative and summative assessment, and technological advances are increasing the range of possibilities for assessment. For example, students can be assessed through simulations, e-portfolios and interactive games (Clarke & Dede, 2010; Gibson, in press; Gibson, Cheong, Stuit, Annetta, & Nolte, 2009; Quellmalz, Timms, & Schneider, 2009). However, whereas many of the technical challenges of enabling the development of e-assessment are being overcome, there are still many barriers to the widespread use of e-assessment for high-stakes testing at school level (Craven, 2010). In particular, designing suitable questions and tasks, especially for assessing higher-order thinking, can be demanding and time-consuming. Thus far the balance of e-assessment development has focused on skill-based assessments and on the more technical aspects of subjects (Craven, 2010); so there is an outstanding challenge as to how to assess higher-order, more complex, hard-to-measure yet highly valued outcomes.

Developments in technologies are making it possible for many types of assessment to be marked automatically, enabling large cost savings that will be enticing in austere times. Elements of assessments that still present significant technological challenges for automatic marking include: hand-drawn diagrams used to illustrate answers; collaborative problem-solving activities; and advanced essay answers, where quality measures are negotiated in academic communities. Therefore, ensuring that our focus remains on designing valid assessments of important knowledge and skills, rather than being seduced by potential time and cost savings, may be a significant challenge.
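The divide described above can be seen in miniature: rule-based scoring of the kind sketched below automates easily, whereas the judgement needed for diagrams, collaboration and essay quality does not. This is a deliberately simplistic Python illustration of our own; the keyword list and answer are invented, and real automatic markers are far more sophisticated.

```python
# Toy short-answer marker: awards one point per expected concept found
# in the response. It rewards keyword presence only and cannot judge
# reasoning, originality or the quality of an extended argument.
def mark(answer, keywords):
    text = answer.lower()
    return sum(1 for k in keywords if k in text)

keywords = ["photosynthesis", "chlorophyll", "light"]
score = mark("Plants use light and chlorophyll in photosynthesis.", keywords)
```

The ease of writing such a marker, set against its obvious blindness to meaning, captures why cost savings alone are a poor guide to what should be assessed automatically.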

Another characteristic of ICT-enabled assessments, especially in embedded contexts such as working through online problems, playing a digital game, or working collaboratively with a simulation to explore and discover patterns in data, is the creation of massive amounts of data about the interactions. For example, a log file for a single user might contain 70,000 records for a 10-minute interaction. Traditional educational research is not equipped to analyze this sort of data, which implies a need to explore this and other game changers for teacher education (Gibson, in press; Gibson & Knezek, in press).
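To give a sense of what analysing such interaction data involves, the following Python sketch aggregates a tiny event log of the kind a game or simulation engine might emit. The log format, field names and events are our own assumptions for illustration; a real log would run to tens of thousands of records per learner.

```python
import json
from collections import Counter

# Hypothetical interaction log: one JSON record per event.
raw = """\
{"t": 0.12, "user": "s01", "event": "move"}
{"t": 0.55, "user": "s01", "event": "inspect"}
{"t": 1.30, "user": "s01", "event": "move"}
{"t": 2.02, "user": "s01", "event": "hypothesis"}
{"t": 2.90, "user": "s01", "event": "move"}
"""

events = [json.loads(line) for line in raw.splitlines()]

# Two of the simplest derived measures: how often each action occurred,
# and how long the recorded interaction lasted.
counts = Counter(e["event"] for e in events)
duration = events[-1]["t"] - events[0]["t"]
```

Even these trivial aggregates (action frequencies, time on task) go beyond what a traditional test score records; the research challenge is turning such streams into valid evidence about learning.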

Formative assessment can be enabled by online peer assessment systems, adaptive feedback from computers, self-assessment, and systems that combine teacher, peer and automatic feedback. While, as discussed earlier, fully automated assessment systems are challenging to develop, there are many opportunities to use relatively simple technological tools, including Web 2.0 applications, to support peer assessment and feedback (Webb, 2011).

Current e-assessment in schools
Current e-assessment in schools is increasing, owing to a push by exam boards, and includes: on-screen marking by examiners; on-screen tests in some subject areas; and diagnostic tests that are designed to provide both formative and summative information to teachers. Exam boards in the UK claim that on-screen tests offer a more engaging method of assessment in a familiar, comfortable environment (AQA, 2011). OCR also claims that the benefits of on-screen tests include a richer assessment experience, instant feedback on results, a reduced administrative load, and increased flexibility over time and location. While the examination boards in the UK are focusing primarily on the opportunities e-assessment offers for increasing efficiency and cutting costs, some specialised services are offering diagnostic and formative assessment services to schools; for example, NFER provides tests for reading, maths and science (http://www.nfer.ac.uk/schools/nfer-formative-assessment-service/nfer-formative-assessment-service_home.cfm). These developments in e-assessment, which are similar to developments in the U.S. and elsewhere, are in support of the existing curriculum and pedagogy, and there is little evidence of significant change in what is being assessed, the purposes of assessment, or the value to learning.

What to assess?
Analysis of frameworks for 21st Century skills across the globe (Voogt, 2010) showed strong agreement on the need for skills in the areas of communication, collaboration, ICT literacy, and social and/or cultural awareness. Creativity, critical thinking, problem solving and the capacity to develop relevant and high-quality products are also regarded by most frameworks as important skills in the 21st century; see, for example, NAE (2004) and P2CS (2008). Therefore, developments in assessment systems need to focus on finding ways to assess these higher-order and more complex skills.
The skills and knowledge needed to make use of new technologies for learning, and to participate fully in the knowledge society, have been discussed extensively in recent years, and various new literacies have been defined, e.g. ICT literacy, information literacy, digital literacy and media literacy. Currently the International Association for the Evaluation of Educational Achievement (IEA) is planning a cross-country study of student computer and information literacy (CIL). CIL refers to an individual’s ability to use computers to investigate, create, and communicate in order to participate effectively at home, at school, in the workplace, and in the community (see http://www.iea.nl/icils.html).
A further challenge stems from current high-stakes assessments at school level focusing predominantly on assessing individuals: the importance of assessing collaborative work is sometimes recognised but rarely addressed. Furthermore, teacher assessment (e.g. observation, judgement, test making and scoring), which could contribute significant information for the assessment of 21st Century skills, has decreased, owing to concerns about reliability and costs that have been placed above considerations of validity, trustworthiness and value to the learner.

Issues, Unresolved questions & Concerns

What are we assessing?
Which are the elements of 21st Century learning that are not currently being assessed adequately?

Summative or formative purposes?
How can formative e-assessment enhance learning opportunities? Can assessments serve both formative and summative purposes? Can we design formative assessments and extract information from them for summative purposes? How? Does e-assessment facilitate this?

How important are summative assessment examinations? What are summative assessment purposes in the 21st Century?
Summative assessments tend to dominate our education systems, terrorise our young people and intimidate teachers. Many universities and employers claim that such assessments are not fit for purpose and prefer to use their own entrance examinations. Should we therefore be looking for alternatives, or should we reconsider whether summative information is needed at all? Should we consider entrance assessments instead?

How should we assess?
Which e-assessment methods should we adopt and for what purposes?
What combinations of e-assessment methods and more traditional methods might be suitable?
How often should we assess?

What are the unique affordances of ICT as a handmaiden to assessment?
What are the contexts of informal ICT use that offer assessment opportunities? What should researchers know how to do with massive assessment datasets that represent a single learner?

What are the game changers for teacher education?
Given our discussions, what should teacher educators know, and be able to do with this knowledge, about 21st Century learning contexts, ICT, and assessment?

References
Beinhocker, E. (2006). The origin of wealth: Evolution, complexity and the radical remaking of economics. Boston, MA: Harvard Business School Press.
Black, P. (2000). Policy, practice and research: the case of testing and assessment. In R. Millar, J. Leach & J. Osborne (Eds.), Improving Science Education: the Contribution of Research (pp. 327-346). Philadelphia: Open University Press.
Black, P., Harrison, C., Lee, C., Marshall, B., & Wiliam, D. (2003). Assessment for learning: putting it into practice. Buckingham, UK: Open University.
Black, P., & Wiliam, D. (2009). Developing the theory of formative assessment. Educational Assessment, Evaluation and Accountability, 21, 5-31.
Bransford, J. (2007). Preparing people for rapidly changing environments. Journal of Engineering Education, 96(1).
Clarke, J., & Dede, C. (2010). Assessment, technology, and change. Journal of Research in Teacher Education, 42(3).
Craven, P. (2010). History and challenges of e-assessment: The 'Cambridge Approach' perspective - e-assessment research and development 1989 to 2009. Cambridge: Cambridge Assessment.
Davidson, C., & Goldberg, D. (2009). The Future of Learning Institutions in a Digital Age. Chicago, IL: John D. and Catherine T. MacArthur Foundation.
Gee, J. (2004). What Video Games Have to Teach Us About Learning and Literacy. New York: Palgrave Macmillan.
Gibson, D. (2010). Bridging informal and formal learning: Experiences with participatory media. In Y. K. Baek (Ed.), Gaming for classroom-based learning: Digital role playing as a motivator of study (pp. 84 - 99). Hershey, PA: IGI Global.
Gibson, D. (in press). Elements of interactive digital media assessment. Journal of Technology and Teacher Education.
Gibson, D., Cheong, D., Stuit, D., Annetta, L., & Nolte, P. (2009). Assessment of learning with games and simulations. In Proceedings of Society for Information Technology & Teacher Education International Conference 2009.
Gibson, D., & Knezek, G. (in press). Game changers for teacher education. In C. Maddux (Ed.), Research Highlights in Technology and Teacher Education 2011. Alexandria, VA: AACE-SITE.
Jenkins, H., Purushotma, R., Clinton, K., Weigel, M., & Robison, A. (2006). Confronting the challenges of participatory culture: Media education for the 21st Century. New Media Literacies Project, 72. Retrieved from http://www.newmedialiteracies.org/files/working/NMLWhitePaper.pdf
NAE. (2004). The engineer of 2020. Retrieved from http://www.nap.edu/catalog/10999.html
NAE. (2009). Engineering in K-12 education: Understanding the status and improving the prospects. Washington DC: National Academy of Engineering.
P2CS. (2008). Partnership for 21st Century Skills. Retrieved Sept 9, 2008, from http://www.21stcenturyskills.org/
Quellmalz, E., Timms, M., & Schneider, S. (2009). Assessment of Student Learning in Science Simulations and Games. DC: National Research Council.
Sebba, J. (2006). Policy and Practice in Assessment for Learning: The Experience of Selected OECD Countries. In J. Gardner (Ed.), Assessment and learning (pp. 185-196). London: Sage.
Shulman, L. (1986). Those who understand: Knowledge growth in teaching. Educational Researcher, 15(2), 4-14.
Topping, K. (1998). Peer assessment between students in colleges and universities. Review of Educational Research, 68(3), 249-276.
Topping, K. J. (2008). Peer assessment. Theory into Practice, 48(1), 20-27.
Webb, M. E. (2011). Feedback enabled by new technologies as a key component of pedagogy. Paper presented at the Society for Information Technology and Teacher Education (SITE) Conference.
Webb, M. E., & Jones, J. (2009). Exploring tensions in developing assessment for learning. Assessment in Education: Principles, Policy & Practice, 16(2), 165-184.
Webb, M. E., & Jones, J. (2011, in preparation). Pedagogical issues in peer support for collaborative learning in primary classrooms.
Wolfram, S. (2002). A new kind of science. Champaign, IL: Wolfram Media.
