Formative assessment occurs when teachers check student understanding and guide decision making to improve learning. Formative assessment is a powerful way to improve student achievement, particularly when teachers use data to adjust instruction (Black & Wiliam, 1998a, 1998b; Boston, 2002; Roediger & Karpicke, 2006; Speece, Molloy, & Case, 2003). Formative assessment can provide critical information about whether students understand the targeted concepts and skills, and if not, what problematic or partial understandings are present instead. Teachers can use the evidence about student understanding to guide students from partial or incorrect understandings toward targeted learning goals.
Black and Wiliam’s (1998a) review of 250 studies found effect sizes for formative assessment to be larger than those seen for any other instructional intervention tested. Formative assessment has also been shown to benefit student motivation: feedback to students about progress and performance can increase student persistence, sense of self-efficacy, and self-regulated learning (Black & Wiliam, 1998a; Brookhart, 1997, 2001; Stiggins, 2007). Still, teachers often feel they lack time to assess students given tight schedules for covering new content (Dodge, 2009).
Technology-enabled formative assessment has the potential to bring formative assessment and its associated benefits to more teachers, students, and classrooms in a timely, usable fashion (Bennett, 1999; Pellegrino, Chudowsky, & Glaser, 2001). Technology can help educators implement formative assessment effectively by enabling more immediate feedback, displaying feedback in readily usable ways, and providing new possibilities for assessing student understanding of scientific phenomena in dynamic, interactive ways (Gobert et al., 2013). Technology-based systems, which log students’ actions non-intrusively, can react on the basis of formative data to scaffold student learning in real time, even on open-ended tasks that require higher-order thinking skills (Pellegrino et al., 2001). (See, for example, the CIRCL Spotlight on dynamic formative assessment to enhance learning in virtual biology labs.) When carefully designed to align with the curriculum, standards, and large-scale tests, technology-supported classroom assessment also has the potential to generate data that are usable not only in guiding classroom instruction, but also in informing accountability programs (e.g., Wilson & Draney, 2004) and in improving program implementation.
Interest in technology-enabled assessment in K-12 education is accelerating (Olson, 2004). Important drivers of growth include the ongoing shift of assessment from paper to digital media, educational policies that promote formative assessment, and the desire of actors at all levels of the educational system to improve their performance. Today, many online testing companies (such as Renaissance Learning, www.renlearn.com) automatically grade students and provide reports. Classroom response systems (e.g., clickers) have been widely used to pose multiple-choice questions and collect responses from students instantly; students’ responses can be aggregated visually and shared immediately with the class for discussion (Bransford, Brophy, & Williams, 2000; Roschelle, Penuel, & Abrahamson, 2004; Zurita, Nussbaum, & Salinas, 2005).
Commercially available formative assessments, however, tend to focus on the most conventional aspects of school topics. Available assessments are more likely to measure student understanding of facts and procedures than of concepts and strategies. They are more likely to be informed by classical test theory than by learning-sciences methods, such as evidence-centered design (ECD; Mislevy, Steinberg, & Almond, 2003; Mislevy & Haertel, 2006). Formative assessments that are aligned with the ambitious elements of today’s standards are rare. Thus, important opportunities for advancing the field await research-based initiatives that integrate learning-sciences views of content and learning with technology and with modern assessment frameworks such as evidence-centered design.
Indeed, NSF-funded dynamic assessment systems such as ASSISTments, Science Learning by Inquiry, Diagnoser.com, and Simbio are going beyond commonplace formative assessments. For example, they combine formative assessments with real-time scaffolding of student learning. When students respond to problems in these systems, they receive hints and tutoring to the extent they need them, based on a student model that is developed and constantly updated by the system. Research-based systems are exploring the use of games, visualizations, and simulations in formative assessment, as well as more complex tasks and scenarios. These systems also provide teachers with detailed diagnostic reports to help them adjust their instruction accordingly.
While the positive role of formative assessment has been widely accepted in the educational field, challenges persist for the implementation of formative assessment practice and technology-enabled formative assessment in schools.
Data Mining. A key issue is the complexity of the log data from technology-enabled learning environments, and the difficulty of meaningfully distilling, parsing, and aggregating the large amounts of log data generated by students as they work in such environments (Quellmalz & Pellegrino, 2009; Gobert et al., 2013). See the Educational Data Mining and Learning Analytics synthesis for more discussion of this issue.
Professional Development. The most effective formative assessment is embedded within classroom activity and happens on a moment-to-moment basis. An implementation challenge is helping teachers develop formative assessment practices and integrate them with instruction (including, concretely, what they should do next). Technology-enabled formative assessment practices have the potential to increase student learning, but only where teachers are prepared to adjust instruction and learning activities quickly and responsively while learning is underway. Professional development is needed to help teachers understand the output of formative assessment systems and respond appropriately to the results.
Design and Accessibility. The user experience (for both student and teacher views) needs to be well designed and highly accessible to lower the demands on teachers and students. For example, the ease of collecting data in technology-enabled assessment systems can lead to reports that overwhelm rather than inform. Ideally, the technology should provide clear opportunities and resources for intervention. Careful design is required so that assessment feedback and reporting are informative, understandable, and immediately actionable by teachers and students.
Technology cost and support. The cost of introducing technology (clicker systems, laptop or desktop computers, touch pads, smart boards or other types of display stations, etc.) in the classroom can be high. When a project introduces technology into a classroom, how will the technology be maintained? What technology support is provided by the project vs. the school? Who pays for repairs? Can the school IT staff understand and support the technology?
Examples of NSF Cyberlearning projects that overlap with topics discussed in this primer:
- DIP: Game-based Assessment and Support of STEM-related Competencies
- Badge-Based STEM Assessment: Current Terrain and the Road Ahead
- EXP: Enabling Pedagogical Communication Between Learning and Programming Environments
- EXP: Collaborative Research: Fostering Ecologies of Online Learners through Technology Augmented Human Facilitation
- EXP: Collaborative Research: A cyber-ensemble of inversion, immersion, collaborative workspaces, query and media-making in mathematics classrooms
- EXP: Inq-Blotter - A Real Time Alerting Tool to Transform Teachers' Assessment of Science Inquiry Practices
- EXP: RUI: Exploring Spatial-Temporal Anchored Collaboration in Asynchronous Learning Experiences
- EXP: Learning Lens: An Evidence-Centered Tool for 21st Century Assessment
Exemplary web sites and examples:
ASSISTments mathematics tutoring system
Diagnoser.com instructional tools for science and mathematics
Simbio virtual biology experiments
Science Learning by Inquiry microworlds for inquiry
SimScientists science learning and assessment projects
Virtual Performance Assessment Project to assess students’ science inquiry skills
Crystal Island intelligent game-based learning environment
References and key readings documenting the thinking behind the concept, important milestones in the work, foundational examples to build from, and summaries along the way.
Bergan, J. R., Sladeczek, I. E., Schwarz, R. D., & Smith, A. N. (1991). Effects of a measurement and planning system on kindergartners’ cognitive development and educational programming. American Educational Research Journal, 28, 683–714.
Black, P., & Wiliam, D. (1998a). Assessment and classroom learning. Assessment in Education: Principles, Policy and Practice, 5, 7–74.
Black, P., & Wiliam, D. (1998b). Inside the black box: Raising standards through classroom assessment. Phi Delta Kappan, 80(2), 139–149.
Boston, C. (2002). The concept of formative assessment. Practical Assessment, Research & Evaluation, 8(9).
Bransford, J., Brophy, S., & Williams, S. (2000). When computer technologies meet the learning sciences: Issues and opportunities. Journal of Applied Developmental Psychology, 21(1), 59–84.
Bransford, J., Brown, A., & Cocking, R. (Eds.). (2000). How people learn: Brain, mind, experience, and school. Washington, DC: National Academy Press.
Brookhart, S. M. (1997). A theoretical framework for the role of classroom assessment in motivating students’ effort and achievement. Applied Measurement in Education, 10(2), 161–180.
Brookhart, S. M. (2001). Successful students’ formative and summative uses of assessment information. Assessment in Education, 8(2), 153–169.
CCSSO (2007). Formative Assessment for Students and Teachers. Retrieved from http://www.ccsso.org.
CTB/McGraw Hill. (n.d.). Promoting student achievement using research-based assessment with formative benefits. A White Paper prepared by CTB/McGraw Hill.
Crawford, V. M., Schlager, M., Penuel, W. R., & Toyama, Y. (2008). Supporting the art of teaching in a data-rich, high performance learning environment. In E. B. Mandinach & M. Honey (Eds.), Linking data and learning (pp. 109-129). New York: Teachers College Press.
Dodge, J. (2009). 25 Quick Formative Assessments for a Differentiated Classroom. New York, NY: Scholastic Teaching Resources.
Feng, M., Heffernan, N. T., & Koedinger, K. R. (2009). Addressing the assessment challenge with an online system that tutors as it assesses. User Modeling and User-Adapted Interaction, 19(3), 243–266.
Gobert, J., Sao Pedro, M., Raziuddin, J., and Baker, R. S., (2013). From log files to assessment metrics for science inquiry using educational data mining. Journal of the Learning Sciences, 22(4), 521-563.
Herman, J., & Gribbons, B. (2001). Lessons learned in using data to support school inquiry and continuous improvement: Final report to the Stuart Foundation (CSE Technical Report 525). Center for the Study of Evaluation, University of California, Los Angeles.
Lewis, A. (2006). Celebrating 20 Years of Research on Educational Assessment: Proceedings of the 2005 CRESST Conference.
Minstrell, J. (2001a). Facets of students’ thinking: Designing to cross the gap from research to standards-based practice. In K. Crowley, C. D. Schunn and T. Okada (Eds.), Designing for Science: Implications for Professional, Instructional, and Everyday Science. Mahwah: Lawrence Erlbaum Associates.
Minstrell, J., Anderson, R., Kraus, P., & Minstrell, J.E. (2008). Bridging from practice to research and back: tools to support formative assessment. In J. Coffey, R. Douglas and C. Sterns (Eds.), Science Assessment: Research and Practical Approaches: NSTA Press.
Mislevy, R. J., & Haertel, G. D. (2006). Implications of evidence-centered design for educational testing. Educational Measurement: Issues and Practice, 25(4), 6–20.
Mislevy, R. J., Steinberg, L. S., & Almond, R. G. (2003). On the structure of educational assessments. Measurement: Interdisciplinary Research and Perspectives, 1, 3–67.
Olson, L. (2004, November 20). State test programs mushroom as NCLB mandate kicks in. Education Week, pp. 10–14.
Pellegrino, J. W., Chudowsky, N., & Glaser, R. (2001). Knowing what students know: The science and design of educational assessment. Washington, DC: National Academy Press.
Ravitz, J. (2000). Using Technology to Support Formative Assessment in the Classroom. Center for Innovative Learning Technologies (CILT) at the University of California at Berkeley, and Center for Technology in Learning (SRI International).
Roediger, H. L., III, & Karpicke, J. D. (2006). Test-enhanced learning: Taking memory tests improves long-term retention. Psychological Science, 17(3), 249–255.
Roschelle, J., Penuel, W. R., & Abrahamson, A. L. (2004). The networked classroom. Educational Leadership, 61(5), 50–54.
Quellmalz, E., & Pellegrino, J. W. (2009, January 2). Technology and testing. Science, 323, 75–79.
Speece, D.L., Molloy, D.E., & Case, L.P. (2003). Starting at the beginning for learning disabilities identification: Response to instruction in general education. Advances in Learning and Behavioral Disabilities, 16, 37-50.
Stiggins, R.J. (2007). Assessment for learning: A key to student motivation and learning. Phi Delta Kappa EDGE, 2(2), 19 pp.
Wiliam, D. (2007). Keeping learning on track: Formative assessment and the regulation of learning. In F. K. Lester, Jr. (Ed.), Second handbook of mathematics teaching and learning. Greenwich, CT: Information Age Publishing.
Wilson, M., & Draney, K. (2004). Some links between large-scale and classroom assessments: The case of the BEAR system. In M. Wilson (Ed.), Towards coherence between classroom assessment and accountability. 103rd Yearbook of the National Society for the Study of Education (pp. 132–154). Chicago: University of Chicago Press.
Publications from NSF-funded Cyberlearning Projects
Yuan, Y., Chang, K. M., Taylor, J. N., & Mostow, J. (2014, March). Toward unobtrusive measurement of reading comprehension using low-cost EEG. In Proceedings of the Fourth International Conference on Learning Analytics and Knowledge (pp. 54–58). New York, NY: ACM.
Chen, J., & Zhang, J. (2016). Design collaborative formative assessment for sustained knowledge building using Idea Thread Mapper. In Proceedings of the International Conference of the Learning Sciences. Singapore: International Society of the Learning Sciences.
Primers are developed by small teams of volunteers and licensed under a Creative Commons Attribution 4.0 International License.
Feng, M., Gobert, J., & Schank, P. (2014). CIRCL Primer: Technology Enabled Formative Assessment. In CIRCL Primer Series. Retrieved from http://circlcenter.org/technology-enabled-formative-assessment/
After citing this primer in your text, consider adding: “Used under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).”