There’s only one thing worse than requiring students to reduce all learning to a single “correct” answer, and that is reducing assessment and accountability to a single standardized test.
By Bob Peterson and Monty Neill
Assessment must consist of more than “paper and pencil” tests based on memorization. Photo by Susan Lina Ruggles.
Critics of standardized tests are often asked, “What’s your alternative?” It’s a legitimate — and important — question. Parents and community members have the right to know how well their children are learning.
Unfortunately, in part due to rhetoric that equates high standards with standardized tests, many parents believe that standardized tests will give them the answer. At the same time, parents are often the first to understand that the complexity of their child cannot be captured by a test score.
At issue is how to create alternatives to standardized tests that will inform parents and community members about how well the schools are doing and whether their children are learning what they need to know — that is, how to create an alternative approach to accountability. Teachers and parents also need to learn about and promote alternatives to “high-stakes” tests, the term used when a single exam determines whether a child is promoted, graduates from high school, or gets into college.
Standardized tests are just one type of assessment, although they often get the most publicity. It’s also important to recognize that teachers assess students regularly as part of their ongoing teaching. The challenge is to match assessment that is integrated into classroom instruction, and focused primarily on helping individual children, with assessment that provides the school- and district-wide information demanded by local and state officials or various community forces.
One of the first steps toward rethinking assessments is to ask, “What is the purpose of the assessment?” and, “Is this purpose worthy or meaningful?” Answering these questions means addressing what is important for students to learn, how we help them learn, and how we know what they have learned.
Too often, the rationale for standardized testing appears overly punitive: “We’re going to get these kids and schools to perform better — or else.” Such an approach forgets that assessment should serve one primary purpose: to improve student learning. The goal is not to flunk kids, not to wave fingers at lousy teachers, not to make bold pronouncements that will be remembered at election time, not to give kids more of the same even though it didn’t work the first time — but to provide information to help the student learn better.
Assessment serves other purposes as well. Community members may want data to see if schools are providing equal opportunity to all students. Policy-makers might want to know the effectiveness of various programs. Districts and state legislatures often use tests to hold schools accountable for how well they are spending taxpayers’ money. Schools might also use assessment as a way to report to parents, or summarize and certify a student’s achievement. Finally, districts might use changes in assessment policy to help transform the curriculum.
Depending on the purpose, different forms of assessment might be used. For example, an assessment designed to evaluate how well a school, overall, is teaching its students to read should not be used to decide whether a particular student should or should not be promoted to fourth grade. Furthermore, any assessment should ultimately serve, and not undercut, the primary goal of helping the student.
Alternatives to standardized testing are in use in both the United States and other industrialized countries — alternatives that range from student portfolios, to district-wide “proficiencies,” to outside review teams that evaluate a school. There is growing evidence that these measures do a better job of showing how well students and schools are performing.
The biggest drawback to most of these alternatives is that they challenge this country’s predominant approach to thinking and learning — that is, that we can only truly know something if it can be statistically and “objectively” determined and analyzed. History has unfortunately shown that such an approach has been used not just to predict, but to control the world and those who live in it. For many, the consequences are harmful, not beneficial.
Alternative assessments, on the other hand, require diversity in thinking about what is the purpose of knowledge and, indeed, even what constitutes knowledge. To challenge statistical ways of knowing is to challenge the status quo and its tendency to marginalize and describe as abnormal those who do not neatly fit into a statistical box. Alternative assessments mean alternative voices, perspectives, and actions. This is a vitally important reason why they should be embraced as an important part of accountability.
Other obstacles exist. Alternative assessments are new and, like any innovation, challenge those who prefer to do things the way they’ve always done them. It takes not only time but energy to re-educate teachers, parents, and students in new forms of assessments. Moreover, such assessments cost more because they require more sophisticated teaching, staff development, and scoring. Decent assessment can’t be done cheaply, any more than can decent education.
Nor are alternative assessments a magic bullet. Teachers and parents need to be aware of the strengths and weaknesses of any approach, and how to use it appropriately.
Following is a description of some of the most common forms of alternative assessments.
Portfolio-Based Assessment
One of the more promising forms of assessment is what is known as “portfolio-based assessment.” The approaches to portfolios vary considerably, but they all rest on records kept by the teacher and on collections of the student’s work, called the “student portfolio.” During the school year, teachers and students gather work which shows student progress and achievement in various subjects such as English or science. Students are usually encouraged to reflect on the work that has been selected. Such reflection helps students think not only about what they have learned, but about their own learning processes, all of which contributes to the overall goal of improving student learning.
In some approaches, at the end of a marking period the teacher examines the portfolio and evaluates the work based on a scoring guide. Sometimes students or their peers also score their work. The teacher ultimately records a score on what is sometimes called a “learning record,” attaching evidence such as a writing sample or write-up of a science experiment. This approach is useful for the teacher and parent in determining how well a student is progressing. But, through what is known as “random sampling,” it also can be the basis for improved professional development and for school- and district-wide accountability.
Under “random sampling,” a number of the learning records and student portfolios are selected randomly from each classroom. An independent group — of teachers from other schools, members of the community, or a combination of both — reviews the records and portfolios. If there is a big difference between the conclusions of the independent readers and the classroom teacher, a third group might be called in or a larger sample might be taken from the classroom, in order to determine how well a particular teacher consistently applies the agreed upon assessment guidelines.
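For readers who want to see the mechanics, the sampling-and-review procedure described above can be sketched in a few lines of code. This is only an illustration: the function names, the score scale, and the one-point tolerance are assumptions made for the example, not features of any particular district’s system.

```python
import random

def sample_for_review(portfolio_scores, sample_size, seed=None):
    """Randomly select student portfolios from one classroom for
    independent review. `portfolio_scores` maps student id to the
    classroom teacher's score; returns the sampled student ids."""
    rng = random.Random(seed)
    students = sorted(portfolio_scores)
    return rng.sample(students, min(sample_size, len(students)))

def flag_discrepancies(teacher_scores, reviewer_scores, tolerance=1):
    """Flag students whose independent reviewer's score differs from
    the teacher's score by more than the agreed tolerance, signaling
    that a third group or a larger sample may be needed."""
    return [sid for sid in reviewer_scores
            if abs(teacher_scores[sid] - reviewer_scores[sid]) > tolerance]
```

In practice the “reviewers” are teachers from other schools or community members reading actual student work; the code simply shows how a random sample plus a discrepancy check can surface classrooms where scoring guidelines are being applied inconsistently.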
Approaches of this sort have been developed in Britain, Australia, and the United States, particularly in Vermont, which has instituted statewide assessment programs in math and writing based on student portfolios. Projects such as the Learning Record, based in California, and the Work Sampling System, based in Ann Arbor, are other examples.
This classroom-based approach has several advantages. For example, the evaluation is based on a wide range of student work done over a long period of time, rather than on a single, paper-and-pencil test taken over a few hours. Further, the approach encourages schools and districts to invest in the professional development of the teachers and outside evaluators, and it pushes teachers to reflect more consistently on the quality of student work in their classroom.
One of the criticisms of this approach is that it works best when there are quality teachers. But such criticism needs to take into account that this classroom approach, over time, can encourage collaboration between teachers and improve their work. If done properly, this approach has teachers regularly talking about students’ work and allows more-skilled teachers to help less-experienced teachers. Such portfolio discussions will inevitably include not only how to evaluate student work, but the nature of the work that is going on in particular classrooms, and strategies to get students to do better work. This approach can benefit a weak teacher, certainly more than standardized tests do.
Another criticism, especially when teachers have little control over what types of materials are to be included in the portfolio, is that the portfolio requirements can “hijack” the curriculum and overly dominate what is taught. For instance, if a district decides that the English portfolio for eighth graders needs to have an example of a business letter and a five-paragraph essay, the teacher may focus so much on those requirements that there is little time for other important topics such as poetry, creative writing, or literary analysis. One solution is to require a wide range of types of writing in a writing portfolio, as Vermont does. Many educators also note that it is better to have a “portfolio-driven curriculum,” which is based on real student work, than a curriculum shaped by standardized tests and their reliance on random bits of memorized data and procedures.
Another problem with portfolios is logistics. Where does a high school English teacher store over 100 portfolios? How does an elementary school maintain portfolios as students move up in grades? How does student mobility affect this kind of record keeping? One creative solution is to videotape portfolios; another is to save the information digitally on a computer. Though methods vary, teachers and schools are overcoming these problems.
A fourth criticism of the portfolio approach is that it relies too much on the individual judgment of teachers and opens the door to overly subjective evaluation. This concern has been raised most directly where teachers may not be sensitive to the needs and skills of students of color, or non-English speakers, or immigrants. Clearly, this is a serious issue. At the same time, it is a problem that pervades all forms of assessment. Who, for example, chooses the questions on standardized tests? Rarely is it immigrants, or non-English speakers, or educators of color.
If the outside evaluators are sensitive to this potential problem, portfolio-based assessment can be used to identify teachers who are subjectively giving lower evaluations to particular groups of students or teachers whose pedagogical weaknesses lead them to have students focus on mindless worksheets rather than engaging projects.
Overall, we have found that portfolios are central to high-quality schooling. They can foster collaboration among teachers, focus attention on getting students to do quality work, and provide data to the community on how well a school is performing.
Performance Exams
Some states and districts have adopted what are called performance examinations. These are tests given to all students, based on students “performing” a certain task, such as writing an essay, conducting a science experiment, or doing an oral presentation which is videotaped.
The Milwaukee Public Schools have done extensive work on developing such performance exams in the areas of writing, science, math, visual arts, and oral communications. For example, fourth or fifth graders must perform a 3-5 minute oral presentation. In writing, fourth, fifth, eighth, 11th, and 12th graders all have to write and revise an essay over a period of two days, based on a district-wide prompt that changes from year to year and covers different genres, from imaginary writing, to narrative essays, to expository essays. These essays are then judged independently and anonymously by teachers from the district, using a scale of one to four. Two teachers read each essay, and the final score is the sum of the two readers’ scores. To reduce subjectivity, if there is a difference of more than one point between the two readers’ evaluations, a third reader scores the paper.
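The two-reader scoring rule can be expressed as a short procedure. The sketch below follows the description above (sum of two 1-to-4 scores, with a third reader called in when the first two differ by more than a point); how the three readings are then combined is not spelled out here, so summing the two closest scores is our assumption for the example.

```python
def final_essay_score(r1, r2, get_third_score):
    """Combine two anonymous readers' 1-4 essay scores.

    The final score is the sum of the two readings. If the readers
    differ by more than one point, a third reader scores the paper;
    combining the three readings by summing the two closest scores
    is an assumption made for this illustration.
    """
    if abs(r1 - r2) <= 1:
        return r1 + r2
    scores = sorted([r1, r2, get_third_score()])
    # Keep the pair of readings that agree most closely.
    if scores[1] - scores[0] <= scores[2] - scores[1]:
        return scores[0] + scores[1]
    return scores[1] + scores[2]
```

Note that the third reader is consulted only when needed, which is why the third score is passed as a function rather than a value.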
Some districts also use these performance exams as a way to check how well classroom teachers are scoring their student portfolios. If large numbers of students are doing well on the performance exams yet score poorly on the student portfolios, or vice versa, it sends a signal that follow-up needs to occur.
These performance exams have the advantage over standardized tests in that they “drive the curriculum” in a relatively progressive way. In Milwaukee, the assessments have encouraged teachers to focus on actual student writing rather than fill-in-the-blank work sheets. They have led to more hands-on science experiments where students actually learn the scientific process and how to reflect on and analyze data, rather than merely answer questions at the end of a textbook chapter. The oral presentations have been a useful way to get students actively involved, rather than merely listening to lectures by the teacher; they also force teachers to pay attention to oral communication skills, which cannot be tested with a paper-and-pencil exam. The actual performance assessments, once they are scored, can become part of student portfolios.
Teachers who help write the performance assessment tasks (or prompts) learn a lot about how to develop more interesting and academically valuable projects for their students.
Performance exams are one form of “performance assessments” which most often take the form of projects, from laboratory experiments to group activities to exhibitions (described below) which are done as part of classroom work. (Sometimes the term includes portfolios as well.) Using performance exams can encourage teachers to use a wider range of activities in the classroom, which can enrich instruction, deepen learning, and provide detailed assessment information.
Performance exams have not been used more widely in part because they take considerable time, both for the classroom teacher and the district. It takes time, expertise, and ultimately money to develop the prompts and score the assessments, to say nothing of training teachers in activity-based teaching methods necessary for such performance assessments.
Some very good teachers, particularly those who have spent years developing a cohesive curriculum for their classroom, may find that the exams disrupt the flow of classroom work, although this shouldn’t be as much the case if the assessments are carefully aligned with good instructional practices.
Finally, another problem is that performance exams, as with any kind of assessment, can tempt teachers to “teach to the test.” Even in performance assessment, the emphasis must remain on higher-level thinking skills instead of on recall and memorization.
Writing in an opinion piece this December in The New York Times, Harvard professor Howard Gardner cautioned, “It might now seem far better to teach students how to write a personal essay than to simply ask them multiple-choice questions about a passage. Yet it is possible even with essay tests to teach students to do well through mimicry rather than through general writing skills. … Educators and parents should value the development of knowledge and skills that go beyond a single test. That is, high performance should be an incidental result of strong general preparation.”
As with using random sampling of student portfolios, sampling can also be used with performance exams. The National Assessment of Educational Progress (NAEP), a federally mandated testing program that monitors student achievement, uses such a technique. When the NAEP reports, for example, on the progress of U.S. fourth graders, the data are based on a sample of students. Some states, such as Maryland, are also adopting this approach. The Maryland State Performance Assessment Program (MSPAP) covers writing, reading, math, science, and social studies; it also includes interdisciplinary exams. Each student is given an exam in only one subject area. This does not give an overall assessment of each student, but it gives the school a score that covers all subject areas and provides comprehensive data.
We believe that performance assessments — including performance exams — can be useful, especially when they are integrated into the ongoing curriculum. They can suffer, however, when they are isolated from daily classroom life and imposed from above.
Proficiency Exit Standards
The assessment known as “proficiency exit standards” combines the approaches of portfolio-based assessment and performance exams; it also sometimes includes standardized tests.
Under this approach, students have to meet certain standards in order to be promoted to the next grade or to graduate from high school. In Milwaukee, for example, the district has developed proficiencies that students need to meet in order to complete eighth grade and graduate from high school. The proficiency standards focus on four broad areas — math, science, communication, and a research project — and are generally considered more rigorous than most standardized exams.
Students are given several ways to show “proficiency” in each of these areas — through portfolios, classroom projects such as science projects, performance exams, standardized test scores, and research papers. The district took this approach because it did not want to rely on any single assessment to determine whether a student could be promoted or graduate.
In one example of how reliance on standardized tests is undercutting alternative assessment, the Milwaukee Public Schools recently moved to give increased weight to standardized test scores, allowing high school students to meet certain proficiencies by merely passing the standardized Wisconsin Student Assessment System tests.
Exhibitions
Exhibitions of student work are another useful assessment. Perhaps the most common exhibition is also one of the oldest — the science fair. As with any student work, the strength of the approach rests on providing ways for all students to succeed. Everyone knows stories of parents who do the science fair project for their kid, building elaborate electrical engines or wondrous weather kits. Some schools try to get around this problem by having students work on the projects at school.
At Central Park East in New York City, exhibitions are used along with portfolios. In order to graduate, students have to demonstrate competencies in 12 areas of learning and present their portfolio work to a committee of adults — somewhat similar to the oral exams common for postgraduate degrees.
At La Escuela Fratney in Milwaukee, at the end of fifth grade (before they leave for middle school), students select some of their work from throughout the year and invite family and community members to an open exhibition. One project that figures prominently is the student-made book, in which students reflect on what they’ve learned throughout elementary school. The book also includes examples of work from their entire time at Fratney, which have been collected as part of their portfolios.
Parent Conferences and Input
One important reason for assessment is to let parents know how well their child is progressing. This purpose cannot be separated from the larger issue of communication between school and home. A number of schools are experimenting with assessment programs that are based on a process of two-way communication.
Some schools, for instance, have lengthy conferences with parents before their child even enters kindergarten, both explaining the school’s programs and getting input from the family on the child’s strengths and weaknesses. Other schools have adapted their parent-teacher conferences so that they do a better job of letting parents and teachers talk together about the child’s progress. In order for such an approach to work, parent-teacher conferences need to go beyond the “five minutes per teacher” syndrome that is particularly common in middle and high schools — where teachers haul out the grade book and talk, and parents listen.
In this approach, schools need to ensure that they give parents a clear idea of the school’s curriculum and a general view of child development. This is particularly important in early elementary grades, where children develop at different rates and ages and children cannot be pigeon-holed into a single set of expectations. Likewise, in adolescence, teachers and parents need to communicate about developmental issues and how they may be affecting student performance.
Some schools involve students in the conferences. Students are asked to present work from their portfolios, reflect on what they have learned, and help figure out where they have made good progress and where they still need work.
To work best, such an approach needs to be part of a comprehensive effort to ensure that parents know they can raise concerns at any point during the school year, not just at conference time. Soliciting and encouraging such parental input is not easy but is essential if there is to be a true collaboration between home and school.
This issue is, in the final analysis, grounded in difficult questions of the power imbalances in most schools, particularly along lines of race and class. Some schools have taken preliminary steps in trying to address this problem by hiring a parent organizer/liaison, or having a parent center, or forming a parent/teacher curriculum committee, or ensuring that principals welcome parental input rather than view it as yet another chore. In some districts, such as Rochester, NY, parents are involved in teacher evaluation; how well a teacher communicates with parents is specified as a part of the evaluation.
School Report Cards
Just as parents need to know how well their child is doing, communities have the right to know how well entire schools are performing. Sometimes, this happens in a rather distorted way: the local newspaper ranks schools based on a single standardized test or battery of tests. Beyond the cold hard number, there is little analysis of how or why some schools are performing differently — or even if the test is a valid measure of student achievement. Equally troubling, a school’s performance often tells more about the income level of the students’ families than the quality of teaching and learning at the school.
In the last few years, a growing number of schools have issued “school report cards” — in fact, over two-thirds of states now require such report cards, and many are posted on web sites.
School report cards generally go beyond a listing of test scores, although that data is included. Other information in the report, depending on the state or district, can include attendance, average grade point, the number of Advanced Placement courses, discipline issues such as suspension rates, parental involvement, types of assessment (such as whether performance exams are required in certain subjects) and their results, school mission and governance structure, and so forth. The information is sometimes broken down by race, gender, socio-economic status, first language, and other important categories, in order to show how well schools are serving students from diverse backgrounds.
While such report cards are superior to a simple listing of test scores, there are important cautions: in particular, data can be omitted or manipulated. Some high schools, for example, have a policy of dropping students from a class if they have more than three unexcused absences. As a result, the grade point average in that class can be artificially high because only a select group of students is included. Also, if the primary data on student learning is from standardized test scores, as is often the case, then parents will have too little information.
Overall, school report cards need to reflect a much richer view of student learning, such as can be found in portfolios and exhibitions. In fact, rather than just a “report card,” some schools have begun to develop school-level portfolios. Other schools and outside people can evaluate the school by looking at portfolios and by visiting the school.
School Quality Review Teams
Because student success is intimately related to the culture of learning in an entire school, one valuable assessment, known as the “School Quality Review Team,” focuses on school-wide issues.
Teams of trained educators and community members visit schools, usually for up to a week. The teams observe classrooms, follow students, examine the curriculum, and interview parents and teachers. Based on their observations, they write up a formal report, with specific recommendations for improvement.
This approach, modeled on a century-old system in England, has been adopted in a few states, including New York and Rhode Island. A growing number of schools in Boston use review teams.
To be most effective, the team’s recommendations need to be distributed to and acted upon by both teachers and parents — which often requires additional time and resources. Another shortcoming in this approach is that the team often reviews a school based on its self-described mission; if the mission is weak or inadequate, this might not be noted in the final report.
It Won’t Be Easy
Adopting these alternatives isn’t easy — old ways of doing things are always more comfortable and familiar. Here are some of the most common pitfalls:
- Even assuming one can muster the political clout to roll back the growing emphasis on high-stakes standardized tests, most alternatives take time to develop. Because most are implemented while existing standardized tests continue, teachers are being asked to do more and more assessing — but not given any more time to do so. One more task is added to an already filled day. Sometimes, that in and of itself causes teacher opposition.
- If such assessments are to provide a true alternative, it’s essential that a broad array of parents and staff be involved. Otherwise, both parents and teachers feel that, once again, someone else is telling them how to raise their child or how to teach.
- Many of these alternative assessments are new to just about everyone involved: policy-makers, students, teachers and parents. There needs to be thorough discussions of the pros and cons of various assessments, and clear understanding of the purpose of any particular assessment. While conservatives often decry the “status quo” mentality of teachers and schools, on the testing issue it is the conservatives who are refusing to “think outside the box” and are relying on traditional, and flawed, methods of standardized testing.
- Such assessments take more work, more time, and more resources.
- Any assessment is prone to problems of inequity, inadequacy, and subjectivity. Recognizing, and counteracting, these problems is essential.
Finally, it cannot be stated too often: the primary purpose of assessment is to improve the quality of teaching and to help students learn better. If the focus is not on student learning, it’s misplaced.
District and state officials have the right and responsibility to require schools to provide evidence that all students are learning, but such requirements must not be allowed to control all aspects of schooling. Students and teachers need time to explore their interests, to pursue matters in depth, to develop qualities of thinking and working. In fact, a really good accountability and assessment system will tell parents and the public that these, too, are part of education.
Bob Peterson (firstname.lastname@example.org) teaches in Milwaukee and is an editor of Rethinking Schools.
Monty Neill (Mneillft@aol.com) is Executive Director of FairTest, based in Cambridge, MA.