• By Laurie Posner, M.P.A. • IDRA Newsletter • August 2009

Since as long ago as 1845, when the Boston School Board gave a uniform test to its elementary students, schools in the United States have been gathering data to gauge how well they are educating students (Coe and Brunet, 2006). These early report cards planted seeds for what would later become a plethora of reports on the performance of American organizations and systems.

Standardized report cards emerged relatively quickly in the field of health care. In the early 1860s, Florence Nightingale pressed for mortality statistics to be published by London hospitals to raise standards of hospital sanitation. By 1917, the American College of Surgeons was using a cross-system report to publicize the performance of almost 700 hospitals (Coe and Brunet, 2006).

In the field of education, the “report card” became the centerpiece of individual student assessment and, by the end of the 20th century, the central trope for state and national assessment of school effectiveness. In its Round-up of National Education Report Cards, the Center for Public Education (2007) identified more than a dozen such reports, spanning a spectrum from pre-kindergarten to post-secondary education, from K-12 school funding to college scholarships, and from measures of international standing to state, school district and individual student performance.

Such report cards include the National Center for Education Statistics’ well-known “National Assessment of Educational Progress” (or NAEP) reports, the U.S. Chamber of Commerce’s Leaders and Laggards: A State-by-State Report Card on Educational Effectiveness, Education Week’s Quality Counts series, and the National Center for Public Policy and Higher Education’s “Measuring Up” study. As the field has become increasingly meta-analytic, even assessments of assessments, such as The State of State Standards produced by the Thomas B. Fordham Foundation, have become commonplace.

Rankings of our nation’s performance in public education in these last few decades have tracked business objectives, advocacy aims and policy preoccupations. In 1983, U.S. News & World Report published its first edition of America’s Best Colleges. At that time, blue-collar workers were being hit hardest by the recession, and low-income students had increasingly limited access to public higher education (Fligstein and Shin, 2004). But college-going rates and college revenues rose dramatically in the 1980s, with tuition increasing during the period by 106 percent (Levine, 1994).

Such increases justified the marriage between ranking and advertising. A report card, in this context, was as much about packaging schools for consumer choice as about assessing quality.

The link between outcomes-based assessment (accountability) and consumer choice also was famously inscribed in the No Child Left Behind Act of 2001, which envisioned accountability ratings and parent choice as twin levers for raising performance locally and across school systems.

As a result of widespread rankings and ratings, more data are available to Americans than ever before about how our schools are doing. At a mouse click, we can locate public schools nationwide by name, find demographics and dropout rates, and compare student outcomes on state-mandated tests, national standards, and college entrance exams.

But raising national awareness of school performance has not necessarily coincided with better national outcomes. Graduation rates have hovered at about 75 percent since the 1960s (Heckman and LaFontaine, 2007), and disparities in fourth grade reading and mathematics outcomes, for example, have persisted over the last decade (NCES, 2009).

You might say that this is to be expected. Research, report cards and the presentation of data cannot be counted on to change school systems. After all, the connections between research, policy and practice are inherently messy. School systems are complex, organic organizations.

And policymaking for public education is similarly complex. As Vivien Tseng points out in a review of the role of research in policymaking and practice, research is “rarely used in… a clear-cut linear way [and] rarely offers a definitive answer to any policy or practice question… requiring instead that practitioners discern if research evidence is relevant to their particular needs and judge whether they can use it given political, budgetary and other constraints” (2008).

Still, reports and indicator systems contribute far less than they could to school improvement. In IDRA’s experience, they fall short for several reasons. First, their purposes and intended audiences are often diffuse or ill-defined. Second, they tend to focus too much on ranking and not enough on exemplary practices and models for action. Third, many data sets are tied too closely to consumer choice and not closely enough to citizen engagement. Finally, despite vast improvements, research remains inaccessible to many people, bringing knowledge online but not infusing it into capitols, classrooms and kitchen-table problem-solving (Robledo Montecel, 2006).

Just as a father who finds out that his son has earned a “D” in algebra can do little with this data without information on how to make a difference, school, community and family leaders need more than adequate yearly progress (AYP) scores and discrete outcome data to improve schools that are struggling. Information must be available to the people who need it most at the district and school levels, and it must be crafted around organizing and action.

“Actionable knowledge,” as researcher Chris Argyris has written, “is not only relevant to the world of practice, it is the knowledge that people use to create that world” (1993, emphasis added).

The good news is that with good information, school, community and family leaders can make, and are making, a difference. In Organized Communities, Stronger Schools, for example, Kavitha Mediratta and her colleagues at the Annenberg Institute for School Reform (2008) chronicle seven community organizing efforts that use actionable knowledge in school reform. In Oakland and Philadelphia, community and school action led to new small schools that produced higher attendance and improved graduation rates. In Miami, combining improvements in literacy programming for elementary students with community engagement raised student outcomes on the Florida Comprehensive Assessment Test (FCAT).

According to Mediratta and her colleagues, the most effective organizing combines “community members’ knowledge… and insights… with analyses of administrative data and best practices identified by education research. The combination of data and local knowledge enabled groups to develop reform initiatives uniquely suited to local school conditions and needs” (2008).

The Annenberg findings echo those of Janice Petrovich at the Ford Foundation, who, in a reflection on the foundation’s investments in community involvement in education from 1950 to 1990 (2008), points out that “no matter how well crafted or well intentioned [school] reforms may be, they will not endure without community support – and that community support is won not through public relations campaigns, but through active participation.” Promoting such participation requires the capacity “to clearly identify research questions and data needs, to find ways of obtaining these data and to use research evidence to bolster their arguments.”

Such findings also have been central to IDRA’s experience. IDRA’s partnership with the nonprofit organization ARISE is one example. ARISE is a faith-based organization, founded in the late 1980s, dedicated to supporting children’s educational success and to strengthening families from within. For the Latina leaders at ARISE working to improve Texas border communities, a guiding tenet is to “look around you, assess what’s going on, make a response, evaluate and celebrate.”

In keeping with this principle, IDRA designed a series of training sessions on family leadership in education with ARISE centers through IDRA’s Parent Information and Resource Center. Through these forums, parents have shared concerns about how their children were doing in school and looked together at data on dropout rates, college-going rates, and student test scores. This deepening knowledge has moved a group of families in the Rio Grande Valley to action: This summer, ARISE families formed a PTA Comunitario to formalize their role as advocates for improving the quality of education. Through the PTA Comunitario, family leaders will consider why school outcomes are not matching their hopes and goals for their children and will form partnerships with their neighborhood public schools for action.

Research and experience show that knowledge must be made actionable in order to have impact. Actionable knowledge:

  • Is framed around the right questions – for example, asking how schools as systems can go beyond dropout prevention and recovery to strengthen “holding power” across grades.
  • Tracks not just school outcomes but also the conditions that give rise to them and effective practices – providing teachers, administrators, and family and community members the data they can use to make a difference.
  • Presents data in context – including meaningful comparisons among peer schools and districts; and information on school funding, resources and data that are disaggregated by student groups to help people assess and improve both educational quality and equity.
  • Bridges data divides – presenting data online and in-person, in families’ first languages; answering burning questions and embedding salient knowledge into community forums, school-community partnerships, and professional development for educators and school leaders.

By incorporating these features in partnerships like the ones profiled above, researchers, educators, and family members are building on the data-gathering strengths of the accountability era in their efforts to improve schools. To realize our aspirations for children more broadly, we need to make sure that these examples become the rule rather than the exception.


Resources

Argyris, C. Knowledge for Action: A Guide to Overcoming Barriers to Organizational Change (San Francisco, Calif.: Jossey-Bass Inc., 1993).

Center for Public Education. “Round-up of National Education Report Cards,” web site (2007).

Coe, C.K., and J.R. Brunet. “Organizational Report Cards: Significant Impact or Much Ado about Nothing?” Public Administration Review (January 2006) Vol. 66, No. 1, pp. 90-100.

Fligstein, N., and T. Shin. “The Stakeholder Value Society: A Review of the Changes in Working Conditions and Inequality in the United States, 1976-2000,” in Social Inequality (Kathryn M. Neckerman, ed.) (New York, N.Y.: Russell Sage Foundation, 2004) p. 407.

Heckman, J., and P.A. LaFontaine. The American High School Graduation Rate: Trends and Levels (Bonn, Germany: Institute for the Study of Labor, December 2007).

Levine, A. Higher Learning in America, 1980-2000 (Baltimore, Md.: Johns Hopkins University Press, 1994).

Mediratta, K., S. Shah, S. McAlister, D. Lockwood, C. Mokhtar, and N. Fruchter. Organized Communities, Stronger Schools: A Preview of Research Findings (Providence, R.I.: Annenberg Institute for School Reform at Brown University, 2008).

National Center for Education Statistics. The Condition of Education: Learner Outcomes (Washington, D.C.: May 2009).

Petrovich, J. A Foundation Returns to School: Strategies for Improving Public Education (New York, N.Y.: Ford Foundation, 2008).

Robledo Montecel, M. “Knowledge and Action – From Dropping Out to Holding On,” IDRA Newsletter (San Antonio, Texas: Intercultural Development Research Association, November-December 2006).

Tseng, V. “Studying the Use of Research Evidence in Policy and Practice,” William T. Grant Foundation 2007 Annual Report (New York, N.Y.: William T. Grant Foundation, 2008).


Laurie Posner, M.P.A., is an education associate in IDRA’s Support Services. Comments and questions may be directed to her via e-mail at feedback@idra.org.


[©2009, IDRA. This article originally appeared in the August 2009 IDRA Newsletter by the Intercultural Development Research Association. Permission to reproduce this article is granted provided the article is reprinted in its entirety and proper credit is given to IDRA and the author.]
