• by Josie Danini Supik, M.A. • IDRA Newsletter • January 1998 •

In October of 1997, the U.S. Department of Education’s Office of Bilingual Education and Minority Languages Affairs (OBEMLA) invited selected individuals to serve as members of a panel in Washington, D.C., with a specific task to “shape a process for aggregating and analyzing student English proficiency and student academic achievement data from biennial Title VII evaluation reports” (Letter of Invitation, 1997).

Delia Pompa, OBEMLA’s director, told panelists that December 1997 would be the first time that biennial evaluation reports of new Title VII programs (systemwide, comprehensive, enhancement), funded two years earlier under the 1994 Improving America’s Schools Act (IASA), would be submitted to OBEMLA. The purpose of the evaluation is threefold:

  • To ensure that districts use the data in meaningful and useful ways,
  • To ensure that OBEMLA uses the data to create new technical assistance strategies, and
  • To ensure that the evaluation data informs policy.

By law, the new Title VII programs must compare limited-English-proficient (LEP) students with non-LEP students in the areas of school retention, academic achievement and gains in language proficiency. The evaluation must also provide evidence of the appropriateness of the programs’ implementation indicators as well as provide program context indicators.

This evaluation study is but one piece of a larger OBEMLA research agenda created to address the office’s performance indicators. The first objective is to improve the English proficiency and academic achievement of students served by Title VII of the Bilingual Education Act. The indicators include the following:

  • 1.1 – English proficiency. Students in the program will annually demonstrate continuous and educationally significant progress on oral or written English proficiency measures.
  • 1.2 – Other academic achievement. Students in the program will annually demonstrate continuous and educationally significant progress on appropriate academic achievement measures of language arts, reading and math.
  • 1.3 – Success in regular classrooms. Sixth grade students who were identified as LEP in the first grade and who have been in the program for five years or who have successfully exited the program will perform comparably to similar non-LEP students on English language academic achievement measures by fiscal year 2000.
  • 1.4 – Low retention. LEP students in programs will be retained in-grade at rates comparable to similar non-LEP students by fiscal year 1998.

These performance indicators drive OBEMLA’s research agenda. The research agenda includes the benchmark study, field-initiated research, the previously described evaluation study, professional development, data collection, capacity building, expected gains study, inclusion of LEP students in assessment, and transfer of reading skills.

The expected gains study is particularly noteworthy. Its research question is whether expected score gains for native English-speaking students (adequate progress) should apply to LEP students given the following:

  • Educational research has never defined how to determine significant educational progress for LEP students.
  • Research on second language acquisition and bilingual education has established that LEP students tend to require specific strategies for learning and achieving high standards.
  • The literature has established that LEP students’ process of learning in English is not necessarily parallel to that of native English speakers.

This study will determine what student gains should be expected yearly in English proficiency and content area achievement (English and native language arts, reading and math) for LEP students who are in effective, high quality programs specifically designed for them. The study will ultimately yield expected yearly gains based on the level of English proficiency, grade span, grade level at entry, native language and educational background.

It is important that researchers never lose sight of one underlying premise: All students – whether LEP or not – are expected to achieve. No algorithm or equation should be formulated without this incontrovertible premise. Furthermore, educators must never use a “research-based” formula to justify expecting less than excellence for all students.

OBEMLA has provided guidance to Title VII grantees through IASA Title VII Writing the Biennial Evaluation Report (Navarrete and Wilde, June 1997). This guide provides grantees with a concise review of the evaluation requirements, approaches for writing reports, data collection methods, and the roles and responsibilities of the evaluator and program staff.

OBEMLA expects to receive 32 systemwide program reports, 106 comprehensive program reports and 97 enhancement program reports. OBEMLA will have the challenge of analyzing and synthesizing 235 evaluation reports with different reporting formats, different data sources (given the variety of assessment instruments for language proficiency and achievement) and different contextual indicators.

A review of a small sample of programs showed a variety of language proficiency assessment instruments, including the Language Assessment Scales (LAS), the Idea Proficiency Test (IPT), the Language Assessment Battery (LAB), the Woodcock-Muñoz, the Prueba and the Iowa Test of Basic Skills (ITBS). Academic achievement was measured using the Comprehensive Test of Basic Skills (CTBS), the California Achievement Test – Fifth Edition (CAT5), the Texas Assessment of Academic Skills (TAAS), portfolios, and other state and district assessment tools. Further complicating the task is the likelihood of missing academic achievement data and/or low student numbers in the comparison of LEP and non-LEP students, given the number of students often exempted from taking these tests.

At the October meeting, panel members (mostly researchers and evaluators) discussed how to reconcile these complexities so that the evaluation findings would be useful and meaningful. The integrity of the data, validity and reliability issues, and the appropriateness of the instrumentation used all factored into the conversation. They also discussed the “intangibles” – the contextual variables (leadership, implementation, staffing, etc.) that are often difficult to convert to quantifiable measures yet are critically important in making sense of the outcomes.

Making sense of the outcomes is more important than ever. Bilingual education is under vigorous attack in this country despite significant research showing that sound bilingual education programs work. If reasonable and rational heads do not prevail, this country may find itself with classrooms where it is illegal to speak a language other than English. OBEMLA, IDRA and others are determined to ensure that all of this country’s children have equity and excellence in their education. Bilingual education is one proven method that should not be denied them.

Secretary of Education Richard W. Riley stated:

    Bilingual education ensures that students who are not native English speakers get the necessary grounding in core academic subjects while making the transition to all-English classrooms (IDRA, 1996).

There is no argument that programs (including bilingual education programs) and strategies should be rigorously evaluated. However, the underlying premise of the evaluation must always be to maximize students’ achievement while recognizing their inherent strengths. The questions asked in an evaluation differ greatly depending on whether the assumption is that some students are deficient and will never achieve or that all students are valuable and none is expendable. With the latter premise, evaluators and researchers must ask whether all students are achieving and whether programs are in fact making a positive difference in students’ achievement and success in school.

These are the fundamental questions that should be asked of those accountable for student results – teachers, administrators, policy-makers, parents and communities. The answers lie not only in the “bottom line” of student outcomes but also in those previously mentioned “intangibles”: leadership, experienced and qualified teachers, and the valuing of all students. An exemplary program cannot work on a campus that lacks these factors.

When bilingual education programs do not work, it is usually not the result of poor pedagogy. It is the result of inexperienced or unqualified teachers and a hostile or indifferent administration and/or community.

Yet, when students do not achieve, bilingual education is often seen as the culprit. Interestingly, if a student is not achieving in mathematics, there is no national call for abolishing mathematics from our classrooms or for making it illegal to add or subtract or multiply.

In effective classrooms, strategies are put into place, teachers are provided the resources including professional development, and administrators provide the needed support so that all students master mathematics. Also in place in effective classrooms are ongoing assessments of mastery and competence so that quick corrective measures can be taken as needed.

Evaluation and research are critical components of effective programs and strategies. But their true power comes from our ability and commitment to use the information for improving student outcomes and ensuring that all students achieve in an equitable and excellent environment.


Navarrete, C. and J. Wilde. IASA Title VII Writing the Biennial Evaluation Report (Las Vegas, N.M.: New Mexico Highlands University, June 1997).

Intercultural Development Research Association. “Bilingual education is an investment that pays off,” Class Notes (San Antonio, Texas: Intercultural Development Research Association, 1996), Issue No. 2.

Pompa, D. Unpublished letter of invitation. (Washington, D.C.: OBEMLA, 1997).

Josie D. Supik, M.A., is the director of the IDRA Division of Research and Evaluation. She was a panelist at the OBEMLA meeting. Comments and questions may be sent to her via e-mail at feedback@idra.org.

[©1998, IDRA. This article originally appeared in the January 1998 IDRA Newsletter by the Intercultural Development Research Association. Permission to reproduce this article is granted provided the article is reprinted in its entirety and proper credit is given to IDRA and the author.]