Op-Ed

ACT not key measure of college readiness

At issue | Jan. 2 commentary by Bob King, "Time to raise expectations for education, better preparation for college, work"

If I were to assert that a player who cannot make 56 percent of his free throws is not "ready" for the National Basketball Association, a fan would point out that there is much more to basketball than shooting free throws.

An astute fan with a historical perspective would point out that Wilt Chamberlain, Shaquille O'Neal and a bevy of other players would not have been ready by that arbitrary standard. One facet of basketball does not make or break a player's "readiness."

Bob King, president of Kentucky's Council on Postsecondary Education, applies the council's arbitrary standard of using a single test score to determine whether a student is ready for regular course work in Kentucky's public universities.

He implies a test score is a better predictor of grades in college than is a high school record. He then presents results from one high school that lump higher performing students (those with above-average high school records) with lower performing ones in a misguided approach to justify his position. A test score, however, does not make or break a student's readiness for higher education. Decades of research indicate:

■ Performance in academic courses in high school is the single best predictor of success in higher education.

■ A combination of the high school record and test scores predicts better than the high school record alone.

■ How good the prediction is and how the components are combined vary by institution; there is no one-size-fits-all model to predict grades in different courses or at different institutions.

In addition to the thoroughly suspect notion of labeling a test score as readiness, the council's use of a single score for placement purposes violates standards for the proper use of tests.

Apparently, the council determines readiness by doing statistical analyses of ACT scores and grades in first-year courses without regard to institution. A certain ACT score produces a 50/50 chance of getting certain grades, say a C or better. I could find no information about this or other investigations done by the council. And although it is possible to present information about how good a model is, I could not find any information of that kind either.

To give a sense of the power of statistical models to predict first-year grades, I cite analyses conducted years ago on University of Kentucky students. The question was whether results of KIRIS, the first commonwealth assessment related to school reform, could be used for admission and placement at a university. The analyses looked at three models: predictions based on high school grades alone, grades and ACT scores, and grades and state test scores.

None of these models was a strong predictor of first-year grades. Grades were overwhelmingly determined by differences in students' study habits, class attendance, interests, any of a thousand other variables, or simply things not explained statistically.

These are not unusual results. The point, obviously, is to gather more information about a student before making a placement decision.

Here is what ACT says: "ACT offers a variety of tools to ensure post-secondary students are quickly and accurately placed in courses appropriate to their skill levels. Assessment tools from ACT offer a highly accurate and cost-effective basis for course placement.

"By combining students' test scores with information about their high school coursework and their needs, interests and goals, advisors and faculty members can make placement recommendations with a high degree of validity." To that, I would add, for obvious reasons, it is desirable for an educational agency to use tests in exemplary ways.

King's commentary goes on to exhort parents to ask the right questions, ask an undefined "we" to fear international test results, say admissions offices should align themselves with the council's readiness standards and call on the still-undefined "we" to serve teachers more effectively.

Such exhortations would be more convincing if, in the first instance, the council could propose placement procedures based on a robust notion of readiness that also did not violate standards for the proper use of test scores.