A few years ago, the story goes, someone with a video camera cornered a dozen newly minted Massachusetts Institute of Technology graduates at commencement. Each was asked to explain why it's hot in the summer in Cambridge, Mass., and cold in winter. Only two or three knew that the seasons change because of the tilt of the Earth's axis relative to the sun.
The story is probably apocryphal. It's told about Harvard, Yale and Princeton graduates, too. But Donald N. Langenberg, a physicist and recently retired chancellor of the University System of Maryland, uses it to point out what he calls a "painful reality." Even the most prestigious colleges may not be doing a good job, he says. But colleges and universities lack reliable measurements of how well students learn, so the job goes by default to the annual ranking sweepstakes of news magazines.
That's changing. Under pressure from business groups and Congress, which is scheduled to reauthorize the Higher Education Act next year, colleges and universities across the nation are seeking ways to document student learning. No one predicts standardized testing of the kind required in public schools by the new No Child Left Behind Act, but there's a frenzy of activity around student assessment. Testing is going to college.
Many colleges are experimenting with end-of-the-year tests that cover general knowledge. Fifteen four-year schools are piloting an assessment designed to measure the knowledge added each year as students move through college. In Virginia, James Madison University is giving its general education tests in a computer lab and using the results to improve programs.
Langenberg predicts that such testing will overcome even the resistance of professors who believe assessment imposed from the outside is a violation of academic freedom. "For too long," he says, "we in higher education have been telling the world, 'We're wonderful - take our word for it.' We're not going to be able to get away with it much longer. We've got to do it, or someone else will do it for us."
A new yardstick
Richard H. Hersh, president of Trinity College in Hartford, Conn., says higher education is too complex and varied to give off-the-shelf tests like those that have become almost universal in elementary and secondary education. But he and other officials are experimenting with a test that he says can determine the "value added" by a liberal arts education. Eventually, he hopes, the new test, which resembles the essay portion of state bar examinations, could end higher education's "sick and obsessive reliance on U.S. News & World Report."
Hersh says the magazine's annual ranking of colleges, which it bases on statistics supplied by the schools and the subjective judgments of college officials, "turns evaluation into a horse race."
The new test, developed by researchers at the Rand Corp. think tank, was given last spring to about 110 students at each of 15 campuses, large and small, public and private, predominantly white and historically black. To promote motivation, students - freshmen, sophomores, juniors and seniors about to graduate - were paid $25 an hour to take the 3 1/2-hour assessment, which consists of a series of essays requiring them to analyze data and solve problems.
One task, for example, presents students with a vast amount of policy and research data on anti-drug programs, then asks them to write a memo to the mayor of a major American city proposing a comprehensive drug plan.
When results come out early next year, and in subsequent years, such factors as the race and economic status of the test-takers will be taken into account, or "controlled," so that the value added in each student's college learning can be judged as accurately as possible.
Hersh predicts that students at "handcrafted" liberal arts colleges such as Trinity will do better in the assessment than those at what he calls "McUniversity." "We're going to demonstrate that liberal arts colleges are well worth the price. Employers who are looking for people with good analytical skills already know this."
Hersh says such an assessment is "a powerful educational tool. It's a way of asking, 'What does the college experience contribute to a student's learning?' Once we answer that question, we can do wonders with our programs."
Difficult to quantify
Others are doubtful. "The value-added concept probably won't work well in practice," says John V. Lombardi, chancellor of the University of Massachusetts at Amherst. "Like many business models adapted into the academic environment, it makes some assumptions that probably don't hold up well. Students are not raw material to be processed by the institution because students are active participants in the process themselves."
Higher education does have its indicators of quality, but according to a recent report from the Association of American Colleges and Universities, "reputation generally substitutes for college-wide measurement of student achievement."
Schools with great reputations might actually do a worse job adding value to an education than those that take on many students needing remedial work, the report says, referring to a phenomenon some call "diamonds in, diamonds out."
"Harvard freshmen have already written symphonies," says Langenberg. "If Coppin [State College in West Baltimore] teaches a student to write a symphony, it may show that Coppin is doing a better job."
Of course, Coppin and Harvard have different missions. Their freshmen arrive with different abilities and expectations. These are only two of several factors that make the development of assessments in higher education so extraordinarily difficult, according to testing experts. They say a higher education equivalent of the National Assessment of Educational Progress could never be imposed on such a diverse system. And, of course, Harvard, as a private school, wouldn't have to participate.
"Students have an educational impact on other students," says Paula P. Burger, vice provost for academic affairs at the Johns Hopkins University. "Part of what you pay for at a first-rate institution is the privilege of being stretched by classmates."
Recent graduates of Trinity said they learned as much outside the classroom as inside - and that they didn't realize how thoroughly Trinity had changed their lives until they entered the work force after graduation. "It would be tough for a lot of what I learned at Trinity to come through on a test like that," says Brooke Crisman, 24, who joined Teach for America after graduation and taught social studies at Baltimore's Southwestern High School for two years before entering law school in Washington, D.C.
"I first realized what Trinity had done for me when I got to graduate school at Morgan State University," adds Erik C. Johnson, 30, who runs the Baltimore office of a Philadelphia venture capital firm. "I found myself in a much different academic environment, and thanks to Trinity I was more prepared to take on the rigor of it."
Another problem with higher education testing is that no one can agree on what students ought to learn, says Peter Ewell of the National Center for Higher Education Management Systems. Ewell's Boulder, Colo.-based center recently awarded a grade of "incomplete" in student testing to the higher education systems of all 50 states. Ewell reports that 10 states administer a common test to college students, but the tests vary widely in scope and purpose. Some states want to make sure students are learning as they move through college. Some examine only one area of study.
"The most important outcomes in your education aren't the most important in mine. That's the trouble," says Carol Geary Schneider, executive director of the colleges and universities association. "If each of us learned the same things in college, it would be a problem, not a benefit."
But the AACU report recommends "flexible" testing at colleges and universities, and Schneider's organization recently devoted an entire issue of its magazine to the value-added scheme.
Useful data
The granddaddy of university testing programs is at James Madison University in Harrisonburg, Va., which has been testing students for 16 years. They're assessed in August of their freshman year, again at midpoint after they've taken most of their general education courses and finally in their academic major at graduation.
"These aren't high-stakes tests that determine graduation or passing a course," says T. Dary Erwin of the university's Center for Assessment and Research Studies. That's why it's not unusual for graduates to forget having taken the tests, Erwin says.
"We use the data to improve programs and services. Sometimes we've had a course added or dropped. Or we'll change the content of a course or the way it's delivered. Partly by using the tests, we built a new general education curriculum from scratch and posted it on the Web - what students should know and be able to do, how they should change as persons."
On the cutting edge of higher education assessment, James Madison is putting its assessments on computer, which Erwin says "allows us to do so much more than we used to do with pencil and paper."
Four afternoons a week, a computer lab is open for students to take assessment tests. If it's the test of information literacy, questions are posted at the bottom of the screen, Web addresses at the top. Students are tested not only on their ability to find information on the Internet, but on how well they judge its credibility.
To take the fine arts and humanities test, students wear earphones and answer questions after listening to classical music or watching a Martin Luther King Jr. oration.
Erwin, who runs a doctoral program in higher education assessment, endorses Hersh's value-added concept. "The focus should be on what people learn at an institution, not on the amount of money alums give or what others think of a school," Erwin says.
"From an employer perspective and from a public-policy perspective, wouldn't you want to know what students actually learn in college? That's what's around the bend."