Introduction
Race preferences, under the rubric of affirmative action, have been a matter of policy in American higher education for more than half a century. In its early academic incarnation, and recurrently ever since in the public statements of some of its supporters, affirmative action was meant to seek out, and perhaps mildly to boost, minority applicants to colleges and universities, with a light touch and primarily on behalf of African Americans. In 1965, for example, Harvard Law School created a “Special Summer Program” for juniors and seniors from what later came to be called historically black colleges in the South. Harvard’s idea, modestly enough, was to encourage these students to apply to its law school and to help qualify them for admission. Yet, in the same year, New York University’s law faculty approved the admission of “ten to fifteen” students with test scores and college grades below the usual cutoff.
Preferential policies became more widespread, and far more aggressive, almost at once. By 1969, the Association of American Medical Colleges recommended that 12 percent of first-year medical students should be black. Affirmative-action programs, several of them financed by the Rockefeller Foundation, sought to recruit African American students from the inner city for admission to colleges and universities, almost without concern for the students’ academic preparation. That same year, New York’s City University—known in earlier decades as the “Harvard of the proletariat” for its academic excellence—first created a “dual admissions” system whereby 50 percent of the class would still be admitted under academically selective standards while the other 50 percent would no longer be required to meet them; two months later, the City University dropped admissions standards altogether and introduced “open admissions” for all high school graduates.
At first, there was some tendency to acknowledge that preferences by race—and for other selected groups—were at odds with generally accepted, and usually desirable, principles of equal justice. Hence, preferences were often shrouded in euphemisms. As the “tough liberal” teachers’ union leader Albert Shanker reported, after a conversation in 1972 with Democratic presidential candidate George McGovern, the candidate “is willing to abandon the word ‘quota’ but still endorse the practice.”
In the ensuing years and decades, what had been introduced and justified as a temporary expedient, with perhaps mild preferences for blacks just emerging from the era of Jim Crow, became a permanent feature—and, increasingly, a predominating principle—of higher education, with sometimes dramatically relaxed standards for applicants and students from an array of specified groups.
From the outset in the 1960s and 1970s, there were voices warning that race preferences (and preferences by ethnicity and sex as well), once established, would not be temporary, would not be modest in scope, and would not conduce toward social integration or reconciliation, but rather quite the contrary. As early as 1967, Ralph K. Winter, a Yale law professor who was later appointed to the United States Court of Appeals, wrote of preferential programs: “Instead of helping to eliminate race from politics, they inject it. Instead of teaching tolerance and helping those forces seeking accommodation, they divide on a racial basis.” In the mid-1970s, the prominent public intellectual Nathan Glazer published Affirmative Discrimination: Ethnic Inequality and Public Policy, criticizing both the factual claims underlying preferential affirmative action—such as the idea of “institutional racism”—and the implementation of preferences, which, he argued, represented a “withdrawal from the acceptance of common standards, a weakening of our sense of rightness or justice” in making academic judgments.
Fifty years and more have now elapsed, and race preferences—as well as preferences for other selected groups—have been thoroughly institutionalized in American higher education. Under the slogan of “diversity,” they have become a way of life, even an article of faith. Proliferating campus bureaucracies oversee and promote preferential policies. Preferences are an important factor not only in undergraduate and graduate admissions, but in hiring and promoting faculty and administrators as well. Preferences, and the preoccupation with race and group identity that they engender, increasingly affect the academic curriculum: in some fields, they virtually transform it. And preferences create ongoing pressure for lower academic standards, as institutions try to disguise the educational gap between students admitted preferentially and those admitted with standard qualifications.
Is it now too late to assess the edifice of racial and group preferences honestly and to suggest that American higher education has taken a wrong turn? We hope—and think—not.
Whether preferential affirmative action actually achieves its stated goals is a matter of urgent concern, for the present and for the future. These policies were first advanced—and are sometimes still defended—as promoting racial integration. Yet college and university campuses, far from being more racially integrated, are more segregated than ever: on many campuses there are now racially separate dormitories, racially separate orientation and graduation ceremonies, and racially separate social lives. Even entire academic departments are, in effect, racially set apart.