James Hankins, Author at Law & Liberty

Reform Higher Ed by Raising Standards

The first hundred days of the second Trump presidency have brought unprecedented challenges to the complacent status quo of American higher education. It has begun to dawn on university administrators and faculty alike that the Trump administration is serious about its plans to break the hold of illiberal progressives (a.k.a. “woke” progressives) over America’s most prestigious universities. The Ivy League universities, the targets of most of the administration’s fire so far, have thrown up legal defenses of their institutional autonomy and their right to receive the federal funding appropriated by Congress. As those legal battles play out, the schools’ administrators are in a state of panic and uncertainty about their financial viability. Harvard, Penn, Brown, and Cornell have announced hiring freezes. Several of the Ivies have even mobilized lobbyists in Washington, DC to fight the cuts in Congress as well as to take action against the threatened loss of their non-profit status. Whether those defenses will hold remains to be seen.

Even before this recent round of debate, though, academe’s moderate-to-conservative reformers have been exploring a variety of strategies to protect what is still valuable in our system and to reassert the teaching of Western and American traditions. Some believe that existing institutions can be reformed merely by curtailing DEI programs and restoring some degree of ideological balance (“viewpoint diversity”). Other reformers, primarily in red states, have taken a bolder approach and have set up civics institutes within state-funded universities. These are intended as traditionalist citadels, designed to preserve the un-politicized study of Western and American history, literature, and the arts in universities that have largely abandoned such study. Still other reformers maintain that existing universities are irredeemable, and that the only possible course is to set up new institutions to replace those corrupted by radical gender ideology, left racialism, antisemitism, and radical environmentalism.

The last strategy has created some impressive new institutions, like the University of Austin in Texas (UATX) and Ralston College in Savannah, Georgia. Because there are only a few, however, such institutions are unlikely to make much of a dent in the existing structure of American higher education, which includes almost 6,000 accredited institutions. The pecking order of academic prestige, based as it is on antiquity, large endowments, and the influence of alumni, is difficult to disturb. Universities in general appear to possess, in extreme form, a first-mover advantage that provides them with extraordinary protection from competition. Oxford and Cambridge were Britain’s first universities when they were founded in the twelfth and thirteenth centuries. They are still its best. Ditto the University of Paris, est. 1215. All of China’s top universities are also its oldest. In America, the Ivy League is not accepting new entrants, and the rankings among the top public research universities have changed little since the 1950s. (The University of Florida, which has risen from the top twenty to the top five public research universities in recent years, is a notable exception.) The percentage of Americans who regard higher education positively has fallen, according to Gallup, from 57 percent to 36 percent just in the last decade, but that dramatic drop in public approval has done little to disturb the complacency or modify the behavior of those at the top of the heap.

The conclusion follows that if there is to be any meaningful reform of higher education in America, it will have to come from within the existing system. One strategy that might lead to real reform, I would argue, involves changing the way institutional prestige is perceived and measured. Though universities are, for the most part, political monocultures dominated by the left, the good news is that they are highly competitive. They care a great deal about rankings and publicity. Positive movement in the rankings and positive publicity help with fundraising and recruitment of the best faculty and students. To take just one example from my own experience: soon after coming to Harvard in 1985, I discovered that my department, History, was considered a failure by the administration because its national ranking had fallen to no. 4; it was unacceptable to be anywhere other than in the top three. Raising our ranking to no. 1 again turned out to be an effective goal for recruiting alumni support. Mutatis mutandis, the same obsession with rankings—“who loses and who wins; who’s in, who’s out”—is general throughout academe. Negative movement in national and international rankings can lead to major administrative shakeups and soul-searching about how to improve.

The question, then, is how to use the competitiveness of universities, their thirst for higher rankings, to generate a virtuous circle of reform. I’ll suggest here, first, one direction that university reform could take, then address more briefly the question of how a competition for the right kind of institutional prestige could be fostered by governments and other stakeholders outside of academe itself. The new strategy of reform I propose—creating new standards for student achievement—has, to my knowledge, not yet become part of the debate on higher education. Making educational outcomes a major component in ranking colleges and universities could have, I believe, a number of positive effects on academe’s current priorities. If the prestige of an institution can be made to depend in great part on its success in helping students achieve mastery of the subjects they study, universities will have fewer incentives to indulge political activists, promote luxury beliefs, and foster boutique subjects.

Most faculty who have taught in universities for a long time are aware that educational standards have fallen dramatically in recent decades. Even the most prestigious universities have made it much easier for students to graduate with little gain in knowledge and critical thinking. As David Butterfield argued in a viral article last fall, education has become infantilized. His article was about teaching classics, but the problem is widespread in the humanities and social sciences.

Let me give an example from my own experience teaching history at Harvard. When I began teaching 40 years ago, I regularly assigned over 300 pages of reading per week. At present, assigning more than 75 pages per week, as we are advised by curriculum committees, is considered an unmanageable burden for most students. Students at highly selective colleges and universities average only about 15 hours of study outside the classroom, down from 24 hours in the 1960s. The average includes students in the natural sciences, who generally put in more hours outside of class. As long ago as 2011, Richard Arum and Josipa Roksa argued, on the basis of data from the Collegiate Learning Assessment (CLA), that American higher education did not deliver substantial intellectual growth for at least a third of students. The figure must be significantly higher today. At the same time, top colleges and universities have been competing to expand the number of available sports and extracurricular activities, to build luxurious student centers, and to provide many other para-academic experiences. Student life today resembles a four-year residence at an expensive country club more than a quasi-monastic period of devotion to science, scholarship, and the contemplative life.

The growing indifference of most college students to academic achievement is, of course, perfectly rational. Why study, when the worst grade you can get is a B, when it is almost impossible to flunk a course, and when getting a diploma from a top university, however little effort is involved, is enough to set you up with a well-paying job? Ivy League universities like to tell prospective students that the connections formed with their fellow students during college years will bring them success in their future careers. So doesn’t it make sense to spend your four years hanging out with friends, playing sports, and joining clubs and extracurricular activities? Studying is for losers.

There have, of course, always been students with these attitudes, and they have always had their reward. But an institution’s prestige should not be based on the pleasures offered by student life or the future income streams of its graduates. The reputation of a university should be judged primarily by two criteria: (1) the quality of its scientific and scholarly research and (2) its success in enhancing the abilities of its graduates to live good lives as workers, citizens, and human beings. Everything else is instrumental.

If universities are to take their duty to educate students more seriously, some substantial redirection of current trends will be needed. Let me list six ways of addressing the long-standing patterns of decline in standards.

Grade inflation. Nobody in the contemporary university has an interest in controlling grade inflation. Students obviously want higher grades, but faculty also prefer to give high grades rather than risk lower enrollments. University administrations profess concern about the future success of graduates: if student grades are lower than those at other universities, graduates won’t be able to get into law schools and medical schools at the same rate. Lower rates of admission to professional schools are not an outcome any administrator wants. The current impasse over grade inflation constitutes a collective action dilemma. Yet an effective grading system is vital to student achievement: students need to find out what they are good at and where they need to improve. A grading system that produces an average GPA of between 3.7 and 3.8 on a 4.0 scale, as is generally the case in the Ivy League, does not give students the information they need to achieve excellence. The only way this situation will change is if outside pressure is applied. More on that later.

Buffet course offerings. Core curricula, except at a few holdouts like the University of Chicago and Columbia, disappeared from US universities a generation ago. Faculty no longer have enough conviction or consensus to tell students that they need to know X rather than Y, or that some subjects are more important than others. This is a natural result of an educational ecosystem that privileges “choice” and “diversity” over the mastery of some body of knowledge. Required courses have been replaced for the most part with distribution requirements or “general education” courses, which are just a slightly smaller selection from the same buffet.

Except in the natural sciences, most departments have abandoned sequencing, meaning a distinction between lower-level and upper-level courses. Survey courses have been made optional or have become—in the effort to represent the interests of every victim group and every culture—hopelessly lacking in narrative focus. In my own department, for example, you don’t need to take a survey of American history before taking other courses on a narrower subject or period. This, in practice, means that a great deal of time in specialized courses is wasted on remedial education. The lack of required sequencing prevents professors from offering advanced courses, where prior knowledge can be taken as a given. Students come away from four years of historical study with an individualized hodgepodge of knowledge but little mastery of a coherent body of knowledge.

Interdisciplinarity. University administrations and faculty have been promoting interdisciplinary education for half a century at least, on the assumption that such study makes a student more open-minded and creative and fosters learning across disciplines. That assumption is no doubt true to some extent. The price of this emphasis, however, has been the loss of mastery over distinct bodies of knowledge. These are called disciplines for a reason: they require discipline and the accumulation of knowledge and skills. A student who has never been required to master a discipline ends up as what is impolitely called a bullshit artist—a perfect future McKinsey employee, in other words. Such students are good at problem solving but lack the practical wisdom and extensive views that come from wider mastery of a field of knowledge.

At Harvard, most departments outside the natural sciences used to require a General Examination, a long oral exam where professors asked students to discuss the courses they had taken in the department over the previous four years and to compare and synthesize their knowledge. General Examinations have mostly been dropped in favor of a “capstone experience,” generally a long, specialized research paper. To combat the ill effects of premature specialization and interdisciplinarity, a requirement for a General Examination or something like it might be brought back as a way to demonstrate mastery of a body of knowledge. Or an examination system for particular kinds of degrees, such as that used in the Oxbridge honors system, might offer another way to measure educational achievement. The existence of cumulative examinations would increase respect for the institution’s graduates and benefit students by providing a focus for their studies.

Unlimited extracurriculars. At Harvard, incoming students are greeted during first-year orientation with the spectacle of a Harvard Yard filled from end to end with hundreds of tables, manned by attractive and persuasive upperclassmen. They are there to recruit freshmen into extracurriculars: sports, musical activities, political clubs, and other special-interest groups. There are over 450 such groups recognized by the university. The usual pattern among Harvard first-years is to sign up for a dozen or so extracurriculars, then gradually winnow the number in their junior and senior years. They realize, often too late, that they cannot achieve anything in their fields of study with so many notifications buzzing on their phones. The university, predictably, celebrates this abundance as so many opportunities for personal enrichment as well as a way for new students to feel included and to enhance their sense of belonging to the community.

This belief in the value of extracurriculars is certainly true to some extent, but universities at the same time need to emphasize that the primary goal of a university education is to master bodies of knowledge under the guidance of trained experts. One way they could send that message would be to limit the number of extracurriculars per student and not to allow first-years to participate in them during their first term. In general, the university needs to counsel students against overextending themselves—rather than letting them find out for themselves—reminding them of the truth of Virgil’s adage, non omnia possumus omnes, “we cannot all do everything.”

Ignorance of foreign languages. Among the more disgraceful failures of the modern American university is the collapse of meaningful language requirements. Universities boast of their global reach and multicultural ethic, but that commitment does not extend to requiring students to acquire deep proficiency in a language other than English. Learning another language is a lesson in cross-cultural sympathy and the art of communication. Yet we, the richest country in the world, are the one nation that permits its university graduates to remain monoglots. Lack of language skills makes us weaker as a nation in business and cultural “soft power”; it impedes our understanding of foreign countries, and is, in my opinion, responsible in no small measure for turning US international relations in recent decades into an ongoing circus of witless blunders.

Politicization. American universities in general, at least until recently, have smiled indulgently at and even encouraged student protests. As has often been observed, the experience of the generation that lived through the Vietnam era shaped higher education for decades afterwards. But that generation, thankfully, is now passing from the scene. The younger generation of university leaders, after the experiences of recent years, may be willing to regard protests and demonstrations as something other than shining examples of civic virtue. In fact, they are the opposite: they are symptoms of political failure, the failure of our democratic institutions to hear all voices and shape dissent in socially useful ways. In any case, a university that is committed to academic excellence cannot tolerate unauthorized demonstrations. Ideally, it would prohibit them entirely from campus and force activists to apply for permits to demonstrate in public spaces, like everyone else. Above all, the university must send the message that political activism is not a good use of students’ time. Students can attend protests when they are not engaged with their studies, but while they are on campus, they need to be reminded that political activism means losing a priceless opportunity to learn things they cannot learn in later life. It is an opportunity that has been paid for by their parents, previous generations of alumni donors, and by the public through government grants. Participating in protests that disrupt the opportunity of others to learn should lead immediately to expulsion from the university.

It is, of course, one thing to outline an alternative vision of university reform, one that flows from academe’s core functions to educate and to engage in useful research, and quite another to have that vision embraced by the kind of faculty and administrators populating progressive institutions today. In order for a shift in perspective to occur, the structure of incentives has to change in such a way as to initiate a preference cascade. This can be done by putting at risk the prestige and ranking of universities that fail to reform themselves, namely through inducing the agencies that rank and evaluate them, both inside and outside of government, to include educational outcomes in their algorithms in a more robust way.

Currently, university ranking systems include data such as selectivity and graduation rates, peer assessment of academic quality, class sizes, student-faculty ratios, and the percentage of faculty with PhDs. None of these tells us much about what, if anything, students have learned during their college years. Nor do the regional accrediting agencies do more than ensure that minimal criteria are met. They evaluate in the most general terms things like mission clarity, academic rigor, faculty credentials, adequate support for students, and prudent financial governance.
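
To make concrete what including educational outcomes in a ranking algorithm might mean, here is a deliberately simple sketch in Python. It is purely illustrative: the metrics, the 0–100 normalization, and the weights are all my own assumptions, and no actual ranking publication or accreditor uses this formula.

```python
# Invented illustration of a composite ranking score that gives real
# weight to measured educational outcomes. All metrics and weights are
# assumptions made for this example, not any ranker's actual method.

def composite_score(selectivity: float, grad_rate: float,
                    peer_assessment: float, outcomes: float) -> float:
    """Each input is assumed normalized to a 0-100 scale; 'outcomes'
    stands in for a measured learning gain (e.g., a CLA-style
    value-added score). The weights sum to 1."""
    weights = {
        "selectivity": 0.15,
        "grad_rate": 0.15,
        "peer_assessment": 0.30,
        "outcomes": 0.40,  # the component current rankings largely omit
    }
    return (weights["selectivity"] * selectivity
            + weights["grad_rate"] * grad_rate
            + weights["peer_assessment"] * peer_assessment
            + weights["outcomes"] * outcomes)

# A school strong on inputs but weak on measured learning gains ...
print(composite_score(95, 97, 90, 40))   # 71.8
# ... can be out-ranked by a less selective school that teaches more.
print(composite_score(70, 85, 75, 85))   # 79.75
```

The design point is simply that once learning gains carry real weight, schools can no longer coast on selectivity and reputation alone.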

To outline a full strategy of reform must be a subject for another essay and perhaps for another essayist with more knowledge of ranking and accreditation procedures than I possess. It requires some delicacy of judgment to reshape the current methods of ranking universities without sacrificing educational liberty and flouting expert opinion. If it is to avoid ham-fisted interventions, the present government will have to approach regional accrediting agencies and the Council for Higher Education Accreditation (CHEA), which oversees accreditors, on a basis of mutual goodwill, or at least civility and common purpose. Surely it would be to the advantage of all but the most complacent for educational outcomes—objective, measurable criteria to assess what students have learned in college—to become a major component in a university’s reputational standing. That should not be a partisan matter. If prudently managed, establishing such measures could help both to improve educational quality and to bring American universities more into alignment with the needs and values of the society they were intended to serve.

Seven Easy Steps for Reforming Healthcare

The new administration has at last come to DC, and opportunities for genuine reform, it seems, abound. Positive energy has been flowing since the recent “vibe-shift” made cracks in the edifice of the progressive status quo. The breadth of the new governing coalition—from post-liberal populists to libertarians and practically everything in between those poles—has opened up a realm of creativity and enthusiasm for seeking fresh solutions to problems long thought to be intractable. 

The last libertarian moment came and went circa 2008, giving way to the Woke Era, when the nursemaids of the progressive nanny state rolled their prams triumphantly through the institutions. Now, suddenly, tech billionaires have rediscovered their inner libertarian, and DOGE is rattling—indeed smashing—the teacups of administrative state mandarins, even if the libertarians operate in a sometimes testy “Rebel Alliance” with populists. Most unlikely of all, Javier Milei, the tousle-haired Austrian-school economist, is showing the way forward after an extraordinarily successful year as president of Argentina.

In the first sixty years of my life, healthcare was always recognized as a progressive issue. Whenever the subject came up in Congress, Democrats smiled and licked their lips, and Republicans anxiously scanned the crowd for pharmaceutical and insurance lobbyists. But if “Making America Healthy Again” is firmly on the agenda, perhaps there’s now an opportunity to think in new ways about healthcare delivery, starting with health rather than the priorities of bureaucrats, drug companies, insurers, and the politicians who love them.

The ordo-liberal tradition might be a key guide. “Ordo-liberalism” was the brainchild of Wilhelm Röpke, an intellectual leader of the anti-fascist resistance in Germany during the 1930s. An economist in the Austrian tradition and a defender of free markets, he nevertheless recognized the need to “order” markets under some higher purpose. Human beings should not be reduced to utility maximizers. Government policies needed to aim at more than increasing GDP; they needed to serve citizens in their full humanity, balancing freedom with a need for security and collective responsibility. Societies had to direct markets to serve wider human ends via incentive structures and minimal government regulation, subject to democratic approval. Working with Ludwig Erhard, the West German minister for economic affairs between 1949 and 1963, Röpke was one of the architects of the German postwar Wirtschaftswunder, which combined economic freedom with policies designed to shape a “social market.” Ordo-liberalism was meant to be a “third way” between laissez-faire capitalism and socialism.

So here, in the spirit of ordo-liberalism, is a common-sense proposal for the reform of healthcare from a conservative writer who has absolutely zero qualifications as a healthcare expert—that’s how you know it’s a common-sense proposal. I bring to the table, apart from (I hope) common sense, only a college major in economics, plus an informed sense of outrage over the medical profession’s abuse of classical Greek. The common sense comes from my father, who taught me the importance of getting compound interest on your side as early as possible in life. From other conservative mentors, I acquired three other convictions that have been borne out by experience. One: If you want to solve a social problem, the least effective way to do so is to send massive sums for that purpose to our elected representatives in Congress. Two: You cannot and indeed should not take away benefits that have been promised to American voters and paid for through payroll deductions. Three: Never underestimate the power of Americans’ generosity in a good cause.

But I promised seven easy steps to reform the US healthcare system. Please wait until the end before dismissing the author as an impractical dreamer. Here they are:

1. All healthcare providers, doctors, nurses, and hospitals must operate on a fee-for-service basis, with full price transparency. A healthcare price transparency initiative was begun during the previous term of office of the present administration but was abandoned by its successor.

2. All citizens would, on demand, be issued healthcare cards containing (1) all of their medical records, encrypted, which only they can release to providers; and (2) a payment function linked to personal Health Savings Accounts (HSAs). Cards issued to minors would be linked to their parents’ accounts. Your healthcare card would entitle you to membership in a local HSA alliance (see Step Seven below). Note that the cards would be issued on demand by citizens or their legal guardians; they would not be mandated or required. Those not demanding healthcare cards would be provided with free psychological and personal finance counseling, so they can have their heads examined.

3. Value might be added to personal HSAs by the individual citizen, by his or her parents or other relatives, or by an employer, a church, or a charity. HSAs would be heritable—grandparents could leave any balances in their HSAs to their grandchildren. The wealthy could donate funds to needy individuals or to charities that make grants to needy individuals, providing spectacular opportunities for virtue-signaling, but for a truly good cause. A fixed percentage of funds in a citizen’s HSA, decreasing with age, might be invested in equities, with capital gains taxed at one-half the rate of other capital gains. All individual transfers to one’s own HSAs would be tax free, and transfers to another person’s HSA would be tax-deductible. For persons under 50, the amounts they have already paid to the federal government in Medicare taxes would be transferred to their HSAs. Persons over 50 could decide whether to remain in the Medicare program or to request a sum, equivalent to double their lifetime payments, to be distributed to their personal HSA. Any amounts previously paid into HSAs—which already cover 71 million Americans—would be deducted from the distribution.

4. Funds in HSAs should be spent only on licensed professionals or hospitals and clinics approved by state governments. States would decide whether to include mental healthcare. Purchase of medical services would be tax free. Licensed, full-time medical professionals, as well as for-profit hospitals and clinics, would be taxed at lower rates. Licensing would continue to be the responsibility of the states. Regulation of hospitals would be the responsibility of states and localities, as at present, not the federal government. Regulation of pharmaceutical companies would continue to be the responsibility of federal agencies, with oversight from Congress. Keeping Big Pharma as national companies would preserve economies of scale.

5. The federal government would continue to fund research into rare diseases and public health, but public funds would only be allocated for these purposes after public hearings before Congress. All other government funding of pharmaceutical companies would end. So would “direct-to-consumer” (DTC) marketing. Drugs could only be marketed to medical professionals (as is the case in most countries). To ensure a robust rate of investment in healthcare and pharmaceutical research, individuals investing a percentage of their HSAs in the market might be incentivized to allocate some proportion of their portfolios to healthcare equities.

6. Medicare and Medicaid would eventually be abolished, along with Obamacare. Individuals would no longer be taxed for Medicare, reducing payroll taxes for individuals and employers by 1.45 percent each. The federal government would remain the insurer of last resort for catastrophic illnesses, should other sources of funding prove insufficient. State healthcare options could be transformed into public charities for state residents with insufficient healthcare funds available to them. This is essentially what most of them do now. Or state healthcare funds might be redistributed to poorer regions in the state, to offset the inequalities in healthcare provision that might otherwise arise.

7. Private insurance companies would be abolished. They would be replaced by basic healthcare plans offered by local providers, or, for more expensive procedures, plans offered by state HSA alliances, run by and on behalf of consumers. HSA alliances would be geographically restricted to cover the populations of individual congressional districts (averaging around 750,000 people). This provision of the reform would provide enough income from payors to finance most extraordinary health expenses incurred by individuals. It would require individual alliances to follow sound financial practices, but would do away with the need for the competitive cost-cutting that drives giant insurance companies to deny coverage so as to maximize shareholder profits. The provision to restrict HSA alliances geographically would prevent too great a divergence between the interests of citizens and their healthcare providers.

HSA alliances would be run by local boards, elected by the members, with their finances overseen by state legislatures. HSA alliances might subsidize low-cost gym memberships, walking and biking clubs, and other activities promoting health. The existence of 435 distinct healthcare alliances across the country would ensure a substantial amount of medical experimentation. There might emerge both cooperation between and virtuous rivalry among regional alliances. Home buyers might start to look for good HSA alliances the way they look for good school districts today.

I think it will be evident to anyone who has taken Econ 101 that, under this proposal, Americans collectively, and in a very short time, would have very large sums available to them in their HSAs, and government expenditure on Medicare and Medicaid would drop rapidly, possibly after an initial spike. Health outcomes would improve, if only because doctors would be able to spend less time justifying tests and procedures to insurance companies and complying with regulation, and more time with patients. The tax incentives and relative freedom from regulation and compliance paperwork would increase the motivation of young persons to undertake medical careers and help remedy the shortage of physicians. There would be more doctors and nurses in private practice and clinics and fewer in HMOs; expenditures on medical administration would drop. Investment in pharmaceutical research would increase, although it would come principally from individual shareholders in a competitive market, rather than from government agencies under the control of Big Pharma. Above all, Americans for the first time in history would enjoy universal healthcare.
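
To put rough numbers on that Econ 101 intuition, here is a minimal back-of-the-envelope sketch in Python. The $60,000 salary, the 5 percent real return, and the time horizons are my own illustrative assumptions, not figures from the proposal; the 2.9 percent rate is the current combined employee and employer Medicare payroll tax (1.45 percent each, as noted in Step Six).

```python
# Back-of-the-envelope sketch of redirected Medicare payroll taxes
# compounding inside an HSA. Salary, return, and horizons are
# illustrative assumptions; 2.9% is the combined Medicare rate.

def hsa_balance(salary: float = 60_000, medicare_rate: float = 0.029,
                annual_return: float = 0.05, years: int = 30) -> float:
    """Project an HSA balance if each year's Medicare taxes were
    deposited at year's end and the balance compounded annually."""
    balance = 0.0
    for _ in range(years):
        balance = balance * (1 + annual_return) + salary * medicare_rate
    return balance

for horizon in (10, 20, 30, 40):
    print(f"{horizon} years: ${hsa_balance(years=horizon):,.0f}")
# Roughly $22,000, $58,000, $116,000, and $210,000 respectively.
```

Even on these modest assumptions, redirected payroll taxes alone compound to six figures over a working life, before counting contributions from family, employers, or charities.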

All told, individuals, families, employers, philanthropists, and civil society institutions such as HSA alliances would collectively become responsible for the community’s health. The federal government’s role would return to its constitutional limits for the first time in 60 years. A directed market would replace the current dysfunctional “system,” a monstrous creature born of corporate profit-seeking, bureaucratic bloat, and political power-seeking. The proposed reform would have the effect of depoliticizing healthcare policy, or at least bringing the politics of healthcare closer to those who use it and need it.

There would, of course, be major political obstacles from various entrenched interest groups, who can be counted on to inject fear into the public that they are “losing their healthcare” and that their health will be at risk without government or corporate insurance. Those who follow healthcare debates in Canada or the United Kingdom will be aware that having insurance doesn’t guarantee prompt or excellent healthcare, but the uninformed will undoubtedly be lured by the siren song of “Medicare For All” and other socialist nostrums. 

One major industry that is bound to oppose this plan is America’s mighty insurance industry, which currently spends over $117 million on lobbying efforts. These companies would have to be offered a new role, repurposing their current activities. They might, for example, be consolidated into a few independent health assessment agencies. These would act like the three largest credit reporting agencies (Experian, Equifax, and TransUnion) do now, as agencies closely regulated by Congress and the Treasury Department. Their job would be to compile independent actuarial tables to assess the health of individual Americans, to serve as umpires to evaluate the claims of drugs and treatments offered by pharmaceutical and medical research companies, and to ensure transparency. They could keep track of the spread of diseases and recommend investments in medical research. Individuals with exceptional healthcare needs could apply to have their needs assessed, so that available funds could be disbursed for needy cases. Needy individuals who met the criteria could receive care from HSA alliances or, in the worst cases, receive government catastrophic coverage. The existence of multiple agencies in competition with each other should help keep the system honest.

There will also, of course, be opposition from celebrities, publicity-seeking politicians, NGOs, and other pressure groups concerned with questions of social justice in healthcare. With their bias in favor of government solutions, they would surely oppose an ordo-liberal or directed market reform, where great differences in wealth between different populations might lead to unequal health outcomes. The problem, structurally, would be similar to the inequities that exist between inner-city school districts and school districts in wealthy suburbs. In my opinion, this is more of a political problem than an economic one, and finding solutions will be much easier, since in a new directed market there would be no entrenched unions to defend incompetence and privilege, such as exist now in public education. Detailed solutions to the remaining problems will have to be addressed by others. In the short term, however, social justice warriors could be invited to contribute some of their own wealth to healthcare funds for the poor and needy. If they continue to natter on about “equity” and equalization of outcomes, they could be asked: how much of your own or your organization’s resources have been spent on the poor and needy?

For too long, healthcare reform has been synonymous with heavy-handed government mandates and massive spending increases. Shifting political winds, however, might make room for common sense reform. The idea of a directed market—where economic freedom is combined with social welfare—is one that can bring together libertarians and populists alike, and perhaps offer a rare opportunity for popular, market-oriented reform that would stand as one of the most important policy initiatives in US history.

Reparations Done Right

The policies of the incoming administration with regard to the federal role in education, despite a great deal of fearmongering during the election season, have not yet taken shape. Though the administration nominated Linda McMahon for Secretary of Education this week, the administration’s priorities in education are being assessed on the basis of statements made by candidates during the election. Before McMahon’s nomination, Education Week reported that the Secretary of Education “will likely support slimming down if not dismantling the Education Department; expanding school choice; slashing K-12 spending; and attacking school districts’ diversity, equity, and inclusion initiatives.” It will probably be some time before we know how these positions will cash out in terms of policy choices.

Not on the horizon of the incoming administration are any proposals for reparations to the African American population of America’s inner cities. Such proposals have generally come from radicals. But could there be a centrist version of reparations to be found in education policy?

The Usual (Failed) Case for Reparations

The 2024 elections were, if anything, a mandate to reject the idea of reparations as they have been proposed on the left. Pressure from the left to enact reparations policies has multiplied since the George Floyd riots and demonstrations of 2020. In that year, the city council of Asheville, North Carolina, approved reparations for black residents in the form of subsidies for homeownership, businesses, and career development. More recently, San Francisco formed an African American Reparations Advisory Committee which has proposed that “every eligible Black adult receive a $5 million lump-sum cash payment and a guaranteed income of nearly $100,000 a year to remedy San Francisco’s deep racial wealth gap.” The proposal has been enthusiastically received by the San Francisco Board of Supervisors. How it will be paid for has yet to be disclosed.

The San Francisco measure is typical of how the left thinks about reparations. The policies they propose are manifestly unjust. The more extreme activists are calling for a guaranteed minimum income, free healthcare, and housing for all African Americans—reparations through racialized socialism. Despite the sanction shamefully bestowed on such demands by media organs and many public officials, most Americans still have the good sense to recognize that enacting these measures would wreck the lives of both black and white citizens and lead to social turmoil if they were imposed on the vast majority of Americans who live outside of leftist enclaves.

Other proposals for reparations are more in line with the liberal priorities of the welfare state, which still has support from at least a large plurality of American voters. Typical is the Economic Justice Act, introduced in 2020 by Senate Democrats in response to the George Floyd demonstrations. This is a $350 billion measure described as a “down payment” to compensate for “systemic racism and historic underinvestment in communities of color.” Though the bill, introduced by Sen. Charles Schumer, has never been brought up for a vote, and is surely DOA in the new Republican Senate, it is a revealing example of the old welfare-state approach to racial justice. The measure includes the usual set of liberal nostrums for improving the condition of African Americans: racial set-asides, Medicaid expansion, and infrastructure projects in high-poverty communities. More job training, despite the persistent failure of such programs in the past to yield results, is also included. Deaf to the lessons of the 2008 subprime mortgage crisis, the bill offers tax credits for down payments on homes, ensuring that more African Americans will be trapped into taking on unaffordable mortgages.

Particularly striking is what is missing from the bill: any measures that would improve educational opportunities for American blacks. Apart from funding for more childcare and preschool programs of dubious value, there are no provisions for improving funding or school choice options for blacks trapped in inner-city schools. There can be little doubt that this loud absence represents the preferences of a leading progressive constituency, namely the teachers’ unions. The teachers’ unions have long blocked access to the most important childcare program of all for poor, inner-city blacks: quality K-12 education.

Hold that thought, and now consider whether more centrist and conservative Americans have anything to offer black Americans in the form of reparations for slavery. Historians such as myself are likely to respond that reparations for slavery have already been made somewhere between Fort Sumter and Appomattox. Abraham Lincoln in his Second Inaugural put it this way:

Fondly do we hope—fervently do we pray—that this mighty scourge of war may speedily pass away. Yet, if God wills that it continue until all the wealth piled by the bondsman’s two hundred and fifty years of unrequited toil shall be sunk, and until every drop of blood drawn with the lash shall be paid by another drawn with the sword, as was said three thousand years ago, so still it must be said “the judgments of the Lord are true and righteous altogether.”

Yet there is a weighty moral argument that the country still owes its black citizens a debt. Ta-Nehisi Coates—before he became a darling of the left—made a persuasive case that reparations should be made to blacks for discrimination against African Americans in the period from Reconstruction to the civil rights movement of the 1960s. For many decades, blacks lived under racist legislation imposed by local and state governments, north and south. Such legislation was tolerated and even supported by the Democratic administrations after 1932, which relied on the southern states to maintain their electoral majorities. The argument that a great part of the persistent income gap between whites and blacks is a direct result of this legal discrimination is not really in doubt anymore among economists. Coates argued that restitution should be paid, not only to the descendants of African slaves brought to America, but also to blacks injured by unjust enrichment authorized by white-majority governments since the end of slavery.

In view of this more recent pattern of racist discrimination, many Americans, and not only those on the progressive left, continue to feel that we as a people need to make some kind of restitution for past discrimination. Doing so would allow America to right a great wrong and allow African Americans, at long last, to lay aside their just resentment and integrate into American society as full members. It would allow white Americans aware of the evils in our past to feel that justice, at long last, has been done.

The difficulty with the reparations argument has always been practical, not moral. It lies in the questions: by whom, to whom, and how much? Who has the obligation to pay? Who has a just claim to be paid? And how much of the relative poverty suffered by modern black Americans can be measurably traced to the discriminatory practices of the past?

Most people who reflect on these questions thoughtfully will conclude that we as a people can never really make restitution for slavery and the racial discrimination of the past. The damages can never be calculated in monetary terms. Nor will we ever be able to explain to the Chinese businessman in my neighborhood, a man whose parents brought him to this country in 1956 to escape communism, why his taxes should go up to compensate African Americans, some of them now well off and college educated, for discriminatory housing policies in 1940s Chicago.

A Centrist Form of Reparations

While reparations in the form of cash payments are obviously impractical and unjust, what we can do as a people is embrace policies that will both close the wealth gap between white and black Americans and increase the prosperity of all Americans. It is possible to have a form of reparations that truly serves the common good: a policy oriented to equalizing educational opportunity. What holds poor African Americans back more than any other circumstance is the wretched quality of public schools in the inner cities. This is not really in doubt. But the inner cities also hold the largest untapped reservoir of talent in America: badly educated African Americans. If we could find a way to improve how these young men and women are educated, we could reduce the poverty gap quickly and dramatically improve social relations in America.

We have the resources to do so without raising taxes. The federal government spends $240 billion annually on grants, work-study programs, and loans for post-secondary education. Instead of spending those funds on middle-class entitlements, which often just inflate tuition at wealthy private colleges and for-profit schools, we could convert those funds to education vouchers that would give African-American parents the resources to choose better K-12 schools for their children. Instead of loading middle-class students with mountains of debt, we could provide a solid benefit to those who need it most.

Surely those funds would be much better spent on the part of our society that has the most potential for improvement and for contributing to the general welfare. Given the misallocation of educational resources caused by lavish federal spending in higher education, shifting funds to basic education makes a great deal of economic sense. Those of us who have watched with alarm the performances of our college-educated youth in the last year following the events of October 7 will not regret reducing government subsidies for poisonous indoctrination. Educating a young black man so that he can profit from college is a much wiser use of public funds than educating future baristas in subjects of small value to society. Educating African Americans well at the earliest stages of their development will make us all better off.

Improving African Americans’ education, moreover, would reduce the pressure for “equity,” defined as equality of outcomes. The progressive concept of equity is the chief obstacle at present to the principle of merit—the principle that should govern all education. The current progressive solution to black education—trapping black children in terrible K-12 schools, then pretending that their educational achievement in college matches that of their peers—is not helping them or anyone else.

In the current educational landscape, this proposal has the benefit of not being easily locatable on the simple-minded left/right spectrum used by most political commentators. The political benefits that would accrue to any administration that takes up this proposal for shifting educational priorities should be evident, but let me spell them out a bit.

The country would stop over-investing in higher education. This would have the effect of reducing the influence of higher education on the political beliefs of graduates simply in terms of numbers. Reducing federal support for college loans should also have good effects within universities. Political indoctrination in many schools is a luxury addition to the basic curriculum, made possible in part by the flood of federal dollars that has been put at the disposal of university administrators. Reducing subsidies for higher education would force administrators to choose between their basic mission—to educate students in ways that benefit them—and the desire of activists to use the university as a platform for indoctrination.

It is a well-known economic dictum that if you subsidize something, you get more of it. Subsidies also reduce the efficiency of the free market. If students are not subsidized to enjoy what many of them look upon as a four-year vacation from the workforce, they will be forced to make better decisions about their investment in education. Black Americans will have more reason to be grateful to the sane center of American politics. And we will all feel that we have done something to redeem ourselves as a nation from the injustices of the past.

Republics and the Ethical Ideal of Democracy

Editor’s note: This is an edited version of a talk given to the John Marshall Program at Boston College on November 4, 2024.

Let me begin by thanking David DiPasquale for the kind invitation to cross the Charles River and address the John Marshall Program (JMP) here at Boston College and Dallas Terry for helping with the arrangements. When we settled on the date, I was working with the professional/academic part of my brain and somehow tuned out the fact that the lecture would take place on the day before the election. The staff of JMP had read an essay I wrote for Law & Liberty about the historical uses of the terms “republic” and “democracy.” They asked me to speak on that subject, but relate it to the timelier theme of elections. I agreed, stipulating that I didn’t want to make a partisan speech but give a historical reflection. However, now that the moment has come to put my ideas into words, I’m not finding it easy to evade the charge of partisanship. We are experiencing a moment of particularly strong passion in our already passionate political life, being literally on the eve of what people are saying is the most important election of our lifetimes. (Let me reassure the younger people in the room that this has been said of every election in my lifetime of nearly seven decades.)

My problem is not just trying to speak on a historical subject at a moment when my audience will be hypersensitive to the partisan implications of my remarks. Election season is a moment when civic-minded people are focused on the present and the future. To bring up the past, especially the remote past I’m going to talk about, appears as an annoying distraction from our most important concerns as a nation. For me personally, it’s also a challenge to speak about history in a moment when the shallowness of Americans’ historical knowledge, and the consequent poverty of our public discourse, have become blindingly obvious. Simple-minded traditionalists like myself have the idea that before elections we should be engaging in some kind of democratic deliberation, discussing the merits of the candidate’s policy proposals, for example. Instead, public discourse has degenerated into an ignorant exercise in name-calling. One side calls the other fascist, the other side calls their opponents communist.

Both claims are hysterical and historically illiterate, and the fact that they are taken seriously at all by anyone is a condemnation of American civic education as well as the absence of deliberation in our public life. Most serious democratic thinkers, from the fifth century BC sophist Protagoras onwards, have believed that participation in the public life of a democracy via deliberation was itself an educational experience for all citizens, and one necessary to the flourishing of democracy. Instead, public deliberation is being led by people who throw around terms of whose meaning they are invincibly ignorant, terms like “republican” and “democratic.” To my mind, it’s like going to an academic conference on biology organized and conducted by people who don’t understand the meaning of the terms botany, zoology, micro-organism, or cell structure.

A disturbing feature of the present moment in our public life is that both sides are accusing the other of being a threat to democracy. People who hold that the events of January 6, 2021, can be plausibly described as the worst insurrection since the Civil War identify the Republican candidate as the chief source of this threat. The wealthiest man in the world shouts back through his megaphone on “X” that the real threat to democracy comes from people who accuse the Republican nominee of endangering democracy. Both parties in the US now take the position that “it’s only democracy when we win.”

No wonder that the latest Georgetown Institute of Politics and Public Service Battleground Civility Poll shows that an alarming number of Americans across party lines believe that the democratic system of government is under threat, although for very different reasons. The poll, conducted by a consortium of Republican and Democratic pollsters, found that 81 percent of respondents agreed with the statement that democracy in America is currently under threat, and 72 percent agreed with that statement strongly. Americans disagree about the source of the threat, however. The forces in America identified as very serious threats to democracy include MAGA Republicans (49 percent, 34 percent extremely serious), major news organizations (47 percent, 24 percent extremely serious), and social media (43 percent, 23 percent extremely serious).

Americans have an almost religious belief that we are a democracy and that democracy is precious to us. On the right, it is said that our personal freedoms depend on democracy, while the left emphasizes that our goodness as a people is threatened by a breakdown in democracy. Everybody has an opinion about this subject, myself included. But a much smaller number seem to have a clear understanding of what democracy is.

Many people, especially foreigners, seem surprised to learn that the US Constitution outlines a form of government that is not democratic, but republican. Many Americans have only a vague conception of what a republic is. I remember a student—and a Harvard history major!—writing on an exam I gave some years ago that “republic is just an old name for democracy.” In fact, the thinkers who most shaped the US Constitution, John Adams and James Madison, had a horror of the democratic form of government, which they understood from their reading of history to be a proven failure, leading inevitably and quickly to violence, anarchy, and ultimately tyranny. As John Adams wrote in a letter to John Taylor in 1814, “Democracy never lasts long. It soon wastes, exhausts and murders itself. There never was a democracy yet that did not commit suicide.” Even Jefferson, the Founder most confident in the power of the people to govern themselves, thought direct democracy could only be exercised at the local level, and that the principle of self-government would have to be diluted by the device of representation—a republican device—if it were to operate over large areas. Late in life, Jefferson admitted in a letter to William Charles Jarvis (1820) that the American system of government, consisting as it did of three distinct and independent branches, would not be able to resist judicial oligarchies abusing their powers on partisan impulses unless the people were to step in to prevent that outcome, using their “wholesome discretion.” But they would lack such discretion absent a serious program of civic education, which in that period, before the founding of the public school system in the 1840s, did not exist.

Of course, democracy is not only a form of government, that is, a particular kind of regime or constitution. A democratic regime, as it was understood in antiquity, is like the one used in Athens in the fifth century BC: a form of government, in other words, in which the people govern themselves via councils and assemblies, random selection of magistrates by lot, and juries consisting of hundreds of jurors to prevent bribery and the undue influence of the wealthy on the judicial process. Democracy as it exists in America is better understood, not as a regime, but as an ethical ideal, one that has grown and developed since the Reformation into a way of life and thought built around three concepts: popular sovereignty, personal autonomy, and equality. This characterization of democracy and its fundamental concepts comes from what I believe is the best book ever written on the history of democracy as an ethical ideal, namely my colleague James Kloppenberg’s book Toward Democracy: The Struggle for Self-Rule in European and American Thought (Oxford, 2016). As Kloppenberg notes, democracy as an ethical ideal is unthinkable without the influence of Christianity, particularly Protestant Christianity.

So for the balance of this talk, I aim, first, to explain why the United States Constitution does not outline a democratic regime but a republic, and why the Founders thought a republican regime could channel the popular will without suffering from the bad design of the democratic regimes they knew from history. Second, I will discuss democracy as an ethical ideal and way of life, and argue that the aspirations of Americans to be a democratic society, which emerged strongly after the American Revolution, are failing to be realized. I will leave it up to you to decide for yourselves which party or parties in America are most responsible for that failure.

First, let me put a bit more meat on the bones of my claim that the American system of government is republican (lowercase r!), not democratic (lowercase d!). The reason why the Founders did not want a democratic system of government is that, unlike modern Americans, they knew something about Western history and particularly British history. Anyone who has read The Federalist Papers or the private correspondence of the Founders will be aware of just how deep their knowledge was. John Adams was already exciting Americans in 1774 with the thought that their generation could play the role of the ancient Greek legislators Lycurgus and Solon, or the Roman king Servius Tullius, who established Rome’s Servian constitution. In 1776 he wrote in a famous letter known as Thoughts on Government:

You and I, my dear Friend, have been sent into life, at a time when the greatest law-givers of antiquity would have wished to have lived. How few of the human race have ever enjoyed an opportunity of making an election of government more than of air, soil, or climate, for themselves or their children. When, before the present epoch, had three millions of people full power and a fair opportunity to form and establish the wisest and happiest government that human wisdom can contrive?

The Founders were bookish people, and they turned for inspiration as much to history as to political theorists such as Aristotle, Locke, Algernon Sidney, and Montesquieu. Benjamin Franklin’s Library Company of Philadelphia, founded in 1731, which became effectively the Library of Congress during that assembly’s long residence in the city, was well-stocked with histories. The shelves of John Adams’ library, the largest in colonial America, were also loaded with works of history. His writings, like those of Jefferson and Madison, teem with references to the republics of past times: to the ancient Romans above all, but also to the medieval Italian republics, to the Venetian, Swiss, and Dutch republics, and especially to the English Commonwealth of the seventeenth century. (Note that the word “commonwealth” is just an English translation of the Latin respublica.)

Some of the Founders read Latin, Greek, and French as well as English. They read Thucydides (often in Hobbes’ translation), Livy, Sallust, Cicero, and Tacitus; they read Plutarch’s Lives of the Noble Greeks and Romans in Sir Thomas North’s translation; they read Polybius in the translation of James Hampton (in whose pages they could learn about the federal republics of ancient Greece); they read Edward Wortley Montagu’s Reflections on the Rise and Fall of Ancient Republics; of the Italians, they read Leonardo Bruni’s History of the Florentine People, Guicciardini’s History of Italy, and Machiavelli’s History of Florence; they read John Jacob Mascou’s History of the Ancient Germans; they read David Hume’s six-volume History of England and Obadiah Hulme’s Historical Essay on the English Constitution. As soon as each volume of Edward Gibbon’s Decline and Fall of the Roman Empire left the presses, between 1775 and 1788, copies flew across the Atlantic and were eagerly consumed by Americans. Americans had good reason to be interested in the collapse of states in those years, when the new Confederation in North America was being torn apart by its weak central institutions.

So what understandings of the term “republic” might they have gleaned from their reading? First of all, they would be aware that a republic is not a democracy. The Founders knew what a democracy was and had no interest in giving America a democratic constitution. They knew their history. The historical experience of classical Athens was taken by nearly all the historians the Founders knew to prove that a democratic constitution was doomed to failure.

Already in the fourth century BC, it was widely believed by Greek thinkers that both pure democracy (Athens) and pure oligarchy (Sparta) were failed forms of government. The great political theorists of the fourth century BC—Plato, Aristotle, Isocrates, and Xenophon—had all proposed various fixes for the defects of democracy. The most influential of these was Aristotle’s “mixed” regime, where elements of democracy and oligarchy were balanced against each other to produce stability. Later, Polybius and other writers in the Aristotelian tradition added a monarchical principle for greater stability. Aristotle called his mixed regime politeia.

When Aristotle’s Politics was translated into Latin around 1436/37 by the Florentine historian Leonardo Bruni, politeia became respublica. Bruni’s translation was the most popular Latin version for centuries. The 1597 Geneva edition was in John Adams’ library. (Adams also possessed the 1776 edition of the Politics in the English translation of William Ellis, first printed in 1597, where the constitution named politeia was translated, unhelpfully, as “state.”)

When the Romans conquered the Mediterranean in the second century BC, the historian Polybius explained the growth of their power largely in terms of their (unwritten) constitution, which he recognized as a form of mixed regime. The Romans were proud of their republic even in the dark decades of civil war during the first century BC, blaming Rome’s parlous condition on the moral defects of powerful warlords rather than on any weaknesses in her constitution. According to Cicero, Rome’s basic constitutional principles had been laid down by one of the early kings, Servius Tullius. Servius had established the bedrock principle that political power should be proportionate to a man’s income and his contribution to Rome’s military power. Poorer citizens could participate in assemblies but decision-making power was kept in the hands of the most influential citizens. The censors, a magistracy responsible (among other things) for deciding which citizens could belong to the Senate, judged them fit for membership not only on the basis of their moral rectitude, but also on their income. A man without sufficient income to support himself and his family comfortably without engaging in trade or a paid profession was ineligible.

Post-classical Athenians, by contrast, continued to call their city-state a democracy even after all the real power came to be exercised behind the scenes by wealthy oligarchs. As the great authority on Hellenistic Greece, Peter Green, once wittily remarked, Athenians came to see democracy as a privilege best restricted to the upper classes. Modern parallels spring to mind. The Romans for their part were not in the least embarrassed about the preponderant power of the wealthy in their system. It was a feature, not a bug. But in Rome, the possession of wealth and preponderant power imposed upon the great the responsibility to put themselves and their treasure at the service of the republic. It was assumed that the wealthy would also be the best educated, the most likely to have experience in civil and military affairs, and, as persons of long residence in Rome, the most loyal and public-spirited.

In the middle republic (third to second centuries BC), the principle of merit was added to the Servian constitution: distinguished service to the state was also to be a source of dignitas or merited status. Thus, “new men” like Cicero could be taken into the ruling elite on the basis of outstanding abilities and contributions to the republic’s welfare, the salus reipublicae. To prevent the powerful from oppressing the common people, a new magistracy was invented, the tribunate, consisting of ten tribunes of the plebs. The existence of this magistracy led to the emergence of populist politics at the end of the second century BC, but Rome never became a democracy. Roman populism ultimately brought Julius Caesar and Augustus to power, over the opposition of the Senate. Rome’s populists were almost always led by nobles who were more devoted to acquiring power for themselves than serving the interests of the common people. 

Cicero, in his dialogue On the Commonwealth (54/51 BC), praised the old republic for favoring the best men or “optimates,” observing “the principle which ought always to be adhered to in the commonwealth, that the greatest number should not have the greatest power” (ne plurimum valeant plurimi). Rome should never be a democracy; that would be too dangerous for ordered liberty, which was guaranteed by law, not popular power.

In a democracy, Cicero believed, sensible public deliberation was impossible. In one of his speeches, Cicero mocked Greek democracies for their foolish practice of herding large numbers of ordinary citizens into amphitheaters and allowing them to shout at each other. The Romans, more sensibly, conducted deliberation in the Senate, among educated men with experience of government. The Senate proposed legislation and the people in their assemblies had the right to vote on the Senate’s proposals, up or down. This practice, that the wise should deliberate and propose, the people approve, was the normal procedure used by most European republics in the centuries before the founding of our American republic. It was recommended by many of the Whig writers widely read in America—among them James Harrington, in The Commonwealth of Oceana.

By establishing a House of Representatives to conduct its own deliberation and to propose all legislation involving taxation (a principle now apparently forgotten in Washington, DC), the Founders were attempting to rebalance the republican tradition they inherited in a popular direction, so that the interests of the wealthy could never prevail over those of the people. Nevertheless, they continued to uphold the view that the presumably wiser and better-educated men in the Senate—Jefferson’s “natural aristocracy”—should prevail in matters of foreign policy and in the oversight of the other branches of government. The aristocratic element was also, originally, meant to prevail in the choice of the president, through the Electoral College. The Electoral College was supposed to deliberate about the election results and exercise its discretion, but within a decade of the Constitution’s adoption it was corrupted by party politics, at which point it lost its deliberative and decision-making power.

All that being said, most of the Founders were much more optimistic than the tradition they inherited about the possibility that ordinary citizens could engage in democratic deliberation. What this shows, I believe—and here I am again following Jim Kloppenberg as well as Gordon Wood’s classic work, The Radicalism of the American Revolution (1992)—is that the founding generation and the generations that followed were imbued with the democratic spirit. I mean here the ethical ideal of democracy, as distinct from the political regime. As an ethical ideal, democracy will always be aspirational. As with other ethical ideals, the frailty of human nature means that we will always fall short in our efforts to realize it. Human beings have an ineradicable inclination to evil as well as to good, which is why we need the constraints of a republican regime.

As analyzed by Kloppenberg, the democratic ideal has three main elements: popular sovereignty, individual autonomy, and equality. Popular sovereignty means that the ultimate authority in the state is the people, and that the form of government, whether constitutional monarchy, aristocratic republic, or popular republic, should reflect its will. Members of this James Madison Program will recognize this as the view of Rousseau, who posited in The Social Contract that the sovereign will of the people could be invested in a regime at the moment, historical or notional, when the social contract was formed. After that moment of authorization, the regime established by the contract did not need to seek continuous and regular authorization from the people for its subsequent acts. In the democratic republics that emerged in the nineteenth century, however—modeled to a large extent on the American republic—the popular will had to be expressed continuously through representatives, duly constrained by law and the enumerated powers given to the legislature by the Constitution. Thus, in the American republic, popular sovereignty implies both participation—being open to citizen participation at all levels and in all branches of government—and representation, the authorization of persons who can then represent the will of citizens in the legislature. As the democratic spirit has spread, any barriers to political participation based on race, sex, or property qualifications have been torn down. At the same time, it is widely recognized that popular sovereignty needs constitutional limits to protect individual rights, the common good, and civil peace and stability.

The gradual removal of barriers to participation has come from the second element identified by Kloppenberg as part of the ethical ideal of democracy: individual autonomy. This means self-rule, being sui iuris as the Romans would say, not being treated or acting as subject to another, but free to choose ends for oneself. In America and Europe, the impetus behind the modern commitment to autonomy came most powerfully from the struggle against slavery and unfree labor. Autonomy combines both positive and negative freedom, the freedom to rule oneself and specific freedoms from constraints imposed by the public power—civil rights, in other words. Autonomy means that all adult citizens should have the capacity to shape their own lives, within the standards set by law, tradition, and custom. All citizens should also be able to participate on an equal basis in shaping those standards, and revising them when necessary. Liberal pluralism is a valuable thing in a country as diverse as ours, but ideally, it should be based on explicit democratic authorization, not imposed by the courts. This is especially the case when advocates of pluralism seek to change settled ways of life, above all those affecting the family and religion. When judges impose pluralism (as Jefferson noted in the letter of 1820 referred to earlier), the people are likely to become estranged from the legal elites who take it upon themselves to dictate social norms.

This brings us to the third element in the democratic ethic: equality. Equality means, minimally, equality of political rights and equality before the law. These were ancient ideals, associated, respectively, with Greece and Rome. In addition, Aristotle recognized that great inequality of incomes was destabilizing and counseled, as a maxim of practical wisdom, as distinct from a principle of justice, that legislators should act to prevent too much inequality in a state.

Modern ideals of equality descend from the idea of innate human equality, a principle first enunciated by the Greek church father Gregory of Nyssa in the fourth century AD. He based the principle of human equality on the individual possession of reason, on being formed in the image and likeness of God, and on the New Testament injunction to treat others, even the poorest and weakest of human beings, as though they were Christ. The republican political tradition was impregnated with these notions via radical Protestantism in the sixteenth and seventeenth centuries. The modern democratic ethic represents a secularization of these ideas. The weightiest defender of human dignity in its secularized form was Immanuel Kant, whose moral philosophy has been a major source of much later dignitarian thinking.

It is not the exclusive role of governments, of course, to support the democratic ethical ideal; like any other set of ethical beliefs, it needs all the sources of reinforcement it can get, including parental teaching, civic education, religious institutions, professional norms, and community standards. Nevertheless, if we want our republic to have a democratic spirit, we need to recognize the ethical preconditions of a democratic way of life, and governments must do what they can to support those norms, or at least not get in their way. Allowing elections to take place is a necessary condition, but hardly a sufficient one.

For the sake of discussion, let me give a short list of three things I believe governments must do to foster the ethical ideal of democracy—what we might also call the democratic way of life or the democratic spirit. There are other preconditions of democratic civil life, but these seem to be the most pressing at the moment.

First, in order to support healthy forms of pluralism and autonomy, democratic states need to foster a particular kind of sociability. They must be committed to allowing fundamental differences between and among the people to persist, including religious differences. Safeguards should exist, at least in the form of peer pressure or common norms, to prevent political parties from demonizing each other. Single political parties should not be allowed to monopolize the public square in which support may be sought from the people. They should also not be allowed to monopolize public education. The government needs to foster a commitment to tolerance. It cannot allow itself to be taken over by utopian fanatics determined to impose their beliefs on their fellow citizens. Governments and public institutions need to encourage a spirit of live and let live, a spirit of reciprocity, and not try to impose a fixed unitary conception of the good life. They need to support a kind of sociability, in short, which allows people of very different beliefs to live and work together with an attitude of mutual respect. This, to me, is far more important to democracy than preserving what is called “diversity” by departments of Human Resources, diversity based only on arbitrary definitions of group identity.

This sort of sociability is much harder to maintain around election time, of course, but I submit that it has been some time, well over two decades, since the spirit of democratic sociability has prevailed in the councils of our government.

Secondly, states must also promote genuine democratic deliberation, and not only among elected representatives in constitutional assemblies. They must also, as much as possible, include the people as a whole in democratic deliberation, promote rational persuasion, and prevent the use of force or fraud in determining the outcomes of political choices. They should be wary of declaring states of emergency, as these are historically the antechamber to tyranny. As an example of what should not be done, I would mention the health dictatorships established during the Covid panic. These curtailed our liberties in the most dramatic fashion, and we the people had little to say about it. Democracy almost completely disappeared at just the moment when the state had assumed unprecedented dictatorial power over us. Most of the time these dictatorships were legal in the sense of operating under legislative authority, but that authority was originally designed to last for short periods, not for many months and years. The failure to allow proper democratic deliberation in legislatures by the people’s representatives about issues that affected everyone’s lives and livelihoods has done much to undermine the idea that our republic is an expression of the will of the people. It caused a hypertrophy of conspiratorial thinking, always a sign of a lack of transparency or the use of deceit in decision-making.

Finally, for the democratic spirit to flourish, governments have to foster an ethic of impartiality among those who are the umpires of democratic deliberation, namely, those who run the electoral system and the courts. The law cannot be politicized or weaponized by one party against another. If people do not have confidence that elections are honest and that the courts are non-political, then there can be no democracy. This principle of impartiality, of course, is an ethical derivative of the Roman republican conception of the rule of law, a civil law derived from natural law and standing above politics. It is apparently difficult for many people to understand that they cannot oppose persons they take to be demagogues by corrupting the legal system. This only makes the law itself into a demagogue. It should be a primary goal of public education to teach young citizens what the rule of law means, its history, and its successes and failures. This means the young need to be taught Western history, beginning with Roman history.

The ancient Romans saw clearly the need for an impartial and non-partisan legal system. Cicero’s solution to the problems of demagoguery and warlordism in his time was to limit popular self-rule through the rule of law and to prevent political abuse of the law by placing its interpretation in the hands of the wise, a relatively new class of legal experts known as jurisconsults. Roman civil law, which had begun to coalesce as a system of rules for settling court cases in the second century BC, had by Cicero’s time assumed the character of an autonomous source of right, set above social and political competition, to which appeal might be made by all Roman citizens on a basis of equality. In a famous speech, In Defense of Aulus Caecina, Cicero maintained that it was this autonomy of law, its superiority to politics, that made it the “incorruptible guarantor” of civil rights. It created “the bonds of social welfare and life” and had therefore to be “uniform among all and identical for everyone.”

The strong separation of legal processes—in principle at least—from the corruptions of politics became a bedrock principle of Western legal thought. It was reformulated and strengthened in the eighteenth century as the principle of an independent judiciary. For the Romans, the separation of law from politics was what made a man free: it protected him and his property from more powerful figures in the state and their political projects. As Cicero put it, using a dramatic paradox, “the magistrates are ministers of the law, the judges are its interpreters, and we are thus all slaves to the law so that we can be free.” Roman citizens were subject to the law, not to persons; and if they became subject to persons, they were eo ipso slaves or dependents, not sui iuris, directly under law. For this system to work, lawyers had to see themselves as representatives of the law, not of political parties, demagogues, warlords, or any particular interest. It was their solemn obligation and sacred duty to uphold the law and justice, and to put its integrity before any private interest. The requirement that a judge should be impartial and never align himself with a political party, as this group will know, was a bedrock principle of John Marshall, the fourth Chief Justice of the Supreme Court.

I submit that in all these respects, America is failing to uphold the ethical ideal of democracy, and we are actually falling away from that ideal, becoming less democratic, rather than simply failing to make progress. What the causes of this deplorable situation are, and what should be done about it, I leave open to discussion.

The post Republics and the Ethical Ideal of Democracy appeared first on Law & Liberty.

Teaching Eloquence https://lawliberty.org/teaching-eloquence/ Thu, 17 Oct 2024 10:01:00 +0000 https://lawliberty.org/?p=62214 As Election Day approaches, I’ve been listening, though as little as possible, to our candidates for public office giving their standard speeches on their standard issues. These, frankly, are boring. The crowds may respond with (apparently) spontaneous enthusiasm and even excitement, but the words being spoken are more or less boilerplate, what the French call […]

As Election Day approaches, I’ve been listening, though as little as possible, to our candidates for public office giving their standard speeches on their standard issues. These, frankly, are boring. The crowds may respond with (apparently) spontaneous enthusiasm and even excitement, but the words being spoken are more or less boilerplate, what the French call langue de bois, or xyloglossie, wooden language.

There are different varieties of wooden speech. The official state language of North Korea is one; the dictator Kim Jong Il spoke like a native. It causes acute discomfort in anyone accustomed to thinking of speech as a vehicle of truth. There is also what the British call “bafflegab,” language designed not to be understood, like an insurance policy, or pretentious bureaucratic language designed to conceal vacuity of thought. This sort of verbiage can now be machine-generated to suit the needs of, for example, DEI bureaucracies, without passing through a human mind, like the computer in John Searle’s “Chinese room.” Then there is the language most of our politicians speak, the manufacture of focus groups and polls, designed to hit hot buttons and bring voters out on Election Day (or, these days, Election Season). I refer here to the language used by the minority of politicians who are capable of avoiding word salads and speaking in a disciplined way, which in America means speaking in complete sentences.

Contrast these forms of wooden language with genuine eloquence. Contemporary students of history are most likely to think of Edmund Burke, Winston Churchill, or Martin Luther King rather than the great ancient orators Demosthenes and Cicero (although, in my insufficiently humble opinion, the eloquence of the classical orators has never been fully equaled in later Western history). In the contemporary world, great heights of eloquence have been scaled by very few. Boris Johnson’s moving tribute in 2022 to the monarch he called Elizabeth the Great surely stands out. Douglas Murray in his Sunday column for The Free Press has made eloquence his theme for the year, providing many rich modern examples. Those who have viewed his recent interview with Bari Weiss about the war in Israel will have realized that he is among the most eloquent men of our time. I don’t think it’s merely the partiality of a close friend and fellow historian that sees in Allen Guelzo one of the few Americans who can equal the British in his capacity to mobilize the English language in the articulate defense of high ideals.

Our universities should not overlook this matter of eloquence. Since last month’s forum in Law & Liberty on the new generation of civics institutes being founded in state universities across America, I’ve had some further thoughts about subjects they might teach that would attract enrollments and also serve the country well. Many people believe that eloquence can only be the result of inborn gifts not given to the commonality of men. That belief, however, has never been shared by educators in the Western tradition since the time of the ancient Greeks. It was the firm conviction of the Greeks and Romans and, for that matter, of all Western educators from the Italian Renaissance to modern times, that eloquence could be learned. Even Plato, a rival and critic of Isocrates—the great founder of humanistic education—held that philosophy had its own form of eloquence, of which he gave a splendid example in his Apology of Socrates.

In the humanistic educational tradition—which in the ancient world ran from Isocrates to the Romans Cicero and Quintilian and was revived in the humanist schools of the Renaissance—eloquence could be taught through formal study of the art of rhetoric. In the early American republic many of the finest orators, including Ralph Waldo Emerson, Horace Greeley, and Frederick Douglass, learned to speak in public by studying and memorizing gems from the anthology The Columbian Orator (1797 and many later editions). This was a collection of speeches ancient and modern, including texts of Socrates, Cato, and Cicero alongside more recent speeches by the Elder Pitt, Charles James Fox, George Washington, and Benjamin Franklin.

Here, I believe, is a great art that the new civics institutes could restore to our republic. Rhetoric and public speaking used to be taught, even required, in American universities. Harvard had a requirement in rhetoric (public speaking) until 1955, and there was an endowed chair, the Boylston Professorship of Rhetoric and Oratory, founded in 1804 to teach the subject. Its first holder was John Quincy Adams; in recent times it has typically been occupied by a poet. But the old art of learning to speak persuasively in public has been abandoned by the modern university. If there are required courses in public speaking at any American university, I’m unaware of them. Anyone who has overheard American undergraduates attempting to communicate with each other will be aware of this gap in their education.

Improving the ability of students to express themselves in public is an ideal way for the new civics institutes to add value to a university education and to American civic life more generally. Everyone knows that American politicians (unlike British ones) are as a rule incompetent public speakers, unable to express their thoughts or persuade people to accept their policies. They have to rely on the dark arts of political consultants to move even the tiniest percentages of voters into their column. That is one reason why they are not effective leaders.

As the Western tradition understood eloquence—the word comes from the Latin eloquentia, “speaking out”—speaking out meant speaking with courage and conviction. “Free speech” in the premodern tradition was not a right hedged about with legal protections, as it is for us, but the courage to speak truth to power or the integrity to reject bad counsel that might be in one’s personal interest. Free speech, in other words, was a form of moral courage. It was a skill vital to republican government. Without the eloquence to convince our fellow citizens of the right course of action, politicians have to rely on force or fraud to compel assent, as we see demonstrated daily in this election season. When republics cease to rely on rational persuasion, they cease to exist as true republics.

Laments are heard on all sides these days that our politicians lack courage. But as Cicero observed, a man is more likely to speak with courage when he knows how to speak and has confidence in his ability to persuade. In the eyes of humanist educators since Isocrates, the acquisition of eloquence was a moral discipline intended to persuade and forge consensus in states; the man who acquired eloquence had acquired an indispensable tool of political leadership. For traditional humanist educators, the ideal orator would, like Cicero, denounce tyranny and corruption and preserve the republic from its enemies. Moreover, being able to speak your mind with power and beauty makes you fully human and thus able to contribute more excellence (or virtue) to the human community.

As the greatest political philosopher of Renaissance humanism, Francesco Patrizi of Siena—himself a professor of rhetoric—wrote in his treatise on republican government, “Rightly considered, of all the disciplines, none is more appropriate to the state (respublica) than the oratorical discipline.” Now that is a worthy subject for an institute of civic thought and leadership to take under its wing.

The post Teaching Eloquence appeared first on Law & Liberty.

Real Republics and Fake Democracies https://lawliberty.org/real-republics-and-fake-democracies/ Tue, 24 Sep 2024 10:00:00 +0000 https://lawliberty.org/?p=61649 In an essay recently published in The Free Press, the political commentator Martin Gurri made a nicely arch response to the fashionable hand-wringing about supposed threats to “our democracy.” I come with good news. We can’t lose our democracy because we never had one. Our system is called “representative government.” It enjoys brief spasms of […]

In an essay recently published in The Free Press, the political commentator Martin Gurri made a nicely arch response to the fashionable hand-wringing about supposed threats to “our democracy.”

I come with good news. We can’t lose our democracy because we never had one. Our system is called “representative government.” It enjoys brief spasms of democratic involvement—elections, trials by jury—but by and large it glories in being densely and opaquely mediated, and many of its operations are patently undemocratic—appointed judges, for example, or the Electoral College. This is a feature, not a bug, of the system. By making sure the right hand of power seldom knows what the left hand is doing, the Framers sought to prevent various flavors of tyranny—including, in James Madison’s words, “an unjust combination of the majority.”

I suppose it was to avoid the appearance of partisanship that Gurri called our political regime “representative government” rather than using the name the Founders used, that is, a republic. This was no doubt a prudent choice of words on his part. So shallow is knowledge of history among our politicians, journalists, and the political nation in general that most would struggle to describe the difference between a republic and a ham sandwich. Heedless of capitalization, they would inevitably associate it with the name of one of our political parties, whose structure is no more republican than the Democratic Party’s organs are democratic. Or they might think of the Staatsname of other current republics like the Democratic People’s Republic of Korea, or the Islamic Republic of Iran. These associations would also be unenlightening. So “representative government” was no doubt Gurri’s best choice, but it is far from adequate as a description of how the Founders intended the country to govern itself.

What did the term “republic” mean for them? Unlike modern politicos, our Founders were keen students of history. Benjamin Franklin’s Library Company of Philadelphia, founded in 1731, which became effectively the Library of Congress during that assembly’s long residence in the city, was well-stocked with histories. The shelves of John Adams’ library, the largest in colonial America, were also loaded with works of history. His writings, like those of Jefferson and Madison, teem with references to the republics of past times: to the ancient Romans above all, but also to the medieval Italian republics, to the Venetian, Swiss, and Dutch republics, and to the English Commonwealth (the word is just an English translation of the Latin respublica). 

Some of the Founders read Latin, Greek, and French as well as English. They read Thucydides (often in Hobbes’ translation), Livy, Sallust, Cicero, and Tacitus; they read Plutarch’s Lives of the Noble Greeks and Romans in Sir Thomas North’s translation; they read Polybius in the translation of James Hampton (in whose pages they could learn about the federal republics of ancient Greece); they read Edward Wortley Montagu’s Reflections on the Rise and Fall of Ancient Republics; of the Italians, they read Leonardo Bruni’s History of the Florentine People, Guicciardini’s History of Italy, and Machiavelli’s History of Florence; they read John Jacob Mascou’s History of the Ancient Germans; they read David Hume’s six-volume History of England and Obadiah Hulme’s Historical Essay on the English Constitution. As soon as each volume of Edward Gibbon’s Decline and Fall of the Roman Empire left the presses, between 1775 and 1788, copies flew across the Atlantic and were eagerly consumed by Americans. Americans had good reason to be interested in the collapse of states in those years, when the new Confederation in North America was being torn apart by its weak central institutions.

So what understandings of the term “republic” might they have gleaned from their reading? First of all they would be aware, like Gurri, that a republic is not a democracy. (This is not as obvious as it seems: I remember a student—a Harvard history major!—writing on an exam I gave some years ago that “republic is just an old name for democracy.”) The Founders knew what a democracy was and had no interest in giving America a democratic constitution. They knew their history. As John Adams wrote in a letter to John Taylor in 1814, “Democracy never lasts long. It soon wastes, exhausts and murders itself. There never was a democracy yet that did not commit suicide.” The historical experience of classical Athens was taken by nearly all the historians the Founders knew to prove Adams’ assertion. The great political theorists of the fourth century BC—Plato, Aristotle, Isocrates, and Xenophon—had all proposed various fixes for the defects of democracy. The most influential of these was Aristotle’s “mixed” regime, where elements of democracy and oligarchy were balanced against each other to produce stability. Later, Polybius and other writers in the Aristotelian tradition added a monarchical principle for greater stability. Aristotle called his mixed regime politeia.

When his Politics was translated into Latin around 1436–37 by the Florentine historian Leonardo Bruni, politeia became respublica. Bruni’s translation was the most popular Latin version for centuries. The 1597 Geneva edition was in John Adams’ library. (Adams also possessed the 1776 edition of the Politics in the English translation of William Ellis, first printed in 1597, where the constitution named politeia was translated, unhelpfully, as “state.”)

When the Romans conquered the Mediterranean in the second century BC, the historian Polybius explained the growth of their power largely in terms of their (unwritten) constitution, which he recognized as a form of mixed regime. The Romans were proud of their republic even in the dark decades of civil war, blaming Rome’s parlous condition on the moral defects of powerful warlords rather than on any weaknesses in her constitution. According to Cicero, Rome’s basic constitutional principles had been laid down by one of the early kings, Servius Tullius. Servius had established the bedrock principle that political power should be proportionate to a man’s income and his contribution to Rome’s military power. Poorer citizens could participate in assemblies but decision-making power was kept in the hands of the most influential citizens. The censors, a magistracy responsible (among other things) for deciding which citizens could belong to the Senate, judged them fit for membership not only on the basis of their moral rectitude, but also on their income. A man without sufficient income to support himself and his family comfortably without engaging in trade or a paid profession was ineligible. 

Post-classical Athenians, by contrast, continued to call their city-state a democracy even after all the real power came to be exercised behind the scenes by wealthy oligarchs. The great authority on Hellenistic Greece, Peter Green, once wittily remarked that Athenians came to see democracy as a privilege best restricted to the upper classes. Modern parallels spring to mind. The Romans for their part were not in the least embarrassed about the preponderant power of the wealthy in their system. It was a feature, not a bug. But in Rome, the possession of wealth and preponderant power imposed upon the great the responsibility to put themselves and their treasure at the service of the republic. It was assumed that the wealthy would also be the best educated, the most likely to have experience in civil and military affairs, and, as persons of long residence in Rome, the most loyal and public-spirited. 

In the middle republic (third and second centuries BC), the principle of merit was added to the Servian constitution: distinguished service to the state was also to be a source of dignitas or merited status. Thus “new men” like Cicero could be taken into the ruling elite on the basis of outstanding abilities and contributions to the republic’s welfare, the salus reipublicae. To prevent the powerful from oppressing the common people, a new magistracy was invented, the tribunate, consisting of ten tribunes of the plebs. The existence of this magistracy led to the emergence of populist politics at the end of the second century BC, but Rome never became a democracy. Roman populism ultimately brought Julius Caesar and Augustus to power, over the opposition of the Senate. Rome’s populists were almost always led by nobles who were more devoted to acquiring power for themselves than serving the interests of the common people.

Cicero in his dialogue On the Commonwealth (54–51 BC) praised the old republic for favoring the best men or “optimates,” observing “the principle which ought always to be adhered to in the commonwealth, that the greatest number should not have the greatest power” (ne plurimum valeant plurimi). Rome should never be a democracy; that would be too dangerous for ordered liberty, which was guaranteed by law, not popular power. 

In a democracy, Cicero believed, sensible public deliberation was impossible. In one of his speeches, Cicero mocked Greek democracies for their foolish practice of herding large numbers of ordinary citizens into amphitheaters and allowing them to shout at each other. The Romans, more sensibly, conducted deliberation in the Senate, among educated men with experience of government. The Senate proposed legislation and the people in their assemblies had the right to vote on the Senate’s proposals, up or down. This practice, that the wise should deliberate and propose, the people approve, was the normal procedure used by most European republics in the centuries before the founding of our American republic. It was recommended, among others, by James Harrington, a seventeenth-century British authority on republics widely read in America.

By establishing a House of Representatives to conduct its own deliberation and to propose all legislation involving taxation (a principle now apparently forgotten in Washington, DC), the Founders were attempting to rebalance the republican tradition they inherited in a popular direction, so that the interests of the wealthy could never prevail over those of the people. Nevertheless, they continued to uphold the view that the presumably wiser and better-educated men in the Senate—Jefferson’s “natural aristocracy”—should prevail in matters of foreign policy and the oversight of the other branches of government. The aristocratic element was also, originally, meant to prevail in the choice of the president, although the Electoral College was soon corrupted by party politics, at which point it lost its deliberative and most of its decision-making power. 

But were history’s only democracies to be found in classical Greece? No. When Mr. Gibbon’s history began to be read in the early republic, Americans were given access to another concept of democracy, different from that associated with classical Athens, one that might be called honorary democracy or, less politely, fake democracy. This concept might remind us of the way the term is used by certain of our contemporaries. 

Gibbon famously regarded the second century AD, the period between the reigns of the emperors Domitian and Commodus, as “the period in the history of the world, during which the condition of the human race was most happy and prosperous.” The text upon which the great historian hung this judgment (to which, as an upholder of constitutional monarchy, he was predisposed) was an oration entitled An Encomium of Rome (ca. 154–55), written by Aelius Aristides.

Aristides, the most prominent Greek intellectual of his day, heaped praise on Rome as the greatest empire the world had ever known. It managed to combine unquestioned authority regulated by law with a free citizenry, and its government was not handed over to foreign princes but was administered by fair and disinterested citizen-officials, capable of ruling and being ruled in turn, as in the best days of classical Greece. For Aristides, the Roman empire was more like a city-state on a vast scale than a traditional despotism. Yet its courts of appeal, administered by Roman governors, were an improvement on city-state justice, and they treated everyone equally, no matter what their status. This was an achievement unprecedented in human history, and a great proof of Rome’s genius for government. Virtuous rule had made the empire flourish as no human government had ever done before. By widely extending citizen rights, the empire vastly increased the pool of talent upon which it could draw. 

Aristides could think of no greater praise than to say that the Roman empire was like a democracy over all the earth, under a single best magistrate and bringer of cosmic order. The Roman system, he wrote, is the final state of mankind: “no other way of life is left”—it was the end of history, as it were. No longer does any city wish to revolt from it and to rule itself, and the Romans have made even the memory of war fade by doing away permanently with local struggles for preeminence. The whole world has become a garden with gleaming cities enjoying a perpetual festival of blessings from the emperor and the gods.

This, of course, is flattery under a mask of high-flown rhetoric, decorated with concepts from political philosophy. It didn’t matter that Aristides’s use of the word “democracy”—and he was by no means the only imperial subject who used the word in this sense—matched no known democratic regime in history and had nothing to do with political power in the hands of the people. It was just a word that had positive connotations in Greek; it would sound nice to his listeners and flatter them. Aristides was a professional orator-entertainer who went about the Greek world giving speeches before audiences educated to appreciate the fine art of eloquence. In this instance, Aristides was speaking before the imperial court, and he knew what to say to win their approval. 

As far as I know, the founding generation never discussed Aristides’ faux democracy. They were serious men who understood the history of political regimes. But perhaps the time to revive Aristides’ concept has now come. Modern politicos who talk glibly of “our democracy” might be asked to explain to the rest of us just what they think democracy is. Is it just a nice-sounding word used to flatter themselves and their political allies, or do they support putting real power in the hands of popular assemblies on the basis of equality? If neither of those alternatives seems palatable, perhaps they might avail themselves of the correct adjective to describe our constitution: republican. 

Martin Gurri is right: we are not a democracy. We are a republic, and that is no bad thing. Republics come in several flavors: aristocratic, popular, and mixed. Not all of them are militaristic and dominated by warlords and the wealthy, as the late Roman republic was. In late medieval and early modern times, most republics preferred trade and industry to making war. Even so, some of these commercial republics lasted a very long time, like Venice, which endured for 1,100 years, or Lucca, which lasted for almost 650 years. (Both were crushed by Napoleon.) If modern advocates for “our democracy” fancy themselves lovers of the people, they might appreciate the fact that our republic at its founding was already weighted towards the popular more than previous early modern republics had been. The Seventeenth Amendment to the Constitution made it still more so. If we began again to use the correct historical term for our regime, we might be able at least to have an honest discussion about who holds power in the American system, and whether they deserve to do so, instead of playing make-believe with terms that conceal more than they reveal.

The post Real Republics and Fake Democracies appeared first on Law & Liberty.

Can Harvard Win Back America’s Respect? https://lawliberty.org/can-harvard-win-back-americas-respect/ Mon, 03 Jun 2024 10:01:00 +0000 https://lawliberty.org/?p=58542 Harvard has had a very bad year. It began last summer with the Supreme Court’s verdict in Students for Fair Admissions v. Harvard, which declared that the university’s admissions policies were unconstitutionally discriminatory—or in plain terms, racist. Then came October 7, when Hamas unilaterally broke a cease-fire to attack Israel, killing 1,200 and kidnapping some […]

Harvard has had a very bad year. It began last summer with the Supreme Court’s decision in Students for Fair Admissions v. Harvard, which declared that the university’s admissions policies were unconstitutionally discriminatory—or in plain terms, racist. Then came October 7, when Hamas unilaterally broke a cease-fire to attack Israel, killing 1,200 and kidnapping some 250, with many of the horrific atrocities captured on camera. Harvard, along with many elite universities, issued public statements that revealed, to put it delicately, an absence of moral clarity. Then came the disaster of Claudine Gay’s testimony in Congress, followed by the humiliating exposé of her history of plagiarism, followed by her grudging resignation.

More recently we have had the further humiliation of our interim president’s negotiations with the small pro-Palestinian encampments in Harvard Yard. While other college presidents have had the nerve to call in the police and clear out illegal encampments, our president chose a two-state solution and negotiated. He gave relatively little away, but it was enough to reward the protestors for their efforts, guaranteeing more of the same in the future. 

The undergraduate Administrative Board took the bold step of suspending the thirteen seniors involved in the protest pending further review of their cases, which meant they were unable to take their degrees in last week’s graduation ceremonies. However, on Monday of graduation week, a rump meeting of the Faculty of Arts and Sciences (only 15 percent of professors showed up, mostly activists) passed a motion (despite it being out of order) to allow the students to graduate. The faculty was overruled by the Corporation, Harvard’s senior governing board, in a surprising show of good sense. This did not prevent various forms of moral exhibitionism about the sainted Thirteen during the graduation ceremony itself, acerbically described in the conservative student paper, the Salient

This turbulence and humiliation have not played well in the outside world, particularly among Jewish alumni or the 79 percent of Americans (according to a recent Harvard Caps/Harris poll) who support Israel over Hamas. Just this Wednesday, Harvard graduate Senator John Fetterman, in a graduation speech at Yeshiva University, dramatically took off his Harvard hood (he has a degree from Harvard’s Kennedy School of Government), saying it wasn’t right for him to wear a symbol of Harvard given its “inability to stand up for the Jewish community after October 7.”

If the news coming out of Harvard is about its scientific and scholarly achievements and not about its political stances, public attitudes will change.

I am one of those ivory-tower professors you read about (the view from the ivory tower, by the way, is amazing!) and I’ve followed most of these events from afar, via the listserv commentaries of my colleagues on the Council on Academic Freedom at Harvard. CAFH, as it is known for short, is a faculty group founded in 2023. We have discussed over the course of the year various ways the university might act to prevent a further slide into the abyss. In the fall, the discussions were mostly about how to limit or eliminate the influence of the DEI bureaucracy (at Harvard the expression is EDIB: Equity, Diversity, Inclusion, and Belonging), whether and how to prohibit diversity statements, how to stop the silencing of heterodox (i.e. non-woke) opinion, and how to introduce more viewpoint diversity. Much energy was expended on defining the scope and nature of academic freedom (my views are here), and considering what principles the university should declare and how they should be enforced.

This spring a frequent subject of discussion has been whether we should organize a university-wide faculty senate like Berkeley’s to fight back against the unaccountable power of administrators; what limits should be placed on activism; and how the university can recover its proper telos and maintain neutrality on issues of partisan politics. These discussions have borne some fruit. CAFH has some very impressive members, including a former president of Harvard (Larry Summers) and many former deans and department heads, and we are in sympathetic contact with multiple members of the governing boards, the current interim president, and the new provost, whose appointment was one of the clearest signs of Harvard’s intention to reform itself.

Pressure from CAFH, concerned alumni, and some elements within the Harvard administration led Interim President Garber in April to announce the formation of the Institutional Voice Working Group. According to the Harvard Gazette, an official publication (wags call it Harvard’s Pravda), the group was tasked with “the question of whether and when Harvard as a University should speak on matters of social and political significance and who should be authorized to speak for the institution as a whole.” The group issued its report on Tuesday this week, and it was immediately accepted by the administration and endorsed by the Corporation as university policy. It is the clearest sign yet of the university’s intention to take more vigorous damage control measures and perhaps alter the ship’s direction entirely. Whether it will be enough to restore the immense respect Harvard once enjoyed with the public is, however, doubtful. 

The Institutional Voice statement is commendable in certain respects. Its premise is stated in the first sentence, “The purpose of the university is to pursue truth.” The pursuit of truth is the university’s one moral imperative, which it must defend to the general public. The pursuit of truth requires “open inquiry, debate, and the weighing of evidence.” So far so good. The statement shows a firm grasp of the obvious, and the obvious is ordinarily difficult for academics to get their heads around. Derek Bok, a former Harvard president, once wrote that the definition of a professor is “someone who thinks otherwise.” For the eight members of the committee to converge on the obvious is an achievement.

Defending truth means ensuring the conditions of free inquiry and if “outside forces” (read: Governor DeSantis) “seek to determine what students the university can admit, what subjects it can teach, or which research it supports,” the university must defend its autonomy. This principle is an excellent and necessary one for private universities, but less defensible for public ones (as I’ve argued here). 

The statement further argues that when the university makes a habit of issuing official statements that can be interpreted as politically partisan, it undermines its mission and makes those in the community who don’t agree feel alienated, even threatened. It should no longer issue such statements, and any persons who do so in the name of the university should be disavowed. Instead of issuing public statements in support of one group or another (read: Jews or Palestinian sympathizers), it should counsel unhappy students through its “pastoral arms in the different schools and residential houses to support affected community members. It must dedicate resources to training staff most directly in contact with affected community members.” Less official bombast, more therapy.

Overall, the statement is a step in the right direction, but I doubt whether it will do much to change Harvard’s image as a politically partisan institution. Princeton has had for some years a policy of “institutional restraint” on expressions of partisan politics, but that did not stop various entities within the university from speaking in its name to condemn the Supreme Court for overturning Roe v. Wade two years ago. The partisan political atmosphere at Princeton made it impossible for the university to disavow them. Despite the existence of CAFH (which represents less than 5 percent of the professoriate at Harvard), there is little reason to expect that Harvard’s faculty would exercise any more “restraint.” 

In fact, it seems unlikely that either the Harvard faculty or its administration will engage with any project to depoliticize the university. (The number of persons in the Harvard administration has never been publicly acknowledged for obvious reasons, though the well-informed Ira Stoll estimates it at four times the number of faculty.) In part, this is a long-standing structural issue. As Bernard Bailyn explained many years ago in a brilliant and charming piece for Harvard Magazine (“Fixing the Turnips”), American universities, even Harvard, were from the beginning public institutions meant to serve civic purposes. Unlike Oxford and Cambridge, they have always been uncomfortable with the Aristotelian idea that there are some things worth learning for their own sakes, apart from any social benefit they might yield. This attitude often mystified British scholars who came to American universities and observed their highly instrumentalized attitude to learning. Bailyn quotes an article by Isaiah Berlin, who had lectured at Harvard in 1949 and found ludicrous the faculty’s bad conscience—their uneasy sense that their scholarly interests were frivolous in view of the sufferings of mankind.

A student or professor in this condition wonders whether it can be right for him to continue to absorb himself in the study of, let us say, the early Greek epic at Harvard, while the poor of south Boston go hungry and unshod, and negroes are denied fundamental rights. … With society in a state of misery or injustice [the scholar, the aspiring student, feels] his occupation is a luxury which it should not be able to afford; and from this flows the feeling that if only he can devote some—perhaps the greater part—of his time to some activity more obviously useful to society, work for a Government department, or journalism, or administration and organization of some kind, etc., he might still with this pay for the right to pursue his proper subject (now rapidly, in his own eyes, acquiring the status of a private hobby).

Given this history, American universities are always going to have a strong sense of their duty to the outside world. The ideal of institutional neutrality, or of ordering a university’s activities towards a purely academic telos, is ultimately foreign to the American tradition of higher education. Princeton’s motto is perfectly typical in this regard: “Princeton in the nation’s service and the service of humanity.” There is all too slippery a slope between the idea of service to the public and the preaching of one’s partisan political views. As Jonathan Haidt showed years ago in The Righteous Mind, individuals on the left of the political spectrum have difficulty recognizing views other than their own as morally legitimate. (The right does much better in this respect.) Most faculty don’t think of their views as political at all; they think of them as simply moral. So long as most college faculties keep recycling their leftish political monocultures, universities committed to public service are going to sound to the great American public like the research arm of the Democratic party. 

So what is to be done? Many people at Harvard still don’t care very much what persons in the outside world think, but after the experiences of this year, with large fall-offs in alumni giving and in the number of high school students applying for early admission, the more serious people here are ready to act. Given the likely hostility of most faculty and administrators to any project of depoliticization, the best hope of reform will have to come from the top. 

Fortunately, since the time of Charles William Eliot in the nineteenth century, the president of Harvard has wielded considerable institutional power and resources. These could be used to project a more favorable image of the university and win renewed respect. A determined president who resists the temptations of collegiality has the power to transfer resources from, say, the administration (does the university really need sixty Title IX coordinators? Do we really need quite so many vice presidents?) to the teaching staff. He has the power to see that departments hire distinguished faculty of his choosing in fields that are far from politics.

This used to be the job of our university president. I remember hearing Peter Brown, a famous Princeton historian of late antiquity, jokingly complaining that he could not come near Harvard without Derek Bok offering him a job. Derek Bok had a brain trust whose principal role was to search out distinguished faculty in all fields and bring them to Harvard. One opportunity cost of Harvard’s obsession with identity politics in recent years is that the search for excellent faculty has taken second place to hiring faculty with high intersectional scores. My experience of nearly forty years on the Harvard faculty has taught me that a department can always find some highly placed authority who will tell it that the faculty person it wants to hire is brilliant and doing ground-breaking work. Finding true excellence, however—finding the truly exceptional person whose achievements will make the best students want to study at Harvard—is an altogether more difficult task. But it has been done in the past and can be done again.

If a president and a few well-chosen deans know what excellence is, set real standards, and back the best candidates with ample funding, an institutional culture can quickly change. A president of Harvard also has the power to use the university’s extraordinary resources in public relations to foreground the work of its best scientists and scholars. He or she can make sure the world knows the wonderful things that are being done by our faculty and researchers. If the news coming out of Harvard is about its scientific and scholarly achievements and not about its political stances, public attitudes will change. Intemperate persons on the right who want to punish the university will have a harder time doing it if the country is more aware of the good things Harvard has been doing. A president can also, by precept and example, create an ethos among university administrators that public comment on partisan political issues is inappropriate. Such an ethos existed among administrators when I came to Harvard in 1985 and it should be possible to restore it. The university has traditions of science and scholarship unequaled by any university in the world and, under the right leadership, the country will come to value the university’s achievements again, and for the right reasons. 

The post Can Harvard Win Back America’s Respect? appeared first on Law & Liberty.

The World We Have Lost https://lawliberty.org/the-world-we-have-lost/ Fri, 31 May 2024 10:00:00 +0000 https://lawliberty.org/?p=58320

It’s a well-known fact that historians generally don’t like historical fiction. Movies set in past periods of history, “based on real events” or not, generally put our teeth on edge. Such fictions are ordinarily filled with ridiculous anachronisms. The anachronisms are most obvious when mushy modern phrases from our therapeutic culture—urging us to share our feelings or hoping we are comfortable with this or that—are put in the mouths of Roman legionaries or medieval churchmen. The producers of British costume dramas are generally brilliant at providing exact reconstructions of the physical environment and costumes used in, say, Jane Austen’s Bath, but they are less accurate when it comes to reconstructing her lost linguistic and conceptual world. I’m sure it sounds stuffy to non-historians when those of us in the trade get annoyed over anachronisms in historical fiction and films. But there’s a method to our miffedness.

To pursue the discipline of history, we tell our students, requires ceaseless vigilance against anachronism. You can’t tell true stories about the past or describe past times or explain why things happened as they did back then without rigorously excluding the false expectations we bring with us from the present. A classic example is the tendency to forget, when dealing with pre-modern societies, how impoverished, violent, and unhealthy life was for at least 90 percent of the population before the West was enriched by the Industrial Revolution. When I was a graduate student in the 1970s, we were given Peter Laslett’s The World We Have Lost: England Before the Industrial Age to read, which was meant to impress upon us the enormous differences between the way ordinary English families lived in the seventeenth century and the way we live now. Another blast from the past came from the economic historian Carlo Cipolla’s Before the Industrial Revolution: European Society and Economy, which presented an extraordinarily vivid picture of how difficult it was to survive into old age in premodern Europe. A book I assign my own students now that performs a similar function is Patricia Crone’s Pre-industrial Societies: Anatomy of the Premodern World. This brilliant work ranges around the globe and across 5,000 years to illustrate what life was like when most people had to grow their own food (or seize it from others), how premodern governments worked (or didn’t), and the indispensable role of religion in keeping communities from falling apart. One comfort that arises from studying such books is that we historians are better able to discount the alarmists when they announce, on almost a daily basis now, that America—or the West—is on the point of collapse owing to one or another crisis that is upon us. Yes, things are bad, but not nearly as bad as they were for most people for the first 2,500 years of Western history.

Hollywood, however, in recent years seems determined to deprive us of even the minimal historical sensitivity that used to be imbibed from its blockbuster historical epics of the past, like Cecil B. DeMille’s The Ten Commandments or Cleopatra. I recently was compelled by familial force majeure to watch the Netflix television series Vikings: Valhalla. After a while I had to ask myself: Now that nobody is teaching Western civ anymore, is the next generation really going to grow up believing that Viking communities in Denmark were led by black women or that Viking men cooked dinner for their warrior wives in the forests of Scandinavia? This sort of gaslighting is de rigueur in Hollywood these days, and it has been widely reported that a treatment that doesn’t make appropriate offerings to the gods of intersectionality won’t find funding from the major studios. 

So it was positively astonishing, almost the next day after watching the woke Valhalla, to stumble across The Holdovers (on Prime), a film by Alexander Payne released last summer with little fanfare. It’s a gentle human comedy about a teacher, played by Paul Giamatti, who gets stuck over the Christmas holidays on the campus of a fictitious New England boarding school, Barton Academy, supervising a small group of boys who are not able to return home to their families over the two-week break. The year is 1970 and the Vietnam war is still raging. The movie filled me with a bittersweet nostalgia precisely because its makers had taken great pains to avoid anachronisms of any kind, even verbal or conceptual ones, despite the risk of offending woke sensitivities. I was fifteen on the dramatic date of the film, and everything in it rang true to me. I found few anachronisms to complain about, apart from the foul language used by some of the adults, which surely would have been out of character in 1970 for devout Catholic women or Harvard-educated prep school teachers. I suppose Hollywood can no longer imagine what the speech of decent people sounds like. But apart from that, the world of the film impressed me as truly the world we have lost. 

The film is built around a clash of personalities between a teacher, Paul Hunham (Giamatti), and one of his charges, Angus Tully. Hunham is a socially awkward, slightly wall-eyed man who, owing to a congenital digestive ailment, smells faintly of fish. He teaches ancient history and is hated by the students as a ruthless disciplinarian and taskmaster, a reputation he revels in. He is famous for having flunked the son of a senator, whose offer of admission to Princeton was rescinded after he failed Paul’s class. On day six of the Christmas holidays, four of the five holdovers are whisked away in a helicopter owned by the wealthy father of one of the boys. The fifth boy, Angus Tully, a rebellious misfit played by Dominic Sessa, is unable to contact his parents and has to stay behind. This sets up a comedic war between Paul and Angus which drives the rest of the film. Witness to their encounter is a black head cook, Mary Lamb, wonderfully played by Da’Vine Joy Randolph. She is deep in grief following the recent loss of her only son, who had been a scholarship boy at Barton but was drafted and killed in Vietnam. Randolph won the film’s only Oscar, for Best Supporting Actress, for her interpretation of Mary Lamb’s distinctive combination of humor, sympathy, and simple moral dignity.

The coming cultural collapse is only just visible in the little world of Barton Academy in 1970. All the male students and teachers wear jackets and ties, and discipline is still largely intact. There is no grade inflation and students are expected to master difficult historical texts such as Thucydides—the whole book, not just excerpts. No one expresses the slightest doubt that the material is worth learning. No one in authority is embarrassed by the word “Christmas” and no one feels the need to substitute “the holidays.” (In the final assembly of the term, a good-natured quip that there might be some students in the school celebrating a different holiday, meaning Hanukkah, draws a few chuckles.) 

But there are clouds on the horizon. Angus has been abandoned by his mother because she has responded to the expressive individualism of the ’60s and is determined to pursue her own happiness, no matter how much damage she does to her family. The Vietnam draft has opened up a rift between boys who can afford college and win deferments and poorer boys who have to go and risk their lives. In 1970, that rift has only just started to widen into the gap that today separates the privileged from non-elites. 

The counterculture has not yet taken hold in 1970. Among the five student holdovers is a long-haired hippy type, Teddy Kountze (Brady Hepner), also the school’s drug dealer, who makes racist jokes about the one Asian student; this just makes the other boys, who are better bred, uncomfortable. Gentlemanly standards are still enough to squelch his ugly behavior; there is no need for a DEI apparatus to compel ideological conformity. Tellingly, Teddy also displays a certain arriviste snobbery when he objects to Paul’s democratic suggestion that Mary should join the small table of holdovers for dinner. Mary considers it but, noting Teddy’s truculent expression, quietly decides to eat by herself. Hunham then angrily dresses Teddy down for his boorish behavior.

Paul may be a stuffed shirt who attended Harvard, but one of the pleasant memories the film evokes is the relative absence of class consciousness in the post-war era of American life. The school population mostly gets along easily with the townies. When Paul, Angus, Mary, and the janitor Danny get invited to a Christmas Eve party in town, everyone interacts on an equal basis, and the service staff don’t adopt deferential attitudes to their putative social superiors. The film has a number of characters in it who are simply good in the old-fashioned, American way. They know instinctively what is decent behavior, how to put each other at ease, and how to defuse awkward situations. The plot turns on the unexpected performance, in the course of Paul and Angus’ combative relationship, of pure acts of kindness and generosity towards each other, acts which change them both for the better.

The plot of the film is expertly managed so that the broad theme of how institutional discipline is internalized in students gradually narrows to a personal confrontation between Paul and Angus. This leads, in an entirely convincing way, to both characters undermining their own carefully constructed personas and a splendid, unexpected act of moral courage and generosity on Paul’s part at the end of the film. It is hard to imagine a humane relationship of this sort blossoming in a modern prep school, with its strict regimes designed to prevent intimacies of any kind between teachers and students. I am not sure whether it was the intention of Alexander Payne (who was nine years old in 1970) and his colleagues and backers to show us an older America that in many ways is far more admirable than the present. If they did, they were wise not to signal it and risk losing the support of Hollywood’s woke studios. Whatever their intentions, they have given us a richly enjoyable reminder of better times and more humane institutions of learning.

The post The World We Have Lost appeared first on Law & Liberty.

Hope for Harvard? https://lawliberty.org/hope-for-harvard/ Mon, 18 Mar 2024 10:00:00 +0000 https://lawliberty.org/?p=56028

Tacitus at the beginning of his Annals, after brilliantly summarizing all of Roman history in the space of a few paragraphs, ends by providing an answer to a question that must have arisen in the minds of his Roman readers. Why was it that the present generation offered so little resistance to the revolutionary transformation of the republic into a monarchy that Augustus had gradually brought about over the course of three decades? Senators used to stand up for their right to participate in governing the republic; indeed, in the previous century, they had fought a series of civil wars to defend that right. What was different about the present moment? Why did no one care about the end of the republic?

Tacitus answered that Augustus had been clever enough to make sure that the workings of government all looked the same. The senate and the popular assemblies still met and magistrates were elected as usual; the courts still passed judgments as before. Augustus controlled everything himself, of course, behind the scenes, but “the younger men had been born after the victory of Actium; most, even of the elder generation, had been born during the civil wars.” Then comes the famous line—few indeed were left who had seen the republic. Whole generations had come and gone, and those alive now simply had no idea how the old republican system had worked in its heyday. Hence, men accepted their slavery without even realizing they had lost their freedom.

I won’t belabor the obvious parallel to present-day America. Those of us who remember how America and its leading institutions used to be governed, and what they used to stand for, are growing fewer. The younger generation seems indifferent to their loss of liberty, mostly because no one has ever taught them it is a thing to be valued.

The passage from Tacitus occurred to me while reading some of the daily deluge of emails I get as a member of the Council on Academic Freedom at Harvard (CAFH for short). The deluge comes from listserv discussions where we members, if we like, can engage in ongoing, daily debates on various topics and news items connected with academic freedom. The goal of these discussions overall is to keep Harvard an institution that helps its students to live flourishing lives and its researchers to contribute to the common good. We members believe that academic freedom is necessary to achieve this, and the pressures to politicize Harvard coming from various bureaucracies on campus are an obstacle to that goal. CAFH’s listserv discussions over the last six months, to me at least, have been invigorating. They have revealed to me that there is no shortage of brilliant people, even some potential leaders, aboard the Good Ship Harvard who could help it reverse course and return to its historic mission, were it possible to summon up the collective will to do so.

Many Americans who would like to do something about the woke seizure of major cultural institutions in our country have been forced to confront the following basic issue: (1) Is there any hope of de-politicizing captured institutions, extracting the DEI and Title IX and other bugs that are infecting their operating systems, and returning them to their normal activities? A number of universities have now done just that, and the results, I’m sure, will be eagerly watched. (2) Or is it better just to toss the old equipment on the junk heap of woke institutions and build something entirely new, as has been done at the University of Austin in Texas? (3) Or perhaps it would be best to build citadels of academic freedom within politicized institutions, like Stanford’s Hoover Institution, Princeton’s Madison Program, or the University of Florida’s new Hamilton Center? Thanks to the Claudine Gay affair, many people have come to the conclusion that Harvard belongs to the second class of institutions that can’t be saved from woke capture. Are they right? Doesn’t the mere existence of CAFH give a reason not to give up yet on Harvard?

Since its formation last year, CAFH has signed up 180 members, many of them distinguished senior professors, including four university professors (Harvard’s highest academic distinction). According to CAFH’s director, Flynn Cratty, however, 121 of these come from the professional schools: law, business, divinity, the Kennedy School of Government, and the medical school. In fact, a third of the total, 60 members, come from the medical school, dental school, and the School of Public Health. This reflects the little-known fact that the medical school boasts over 12,000 faculty members, about half the active physicians working in the Boston area. Only 59 CAFH members come from the one school that teaches undergraduates, the Faculty of Arts and Sciences (FAS), which has 1,221 faculty members in all. In other words, only 4.8% of Harvard’s undergraduate teachers are worried enough about the university’s direction to participate in the one forum where questions about university policies can be freely debated. There has been a nasty attempt to paint CAFH as “right-wing,” but this is simply untrue: my educated guess is that the vast majority are registered Democrats, like 88% of Harvard faculty (and 27% of Massachusetts voters), or “Unenrolled” like me (and 63% of state voters). I have personal knowledge of only four registered Republicans in CAFH (Republicans amount to only 8.38% of Massachusetts voters).

As I see it, our biggest problem is—you guessed it—few indeed were left who had seen the republic. There aren’t very many younger faculty in CAFH. When trying to recruit more historians for the council from within the History Department, I have gotten more or less the same answer from the younger generation. Their response is, “What’s wrong with the way things are?” Most of my younger colleagues have lived their entire professional lives within DEI and Title IX regimes and have willingly made whatever adjustments were needed to go on with their teaching and to write their books. Going along to get along has worked for them. Whenever ukases have come down from University Hall (the administrative headquarters of FAS) urging us to align our courses and teaching practices to suit radical progressive priorities, few indeed were left who saw any reason to object, especially as the requested alignments were usually accompanied by tasty incentives in the form of grants or time off. In any case, the younger generation, by and large, don’t see anything wrong with having our History offerings interlarded with courses on environmentalism, identity politics, transhumanism, intersectionality, post-colonial theory, and the whole radioactive cargo of the politicized university.

So my answer to the first question above is, no, I don’t see any prospects right now for fundamental reform at Harvard. The only way to steer the ship back into port and keep it from leaking more prestige, public support, alumni loyalty, and financial stability is to appoint a strong young president committed to our traditional purposes. The office of the Harvard president, at least on paper, has enormous powers; and there is no reason why a strong individual with a steady moral compass could not do what Harvard’s great presidents like Charles William Eliot and James Bryant Conant did in the past, when their presidencies helped Harvard achieve its goal of becoming the world’s best university. But we will not find such a leader if the same person who was responsible for the disaster of Claudine Gay remains in charge of choosing her successor. That, unfortunately, seems to be exactly what is fated to happen. The Harvard Corporation has shown itself incapable of getting rid of the person or persons who have been responsible for dragging the university’s name in the mud for the last six months and reducing the value of Harvard degrees. A miracle could happen, but one must expect the Corporation’s current disgraceful behavior to continue. History, I suppose, will judge them. Maybe that will be my next book.

This is a sad conclusion for me to reach. I have taught at Harvard for almost 39 years. For about a decade, I taught the history of Harvard University in an undergraduate tutorial and acquired some sense of the enormous dedication and sacrifices that have gone into making Harvard what it is—or used to be. The extraordinary Harvard endowment didn’t grow into a fund greater than the GNP of the bottom 19 countries in the world combined just by having a few billionaires toss us a bone now and then. It was built by tens of thousands of alumni contributing over the course of a century relatively small sums to support mostly specific goals, many of which are no longer approved of by the woke university. Building Harvard’s marvelous museums and libraries was the work of many generations. When I came to Harvard in 1985, I heard the (to me) astonishing boast that it was possible to learn over 150 languages here if you could locate the persons who knew them, who were usually squirreled away somewhere in the bowels of Widener Library. Now that number is 45, somewhat fewer than are taught at the University of Michigan, and considerably fewer than the 75 taught at Yale. Such, apparently, are the fruits of “multi-culturalism.”

Maintaining and improving the quality of its faculty was once recognized as the most important duty of the senior faculty in the various departments, and the process was carefully overseen by the administration. The strenuous efforts to bring to Cambridge, Massachusetts, the world’s most famous philosophers, economists, historians, scientists, poets, and scholars of every description have now been overlaid by a higher purpose, which is making faculty appointments reflect the proportion in the American population of a few chosen minorities. If you are a young faculty member seeking to ingratiate yourself with the administration today, you won’t score any points by recommending to its attention the world’s greatest expert in Old Church Slavonic or Hittitology unless “they” happen to also score high in intersectionality.

What the university’s top leadership, including its governing boards, has done in the last couple of decades is to betray the trust of those who built Harvard into the great beacon of science and learning it once was. The builders of modern Harvard included the alumni and alumnae who gave of their plenty with intelligence and enthusiasm, confident that the institution shared their values. It was built by the great faculty deans for whom Harvard was once famous, whose own scholarly and scientific accomplishments won the respect of academics around the world. Our current leadership has also betrayed the public trust: the trust of our fellow citizens who have long given us lavish support in the form of grants and tax breaks in the belief that we, as an institution, were contributing to the common good. Half of the country now believes that we no longer do that. It’s a scandal and a shame, but it looks like no real changes are likely to happen at America’s premier woke university.

The post Hope for Harvard? appeared first on Law & Liberty.

An Honest Diversity Statement https://lawliberty.org/an-honest-diversity-statement/ Thu, 18 Jan 2024 11:00:00 +0000 https://lawliberty.org/?p=54158

For a number of years now pleasant young women (or persons identifying as women, or with female-sounding names) have been contacting me from the university’s diversity office, inviting me to attend sessions to discuss our DEI policies. Harvard has to be different, so we use the acronym EDIB, for Equity, Diversity, Inclusion, and Belonging (our previous president Drew Faust, as her contribution to the collective wisdom, added the “Belonging”). These sessions are never described as compulsory, but the pleasant young women don’t take “no” for an answer. In former times, I was able to avoid these sessions by pleading that I had a subsequent engagement. During the pandemic, however, there was no escape. There was no obvious way to evade a Zoom EDIB “training” session that one could take at one’s leisure. So I took the “training.” I was afraid that the interactive videos would demand that I agree with the policies, in which case I would not be able to check the appropriate boxes, with what tedious and time-consuming consequences I knew not. But fortunately, that didn’t happen. Professors, then, were still exempt from taking loyalty oaths.

Recently, however, volcanic activity has broken out further down Olympus, reportedly arising from the graduate students, who want to step up the pressure on us. They ask why, if they have to write diversity statements, shouldn’t we senior professors be subjected to the same requirements? (Let’s smoke out those white supremacists!) In past times I would have been confident that Olympus could easily withstand attack from any and all inferior cults. Now I’m not so sure. I thought maybe I should get a statement ready, just in case.

I consulted AI, asking it to compose a 500-word statement that would, following the usual format, explain “my thinking about diversity, equity, and inclusion; the efforts I have taken to promote these values; and the steps I hope to take in the future to spread them.” Under the first rubric, AI offered the following:

In the ever-evolving landscape of education, I firmly believe that the cornerstones of excellence are diversity, equity, and inclusion. These principles are not just theoretical concepts but essential ingredients in the creation of a vibrant and effective learning environment. As an educator in [Your College], I am committed to fostering these values in my teaching, research, and service.

Well, that was easy (it took less than 30 seconds to generate the required 500 words), but the suggested language had some drawbacks. It would be embarrassing to put my name to such drivel, but more to the point, I didn’t believe a word of it—at least if the usual meanings were attached to the cult terms “diversity, equity, and inclusion.” I would just have to write my own statement.

So here goes.

Dear Members of Harvard’s Faceless Bureaucracy:

You ask me to explain my thinking about DEI. The fact is that I don’t think about it (or them?) at all if I can help it. Sherlock Holmes once told Watson that he couldn’t be bothered to know about Copernicus’ theory of heliocentrism because it took up valuable space in his brain which he needed for his work as a detective. “But the Solar System!” Watson protested. “What the deuce is it to me?” Holmes interrupted impatiently. “You say that we go round the sun. If we went round the moon it would not make a pennyworth of difference to me or to my work.” I’m a working historian and don’t want to waste my brain space on inessentials.

Since, however, you require me, as a condition of further employment, to state my attitude to these “values” that the university is said to share (though I don’t remember a faculty vote endorsing them), let me say that, in general, the statement of EDIB beliefs offered on your website is too vapid to offer any purchase for serious ethical analysis. The university, according to you, espouses an absolute commitment to a set of words that seems to generate positive feelings in your office, and perhaps among administrators generally, but it is not my practice to make judgments based on feelings. In fact, my training as a historian leads me to distrust such feelings as a potential obstacle to clear thinking. I don’t think it’s useful to describe the feelings I experience when particular words and slogans are invoked and how they affect my professional motivations. It might be useful on a psychoanalyst’s couch or in a religious cult, but not in a university.

Let me take, as an example, the popular DEI slogan “Diversity is our strength.” This states as an absolute truth a belief that, at best, can only be conditional. When George Washington decided not to require, as part of the military oath of the Continental Army, a disavowal of transubstantiation (as had been previous practice), he was able to enlist Catholic soldiers from Maryland to fight the British. Diversity was our strength. On the other hand, when the combined forces of Islam, under the command of Maslama ibn Abd al-Malik, besieged Constantinople in 717, diversity was not their strength. At the crisis of the siege, the Christian sailors rowing in the Muslim navy rose in revolt and the amphibious assault broke down.

Since most societies have usually been at war or under the threat of war for most of history, public sentiment has ordinarily preferred unity to diversity. Prudent and humane governments have usually tolerated a degree of pluralism in order to reduce social discord, but pluralism as such has not been celebrated as a positive feature of society until quite recently. In fact, diversity is a luxury good that can be enjoyed only in secure, peaceful societies. Even in such societies, it has to be weighed against other goods (like meritocracy) that will have to be sacrificed if it is pursued as an absolute good. An indiscriminate commitment to “diversity,” bereft of any loyalty to unifying principles, is the mark of a weak or collapsing society.

It’s not just governments and armies that prefer unity to diversity. Most religions in the last millennium have placed a premium on preserving the original vision of their founders. They have had to resist pressures to undermine (or diversify) that vision and conform to the values of the world around them. They have had to fight against spiritual entrepreneurs, whom they disobligingly label heretics, who have been eager to diversify their doctrines. For those religions, which include orthodox Christianity, Islam, and Buddhism, diversity has not only not been a strength, it has been dangerous, even damnable. When religions cease to care about their unifying beliefs, they cease to exist.

On the other hand, when one of Alexander the Great’s generals, King Ptolemy I, took over control of Egypt in the late fourth century BC, he decided not to repeat the mistake the Persians had made when they pillaged traditional Egyptian temples, alienating the locals. Instead, Ptolemy lavishly promoted a new syncretic deity, Serapis, who could be worshipped both by the Greek conquest elite and by its Egyptian subjects. Diversity was their strength.

All this should be blindingly obvious to anyone with a cursory knowledge of the past. It may be less obvious why Equity is not a value that all can willingly embrace. The word has a legitimate meaning in Roman law, referring to the need to correct strict justice in light of a wider sense of fairness. Summum ius, summa iniuria. The law cannot be strictly applied in cases where a greater injury might result.

This is not, however, the way your office likes to understand the term Equity. In EDIB-speak, it means “equality of outcomes.” Any policies that produce unequal outcomes—for example, an admissions policy that produces a student body that does not mirror the exact proportions of some (not all) minorities in the country—lack Equity. In this sense, an absolute commitment to Equity can’t help but undermine the university’s commitment to its primary purpose, which is the pursuit of truth. In Latin, that’s veritas, the motto on the Harvard coat of arms that adorns your wall. Living up to that motto is no easy matter. We’re not talking here about telling the truth or being sincere. At a research university, we are in the business of finding out new truths. That can be anything from discovering new galaxies to digging up the remains of hitherto unknown civilizations. The number of people in the world who are really capable of expanding the body of known truths is quite small. I’ve been on many search committees at Harvard in the last 38 years and can vouch for just how small the number is of truly exceptional candidates. If a research university really wants the best, if it really wants to discover new truths, it can’t allow non-expert administrators to overrule search committees and throw out candidates just because they don’t help the EDIB office reach its diversity targets.

Inclusion and belonging (I’m not clear on the difference) are ideals I can get behind so long as they apply to everybody, even to people we don’t agree with. Many people who have come to this country in the last four hundred years came precisely because in America they could escape racist or class prejudice and be treated as equals. It might take a while, but they or their children would eventually fit in. In the meantime, they could start a business, practice their religion, and educate their children without anyone requiring them to hold particular political beliefs. I think our university should imitate America’s best traditions in this respect and make everybody welcome too. But we fail when we impose smelly little orthodoxies on our students—in the form, for example, of diversity statements that call for a certain kind of response.

I realize I am not giving you the kind of statement you wished to get from me, and that I have not even answered all your queries about how I expect to implement EDIB values in my future teaching and research. But I think you can read between the lines.

The post An Honest Diversity Statement appeared first on Law & Liberty.
