The Pacers’ Model

Win or lose, the 2024-25 Indiana Pacers are something special. After an undistinguished start to the season, they posted the fourth-best record in the National Basketball Association from the start of the calendar year, going 34-14 over that stretch. In the playoffs, after dispatching underdog Milwaukee, they beat the odds and defeated heavily favored Cleveland and New York. As of this writing, they have taken two games from the Oklahoma City Thunder, one of the most heavily favored championship contenders in decades. And in so doing, they have exemplified key features of classical liberalism.

In the best sense of the word, they are an individualistic team. That doesn’t mean that every player is thinking about himself, but rather that everyone on the roster is a unique threat, and it is common for a different player to shine in each game. The players themselves bring distinctive sets of abilities to the court, and the Pacers thrive not by making each player fit into a mold but by allowing each to remain true to his identity. The team is one of the most hyperkinetic in the league, which means that each player on the floor needs to work hard the whole time. And the coaching staff, led by Rick Carlisle, encourages each player to do what he judges best in the moment, while owning his choices.

The Pacers embody many of the principles of one of the greatest coaches in any sport, fellow Hoosier John Wooden, the first person to be inducted into the basketball Hall of Fame as both a player and a coach. Wooden’s championship teams varied dramatically from one another, as did their winning strategies. His first championship team was also his shortest, with no player taller than 6 feet 5 inches. Yet they went 30-0 because each player knew how to “play tall,” intimidating opponents with a full-court press. But Wooden could also win with teams built around big men such as Lew Alcindor.

To watch the Pacers play is to behold freedom and autonomy in action. The players are unpredictable, creative, and always improvising. Instead of implementing a top-down plan, they focus on adapting moment to moment to opposing players. This often renders the game exhilarating, the ball moving up and down and around the court at breakneck speed. Offense and defense meld into one another with nearly unprecedented fluidity. Whether the players get the ball on a rebound, turnover, or score, it often reaches the opposite end of the court in just seconds.

Wooden trusted his players to exercise their judgment in the moment. Never was he seen diagramming a play on the sidelines. Winner of 10 men’s national collegiate championships, twice as many as his closest competitor, he delighted in challenging his much younger, fitter players to beat him in a race down the court. When they took off sprinting, Wooden would throw the ball to the opposite end, a lesson the Pacers have learned well. Likewise, when it comes to improvisation, Wooden often said that he hoped to be as surprised by what his players did on the court as the opposing coach was.

The Pacers also embody equality of opportunity. It is not uncommon for more than five players to finish the game having scored in double digits, and their approach is balanced in an unusual way. They are not afraid to let a player or two shine, but the player in the limelight typically changes from game to game. There is no star that the rest of the team consistently relies on to pull them through. Each player is free to work his magic at both ends of the court, and on defense, stymying opponents’ attempts to drive, stealing the ball, and blocking shots are as highly prized as a hot hand.

During the regular season, seven Pacers averaged at least 10 points per game, and in the playoffs, the difficulty they present to opposing teams has been likened to a game of Whac-a-Mole. If the defense tries to key on one player, they seamlessly shift the ball to others, and when things are going well, their play resembles poetry in motion. They can drive the lane for a layup and then suddenly rocket the ball to a player on the perimeter, turning two points into three. A player who has an off night one game can come back and score 30 or more points the next.

Again, there are echoes of Wooden. In postgame press conferences, Wooden typically devoted considerably more attention to the squad’s unsung heroes than to superstars such as Bill Walton. Most spectators saw only what happened during game time, but Wooden took into account the many hours of conditioning, drills, and scrimmages that went into preparing each player to perform at his best. Wooden knew that his players had been prepared to excel, and only someone actually on the court could adapt quickly enough to the continuously evolving flow of the game.

In essence, the Pacers feel like more of a team than most NBA teams, precisely because they function more collaboratively. They embody one of Wooden’s favorite sayings, that the player who makes the team great is better than a great player. The Pacers often play 10 or 12 players per game, and their bench usually contributes far more points than their opponents’ bench. Players come and go, but the rapid flow of the Pacers’ game continues unabated, literally taking the breath away from many opposing players, who often begin to look winded well before the final buzzer sounds.

The Pacers also persevere, embodying the Churchillian directive, “Never give in. Never. Never. Never. Never.” Their tenacious defense forces opponents out of their comfort zone, and their rapid ball and player movement makes it difficult for opponents to keep track of whom they are supposed to be guarding. Their strong, Wooden-esque work ethic shines through as they apply full-court pressure throughout a game, ensuring that nothing comes easy for the opposing team. They stay with it right up to the end, and like Wooden’s teams, they often win in the last minutes or even seconds, after trailing most of the game.

In an Eastern Conference quarterfinals game, they rallied from a 7-point deficit in overtime to defeat the Bucks, when their odds of winning were approximately 2 percent. In game 2 of the semifinals, they staged a comeback and took the lead with only seconds to go. In an Eastern Conference finals game, they overcame a 14-point deficit to win in overtime, and would have won in regulation had a player’s foot not been slightly over the three-point line. And in game 1 versus the Thunder, they rallied from a 15-point deficit to win with just 0.3 seconds on the clock. They don’t give in.

Wooden loved to say that failure is not fatal, but the failure to change might be. The Pacers are continuously changing, like a successful entrepreneur adapting to changing market conditions, except at a much faster rate. They are like the Greek mythological figure of Proteus, shifting their shape in ways that make it impossible for opponents to get a grip. The main ingredient of stardom, Wooden said, is the rest of the team, a truth that the Pacers embody to as great an extent as any team in history. They are—as any lover of freedom and responsibility cannot fail to see—liberty in motion.

The Perils of Generational Thinking

I recently attended a conference with several hundred educators. The keynote speaker was a “generationist.” Her talk overflowed with graphs, tables, and charts highlighting generational characteristics, including a host of stereotypes intended, it seemed, to amuse the audience. For example, if you want to communicate with a Boomer, schedule an in-person conversation. If you want to communicate with a Gen-X’er, use email. If you want to reach a Millennial, send a text. And if you want to communicate with a Gen-Z’er, share a TikTok video. When the session was over, I overheard conversations between several groups of participants, each of which converged on a single theme: “Wow, that was depressing!” Generationism suggests that we are what we are largely because of a factor over which we have no control—the year of our birth.

Prejudices are pervasive. Communists believe as a matter of doctrine that it is appropriate to prejudge persons by class, sexists according to whether they are male or female, and racists according to such traits as skin color and facial features. “Generationism” is a newer form of prejudice that seems to have become remarkably acceptable in public discourse. It holds, much like astrology, that persons can be prejudged according to their birth cohort. There is nothing inherently objectionable about the notion that people might be shaped by their times, but to assume that any member of such a cohort will conform to a stereotype is merely to lapse into another form of bias.

Before critiquing generationism, it is important to survey the categories into which it presumes to divide people. Among these are “Silents,” those born between 1928 and 1945; “Boomers,” 1946 and 1964; “Generation X,” 1965 and 1980; “Millennials,” 1981 and 1996; “Generation Z,” 1997 and 2012; and “Alphas,” 2013 and 2029. “Generationists” believe that the members of these different cohorts differ from one another in predictable ways that should inform what we expect and how we interact with them. Such differences are presumed to be rooted in the changing historical circumstances of their formative years.

Having been shaped by the Great Depression and World War II, the members of the Silent Generation are said to be thrifty, respectful, and loyal. As a result of the civil rights movement and the Vietnam War, Boomers are competitive, hardworking, and team-oriented. Having been shaped by increased maternal participation in the workforce and lack of adult supervision, Gen Xers are informal, skeptical, and independent. The Millennials, having experienced the birth of the personal computer and the rise of environmentalism, are digitally proficient and care more about experiences than possessions. Gen Z, having been shaped by the 9/11 attacks and the birth of smartphones, is socially aware, keen on activism, and prizes diversity and inclusion. Alphas, having experienced artificial intelligence and the COVID pandemic, are globally conscious, socially responsible, and concerned with sustainability.

Seemingly harmless jokes about the generations abound. Why did the Boomer have a no-coins policy in his store? He couldn’t tolerate change. Why is the age of 30 so significant for the members of Gen X? Because they were 30 at 10 and remain 30 at 50. What does a Millennial get for doing nothing? A trophy. Yet sheer repetition can make such stereotypes seem truer than they really are. For example, numerous educational experts have proposed that learners in Generation Z require short, highly visual content to thrive. Yet I just completed an undergraduate course composed entirely of Gen Z students in which we enjoyed lively discussions of Tolstoy’s War and Peace. What would have been lost had we heeded the advice of generationists?

There are powerful reasons to question such generalizations. For one thing, the many millions of persons born within each cohort exhibit differences among themselves that are at least as great as the supposed differences between generations. Two Boomers may have grown up under radically different circumstances: one in a well-off, highly educated, intact family that provided every advantage in life, the other in poverty among poorly educated people in a one-parent or no-parent household. Although the two may belong to the same birth cohort and may even have been born on the very same day, there is a good chance that they will differ sharply from one another in a variety of ways. Cheech Marin, Marie Osmond, Donald Trump, Oprah Winfrey, and John Roberts are all Boomers, but how similar are they, really?

The categories themselves often prove shifting and fuzzy. For example, one generationist says that Gen Z begins in 1995 and ends in 2010, while another says it begins in 1997 and ends in 2012. Almost all refer to such divisions in approximate terms, as in “born roughly between 1995 and 2010.” And some even posit that there are overlaps between different generations that result in so-called “microgenerations” such as Xennials, born between 1977 and 1983. How long before proponents begin recognizing nanogenerations, such as those born during the presidential election campaign of 2016 or the lockdown phase of the COVID pandemic? All such divisions of time—generations, decades, years, and even hours—are somewhat arbitrary and, when used to categorize people, often obscure at least as much as they reveal.

To be sure, year of birth tells us some things: would a person benefit more from a pacifier, a tricycle, a driver’s permit, a parenting class, a silver wedding anniversary card, or retirement counseling? Such determinations are appropriately informed by the age of the person in question, and placing others in generational categories may, in some cases, serve as a useful mental shortcut. Yet to assume that we can make inferences about personality or character based on birth cohort is quite a stretch. In fact, people often share more characteristics with different generations of their own family than with members of their graduation class.

Like communism, sexism, and racism, generationism fosters a number of bad habits. Those who operate with such poorly grounded, inaccurate, and imprecise distinctions may find themselves easier prey to other superficial overgeneralizations. Simply put, sloppiness of thought anywhere threatens clarity of thought everywhere. I teach undergraduate, graduate, and health professions students each year at a large public university, as well as seniors in retirement and assisted living communities—groups made up of Boomers and Gen Xers. I find that the members of both groups take to reading and discussing books like ducks to water.

Generationism is a form of what the sociologist Robert Merton referred to as a “self-fulfilling prophecy.” We assume that people born at different times must differ from one another. Then we set about seeking to identify the stereotypical characteristics of each such group. We take a continuous variable, year of birth, and attempt to impose discontinuities upon it, even though firm boundaries are virtually impossible to establish. We take people from very different geographic, educational, and economic circumstances and suppose that, due to one shared variable, they can be grouped together, eliding a host of notable differences. “All of you”—insert Boomers, Gen-Xers, or Millennials, and so on—“are alike. You have no loyalty. You don’t understand the value of a good day’s work. You spend your whole day just staring at a screen.” Repeating such epithets indeed helps to reinforce the stereotypes generationists purport to observe, but there is no sound reason to do so.

There is a different way, as outlined in perhaps the greatest study of human character ever composed, Aristotle’s Nicomachean Ethics. In it, Aristotle suggests that one of the key factors shaping each person’s character—including the degree to which they are virtuous or vicious—is the choices they have made. If we do what we can to ensure that our children, students, and neighbors have good conversations, take responsibility for their actions, aim to contribute to the lives of others, and yes, read good books, and if we take care as well to do so ourselves, the results are likely to be favorable. We become what we habitually do, think, feel, and attend to, and by developing better habits, we become better versions of ourselves. By assigning personal attributes to birth cohort, however, generationism tends to undermine personal responsibility.

Sweet Melodies of the Catacombs

The “city in speech” constructed in Plato’s Republic is characterized in part by the censorship of music. Music, Socrates persuades his interlocutors, can powerfully stir the passions and thereby exert a profound effect on each citizen’s character. “Musical innovation,” he explains, “is full of danger to the state, for when modes of music change, the fundamental laws of the state change with them.”

Officials of the Soviet Union reached a similar conclusion, wielding a pitiless cultural eraser to protect their interests. Yet the power of every state is subject to limits, and the story of Soviet-era “bone music” provides inspiring testimony to the human spirit’s courage and ingenuity in resisting such oppression.

Soviet Censorship

In 1953, subscribers to the Great Soviet Encyclopedia received a replacement page, one of many examples of Soviet attempts to rewrite history to suit the ruling Communist party’s interests. The page in question extended the article on idealist philosopher George Berkeley, after whom Berkeley, California, is named. The page it replaced contained an article on Lavrentiy Beria, one of Stalin’s longest-serving secret police chiefs. After a successful coup led by rival Nikita Khrushchev that same year, Beria was arrested, tried as a “traitor and capitalist agent,” and executed, the historical record of his existence having become a matter of embarrassment to those in power.

It is hard for the inhabitants of a free nation such as the United States, with its First Amendment protections for free speech, to appreciate the pervasiveness of state censorship within the Soviet Union. Accounts of such varying events as the starvation of Moscow’s population during the October Revolution, defeats of the Red Army, the civility and generosity of Westerners, and the advanced state of technology and high Western living standards were all rigorously suppressed. Likewise, photos were doctored to remove purged persons, films were edited to promote Soviet ideals, and newspapers and broadcast media were all subject to strict state control.

Penalties for defying censors were often severe. Until 1929, artistic expression enjoyed a relatively high degree of freedom, but with the rise of Stalin, a doctrine of “socialist realism” arose, which sought to extol communist values such as the emancipation of the proletariat while vigorously repressing opposing views. The music of Shostakovich, Prokofiev, and Stravinsky was denounced and prohibited, and writers such as Osip Mandelstam and Isaac Babel were imprisoned, ultimately executed, or allowed to die of starvation. The Russian physician-novelist Mikhail Bulgakov dared not publish his masterpiece, The Master and Margarita, during his life, and it was only 25 years after his death that a censored version first appeared.

Inevitably, efforts to circumvent the censors, undertaken at great personal risk, sprang up. One of the best-known examples is samizdat, the clandestine self-publishing of banned texts, typically retyped by hand with carbon paper. The word samizdat is a portmanteau composed of sam, meaning self, and izdat, for publishing. The 1957 novel Doctor Zhivago by Boris Pasternak, who received the Nobel Prize in Literature the following year, was rejected by Soviet censors and circulated domestically only in samizdat. The same was true of the poetry of Joseph Brodsky, who was charged with “social parasitism” but eventually received the Nobel Prize in 1987. The dissident Vladimir Bukovsky wrote, “I write it myself, I publish it myself, I distribute it myself, and I sit in jail for myself.”

Bone Music

One of the most intriguing means of thwarting the censors was known as roentgenizdat, sometimes referred to as “bone music.” “Roentgen” was Wilhelm Röntgen, the German physicist who received the first Nobel Prize in Physics for the 1895 discovery of x-rays. Medical x-ray film represented a relatively inexpensive and widely available medium onto which such audio recordings could be etched, enabling the production of homemade phonograph records. Three basic ingredients were required: the original audio of a live performance, a recording lathe, and a piece of x-ray film, onto which a circle could be traced using a compass, with a hole cut in the middle. Running at 78 rpm, most such discs could hold three to four minutes of material, enough to capture many of the most popular songs of the day.

Living as we do in a time and place where essentially all music is immediately and freely available, it is hard to believe the lengths to which Soviet audiophiles would go to obtain a recording. Musical genres such as jazz and rock ’n’ roll were strictly prohibited, and such Western music was regarded by censors as something approaching a public health threat. Music seemed to them a sort of virus that could transmit decadent capitalist values. On the opposite side were the Voice of America and Radio Free Europe, which broadcast American music into the Soviet Union. In response, Soviet officials erected radio jamming stations.

It is difficult to overstate the outcry of Soviet officials and their propagandists against musical genres such as jazz, rumba, and rock ’n’ roll. One Soviet censor’s 1960 article, excerpted by Stephen Coates in his book, Bone Music, offers a caustic critique of a roentgenizdat producer of bootleg recordings, Rudy Fuchs:

In his authoritative opinion, our young people need new spiritual food. No, he’s not afraid to leave a mark. This is what his album, a kind of rock ’n’ roll gospel, is all about. There’s a naked woman on the title page in a flesh pink undershirt. There are black barefoot prints on her back. The album contains clips from newspapers and magazines about the fans of a dance that combines epileptic jerking with jiu-jitsu techniques. Fuchs, savoring it, re-reads stories about how English Teddy Boys uprooted the seating and ecstatically threw beer bottles during a movie about rock ’n’ roll. He is delighted to know that in one of the concert halls of Holland, demon-possessed dancers smashed gilded furniture into pieces, and how in Oslo, fans broke shop windows and threw stones at passing buses—all to the rhythms of rock ’n’ roll.

Such fears were not entirely out of bounds, at least concerning the potential for such music to exert far-reaching cultural effects. For example, Coates recounts the extraordinary influence of the Beatles on Soviet culture. Although John, Paul, George, and Ringo came along at the end of bone music’s heyday, it was inevitable that the most successful group in the history of recorded music would achieve worldwide influence. They even released their own Beach Boys-inspired parody of Soviet life, 1968’s “Back in the USSR.” The group:

had a transformational effect on music and counterculture in the West, but it didn’t compare with the scale of their influence on the eastern bloc. [One authority] claims they had more impact on undermining the Soviet system than all the Cold War cultural strategies and propaganda broadcasts combined. As far as young people were concerned, the wonderful power of their melodies, their lyrics, their madcap energy and their mystique revealed the claims of authorities about life in the west to be a lie. The hunger for their music … was a tidal wave of desire that could not be stopped, particularly when many of the intelligentsia, and even apparatchiks, became fans too. This music was a connection to a better world … while making them feel “strangers in their own land.”

Some bootlegging audiophiles, like Fuchs, went to prison—in his case, for three years. But most were just members of an underground culture, who bought, sold, and shared their skeletal discs, often gathering at clandestine “music and coffee parties.” Today, the revival of analog vinyl records is being paralleled by a resurgence of interest in this largely unknown form of music, partly fueled by amazement at its ingenuity and technical details, but even more by an appreciation for the enduring human passions for music and freedom of expression.

The Future of Bone Music

In the 1960s, magnetic reel-to-reel tape recordings, known as magnitizdat, began to replace x-ray film. Tape recorders required less ingenuity to operate than the recording lathes used to produce roentgenizdat, while also offering the advantage of being able to record more readily from broadcasts and live performances. Thanks to this increased ease of production, the number of illicit recordings on tape vastly outnumbered those on x-ray film, and the widespread availability of tape recorders meant that citizens no longer had to make purchases from street vendors or in back alleys; they could produce recordings independently. Anyone’s apartment could become a recording studio or audio reproduction facility.

In one sense, bone music has no future. Such technology is completely obsolete. But in another sense, it must not be allowed to fade from memory. Instead, it should be kept alive in the hearts of as many freedom lovers as possible. For to hold a “rib record” in one’s hand is a remarkable experience. On a piece of x-ray film depicting a skull fracture, a bullet wound, or cracked ribs—the sorts of injuries a totalitarian state might inflict on its citizens—is recorded music that liberates and inspires. In pointing out the subversive effects of music, the ironic Socrates was not defending totalitarianism but seeking to rouse in his young conversation partners a deeper appreciation for the power of the arts to engage and draw out what is best in human beings. At every turn, he is hoping that they will wake up and pose objections to his “city in speech,” which, in its monomaniacal dedication to a version of justice, represents a tyranny under which no good person would choose to live.

Words are powerful. Images are powerful. Music is powerful. When such power is wielded by the rulers of a state in defense of their own power, arts that should enrich and elevate the character of citizens, families, and communities begin to suppress and distort them. As Tocqueville and others have long pointed out, the greatest threat to our democracy is not foreign conquest but internal complacency, a failure on the part of citizens to jealously guard civil rights such as free speech. Keeping alive the memory of bone music and similar acts of rebellion against state censorship, even though they arose far from free societies, is a vital mission of every citizen. Should we come to take for granted what we must in fact earn anew with each generation, we leave the gates that shield us from tyranny unattended.

The Bias in Health Science

A health food store owner is cryogenically frozen and revived two centuries later. To his surprise, he learns that the steak, cream pies, and hot fudge he once avoided as unhealthy have turned out to be anything but. The film, of course, is Woody Allen’s 1973 “Sleeper,” and his cinematic send-up highlights a serious problem that has long haunted biomedical science and especially nutrition research—namely, that a surprising number of conclusions are based on very thin evidence, and many are not only unreliable but flat-out false.

Some of these problems were highlighted in a recent Law & Liberty essay by Theodore Dalrymple entitled “The Fraudulent Laboratory.” Scientific dishonesty does indeed pose a real threat to the credibility of research, but fraud represents only the tip of the iceberg. Fraud and dishonesty imply an intent to deceive, but the rabbit hole of unreliability in research goes far deeper still, to the point that many findings are false despite no deliberate deception on the part of their authors. Many studies are contaminated by biases of design and analysis of which the investigators themselves are unaware.

Consider heart disease. In the 1960s, experts began recommending that Americans cut back on dietary saturated fats and cholesterol, which they identified as the principal culprits behind heart disease, the nation’s number one killer. However, reducing fats and cholesterol did not improve the situation. In fact, there is little robust evidence that low-fat diets improve health, and after the US bought into this approach, it began developing an obesity epidemic, with about 40 percent of adult Americans now qualifying as obese.

Fifty years ago, it may have made sense to scientists who found cholesterol-containing plaques choking coronary arteries to place the blame for heart disease on excessive dietary cholesterol consumption. But this reasoning is simplistic in the extreme. For one thing, Ludwig Feuerbach’s dictum that we are what we eat is wrong. We do not, for example, become more bovine when we consume beef. Nor does consuming a low-fat diet appear to lower heart disease risk. In fact, the far greater threat appears to arise from simple sugars, which one nutrition researcher labelled “pure, white, and deadly.”

In this case, while there was no attempt to falsify data, the intent to obfuscate seems to have played an important role. An influential 1967 New England Journal of Medicine review article found that fat and cholesterol were the principal dietary culprits in coronary artery disease. Only much later did it become clear that this research had been funded by the Sugar Research Foundation, whose primary aim seems to have been to deflect blame from sucrose. If peer reviewers and readers had known the funding source, they might have subjected the report to greater scrutiny.

Yet the deeper problem is not so much a deliberate attempt to mislead but the less-than-robust methods underlying nearly all nutrition research. John Ioannidis, MD DSc, a highly regarded researcher at Stanford University who leveled early criticisms at ventures as diverse as the now-defunct Silicon Valley darling Theranos and widespread COVID lockdowns, has helped to explain why it is difficult to base nutritional recommendations on truly rigorous research. Perhaps his best-known article, published in 2005, is “Why Most Published Research Findings Are False.” Summarizing his conclusions, he writes:

A research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias.
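
Ioannidis’ central claim here is as much arithmetic as sociology of science. The following back-of-the-envelope calculation, a minimal sketch in Python, shows how a field can end up with most of its “significant” findings being false; the prior, the statistical power, and the significance threshold are illustrative assumptions, not figures from his paper:

    # Back-of-the-envelope illustration of Ioannidis' argument.
    # All three numbers below are illustrative assumptions.
    prior_true = 0.10   # share of tested hypotheses that are actually true
    power = 0.40        # chance an underpowered study detects a true effect
    alpha = 0.05        # conventional false-positive rate

    true_positives = prior_true * power          # 0.04 of all studies
    false_positives = (1 - prior_true) * alpha   # 0.045 of all studies

    ppv = true_positives / (true_positives + false_positives)
    print(f"Share of significant findings that are true: {ppv:.0%}")  # ~47%

Under these assumptions, fewer than half of statistically significant results reflect real effects, and adding bias in design or analysis pushes that share lower still.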

Notwithstanding decades of study, there is still a great deal that we think we know but do not. One difficulty arises from the fact that so many nutritional studies lack randomization, which means that any results are subject to confounding factors. Differences in outcome between two groups that we attribute to diet may in fact be explained by something else—factors such as smoking, alcohol consumption, or exercise. Not surprisingly, people who make an effort to consume only healthy foods may be taking care of themselves in other ways that get credited to diet.

This problem stems from the fact that most nutritional studies are observational, not experimental. They rely on self-reported diet and health outcomes, inferring, often without warrant, that the latter can be attributed to the former. The popular media are part of the problem in the sense that they are happy to report such results, even though their scientific basis is weak. So long as observational studies continue to draw such widespread attention, randomized experimental trials are likely to languish.

Another problem is even more fundamental—the difficulty of assessing diet in any rigorous way. Many studies are too dependent on recall, asking people what they have been eating over a long period of time, which is subject to many sorts of bias. When questioned by a researcher, many of us may tend to unconsciously downplay our dietary indiscretions and overplay the soundness of our nutritional choices. This applies not only to what we eat but how much we eat of many different types of food.

Still another problem concerns the tendency to focus excessively on single nutrients, such as protein, vitamin D-containing foods, or cruciferous vegetables. Ioannidis suggests that the overall role of any single food type or nutrient in accounting for human health is relatively small. It is likely that a person’s overall diet exerts far greater influence, yet researchers often persist in focusing on individual constituents. In many cases, the “noise” from other factors likely overwhelms the “signal” of the dietary ingredient of interest.

Ioannidis likens the ever-burgeoning number of poorly designed, unreliable nutritional studies to a pandemic. Instead of lots of little studies that attempt to answer a panoply of questions, fewer well-targeted studies are needed. While this would diminish the number of nutritional studies and the researchers producing them, it would also likely reduce the overall costs of nutritional research and provide far more reliable conclusions. Nutritional advice should be based on robust science, not competing opinions.

Of course, nutrition is not the only area in which findings are questionable or worse. Ioannidis argues that similar problems bedevil other fields such as neuroscience and oncology. Summarizing the full extent of the problem, he has described what he calls the “medical misinformation mess”:

First, much published medical research is not reliable or is of uncertain reliability, offers no benefit to patients, or is not useful to decision makers. Second, most healthcare professionals are not aware of this problem. Third, they also lack the skills necessary to evaluate the reliability and usefulness of medical evidence. Finally, patients and families frequently lack relevant, accurate medical evidence and skilled guidance at the time of medical decision-making.

Perhaps the most fundamental problem centers on the rationale underlying such studies. In many cases, the intent is not to elucidate the truth but to advance an agenda—for example, to boost profits for a pharmaceutical or medical device company or to advance a researcher’s career. Ioannidis characterizes many successful researchers as “managers absorbing more money.” If a scientifically valid discovery or innovation generates revenue, so much the better, but revenue should not be permitted to bend science.

Scientific findings are not necessarily true simply because they are backed by a large data set, have been subjected to complex statistical analysis, or have been published in a peer-reviewed journal. Like all human endeavors, science is subject to bias, and this very liability constitutes a blind spot for many people, both outside and inside the scientific community. We all need to recall that science, at its core, is not a body of irrefutable received facts, but one means among others by which we pursue knowledge. And because it is a human endeavor, it is inevitably subject to human bias.

Nutrition research funded by the sugar industry warrants the same scrutiny as studies of the health effects of cigarette smoking underwritten by the tobacco industry. With large sums of money on the line, different and more self-serving questions can be asked, research methods can be tweaked, analyses can be skewed, and results can, where unfavorable, be suppressed or spun in directions deemed to be more advantageous to the funder. One thing is certain—that low-fat yogurt is not so healthful as we have long been led to suppose, especially if it is loaded with sugar.

Liberty in Translation

The year 2025 marks the 500th anniversary of perhaps the most momentous literary event in the history of the rule of law and the promotion of personal liberty. It was in the year 1525 that William Tyndale, working in the German city of Cologne, began publishing portions of his English translation of the New Testament. When we think of the origins of modern democracy, we often think of theorists such as Locke and Montesquieu and Founders such as Jefferson and Madison, but it was Tyndale’s contributions that may have done the most for self-rule, while also shaping the English language more profoundly than even William Shakespeare.

Tyndale was born in England around 1494 and studied at Oxford University, becoming fluent in eight languages, including Greek and Hebrew, over the course of his life. After additional study at Cambridge, he took a position as a tutor to a wealthy family’s offspring, a period during which he began to quarrel with members of the religious establishment. Told by one clergyman that people would be better off without God’s laws than the Pope’s, Tyndale responded, “If God spares my life, ere many years, I will cause the plowboy to know more of the scriptures than you do.”

The English reformer was powerfully influenced by his German contemporary Martin Luther, who promoted what came to be called “the priesthood of all believers.” Luther argued that each person can and should encounter both God and the scriptures for himself, declaring for example that “if a group of pious laymen were taken captive and set down in a wilderness, and had among them no consecrated priest, but agreed among themselves to choose one with the office of baptizing, saying the mass, and preaching, such a man would be as truly a priest as though all bishops and popes had consecrated him.”

Such views were not favorably received by the authorities of Tyndale’s day. He sought patronage for an English translation of the Bible but was uniformly rebuffed. In England at the time, the Bible was available only in Latin, which most people could neither read nor understand. More than a century earlier, John Wycliffe and his followers had produced an inferior Bible translation from Latin, resulting in their persecution and even execution. As Tyndale’s views became more widely known, he found it necessary to depart England, eventually ending up in what is now Germany.

Beginning to publish his English New Testament in Cologne, he was betrayed to the authorities, barely escaping to Worms, where Luther had recently been condemned as a “notorious heretic.” There, in 1526, Tyndale published his complete New Testament—of which only three copies are known to have survived. He translated from Erasmus’ Greek text, and when, over the succeeding decade, he turned to the Old Testament, he used the Hebrew text, producing English versions of the Pentateuch and a number of historical books, but he did not live to complete the project.

Working next in Antwerp, Tyndale was betrayed by a fellow Oxford man who handed him over to agents of the Holy Roman Empire. He was imprisoned in the dungeon of a castle near Brussels, where he remained for 16 months. During this time, attempts were made to convince him to renounce the convictions that led him to translate the Bible into the vernacular, including the notion that each person should encounter and interpret scripture for himself, but Tyndale refused. In 1536, he was tied to a stake, strangled, and then burnt, but not before crying out in a loud voice, “Lord, open the King of England’s eyes!”

Ironically, only a few years elapsed before multiple English translations of the Bible had been produced. When less than a century later the King James Version of the Bible, often regarded as one of the most important works in English literature, was published in 1611, it was presented as the work of a group of 47 scholars. Yet more recent scholarly comparisons of Tyndale’s translation and the King James Version reveal that between 80 and 90 percent of the latter is either identical to or only slightly modified from Tyndale’s work. Hence Tyndale is widely heralded as the “father of the English Bible.”

Through his translation, Tyndale introduced many new words into the English language. These include atonement, Passover, and scapegoat. Among the phrases that Tyndale coined are the following: “It came to pass,” “Let there be light,” and “the powers that be.” Tyndale’s word choices sometimes reflect his opposition to the existing religious authorities of his day. For example, he translated ecclesia not as church but as congregation, emphasizing the people over the institution, and he translated presbyter as elder instead of priest, emphasizing wisdom and experience over clerical office.

Tyndale’s New Testaments had to be smuggled into England, often hidden in other books and bales of clothes. It was a small volume, intended to be carried unobtrusively in the pocket of a coat or even sewn into a garment. In October of 1526, the Bishop of London staged a burning of several thousand copies at St. Paul’s Cathedral, which is now the site of one of the three remaining volumes. Tyndale regarded such policies as attempts to “keep the world in darkness,” so that authorities might “sit on the consciences of the people and exalt their own honor above God himself.”

Tyndale’s translation and the theology behind it had a profound effect on the rule of law and the promotion of liberty. Tyndale evinced a deep respect for the dignity and judgment of ordinary people, we who would one day become the citizens of nations such as the United States. Not only could we be trusted to interpret scripture for ourselves, he argued, but we had a religious duty to do so. The word of God itself took precedence over the pronouncements and policies of prelates, a viewpoint that undermines arbitrary authority and lays the groundwork for the primacy of written constitutions.

What mattered most were not arcane and esoteric doctrines but words that could be heard and remembered by even illiterate people. Tyndale’s efforts to sift and purify the text in this way are manifest in the remarkable simplicity of his language, which contains a striking proportion of short sentences and monosyllabic words. The Bible was not meant, he held, to reside in some remote and ornate sanctum sanctorum but in the homes and hands and on the lips of every person, just as schoolchildren today might memorize the Declaration of Independence or the Gettysburg Address.

Like Luther’s German version, Tyndale’s English translation sowed the seeds of literacy throughout the land. If what matters most is each person’s relationship to God, and if that relationship is grounded in scripture, then each person needs direct access to the Bible. Initially, for most, this meant having the text read to them in their own tongue, but it rapidly progressed in less than a century to a society that produced both a Shakespeare and a thriving market for his works. Words define us and give our lives meaning, and no one did more than Tyndale to provide the words to English speakers.

Judaism, Christianity, and Islam are often called religions of the book, and Tyndale aspired to make the Bible comprehensible, memorable, and formative for every person. No one, in his view, was unworthy to know it. In fostering this conviction in the hearts of the people, Tyndale was in effect spreading seeds of revolution and preparing the populace for democratic self-rule. We the people could interpret scripture and discuss and even argue about it with one another, helping to establish lively and sometimes contentious conversations that continue to the present day.

Of course, making the Bible accessible to all comes with certain risks and costs, as well. Telling ordinary people that they are capable of self-government can lead to uprisings and even revolutions, as it did in the German states of Luther’s day and more recently in the American Revolution. Moreover, some might say that the absence of a single creed inevitably leads to theological anarchy, although the interchange of different points of view may also foster insight and creativity. Finally, once people have tasted freedom, they may become more difficult to control, yet this is exactly what lovers of liberty hope for.

Once we the people were deemed fit enough to bear the Bible, we naturally saw ourselves as fit for the ballot box, as well. Tyndale preached a message of freedom and responsibility, the absolute bedrock of self-governance. We the people became knowers and judges in our own right—not sheep to be led about as some despot deems advantageous but possessed of certain unalienable rights. Rather than cede our thinking to someone else, we can and should think for ourselves, establishing an enduring respect for words and ideas vital to the rule of law, as well as the private conscience essential to liberty.

Dickens at Delphi

In ancient Greece, those who sought counsel from the Oracle at Delphi passed under an arch that bore the inscription, “Know thyself.” Presumably, those who did not know themselves would be ill-equipped to hear the truth. Yet unanswered questions reverberate down through the ages: what form does such self-knowledge take, how are we to gain it, and what difference will it make if we do?

Knowing yourself could mean many things. Knowing your strengths and your limits. Knowing that you are both mortal and sinful. Knowing that you are an embodied creature with a soul. Knowing that you are not an island. Or knowing that, in knowing yourself, you glimpse reality itself, the universe.

How are we to gain this self-knowledge? From navel-gazing, or engaging in rigorous self-analysis and self-criticism? Perhaps a sage, counselor, or therapist could help? Or should we turn to philosophical treatises, sacred texts, or novels?

Finally, what difference will self-knowledge make to our lives? Will it show us how insignificant our lives really are, or awaken us to latent significances that we had previously overlooked? Will it lead us to become more self-centered, or inspire us to live for purposes beyond self? Will it lead us to despair or render us more likely to enjoy life?

At this time of year, seekers after self-knowledge would be well-advised to turn to one of literature’s great texts on the matter, Charles Dickens’ 1843 classic, A Christmas Carol, which tells the tale of one Ebenezer Scrooge’s reluctant journey toward self-knowledge.

I am currently working through the text with a group of students in a senior living community whose average age is north of 80 years. If sheer quantity of life experience increases prospects for self-knowledge, then this group should have it in abundance. As Solon warns us to call no man happy before he is dead, so this group, perhaps closest to death, has known life in full, and sallies forth upon Dickens’ pilgrimages of self-discovery with an urgency unknown to my 20-something university students.

When we first meet Scrooge, whose surname is likely a portmanteau of screw and gouge, he is described as a “squeezing, wrenching, grasping, scraping, clutching, covetous old sinner.” He has no place in his heart to make merry at Christmastime with his nephew, no donation for charity workers who come in search of assistance for the poor, and no pity for his woefully underpaid clerk, Bob Cratchit, and his underfed family.

Scrooge looks upon others with disdain, as little more than opportunities for self-enrichment, maliciously delighting in the fact that a sucker is born every minute. He “edges his way along the crowded paths of life, warning all human sympathy to keep its distance.” Like Rembrandt’s portrait of the rich fool, he sits alone at his desk at midnight, counting his coins, supposing that by amassing more he is somehow securing a good life.

In fact, however, his life of unremitting extraction has left him utterly alone. His fortress has become his prison, and double locking the door behind himself has only cut him off even further. When the ghost of his deceased business partner, Jacob Marley, appears, the spirit laments bitterly how such a life left him “captive, bound, and double-ironed.” Scrooge cannot believe it. “But you were always a good man of business, Jacob.”

“Business!” thunders the ghost. “Mankind was my business. The common welfare was my business; charity, mercy, forbearance, and benevolence, were all my business. The dealings of my trade were but a drop of water in the compressive ocean of my business.” Yet Scrooge is somehow unmoved. His only hope of seeing himself and the life he has led for what they really are, Marley tells him, is to be visited by three ghosts.

The first, the Ghost of Christmas Past, shows him scenes from his early life: a lonely boy abandoned at boarding school for the holidays; an evening of great merrymaking with Fezziwig, his former employer; falling in love and winning a young woman’s heart, but eventually being rejected because an “idol has displaced me,” that idol being of course Scrooge’s professional ambition. Some years later, Scrooge is shown the holiday jubilance his former fiancée, Belle, shares with her husband and children.

Scrooge’s frozen heart is somewhat thawed by the sight of his lonely childhood self, and the reminder of his affection for Fezziwig, a much kinder and more benevolent employer than Scrooge himself became. Yet seeing Belle tell his younger self that he will soon forget her “as an unprofitable dream, from which it happened well that you awoke,” is more than he can bear. He accuses the spirit of torture.

Yet we know from Marley’s ghost that this is all for Scrooge’s welfare, his “reclamation.” He must see his life not from the perspective of the day planner, the quarterly statement, or even the annual report, but from the vantage of his life writ large. To appreciate the change that has come over him, and how far he has strayed from his initial aspirations, he must see it all, from the very beginning. The Ghost of Christmas Past prepares Scrooge for more complete self-knowledge by showing him that he has lost his appreciation for good and beautiful things that his younger self was able to relish.

The next spirit, the Ghost of Christmas Present, shows Scrooge how Christmas is being celebrated all over the world, even in a lonely lighthouse and a ship at sea. They visit the Cratchit family, where a spare but joyous feast is in preparation, and Scrooge glimpses for the first time his clerk’s ailing son, Tiny Tim, who will soon die if the course of events is not somehow changed.

At the end of their journey, Scrooge sees something strange protruding from the spirit’s robes. They are a boy and a girl, “yellow, meagre, ragged, scowling.” Assuring Scrooge that these are “his children,” the spirit tells him, “This boy is Ignorance and this girl is Want.” Scrooge is shocked and dumbfounded. “Have they no refuge or resource?” he stammers. Echoing Scrooge’s own words to charity workers who had asked him for a donation the previous day, the ghost counters, “Are there no prisons? Are there no workhouses?”

Up to this point, Scrooge has regarded the poor as an abstraction. They are people whose income or net worth places them below a certain threshold, and thus must consign themselves to the stations to which society sentences them. They fit a statistical category and should meet with their Malthusian statistical fate.

In Ignorance and Want, the spirit shows him his own moral failings personified by the sorts of people he once regarded as beneath his concern. By flinging his own callous words back in his face, the spirit helps him to understand that his belligerence toward the worthy poor (such as the Cratchits) and his rudeness to philanthropists (such as the ones who approached him the previous day) are all defense mechanisms, which he uses to hide from the awful reality of his own moral impoverishment.

Scrooge is then visited by the mute Ghost of Christmas Yet to Come, who shows him some disreputable people haggling over the meagre possessions of a man who died rich but unloved. Alarmed by the lack of feeling for the deceased, Scrooge asks to see someone who truly cares for the dead, and he is taken to the Cratchit family mourning the death of Tiny Tim. Then he visits a neglected grave bearing the inscription, “Ebenezer Scrooge.”

To understand the life he has been leading, Scrooge needs to see firsthand where his life trajectory leads. In contrast to Tiny Tim, deeply loved even in death, he will die alone, unmourned, a veritable laughingstock among those who care for nothing but the earthly goods he left behind. At last, in the cemetery, he is prepared for the full weight of self-knowledge. He is the failure, not the poor man on the street. He is the one whose life has, through his own choice, been emptied of everything that really matters. 

He pleads with the ghost, “Men’s courses will foreshadow certain ends, to which, if persevered in, they must lead. But if the courses be departed from, the ends will change. Say it is thus with what you show me!”

On that note, Scrooge awakens. It’s Christmas morning, and he is not dead. In a euphoria of gratitude, he resolves, 

I will live in the Past, the Present, and the Future! The Spirits of all three shall strive within me. Oh Jacob Marley! Heaven and the Christmas Time be praised for this! I say it on my knees, old Jacob, on my knees.

Scrooge, who has not laughed for many years, finds himself laughing at a world so transformed, now brimming with possibilities for giving and making merry. 

Scrooge sees a boy out his window and sends him to buy the prize turkey—twice the size of Tiny Tim—for the Cratchit family. As he walks the streets, his smile is so incandescent that many passersby wish him a merry Christmas. He encounters one of the charity workers and makes a donation so munificent that the man does not know what to say. He visits his nephew’s home, where he is so full of cheer that he feels at home in five minutes.

The next day, Scrooge catches Bob Cratchit arriving at the office a few minutes late, and feigns anger before suddenly raising Bob’s salary, a move so unexpected that the poor man cannot believe it. From that point forward, Scrooge is a changed man. To Tiny Tim, he becomes a second father. And afterward, it is always said of him, “That he knew how to keep Christmas well, if any man alive possessed the knowledge.”

Self-knowledge freed Scrooge from the prison of his spiritually impoverished existence. But keeping Christmas well takes him beyond mere self-knowledge: it requires the knowledge that the universe revolves around no one human being, that every person is part of a larger whole, and that we acquit ourselves well only by serving that whole of which we are but a part. Instead of living only for himself and his money, which is to say dying a little more every day, Scrooge hears, for the first time in a long time, a call to live for others.

I see it in the gleaming eyes of the seniors—the call to live in the past, present, and future, rejoicing in the rediscovery of what it really means to live, yet yearning to have glimpsed this secret sooner along life’s path. May each of us heed the lessons of the spirits!

The post Dickens at Delphi appeared first on Law & Liberty.

The Work Cure https://lawliberty.org/the-work-cure/ Fri, 11 Oct 2024 10:00:00 +0000 https://lawliberty.org/?p=62027

The view that work represents an affliction or even a curse stretches far back in our cultural history. In the Book of Genesis, when the first humans are expelled from the Garden of Eden, the woman is told that her labor in childbearing will be accompanied by suffering, and the man learns that the ground is cursed because of him, and that only through painful toil will he eat of it. In the very next book, Exodus, the Egyptians force the Israelites into “hard service in mortar and bricks and in every kind of field labor.” A casual reader of core Western texts might easily suppose that freedom from work is a blessing devoutly to be wished.

Yet there are good reasons to doubt this assessment. For one thing, contemporary demography provides evidence that worklessness, and especially the loss of the desire to work, represents a grave affliction besetting millions of American males. First published in 2016 and updated in 2022 during the COVID pandemic, Nicholas Eberstadt’s Men Without Work reveals that, despite 11 million open jobs, 1 in 6 men between the ages of 25 and 55 is not earning wages because he has chosen not to work. Because such men are not seeking jobs, they do not appear in federal unemployment statistics, which count only active job-seekers.

If work is a curse, this group should be faring well, but there is ample evidence that worklessness is associated with a variety of adverse consequences. These include conditions that predispose to poor health, such as high blood pressure, as well as diseases such as heart attack, stroke, and arthritis. In addition, worklessness takes a substantial toll on mental health and well-being, as manifested by higher rates of depression, anxiety, and suicide. It is no accident that the opioid epidemic hit hardest in the Rust Belt, where large numbers of jobs had recently been lost. Yet the suspicion that work promotes well-being is not new.

One of our nation’s founders, Dr. Benjamin Rush, was also one of the foremost proponents of work as a means of therapy for mental illness. A remarkable polymath, Rush was the youngest graduate in the history of Princeton University, one of the Revolution’s earliest and most ardent proponents, a signer of the Declaration of Independence (at age 30), and Surgeon General of the Continental Army. After the Revolution, he became a leading abolitionist, a champion of public education and the education of women, a reformer of mental hospitals and prisons, and a founder of American psychiatry.

In 1812, the year before his death, Rush performed one of his most valuable services by orchestrating a reconciliation between the nation’s then-estranged second and third presidents, John Adams and Thomas Jefferson. That same year, he published his treatise, Medical Inquiries and Observations upon the Diseases of the Mind. In it, Rush acknowledges that physical exercise alone is healthful, likely because it improves the flow of blood to the brain, but he goes on to argue that work offers several additional benefits, which he traces to the fact that it restores habits of action that are “regular and natural.” He writes:

It has been remarked that the maniacs of the male sex in all hospitals who assist in cutting wood, making fires, and digging in a garden, and the females who are employed in washing, ironing, and scrubbing floors, often recover, while persons whose rank exempts them from performing such services languish away their lives within the walls of the hospital.

Rush recounts the case of an English gentleman who came under his care soon after arriving in the United States. He prescribed the usual medicines, which relieved some of the man’s symptoms but did not cure him. Then the man went to live with family in Maryland, where he was lured into helping with the harvest, taking up a rake and helping to make hay. Writes Rush: “He worked for some time, and brought on thereby a profuse sweat, which soon carried off his disease.” My medical training and experience cast some doubt on the value of a good sweat, but I find the salutary effects of work considerably more compelling.

Rush himself suggests as much by distinguishing between two types of work, bodily exertion and “exercise and diversion of the mind.” He argues that both kinds of work can replace the “antic gestures, listless attitudes, and vociferous or muttering conversations” that characterize those suffering from mental illnesses with “habits of rational industry.” Moreover, such work can supply needed income to family or friends and help to support institutions that provide care. Instead of pitying or fearing the mentally ill, others learn to look upon them with approbation. Once the chains of mental illness had been cast off, a person could then begin to develop the civic virtues so necessary to democracy.

Rush’s theory connecting work and well-being is borne out by contemporary research conducted by Gallup. In global surveys of what people want most, the single most common reply is good work or a good job. Some reasons are obvious: those who lack work are likely to be suffering financially. Yet work also provides opportunities for physical activity, for interacting with other people, and for building relationships. The best work provides a sense of worth and purpose, enabling us to feel that we are making a difference in the lives of others and in our communities.

Of course, not all work is good work, and the recent surge in discussions about workplace burnout, quiet quitting, and even workplace violence can be traced to poor work conditions. As the industrial psychologist Frederick Herzberg recounts in the most requested article in the history of the Harvard Business Review, “One More Time: How Do You Motivate Employees?,” factors such as a neglected workplace, poor supervision, unfair treatment, and inadequate compensation can precipitate severe dissatisfaction, eroding both fulfillment and quality.

By contrast, if we want to do good work, we need to give ourselves good work to do. Good work brings us together, challenges us, and permits us to keep growing and developing. It also instills self-respect: we are recognized for the good work we do, treated as responsible and trustworthy, given a role in decision-making, and enabled to make meaningful contributions to the lives of others. Those who feel unfairly underpaid will be dissatisfied, but creating opportunities to do good work well is the only way to promote genuine fulfillment.

Of course, good work need not result in the payment of wages. Among the many examples of unpaid good work are building and enjoying a good marriage, bearing and raising children, maintaining a household, coaching youth sports teams and leading scouting groups, teaching Sunday school, and aiding neighbors or the community by doing yard work, running errands, or organizing a block party. The adage that no dying person ever expresses regret for spending too little time at the office represents an implicit critique of wage-earning work, but not of work itself.

Good work not only enhances the quality of our lives at work but also helps to instill habits of character that are definitive of virtue and essential to self-government. It teaches habits of dependability, dedication, and resilience. It fosters self-regulation and rewards such desirable traits as creativity and a willingness to take risks. When we feel entitled to what we have produced, we become better guardians of private property, and by extension, our own rights and the rights of others. In these respects, good work constitutes an essential ingredient in the recipe for the flourishing of freedom and responsibility.

Few Americans understood this better than Rush, who, like the other great Benjamin of Philadelphia, Franklin, found the greatest fulfillment in his acts of public service. Rush knew that the United States’ victory in the Revolutionary War did not end but rather began the true work of all Americans:

On the contrary, nothing but the first act of the drama is closed. It remains yet to establish and perfect our new forms of government and to prepare the principles, morals, and manners of our citizens for these forms of government after they are established and brought to perfection.

The opportunity to undertake such work represents not a curse but one of the greatest blessings ever bestowed upon a people.

In helping young men prepare for work, find jobs, and grow and develop through business, trades, and professions, we serve not only their interests but those of society as a whole. We need to recognize that idleness, not work, is the true curse, and that good work affords us one of our best opportunities to become better versions of ourselves and make meaningful contributions to the lives of others. To be sure, we need housing, food, transportation, and the other things wages can buy, but we long even more deeply for challenges and the opportunities that good jobs provide to make a difference. We become our best by giving our best, and work has a vital role to play in drawing out the best in each of us.

The post The Work Cure appeared first on Law & Liberty.

A Plea for Forgiveness https://lawliberty.org/a-plea-for-forgiveness/ Thu, 14 Mar 2024 09:59:00 +0000 https://lawliberty.org/?p=56160

At dark times in American political life, the art of forgiveness has unexpectedly shone through. This year marks the fiftieth anniversary of US President Gerald Ford’s unconditional pardon of his predecessor Richard Nixon, who was likely to face criminal charges such as conspiracy and obstruction of justice for his role in the Watergate affair. Many Americans felt betrayed by Nixon and wanted to see him prosecuted for his actions. Once the pardon was announced, critics labeled it a “corrupt bargain.” Ford vehemently denied that there had been any quid pro quo, but that did not prevent his own press secretary from resigning in protest. Many observers blame the pardon for dooming Ford’s 1976 presidential campaign.

A little more than a month later, when Ford testified before the House Committee on the Judiciary, he explained his decision. For one thing, he sought to change the national focus from “a fallen president to the pursuit of the urgent needs of a rising nation.” We would, he said, be diverted from meeting these challenges by “remaining sharply divided over whether to indict, bring to trial, and punish a former president.” He quoted Alexander Hamilton, who wrote that a well-timed offer of pardon could “restore the tranquility of the commonwealth.” In his televised broadcast to the nation at the time of the pardon, he also expressed compassion for the tragedy of the Nixon family, which he felt only he could end.

There is good reason to think that our contemporary civic and political life needs to rediscover the art of forgiveness. We live in an age of cancel culture, in which differences of opinion often quickly swell into shunning and boycotting. Social media has magnified the ghettoization and weaponization of public discourse, amplifying the phenomena of echo chambers and cyberbullying—especially, of all places, on college campuses. Even Pope Francis has weighed in, condemning a culture of “ideological colonization” that “leaves no room for freedom of expression.” In too many cases, critics of political candidates, such as Biden and Trump, seem not merely to disagree with them but also to demonize and hate them.

Of course, rancor and vilification are not new to American life. In his 1796 Farewell Address, US President George Washington counseled against a similar spirit by a different name, writing:

Let me now warn you in the most solemn manner against the baneful effects of the spirit of party. It serves always to distract the public councils and enfeeble the public administration. It agitates the community with ill-founded jealousies and false alarms, kindles the animosity of one party against another, foments occasionally riot and insurrection. A fire not to be quenched, it demands a uniform vigilance to prevent its bursting into a flame, lest, instead of warming, it should consume.

If we have come to prefer the circuses of shouting matches and character assassination to reasoned dialogue and deliberation, democracy’s goose may well be cooked.

How might we defuse the situation? I believe that part of the answer lies in fostering forgiveness, a habit of the heart that has found practical expression in American public and political life on many occasions.

Perhaps the most vitriolic presidential campaign in US history took place in 1800 between two former allies who had become bitter political enemies. Supporters of incumbent John Adams labeled Thomas Jefferson “a mean-spirited, low-lived fellow, the son of a half-breed Indian squaw, sired by a mulatto father,” accusing him of fathering multiple children by one of his slaves. Jefferson’s supporters called Adams “a hideous hermaphroditical character, which has neither the force and firmness of a man, nor the gentleness and sensibility of a woman.” Yet, thanks to the peacemaking efforts of Benjamin Rush, the two founding fathers eventually reconciled. They died on the same day, July 4, 1826, Adams’ last words being something to the effect of, “Thank God, Jefferson still lives.”

Perhaps the greatest expression of forgiveness in US history is found in Abraham Lincoln’s 1865 “Second Inaugural Address.” Anticipating the end of what remains by far the nation’s bloodiest war, Lincoln called for both sides to collaborate in knitting the riven nation back together again:

With malice toward none; with charity for all; with firmness in the right, as God gives us to see the right, let us strive on to finish the work we are in; to bind up the nation’s wounds; to care for him who shall have borne the battle, and for his widow and his orphan—to do all which may achieve and cherish a just, and lasting peace, among ourselves, and with all nations.

Lincoln’s pleas for forgiveness extended beyond his opponents in the Civil War. In his 1863 “Proclamation for a National Day of Fasting,” he wrote, “It behooves us to humble ourselves before the offended Power, to confess our national sins, and to pray for clemency and forgiveness.” We need, in other words, not only to forgive but to seek forgiveness. Lincoln’s humility is almost diametrically opposed to the hubris of many self-righteous contemporaries, who exhibit little appreciation of the need for rigorous self-examination and self-criticism. They have failed to heed the Biblical warning not to focus so intently on the splinter in our neighbor’s eye that we fail to notice the log in our own.

As the old saying goes, holding a grudge is like drinking poison and waiting for the other person to die. Yet before we can forgive someone, we must first recognize, listen to, and gain some understanding of them. Instead of reviling them, we need to connect with them, to realize that we share many of the same fears and hopes in life. In this sense, forgiveness is not so much an occasional attitude as an enduring disposition—a tendency toward curiosity and, yes, compassion toward others. It transforms personal relationships and public discourse from a zero-sum game, in which one person’s victory requires another’s loss, into one from which both parties can emerge enriched.

To cultivate forgiveness, we need to start early. When I was a fifth grader at Ralph Waldo Emerson Public School 58 in Indianapolis, a select group of students whose parents had granted permission would rise one morning each week and walk in a group to a nearby church, where we participated in Weekday Religious Education. Among other Christian teachings, we learned about the value of forgiveness. Such programs have long been subject to close scrutiny: the Supreme Court’s 1948 ruling in McCollum v. Board of Education found them an unconstitutional violation of the separation of church and state, although a later decision, Zorach v. Clauson (1952), permitted them if not held on school grounds.

If we aspire to be a society that spurns rancor and vilification, we must do more to foster understanding and practice of forgiveness. And these efforts must be grounded not in legislation or civil legal action, but in the education of character. Weekday religious education may or may not be the solution, but at a time when Americans have become more isolated and less engaged in religious and civic life, the need for voluntary associations to step forward is acute. Democracy can only survive where the people are well prepared to govern themselves, and in a culture where shouting heads too often serve as models of public discourse, each of us can lead by practicing and promoting the art of forgiveness.

The post A Plea for Forgiveness appeared first on Law & Liberty.

The Unintended Consequences of Immortality https://lawliberty.org/the-unintended-consequences-of-immortality/ Fri, 02 Feb 2024 11:00:00 +0000 https://lawliberty.org/?p=54289

The great increase in human life expectancy during the twentieth century is often cited as proof that humanity’s lot is rapidly improving. On average, an infant born in 1900 could expect to live to about 32 years, while by 2000, life expectancy had reached 65 years. By contrast, alarm bells sounded when American life expectancy declined in the first years of this decade by 2.5 years, primarily due to Covid and opioid overdoses. Some scientists and entrepreneurs suggest that the human lifespan might be dramatically extended beyond its apparent limit of about 120 years. But before we place too many of our eggs in this basket, we would be well-advised to think more deeply about the allure of longevity.

Our attitudes toward length of life reflect profound assumptions about human nature and what makes for a good life. For example, is life necessarily improved by its lengthening? Are there other goods in human existence that need to be balanced against duration? Might there be other features of life for which a good and wise person would trade some portion of longevity? To gain insight into such questions, there are few better intellectual resources than one of the great explorations of human nature and human good, Jonathan Swift’s Gulliver’s Travels, which contains perhaps our most memorable literary depiction of deathless human life.

In parts I, II, and IV of the book, Gulliver finds himself among little people over whom he towers as a giant, then a homunculus among giants, and finally a humanoid yahoo amid a race of utterly rational and virtuous equine creatures. Book III represents a very different voyage, in that Gulliver ventures through a number of different lands, all of which caricature the ambitions and pretensions of modern science. For example, the Laputans are superb abstract reasoners utterly divorced from the practical aspects of life, the Balnibarbians are ruined by living under scientific tyranny, and the royal academy’s denizens, the projectors, devote themselves to absurd ventures such as extracting sunbeams from cucumbers.

It is on this third voyage that for the first time Gulliver encounters a vision that dissuades him from attempting to return home to his wife and family. Specifically, he learns of people known as “struldbruggs,” who are rumored to be immortal, a prospect that fills him with “inexpressible delight.” He writes,

Happy nation, where every child hath at least a chance for being immortal! Happy people who enjoy so many living examples of ancient virtue, and have masters ready to instruct them in the wisdom of all former ages. But happiest beyond all comparison are those excellent struldbruggs, who being born exempt from that universal calamity of human nature, have their minds free and disengaged, without the weight and depression of spirits caused by the continual apprehension of death.

Suddenly relieved of all desire to return to those who make him a husband and father, Gulliver resolves to spend the remainder of his life in conversation with these immortal creatures, enraptured by thoughts of what he would do if only he could live forever.

To repeat, it is the prospect of immortality that seduces Gulliver from his homecoming. Like Odysseus in the Odyssey, he encounters many strange places and people, many alternative accounts of what life might amount to, but in the end, he always yearns for home. This time, however, having learned of the struldbruggs, he is prepared to set aside his return, envisioning that he will pass his remaining years among what he imagines must surely be the happiest people in all the world, perhaps even hoping that immortality might somehow rub off on him. Gulliver, like many contemporary titans of Silicon Valley, supposes that the solution to life’s problems lies in its indefinite prolongation.

In fact, however, the struldbruggs and their lives are nothing like what Gulliver imagines. First, they are each born with a red dot above their left eye, which identifies them as immortals. Second, they lead normal human lives, including the decline and indignities of old age such as hair loss and diminished vision and hearing. Finally, upon reaching their 80th year, they are declared legally dead and forbidden to own property.

Their heirs immediately succeed to their estates; only a small pittance is reserved for their support; and the poor ones are maintained at the public charge. After that period, they are held incapable of any employment of trust or profit; they cannot purchase lands or take leases; neither are they allowed to be witnesses in any cause, either civil or criminal, not even for the decision of metes and bounds.

In other words, these immortal creatures are neither imbued with the “wisdom of all former ages” nor fit to serve as “living examples of ancient virtue.” To the contrary, they are wrenching, grasping, clutching, covetous old sinners who must be legally incapacitated to prevent harm to others.

Otherwise, as avarice is the necessary consequence of old age, those immortals would in time become proprietors of the whole nation, and engross the civil power, which, for want of abilities to manage, must end in the ruin of the public.

Immortality, it turns out, leads not to the perfection of the virtues, but to their most extreme corruption. Just imagine if some of the great tyrants of history—in the twentieth century, men such as Hitler, Stalin, and Mao—were granted not one but many lifetimes to carry out their plans.

To this magnification of human pride and greed, Swift juxtaposes Gulliver’s lofty dreams of what he would accomplish if only he could live forever. First, he would procure riches “by all arts and methods whatsoever” until he had become the wealthiest of all men. Next, he would commit himself to study until he excelled “all others in learning.” Then he would keep a record of all events and customs, becoming “a living treasury of knowledge and wisdom, the oracle of the nation.” He would then teach the young about the “usefulness of virtue in public and private life.” In other words, Gulliver would make himself a great man, perhaps the greatest ever to live.

Yet there is a problem. He does not really provide a convincing account of how he would acquire virtue, nor does he appear to understand what virtue really is. He thinks only of himself, failing to recognize that other struldbruggs would have lived far longer than he and would, as a result, possess even greater wealth, knowledge, and virtue. His account also makes it rather difficult to imagine that he would devote himself to the service of others, his country, and humankind, in part because he seems to envision spending all his time exclusively with other immortals. In fact, he supposes he would shed nary a tear for any mere mortals, regarding their passing with no more remorse than a gardener feels for his seasonal tulips.

Why, then, does Gulliver choose to return home? In part because he realizes that the real struldbruggs are hidebound, selfish human beings, leading lives that are at once vacuous and a threat to the happiness and well-being of those around them. They are not the happiest of humans but the saddest. Those plotting the extension of the human lifespan through movements such as transhumanism would be well-advised to think again about the welfare of humanity. And those supposing that the problems of the world can be addressed simply by turning to people advanced in years, the impulse of gerontocracy, should recall that extending something inherently defective merely prolongs deficiency.

In Gulliver’s Travels, Swift issues an important warning to scientists, physicians, economists, and politicians who mistake the quantitative prolongation of life for its qualitative improvement. Merely extending the duration of a vapid and indifferent life does nothing to enhance its excellence and may only magnify its deficiencies. To genuinely improve human life, it is necessary to discern and enhance what is good in it. All the psychopharmacology and artificial intelligence in the world cannot add one iota of goodness, and until we are ready to do the hard work of educating minds and hearts and cultivating characters, we should avoid—like the plague—the temptation to vest our hopes in life’s prolongation.

The post The Unintended Consequences of Immortality appeared first on Law & Liberty.

The Wisdom of Lifelong Education https://lawliberty.org/the-wisdom-of-lifelong-education/ Fri, 15 Dec 2023 11:00:00 +0000 https://lawliberty.org/?p=53043

On the northside of Indianapolis, there is a senior living community, numbering about 160 residents. Over the past few months, a group of interested residents and I have been meeting weekly for 90 minutes, devoting our attention to the discussion of a great book. There is no course title, and the offering does not appear in any course catalog. No money changes hands. No credit is awarded. No one is making progress toward a degree or advancing a career, and no lines are getting added to a resume or CV. There is no reason for anyone to attend the sessions except for the desire to share in the pursuit of knowledge.

The comments of some of the 20 or so regular participants are telling. Said one woman, “Many fine activities are planned for residents here each day, but after a while, the day trips, board games, movies, and art projects leave you feeling like you are back in kindergarten.” This sentiment may betray a mistaken assumption on the part of the organizations operating such communities—namely, that aged people need nothing so much as ways to keep occupied, pass the time, and stay amused. Said another, “It is such a treat to gather like this and talk about great books and great ideas. It makes us feel as though society has not given up on us, that developing our minds still matters.”

When Americans think about “students,” this is not what they call to mind. Our institutions of education, which together comprise a $1.5 trillion industry, cater almost exclusively to the young. Whether K-12 schools, vocational and career training centers, or colleges and universities, the US educational apparatus is heavily skewed toward those under the age of 30. This reflects a longstanding presumption that education should focus on preparation: teaching students fundamentals such as reading and math, and prepping them for entry into the job market, a lifetime of gainful employment, and the paying of income taxes. There are obvious economic returns on this investment.

Yet education and youth are not necessarily linked, and participants in this group are truly students in the fullest sense. The word student comes from the Latin studere, meaning to apply oneself zealously, an activity for which the old, thanks to a lifetime of focus and application, are often better prepared than the young. What might from one perspective appear to be a bug of aging—a diminished capacity for multitasking—might from the standpoint of focus and concentration prove to be a feature. Both Laura Ingalls Wilder of Little House on the Prairie and Frank McCourt of Angela’s Ashes began writing at the age of 65, and Grandma Moses did not even take up painting until she was 77 years old.

Sometimes education achieves its highest goals when it is largely divorced from utilitarian considerations. Consider Socrates. A stonecutter by trade, he seems to have devoted most of his days to teaching students without payment. Although many of his well-off interlocutors would have been happy to pay, just as they were quite prepared to bribe his jailers to rescue him from imprisonment and execution, Socrates would have none of it. To accept payment would have implied that he had no true calling as an educator—that he, like the sophists whom he so reviled, was merely teaching to make money, subordinating the pursuit of wisdom to the appetite for wealth. From the Socratic perspective, selling knowledge or virtue for money represents a subversion of values far more invidious than prostitution.

A great deal hinges on what we take to be the purpose of human life. If we are here to make money and pursue education to boost earning power, then a powerful case can be made for prioritizing studies that will pay off in purely pecuniary terms. But perhaps education also has higher and better purposes. Suppose, for example, that it can help to fulfill the most essential feature of human nature, as indicated in the first line of Aristotle’s Metaphysics, the work he regarded as “first philosophy”: the desire to know. If what comes most naturally to us is the pursuit of understanding, then education will be worth pursuing for its own sake, not merely as a means to something else, and learning will remain a priority for us so long as we remain alive.

If the world were constructed according to utilitarian principles, perhaps only the young would be educated, and once educated, they would stop learning and start producing. But if utility is not the central organizing principle, then human destiny may be as much or more fulfilled when we understand and marvel at an idea as when we exploit it for some other purpose, such as boosting economic productivity. Regarding the role of humans in the greater scheme of things, perhaps it is not so important whether we live with the knowledge of something for 50 years or just 5 minutes, so long as, in the moment, we recognize and understand it for what it really is.

These non-utilitarian goods seem especially precious to the participants in my group. The book we have been discussing lately is Tolstoy’s “Anna Karenina,” and we are devoting a weekly session to each of its eight parts. Recently we focused on the topic of marriage. The students in my university courses, who average about 25 years of age, always glean insights from Tolstoy. But with the senior group, which includes participants whose marriages have lasted as long as 68 years, 67 years, and 64 years, the discussion takes on quite a different tenor. Unlike younger students, they can draw on a lifetime of experience that includes the death of their parents, their entire careers, and parenthood and grandparenthood. Ideas arise that are absent with younger students who are just getting started in life.

Said one participant, “Our reading and discussion of this book is enriching our lives in so many ways. It provokes us to think about our careers, our families, and our communities in ways nothing else around here does. In some instances, we revisit unpleasant parts of our lives, and tears of shame and regret are shed. But more often, it brings us back in touch with things in life that have meant the most to us, and we are so grateful for the chance to reconnect with them and savor them again.” It is as though the novel were opening up long-forgotten ideas and experiences and bringing them, and the participants who encounter them, back to life. Each week, for 90 minutes, we are revitalizing one another.

I am not suggesting that seniors should return to full-time, degree-earning studies, although there is also no reason they shouldn’t. But what we have discovered is that even very senior human beings often exhibit a deep longing to know, a delight in discussing ideas, and a passion to share in the life of the mind for its own sake. They may not be able to run as fast or jump as far as when they were young, but they can inquire, discover, ponder, and rejoice in learning at least as deeply and in some ways even more richly than in their youth. We are learning that liberal education, while not wasted on the young, should be expanded where possible to include those at life’s other pole.

The post The Wisdom of Lifelong Education appeared first on Law & Liberty.
