Majesty, Mystery, and Malaise in Maestro
Rebecca Burgess, Law & Liberty, May 10, 2024
https://lawliberty.org/majesty-mystery-and-malaise-in-maestro/

Downbeats are what initiate musical measures; upbeats are what end them. For the conductor and orchestra, the downbeat creates the sense of structure and provides stability, grounding the composition with a rhythmic anchor. The upbeat is what introduces anticipation and motion. Rhythm—the necessary pulse of any piece of music—has as its basis the symbiotic relationship between these, and yet it’s the downbeat that acts as the heartbeat within. 

Intriguingly, a piece of music doesn’t have to start with a downbeat. There can be a few pickup notes of introduction (anacrusis, for the formal types keeping score) from a preceding measure to help create the atmospheric scene. For that matter, the initial downbeat of a musical composition doesn’t even have to be a note at all: it can be that space of musical silence signified by a rest. What does it mean to anchor an artistic piece whose medium is sound—in a beat of silence?

Bradley Cooper’s Leonard “Lenny” Bernstein bursts onto the primetime stage in his film Maestro conducting—at the last minute and unrehearsed—Robert Schumann’s Manfred Overture. But Manfred’s opening downbeat is a rest. If the orchestra doesn’t “get” that with the conductor, inescapable chaos will ensue. Maestro the film, however, begins with its own particular anacrusis. It’s scaffolded around a piano rendition of a sequence from Bernstein’s opera A Quiet Place. The meditative scene sounds a note of personified silence—about one particular absence. This frames the film, which ends as it begins, with the aged and widowed Bernstein at the piano, invoking an image of his deceased wife Felicia Montealegre. Can a cinematic anacrusis that involves both sound and sight function as a downbeat “rest”—a space of specialized but anchoring silence as it were? 

If so, there are more wonderings to navigate here than those of mere biography.

Maestro, directed by Cooper and produced in part by veteran filmmakers Martin Scorsese and Steven Spielberg, is Cooper’s second directorial outing. It is his second film about music and those musicians who are impelled to create music, not simply perform it. There are certainly many similarities to his first film, A Star is Born. Maestro is equally suffused with music—Bernstein’s own, along with Schumann, William Walton, Beethoven, Lincoln Chase, and, pivotally, Gustav Mahler. It is similarly about artistic souls and the world they inhabit. And Maestro is also about the types of human relationships that can form around the nucleus of the artist-genius, and whether those relationships are inevitably or necessarily less humane. Maestro, however, is a subtler affair about the mysteries of human creativity and the human soul.

Maestro is not your standard musical biopic. This doesn’t deny that familiar tropes are the building blocks of the story throughout—there’s the opening quote by the film’s real-life protagonist; the reflective scene by the older man cutting to the big pivotal night of his young career; the meeting of various loves; montages of artistic highlights interspersed with life events; the midcareer anxieties and family anguishes; reconciliations and a death; the return to that reflective opening scene. But Cooper and cinematographer Matthew Libatique frame and shoot these tropes in a woven rather than linear fashion. This makes their very composition a part of the specific story Cooper is telling. 

Viewers may start to suspect that the story is not really about the historical Leonard Bernstein at all. There are no chyrons or newspapers flashing headlines telling us the year or concurrent world events. Nothing is mentioned about the length (and true breadth) of Bernstein’s career, the date of his death, or the fate of his closest peers and proteges. There are no career stats, nor reminders of how Bernstein measured up to the anticipation that he would be “America’s first great conductor.” Very little is said about what it means to be a conductor—a musical Moses tasked with anticipating the way for his musicians. But art and creativity and human communication clearly are the substance of the film. 

The cue lies in the film’s opening quote: “A work of art does not answer questions, it provokes them; and its essential meaning is in the tension between the contradictory answers.” 

Perhaps for Cooper, the character Lenny Bernstein is the piece of art—or its analogy. In that case, his meaning must be in the tension between the musical artist’s dependency upon people to compose for and perform before, and the composer’s need for non-performative solitude and interior reflection in which to create (having “a grand inner life,” as Lenny puts it). Lenny is both conductor and composer; he desires to be both; he wavers between the two. Either persona leeches off the ability to be fully the other. Composer, as a musical creator, seems to be the higher calling, and yet Lenny cannot bear to be alone: he has a near-pathological need not to be separated from people, such that he even leaves the door open when using the bathroom. More interestingly, that tension between the exhibitive and the creative aspects of music-making is mirrored in Cooper’s cinematic project: film is disclosive, after all, and yet the core of what’s being disclosed in Maestro is a mystery so deeply personal—and yet cosmic—that it’s not to be solved, only guarded.

Cooper emphasizes the veil concealing the human creative process rather than rending it with some performative “raw” vulnerability. This is precisely the opposite of what contemporary audiences have come to expect of a film about artistic figures. It’s daring of Cooper, given the nature of his art form. But as Cooper stands as guardian to the historical Bernstein’s wish to preserve the creative mystery, so does the character of Bernstein’s wife, Felicia Montealegre, stand as guardian to Lenny’s persona. She is the guardian of his creativity, not the source of it; hence, at certain points, her deep anger at Lenny’s “unfaithfulness” to his composing. Felicia is herself a musician, after all, and an artist, too.

Carey Mulligan’s wonderfully, sensitively espressivo Felicia does not fulfill the expected trope of muse to her unrestrainable husband. She makes Felicia Lenny’s forever anchor, the necessary structure countering the chosen chaos of his life—the downbeat to his upbeat. Despite his fondness and affection, Lenny largely fails to appreciate what Felicia is and does when she is in front of him and is even oblivious to the true why of her deep importance to his life; this forms the quiet tragedy of the tale. Though she is the heart of the story, Felicia is quite literally always in Lenny’s shadow—until her death, that is, when Lenny is in hers.

“I love two things, music and people. I don’t know which I like better, but I make music because I love people. I love to play for them and communicate with them on the deepest, which is the musical, level.” In the film, Lenny relays a version of this historical quote from a 1990 Bernstein interview, but as a middle-aged man chain-smoking cigarettes in the backyard of his own estate. It’s all of a piece with his professional success, and yet he’s restless and preoccupied, reflecting to art critic and biographer John Gruen that, to its peril, the world has been losing its capacity for creativity. “I feel the world is on the verge of collapse. … The diminution of creativity … which has come to a grinding halt. I mean, not scientifically. That has exploded. … But I know that Felicia—she senses it enormously.”

These two later quotes are the addenda to that first introductory one. Together they more properly reveal the framework for the real story Cooper has identified within or via the relationship of Bernstein and his wife alongside Bernstein’s string of male lovers. That story is the revelation of the cruel irony and casual cruelty of the musical genius in love with artistic communication, whose fixation on self-expression deafens him to what his most intimate audience needs to hear. Music without words may be the highest or deepest human expression, philosophically speaking, but enfleshed husbands and wives, parents and children, cannot subsist on such music alone. The physical reality of separate bodies forever prevents the “marriage of true minds.” Sometimes, words are what’s required.

To be a maestro, a conductor of Bernstein’s caliber, requires an extraordinary sensitivity to the minutest of factors—of tone, pitch, timing, phrasing, dynamics, potentialities of various instruments, individual players, sections of an orchestra, the whole human symphony cum audience along with the particular musical composition on the podium. It’s a stage drama, but an aural one, and the conductor is the stage director. How the conductor gets his musicians to play sequences of notes in a particular way, leading toward the next sequences of notes, and onward to an ending, fundamentally matters. And it requires reservoirs of thoughtfulness, and not just feeling, on the conductor’s part.

In a penultimate sequence of Maestro, we’re privy to a miniature clinic in the difference that mastering all of these aspects makes in the music as actually performed and heard. A now-widowed Lenny gives a coaching masterclass at Tanglewood. Student conductor William struggles through a particular sequence of the Allegro Vivace from Beethoven’s Symphony No. 8. The aspiring conductor doesn’t know how to “get out” from “retarding into the fermata” and into the next musical phrase—“Are you gonna bleed out of it; drip out of it … leak out of it, that’s what it sounds like.” Lenny challenges William over his inarticulateness. Eventually, he takes William’s baton and demonstrates how to do it. Lenny signals a cutoff and an upbeat, “… quarters. I think that’s what you really mean.” You can hear the difference instantaneously: meandering notes suddenly become purposeful; a defined moment is articulated with a few precise flicks of the conductor’s wrists.

But Lenny’s genius-level perceptions and sensitivity in musical matters do not seem to translate to his human relationships. When the breakup in their marriage comes, it is the result of Lenny “getting sloppy” in his extramarital homosexual affairs. But what Felicia means by “sloppy” is Lenny’s refusal to be sensitive enough about the modicum of public respect and dignity she had hoped and asked for in exchange for accepting his sexual behaviors. He is publicly flaunting his unconventional affairs. But her anger is deeper than some disappointed desire for conformity to social convention. The basic kindness of recognizing what deeply matters to a spouse and acting accordingly ultimately does not figure into Lenny’s calculation. It had always been Lenny’s world, with everyone else simply privileged to be in it.

Felicia ultimately recognizes that neither her own artist’s appreciation of Lenny’s gifts, nor her guardianship of his creative genius, nor her intellectual acceptance of the situation could ever cancel the very real heart-sting of that truth. In the poignant, deftly portrayed sequence of Felicia’s cancer diagnosis and death (almost too poignant for anyone who’s witnessed a loved one succumb to cancer), Felicia herself finally gives words to this most basic need not just of the marital and the familial relationship, but of the human heart: “You know, all you need, all anyone needs is to be sensitive to others,” she tells her daughter. “Kindness. Kindness. Kindness.” Kindness is the proof of their seeing the other person’s vulnerabilities, of acknowledging them for who they are as a distinct human being. But genius, while it may be a gift, is clearly not always a blessing in this regard.

The puzzlement of how a person whose entire artistic and professional persona flows from his extraordinary perceptive and communicative abilities could fail to understand the sensitivities of actual human relationships is not one that Maestro attempts to solve. But Cooper has said that this imbalanced relationship of a God-like Bernstein “coming down to us and for his destiny,” as though the modern musical god were being pulled by the hoi polloi into their darkened world in order for it to be enlivened with his gifts, was precisely the thing that Cooper had wanted to explore. This certainly illuminates the film’s first major sequence, which has a nearly naked Lenny receiving the fateful phone call about the last-minute Carnegie debut in his apartment above Carnegie Hall. He rips open the curtains with a jubilant “You got it boy!”—only to play the drums on the exposed buttocks of his sleeping lover while bounding out the door for a tracked descent to the fabled stage. It’s the world as an instrument, cinematized.

That descent and what it means for us mere mortals—and for Lenny himself—seems cleverly echoed in the pacing, shots, and styles of Cooper’s film as it follows Lenny through some forty-odd years of his life. Our poor lives, at least, are enlivened by his: It’s exhilarating and heady and glorious at first, and Lenny and Felicia’s meeting and flirting and coupling is its own grand 1940s movie, complete with high-contrast black-and-white and Academy ratio frames. We move through fantasy sets and scenes of Bernstein’s own artistic creations, Fancy Free and On the Town, and on to real-life radio and TV interviews of the young couple’s married and professional lives. And as the decades succeed each other in Lenny and Felicia’s lives, so too do the film stock, the color palettes, and the aspect ratio change to match their historical era. The banter and bubbliness and soulful conversations become short and strained exchanges, marked more by what is not being said than what is. There’s contention and aloofness, distrust, and a certain malaise that has settled in by the time we reach the early 1970s, despite Lenny’s continued professional success. Creativity is in decline, right in front of us.

And yet it’s not that any artistic or creative energy has simply declined: the pinnacle of Maestro is arguably the six-minute live shot of the recreation of Bernstein’s famous 1973 performance of Mahler’s Symphony No. 2, “Resurrection,” with the London Symphony Orchestra in Ely Cathedral. It is stunningly shot. Quite simply, it’s transportive.

Cooper did his homework well (spending six years studying conducting with Yannick Nézet-Séguin, music and artistic director of the Philadelphia Orchestra), conveying the essence of a Bernstein conducting event in all its sweaty, physical ecstasy. (Bernstein was famous for conducting even just with his eyebrows.) Some have criticized Cooper’s performance as hammy overacting, but it’s worth remembering that audiences rarely see much more than the backside of the conductor, while the camera shows us what the orchestra sees—his face. But the Mahler moment is doubly significant for what the symphony itself represents: besides being a gigantic, superhuman, 90-minute musical effort that demands extraordinarily high technical skills alongside emotional depths from musicians, the symphony was also a working out of the composer’s idea of creating a world of its own.

Lenny, too, remains in his own self-referential yet performative world. But there is no “resurrection” for Lenny, just a specter of the deceased Felicia that’s now vividly present in his mind’s eye. He is with Felicia in her dying moments; but then he is back to being a too-old adult attempting to act like a college kid alongside some of his partying Tanglewood students; blasting R.E.M.’s “It’s the End of the World as We Know It (And I Feel Fine)” from his convertible; being filmed talking about himself, and how “summer sings less often” now. There’s nothing godlike in his presentation or behavior anymore, though there is something performative, and it’s now clear that those are not the same. Nor is this a “sad” scene about decline. But it is an embarrassing one. 

The “diminution of creativity” of which Lenny had earlier complained turns out to be, in his own case, a diminution of dignity, which the cinematography subtly captures. Perhaps the musical creator-god did not in the end so much descend from his heights to give technicolor to the people. Rather, technicolor revealed him to be pedestrianly human after all, because so perplexingly lacking in humane sensitivity in life outside of art. This thought leads to a less kind, but no less warranted, thought: that perhaps Felicia’s adoring love for and guardianship of Lenny’s creativity mistook mere great talent for true genius. The latter was impossible given Lenny’s own unwillingness or inability to separate himself enough from his admirers to cultivate a grand enough interior silence. But for the film Maestro to have said this explicitly, its director would have had to reduce the mystery of human creativity to a mere output of its creator’s behavior. This, thankfully, he did not do, allowing for the more pregnant silence.

Whether and how to separate the art from the artist is one of the most frequently revisited themes raised by another recent film about conducting and music: Todd Field’s Tár (2022), about the world’s “most renowned” fictional interpreter of Mahler and a protege of Leonard Bernstein, Lydia Tár (played by Cate Blanchett). But from the perspective of Maestro, the richest themes of Tár are in its showing of exactly how high the cost is to conductors and musical artists of transmitting to their audiences, faithfully and well, the transcendent compositions of a universal human heritage. Every conductor, but especially the greatest, bears interiorly the immense responsibility of transmitting anew some of the most sophisticated artistic achievements of the human race. And the instrument they have to rely on to do this is not some passive wooden or brass one, but a human instrument, the orchestra. They must therefore control that instrument, and a veritable symphony of tangential factors, the most important of which is time itself—and silence.

“Keeping time—it’s no small thing,” Blanchett’s Tár says to the New Yorker’s Adam Gopnik. “Time is the thing. It’s the essential piece of interpretation. You cannot start without me. I start the clock. My left hand shapes, but my right hand marks time, and moves it forward,” she continues. “The reality is, that right from the very beginning I know precisely what time it is, and the exact moment that you and I will arrive at our destination, together.” Of course, it turns out that, like Lenny, the infinitely more artistically and intellectually disciplined Tár does not in fact know the moment or place of her arrival, severely if not hideously miscalculating the rhythm of her own life choices, and how different the tempi are between her interpretation of her art, job, profession, and artistic responsibilities as a transmitter of music and those of the vocal critics of the world.

Tasked with being so godlike as to even control time within the confines of the concert hall, the enduring mystery is how, when it comes to the most human interactions of all, the best conductors so often miss the beat.

Scenes from a Kibbutz
Rebecca Burgess, Law & Liberty, November 15, 2023
https://lawliberty.org/scenes-from-a-kibbutz/

If you prick us, do we not bleed?

Was it the contrast of the unfurled flowers’ deep jewel tones against the earth’s sandy-desert neutrals? The vivid, joyous folk art adorning the external walls contrasted with the concrete of the reinforced bomb shelters intersecting the playgrounds? Or was it simply the monochrome brightness of our hostess Chen Abrams’ red shirt playing out against washed-out capris—as she strode forward against a painfully sun-filled sky to greet us—that had prompted my recollection of the poignant words Shakespeare gives to Shylock the Jew in The Merchant of Venice?

If you poison us, do we not die?

And if you wrong us, shall we not revenge? If we are like you in the rest, we will resemble you in that.

There was neither revenge nor anger that day. There was hospitality and fortitude, energy and earnestness; a certain notable quickness of reflexes on the part of our Israeli hosts, and an atmosphere of normalcy so pervasive that the absolute abnormality—from an American perspective—was cinematically heightened. White resin patio chairs stood sentinel over potted plants and sunning felines on verandas. These abutted green-grassed lawns demarcated by pathways and not by fences. Children’s bikes leaned against each other tidily, expectantly awaiting their rambunctious riders. In snapshot, any of the neighborhoods we walked through could have belonged to a golf course community in Coeur d’Alene, Idaho—only there’d have been fences, if it were Idaho.

Two people were tending to the landscaping, watering trees and bushes. Neighbors were running quick next-door errands. We were witnessing how knitted together, like a family, this community is. How no one locks their doors.

But the hum of human movement was quiet—it was, after all, a working day, and this was a working community, not a summer camp. Many were out in the surrounding fields. It was also a school day, and the kids were being schooled. After entering the kibbutz, we stood alongside the school complex for a while, hearing the murmur of voices within. What remained with me was the sight of the still line of baby strollers near the door, with multiple painted bomb shelters a few yards away.

Hath not a Jew eyes?

This was Kfar Aza, a kibbutz on the Israeli border. It was June 2022, and for about fifteen years of intermittent war already, the farming community had had to live every day with ears trained for the sound of the air raid siren and with reflexes quickened to find immediate shelter from rocket fire. Even the toddlers here are so used to it, Abrams told us, that they’ll immediately lift up their arms at the sound. They know instinctively that somebody will rush to pick them up. That’s because, at just about one mile away from the Gaza Strip, the residents here have a mere ten to fifteen seconds to run for cover. And so, from anywhere in the kibbutz, there is a bomb shelter ten seconds away.

Up through 2016, during “mostly peaceful” weeks, the kibbutz would experience between one and five rockets per week. During times of escalation, Kfar Aza and the surrounding areas could experience up to 120 incoming rockets in under 48 hours. But looking back now, one notices something: in 2021, 2022, and May of 2023, Islamic Jihad followed their usual course of rocket-launching behavior, but with the distinct difference of also launching barrages of thousands of rockets over a few concentrated days, according to the compilations published by the Jewish Virtual Library. Between October 7 and 20, 2023 alone, more than 7,380 rockets were fired from Gaza, 6,000 of them on October 7. That’s a notable difference in pattern from 2008, the year that 48-year-old Jimmy Kdoshim was killed by a Hamas mortar shell from Gaza while working in his Kfar Aza garden: from January through December of that year, a total of 3,107 rockets and mortars were fired at Israel, with no single huge concentration.

Before Kdoshim’s shocking death in 2008 (“this kibbutz changed forever,” Abrams has said)—but especially before Israel’s unilateral evacuation of every Israeli household, gravesite, and settlement from Gaza in 2005 and Hamas’ bloody ouster of its rival Fatah within the Gaza Strip in 2006—one could almost have thought of Kfar Aza as one of those idyllic beach-adjacent communities where the abundance of nautical elements in décor, from sea shells to wind chimes, matches the sunniness of disposition and laid-back lifestyle of seaside life. Kfar Aza is, after all, only about ten miles away from the Eastern Mediterranean; its residents “used to go to Gaza and go to the beach there. … We did commerce together.” This history is so obscured now that it took me an afternoon to piece together why those sea shells would be in Kfar Aza gardens.

Also obscured is the deep and once unlikely historical connection for the modern state of Israel between the kibbutzim, their farmers, and the things of the sea. In pre-state years, for those early Jewish advocates for a Jewish nation-state who were making their way back to the region, the focus was on redeeming the soil—reclaiming Eretz Yisrael—and thus the farmer was the hero, the ideal. The sea may have been the necessary pathway, from the port at Odessa or from Trieste or even Portsmouth, but once arrived in Jaffa, the future lay inland. This early neglect of the sea and maritime enterprises may have had a further ideological root, however: the collectivist-socialist (social democratic) mindset that would have been suspicious of fishing as too individualistic and different from the communal lifestyle of the kibbutz. 

But a sea-change in attitude among the Zionists occurred over the course of the 1930s, steered in large part by the man who would be Israel’s primary national founder and first prime minister, David Ben-Gurion. As the newish city of Tel Aviv began to come into its own, it opened its own port in 1936 (later moved down the coast to Ashdod), with Ben-Gurion even announcing then: 

The conquest of the soil by city people was the great, first adventure of our movement, of our endeavour in the country. A second adventure, great also, and perhaps harder than the first, still awaits us—the conquest of the sea. … The Mediterranean is the natural bridge that connects our small country with the wide world. The sea is an organic, economic and political part of our country. And it is still free. The force that pushed us from the city to the village pushes us now from the land to the sea. … The sea opens unbounded horizons for us. … We should remember: this country of ours combines land and water.

Necessity may have been the initial mother of invention here. As Jewish immigration increased, the Arab sailors, workers, and unions who controlled the Jaffa port became more hostile, eventually even closing it to Jews altogether amid the anti-Jewish and anti-British riots of 1936. But Ben-Gurion harnessed the political and economic opportunity of the moment, writing an essay in 1937 entitled “Going Down to the Sea” in which he deliberately linked a maritime mastery with the Jewish nation-building process. 

The sea assumed a pivotal role in the consciousness of this traditionally “interior” people, translating into everything from new Jewish shipping lines to the creation of fishing collectives, to “maritime athletics,” and even to a new holiday called “Sea Day.” This is the neglected, complicated story that Kobi Cohen-Hattab tells in Zionism’s Maritime Revolution: The Yishuv’s Hold on the Land of Israel’s Sea and Shores. But it is clear that the sea occupies an important role in the story of modern Israeli independence and sovereignty. Culturally at least, the sea ought to continue to hold an important place in the mind of the modern Israeli. But whether the sea was ever as pivotal for the cultural consciousness of the Israeli as it was for the late sixteenth-century Venetian is a harder matter to settle.

Easier is to recognize some situational similarities between the Venice of Shakespeare and the island of Israel in its modern reality, encircled by Arab lands geographically and by anti-Israeli sentiment and lawfare internationally. The classic differentiations between sea and land; between the churn of commercialism and diversity and increasing secularism on the one hand and homogeneity of people and law and custom and religion on the other; between what is fixed and what is unfixed (movement and rest, in Thucydidean terms); between what is rational and governable and what is ungovernable and irrational—are woven into Kfar Aza and the similar kibbutzim on the edges of Israel’s borders as they are woven into the hardly-hidden dynamics of Shakespeare’s Venice. 

That’s not where my thoughts wandered in June 2022, but it is where they wander now, as these sociopolitical elements and literary metaphors offer some help toward illustrating the savage clash of dynamics, red in tooth and claw, that the world witnessed in action at Kfar Aza on October 7, 2023, and since.

Hath not a Jew hands, organs, dimensions, senses, affections, passions; fed with the same food, hurt with the same weapons, subject to the same diseases, healed by the same means, warmed and cooled by the same winter and summer as a Christian is?

Our Israeli hostess had deliberately begun our visit that June day with the playground and the school, the bus stop, and the bomb shelters, to situate us in the physical and mental reality of the deep contrasts of living in the Gaza border kibbutzim. As her survivor father put it recently on a Zoom call, “It’s heaven here 95 percent of the time; 5 percent of the time it’s hell,” or, as Kibbutz Nahal Oz resident Amir Tibon describes it, life here is “like an amazing resort village next to a war zone.” To deal with the rockets and the sirens, they have not only built the physical bomb shelters but invested in mental health care aimed at building resilience in addition to addressing trauma, for their children and for the adults. Some do leave, but many more return—it’s a popular place to spend a year or two during the Israeli equivalent of a “gap year,” living in communal dorms. Once these individuals start to think about raising a family, many settle down here for that sense of community and safety. And that’s where the brightly painted bomb shelter art came in. 

The artwork was the product of the young children. There were sunflowers and green grass painted on some; on others, teepee-like depictions with a helmeted soldier figure in front denoting warmth and safety within; zigzagging lines of exploding danger without. One colorful wall depicted a newish tactic that had recently been ramped up by Hamas: the use of birthday balloon bunches to float small explosives into the fields and yards of Kfar Aza—“incendiary balloons”—undetectable and unanswerable by the far more technically advanced Israeli missile defense system. Between 2018 and 2021 alone, such balloons were responsible for more than 10,400 acres of burnt crops in the Gaza envelope. Because the balloons can fall in individual yards where children play, the adults have had to walk a fine line in teaching young kids how to be cautious about even the harmless things of childhood, without backing them into corners of paralyzing fear.

And meanwhile, through all of this, Abrams is not unique among the residents in voicing pacific, humanitarian thoughts about her Palestinian neighbors: “I believe that on the other side of the fence, there are children like my son. There are lots of children there that are less fortunate than him … And they deserve life like he deserves life. And I can’t give up this hope.” 

It’s no secret that the kibbutzim traditionally have been left-leaning—they originated as experiments in voluntary socialism, after all—or that they have historically filled a deliberate twofold strategic role in the establishment and development of the Israeli state. They were meant to cultivate the land and to settle the area, but also to be an element of border security, as it were. Of the roughly 270 such communities in the state of Israel today, between 20 and 50 dot the 32-mile border on the east and north of the Gaza Strip. And while always a minority population, the kibbutzim up through recent memory were perceived to be the “vanguard” of Israeli society in agriculture, politics, and the military. (As late as July 2000, 42 percent of the air force came from kibbutzim and similar collectives, for instance.) Thus, it’s not surprising to learn that among Israelis, the kibbutz residents on the Gaza border are actually more likely than not to have supported the peace process and normalization with their Palestinian neighbors, and even to have been activists for the cause, volunteering via a variety of organizations to give physical aid to their Palestinian neighbors.

These communities were happy to welcome Gazan workers in their fields and orchards once Israel resumed, in 2021, the granting of work permits to Gazan Palestinians that it had halted because of Hamas in 2007. Just this summer, Israeli authorities were reportedly discussing increasing the number of those work permits—already around 20,000—due to pressure from the Biden administration. Advocates of the program, whether American or Israeli, argued that by helping to relieve the economic situation and improving the quality of life in the Gaza Strip, they were helping “calm the region and reduc[e] tension.” Kibbutz farmers also welcomed Gazan Palestinian workers because of their shared agricultural knowledge of tending to bananas, citrus fruits, and other crops. For that matter, the Israeli agricultural sector has a distinct reliance on foreign workers, employing some 30,000 Thai farmhands alone, 5,000 of whom work near the Gaza border, where 6 percent of Israel’s milk, 20 percent of its fruit, and 75 percent of its vegetables are grown.

“It is not between me and Hashem or between me and Nabil. It’s between governments,” is how Kibbutz Nir Am defender and survivor Ofer Liberman still thinks about the relationship between himself and some longtime Gazan Palestinian kibbutz workers. Several of those Gazan farmworkers were shot and killed by their fellow Gazans on October 7. Is it right, is it correct, to think that such an attitude writ large—meaning the deliberate attempt to circumvent the political by merely economic means—is directly responsible for this October having witnessed not the usual harvest of fruits and vegetables from the earth but the more horrific one of human blood and souls? Of the hundreds (thousands) of Hamas terrorists who streamed through these kibbutzim and border villages, there were those who had worked the very same fields alongside their Israeli and foreign counterparts, who used their daily access to relay back to their Hamas commanders detailed information about every single Israeli community eventually targeted, most especially about the location of schools and youth centers, and about the relative populations of women, children, and the elderly there.

An estimated 300 terrorists overran Kfar Aza from six different positions in the early hours of October 7. Of the nearly 1,000 registered residents, about 600 were home that morning, and they felt the siege individually—each thought that they were experiencing an isolated incident, until enlightened via the modern tools of technology that their communal massacre had been planned all along. The mayor and small security team were killed almost immediately, the survivors later pieced together. Nor could the bomb shelters and in-house saferooms—designed with aerial rocket attacks, not urban terrorism, in mind—entirely protect them.

By the end of the day, one out of every ten Kfar Aza residents had been murdered or kidnapped, according to Abrams; half the kibbutz “looks like Hiroshima in 1945,” Abrams’ father says. When the survivors were eventually evacuated by the IDF, the evacuation was so immediate that many didn’t even have shoes on, and they were not allowed to take anything (the fighting was still active, and booby-traps were everywhere). As they were fleeing, they could not check on the bodies of the friends they had to step around—from the voices that had gradually gone silent on the WhatsApp chat throughout the day, they already had a running tally of their dead. Kfar Aza remains a closed military zone.

From the Hamas terrorists’ own streamings and uploads, and later from the IDF and independent journalists, we know about the pre-civilizational barbarity enacted on these friendly farmer-citizens, their babies, and their grandparents. We also bore witness in real time to how, within less than twenty-four hours, Western elites on prestigious college campuses—who perhaps yesterday were inveighing in academic papers against Shakespeare’s “chilling” antisemitism that supposedly “‘prepare[d] the ground’ for the Nazi Holocaust”—excused if not celebrated in public the premeditated rape and butchering of flesh-and-blood Jews and other Israeli citizens.

If a Jew wrong a Christian, what is his humility? Revenge. If a Christian wrong a Jew, what should his sufferance be by Christian example? Why, revenge. The villainy you teach me I will execute, and it shall go hard but I will better the instruction.

While nearly all of our public spaces are filled with moralizing admonishments to Israelis about the value of Gazan life; about how “this is not the time for revenge”; about “proportional response”; or about how war is futile and hate “solves nothing,” Israelis and Jews worldwide—those who have survived, that is—are breathing examples of how nothing is so ferociously destructive as unrestrained hatred. These thoroughly secularized, antireligious modern Antonios believe that their contemporary Shylock counterparts shouldn’t even get the protection of the law; that they have no standing before the court—any court; that they are unworthy of even basic legal protection of their human life because of the mere fact of geography, nationality, and religion. Not even Shakespeare’s supposedly racist Venetians espoused such a view. 

When Bassanio in The Merchant of Venice pleads in court with the Duke to invoke executive privilege (Act IV), in order to “do a great right, do a little wrong, / And curb this cruel devil of his will,” Shylock is also present in the court, having been able to have Antonio arrested for defaulting on their contract. The lawyer Balthazar reminds the court that the power to overturn basic contract law nowhere exists in Venice; moreover, such an “executive order” would create true chaos in the state. Shylock himself reveals why: Venice is a commercial republic wholly reliant on the smooth conduct of international trade amongst peoples of diverse ethnicities, religions, and customs. Stability comes from a mutual trust that Venice will ensure that all lawfully entered, legal contracts are binding and their execution upheld. Shylock thus demands that the law honor his oath. Famously, Balthazar (Portia) ultimately trounces Shylock’s case, using his own strict letter-of-the-law interpretation against him, but with the aim of upholding law and, importantly, justice.

But what we forget to our peril is how Shylock’s great anger against Antonio only manifests in demanding his pound of flesh after he has lost his daughter Jessica. Jessica eloped secretly with Lorenzo, a young Christian in Antonio’s circle, having apparently also stolen away with her a great deal of her father’s wealth. In losing his daughter, Shylock believes he’s lost his veritable “flesh and blood” and his connection to future generations—traditional Jewish law recognizes matrilineal descent, and thus Jewishness passes through the mother no matter the status of the father—but Shylock believes that even Jessica’s blood has “rebelled,” forever disconnecting them. And in losing his wealth and his daughter to the very group of Christians who openly castigate him and his business “on principle” while privately utilizing his services, Shylock loses not just his livelihood (his financial blood) but his dignity and reputation. He thus has only his oath with Antonio left, and a reliance on the law and the state. Seemingly denied a future, Shylock has no commerce with mercy.

Mercy, of course, gets visited upon Shylock, when the court and the duke ultimately rule that instead of forfeiting his life and all his remaining wealth to the state, Shylock must eventually convert to Christianity, pay a fine, and officially recognize Jessica and her husband as his legal heirs. But we never see this play out. What we see instead is Shylock exiting the court, and the group of merchants, minor aristocrats, disguised lawyer-wives, and their servants decamping back to Portia’s idyllic refuge from the world, Belmont, where they reconvene with Jessica and Lorenzo. It seems that not even the cosmopolitan, globalist Venetian republic can successfully perpetuate its sociopolitical promise—it’s a fiction that in the best of scenarios can just manage to play out in small enclaves, but which in the worst, ends in mutual distrust, murder, and suicide, as the tragic Venice-based Othello implies. But can even wealthy, idyllic enclaves ever truly escape from the sea of the political that encircles them always?

Shylock is still left to mourn his daughter Jessica. And Israel has not yet been allowed to mourn its murdered children.

To Make A People in the Eyes of the World
Rebecca Burgess, Law & Liberty, July 4, 2023
https://lawliberty.org/to-make-a-people-in-the-eyes-of-the-world/

Not so long ago—but long enough that the practice seems foreign to us today—American statesmen and civic leaders delivered eloquent and stirring speeches, the best of them given not on aircraft carriers or in the halls of Congress but in local park gazebos and bandstands, and on Independence Day. As so scenically portrayed by George Caleb Bingham in his Election Series, these were celebratory events, drawing the whole town’s public as an audience and retaining them sometimes for hours. The speaker would harken back to lessons of the American Founding and to that generation of individuals who, as Abraham Lincoln would later write in praise, had had the vision and foresight to “embalm” in their country’s first great declaration of its right to sovereignty “an abstract truth, applicable to all men and all times” about human liberty and political equality.

Such speeches were not merely paeans to the past or occasions of indulgent hero-worship. They were meant to reforge the sense of America for their audiences by bringing them back to the principles of the Declaration of Independence, and, by reminding them of the political and moral causes and sequence of events leading up to independence from Great Britain, to renew Americans’ sense of themselves as a people and of the unfinished work of liberty they were every day engaged in by virtue of living in this democratic republic. Hence these speeches would traditionally precede, include, or follow a recitation in full of the Declaration of Independence—and not just of its famous first two paragraphs.

With his characteristic drollness, Mark Twain liked to describe the public recitation of the Declaration “with its majestic ending, which is worthy to live forever,” as a civic ritual that was in fact serious: a hurling of its truths “at the bones of a fossilized monarch, old King George the III, who has been dead these many years,” but which “will continue to be hurled at him annually as long as this republic lives.” Daniel Webster had earlier provided the sense of why such an annual ritualistic hurling was necessary—to remind American citizens “on whom the defence of our country will ere long devolve” about “the duties incumbent upon us,” lest they “pusillanimously disclaim the legacy bequeathed” and so have to “pronounce the sad valediction to freedom.” But Samuel Adams may have been more to the point when, in 1776, he pronounced the necessity of reminding the American people of the fundamental contrast between the American political system and other nations: that whereas “other nations have received their laws from conquerors,” or “are indebted for a constitution to the suffering of their ancestors through revolving centuries,” Americans alone “have formally and deliberately chosen a government for themselves” in a defiant stand against political tyranny from abroad.

There has always been a vivid awareness of the global character of the Declaration in these Independence Day speeches, whether in Webster’s of 1800 or in his 1851 “Speech at the laying of the cornerstone of the capitol” (“the whole world was the stage and higher characters than princes trod it”), in Samuel Adams’ speech quoted above, or in John Quincy Adams’ famous 1821 address that includes the now contentious-in-policy-interpretation tenet, “[America] goes not abroad, in search of monsters to destroy. She is the well-wisher to the freedom and independence of all. She is the champion and vindicator only of her own…. [America’s] glory is not dominion, but liberty.” Even Frederick Douglass’ celebrated 1852 “What to the slave is the 4th of July?” speech has a comparative-politics and foreign-policy element to it, in his scathing criticism of how Americans had no problem denouncing the external or international slave trade with righteous indignation but were consciously blind to the growing internal slave trade then in existence in America.

That global characteristic is twofold. For the Declaration of Independence is both a foreign policy document and a diplomatic event—however revolutionary (and therefore contentious) its tone and intent—and a culmination of a project begun years before to craft a distinctly American people.

The Declaration is the public rationale that the Continental Congress issued to the world, explaining “with a decent respect to the opinions of mankind” why it had voted on July 2nd, 1776, to break from Great Britain. Thus, while the Declaration is indeed a statement of the governing principles by which our break from Whitehall and our future government was to be judged, it is also a foreign policy statement. By drawing the proverbial line in the sand—declaring that any government’s failure to take account of the truth that “all men are created equal,” and to secure men’s individual rights to “life, liberty, and the pursuit of happiness,” gives a people justifiable grounds for “abolishing” its allegiance and ties to that government—the Declaration’s promoters were putting the world on notice that its revolutionary principles extended far beyond the sliver of the North American continent they inhabited. Even the monarchies and despotisms that then ruled the vast majority of the rest of mankind recognized the revolutionary moment for the rest of the world in the coming into being, for the first time in history, of a government whose legitimacy explicitly rested on the claims of human nature and not on common blood, soil, language, religion, or ancient tradition. (Winston Churchill’s July 4, 1918 speech on “The Third Great Title-Deed of Anglo-American Liberties” is a nice nod in this regard.)

This helps to explain why we more properly celebrate Independence Day on July 4th, rather than on July 2nd, the day the Continental Congress actually voted on the Declaration’s resolutions. By adopting the Declaration on July 4th and publicly proclaiming its philosophical and legal rationale (the “long train of abuses and usurpations” which make up the bulk of the document) for political separation from Great Britain on that date, the (now) former thirteen North American colonies were officially taking their place on the world stage as a new and sovereign nation.

Naturally then, we tend to think of the Declaration as the beginning point of a truly “American” politics, and as the first salvo of fighting words used to propel America as a political entity onto the international stage. But in truth, the Declaration is just as much a terminus. It’s the endpoint to a project begun years earlier by men such as the Declaration’s author, Thomas Jefferson, to make a people out of the numerous, disparate peoples of the thirteen American colonies. However much a government’s legitimacy does not depend on a common blood or soil, as the Declaration affirms, the Founders knew full well that a government not the product of such accidents and forces was uniquely dependent for its survival on a people made distinct by their mutual acceptance and belief in a common set of principles. Thus, well before the Declaration, Jefferson was already engaging in a type of stealth diplomacy across the Thirteen Colonies, drafting public documents ostensibly addressed to King George III that detailed long trains of abuses to the colonies from the hands of the British Parliament, but which were intentionally directed closer to home, toward shaping the sentiments of the American colonists—into being Americans.

Seen in this light, Jefferson’s 1774 Summary View of the Rights of British America is less easily dismissed as some clumsy, naïve telling-off of the king. Political theorist Ralph Lerner argues in Naïve Readings: Reveilles Political and Philosophic that there’s a strategic reason Jefferson never refers in the Summary View to his fellows as Britons or Englishmen living in America. Jefferson “is intent on preserving as great a gap as he can between the transplanted or emigrant man of America and those whom that man or his forebears left behind in old Europe.” For this reason, Jefferson expounds—to King George—on the Saxon Urureltern (distant forebears) for some paragraphs in order to focus—for the sake of the American colonists—on the Saxons’ “priceless bequest—a readiness to live free or die.” Summarizing the argument in Jefferson’s voice, Lerner writes:

From such stock are we, the free inhabitants of the British dominions in America, descended…. [T]he striking parallel between the ancient Saxon emigration to Britain and the modern emigration of Englishmen to America offers a telling example of the proper relation of a mother country to its expatriates. Can one imagine the indignation and scorn with which today’s Britons would greet a latter-day German monarch’s claim to reassert his dominions over descendants of those early Saxon emigrants now resident in Britain? And yet George III and his ministers and Parliament presume to assert such “visionary pretentions” with respect to the descendants of early English emigrants now resident in America.

Not only was Jefferson highlighting for his American peers a historical trait of a love of political liberty handed down from Saxon to American colonist, but he was also putting words to an argument vaguely felt rather than crystalized in the colonists’ heads. The litany of complaints he makes to the king in the Summary View rings familiar to us today because it anticipates the grievances in the Declaration, not to mention in the Declaration and Resolves of the First Continental Congress (October 14, 1774) and in the Declaration of the Causes and Necessity of Taking Up Arms of the Second Continental Congress (July 6, 1775).

But in 1774, no one yet had suggested a single answer to two perplexing questions: “Why were the British brethren so deaf to the Americans’ appeals to justice and consanguinity? Further, why were the expatriated colonists so long accepting of metropolitan encroachments, usurpations, and high-handedness?” Lerner’s insight into the Summary View is that Jefferson answered both questions through the listing of evolving political complaints. “It was rather a failure on both sides to fully grasp that modern Britons and modern Americans (whatever their shared biological inheritance) had become two different peoples.” Because British authorities failed to acknowledge that Americans had become a breed as well as a land apart, “they persisted in treating New Hampshire as though it is old Hampshire.” Meanwhile, the Americans had let themselves be consistently mistreated, because they also had been slow to recognize how historical circumstances had “altered the political spirit of the two peoples.”

Jefferson’s end goal with the Summary View was thus for his fellow American colonists to become one American people, “capable and worthy of shaping their own destiny,” by means of coalescing around the then-radical principles he was giving voice to in writing the Summary View. The less-than-radical propositions and conclusions expressed to King George in the Summary View (for example, a type of British Empire over which the King would preside, as a neutral umpire) are mere cover, then, for the truly revolutionary political principles Jefferson was already crystalizing for his peers in 1774, and which they would officially publish to the court of the world’s public opinion with the Declaration of Independence in 1776.

To get to July 4, 1776, required no small amount of strategic thinking, of prudent statesmanship, of expert melding together of situational awareness, rhetorical prowess, alliance-leveraging, and political maneuverings. Jefferson was acutely aware that among the American colonial politicians of his day, there was an “inequality of pace with which [they] moved” towards the end goal of political independence from Great Britain, and that therefore a great “prudence [was] required to keep front and rear together,” for them ever to hope to be successful in the undertaking. How Jefferson and the more zealous members of his set built up to the Declaration of Independence is arguably a masterclass in statecraft, with publication of Jefferson’s Summary View as their opening move: Unsolicited, Jefferson drafted and sent to Patrick Henry and Peyton Randolph a set of supposedly anonymous instructions

to be adopted by a body of Virginians meeting as a specially elected albeit irregular convention. These instructions, if adopted, would be carried by Virginia’s deputies to what we now know as the First Continental Congress and proposed to that body for adoption as “an humble and dutiful address” to King George III. At each level, then, there [were] objections to be met, opinions to be won over, and ultimately actions to be taken.

Lerner gives a short summary of what happens next: Randolph brings the draft to the attention of the members of the First Continental Congress, who, though they feel it is “too bold for the present state of things,” nevertheless print it in pamphlet form under the title “A Summary View of the Rights of British America.” The rhetoric and principles argued by a supposedly anonymous “Native, and Member of the House of Burgesses” could now reach an audience of thousands, if not millions, on both sides of the Atlantic.

The rest, we could say, is history. But it is worthwhile to note, along with Lerner, that it was a junior member of the Virginia colony’s political establishment who took it on himself to set all this in motion. Without Jefferson, and without the similarly minded, similarly spirited coterie he led, when and what type of declaration of political independence would the American colonists have produced? To return to Lincoln’s 1859 letter to Henry L. Pierce: despite Jefferson’s ultimate failure to resolve the political problem of slavery, Lincoln could still write:

All honor to Jefferson—to the man who, in the concrete pressure of a struggle for national independence by a single people, had the coolness, forecast, and capacity to introduce into a merely revolutionary document, an abstract truth, applicable to all men and all times, and so to embalm it there, that to-day, and in all coming days, it shall be a rebuke and a stumbling-block to the very harbingers of re-appearing tyranny and oppression.

The Declaration of Independence was not inevitable, nor was the successful forging of the American people as a people predetermined. It took a distinctly human element of thoughtful—crafty, yes—strategic, and even hot-blooded individuals laboring in the intellectual and political vineyards, communicating about such truths with their peers, to pull off the political revolution of July 4th. That is what American statesmen and civic leaders used to remind their large audiences of in their Independence Day speeches. The human element of principles-based democratic politics especially requires an “electric cord,” as Lincoln put it, linking succeeding generations to those foundational principles; that link can be forged only by a conscious renewal of them as a people.

It’s high time for our statesmen and civic leaders to revisit the entire text and history of the Declaration of Independence, and to offer the American public something more than pro forma fundraising PR releases on Independence Day. But it’s equally high time for all Americans to make it their Independence Day tradition to reread the Declaration, and to sit awhile with the latent considerations of politics and statecraft, diplomacy, international relations, and grand strategy—of Grand Politics, as it were (or at least, Politics with a capital “P”)—that the act as well as the words of the Declaration represent.

Distant Strains of Memory https://lawliberty.org/distant-strains-of-memory/ Fri, 16 Jun 2023 09:59:00 +0000 https://lawliberty.org/?p=46494 Memory, like tears, seems to be of many kinds, and any farewell can sound a symphony of both. Mournful, painful, melancholic, nostalgic; elegiac, dramatic, cathartic, aesthetic; anguished; sacred or poignant; or perhaps soulful, joyful, transcendent—concert program notes always employ a multitude of descriptors to let us know in which adjective the composer registered his pensiveness. […]

Memory, like tears, seems to be of many kinds, and any farewell can sound a symphony of both. Mournful, painful, melancholic, nostalgic; elegiac, dramatic, cathartic, aesthetic; anguished; sacred or poignant; or perhaps soulful, joyful, transcendent—concert program notes always employ a multitude of descriptors to let us know in which adjective the composer registered his pensiveness. (Perhaps it was with program literature writers in mind that Nietzsche sighed, “I cannot differentiate between tears and music.”) 

Over a hundred years ago, after four years of compositional silence produced by the ongoing shock of the Great War, Sir Edward Elgar emerged from the musical shadows with his Cello Concerto in E minor, Opus 85. The work was recognized then, as now, as a swan song. In the final catalog of his works, Elgar wrote “Finis. RIP” next to Opus 85. Though he would live another fifteen years after it premiered, he didn’t complete another major piece. He’d been terribly ill, and he’d watched his beloved wife Alice “become mysteriously smaller and more fragile” while he was writing it. Elgar remembered afterward, “She seemed to be fading away before one’s very eyes.” Alice died within that year. She’d thought his cello concerto “flawless.” There’s certainly something of an intimate farewell in its melody.

Elgar had also been hearing the mechanical sounds of war for years, and the human sounds of mourning for a quarter million British lads killed, and had known that the Old World and its genteel imperialism were over. Is his Cello Concerto then Elgar’s “war requiem”? Or perhaps, a proper British sendoff for the Grand Old Values? Track the rich, human timbre of the solo cello against the pared-down orchestra as it sings through the opening Adagio (Nobilmente, the score commands), and it seems the concerto is all of these things. Listen further, and you’re drawn into your own memories, and your own farewells, and on to the universal human. George Bernard Shaw avowed that what he found in the Elgar Cello Concerto was “the stigmata of what we call immortality.”

There is sorrow here. But if we were to parse sorrow just a little, we might find that there are two classes. Sorrow can be pessimistic; sorrow can also be compassionate. The more I experience the Elgar, the more I’m convinced this song is of the compassionate sort, and may, in the end, be about compassion more than sorrow. It feels everything for every living thing. It gives all away with generosity, keeping nothing for itself. In the Concerto, the accompanying woodwinds are high; the low strings are low. Throughout the four movements, square in the middle of the musical texture, is the cello. You never lose its voice. Compassion can be the loneliest for the compassionate. 

Compassion comes from the Latin marriage of com- and patior (past participle, passus), meaning to suffer together with; to have a fellow feeling; an awareness of another’s suffering coupled with the wish to relieve it. But compassio is a calque of the Greek sympatheia, and along with the fellow feeling there is a meaningful sense of affection and affinity, and even of the affinity of heavenly bodies to each other. Compassion then is a gravitational and therefore undeniable pull towards suffering through the suffering of others. “I am never merry when I hear sweet music,” Shylock’s daughter Jessica says to her lover Lorenzo. “The reason is, your spirits are attentive,” he replies. “The man that hath no music in himself, / Nor is not moved with concord of sweet sounds, / Is fit for treasons, stratagems and spoils.”

Remarkably, the performance history of the Elgar Cello Concerto draws something from the larger dynamic of compassion. Today the Elgar is a staple of the cellist’s repertoire, and beloved, with over forty recordings. But it had a rocky premiere with the London Symphony Orchestra on October 27, 1919. Conductor Albert Coates had spent so much time rehearsing the rest of the program that the orchestra had barely practiced the new concerto. It was then largely shelved. That all changed forty-six years later, when a former cellist turned conductor (who’d played in the orchestra for the Concerto’s first-ever recording, under the baton of Sir Edward himself), John Barbirolli, recorded the piece with a twenty-year-old cellist prodigy named Jacqueline du Pré playing the Davydov Stradivarius. The Concerto has been a rock star in the classical repertoire ever since.

Du Pré played the piece a little like a 1960s rock star, mercurially, with a passion that explodes from the iconic four opening chords to its final three, dancing and tussling with the orchestra, and demanding the full attention of its audience. You are in medias res before you realize that it’s the violas who’ve actually introduced the main theme and that the cantabile of the long bow strokes now marks the cello’s ownership of it. Elgar eschewed the traditional, formal orchestral introduction for his concerto, and that small note of dignified rebellion du Pré uncannily absorbed and redirected in her up bows and down bows, and in the barely controlled impatience they deliver as she lays into the strings, demanding they give up their very soul. There must have been a cloud of rosin dust by the time du Pré had torn through the famous scalic run to a top E and the scampering, scherzo-like moto perpetuo in the main body of the second movement.

The effect is mesmerizing, even without the visuals. Here’s the first Adagio movement with Barbirolli on YouTube. (And here’s the whole Concerto on Spotify.) Note the effect of the cello’s pizzicato as it brings the first movement to a close and opens up the second Lento movement, with its allegro molto phrasing. The relationship with the silence there is pivotal, as it is throughout the third Adagio movement, and du Pré masters its manipulation. NPR thought that there was “always gentleness to the pain, always an edge to the tenderness” in her rendering of the Elgar under Barbirolli.

You can see that edge in clips filmed for the 1967 documentary about du Pré by Christopher Nupen. This time her husband Daniel Barenboim conducts du Pré, and her espressivo is on display for all to see (as all should). That espressivo became even more pronounced, and perhaps excessively so, in her 1970 live recording of the Elgar, again with Barenboim, and the Philadelphia Orchestra. In her passionate slaps of the bow, and in the dynamic range and portentous swellings she achieves with the orchestra, there’s something closer to desperation in her playing. Her intensity radiates, which says something about the way a single piece of music can take on a new life before every different live audience. But she was already beginning to experience numbness in her hands and arms from the multiple sclerosis that would soon confine her to a wheelchair.

Du Pré’s uninhibited style of playing catapulted the Elgar Cello Concerto into popular consciousness in an age primed to push against any inhibitions. I’m not surprised that I came to the Concerto in high school precisely through the 1970 du Pré recording, and immediately fell in love with it. Nor am I surprised that I once ended up with a speeding ticket while listening to it. (“Ma’am, do you know how fast you were going?” “But sir, have you heard this cello?!” The officer was remarkably understanding, afterward.) But the effect and the overall tone or atmosphere du Pré achieves might be quite different from what Elgar envisioned, whistling the tune with the Malvern Hills in mind. The reason has to do partly with the changing fashion in cello playing over the past century, tied to the move from thick gut strings to metal ones.

In Elgar’s time, cellists were still using gut strings, which produce a richer, warmer sound (rather than the “brighter,” louder sound of metal or synthetic strings), and which affect tone quality and vibrato. Elgar was used to serious cellists employing portamento in their playing—a sliding technique creating phrasing, coloring, and expressiveness (think “breath glide”). It can sound awkward and schmaltzy on metal strings, and string players have deemed it outside the realm of good taste since the 1940s. Du Pré played on metal, not gut, strings.

Cellist Steven Isserlis recorded the Elgar with the LSO under Richard Hickox on gut strings, which gives you a sense of what tones Elgar might have had in mind. The cello sound is richer, darker even (though not morbid), and because of the gut, the dynamics are more contained. In the third Adagio movement, this emphasizes the middle-of-the-night pensive lyricism of the melody. He savors the cadences, and in the final diminuendo there is a reposeful bidding of adieu. There is no raging against the dying of the light for Isserlis.

Not using gut strings, but along the lines of this warmer and more restrained expression, cellist Julian Lloyd Webber produced a BRIT Award-winning recording with conductor Yehudi Menuhin and the Royal Philharmonic Orchestra in 1985. Lloyd Webber manages a subtle portamento, even on metal strings. Elgar scholar Jerrold Northrop Moore and BBC Music Magazine have described Lloyd Webber’s recording as the “finest ever version,” and you can listen to it here on Spotify, preferably on repeat. Menuhin was perhaps the last musician to have had a strong connection with Elgar himself: they were close friends, and he had performed the Elgar Violin Concerto under Elgar’s baton in 1932. Attuned to the essential gentleness of Elgar’s soul, Menuhin told Lloyd Webber of the Cello Concerto’s theme: “Play it as if it’s coming from a distance over the hills.” On his deathbed, Elgar had whispered to his friend Sir Barry Jackson about this same theme, “If ever after I’m dead you hear someone whistling this tune on the Malvern Hills, don’t be alarmed. It’s only me.”

Over the years, I’ve gravitated to the Lloyd Webber-Menuhin recording. It’s this interpretation that most consistently seems to keep the arc of Elgar’s “nobilmente” markings in mind. And it returns us to the dynamics of compassion. In the themes of memory and sorrowing, there’s resignation and not an ounce of self-pity. They’re delicately colored emotions for Elgar, and Lloyd Webber honors that. He, too, is caught by the inherent loneliness at the heart of the Concerto, and engages with it in his playing.

You can watch Boston Philharmonic conductor Benjamin Zander coach a young cellist—and a small public audience—through the Concerto’s first movement in one of his popular Saturday morning interpretation classes, exploring all these themes. Whether you’re an expert or a novice when it comes to classical music, the way Zander coaches the cellist to new understandings and expressions of notes and emotions he thought he’d mastered will bring you some appreciative insight and no small amount of joy. And not just about the art of music.

Commemorating the Concerto’s 100th anniversary in 2019, Lloyd Webber remarked: “There are no traditional ‘fireworks’ on display, no showy cadenzas; instead lies the ultimate challenge of conveying to an audience one man’s wounded interpretation of the human condition as viewed through the passage of time.” It’s a comment reminiscent of a reflection in J. B. Priestley’s 1948 play The Linden Tree, in which the Elgar Cello Concerto nearly figures as a character. In the play, the aging history professor Robert Linden hears his daughter practicing the Concerto. While he’s struggling to respond to externally applied pressure to retire, the professor reflects that in the wake of the destruction of World War I, Elgar, looking backwards at what no longer can be, “distils his tenderness and regret, drop by drop, and seals the sweet melancholy in a Concerto for cello. And he goes, too, where all the old green sunny days and the twinkling nights went—gone, gone.”

But that nostalgia is not the end of the story. Linden observes that his young daughter, “who knows and cares nothing about Bavaria in the nineties or the secure and golden Edwardian afternoon, here in Burmanley, this very afternoon—unseals for us the precious distillation, uncovers the tenderness and regret, which are ours as well as his, and our lives and Elgar’s, Burmanley today and the Malvern Hills in a lost sunlight, are all magically intertwined.” 

In the Face of Suffering https://lawliberty.org/in-the-face-of-suffering/ Mon, 28 Nov 2022 11:00:00 +0000 https://lawliberty.org/?p=39721 Still and silent, she stood the song says, sorrowing. Around her was chaos and blood; soldiers shouting orders; gawkers spitting and heckling; other death row criminals shrieking in hysterical terror. For six hours she stood, the length of time tradition has it that the man opposite her was dying of suffocation, dehydration, and multiple-organ failure […]

Still and silent, she stood, the song says, sorrowing. Around her was chaos and blood; soldiers shouting orders; gawkers spitting and heckling; other death row criminals shrieking in hysterical terror. For six hours she stood, the length of time that, tradition has it, the man opposite her took to die of suffocation, dehydration, and multiple-organ failure due to crucifixion. Internecine political and politico-religious dynamics, along with an arguably deliberate failure of the justice system, had led her on this path to watch the dying of her son. And yet her stillness was not paralysis in the face of horrific agony. Her stillness was the determination of a courageous soul, one willing to endure the sight of her child’s suffering in order for him to do the thing he needed to do. Her silence was not a dismissal, but the acknowledgment of that suffering’s unique magnitude, and the victim’s ownership of it separate from her own tragic experience. Her suffering was not the marquee point. His was.

For six hours Mary watched her son Jesus progress through death, and for two millennia now, billions of spiritually and culturally inclined individuals have occasionally paused to watch her watching her dying son. There’s curiosity and sympathy in that pause, and not a little perplexity. In front of the spectacle of another’s suffering, does one stay or does one go? How to differentiate between voyeurism and exploitation on the one hand, and respect and genuine compassion on the other? Above all, outside of knowing what to do with the fact of human suffering, what is it exactly that we are supposed to do with the sight of human suffering, whether the suffering of individuals or communities, or nations—not least, of one’s own mother?

Quis est homo qui non fleret? Who is the human who would not weep, at the sight of such great suffering? It’s the reverberant question posed by the Stabat Mater, a medieval poem and hymn that’s still perhaps the most famous account of the watching of suffering. In answer to it is an initial modern confusion: Today, we do a lot of affinity weeping. Living as so many do on our social media platforms, we’ve seemingly developed a discomfiting capacity to want to claim the dividends of a tragedy not our own, as our own. We stake our claim to public attention on the grounds of an often-tenuous proximity to a tragic event—to exhibit our participation in the tragedy-pie. (Witness, in this recent pandemic, those individuals who took to Twitter and Instagram pre-vaccine to bemoan the fact that “not even Covid wanted them”; unaffected celebrities admonishing “we are all in this together” to the suddenly unemployed hoi polloi; also, the sharp rise in appetite for “disaster porn.”) 

Tears today come publicly and cheaply. Yet not to be affected by a proximate suffering, not to respond to it somehow, seems hardly less reprehensible. And so again the question: What are we to do with the sight of suffering?

Politics gives no straightforward answers here. Things get complicated fast when nations feel emotionally moved to intervene out of a sense of righteousness or compassion. This is why, in his Pacificus essays, Hamilton cautions that the rule of morality is not the same between governments or nations as between individuals (which is not to say that political life is less moral than private life). The scale alone differentiates it: “Existing Millions and for the most part future generations are concerned in the present measures of a government: While the consequences of the private actions of an individual, for the most part, terminate with himself or are circumscribed within a narrow compass.” Even when motivated by empathy, prudence is the cardinal virtue of political action. It’s a functional damper, when well employed, on a reactive, emotional fix-it-ness.

Religion seems more straightforward: A wide variety of creedal faiths have an array of answers to human suffering, in the form of specific traditions and social behaviors, pat catechetical definitions, pastoral guidance, or esoteric treatises. Often there is a theme of acceptance and patience. But even here there is a real struggle to make sense of the bare fact of suffering. What does human suffering imply about the reality of a Creative Divinity, benevolent or otherwise? What might that mean for human beings?

More capacious than either politics or religion is art. For most human beings, art seems to bridge these two realms and the spirit and heart within each individual. Out of everything else, art seems uniquely to allow the suffering just to be: To be seen and to be experienced; to be sat with, but also moved through, in a manner that can be both respectful and compassionate. Music above all seems to have this capability—which perhaps explains why, in the instance of Jacopone da Todi’s enduringly famous twenty-verse poem Stabat Mater, over 600 secular composers from Europe to Africa to Asia have continued to find a rich vein of source material to mine. 

Music creates a necessary bridge between the closely-held personal experience of suffering, the encountering of the sight of another’s suffering, and the communal appreciation of its occurrence. Music, in its performance, is necessarily external and public, but it still expresses profoundly personal emotions. 

The Stabat Mater, in its poetic origins, is a prayer, thought to originate from Franciscan sources in the 13th century in parallel with the tradition of St. Francis of Assisi’s mysticism, whose central feature was an immersion of the individual in the especially physical sufferings of the Christ. (Its authorial origins are contested, as later research has uncovered what appeared to be a first sighting of the hymn in a Dominican convent in Bologna, Italy). The poem is voiced from the reader’s point of view, fixed initially on the figure of the standing and still mother. Acknowledging her suffering, the observer-reader spontaneously asks:

Can the human heart refrain
From partaking in her pain,
In that Mother’s pain untold?

The reply is a simple negative. The heart cannot refrain. There must be a response. What follows is a request to that suffering mother, asking permission to participate in her and her son’s suffering in order to experience, under her protective guidance, a redemptive cleansing and ultimately, a glorious salvation. It’s a chance at a larger-than-life catharsis. Even the poem-prayer bridges the personal and the communal in uniting each reciter to (in Christian parlance) the Church Militant, the Church Suffering, and the Church Triumphant—to humanity both past and future. But here this activity is mainly observational rather than truly interactive. We never once hear a verbal response, nor, in da Todi’s version, are we ever granted access to the inmost thoughts of Mary’s heart.

“Thy own soul a sword shall pierce, that, out of many hearts, thoughts may be revealed.” To Mary is the actual pain; to us, the subsequent contemplations. The initial phenomenon of the Stabat Mater is Simeon’s prophecy to Mary in Luke’s Gospel, foretelling her witness of her son’s brutal death and her central role in humanity’s reckoning thereafter. The dramatic pronouncement is already communal, even theatrical. Indeed, there’s a nod to the central fact of the “spectacle” of the Stabat Mater in the very thematic origins of the poem, in their relation to the dramatic conventions of the Middle Ages. Very popular in medieval Germany and Italy was a type of theater devoted to the performance of passion plays. Marienklagen, or Mary laments, were one group of dramas that emphasized the unique role of Christ’s mother in the Passion (especially as depicted in the Gospel according to St. John) but also across the New and Old Testaments. Such plays were usually performed during the Lenten penitential season, in sync with the liturgical calendar of Christ’s birth, passion, death, and resurrection. But these were secular events—meant for a large and popular audience. However edifying in authorial intent and message, the plays were undeniably also entertainment.

Perhaps it’s unsurprising that this element of artistic entertainment is only ever magnified. As Western music developed away from sacred-only formulations such as Gregorian chant, and toward polyphony and then complex choral and orchestral pieces for secular consumption, the prayerful words of the Stabat Mater increasingly seem to have paled in importance compared to the melodies, harmonies—and eventually, dramatic atmosphere—in which composers have suspended them, enthralled by the enduring drama of the tragic spectacle. Surveying eight centuries of musical formulations of the Stabat Mater, we see that the contemplative ethos of the supernatural drama comes to the fore in Josquin Desprez (1480), Alonzo de Alba (1510), and Palestrina’s (1590) Stabats. Later versions show considerably more creative adaptation, from the “Sturm und Drang” florid, yet classical stylings of Franz Joseph Haydn’s (1767) oratorio version to the rambunctious liturgical opera of Gioachino Rossini (1842), or the Il Trovatorian tragic, dramatic stylings of Verdi (1898). Modern composers, too, have been fascinated by the Stabat, as we see in the grief-filled sacred and profane clashes of Francis Poulenc (1950), or Nicola Piovani’s cinematic recitative subsumed within a framework of class and race (1998). The multi-lingual, bluesy lullaby of Cameroonian Francis Bebey (1990) offers another interpretation, and we even see the Stabat as an expression of the “planetary restlessness” of television host and singer-songwriter Franco Simone in his symphonic rock opera cum music video production (2015).

The differences are obvious, and sometimes cringeworthy. While some today do seem to equate the cause of environmental justice to the supernatural drama of human salvation, the intellectual horizon of Canadian activist Bruce Cockburn’s 2017 Stab at Matter parody (wordplay? interpretation?) of the Stabat Mater is incredibly diminished from that of, say, Giovanni Battista Pergolesi’s much-beloved version (1736), famously composed on that composer’s literal deathbed as he succumbed to tuberculosis, still in the prime of life. (Even the hardly devout Jean-Jacques Rousseau reportedly thought Pergolesi’s opening movement was “the most perfect and touching duet to come from the pen of any composer.”) Religious believers may not appreciate the transition from the tender, yet human modesty of Antonio Vivaldi’s 1712 Stabat to Stefaan Vanheertum’s nearly opposite reduction of the same theme to “feelings like disbelief, pain, anger, despair” at “young refugees drowned during the crossing of the Mediterranean.”

Even so, across the hundreds of known musical versions of the Stabat Mater, the most remarkable thing is, in fact, the centrality of the original prayer that formed, seemingly spontaneously, before the spectacle of Mary’s still and silent figure. That prayer has taken many forms over the centuries—now veering more toward the mystical and contemplative, now more toward the emotional and supplicative; now more communal, and now more individual. Prayer has always been of many kinds. But it has always been a lifting up of the heart. 

Eia Mater, fons amoris
Me sentire vim doloris
Fac ut tecum lugeam.

Oh Mother, fount of love,
Make me feel the strength of sorrow,
That I with you may grieve.

Original translation

Mary’s standing at Calvary and afterward was of course the longest such moment. Her sorrow has a strength; it is a strength; she is strong with it. She stands. In musical terms, it is Antonín Dvořák’s Stabat Mater (1877) that commands the longest performance time, at 90 minutes, and that requires the greatest number of instruments, musicians, and singers. (At its British premiere at the Royal Albert Hall, under Dvořák’s own baton, there were over 600 singers alone.) It’s the piece that so profoundly moved British audiences that it paved the Czech composer’s eventual way to America. Dvořák’s Stabat, composed in the midst of the unimaginable grief of the successive loss of his three little (and at the time, his only) children, binds him to America, which is a poetic and poignant fact for me as an American lover of music. What are those words we’ve inscribed around the base of the Statue of Liberty in New York harbor? They, and America, are meant also to be a solace.

In grieving notes, Dvořák offers solace. With seriousness and without affectation, Dvořák offers up ten movements that perfectly arch through a succession of emotions to resolution, with the initial lacrimosa, the tears, repeated in the concluding paradisi, the glory of paradise granted to a soul. The orchestra’s introductory undulations are Mary’s falling tears, and we know we are the ones lamenting this sight, and that the lamentation will not be some uncontrollable flailing, but a meditation on the multiple aspects of her grief, and the desire to share her suffering and wrest some solace for her. 

We do not only lament here, nor do we have to lament alone. Dvořák’s private suffering from the death of his children is woven through his song within Mary’s individual, yet public suffering at the death of her son. Through the musical devices of singers, chorus, and symphonic orchestra, showcasing da Todi’s centuries-old poem, Dvořák gently yet emphatically makes the case that grieving with others reinforces the human community, reducing some of the bitterness of pain. By placing his final “amen” in the D-major key—the key Bach and Beethoven used to depict heaven—Dvořák at least alludes to the possibility of a type of (infinitely more fragile, to be sure) heaven or blessedness that societies bound by space, time, and mortality can aim to achieve even in the here and now. Aristotle called it “concord,” and predicated it on individual human beings communing with each other in a society small enough for individuals to be seen. The pre-Incarnation Ancients may not have been sanguine about the role that a grieving together of society can play toward achieving such concord, but that belief and hope directly animates the entire history, artistic and otherwise, of the Stabat Mater.

Dvořák, as so many of his fellow composers do, uses his musical art to help the community of his musicians and his listeners achieve a parallel ending. Having seen and participated in the act of the spectacle of sorrowing, the artist places us, through his Stabat Mater, in a living portrait of grace and concord, enabling us to see and participate in this meditation. By allowing us the space to see, sit with, and move through a lamentable suffering, he enables us to reconcile ourselves to its presence—perhaps even, to stand courageously in the sight of it. That resolution can extend outside of the performance hall and even to the levels of whole societies and nations. I’d like to think that perhaps one of the reasons that Dvořák’s Stabat has had such a powerful impact in America was that it was introduced when the United States was still reeling from the horrific violence and death of the Civil War. Compared to its European cousins, the young nation had little experience with this—and even today, it has been blessed with relatively little experience of this kind of horror. If his Stabat was a solace then, it still can be so for us today, as we struggle mightily in our culture and politics with the question of whether, and to what extent, the United States still might be a solace to suffering masses of humanity currently repressed by cruel governments.

Still and silent, she stood, the song says, sorrowing. To Mary is the actual suffering; to us, the subsequent contemplations. The seeing of suffering in fact turns out to be something crucial to us in our humanness. But we first have to pause, to see the suffering. Finally, what the music, the art, and the spectacle of that suffering mother gift us is not merely the human importance of pausing at the sight of suffering. They gift us also this insight: that it is no less important to allow our own suffering to be seen, to soften some of its bitterness. Then, we too can stand, despite our sorrowing.

The Idea of the American Veteran https://lawliberty.org/the-idea-of-the-american-veteran/ Fri, 11 Nov 2022 11:01:00 +0000 https://lawliberty.org/?p=39591 French historian Antoine Prost has observed in his In The Wake of War that when there is “one word [that] has no equivalent in another language, it generally suggests that we confront one of the particularities of a given society.” Such a word, he claims, is the English-American word “veteran.” To be a former soldier […]

French historian Antoine Prost has observed in his In the Wake of War that when there is “one word [that] has no equivalent in another language, it generally suggests that we confront one of the particularities of a given society.” Such a word, he claims, is the English-American word “veteran.”

To be a former soldier is to have belonged to one of the most ancient of all professions of the human race—to be a defender of people and places and also a wager of war. To be a veteran, however, is to participate in a distinctly modern concept, one that has its theoretical roots in Enlightenment ideas and the birth of the nation-state, but with its practical articulation in the founding and later development of the United States of America. Even France, the nation that in its birth-throes arguably originated conscripted mass mobilization with the levée en masse, and thus built its new national identity with hundreds of thousands of former soldiers, does not understand itself to have “veterans.” Writing well after two other massive conscriptions of French society undertaken in the fighting of two world wars, Prost remarked in 1992 that, linguistically speaking, the French still only had “anciens combattants.” That linguistic difference, he argued, had to be taken seriously as indicative of an entirely separate notion that had arisen in America and been dubbed “the veteran.”

A Political Force

The concept of the veteran as we’ve come to experience it today appears to be a thoroughly American experiment, but one that has, remarkably, gone largely if not entirely unnoticed. This is despite America having participated in numerous wars, despite the generational reverence still felt decades later for the “Greatest Generation,” and despite what Admiral Mike Mullen, in the midst of the Iraq and Afghanistan wars, once termed “a sea of good will” among the American public toward post-9/11 veterans.

We ought not to be so oblivious to this history, and to its richness in showcasing the centrality of military veterans to the development of the American nation, even to political and constitutional ideas.

The veteran is, first and foremost, an experiment in civil-military relations and egalitarian democratic society. But veterans—and the questions that arise both from reincorporating ex-soldiers into civil society, and from wrestling with who cares (and to what extent) for their wounds and needs—have without doubt influenced and shaped American government, along with its public and private institutions, society, and culture. For one, the government lobbyist, today so central—and so reviled—a figure in the American legislative system, was invented, perfected, and perpetuated by military veterans.

The no-longer-extant Grand Army of the Republic (the GAR), formed out of the ashes of the Civil War, was the first national veterans’ organization. It made an indelible mark on American political life through its veteran pension advocacy, not to mention its instrumental role in electing five Civil War veterans and GAR members to the presidency—Ulysses Grant, Rutherford B. Hayes, James A. Garfield, Benjamin Harrison, and William McKinley, in addition to Civil War veteran (but non-GAR member) Chester Arthur. In the definitive work on the subject, Veterans in Politics: The Story of the GAR, Mary Dearing reveals the complex and compelling story. Companionship, solidarity, and charity were certainly GAR ends, as the organization would claim in recounting its origin story, but so was politics. The struggle among Radicals, conservative Republicans, and Democrats over the Reconstruction issue had formed the background for the founding of the Grand Army, while the ambitions of several Illinois politicians (General John A. Logan and Governor Richard Oglesby in particular) ushered it into existence. By keeping in view a very tangible legislative purpose—cash benefits for veterans—over several decades, the GAR maintained its considerable political presence until President Benjamin Harrison signed a generous 1890 pension law.

That pension law has expanded after every war since, and now includes veterans who have never even experienced war. But it also became the inspiration and blueprint for more far-reaching programs, including Social Security under President Franklin D. Roosevelt and Medicare under President Lyndon B. Johnson. Editors at the New Republic grasped early the opportunities inherent in veterans’ welfare, and they urged liberals to embrace it for ideological as well as practical political reasons. As they argued in “The Progressive and the Veteran”:

Progressives may ignore…the question of whether men who have served their country in uniform are entitled to special economic consideration in the name of patriotism. They cannot afford to ignore the fact that the fate of a generation is at stake and that the setting-up of a wide and socially constructive system of benefits is of the deepest significance to the future of the democratic philosophy.

Today, it hardly needs pointing out, the federal government is assumed to hold responsibility for the social and economic security of all citizens. This is why the fights over veteran-related legislation, particularly in regard to the Department of Veterans Affairs, in fact often have very little to do with actual considerations about the positive welfare of the nation’s veterans: They are inevitably now proxy fights about the role of government in providing for the needs of its citizens at the individual level.

The story of the American veteran, it turns out, is a whole-of-America story, even while it might be hidden.

From Citizen-Soldiers to Soldier-Citizens

As then-Commander in Chief of the Continental Army George Washington knew, it is not so difficult to turn citizens into soldiers. Turning those soldiers back into citizens is the infinitely more difficult task. As Reed Robert Bonadonna puts it, soldiers “walk the weird wall at the edge of civilization.” Soldiers are bred out of and for violence, but in order to have and ensure peace. When soldiers are trained and deployed on the battlefield to close with and destroy an enemy, they are the physical executors of government power. They are uniquely creatures of politics: The state calls their identity as soldiers into being and then dismisses them, framing that identity with so many legislative words and regulations. Their official activity is the rawest of all political activities, if we embrace Clausewitz’s dictum about war being “the continuation of politics through other means.” Soldiers are thus bred to a sense of official, if not great, purpose. What happens when such purpose disappears from their day-to-day lives? Can former soldiers ever truly be civilians again? And content to be so?

For democracies, this challenge is even more compounded: If military service is not simply the ultimate expression of civic virtue but is also the highest duty of citizenship, are veterans in fact superior citizens? What are they rightly owed by their country, and what can they rightly claim from their fellow citizens? How considerations of freedom and equality factor into this equation is not easily answered, especially within the context of limited government. (I have written previously for Law & Liberty on these questions.)

Washington and his generation appear to have understood that to answer these questions, it would always be insufficient to contextualize former soldiers within the framework of their past, and of past wars. Like the Ancients, Washington understood that making soldiers involves the cultivation of an Achillean thumos, or spiritedness, which enables them to do what they need to do in the face of death. Anticipating 20th-century sociologist Willard Waller, Washington wrestled with the puzzle of how to understand thumos and how to deploy it toward civic ends after and outside a time of war. Homer, Plato, Washington, and Waller each understood spiritedness to be a neutral force, but one that is always on the lookout, as it were, for a cause to serve.

And thus we find George Washington presciently advising his veterans, in his 1783 Farewell Orders to the Armies of the United States, of the tense psychological dynamics they will face once separated from military service. He urges them to view their service as one rung of experience on the ladder of their personal identity, and so to direct their energies into industrial, commercial, and agrarian pursuits, so as to “prove themselves not less virtuous and useful as Citizens, than they [were] persevering and victorious as soldiers.”

Reflecting the insight that thumos is a force deployable for positive and negative ends, the first decades of the United States show how many veterans did indeed build up America through western land settlement, agrarian cultivation, entrepreneurship, and continued public service. The Virginia Military District after the American Revolution, so designated in order to exchange land for payment to Virginia’s Revolutionary War veterans, is one concrete example of this—populated with veterans, this territory eventually became part of the state of Ohio. Those decades also show how veteran thumos could be destructive of civil society—witness, for instance, Shays’ Rebellion. As Dixon Wecter has noted in When Johnny Comes Marching Home, a combination of soldierly impatience, civil fickleness, and murky economic problems complicated the Continentals’ return to civilian life, in a pattern repeated after every major armed conflict involving American forces since.

After Shays’ Rebellion, Henry Knox wrote Washington that the insurrection had failed chiefly because officers of the late army had joined to quell it on the strength of their Society of the Cincinnati ties. Named in honor of the Roman Cincinnatus, the military society was meant to bridge the space between military camp and veteran life for the officer corps, who pledged to follow their namesake’s example “by returning to their citizenship.” A hereditary society, the Cincinnati had as its ostensible purpose the perpetuation of friendships made during the war. With George Washington among its first presiding officers, the Cincinnati was an immediate success, though soon controversial, due to its suspicious marks of aristocracy and the unpopularity of the officers’ bonus in the years following the war. But in quelling Shays’ Rebellion, the (former) officers’ actions vindicated Washington’s belief in the feasibility of citizen-soldiers turned citizen-veterans. And in fighting and winning a second war against the British soon afterward, Americans, it seems, accepted this also. Their soldiers were indeed their fellow citizens—people whom they knew. The veterans in their midst were their family members, their tradesmen, their townspeople, and their farmers. They were not the social and moral dregs of society, nor suspicious actors of the state, but a true cross-section of American democratic society.

“The Faithful Image of the Nation”

In his essay “Similes,” Seth Benardete contends that it is war, not peace, that needs similes for us to understand it, because “peace is what everyone knows” while “war cannot explain itself.” Similarly, he wonders whether the heroes (who fought and fell before the walls of Troy in Homer’s Iliad) have “counterparts” that can be found in our world; or, he asks, “must peace be distorted to fit them?”

Very few Americans today would think of the Constitutional Convention and the eventual ratification of the US Constitution as a hashing out of Benardete’s (or Homer’s) question—which is in essence a question about the social and political roles or place of the military veteran. And yet the Constitution is very careful to provide no pathway for military service, or martial excellence, to become a right to political power via political office. The president is Commander in Chief of the Armed Forces by virtue of having been elected to the executive office, for instance, and not because of any military affiliation; no elected office in America requires prior—or any—military experience. Not even at the unexpected death of the president do any military generals step in. Peace, the American Founding generation seems to be saying, need not, indeed must not, be distorted to fit its military guardians, whether that guardianship is present or past.

To return to General George Washington’s thinking in particular, the important thing in this regard seems to be a focus on the democratic souls and character of those who must fight for the democratic nation. As demonstrated in his Farewell Orders, the one-time commander of the Continental Army felt intuitively that veterans needed to maintain a sense of self after military service, and that ex-soldiers’ veteran status ought therefore to be only one (temporary) part of their American identity. What came before military service in terms of the citizen’s identity was to prevail: Soldiers cannot simply remain ex-soldiers once their period of service is fulfilled.

This was a crucial plank of Washington’s argument that the new nation could have a professional army without endangering the liberties of citizens. Alexis de Tocqueville gave the more explicit explanation several decades later, when he showed why the American soldier displays “a faithful image of the nation.” Most democratic citizens would be naturally habituated to reserve their passions and ambitions for civilian life rather than for martial grandeur, he wrote, because they think of military service as at most a passing obligation, not an identity. “They bow to their military duties, but their souls remain attached to the interests and desires they were filled with in civil life.”

In America, neither peace nor civil society need be distorted to fit the veteran. It is the free and equal citizen who prevails always in importance—but neither does that free and equal citizen fear to be spirited, full of thumos, in the defense of his rights. Perhaps it is this delicate marriage of spiritedness and restraint that makes the concept of the veteran such an American experiment.

The Lost Veterans of Donbas https://lawliberty.org/the-lost-veterans-of-donbas/ Mon, 21 Feb 2022 11:00:00 +0000 https://lawliberty.org/?p=31752 Mykola Mykytenko, being of sound mind and body and a family man, nevertheless set himself on fire on Kyiv’s Maidan Square one day this past October. Mykytenko is just one out of the nearly 400,000 surviving veterans of the eight-year Ukrainian-Russian conflict, but his protest was hardly his alone. The “Donbas veterans” share intense feelings […]

Mykola Mykytenko, being of sound mind and body and a family man, nevertheless set himself on fire on Kyiv’s Maidan Square one day this past October.

Mykytenko is just one of the nearly 400,000 surviving veterans of the eight-year Ukrainian-Russian conflict, but his protest was hardly his alone. The “Donbas veterans” share intense feelings about defending Ukraine against Russia, none more so than the former volunteer citizen-soldiers among them who make up half their number. They have been vociferous in denouncing any peace overtures by Ukrainian authorities that bear the slightest perfume of capitulation toward Russia. To no small degree, this is because once having left the physical frontlines, the Donbas veterans have found themselves still having to fight at home against half-Soviet ways, whether in the Verkhovna Rada, the government bureaucracy, or in society at large, for even just official recognition of their veteran status.

These former volunteer citizen-soldiers do not look at all like modern American citizen-soldiers. America’s volunteer soldiers join a well-oiled machine, with clear and established entrance and exit processes documented by a true mound of paperwork. That paper trail confirms when the citizen becomes a soldier, when a former soldier becomes a veteran, and what honors, benefits, and privileges he or she is due. When the Donbas soldiers volunteered, there wasn’t much of anything to the Ukrainian military. In the wake of the anti-Putin Maidan Nezalezhnosti (Independence Square) uprising, when the “little green men” appeared in Eastern Ukraine and Russia illegally annexed Crimea, the Ukrainian military essentially evaporated—over 70 percent of the officer corps alone reportedly defected to Russia, and the mostly Soviet-era equipment was found to be inoperable. With no functional institution in place, it was everyday citizens who mobilized: The majority of the current Donbas veterans quite literally dropped the pens, papers, and tools of their professional trades to rush to the front, despite having zero military experience or even equipment.

Churches, civil society groups, and Ukrainian oligarchs funded and sponsored a variety of hastily constructed battalions, and in response, the Ukrainian parliament authorized the formation and deployment of volunteer armed groups—militias—to defend the nation (eventually, it passed a conscription measure as well). By October 2014, more than forty-four territorial defense battalions, thirty-two special police battalions, three volunteer national guard battalions, and at least three pro-Ukrainian unregulated battalions that answered officially to no one had been stood up. It wasn’t until 2019 that all of the volunteer battalions were incorporated into conventional military and police structures.

Militia warfare, even when legally authorized, “invites messy margins,” and not just in terms of the (lack of) paperwork. Some of the volunteers were long-time political activists who had participated in the Maidan Square uprising; some had always had unsavory political ideologies or affiliations tending toward the radical. Some were private security attached to particular Ukrainian oligarchs. But the majority were just patriotic citizens stirred by the civic volunteerism movement that has been central to Ukraine’s post-independence national culture. When all of these volunteer soldiers returned from the frontline, they did so alongside the conventional soldiers and members of the newly reestablished National Guard of Ukraine, only to find themselves at the mercy of a fragmented, corrupt, and mostly nonexistent Soviet-era veterans system, and in competition with each other for the government’s attention.

Ukraine has begun working to build a veterans system more in line with its current rule of law and democratic aspirations, but similar difficulties plague this process as well. The political tensions that surrounded the volunteer units have affected their post-deployment status: Only in 2019 did Zelenskyy sign a law granting the volunteer soldiers combatant status, but there continue to be widespread instances of local officials denying Donbas veterans any benefits, whether because the official is sympathetic to Russia, has been influenced negatively by Russian disinformation coloring Donbas veterans as right-wing fascists, or is simply corrupt.

Donbas veterans “keep awake all: the Church, government, and politicians,” Major Archbishop Sviatoslav Shevchuk recently remarked; veterans are the “yeast of the new Ukraine.” As they’ve struggled with reintegrating into civilian society with little support from official sources, these veterans have revealed how they are at the beating heart of Ukraine’s struggles toward a functional democracy.

And it’s hardly their fellow Ukrainians alone who’ve taken note of the potent potential of veterans’ civic strength. According to reports from the Atlantic Council, the Global Public Policy Institute, and even the World Bank, Russia has made it its business to delegitimize Donbas veterans through a sustained pincer disinformation campaign. Utilizing a broad array of negative narratives, this campaign seeks to demonize Donbas volunteers and the Ukrainian military as rapists, Western puppets, extremists, and corrupt aggressors wishing to inflict punishment on innocent civilians. This both targets veterans themselves in order to heighten their social isolation, and also undermines veterans’ image in the eyes of Ukrainians, so that Ukrainians will distrust and shun them—and through them, civic volunteerism.

In today’s America, with our reported high esteem for the US Armed Forces, we don’t really think about our enemies propagating negative stories in disguise about veterans across our media ecosystem as a deliberate tactic to undermine our civil society or national security—or that we would fall for them. But our enemies certainly know that for us to have military recruits in the first place, those young souls have to be willing to embrace the full spectrum of the soldier-veteran image popularized by civil society. We are a large country, however, and our diversity layered on top of established democratic institutions and processes inures us to the potential threat. Even in a protracted twenty-year war with increasingly ambiguous goals, thousands of Americans still voluntarily served.

Ukraine has none of this luxury—it never had the wealth, the institutional stability, or the state capacity to build up its democratic infrastructure before the country’s existence was assaulted by Russia in 2014. After the Soviet Union’s collapse, Ukraine largely struggled on its own, without the levels of Western support that Eastern Europe enjoyed to help create and sustain a democratic infrastructure. Thus, even while it has had to fight against a powerful foreign aggressor, it has been trying to establish democratic institutions and processes that challenge at every level the Soviet-style habits instilled in society for decades. As any post-college graduate new on the job knows, no matter how brilliant one’s ideas or kinetic energy, to overcome the unconscious habits of a workplace culture is a gargantuan task, and unwinnable with only one day’s battle.

Russia gleefully exploits the messy truths that this baseline situation has necessitated. And this is why it targets the Donbas veterans in particular—Ukraine’s mix of militia and conventional warfare has been messy, and has involved its share of unsavory characters.

All of this is true: From the beginning, soldiers have brought diverse motivations to war. And yet this, too, is true: The citizens of Ukraine stepped up to defend their own. They fought in combat whether or not they were officially and legally allowed “on paper” to do so (as women, for years, were not) and whether or not they were formally affiliated with the military. Since December 2020, far-flung fellow Ukrainians have remembered this civic sacrifice, and the Donbas volunteer combatant veterans are celebrated as national heroes, as the comments from Archbishop Sviatoslav Shevchuk illustrate and as the new soldier-volunteers stepping up since January show. They are also celebrated as a source of inspiration for today’s and future fights—and this is why Russia cannot let them go. Civil society has ever been the autocrat’s foe.

A cohort of 400,000 veterans trained on the job in combat and animated by volunteerism is a force large enough to mount a rearguard action against Russia, should Russia attempt again to invade Ukraine. And indeed, in January Ukraine’s new “On the Fundamentals of National Resistance” law took effect, codifying the roles and responsibilities of ministries in harnessing citizen resistance potential through territorial defense units, volunteer battalions, and other citizen-centric tasks. Thus fracturing that movement before it can even form is a logical necessity for Russia—not only would delegitimization weaken any such resistance, it could chill any society-wide imitation of those original defenders. And thus Russia keeps the Donbas veterans squarely in its disinformation sights. And, however sobering the personal costs this has entailed, the Donbas veterans keep Russia just as squarely in theirs.

Veterans’ stories too often are treated as merely human-interest stories—tales to invigorate or enervate the heart; to lighten the pocketbook; to castigate government. But Mykola Mykytenko’s death is not just some tragic veteran story. For Tocqueville in the 19th century, the story about veterans was a story about the nation. Veterans are a mirror for its issues—especially in democratic-leaning nations, soldiers, and so veterans, are “a faithful image of the nation,” Tocqueville wrote. They do not come from one class, segment, political party, or religious persuasion; they often reflect the nuances of society, being expressions of its various parts. Some of them are sinners, others saints, and others just average human beings who love their country. The Donbas veterans of today tell this type of Tocquevillean tale. They reveal a nation strongly in love with a non-Russian Ukrainian “Motherland”; a people frustrated by political corruption; but a nation still believing in civic volunteerism, civil society, and the possibility of a free and functional democracy.

Hearts, Purpled https://lawliberty.org/hearts-purpled/ Tue, 05 Jan 2021 11:00:53 +0000

General Douglas MacArthur was adamant: He wanted to revive George Washington’s purple, heart-shaped Badge of Military Merit to honor the original Commander in Chief on his bicentenary. Above all, MacArthur wanted the medal “to animate and inspire the living.” What he didn’t want was to make it “a symbol of death, with its corollary depressive influences.” Army heraldic specialist Elizabeth Will took note. When she designed the new Purple Heart Medal, Will included Washington’s profile and family coat of arms but left off the Washington motto: Exitus Acta Probat. The medal itself was to be its proof—The Outcome is the Test of the Act.

MacArthur and the War Department wanted an official way, after World War I, to celebrate “persons who…perform[ed] any singularly meritorious act of extraordinary fidelity or essential service” to the nation. That echoed Washington’s intent, as he’d realized the importance to the fledgling democratic experiment of an official military commendation that had nothing to do with rank and everything to do with individual merit expressed in service for the common good. For Washington, blood shed was not a necessary part of the merit equation. For MacArthur, blood also ranked low, as a parenthetical, in his General Orders No. 3. Only under certain detailed circumstances would “a wound… be construed as resulting from a singularly meritorious act of essential service.”

And yet not a hundred years removed from MacArthur’s revival of the Purple Heart, thanks to a swirl of changing social mores, advanced medical science, partisan activists, and congressional legislation, what the American public most identifies with the Purple Heart Medal is wounds or death suffered during military service. Beyond any valor represented by the medal, what most seems to have caught the public imagination over the last few decades is the wounds potentially suffered by combat veterans. And because significant portions of the public don’t realize that combat veterans represent a small minority of veterans—that of the roughly twenty to forty percent of soldiers who are ever deployed, only around ten percent see combat, which works out to just a few percent of all who serve—Americans today seem reflexively to believe that veterans are psychologically and thus medically damaged by their military service, even while putting them on a pedestal for it.

However unintentionally, in emphasizing the medical image of the veteran over time, American society seems to have completely politicized that image.

The Veterans and Politics Problem

Historians and politicos of yore have called it the “suffering soldier” phenomenon, or sometimes, “waving the bloody shirt.” (Updated to what we’re currently more familiar with, we might call it “invoking the retired officers’ endorsements.”) But we might as well call it the veterans and politics problem. This problem isn’t the civil-military one that characterized the 2020 election cycle and that’s occasionally popped up previously. The veterans problem is that from the beginning of our polity, American society’s view of soldiers has been shaped by politics, and the politics surrounding soldiers and veterans has been shaped primarily by emotion, with three emotions predominating: charity or philanthropy, fear, and honor. And fueling it all is a complex mixture of gratitude and a sense of justice.

Before America was even a nation, its colonies reserved public funds for combat veterans, as Plymouth Colony did in 1636 when it provided money to those disabled in its defense. Originally, veterans’ benefits were created on an ad hoc basis, and veterans received quite different benefits depending upon the economic and political climate of the time. The crippling cost of Civil War veterans’ benefits and the perpetually scandal-plagued administration of them cooled the public’s ardor in the lead-up to World War I. It colored the creation in 1917 of a new veterans benefits and disability compensation system, most notably in Congress’ choice to describe benefits as “compensation.” “Pension” implied generosity, an emotion-fuelled gesture not subject to any limiting principle. “Compensation,” on the other hand, implied payment for a loss, which the drafters believed could be measured and controlled, and thus (they thought) cost overruns prevented.

The 1917 veterans benefits system is the system that the Department of Veterans Affairs uses today, groaning under the weight of an enormously expanded set of veterans benefits haphazardly added on over a century’s worth of wars, all reflecting changing (and often conflicting) views of individual rights, government benefits, economics, and military service. Richard Levy has written a helpful explainer of the contemporary conflagration resulting from the dynamics at work in these benefits—something he calls the “uneasy mixture of two basic models of government benefits,” the charitable and the social insurance models.

In the charity model, “whatever moral obligation the nation may owe its veterans, the fulfillment of that responsibility is, from a legal perspective, a voluntary undertaking.” The charity model prevailed for a significant portion of American history, including during the time the veterans benefit system emerged. The creation of Social Security, Welfare, and Medicare decades later signaled a different understanding of government benefits, what we commonly call entitlements, which Levy calls the “social insurance model.” Levy writes that in this latter model, “benefits are a form of social contract through which the government uses its taxing and spending powers to spread the costs of old age, disability, unemployment, and poverty.”

In the expansion of modern veterans benefits to now include housing insurance and fertility treatments, we can see the social insurance model in play alongside the charity model. The dual motivations behind those two models, gratitude and the just repayment of a debt, are not difficult to discern. But the range of benefits the contemporary veteran can qualify for is so expansive that the veteran’s relationship with the VA may be the most important relationship in her post-service life. The VA can define who she is, as a veteran, in her own mind—disabled, because she receives a check for such, or not. And in its capacity as the second largest federal agency and the most visible public expression of the nation’s gratitude towards its veterans, the VA certainly shapes the American public’s expectations and understanding of who the veteran is.

Society’s medicalized perception of the veteran is further reinforced, as James D. Ridgeway has noted, by veterans service organizations frequently lobbying for all benefits as compensation owed the veteran as a matter of right. But in fact, the “wounded warrior” is the centerpiece of veterans’ legislation in the 21st century not only because medical care is its historical root, but also because stakeholders and legislators have learned that highlighting the “brokenness” of veterans is the most effective mechanism to move legislation through Congress.

In 1980, Harris and Associates explicitly recommended this tactic to legislators, even while noting its risky downsides for the public image of actual veterans. That Congress liked the recommendation and paid no attention to the warning seems obvious, in the post-9/11 context, from the frequency with which members of the House and Senate introduce suicide-prevention legislation despite repeated empirical evidence that veterans’ most consistent sources of stress are navigating the veterans benefits system and securing meaningful employment.

This is by no means to make light of the statistics about veteran and military suicides. But at what point does holding one single perspective distort the truth of a photograph or a profession, a person or a phenomenon?

Framing Military Service, and Sacrifice

Should the veteran disappear behind the significance of his wounds, because those wounds show both the blood price of the nation’s foreign adventures and something about his personhood and character? Or should the whole person of the veteran predominate, even if that means sometimes deemphasizing the wounds? Does a veteran not matter if she has no wounds? Any wound represents real suffering. Yet in and of itself, a wound is an amoral thing. The circumstances around receiving the wound are not, and it’s those that we’ve traditionally looked to to clarify the ambiguities of the wound. Perhaps this is why General Washington and later General MacArthur deemphasized the importance of the wound in relation to the quality of the soldier’s act of service. Society can easily evaluate wounds from two opposing perspectives: They can be received by a passive subject, which, verbally anyway, renders the wounded individual a victim—“these wounds senselessly happened to her.” But a soldier can receive wounds as a byproduct of a combination of circumstance and her own, out-of-the-ordinary courageous choice to answer that circumstance—“this soldier was wounded in the course of duty.” However much the wound still happened to her, she consciously chose to bear it so that her companions would not. It’s a matter of character. But in our shorthand mass-media society, we seem to have reduced the character of the soldier and of the circumstances to the mere fact of the wound. And while this might not normally be problematic outside of sophistic moral philosophy debates, our societal obsession with the wound—the shorthand image of the complex civil-military democratic relationship—has in fact made the wounded veteran easily exploitable in partisan political terms ever since the social and partisan upheavals sparked by participation in the Vietnam War.

Thanks to the Vietnam War and the reactions it sparked to all things war-related, what all political parties have effectively done for half a century is to disappear veterans behind their visible and invisible wounds, as a proxy front for partisan battles about the legitimacy of war as a tool of national security. A complete politicization of the veteran image has taken hold, as both an intended and an unintended consequence of the intense public focus on veterans’ physical and mental health coming out of the Vietnam War, when aspects of veterans’ health were explicitly invoked in partisan pro- and antiwar arguments.

By the time the American Psychiatric Association included post-traumatic stress disorder in the third edition of the Diagnostic and Statistical Manual of Mental Disorders in 1980, PTSD and mental health were already lightning rods of political sentiment. Antiwar liberals Robert Jay Lifton and Chaim Shatan’s “rap groups” with veterans in the New York office of Vietnam Veterans Against the War had fuelled their psychiatric research, and their writing about it in outlets such as the New York Times had caught the attention of Democratic Senators Vance Hartke and Alan Cranston, both fiercely antiwar veterans themselves. Senators Hartke and Cranston were the first two Chairmen of the newly formed Senate Committee on Veterans’ Affairs between 1971 and 1981, and while both were genuinely concerned about winning wide recognition for the medical science of PTSD and mental health care more generally, they explicitly crafted congressional hearings so that the PTSD issue would be the mechanism translating their antiwar leanings into policy outcomes. Cranston especially did not shrink from outright politicizing veteran mental health issues in the Senate.

The legacy of this approach, as Jerry Lembcke acknowledges in The Spitting Image: Myth, Memory, and the Legacy of Vietnam, is that the institutional recognition of PTSD had the unfortunate effect of medicalizing antiwar resistance among veterans; of, as he writes, reframing “badness” as “madness.” Given the partisan antiwar doula that helped birth the public recognition of veterans’ mental health challenges, is it any surprise that the more traditional model of the unquestioningly patriotic, dutiful soldier and the patriotic, military-supporting family that had predominated until this moment couldn’t see the newly developing medical science for the political bathwater? Hence, at least up until the completion of Operation Desert Storm, if not the 9/11 attacks, there was a perceived conservative or Republican Party reaction against the veteran mental health narrative, as well as against the patriotism of (especially veteran) antiwar activists. In turn, this conservative reaction caused its own liberal or Democratic Party counter-reaction, especially during the 1980s and 1990s, when accounts condemning American POW/MIA rhetoric as false narratives began to proliferate, as did books questioning the historical veracity of the “spat-upon Vietnam War vet” narrative.

Nearly twenty years into the Global War on Terror, American society, the military, and veterans’ organizations have made marked progress towards accepting the medical realities of mental health beneath the partisan politics. At the same time, the real horror that American society in general has come to articulate at even seeming to be opposed to the troops has both fuelled and fed off of a certain politicization of the veteran. To an extent, it might explain our aversion to talking seriously about veterans issues. But at what cost to the veteran, to the military, and to civil society was the veteran’s (especially mental) health originally politicized? Not only for its part in producing the broken veteran/trauma hero narrative, but in limiting the willingness of politically and otherwise diverse young adults to consider joining the military as a legitimate career option, or otherwise to engage in public service? As reported pronouncements from America’s 45th Commander in Chief about wounded soldiers seem to evidence, that cost is still accruing, and not just electorally. Meanwhile, there seems to be no study that tries to understand the long-term social ramifications of having doubled down on politicizing the image of the veteran nearly fifty years ago, just as the All-Volunteer Force was coming into being, in need of a willing public to understand it, support it, and join its ranks.

Over fifty years after the Vietnam War and nearly twenty years into the Global War on Terror, our unease has grown. We find it difficult to seriously confront or analyze the meaning and status of the military veteran in contemporary American society. The distinct memory of previous neglect of care keeps many from voicing a critique of the current status quo of the medicalized veteran image, as does a (legitimate) worry of appearing to devalue an individual soldier’s personal sacrifice, valor, and character. After all, the cultural power of national service awards such as the Purple Heart still resonates broadly throughout society, if confusedly, attesting to the high value of the meritorious service of an individual willing to put his or her blood on the line. Not just anyone receives a Purple Heart.

But the effect of our unwillingness to have such a reckoning is that our media, our legislation, and our culture continue to popularize a stilted image of who the modern veteran is. This harms not only current veterans, but those youth who will choose never to be veterans—and in that process, the American nation as well. It leaves the veteran as no more than an image, seen from whatever single perspective we emotionally choose on a given day: their scars, their service, their medals, or their political convenience.

Phonies and the Past https://lawliberty.org/phonies-and-the-past-wineburg-review/ Wed, 06 Nov 2019 00:00:00 +0000

“History is not an American pastime,” Kevin Honold emphasizes in the Hudson Review, in a sonorous tract of musings about the Ohio heartland, Jesuit missionaries and the Age of Explorers; childhood and the imagination; the strategic advantage of trees for empire; plumbers; and the fate of the American Indian warrior. In sympathetic step with generations of social studies educators, Honold thinks this is explained in part by how history is taught to American schoolchildren: “As a thing from which they are meant to draw ‘lessons,’ as though history were a series of unfortunate incidents involving hot skillets and monkey cages.” Textbook history—history presented as moralizing schoolmarm or anodyne roll call of names and dates—helps explain why advocates today lament, but excuse, teenagers’ lack of cranial investment in historical literacy.

“Textbook history” certainly doesn’t seem like an American pastime. Not only do we have ample evidence year after year that Americans of all ages and backgrounds barely know the highlight reel of their nation’s past, but even history’s professional practitioners struggle to formulate a rationale for their subject that resonates. Alan Mikhail of Yale University recently implied as much in comments to the American Historical Association, in light of the discovery that history has had the sharpest (and starkest) decline of all majors at US colleges and universities. Nor have the expert historians seemed able to persuade school principals, much less the general public, away from ceding history class to Google’s infinite yield of search returns.

History barely ranks as a classroom pastime: A subsumed subject in the social studies curriculum, history today takes up far less than ten percent of a public school student’s classroom time. In the age of a tablet for every desk and an iPhone for every pocket, we seem to have moved beyond the need even to ask why students ought to study history, given what’s supplied by the tools at our fingertips.

Perhaps in acknowledgment of this attitude, Sam Wineburg, professor of education and history at Stanford University, chooses not to register the challenge of technology to historical literacy as a question to muse philosophically about. His recent monograph, Why Learn History (When It’s Already on Your Phone), isn’t a question; it’s an ex post facto statement from Silicon Valley. Masquerading as an accessible weekend read for civics advocates and interested laymen, the book is something of a gauntlet heaved at public handwringers about historical illiteracy, and especially at their assumptions about historical literacy and its role in sustaining democracy.

Wineburg has no patience for the handwringers. And frankly, it’s refreshing to settle into a thesis that favors history because of technology and is not the next iteration of the Glumly-Go-Round Argument: America is doomed because American children don’t ace their multiple-choice history tests, which dooms their ability to be self-governing citizens, which dooms America’s future. Wineburg finds this strain of worrywartism to be less apocalyptic than annoying, even while acknowledging that it has old and august roots.

Indeed, ever since Benjamin Franklin’s famous exhortation in the wake of the Constitutional Convention, Americans have been paraprofessional Cassandras issuing dire warnings about the future of the American experiment given insufficient attention to the past. And arguably we’ve only gotten more persistent about it. But is it modern technology, modern pedagogy, contemporary political partisanship, or something within our democracy itself that explains our increasing need to test, measure, and then bemoan our fellow citizens’ ignorance about the dates of the War of 1812, the Civil War, and World War II; whether Madison or Jefferson (or was that Jefferson Davis?) was the “Father of the Constitution”; the causes of the Flour Riots or the Haymarket Affair or the Bonus Army; not to mention basic constitutional structure and design, or whether FDR’s “Second Bill of Rights” officially replaced the original Bill of Rights?

More pointedly: If historical ignorance dooms the American experiment, how has America endured for over two hundred years?

But more profoundly: Has America endured because of, or despite, the very worrywartism that Wineburg dismisses?

Of Ignorance, and Manufactured Ignorance

Since J. Carleton Bell of the Brooklyn Training School for Teachers and his colleague David F. McCollum first issued a large-scale test of historical facts to Texas students in 1917, we’ve tested our youth on historical minutiae and perpetually found them wanting. Professor Wineburg would have us know that with every iteration of the National Assessment of Educational Progress (NAEP) since it was first administered in 1987, students’ subpar results in history and civics have provoked public reactions of horror similar to those that greeted Bell’s and McCollum’s findings. They’ve all been iterations of then-president of the National Council for the Social Studies Kim O’Neil’s response in 2015: “How do we, as a nation, maintain our status in the world if future generations of Americans do not understand our nation’s history?” Likewise, Bernard Bailyn’s exclamation about test and survey results in 1976: “Absolutely shocking.”

Such reactions are trite and predictable, Wineburg argues, because Americans’ infamous ignorance is more manufactured than real, courtesy of the internal logic of standardized testing: “As practiced by the big testing companies, modern psychometrics guarantees that test results will conform to a symmetrical bell curve.” The point of bell-curve testing is not to show that students have absorbed knowledge, or to assess whether they are historically literate, but to “create spread” among students. In order to do that, Educational Testing Service (ETS) statisticians discard questions that the majority of responders might answer correctly (such as identifying George Washington or “The Star-Spangled Banner”) and introduce instead questions about disparate historical minutiae (John F. Hartranft, the battle(s) of Fort Wagner, and Benjamin Gitlow), which sizeable shares of students will most likely not recognize.

This assessment methodology reinforces the already bad habit of the typical history teacher, teaching from the typical history textbook in line with state-mandated curricular standards, of emphasizing lists of disconnected names and dates for students to memorize and hemorrhage forth at the appointed moment. Wineburg takes the textbooks, their publishers, and their promoters—whether in state legislatures or nonprofits—to task for being captured by special interests, heavier than a “Duraflame log,” and the prime suspect in reducing the “intrinsically human character” of history to a pile of nonsense information.

From this standpoint, Wineburg argues that (more conservative-leaning) critics of supposed progressive teachings in social studies, like E.D. Hirsch, have missed the mark about what ails history class.

As long as textbooks dominate instruction, as long as states continue to play a ‘mine is bigger than yours’ standards game, as long as historians roll over and play dead when faced with number-wielding psychometricians, we can have all the blue-ribbon commissions we want… but the results will remain the same.

But contrary to the expectations of more progressive critics of history class, for whom the standard American history textbook is propaganda of the worst sort, Howard Zinn’s A People’s History of the United States is also not the answer. Of that, Wineburg is adamantly sure. Despite Zinn’s cultural popularity for being a smash-all-the-patriarchy-narratives anti-textbook, his 729-page tome speaks as authoritatively, one-sidedly, and inaccurately as a typical textbook, “albeit one that claims to be morally superior.”

In an extensive chapter cheekily titled “Committing Zinns,” Wineburg excavates the case against Zinn, helpfully showing the reader through a series of particular examples how Zinn’s method shuts off historical inquiry: by discarding “unruly fibers of evidence,” by asking of history “yes-type questions” that appear to prove a broad claim, and by asking them in a narrative style that immediately engages our emotions and sense of justice even as it exploits our “expected ignorance.” “They’re all phonies is a message that never goes out of style,” is Wineburg’s summation of Zinn’s popularity. That popularity is more concerning to Wineburg than Zinn’s actual (mis)interpretations, because it means that Zinn is often now the only encounter with American history that students (and their teachers) have—and that encounter is malforming. Zinn strikes at the very core of what the history discipline is: historical inquiry, a way of thinking critically, one that Wineburg argues lets us acknowledge nuance and ambiguity.

History, for Wineburg, is about knowledge, yes, but it’s more about an approach to knowledge: “It’s about determining what questions to ask in order to generate new knowledge.” This is not your trademarked pyramid of Bloom’s Taxonomy of Educational Objectives “critical thinking,” in which knowledge undergirds understanding and proceeds to application, hence to analysis, hence to synthesis, culminating in evaluation. Nor is it the “close reading” critical thinking encouraged by the Common Core State Standards, which has students focus on the words of a text divorced from their context. It’s a unique way of acquiring knowledge through repeated, reevaluating inquiry:

The past bequeaths jagged fragments that thwart most attempts to form a complete picture. Determining cause is less about isolating a mechanism than knitting together a textured understanding that withstands scrutiny at different levels and grain sizes. Parsimony in historical explanation often flirts with superficial reductionism.

In emphasizing the “jagged fragments” of history demanding a “textured understanding,” Wineburg nods in the direction of fellow historian Wilfred McClay, who likewise presents history as “a way of knowing what facts are worth attending to… that fit a template of meaning, and point to a larger whole.”

History, understood as a knowledge-gaining inquiry, Wineburg has come to see as a “subcategory of something larger—a broader, more encompassing way of thinking about information in the social world.” And here is the point towards which Wineburg has been building all along: Fluency in navigating the system of exhibiting knowledge, whether through the device of a standardized bubble test or a device with Google access, is no substitute for comprehension of the information itself. Google is a tool; the encyclopedia is a tool. A good memory for listicles of facts was never the point of history class. And to the extent that the reductio ad bubble test, built off that assumption, has become the complacent norm for history class, we’ve invited the conclusion that Google can save us from our ignorance, our bad textbooks, and bad teachers. But “Google can’t save us,” argues Wineburg—neither the students, nor the professors, nor democracy.

“Most of us suck at judging what flows across our screens.” Wineburg was shocked to discover that your typical professional historian, so careful in evaluating primary and secondary sources and weighing their various claims offline, is as easily snookered online as a ten-year-old by such surface-level things as placement in Google search rankings, official-sounding names, and nice fonts. His national survey research on Internet skills revealed that 59 percent of adults couldn’t tell the difference between an ad (“sponsored content”) and a news story, a finding that National Public Radio, Forbes, Slate, and the Wall Street Journal conveniently glossed over in their related stories convulsing, again, about ignorant kids these days. But with each of us equipped with a powerful handheld computer of a phone, around whose convenience we increasingly build our lives, and with our own propensities towards intellectual complacency given the strictures of time, are we condemned either to become neo-Luddites or to settle for phony intellects?

Wineburg resists this binary forking of solutions to the knowledge problems posed by the Internet. He argues that what’s needed is a combination of a new approach to consuming information online that’s more akin to what fact checkers instinctively do, and a redevelopment of historical inquiry and a commitment to it, by educators above all. Quoting Thomas Jefferson in light of the expansion of information presented by the Internet, Wineburg acknowledges that despite the “new reality [where] the ill-informed hold just as much power… as the well-informed… If we think [the people] not enlightened enough to exercise their control with a wholesome discretion, the remedy is not to take it from them, but to inform their discretion by education.”

Uses and Abuses of Worrywartism for Democracy

“Education, Keeper of the Republic” is an epithet that America’s founding generation, followed by scores of statesmen and thinkers, continually returned to in their arguments for how to perpetuate the American experiment of self-government. So much rightly depends upon the educators. But who are the educators? As a professional educator himself, Wineburg reasonably focuses on formal educators in formal settings, teachers with degrees in primary or secondary education, on the formal assessments they give, and on the empirical data that students’ answers to those assessments provide. And yet by his own acknowledgement (students’ ignorance is manufactured ignorance), these measurements may not provide an accurate assessment of the true story of Americans’ rapport with history.

If historical knowledge is vital for American democracy, and if Americans have exhibited, when tested, a stubborn historical ignorance for a hundred years and yet the Republic still stands, is the premise wrong? Or are there predominantly qualitative means that better reveal Americans’ relationship with the past, because they extend beyond the narrow confines of pedagogy attached to a classroom?

To return to Kevin Honold’s essay: textbook history may very well not be an American pastime. But he is wrong to conflate textbook loathing with history loathing, and on two counts. As his own musings reveal, Americans as far from the crisp classroom as journeyman plumbers on the line in Cincinnati are fascinated by the interwoven, complex, and compelling stories of human life that make up history. They love to buy and read books about swamp fighting in South Carolina during the American Revolution and hedgerow fighting in Normandy during World War II, to thrill to the machinations of “Little Turtle, brilliant strategist of the Miami, whose confederacy… inflict[ed] the bloodiest defeat ever suffered by the American military at the hands of Native Americans,” and to trace their geographical, physical, and spiritual connection to those past moments, however tenuously.

Secondly, ever since Ben Franklin sounded the alarm, a favorite American pastime has been the ritual of bemoaning our lack of attention to the past. Why we bemoan the past and our peers’ ignorance of it, how to meet that ignorance with more than apprehensive handwringing, and to what end, is a challenge that we are clearly still struggling to answer. And perhaps it isn’t so much our anxiety about the future or the past that fuels this ritual as a perpetual anxiety about our present, and how it will pan out for us in the short term, that influences us to look backward and yet feel inept about how to use that past.

Nearly half a century ago, political theorist Joseph Cropsey was alive to this restless tendency among Americans and pondered its wellsprings in his 1975 essay, “The United States as Regime and the Sources of the American Political Way of Life.” Cropsey provides some answers for us by placing America at the foremost edge of the project of modernity—as “the arena in which modernity is working itself out,” and in light of modernity’s own uncertainty about its project. He was pondering on a philosophical plane, noting how the tension between modernity’s two tendencies affects America:

[O]ne inspiriting, reminding man of his earthbound solitude and presenting the world as an opportunity for greatness of some description, the other pointing toward survival, security, and freedom to cultivate the private and privately felt predilections. At its worst, the latter shows itself as acquisitive self-indulgence….

Further on, Cropsey nods in the direction of computational technology as he elaborates on this self-indulgence as applied to modern science: “At the fountain of scientific modernity, as at the sources of moral modernity, there is discernible the direction of inspiriting and indulging that generates the energy that has moved through modernity ever since.”

He was not writing directly about Google and smartphone technology, of course, and yet it is remarkable how well his words apply to our current befuddlement at what it means to live life so uninhibited, with both tools in our palm. The indulgent beliefs that both technologies enable about conquering nature and its limitations have direct consequences for how individuals experience the tension between their freedom and security, as evidenced by everything from the “surveillance state” government down to surveillance retail by Big Coffee. And importantly for this discussion, such beliefs in tandem with technology affect how we think about thinking itself, and thus about education—its purposes and design—and about educators. What is the past for when you no longer have to consciously record it, recall it, or think about it in order to access parts of it?

To leap from Cropsey’s philosophical exposition of our anxious national pastime to our more practical conundrum about what Google means for historical learning, whether inside or outside the classroom, does injustice to the deft coiling of his complex argument. But Cropsey’s core exposition, it seems to me, is a necessary preface to these questions as provoked by technology and our national character, and as formulated by Wineburg and others, if we want truly to answer whether the study of history matters any longer; if it does, whether it is a contradiction to argue that Americans’ century-long record of dismal historical knowledge is beside the point; and crucially, if historical knowledge matters, what we ought to be doing to break the cycle of rigged testing and handwringing.
