Bruce Kuklick
Before Barry Bonds.
In the early years of my boyhood, we listened to Bill Stern’s Sports on the radio. Although I did not hear Stern tell his most amazing story, my father often repeated it to me, and I took it to be God’s truth. When Abraham Lincoln was shot in Ford’s Theater in April of 1865, the mortally wounded president was taken across the street to the house in which he would soon pass away. In attendance at his bedside was General Abner Doubleday, a hero of the battle of Gettysburg. At the end of his agony, Lincoln beckoned to Doubleday, and whispered his last words into the general’s ear: “Don’t let baseball die.” I knew that Doubleday had founded baseball in Cooperstown, New York in 1839. Stern’s story thus linked the two most important things about American life, the savior of the Republic and the national pastime.
By the time I was a knowledgeable baseball fan, the myth about Doubleday and the beginning of baseball at Cooperstown was on its last legs. The legend had justified the location of the Hall of Fame in Cooperstown, but everyone with even a passing knowledge of and interest in the game thought that the story was fanciful, that people had probably played baseball earlier and in other places. Baseball historians soon demonstrated these ideas with greater authority, and part of what David Block does in his curious book is to drive some last nails into the Cooperstown coffin. Baseball Before We Knew It, however, goes further and tells us how and why baseball promoters and publicists developed the tale about Doubleday and Cooperstown—they wanted to secure an American starting point for the game, and not an English one.
More important, for Block, is his challenge to the alternative accounts that have arisen since scholarship debunked the Cooperstown fable. A legion of historians has told us that in the 18th century the English played various ball games. One of these was “rounders,” a game so called because the players went “round” some bases. Baseball, these historians have informed us, derives from rounders. The American game has its roots in an earlier English game that colonists transformed after they migrated to the New World.
Block argues that this story too is wrong. What really happened is that in the 1700s the English (and the French and Germans) did play a variety of games that involved two or more players and that used bats, balls, and bases in many different ways. One dominant set of games used bases (Block calls this set English base-ball). Both baseball and rounders came from English base-ball, but rounders developed only in England, and indeed developed later than baseball, which had an independent growth in the United States. Although the national pastime had its origins in England, rounders is not its mother but only a younger brother.
I called this book curious a couple of paragraphs back, and it is. While Block is an engaging and knowledgeable writer, the heart of Baseball Before We Knew It is an exhaustive and annotated bibliography of references to early “baseball” in a variety of books and pamphlets. There are also other lengthy odds and ends of bibliographical information. Many interesting illustrations depict these early games and accompany the bibliographies. Block uses this hard-won information in several of his chapters to make his case that American baseball is independent of rounders. Nonetheless, there is inevitably some overlap between analysis and bibliography, and considerable repetition. Moreover, the 55-page bibliography wearies even a committed reader (me).
That is not all. The author’s brother has written one separate chapter (on the Doubleday question) that does not exactly fit with anything else. Two other chapters (they are called appendices) reprint essays on the beginnings of ball games by other historians. Baseball Before We Knew It is a valuable examination of early instances of ball games, and Block is a careful and serious guide. But this is more an antiquarian compendium than anything else.
The genre of scholarship that the book exemplifies also reached its high point in the 19th century, when research into origins consumed scholars. Where was the first human language? How did the books of the Bible, or the Homeric poems, come to be written? What was the oldest religion, and how had it developed into higher forms? The approach to these questions lay in what I call conjectural history. It joined tiny fragments of evidence to complex theories. Something must have happened in the past to produce the piece of evidence that we now had; if this piece of evidence meant thus-and-so, well, that was consistent with what we made of the next piece of evidence that might date from a few hundred years later. Conjectural history had some notable successes—surely Darwin’s ideas—but many foolish and plainly false speculative narratives also litter the scholarly road of the 19th century.
Block is a conjectural historian, and one of some confidence. He says, for example, that he has “demonstrated how the game of rounders could not have been baseball’s progenitor.” But his evidence is always modest and limited. It consists of passages in a few books that refer (usually briefly) to some ball game or another; or of a picture that is almost always ambiguous; and often, and most typically, of old descriptions of how games are played.
Block says that the earliest references to rounders don’t appear till 1828. Since we know that games resembling English base-ball were around much earlier in America, rounders can’t be the direct ancestor. But how can the reference to a game in children’s books be a measure of whether or not kids were playing the game? And even if people first used the name “rounders” in 1828—and who knows that?—why should the name be the criterion of whether the game existed?
Block likes descriptions of games in old books, and often mistakes them for rules. One of his appendices prints a list of 21 early rulebooks of baseball clubs, and I think the emergence of such rulebooks from the late 1830s to the 1860s formally identifies the beginnings of American baseball. That is why the Doubleday legend is not so crazy. Folks may have been playing baseball in Cooperstown at just about the time institutions were codifying how sporting contests using bats, balls, and bases ought to take place. But descriptions of how young people might play a game in some piece of literature are different from rules. The descriptions are just some author’s opinion, or instruction, or advice. They may tell us how to play a game, but they don’t legislate; they don’t constrain players.
Consider trap-ball, a game that Block writes about and that was around from the 1400s. The player with a bat places a ball on the ground in a “trap,” which is a device for elevating the ball into the air. In different versions of the game players compete to see who can hit the ball the farthest, or hit in a certain spot, or get the most hits that others do not catch. Can you play trap-ball without a bat? Well, I would think so—players could use their hands or fists to hit the ball (as in handball, or pimple ball). Suppose the trap breaks. Couldn’t you play the game even without the trap? If you throw the ball up with one hand, and try to hit it with the other holding a bat (as in hitting fungos), you get the same sort of contest; and you get a similar sort of contest if someone throws the ball and you try to hit it far (home run derby). Without the written rules that Block wrongly equates with descriptions of games, it is very difficult to pinpoint the essence of a game. Is home run derby played with a pitching machine—the latest version of a trap—a descendant of 15th-century trap-ball? This is the sort of queer question Block is asking about 17th- and 18th-century ball games when he has so little evidence and no rules.
Imagine a ball game that we can easily visualize in contrast to baseball. It has three bases and a home plate. There are three players on a side, with a steady pitcher. He also covers the plate on some plays, since there is no catcher and no bunting. There are no walks or stealing. Batters may strike out, but the pitcher does not try to strike them out; rather, he tries to have the batter put the ball into play. If two players are on base, the batter needs not only to get a hit but also to hit sufficiently well to score the lead runner, who is otherwise out. That is, there are no “ghost runners.” If there is another player under the age of seven or so, he becomes a steady batter, with a place in each lineup; he cannot strike out, and the ghost runner issue becomes nugatory.
This game is called “Mill Pond Baseball,” and with minor variations children and adults have played it in Elkhorn, Wisconsin, for over fifty years. If there are rules, they are not written down, and although I have played the game for twenty years, I have not heard the word “rules” once. Sometimes with two men on, a fielder will yell, “No ghost runners.” One time an overly zealous pitcher tried to ring up a six-year-old kid. After the boy failed to hit a third ball, the pitcher shouted, “You’re out.” One of the fielders cried, “He can’t strike out,” and that was that.
If there are rules, might they not be inscribed in the hearts of the players? But even this is not correct, as lovely as it sounds, for new players automatically adjust to the game without being informed of “rules,” or having any writing in their hearts.
And the game is called Mill Pond Baseball not because of the conventions of play, but because of the contours of the field. Errors are often made because balls hit a rock or take unpredictable hops, or because a player falls in a declivity or even a hole where a tree once stood. When players excuse their poor performance, they shake their heads and exclaim: “Mill Pond Baseball!” They are designating the dubious quality of their playing surface.
The question of written rules that players must follow is central. Without them it is much harder to define what distinguishes one game from another, or to speak intelligently about the metamorphosis of one game into another. Surely the discovery of descriptions in children’s books does not obligate players, or determine when games were first played. Didn’t the Mill Pond game exist before I wrote about it? Wouldn’t it have existed even if I had not had the opportunity to write this review? Couldn’t it be played with very different conventions?
More important, what is going on in this game and its variants is not something you can get at by looking at portrayals in books. The evidence does not yield what we need for the historical study of what is profound and deep about baseball’s growth as a recurring social celebration. David Block obviously loves baseball, and his book is in many ways a labor of love, helpful in enabling historians to talk about the evolution of bat and ball amusements. But people’s games of the early modern period (and after) fundamentally differ from organized sports, and the key difference is the existence of a rulebook that some powerful institutionalized group has authorized. Block is looking for origins that may not have existed. I also think that despite his affection for the game, he has missed in his search one crucial component of what made baseball endure—its emergence as a communal ritual.
Bruce Kuklick is Nichols Professor of American History at the University of Pennsylvania, where he has won four different teaching awards. He is the author most recently of Blind Oracles: Intellectuals and War from Kennan to Kissinger, just out from Princeton University Press.
Copyright © 2006 by the author or Christianity Today/Books & Culture magazine.
Garret Keizer
A modest project for the resistant church.
Religious life and political life have this in common: the chief enemy of both is despair. Poison to church and state alike, despair comes as the conviction that while something ought to be done and probably can be done in the face of some evil, we have neither the strength nor the will to do it. We’re overcome, paralyzed.
Much of the work of Christ in the world consists of disavowing that conviction. “Take up your bed and walk.” To bear Christian witness, in the most effectual sense and certainly in the best political sense, means demonstrating that a witness is not the same thing as a bystander.
But where to begin? For we are always needing to begin. How about with torture? It has taken place, it continues to take place, both in detention centers run under the auspices of the United States and in other countries to which suspected terrorists are routinely outsourced for interrogation. That the passage of the McCain bill last December will change any of this seems highly unlikely.
For why should this law exercise any greater restraint on current administration policy than the United States’ adoption of the Universal Declaration of Human Rights in 1948 or, for that matter, than the 1978 law that forbids domestic wiretapping without a warrant? Why should McCain’s law compel compliance and not the others? The very fact that the McCain ban would be deemed necessary calls into question whether it will ever be deemed binding.
In fact, the little-publicized “signing statement” that accompanied President Bush’s approval of the McCain bill declares, “The executive branch shall construe [the law] in a manner consistent with the constitutional authority of the President … as Commander-in-Chief.” In other words, the president reserves the right to place himself above the law in the interests of “national security.”
As David Golove, a New York University law professor, put it, “The signing statement is saying, ‘I will comply with this law when I want to, and if something arises in the war on terrorism where I think it’s important to torture … I have the authority to do so and nothing in this law is going to stop me.’”
For a Christian, of course, the crux of the matter is to be found not in the nuances of the legal language but in what John the Baptist called “the fruits of repentance.” No such fruit is in evidence here, nor was it in evidence as the administration fought tooth and nail to prevent McCain’s bill from ever reaching the president’s desk.
If Christians cannot find common ground on the issue of torture, then perhaps we deserve to despair. Torture is an insult to the image of God, a violation of the Golden Rule, and certainly a scandal to those of us who “glory in the cross” and the victory it represents over cruelty and the prerogatives of worldly power. Our task in regard to torture, then, is to keep ourselves and our fellow citizens awake to something that many of us would just as soon ignore, rationalize, or dismiss as a dead issue.
One graphic way of doing so might be to sew black cloth hoods, modeled on those featured so prominently in the photos that came out of Abu Ghraib, and to display those hoods along with tags that say “Stop Torture” in public places. The project would be simple—indeed simple enough to risk gimmickry—but it has a number of virtues.
First, it employs a traditional activity of the church, the sewing circle, and makes it militant. It takes “the stone that the builders rejected,” or at least denigrated, as belonging to A) the past, B) women, and C) the elderly and turns that rock into “the chief cornerstone” of a small resistance movement. It is an attempt to approximate Gandhi’s spinning wheel.
Second, for all its simplicity it makes a complex statement: not only against torture but also against the flouting of law and the people’s trust. Mounted in the aftermath of the McCain bill’s passage, a protest of this kind says, in effect, “We don’t believe you.”
Third, the project allows for some exercise of the imagination, not to mention some daring, in the matter of placement. Possible locations include mailboxes, flagpoles, cemetery obelisks, parking meters, lawn statuary, fire hydrants, car antennae, weather vanes, goalposts, street lamps, library busts, microphones, surveillance cameras, suggestion boxes, napkin dispensers, dormant birdhouses, coffee urns, bar taps, and gumball machines. Not to forget crucifixes and crosses.
Fourth, it combines the talents of the more and the less agile, the young and the old. One sows and another reaps; one sews and another leaps the wall and hoods the bronze general on his horse. The eye cannot say to the hand, “I have no need of you.” The hood-sewer cannot say to the hood-runner, “You are irrelevant.”
Fifth, the project is both provocative and authentically nonviolent in that no piece of public or private property is permanently defaced by the hood. To the wisdom of the serpent we add the harmlessness of the dove.
Sixth—and nevertheless—it does carry the potential for arrest, at the least for littering or disturbing the peace.
Therefore, and seventh, it prompts the resistant church to engage in discernment. For example, when do we invite confrontation and when do we avoid it, when do we retreat to the Mount of Olives and when do we ride hell-bent-for-leather to Jerusalem and to the cross?
Or, for another example, how do we explain our actions to the youngest members of the congregation without acquainting them prematurely with the horrors of this world? The delicacy of the question hardly disqualifies the project; it is a question faced every Good Friday by any Sunday school teacher who teaches little kids.
And with what arguments do we explain our actions to the unconvinced? How do we answer the person who will say, “What if the use of torture could obtain information that would save a member of your family from dying in a terrorist attack?” (Is any terrorist ever so keen to take our loved ones hostage as a rhetorical opponent of Christian nonviolence?) We will need to know our response. We will need to know that it is not and never can be an easy one.
Furthermore, if the project is taken up by others outside the church, are we prepared to see our idea shared, modified, and possibly co-opted? Are we humble enough to say with Jesus, “Forbid them not, for whoever is not against us is for us”?
And finally, how do we transition to something more substantive? Even before that, how do we make sure that a project that is largely “for the hands” also has a heart and head? We will need to do our homework, perhaps with some help from Amnesty International. We will need to pray. But eventually we will need to look ahead to engaging the related issues of militarism, global influence, and our consumption of oil. C. S. Lewis said that a good egg cannot remain a good egg for long; it must either hatch or go rotten. The same can be said of any good idea. How do we hatch this egg and what can we expect of our hatchling?
Even if the project peters out, however, it will have done something if it keeps us from despair. In the face of that enemy, a holding action is worth holding on to. We must never lose hope. Faithful Christians and faithful Christian communities have brought down empires, brought down institutions of oppression, and when those institutions included the institutional church, they have brought that down as well. This is the lesson that needs to be taught again and again in all classes and to all ages of the church. Take this project merely as a workbook exercise in that curriculum.
Garret Keizer is the author most recently of Help: The Original Human Dilemma (HarperSanFrancisco) and The Enigma of Anger (Jossey-Bass).
Andy Crouch
In search of the perfect shave.
As the “tech editor” for NBC’s Today Show, Corey Greenberg spends most of his on-air time shilling for the latest technological gadgets. (Literally, shilling—last April the Wall Street Journal revealed that several technology companies had paid him handsomely for his promotional efforts.) He can tell you why you need a video iPod, what you’re missing without satellite radio, and where to put the fifty-inch flat screen tv. But on January 29, 2005, he was enthusiastically undermining half a century’s worth of high technology.
In the Today Show studio, Greenberg lathered up his face with English shaving cream and a badger brush, whipped out a vintage double-edge razor, and made a passionate case that the multi-billion-dollar shaving industry has been deceiving its customers ever since 1971, when Gillette (no small advertiser on network television) introduced the twin-blade razor. Everything you need for a fantastically close and comfortable shave, Greenberg said, was perfected by the early 20th century.
With his Today Show segment, Greenberg became the highest-profile convert to “wet shaving.” He is still one of its most fervent evangelists, with—what else?—a blog, www.shaveblog.com. At 120,000 words and counting, Greenberg’s blog could best be described as gonzo shave journalism. He explores every nook and, for that matter, nick of the wet shaving experience, whose defining elements are a single sharp blade (whether ensconced in a safety razor or exposed in the fearsome straight-edge), a brush, soap, and lots of hot water.
But Greenberg’s blog is just the most visible salient of a movement that has all the ingredients to reach its tipping point. I first discovered this utterly retro trend on the über-geek Web site del.icio.us, where most links tend to point to topics like “Javascript” and “hardware” (the computer kind). There are the requisite Internet forums populated by enthusiastic “shave geeks” and several specialty retailers that are selling more old-fashioned shaving products than ever. The shaving emporium Truefitt & Hill recently opened a shop in, of all places, Las Vegas. Wet shaving is far from being a mass movement, but it is growing, primarily because almost every man who tries it discovers that, in fact, Greenberg was right: with a little time and practice, shaving with a single blade can deliver an extraordinary shave, and is great fun besides.
Meanwhile, in January of this year Gillette launched its newest “shaving system,” the Fusion. Its cartridges have six blades—five in a row and a sixth on the back of the razor. Each cartridge costs more than three dollars. When an editor for Cargo magazine tried the Fusion—after a Gillette “facialist” prepared his face with “a variety of moisturizing unguents and salves”—the razor cut his neck in three places.
When I tell my friends that I have switched to wet shaving, they ask three questions, usually in the following order. Don’t you cut yourself? Doesn’t it take more time in the mornings? And isn’t it expensive to buy all the necessary equipment, compared to a drugstore razor and can of shaving cream? The answers are yes I do, yes it does, and not necessarily.
Indeed, one of the most fascinating aspects of the Gillette–Schick juggernaut is how these companies have managed to extract extraordinary profits by selling plastic cartridges to the masses, while the shaving brush, emblem of the wet shaver, has somehow become an icon of the affluent. Until I stumbled across Greenberg’s testimony, the only shaving brushes I had ever seen were displayed in opulent cases in high-end department stores and had prices in three figures.
And in fact the wet shaving market operates in direct contradiction to the hard-won insights of consumer marketing. The legendary brilliance of King Gillette was to sell razors below cost and blades above—way above—cost. Today, a Fusion razor with one extra cartridge is only $10, the magical amount at which most middle-class Americans will make an impulse purchase. A new safety razor from Merkur, one of the few remaining manufacturers in the category, is around $30. And whereas Gillette’s Edge shaving gel goes for $2 or $3 a can, a tube of Italy’s Proraso shave cream is $9, and many other shave creams come in tubs priced at $25 or more. A good badger-hair shaving brush does not, it turns out, have to cost three figures (though there are plenty of merchants who will happily sell you one for that amount), but it will not be less than $40.
But over time the numbers change places. The Merkur razor, forged of implacable stainless steel, will last several lifetimes. There is simply nothing in it to wear out. The shaving brush will last a decade or more. Replacement double-edge blades range from inexpensive (Merkur’s fearsomely thin and sharp blades are 45 cents each) to dirt cheap (at Wal-Mart, which sells perfectly serviceable blades). A year’s supply of Proraso is roughly three tubes—$27. So in the first year of wet shaving, a frugal wet shaver might spend $120 or so. But a customer who follows the siren song of Gillette marketing up the brand ladder to the Fusion will spend $150 on blades alone. In succeeding years the wet shaver, already equipped with the razor and brush, can indulge in the ridiculously luxurious Trumper’s English shave cream—two tubs, a year’s supply, are $50—and still be spending only half as much as his Gillette-addled counterpart.
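The cost arithmetic above can be put in a few lines. This is only an illustrative sketch using the 2006 prices the essay cites; the yearly blade figure (about $23) is an assumption chosen so the first-year total lands at the essay's "$120 or so," while the razor, brush, cream, and Fusion figures come directly from the text.

```python
# Illustrative sketch of the essay's cost comparison (2006 prices).
# The ~$23/year blade figure is an assumed value, not from the text.

def wet_shaving_cost(first_year, razor=30, brush=40, cream=27, blades=23):
    """Annual wet-shaving cost: the razor and brush are one-time
    purchases in the first year; cream and blades recur every year."""
    startup = razor + brush if first_year else 0
    return startup + cream + blades

# Gillette Fusion cartridges alone, per the essay: about $150 a year.
FUSION_BLADES_PER_YEAR = 150

print(wet_shaving_cost(first_year=True))   # 120 -- "the first year ... $120 or so"
print(wet_shaving_cost(first_year=False))  # 50  -- recurring cost thereafter
print(FUSION_BLADES_PER_YEAR)              # 150 -- "$150 on blades alone"
```

The crossover the essay describes falls out immediately: the wet shaver pays more up front but far less in every subsequent year than the cartridge buyer.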
But Gillette, savvy capitalists that they are, are not marketing their product on price. And the truth is that in spite of their slogan, “The best a man can get,” they are not marketing based on the quality of their shaves either. The best shave a man can get, as everyone in the shaving industry knows, is one he cannot give himself, because by far the best shave a man can get comes from a barber—the shave that starts with the application of hot towels and warm lather, and culminates in the practiced gliding of a cut-throat razor over the face of the grateful, and assiduously immobile, patient.
No, for Gillette and its well-researched customers, the issue is neither price nor quality. It is convenience—or, as the philosopher Albert Borgmann has put it more precisely, “disburdenment.” Gillette promises to relieve you of the burden of getting a close shave. You will be relieved of the upfront cost of razor and brush, to be sure (though it would be entirely within Gillette’s power to once again produce, as they did for decades, an economical safety razor that would last a lifetime). But you are also relieved of the burden of time. There’s no way around it: wet shaving takes more time. For years I used a succession of electric razors that did an adequate job in three or four minutes; wet shaving takes me ten to twelve.
And there is the issue of shaving cuts, those tiny lesions that can bleed for half an hour if not staunched with a styptic pencil or a bit of tissue paper. I never once got a shaving cut from my electric razor—it buzzed innocuously over my face, its blades well sheathed. The latest cartridge razors, the Cargo editor’s experience notwithstanding, are indeed designed to minimize the number of cuts, and they do tolerably well at it. A safety razor, on the other hand, looks anything but safe when you are loading a skinny double-edged blade into it for the first time, and in the hands of an inattentive or inexperienced user it is indeed capable of harm. This, it seems to me from my anecdotal polling of family and friends, is the core appeal of the cartridges and electrics that have driven wet shaving from the general market. They are easy, and they are safe. That is the way we want our lives—or at least, in the fog of the early morning, our shaves.
A few times a month I rise at 4:30 to catch a 6:00 flight to Chicago. There are few experiences of sheer vertiginous despair that can compare to the alarm going off at 4:30 after six hours or so of sleep. The furnace, uninformed of my travel plans, has not yet come on, so the temperature in our bedroom is about 55 degrees. I stumble into the bathroom, shivering, and wince against the light. The hot shower is comforting, but I still feel a vague sickness of fatigue in my stomach and sluggishness in my limbs. After my shower, I step to the sink and begin filling it with hot water, using that water to drench both my face and the shaving brush. Then I pick up my razor.
There are two ways to look at this moment. You can say that no one in his right mind should wield a double-edged razor half-asleep. Or you can say that no one in his right mind can stay half-asleep when he picks up a double-edged razor. Here is what invariably happens: as I swirl the brush in the tub of Trumper’s Sandalwood Shaving Cream, as I scrub my face gently with the brush, covering it with fragrant lather, as I apply the razor at an acute angle to my cheek next to my right ear, I suddenly become gloriously awake. Ten minutes into my day, I am paying utmost attention. The sandalwood aroma fills my nostrils, the steam rising from the sink caresses my skin, and most extraordinary of all, as I run the blade down my cheek it makes a tiny and distinct plink with each hair that it encounters, amplified by the tension of the blade held in the steel jaws of the razor. This experience simply doesn’t happen with a cartridge razor, let alone a whirring electric shaver. Only a single sharp blade can give you the sound of every one of the hairs on your head being numbered.
In the logic of high technology, the fundamental premise is our incapacity. We are tired, fuzzy (in mind and face), and in need of a simple, safe, efficient solution. Gillette’s army of engineers go to work, and place in our hand “the best a man can get.” But there is another kind of logic—call it the logic of the blade. The double-edged razor blade, of course, is technology too, of quite an advanced kind. But the blade does not exist to underwrite our fuzzy, lazy, half-asleep lives. It requires something of us—discipline, skill, patience. The fundamental premise of the blade is that we can learn to handle fearsome things in delicate ways.
The cartridge razor is safe, but it is ultimately dull. The double-edged razor, with apologies to Aslan, is not safe, but it is good. It is good to be at risk. It is good for me to face myself and hear the myriad plinks of each hair being numbered and shorn. It is good to wake up.
“After the age of forty, every man is responsible for his own face.” This aphorism, most commonly attributed to Albert Camus, was comforting when I heard it in my twenties. I was not given an easy face to be responsible for—angular, pale, and regularly visited, well into my thirties, by the acne that I was once assured was a passing adolescent tribulation. I took comfort not just in Camus’s implied absolution for those faults dredged up from the Bennett and Crouch gene pools, but also in the promise that a life well lived might change my countenance, if not into that of a movie star, at least into a soft grandfatherly handsomeness. Forty also seemed comfortably far away, as distant a milestone as 21 once was. If nothing else, Camus’s quote was yet another way to prolong adolescence (the state of mature irresponsibility, with or without acne) well into adulthood.
Now I am 38, two years away from Camus’s benchmark. The only remaining milestone for maturity I can think of is fifty, the age of AARP membership and annual prostate exams. I am not sure I still find Camus’s aphorism comforting. And thanks to Google I now know that Camus may not have meant to comfort me in any case, since the most common form of the quote begins with a word I never heard in my twenties: “Alas, . . .”
Alas, unless the next two years bring a sudden Botox-like transformation, the face I will be responsible for in my forties and beyond has quite as many faults as the one I was not responsible for in my twenties. And without a doubt, wet shaving has only made me more conscious of the face I am about to be responsible for. Every morning I stand in front of the mirror, naked from the waist up, and spend a good ten minutes peering intently at every angle and curve, every wrinkle and blemish. This is not, fundamentally, an encouraging experience.
I am not young anymore, I find myself thinking on these mirror mornings. I am not old, either, but I am old enough to be responsible. What have I done? What is there left for me to do? I have had a good, even wonderful, life so far, with vastly more than my share of blessing. I suspect I am far happier than Albert Camus. But who can go through forty years of this life, any life on this beautiful cursed earth, and not say, “Alas”?
But if Camus’s slogan is no longer comforting, it has become bracing. Just in time, at the age of 38, I have learned how to shave. I have become responsible for my own face.
Last summer I began reading the Odyssey to my eight-year-old son. Every night Timothy and I huddled up close on the sofa and opened up another chapter of Telemachus’ and Odysseus’ adventures. The Odyssey is, among many things, the story of a father and a son, and especially about the bonds between a father and son forged in shared danger, adventure, and triumph. It is also, like its counterpart the Iliad, a book about what it is to be a man: like Odysseus, a man who goes away a strong young warrior and returns, decades later, somehow something more, and like Telemachus, a youth who grows into the likeness of his father. It is also, especially to an eight-year-old, a rollicking good story, with one of the features children like most, repetition. To his delight, Timothy quickly recognized a distinctive feature of Homer’s poetry, the stock phrases, epithets, and even whole passages that recur again and again. Somewhere around book eight, he observed, “Dad, these guys take a lot of baths.”
Indeed they do. Homer’s heroes bathe because they feast: no scene of feasting in the great halls of an Achaean king is complete without the visit to the bathchamber before the meal. The Iliad, the book of war on the shores of Troy, has almost no such scenes. Its men are at war, and too busy to bathe. But the Odyssey, though not without its adventures and battles, is a book that celebrates the man at home—the pleasure of the bath, the board, and the bed. Just offstage and never forgotten in the poem is the murderous bath Clytemnestra and her lover Aigisthus prepare for Agamemnon, a cautionary tale that reminds the heroes that baths can be dangerous and vulnerable places, and that the home requires, in its own way, as much valor and steadfastness from both husband and wife as the battlefield.
As if to accentuate the power and danger of the bath, more often than not the baths in the Odyssey conclude with a phrase like this, after the hero has been scrubbed, clothed, and anointed with oil: “He emerged from the bath chamber, like in appearance to the immortal gods.” There is here, I suspect, the simple awe of a good bath that is known only to a culture where ordinary people may never have had a proper bath in their lives; there is also the sense that the hero is most fully himself, most godlike and most manlike, when he has submitted to the intimate ministrations of a bath, has been made new by making himself vulnerable to the hospitality of his host.
It will sound like madness to say it, but when I have rinsed the lather from my face and splashed it with intensely cold water, when I have patted my face dry with a towel and rubbed in the lotion to protect the newly exfoliated skin, smooth and supple—I have some sense of what Homer meant. On a good day, a good close shave is the Iliad and the Odyssey in one: the mastery of the dangerous blade, the return to the comforts of home. To shave well is to be a man, and to be a man is closer than Homer could ever have imagined to being like in appearance to the immortal gods—as Psalm 8 put it, “a little lower than the angels,” and as Genesis put it, made in the image of God.
Yet just as the Iliad is a very strange sort of poem about war, diffident about heroism even as it celebrates it, so the Odyssey is a very strange poem about homecoming, given how often and in how many ways Odysseus seems to defer his arrival. A modern reader cannot help but be troubled by the year Odysseus spends in the house (and the bed, and the bath) of Circe, and the way he pursues every possible detour, at great cost to his shipmates, on the circuitous path back to Ithaca. Even his return home brings delay after delay, long after the bath where he is recognized by his old nurse for the first time. Odysseus and his tragic counterpart Achilles are our archetypes of manhood, for better and for worse: our capacity for valor but also our tendency toward wounded depression and inward-turning rage; our longing for home, hearth, and bed but also our bent toward wandering and lingering far from home, prolonging our adventures while our wives and children wait for our return.
Every bath, vulnerable and naked, warm and wet, is a return home, but every man is Odysseus, prone to wander. Every shave, leaving our skin as smooth as it was before we became men, before the years crept in, before we were, alas, responsible for our own face, simultaneously restores us and reminds us that something has gone wrong. We are, if we are fortunate, more than we expected, but not quite what we hoped to become.
Our final redemption will be, I think, a razor’s-edge experience. Like so many modern wanderers, Camus was both right and wrong. We will not ultimately be responsible for our own face. If the gospel is true, this life, where we face ourselves in the mirror and take responsibility for all we see there, is rehearsal for another. And that life will begin, if I read St. Paul correctly, with a very close shave, the best a man can get. Another will be the barber. If we have practiced well, we will know what is coming: the blade will be applied at just the right angle to shear off the stubble. It will be terribly sharp and terribly close, but wielded with tremendous skill and care, it will divide who we truly can become from what we were never meant to be. Then cold water will splash against our skin; fragrant oil will leave us glistening and new. We will arise and go, godlike, to the feast.
Andy Crouch is a writer and editorial director of the Christian Vision Project at Christianity Today.
Copyright © 2006 by the author or Christianity Today/Books & Culture magazine.
Betty Smartt Carter
The Booker Prize-winning novel by John Banville.
I learned a lot from John Banville’s latest novel, The Sea. For instance, I can rattle off five new terms for bodily excretions, including “particles of nether-do”; I can diagnose a case of “grog blossoms”; and I’m aware that “ichor” refers not only to the liquid flowing through the veins of the gods but to that watery stuff that leaks out of a paper cut. I also know that “strangury” is a mystical-sounding word for slow urination (yes, this is a book about growing old).
Such discoveries mostly delight me, but others find Banville’s writing pretentious and remote. “Banville’s famously torrid affair with his thesaurus,” writes Jessica Winter of the Village Voice, “has previously birthed erudite but emotionally delimited characters … but The Sea nudges this pathos toward parody.” It is true that if he paid a fine for every time he broke the writer’s rule of ordinary language, Banville would have to mortgage his Booker Prize. It’s unfair to say, though, that he dotes on words at the expense of human feeling. In fact, it’s the preening language of The Sea that most reveals its hero, a man both vain and emotionally broken.
Max Morden is an art historian of no particular genius. (“As for us middling men,” he says, “there is no word sufficiently modest that yet will be adequate to describe what we do and how we do it.”) For years he’s been “mired” in a monograph on the French artist Pierre Bonnard (1867–1947), famous for his many portraits of his wife Marthe in the tub: “Brides-in-the-Bath,” Max’s wife Anna calls him. At their oceanside home, Anna herself spends long hours in the bath, soothing the pain of her cancer. Max sometimes worries she’ll drown accidentally: “I would creep down the stairs and stand on the return, not making a sound, seeming suspended there, as if I were the one under water.” Guiltily, he half wishes she’d go on and get it over with, for both their sakes. When she does at last die, in the hospital, he finds himself drowning in grief, past and present.
Led by a dream, Max leaves his present home and wanders back to Ballyless, the seaside town where he spent boyhood holidays. He moves into a local boarding house, the Cedars, which was once the private residence of his close friends, the Graces. Their presence lives on in the ministrations of the landlady, the elegant Rose Vavasour, whom Max remembers as their governess. Miss Vavasour has just one other tenant, Colonel Blunden, a lonely soul with a military haircut and a love of peppery condiments. Max is skeptical about the Colonel’s cartoonish habits and half-wonders if the soldier identity is a ruse, a façade.
Max has good reason to be skeptical about the hapless Colonel, since he’s been reshaping his own identity for his entire life. As a child, staying with his working class parents in their rather primitive “chalet” at Ballyless, he felt ashamed of their ordinariness, consciously distancing himself from them: “I did not hate them, I loved them, probably. Only they were in my way, obscuring my view of the future. In time I would be able to see right through them, my transparent parents.” Then his father left the family and his mother sank into maudlin despair. Max spent a miserable adolescence shut away with her in a depressing series of claustrophobic flats, waiting on his father’s monthly posts.
It was the prosperous family Grace that claimed Max’s early devotion and affection. In their upper-class vacation home they seemed like gods descended to earth, more living and solid than ordinary humans. First to arrive in his consciousness were the parents: Carlo Grace (father of the gods), satyrlike and mocking, and Connie Grace, an enveloping earth-mother who fed Max’s first erotic fantasies. Through a mythopoeic shift of affections, he one day ceased to love the mother and fell madly in love with the daughter, a nymphlike ten-year-old named Chloe with a portentous green tinge to her teeth. Chloe’s twin brother Myles was an otherworldly creature. His webbed toes made him seem a kind of godling, and he never spoke a word, either because he wouldn’t or couldn’t. Max imagines that Myles whispered with Chloe at night, the two of them enjoying their private joke on the world: they seemed like two halves of one person.
Max weaves his reminiscences of the Graces together with painful memories of life with Anna. From the time he met her, he saw his Anna as a demi-goddess. Tall and statuesque, beautiful in a fierce way and also gracious and beneficent, she conferred a little of her divinity upon him, making him feel set apart from the mortals in their company:
Ah those parties, so many of them in those days. When I think back I always see us arriving, pausing together on the threshold for a moment, my hand on the small of her back, touching through brittle silk the cool deep crevice there, her wild smell in my nostrils and the heat of her hair against my cheek. How grand we must have looked, the two of us, making our entrance, taller than everyone else, our gaze directed over their heads as if fixed on some far, fine vista that only we were privileged enough to see.
Max finds a parallel between the inseparable twins, Myles and Chloe, and himself and Anna. He and Myles are the changelings, creatures without their own voices, without names except those given them by the greater gods. In his case, he knows that he used Anna from the start to invent himself as he wanted to be.
From earliest days I wanted to be someone else… . I never had a personality, not in the way others have, or think they have. I was always a distinct no one, whose fiercest wish was to be an indistinct someone… . Anna, I saw at once, would be the medium of my transmutation. She was the fairground mirror in which all my distortions would be made straight.
Of course, Max also loved Chloe once; she was a forerunner to Anna, and he speaks of losing her and Myles (I can’t give away too much here) as “the departure of the gods.” In losing Anna, he’s again lost the object of his devotion. The gods, in their weakness, have abandoned him again. This leads him to the conclusion that death is ultimately all there is—death symbolized in the swelling sea, maternal and yet capricious, bitterly jealous and yet indifferent. There is no defeating death: it draws gods and mortals alike into its arms.
In Banville’s previous novel, Shroud, he used the Christian legend of the Shroud of Turin as a metaphor for a man who existed only as the shadow of someone long dead and unknown. In The Sea, he employs some of the same character-types and plot elements: a young man reinvents himself, eventually becoming a wraithlike old man who is haunted by the youth he was. This time, Banville textures his story with references to pagan mythology: Miss Vavasour tends the Graces’ old house like a temple virgin, Anna appears in the narrative like Athena descending into an ancient epic, and then of course there’s the wine-dark sea itself, bounty and bane of ancient mariners. Max himself makes no end of drowning allusions: no wonder Banville gives him the surname “Morden” (“What sort of a name is that?” Chloe asks).
I do think that Banville may be straining a bit. When a writer constantly invokes the transcendent to picture the ordinary, won’t something eventually be lost? Either the myths will lose their power, or the daily perils of human life will seem (perhaps ridiculously?) overblown. After all, how much symbolic weight can a paper-cut bear?
It’s hard to argue, though, with Banville’s gorgeous prose, so beautifully anchored in layers of mystery and metaphor. And whatever critics may say about overwrought language or lofty erudition, The Sea gives us a very human, very fragile hero. Max’s emotions penetrate the narrative, first leaking out in a poetic drip (one might almost call it a case of literary strangury), then swelling into a steady rush of pain and grief. A compassionate reader will overlook any artifice here and recognize a drowning soul—someone who has clung far too closely to those he once loved as gods, only to be shipwrecked and alone at the end.
Betty Smartt Carter is a novelist living in Alabama.
Lauren F. Winner
New fiction from Vinita Hampton Wright.
You don’t have to read very far in Vinita Hampton Wright’s new novel, Dwelling Places, to conclude that all is not well. In the first six pages we learn that the patriarch of the Barnes family is dead, that the farm is “long since gone,” that Mack, the elder and only surviving Barnes son, is wrapping up a stay at a mental hospital. Not to mention that widow Barnes’s car seems to be on its last legs—just another stress in an already heavily burdened life.
The Barnes family of Beulah, Iowa, has been battered by the farm crisis of the late 20th century. Taylor Barnes died a decade before the novel opens, in a farming accident that may not have been so accidental, after all. His younger son Alex was forced to sell his farm for a pittance, and subsequently drank himself to death. Mack realized he had to get out of farming, too, and he and his wife now work in town, though they still live, with their two teenage children, in the old family farmhouse. The unexplored grief of giving up a beloved way of life has taken its toll on Mack, and he has been sunk in a great, gray depression. Only his wife, heroic, strong Jodie, with the loving and occasionally co-dependent help of her mother-in-law Rita, is holding the family together.
Jodie, of course, is not as heroic and strong as she appears. Indeed, the subtleties of her character are the narrative and psychological strength of the book. She does hold the family together, but she begins to lose herself in the process. In the chaos of work at the school cafeteria, tending to the needs of her kids (one’s gone Goth; the other has immersed herself in the youth group at the Baptist church), trying to reach her husband, making sure the bills are paid—in the midst of all this, she notices that she’s attracted to a co-worker, a teacher named Terry Jenkins. And she notices he’s attracted back, and that his attentions and the attendant frisson feel good. After a little requisite hand-wringing—”This is a bad idea. A really bad idea,” she tells herself—Jodie gives herself over to the “lawless pleasure” of an affair.
Wright’s growth as a novelist has been exciting to watch. Her first novel, Grace at Bender Springs (1999), took readers to a Kansas town suffering through both literal and spiritual drought. The novel was edgier, a little less pat than most novels published by Christian houses. As Publishers Weekly reported, “Grace at Bender Springs had made it through the entire editorial process at Christian publishing house Multnomah and was ready to go to the printer when the company’s executives decided at the 11th hour to pull the plug. Although the novel contained no foul language or explicit sex scenes, its realistic characters and dark tone made it a risk for the Christian market.” The novel was picked up by Broadman & Holman and published to critical acclaim.
A year later, Broadman & Holman brought out Wright’s second novel, Velma Still Cooks in Leeway. One of many small-town Christian novels that came out that year, Velma departed from the familiar trajectory in which the culmination of the novel is someone’s conversion. Velma, by contrast, explored a community of good Christian folk grappling with real sin—in this case, rape and domestic abuse—in their midst.
As Velma Still Cooks in Leeway moved beyond the conventional conversion plot to the story of life after the altar call, so Dwelling Places eschews the traditional marriage plot, in which anxious courtship moves the novel forward and everything comes to a happy conclusion when the protagonists tie the knot. Dwelling Places is a story of what happens after—way after—the wedding day. The town in which Wright has set her story is aptly named: Beulah means “married,” and at the heart of this novel is the story of what happens in the belly of a loving, strained, frayed, tested marriage.
I am a fan of all of Wright’s novels, but it is undeniable that each book has been better than the one before, each new novel more adroit and artful than the last. Dwelling Places is not didactic. The novel hardly suggests that adultery is a good antidote to marital woes, but Wright does not seem to think that her principal job as novelist is to heavy-handedly condemn or condone what she bluntly names as Jodie’s sin. There is narrative sympathy here; the reader will find that she understands, if she doesn’t approve of, Jodie’s affair. Dwelling Places, in other words, is almost as complex as real life.
Wright has grafted her novel onto the old hymn “O Thou, In Whose Presence,” and each of the book’s five sections is keyed to one of the hymn’s stanzas. As the hymn begins with an afflicted psalmist crying out to God but concludes finally with a joyful commitment to “hear and . . . follow Thy call,” so too Dwelling Places begins deep in the valley but arrives at a happy ending. Or, at least, a relatively happy ending. Even as Jodie sticks to her decision to break things off with Terry, she second-guesses herself; even though she and Mack are making a go of things, Jodie realizes that she’s “misplaced her heart, or maybe she’s locked it away,” and “until it comes back” her marriage won’t be set aright. A local pastor creates a special worship service designed for families who have said goodbye to farming. The service is cathartic, healing in the best, truest, sense of the word; but Rita Barnes refuses to attend, and even for Mack and Jodie, one church service can’t be a panacea.
Dwelling Places will inevitably be compared to Jane Smiley’s Pulitzer Prize-winning A Thousand Acres. Both are finely grained family stories set in Iowa farm country. Jodie and Mack’s marriage is reminiscent of Ginny and Ty’s ordinary, flawed union, and Jodie’s affair recalls Ginny’s dalliance with the handsome prodigal son Jess. But whereas the precipitating crisis in Smiley’s novel is sexual abuse, in Wright’s it is the collapse of the rural farm economy. In the 1980s, due in part to high interest rates, low commodity prices, and the Reagan Administration’s preference for directing dollars to big agribusiness, family farms failed throughout the Midwest. In 1979, Iowa was home to 121,000 small farms; by 2000, the number had dropped to 93,500. Unsurprisingly, child abuse, alcoholism, and divorce all rose precipitously in the Midwest farm belt during these years.
There is something refreshing about Wright’s contextualizing her family saga in the wider social and political context of the farm crisis. This is not to say that sexual abuse is merely a private sin; Smiley, I’m sure, would quickly point out that it is embedded in the social problem of patriarchy. Nonetheless, the backdrop of widespread farm failure rescues Dwelling Places from the myopia that afflicts so many domestic novels.
An important subtheme in Dwelling Places is the failure of the local church to meaningfully respond to the farm crisis. The Barnes family had long been active in their local Methodist church, but the church was worse than useless when things started to go downhill. Jodie found the shame of being unable to contribute to the Sunday collection plate unbearable. Though bankers in the congregation were foreclosing against the farmers with whom they shared the pew, the pastor was silent. Rita Barnes finally called out Reverend Sipes, insisting that he had a moral obligation to chastise the “two deacons who’ve plumb taken advantage of every farmer in this county.” In the face of Rita’s righteous discontent, Sipes maintained that the bankers had broken no laws and that the town’s economy was “not my business.” After that conversation, Rita stopped volunteering at the church, and eventually the Barneses switched to the Methodist church in the neighboring town.
Without grinding an ax, Wright’s damning portrayal of polite churchy obtuseness is undeniably prophetic, and will force Christian readers to ask whether their own church community is keeping mum about the farm crises in their own backyard.
Lauren F. Winner is the author most recently of Real Sex: The Naked Truth About Chastity (Brazos).
Stephen Lake
Torture lite.
Through the reflective glass, we see the suspect in the adjacent interrogation room. He is sitting erect, wearing a black leather jacket, looking unnaturally confident, even tough, while cuffed to a chair. A cold blue ribbon of neon light casts an eerie glow on the room.
Torture and Eucharist: Theology, Politics, and the Body of Christ
William T. Cavanaugh (Author)
Wiley-Blackwell
286 pages
$58.75
“Remember,” Agent Dessler says to her colleague, “he’s an ex-Marine. He won’t cave easily.”
Eyes fixed, face stern, Agent Manning replies, “I just need to establish that even though we are in a government building, I’m willing to go as far as it takes.”
“How are you going to start?” Dessler asks.
“I’ll use Richards”—and the camera pans to a man lurking in a dark corner of the room.
“OK,” answers Dessler.
Agent Richards, carrying a pale blue case, follows Manning into the interrogation room.
The cocky suspect, Joe Prado, announces to Manning: “Told you, I’m not talkin’ to you, pal. I haven’t been charged with anything. I haven’t done anything wrong.” Prado was just seen assisting a known terrorist, whom he inexplicably shot and killed.
“We both know that’s a lie, so let’s not waste each other’s time. Whaddya say, Joe?”
“I’m not sayin’ anything to you,” Prado insists.
And with that, Richards flips open the case and pulls out a syringe.
If you watch Fox Television Network’s edge-of-your-seat anti-terrorism series 24, you will recognize the above scene from season four. You will have also been to the interrogation room before. Numerous times. In this past season alone.
The fictitious Counter Terrorism Unit (CTU) portrayed on 24 responds with any means necessary to terrorist crises. It is customary for Richards to administer the “truth serum” and various sensory deprivation techniques, while lead character, Agent Jack Bauer (Kiefer Sutherland), will beat, shoot, and otherwise employ extreme measures to force suspects to disclose crucial information. Coercion, including torture, is routinely applied and justified by the logic of the so-called “ticking time-bomb.” The fourth season of 24 delivered the ultimate apocalyptic storyline: CTU raced to stop Islamic terrorists from launching a nuclear warhead at an American city. Under those conditions, who could object to breaking the law—and a few of Joe Prado’s fingers?
With Hollywood-style Realpolitik, 24 raises old questions in a new context. Questions about “dirty hands” or the “lesser evil” are probably as old as politics itself. Sometimes it appears as if the expedient, or even good, thing to do may require recourse to immoral means. The name of Machiavelli is forever associated with such casuistry. The new setting of such questions—countering terrorism—has provoked a lively debate about the choice between security and civil liberties and is now associated with the names Abu Ghraib and Guantánamo Bay. May a person be coerced to the breaking point if our national security appears to demand it?
Most of the time, most of us think, governing does not require such choices because with a little imagination and a little cunning—all within the bounds of morality—laws can be passed and enforced. And ought to be. But what if the post-9/11 world is, somehow, different? If our enemies do not play by the rules, must we? Or may we bend them? It is here that our normal moral intuitions are put to the test.
Sanford Levinson’s anthology Torture: A Collection showcases the robust debate about the lesser evil, in which Christian voices are playing a small but not insignificant part. The most provocative proposal comes from Harvard law professor Alan Dershowitz. He advocates court-ordered torture warrants for cracking the tough nut in a time of emergency. Most contributors, however, are reluctant to normalize torture through an official warranting process. Judge Richard Posner, for instance, prefers that courts consider the “necessity defense” in prosecuting government agents who go too far in fighting terror. Elaine Scarry, who launches a full-scale critique of her Harvard colleague, argues that a principled appeal to necessity may be morally, if not legally, exculpatory.
The prominent human rights theorist Michael Ignatieff takes on these issues in his Gifford Lectures, published as The Lesser Evil: Political Ethics in an Age of Terror. With Rawlsian aplomb, he develops stringent criteria under which we might undertake the lesser evil calculus. But he has not pleased anyone. Critics on the left accuse Ignatieff of selling out to the Bush Administration1; but later, much to his outrage and dismay,2 the kinds of considerations he advocated appear to have lost out entirely during internal debates about detainee treatment at Abu Ghraib and Guantánamo Bay.
But what about the followers of Christ? What does the Christian ethicist have to say about these matters? In general, we acknowledge all humans as bearers of the image of God. Our Lord further instructs us not to repay evil with evil and to love our enemies (Matt. 5). It is respect for these teachings that motivates both Christian pacifist and just war traditions in their common insistence that the enemy be treated fairly and humanely, and that even in a time of war there are certain means—such as torture—that are evil as such (malum in se). For Christians, even in terrorist emergencies, is there ever a place for the lesser evil?
In her contribution to the Levinson anthology, University of Chicago professor Jean Bethke Elshtain confesses:
Before the watershed event of September 11, 2001, I had not reflected critically on the theme of torture. I was one of those who listed it in the category of “never.” It did not seem to me possible that the United States would face some of the dilemmas favored by moral theorists in their hypothetical musings on whether torture could ever be morally permitted. Too, reprehensible regimes tortured. End of question. Not so, as it turns out.
For Elshtain, the Christian ethicist now ought to reexamine the case for the lesser evil more carefully, recognizing the harsh and dangerous world in which we live.
While Elshtain decries Dershowitz’s torture warrant proposal as “a stunningly bad idea … up-ending the moral universe: that which is rightly taboo now becomes just another piece in the armamentarium of the state,” she admits that “there is no absolute prohibition to what some call torture.” Her primary concern is definitional. There is a problem, she argues, with “the word itself,” torture: “If everything from a shout to the severing of a body part is ‘torture,’ the category is so indiscriminate as to not permit of those distinctions on which the law and moral philosophy rest. If we include all forms of coercion or manipulation within ‘torture,’ we move in the direction of indiscriminate moralism and legalism—a kind of deontology run amok.” For the sake of precision, she argues, we ought to limit use of the term “torture” only to horrific torments that everyone would consider as such: rape, mutilation, electrical shocks, the rack, crucifixion, and cruelty to a suspect’s spouse or children. Just as there are degrees of murder, from manslaughter to murder in the first degree, so too there is a range of coercive tactics. Thus she endorses Mark Bowden’s notion of “torture lite,”3 admitting that shouting, trickery, sensory and sleep deprivation, hooding and stripping, and even moderate physical coercion (slaps, shoves, collaring, etc.) may be allowed. For when it comes to defending innocent lives from terrorist attack, it is “moralistic ‘code fetishism'” to proscribe all forms of what the Geneva Conventions call cruel, inhuman, or degrading treatment.4
In short, Elshtain appears to sympathize with the legal position eventually taken by the Bush White House, that there is a real difference between banned torture and legitimate torture lite. Ultimately, she concludes, “I would want officials to rank their moral purity as far less important in the overall scheme of things than eliciting information that might spare my child or grandchild and all those other children and grandchildren.” They should also be prepared to defend their actions in court—and pay the penalty, if unjustified.
What are we to make of Elshtain’s position? The first thing to note is that she bases it mainly on an appeal to our common sensibilities. It has the feel of Justice Potter Stewart’s definition of obscenity: I know it when I see it. She does not (and I suspect it is not possible to) offer a univocal lexical definition of what counts as “torture” versus “mere coercion,” so we could sort the offending acts from the benign. This is a strange feature of a distinction she claims ought to serve for greater legal and moral precision.
She also neglects the phenomenologically richer description John T. Parry offers when, surveying the history of the practice, he places torture in its more complex political, sociological, and psychological contexts.5 Torture involves a relationship of domination and submission; it serves to fragment social groups and instill fear in the surrounding population; it is, as Elaine Scarry has argued, “world destroying” for the subject.
This raises, I think, a more significant question for Elshtain’s distinction: Is it not possible that the cumulative effect of many acts of torture lite would amount to torture proper? A steady diet of hooding, sleep and food deprivation, nakedness and shame, exposure to severe temperatures, deception, and intimidation can surely have the effect of creating servility, fostering an environment of fear, and destroying a subject’s world. It is telling that Elshtain tends to associate torture with singular acts of extreme physical torment; but if Parry and Scarry are correct, the cumulative effect of persistent torture lite—which plays as much on the mind as on the body—can be equally devastating to the person as a whole.
Christian ethicists (including Elshtain herself) hold that the image of God resides in the whole person, who is a complex, integrated whole of body and mind. If this richer understanding of coercion is correct, it might then seem better to draw the lines precisely where the Geneva Conventions did, putting torture and torture lite in their respective categories while proscribing both. Even if necessity drives agents beyond the pale, even if our courts allow for such a legal defense, the moral line remains clear in this murky terrain. To my mind, this line of reasoning hardly counts as “moral code fetishism”—least of all for the Christian ethicist.
An emerging voice in Roman Catholic theology strikes a markedly different tone. In Torture and Eucharist: Theology, Politics and the Body of Christ and more recent publications,6 William T. Cavanaugh admonishes Christians to reject lesser evil thinking entirely. The Christian’s response to torture, he maintains, is to unite as the body of Christ and to practice, instead, a Eucharistic politics of peaceable resistance to power. In the Eucharist, the Church is united as the body of Christ in celebration of the Lord Jesus Christ, tortured and slain by the powers of this world. In his own characteristic words:
The job of the church is to tell the truth: this is not an exceptional nation and we do not live in exceptional times, at least as the world describes it. Everything did not change on 9/11; everything changed on 12/25. When the Word of God became incarnate in human history, when he was tortured to death by the powers of this world, and when he rose to give us new life—it was then that everything changed. Christ is the exception that becomes the rule of history.7
Eucharistic resistance responds differently in the face of terror. By it,
we are made capable of loving our enemies, of treating the other as a member of our own body, the body of Christ. The time that Christ inaugurates is not a time of exceptions to the limits of violence, but a time when the kingdoms of this world will pass away before the inbreaking kingdom of God.
A crucial feature of Cavanaugh’s approach is to contextualize torture. He would agree with Parry’s broader understanding of the phenomenon, and go further. Theologically and historically, torture must be located within the modern state’s battle for political supremacy over other competing authorities—especially over religion. Here he builds on a controversial narrative he developed some years earlier,8 which rejects the common view that the Wars of Religion necessitated the rise of the modern secular state as an adjudicator of conflict and keeper of the peace. Rather, “what was at issue in these wars was the very creation of religion as a set of privately held beliefs without direct political relevance … [which] was necessitated by the new State’s need to secure absolute sovereignty over its subjects.”9
In Torture and Eucharist, Cavanaugh illustrates this point through a case study of torture and the Roman Catholic Church in Pinochet’s Chile. He argues that in typical modern fashion, the church tragically accepted a lesser evil tradeoff with the modern, secular state. In a kind of Gnostic deal with the devil, the church gained conditional “spiritual authority” over “Chilean souls” so long as it did not contest the state’s unconditional sovereignty over the body and the means of physical coercion. But eventually, when the cycle of violence threatened to destroy the Chilean republic, the church corrected course. It rejected the terms of the modern compromise and sought to recover its Eucharistic unity as the body of Christ. Only then was it finally capable of resisting the practice of torture by the Pinochet regime—and helping to bring it to an end.
The message is clear: If the post-9/11 world forces such choices on the body of Christ, we ought to reject them, too. Instead of security at all costs, Christians ought to embrace our nation’s friends and enemies—and reject torture as a means to our own security.
Cavanaugh’s message is provocative and stirring. At its best, it is, I believe, a prophetic call to the church to move to a deeper and more rigorous denial of a politics rooted in violence. Nevertheless, I have yet to find in his writings10 a systematic reckoning with the Pauline teaching in Romans 13, that the governing authorities are God’s “agents … who do not bear the sword for naught.” Cavanaugh readily embraces Paul’s teaching about the body of Christ and its mission, but what of his teaching about the state and its mission? It may be that the state often immorally rushes to violent means, but does violence ever have a role? Elsewhere, Cavanaugh appears to endorse just war ethics,11 but it is not clear how its affirmation of restrained political violence in the service of justice fits within his Eucharistic politics, which appears to eschew violence entirely. Perhaps the most plausible interpretation is that he wants a more rigorous, morally idealistic application of just war thinking than Elshtain, George Weigel, or some other theorists have offered. But for that application to have a more solid grounding, I think Cavanaugh will need to integrate Paul’s teaching about the body of Christ with his teaching about the limited sovereignty over earthly affairs that God does grant to the state.
Stephen Lake is chairman and assistant professor of philosophy at Trinity Christian College in Palos Heights, Illinois. This essay draws from a larger research project, entitled “Ethics After 9/11,” graciously funded by a Trinity Summer Research Grant. Visit him on the web at drlake.blogspot.com.
1. See, for example, www.opendemocracy.net/democracy-americanpower/jefferson_2679.jsp
2. “Mirage in the Desert,” The New York Times Magazine, June 27, 2004.
3. A term Bowden brought into common currency with his important article, “The Dark Art of Interrogation,” The Atlantic Monthly, October 2003.
4. Unfortunately, Elshtain mistakenly asserts that the Geneva Conventions make “no distinctions of any kind” between torture and these lesser forms of coercion such as shouting, sleep deprivation, and the like. But as John T. Parry’s essay in the Levinson volume makes clear, mere “cruel, inhuman or degrading treatment or punishment” is placed by the Conventions in another category, which, however, signatories are also bound to “prevent.”
5. Levinson, ed., p. 152ff.
6. See, for example, “Taking Exception: When Torture Becomes Thinkable,” The Christian Century, January 25, 2005, p. 9.
7. Ibid., p. 10.
8. See “‘A Fire Strong Enough to Consume the House’: The Wars of Religion and the Rise of the State,” Modern Theology, Vol. 11, No. 4 (Oct. 1995), pp. 397–420.
9. Ibid., p. 398.
10. Including his book Theopolitical Imagination: Discovering the Liturgy as a Political Act in an Age of Global Consumerism (Continuum, 2002).
11. See “At Odds with the Pope: Legitimate Authority and Just Wars,” Commonweal, May 23, 2003: p. 11.
Copyright © 2006 by the author or Christianity Today/Books & Culture magazine.
John H. McWhorter
Composing for cartoons.
For most people who grew up enjoying the music in Warner Brothers cartoons on television in the 1960s and 1970s, what stood out were the classical music parodies such as What’s Opera, Doc‘s reduction of the Ring Cycle, and the magnificent Rabbit of Seville. But there have always been oddballs who got a kick out of the ordinary pop music swinging, sliding, and sparking along under the action.
I was one of those as a kid. Watching the cartoons week after week, I started wondering, for example, “What is that little song that always plays when somebody falls into a lot of money?” My father, a natural song encyclopedia, told me that it was the Twenties hit “Lucky Day.” I went down to the Philadelphia Free Library to xerox the sheet music, the beginning of a lifetime’s delight in, first, vintage pop, and second, the marvelous work of the man who put the Looney Tunes scores together, Carl Stalling. Stalling’s musical accompaniment is as deft as Max Steiner’s or Ennio Morricone’s, and is aural ambrosia besides.
Daniel Goldmark’s Tunes for ’Toons is a book-length treatment of Hollywood cartoon musical scoring, and naturally gives Stalling pride of place. Yet Goldmark’s take on cartoon music is the rare one that finds fault with Stalling’s approach. He enjoys Stalling as much as anyone but considers his reliance on pop tunes a lazy fallback. For Goldmark, Stalling remained always the silent film pianist, endlessly mining the dingdong joke of linking each screen event to a pop tune whose title corresponded to the action.
Goldmark considers the joke not only overused but underpowered. Many of the tunes Stalling used were not timeless standards but merely ephemeral top-forty bonbons or even third-string ditties that never made any major mark, now recognizable only by the elderly or hard-core vintage pop buffs. And even when the cartoons were new, audiences still likely missed the jokes as often as they caught them.
Goldmark’s general perspective in the book hinges on the question of how music can be rendered to highlight film narrative. For Goldmark, MGM’s Scott Bradley was more artistically advanced than Stalling in this vein. Because MGM owned fewer songs than Warner Brothers, Bradley had to compose more original music for Tom and Jerry and Tex Avery cartoons. Moreover, Bradley, who wrote some concert pieces when on breaks from scoring cat-and-mouse chases, was given to wishing that he were allowed to compose scores that cartoons were built upon rather than the other way around.
Bradley, then, aspired to what was then called “program music,” in which classical music was intended to tell a tale along the lines of Gershwin’s An American in Paris. Compared to this, we could see Stalling’s stringing together of pop tunes as unambitious. Stalling, indeed, wrote no concert pieces and, in an interview near the end of his life, displayed none of Bradley’s academic aspiration.
Goldmark’s research is impeccable, based on examining Stalling’s written scores. He is one of those rare people who can identify even the most arcane of the songs Stalling used, and he documents the roots of Stalling’s technique in the books silent film pianists relied on, which gave suggestions as to effective accompaniment. Cartoon music is vastly undercovered, given the effort and the art that have often gone into it, and Goldmark’s book is therefore especially welcome.
Yet in the end, my sense from a lifetime of gorging on both Warner Brothers and MGM cartoons is that in terms of making music tell a story, Goldmark undervalues Stalling’s achievement and overvalues Bradley’s. Stalling did not simply play tunes under the action but reconceived them—and more richly than just “stretching and squeezing” them, as Stalling fans often put it.
For example, in Little Red Riding Rabbit (1944), Stalling scores a chase sequence between Bugs Bunny and the Big Bad Wolf with “They’re Either Too Young or Too Old,” a novelty tune from the year before in which a woman bemoans the fact that World War II has rendered age-appropriate men unavailable. This is indeed one of Stalling’s jokes, in that the other character in the cartoon is a squawky adolescent girl. Likely few viewers caught the joke even at the time—and today, forget it. But even when I myself was a squawky teen and not yet hip to obscuriana such as the name of this song, I always loved the music under the scene just because Stalling scored it so well.
As Bugs and the Wolf go up and down stairs, Stalling has a xylophone figure ascending and descending against the tune to track Bugs’ steps while a bassoon does similarly for the Wolf. When the sequence pauses as the Wolf rips away a door and makes himself look doorlike in order to catch Bugs as he goes through, Stalling bides time with two-bar figures from the song’s extended coda. As the Wolf slowly raises his paw to whop Bugs, Stalling stretches out the final half-dozen notes of the chorus in a creamy legato voicing of the winds alone, as if to say “Watch ou-u-t…!!”—and then extends the last note to five quick staccato plonks for each boink on the head that Bugs delivers to the Wolf instead.
The passage by itself is as much of a joy to hear as a good big-band arrangement. Stalling’s achievement is especially obvious in comparison to his less gifted equivalents, such as the clattering saxophony sludge that Phillip Scheib cranked out under Terrytoons (e.g., Mighty Mouse), or even the journeyman work of men like Winston Sharples for the Popeye and Casper the Friendly Ghost cartoons. Stalling elevated a silent film accompanist’s trick into magic.
And then, Goldmark overstates the contrast between Stalling’s and Bradley’s reliance on pop. Bradley was hardly as chary of it as Goldmark tends to imply. A Tom and Jerry cartoon often includes about as many pop tunes as a Looney Tune; where Stalling always played “A Cup of Coffee, a Sandwich and You” for an eating sequence, Bradley just as dependably used “Sing Before Breakfast” from MGM’s Broadway Melody of 1936, and so on. But even granting that there is a bit more original music in Bradley’s work, Stalling wrote plenty of original tunes, too—and in this respect was actually the larger artist of the two.
Goldmark, for instance, is especially taken with Bradley’s use of a Schoenbergian twelve-tone sequence to accompany Jerry the mouse walking around covered by a fake dog head in Puttin’ on the Dog. Point taken, but in the cartoon, this goes by very quickly and is but a whisper amidst the score in its entirety. In this and his other cartoons, the overall impression of Bradley’s original music was a considered yet busy shrillness. In the early 1990s, two CDs of Stalling’s work were released, and they are still available in stores today. But a CD of Bradley scores, despite being written for cult-fave Tex Avery cartoons, was only available briefly and sank like a stone. The Bradley CD is monotonous and unsatisfying without the pictures. Stalling’s music, even by itself, is immediately identifiable as very special.
My favorite example of original Stalling music is for a scene in Putty Tat Trouble, in which Tweety encounters one of those wooden birds placed on the rim of a glass that on its own tips forward into the water and back again. Stalling could have simply played a bit of something or other for this sequence, such as the “Drinking Song” from The Student Prince. Instead, he underscores the bird with a solo cello line that sweetly parallels the bird’s tipping forward, bouncing gently on the water a few times, bouncing back a few times, and then coming forward again to repeat the sequence. Tweety, thinking the bird is alive, gamely starts miming it—upon which Stalling adds a solo flute line in elaborate counterpoint to the solo cello.
Of course, to give Bradley his maximal due, he worked at a studio where cartoons were notoriously frenetic and violent—Tom and Jerry beating each other up and Tex Avery’s frantic pursuits full of split-second surprises. Rarely did these cartoons pause for ruminative sequences like the dipping bird in Putty Tat Trouble. But in 1941, Bradley got an opportunity to show what he was made of in his ballet Dance of the Weed, for which he composed the score before the cartoon was constructed. And the result demonstrates the limits inherent to music as an independent medium of communication.
The question is, after all, how specifically music can delineate narrative by itself. Music can depict horses galloping in a mimetic rhythmic sense, as Rossini’s William Tell overture is now heard because of its recruitment as the theme to The Lone Ranger on the radio and beyond. But how could music depict someone telling us that his mother had it rough but did her best? Dance of the Weed is hemmed in by that kind of limitation—a twee, post-Victorian confection no one cherishes from seeing it on television as a kid.
Goldmark’s insights on other topics are useful but reflect certain bugbears of today’s ivory tower. For example, we learn that jazz in old cartoons was sometimes intended to signify the primitive essence of black people. True enough, but I am one black American cartoon fan who is unoffended by the jazz entries. The insult is too ancient, broad, and brief to sting, and meanwhile the jazz and the hep goings-on it illustrates are so much fun that it seems more useful to save protestations of racial grievance for more urgent issues. Warner Brothers’ Coal Black and De Sebben Dwarves—big surprise—is not a respectful portrait of black American people. But, especially given that no Looney Tune respects anything, we can admit that Coal Black is also one of the most exquisitely vibrant seven minutes in film history.
At the end of the day, for me Stalling will always be the man who—with the assistance of his undersung orchestrator Milt Franklyn—could make a quick chorus of “Camptown Races” under the credits of a Foghorn Leghorn cartoon sound like a little story in itself. Maybe announcing the farm setting of the cartoon with a hick tune was an easy “joke,” but fifteen seconds like those go a lot further in making life worth living than a ballet of dancing weeds.
John H. McWhorter is a senior fellow at the Manhattan Institute. He is the author most recently of Winning the Race: Beyond the Crisis in Black America (Gotham Books).
Copyright © 2006 by the author or Christianity Today/Books & Culture magazine.
Edward J. Blum
A moral history of the Civil War.
When it came to race relations, Ruth Smith believed that she grew up in a progressive community. Her grandfather, so the family stories went, had hated slavery so much that he rushed to fight in the Civil War, and her church in Howard, Kansas, taught that all people were created and loved by God. After college, Ruth trekked south to Alabama to teach at a school for African American women. There, her entire world was transformed. In the process of everyday living (with the occasional defense against Ku Klux Klan members), Ruth committed her life to social justice. She felt betrayed, though, by fellow white Christians because they so rarely stood against white supremacy.
Perhaps her most painful discovery took place on a visit to her home in Kansas. Ruth found her grandfather’s notebook, in which he had recorded his feelings about the Civil War and his ethical transformation during it. “I had never been prejudiced for or against slavery,” he wrote. “I had imbibed the idea that a professing Christian should not go to war to kill people, and I had decided to teach school and let the sinners fight.” But his friends and minister altered his opinion. “To change my mind they quoted scripture and argued… . ‘Christ said you must be subject to the laws… . We live in a government which is threatened destruction by an army… . Our government says we must protect our country.’ ” Ruth could hardly believe her eyes: Her grandfather’s moral ruminations had little to do with slavery and much to do with a shift from Christian pacifism to belligerent patriotism.1
The cognitive dissonance Ruth experienced when she took up the notebook may be felt collectively by readers of Harry Stout’s Upon the Altar of the Nation. This major reevaluation of the Civil War enlightens and entertains, shocks and saddens, tantalizes and troubles. Stout approaches the war in terms of morality and “just war theory,” and he finds both sides lacking. Both the Union and the Confederacy may have read “the same Bible” and prayed “to the same God,” as Abraham Lincoln said in his second Inaugural Address, but neither engaged seriously with the difficult moral questions of proportionality or discrimination. How much blood was reuniting the nation or obtaining independence worth? Should civilian farmers whose crops made their way to soldiers be held responsible for the conflict’s longevity and hence targeted as combatants? Was it appropriate to teach children to glorify combat and generals? How ethical were field orders that shelled cities or federal directives that justified guerrilla tactics? These questions, often avoided during the war, drive Stout’s study and make his work fresh, invigorating, and powerful.
Religious, political, intellectual, and artistic leaders offered chilling responses when they approached these ethical problems. When considering martial escalation, they repeated the same refrain: more death and more destruction. To justify the carnage, Americans North and South cast the conflict as sacred and elevated it to a cosmic plane. Gore became godly; war became worship; presidents became prophets; soldiers became saints; the nation’s clergy became national cheerleaders; and blood became baptismal water. In this transformation, Stout locates the flowering of an American civil religion. A new patriotism created the United States as a mystical nation that deserved honor, praise, and worship and justified death and murder.
Stout traces the war from start to finish, beginning in the late 1850s and ending in the late 1860s. He finds neither side prepared morally for the start of the conflict. Since no one anticipated a long engagement, no one initiated a discussion about the ethics of the combat. At first, saving the “Union” offered sufficient justification in the North and defending the “homeland” in the South. By late 1862, however, Lincoln believed that the North needed a new ethical imperative to heighten the level of combat, and he crafted one. The Emancipation Proclamation, which freed all slaves in rebellious territories but did little to touch slavery still in existence in Union states, served as the moral lever to sanctify total war. Now, northern troops would die not merely as saviors of the Union, but as cosmic warriors against the nation’s greatest sin: slavery. Emancipation’s limitations and northern white society’s intense racism could be ignored, and northerners could believe they had obtained what novelist and poet Robert Penn Warren later termed a “Treasury of Virtue.”2
Yet even emancipation could not legitimate a war against civilians. Both Lincoln and his Confederate counterpart, Jefferson Davis, struggled to encourage total war, while at the same time maintaining the appearance of ethical martial conduct. The Confederacy did an especially poor job on this front with such maneuvers as the Partisan Ranger Act of 1862, which legitimated guerrilla tactics. Highlighted in northern war reporting, southern terrorist organizations provided Lincoln further justification for military escalation. Then, with William Tecumseh Sherman, Lincoln found his man. Sherman brought total war with the spirit of an avenging angel. He blurred lines between southern civilians and southern combatants, and assailed both. In the process, he enlivened Confederate animosity for the North and helped further instill among southern whites a nascent “Religion of the Lost Cause,” a southern civil religion that apotheosized the likes of Robert E. Lee and Thomas “Stonewall” Jackson. This sectional faith accepted defeat yet upheld the belief that God was on the side of the South.
With the end of the war, however, something mystical seems to have happened. Death gave birth as northern and southern whites alike learned that they were all Americans—or, more accurately, that all white folk were Americans. Stout concludes that the right side won, not because of the Union’s justness, but in spite of its injustices.
Upon the Altar of the Nation is an impressive work. Stout immersed himself imaginatively in the world of mid-19th-century America, and he brings his reader into that realm: listening attentively in the offices of President Lincoln and President Davis and eavesdropping in the tents of generals; reading love letters from soldiers in the trenches and peering over the shoulders of the women who read those lines; sitting in the pews of northern churches and standing in the revivals of Confederate soldiers. And Stout has managed to contain his sweeping narrative of the war in a single volume, a feat that his distinguished predecessors Allan Nevins and Shelby Foote were unable to accomplish.
Stout’s work is indebted to the just war ethics of Michael Walzer, the civil religion theory of Robert Bellah and John F. Wilson, and the nationalism concepts of Carolyn Marvin and David Ingle. Lurking behind the theoretical scene is Émile Durkheim, a founding father of sociology. In his magisterial Elementary Forms of the Religious Life (1912), Durkheim claimed that religions and societies establish themselves in similar, if not identical, ways. A body of individuals affix emotional and psychological energy to a symbol or symbols—what Durkheim called totems—that come to represent the community. The totem eventually transcends the community and becomes the society’s grand moral arbiter, distinguishing right from wrong, good from bad, clean from unclean, and hence regulating how the society should function. According to Stout, the Civil War provided the crucial historical moment when the United States unified itself around a set of sacred symbols, particularly the American flag, and created a nation. In essence, then, the war was America’s true birth.
Stout also builds upon more than a decade of historical re-evaluation of religion and the Civil War that has altered significantly the terrain of Civil War scholarship. Gardiner Shattuck, Reid Mitchell, and Steven E. Woodworth have demonstrated the various ways religious beliefs influenced Confederate and Union soldiers, while Bryon Andreasen has drawn attention to “Christian Democrats” in the North. Eugene Genovese and Elizabeth Fox-Genovese have underscored the importance of Christian ideologies among Confederate élites, while Daniel Stowell and Paul Harvey have found vital links between religion and notions of freedom for whites and blacks. Yet Stout’s work moves beyond this scholarship in crucial ways. Taking account of life on the battlefields, the ideas of politicians and ministers, the imaginations of writers and artists, and the feelings of thousands of men and women, Stout offers a comprehensive examination of the morality and justness of the war. What Bell Wiley did decades ago to uncover the everyday experience of common soldiers, Stout has done now to trace the ethical contours and inner spiritual meanings of the war.
There is so much within this ambitious work, so many elements that will fascinate and frustrate, that it feels almost immoral to argue with it. The chapters on children’s literature and northern artwork are superb; the assessments of Lincoln’s Gettysburg Address and Second Inaugural are riveting; and Stout’s intrepid, albeit largely fruitless search for voices that wondered about the ethics of the war is sobering. But there is room to wonder about some of his paths and final destinations.
Most of the time, for instance, Stout relies on preachers or publishers from Richmond, Virginia—the capital of the Confederacy—to stand for southern whites in general. Certainly feelings in other southern areas differed, though. Further, Stout includes no discussion of the venture of northern white women into the South to teach among African Americans and to provide medical care for the troops. These “soldiers of light and love,” as they were termed later, invaded the South with an ethical approach far different from that of Sherman and his minions of patriotic gore. For one of them, Marie Waterbury, her time among African Americans was a spiritual watershed. As she versed it:
No more talks about our Lord
No more searching for his word
No more longings for his grace
She hath seen him face to face.3
The ethical meanings of the war also seemed distinct for African Americans. Stout does an excellent job showing how the most immoral behavior was leveled at men and women of color and how a host of African Americans responded to the conflict. On at least one occasion, Confederate soldiers used black men as human shields, and in other instances Confederates massacred surrendering black soldiers. Yet there were moral elements within the Civil War that Stout neglects, in part because his breadth of research did not extend to the interviews with former slaves conducted in the 1930s by the Works Progress Administration.
Despite the methodological pitfalls inherent in such memory-based narratives, within them one can discern hidden moral meanings during these momentous years. Black men and women remembered and told tales of reunited families, of women finally empowered to protect their bodies from rape, of aged men provided the right to carry canes and of young men the right to carry school primers. One African American man certainly viewed the chance to read and write as a holy element of the war: “These few lines will show that I am a new beginner. I will try, and do better… . Thank God I have a book now. The Lord has sent us books and teachers. We must not hesitate a moment, but go on and learn all we can.”4 Seen from the perspective of African Americans, the morality of the war takes on a very distinct hue. American civil religion for black men and women, as David Howard-Pitney has shown, is perhaps one of the most creative and unappreciated elements of American religion, and the Civil War was a vital aspect in its formation.5
These critiques aside, Upon the Altar of the Nation is an extraordinary book. Historians, priests, ministers, politicians, ethicists, Civil War buffs, and anyone interested in morality and war should read it. Stout’s central conclusions should be heeded: citizens and religious leaders must ask moral questions before engaging in war; they must demand moral accountability during war; and they must push for true justice at the end of war. Let us not leave it to our grandchildren or to scholars of future generations to debate the morality of the wars we start and know not how to finish.
Edward J. Blum is assistant professor of history at Kean University. He is the author of Reforging the White Republic: Race, Religion, and American Nationalism, 1865–1898 (LSU Press) and coeditor of Vale of Tears: New Essays on Religion and Reconstruction (Mercer Univ. Press).
1. Ruth Smith, White Man’s Burden: A Personal Testament (Vanguard Press, 1946), p. 217.
2. Robert Penn Warren, The Legacy of the Civil War: Meditations on the Centennial (Random House, 1961).
3. M. Waterbury, Seven Years Among the Freedmen (T. B. Arnold, 1891).
4. Quoted in Edward J. Blum, Reforging the White Republic (LSU Press, 2005), p. 76.
5. David Howard-Pitney, The Afro-American Jeremiad: Appeals for Justice in America (Temple Univ. Press, 1990).
Copyright © 2006 by the author or Christianity Today/Books & Culture magazine.
Ronald C. White, Jr.
How Lincoln turned his rivals into allies.
Doris Kearns Goodwin has written a compelling collective biography of an unlikely political quartet in which Abraham Lincoln had to earn the right to sing lead. We have become accustomed to recent biographies of leaders of the American Revolution—Benjamin Franklin, George Washington, John Adams, Thomas Jefferson, Alexander Hamilton—where a wide-angle lens always keeps in view others of “the Founding Brothers.” By contrast, the lens for many biographies of Abraham Lincoln has often been narrowly focused, so that the contributions of other leaders of the Second American Revolution are barely in view.
Team of Rivals: The Political Genius of Abraham Lincoln
Doris Kearns Goodwin
Simon & Schuster
944 pages
Goodwin originally contemplated writing an account of Abraham and Mary Lincoln in the White House as a sequel to her Pulitzer Prize-winning No Ordinary Time, the story of Franklin and Eleanor Roosevelt. Early on she abandoned that double story for what she believed was the more fascinating story of an odd political quartet. The result is an absorbing biography because it is not just about Lincoln but about a talented collection of men and women who find, to their surprise, that Lincoln is the person who binds them together. In Goodwin’s narrative we meet Lincoln afresh as a leader whose “extraordinary array of personal qualities … enabled him to form friendships with men who had formerly opposed him.” Goodwin’s portrait of Lincoln’s political genius allows us to appreciate his “astoundingly magnanimous soul.”
She begins with a dramatic, detailed account of “four men waiting.” William H. Seward, who had served as governor and senator from New York; Salmon P. Chase, who had filled the same offices in Ohio; Edward Bates, former congressman and judge from Missouri; and Abraham Lincoln waited in their hometowns to hear who would be the nominee of the Republican convention meeting in Chicago. When Lincoln was finally nominated on the third ballot, the other three leaders were stunned. Each believed he was better qualified by education, experience, and political savvy than the relatively unknown Lincoln.
Lincoln, on the very night he was elected, decided to invite these chief rivals to be members of his cabinet. Why? Lincoln told Joseph Medill, editor of the Chicago Tribune, “We needed the strongest men of the party in the Cabinet. I had no right to deprive the country of their services.” Goodwin steps behind the public personas to describe the private aspirations and fears of these “strongest men.”
And women. For what further sets her biography apart is Goodwin’s introduction of a quartet of remarkable women to match her male leads. She tells the stories of Mary Todd Lincoln, Fanny Seward, Kate Chase, and Julia Bates, showing their social and political acumen at a time when women were expected to remain within the private sphere of home and family, and her judicious use of letters and diaries gives immediacy to her narrative.
When Lincoln convened his cabinet in March, 1861, the cabinet secretaries were chafing over their subordinate roles; the president, they were convinced, was simply not up to the job. And yet, as Goodwin paints with sure brushstrokes, we follow each of their stories to the point where they come to appreciate Lincoln’s political genius.
Goodwin’s major interest, next to Lincoln, is Seward, who led on the first two ballots at the Republican convention in 1860. Lincoln appointed him as Secretary of State, and he was not shy about speaking his mind. By the end of Lincoln’s first month in office, the president was confronted everywhere by the question: did he have a policy? In frustration, Seward drafted a letter to Lincoln on April 1st that was no April Fool’s joke.
“We are at the end of a month’s administration and yet without a policy either domestic or foreign,” Seward wrote, adding that “further delay to adopt and prosecute our policies for both domestic and foreign affairs would not only bring scandal on the Administration, but danger upon the country.”
What was the solution? “Either the President must do it himself, and be all the while active in it; or Devolve it on some member of his Cabinet.” Guess who Seward was nominating?
Lincoln wrote out a reply to Seward that very day, responding point-by-point. In conclusion, Lincoln said simply, “if this must be done, I must do it.”
This frank exchange, rather than widening the distance between the two men, brought them together. Lincoln valued Seward’s intellect, abilities, and humor. Seward came to appreciate this gaunt Westerner who had deprived him of the highest prize of his life. Toward the end of June, Seward wrote to his wife Frances, “The President is the best of us.”
Secretary of the Treasury Salmon P. Chase emerges as the most egotistical and exasperating of Lincoln’s rivals. But this allows Goodwin ample opportunity to speak of Lincoln’s political genius in soothing Chase’s wounded pride. Lincoln had to contend not only with the erstwhile governor’s continuing machinations as he jockeyed for the Republican nomination in 1864 but also with Chase’s feuds with Seward and Secretary of War Simon Cameron. In the end Lincoln appointed Chase to be Chief Justice of the Supreme Court. When Lincoln was reminded of Chase’s intrigues against him, he replied, “Now, I know meaner things about Governor Chase than any of those men can tell me,” but “we have stood together in time of trial, and I should despise myself if I allowed personal differences to affect my judgment of his fitness for the office.”
Goodwin moves beyond the three central rivals for the Republican nomination to include treatments of Gideon Welles, Secretary of the Navy; Edwin Stanton, who succeeded Cameron as Secretary of War; and Montgomery Blair, Postmaster General. The story of Stanton, who had rudely snubbed Lincoln in a famous law case in Cincinnati in 1859, is particularly gripping. Stanton grew to have a deep affection for Lincoln and after his assassination was overcome with grief for weeks.
Goodwin allows Mary Todd Lincoln to break through the usual caricatures and emerge as a complex woman, both intelligent and politically ambitious. Frances Seward, even while staying in their family home in Auburn, New York, encouraged if not pushed her husband to pursue strong antislavery stands in relation to Lincoln and his cabinet rivals. Kate Chase, the beautiful daughter of Salmon P. Chase (whose three wives had died young), was absolutely dedicated to her father’s political career, and like her father, was ambitious to upstage the Lincolns. Julia Bates’ passionate love letters to Edward Bates, and her diary entries, allow us to see Bates in fresh ways.
Goodwin’s narrative gifts, so appreciated in No Ordinary Time, are used to good effect in Team of Rivals. In exquisite detail Goodwin allows us to listen in on the gossip and political deals in backroom meetings, Kate Chase’s parties, and Mary Todd Lincoln’s state dinners. Setting out to “see Lincoln liberated from his familiar frock coat and stovepipe hat,” Goodwin gives us the private Lincoln, the president at ease, engaged in conversation in Seward’s home across the street from the White House, with his feet up in front of the fireplace.
As stirring as Goodwin’s story of Lincoln’s skills in working with rivals is, there is a conspicuous absence in her narrative of his presidential years. During the 12-day pre-inaugural train trip from Springfield to Washington, Lincoln spoke several times of his conviction that he was, as he told the New Jersey legislature in Trenton, “an humble instrument in the hands of the Almighty.” Yes, Lincoln was an extraordinarily gifted manager of rivals, but at the center of Lincoln’s self-understanding of his leadership as president was his sense that he had been called by God for a special task at a specific moment.
During the Civil War, Lincoln embarked on a religious odyssey. He struggled to come to terms with the will of God in a brief musing he wrote for his eyes only after the Second Battle of Bull Run in 1862: “It may be that God’s will is different from the will of either party.” This reflection was found by his young secretary, John Hay, after Lincoln’s death, and given the title “Meditation on the Divine Will.”
Goodwin tells the story of the historic meeting of the cabinet on September 22, 1862, when Lincoln revealed that he would issue an Emancipation Proclamation. But she does not tell us, in her usually detailed descriptions, that both Chase and Welles, independent of each other, recorded that Lincoln, in Welles’ words, told his cabinet “he had made a vow, a covenant, that if God gave us the victory in the approaching battle, he would consider it an indication of Divine will, and that it was his duty to move forward in the cause of Emancipation.”
When Goodwin comes to the Second Inaugural, she tells us that Lincoln “fused spiritual faith with politics,” but there is so much more to say about the sources of the meaning of what some contemporaries called Lincoln’s “Sermon on the Mount,” a masterful speech that provided the basis of an ethic that could reach out to the rival South in a reconciling spirit.
Two smaller quibbles, oddly related to each other, are worth noting. First, Goodwin repeatedly interrupts her compelling narrative to quote a whole bevy of historians, evidently intended to bolster her argument. Too often the effect is merely to distract the reader. But second, in her more than 200 pages of notes, where—given the example of the best recent biographers of the American Revolutionary leaders—we might rightly expect to find Goodwin in dialogue or debate with other interpreters of Lincoln, Seward, slavery, or emancipation, we get very little substantive give-and-take.
As much as we may admire Lincoln, central to his enduring presence is that he remains elusive and paradoxical. We dare not miss the conflicts and contradictions in his spirit. And in the middle of that paradox is the odyssey of a president who never joined a church but offered the most profound statement we have on religious faith and the American nation only 41 days before his death.
Ronald C. White, Jr., is professor of American Church History at San Francisco Theological Seminary. He is the author of Lincoln’s Greatest Speech: The Second Inaugural (Simon & Schuster) and The Eloquent President: A Portrait of Lincoln Through His Words (Random House).
Allen C. Guelzo
Don’t skip the chapter after the Civil War.
For over a century, the era of Reconstruction was the unwanted child of American history. By contrast with the drama and nobility of the Civil War, the dozen years between Appomattox and the final decision to withdraw federal occupation troops from the former Confederacy in 1877 looked like a confused tale of disillusion, corruption, blasted hopes, and a resigned descent into failure, populated with some of the least-appealing neanderthals in American political history. The first great academic survey of the Civil War era, James Ford Rhodes’ History of the United States from the Compromise of 1850, pictured Reconstruction as a political moonscape where “the ignorant negroes, the knavish white natives and the vulturous adventurers who flocked from the North” disported themselves, and the college textbook that ruled the middle of the 20th century—James Garfield Randall’s Civil War and Reconstruction—instructed its legions of undergraduate readers to regard Reconstruction as a Radical Republican “racket.” Lincoln hoped at Gettysburg that the dead of the war had not died in vain. Reconstruction seemed to suggest that this was precisely what they had done.
Forever Free: The Story of Emancipation and Reconstruction
Eric Foner and Joshua Brown
Knopf
304 pages
But the notion that Reconstruction was a terrible mistake, a rape of the South by the unscrupulous and the vengeful that could only be redressed by letting Southerners have control of their own lives again, met with serious questioning of its own in the 1950s. The Montgomery bus boycott, the overturning of Jim Crow public education by Brown v. Board of Education, and the civil rights movement which sprang from both, squeezed from the white South the same complaints about Northern agitators, the incapacities of blacks for full civil equality, and the need to let the South take its own slow and gradual path that had been heard in Reconstruction. Only now, the complaints were coming from the likes of Bull Connor and George Wallace, and the agitation was coming from Martin Luther King, and suddenly the juxtaposition of the words and the characters changed the whole way Reconstruction looked to a generation of Americans.
The rethinking of Reconstruction which emerged out of the civil rights movement found its first voice in Kenneth Stampp’s passionate and headlong The Era of Reconstruction, 1865–1877, published three days after the centennial of Appomattox, and fearless to a fault in its one-man assault on Rhodes, Randall, and the entire historiography of Reconstruction. Sympathy for the civil rights movement of the 1950s translated, in Stampp’s case, into sympathy for what now looked increasingly like its forerunner in the 1870s, while contempt for the redneck supremacists with their billyclubs and water cannons in Birmingham engendered a parallel contempt for the white Redeemers who had smothered black equality under the pillow of segregation.
“Were there mass arrests, indictments for treason or conspiracy, trials and convictions, executions and imprisonments” under the first Reconstruction, Stampp indignantly asked. “Nothing of the sort. … After four years of bitter struggle costing hundreds of thousands of lives, the generosity of the federal government’s terms was quite remarkable.” And were the rights that blacks demanded, and the Reconstruction regimes that tried to grant them, really so absurd or so tainted with corruption, that the Ku Klux Klan was the only legitimate solution? “In no southern state did any responsible Negro leader, or any substantial Negro group, attempt to get complete political control into the hands of the freedmen,” Stampp countered, nor did they “attempt radical experiments in the field of social or economic policy.”
An end came to Reconstruction in 1877, not because the Reconstruction governments were too incompetent to be tolerated by virtuous Southerners, but because unreconciled whites “resorted to race demagoguery,” made “incendiary appeals to race bigotry,” and adopted terror tactics to vault themselves back into power. Reconstruction was a missed opportunity, not a mistake. It had been dedicated to making “southern society more democratic” and “emancipation … something more than an empty gesture,” and its failures should not obscure the “radical idealism” that motivated it.1
Stampp’s was not actually the first trumpet raised against the conventional view of Reconstruction. As early as 1935, W. E. B. Du Bois’ Black Reconstruction in America attacked the notion of a virginal South being ravished by a vicious Reconstruction, and key components of Stampp’s argument were foreshadowed in Eric McKitrick’s Andrew Johnson and Reconstruction (1960) and James McPherson’s The Struggle for Equality: Abolitionists and the Negro in the Civil War and Reconstruction (1964). Nor was Stampp’s the last—in fact, if anything, the argument now swung so violently in the opposite direction that sympathetic historians began to argue that Reconstruction failed because it had been too mild, too timid, and too feeble to work real change. Very little in the Reconstruction South actually changed under Union Army occupation, argued Leon Litwack in Been in the Storm So Long: The Aftermath of Slavery (1979). Union officers pressed the freed slaves back into agricultural work for their former masters, and the typical Freedmen’s Bureau agent “looked upon their positions as sinecures rather than opportunities to protect the ex-slaves in their newly acquired rights.” Far from being too radical, Reconstruction had not been nearly radical enough. Congress imposed Reconstruction only out of “political necessity, not … the spirit of the Declaration of Independence,” and even then, “within a year of the war’s end,” Litwack insisted, “the planter class had virtually completed the recovery of its property.”2
It was the achievement of Eric Foner, the DeWitt Clinton Professor of American History at Columbia University, to haul the interpretation of Reconstruction back from the brink of irrelevancy to which Litwack had pushed it. In 1988, Foner published Reconstruction: America’s Unfinished Revolution, 1863–1877, simultaneously underscoring the real accomplishments of Reconstruction and delivering the first comprehensively detailed (and charmingly literate) history of Reconstruction since Du Bois. True, Foner acknowledged, “The freedmen’s political and social equality proved transitory, but the autonomous black family and a network of religious and social institutions survived the end of Reconstruction,” and even at its worst, the Jim Crow South was unable to deprive blacks of citizenship, or force blacks back into “complete dispossession and immobilization.” Nor were Reconstruction’s failures entirely due to a lack of resolve on the part of Northern Republicans. The rebound of Democratic political strength in Congress in 1868, the deaths or retirements of the major Radical Republican leaders between 1868 and 1873, and the national financial panic of 1873 all played an irresistible role in sapping Reconstruction’s vitality, entirely apart from any residual racism or weakness in the political knees.3
Forever Free: The Story of Emancipation and Reconstruction is at once a shorter, more popularized (and updated) version of Reconstruction, and also a more edgy and frustrated one. Partly, this is because Foner believes “our racial institutions and attitudes” have still not outgrown the “unresolved legacy” of emancipation and Reconstruction. “In contemporary debates over affirmative action, the rights of citizens, and the meaning of equality, Americans still confront issues bequeathed to our generation by the successes and failures of Reconstruction.” But another cause of Foner’s tetchiness is his disappointment that the embrace of Reconstruction begun by Stampp (and continued by Foner) still hasn’t gained any meaningful purchase on the public’s historical memory. “Long abandoned in the academic world, the traditional view of Reconstruction still survives in popular memory and everyday life,” Foner complains, and largely because Americans seem to “prefer historical narratives with happy endings.”
And yet, apart from those exasperated moments, Forever Free manages (like so much else in Foner’s work) to ladle out judgment with an even hand and nudge a balky narrative into readable and convincing shape. While acknowledging that Abraham Lincoln “shared many of the era’s racial prejudices,” Foner refuses to join the Lincoln-was-a-racist chorus. The Emancipation Proclamation “is perhaps the most misunderstood important document in American history,” and Foner decries the notion that “the proclamation freed no slave on the day it was issued.” To the contrary, “the proclamation launched the historical process of Reconstruction.” The villain in Foner’s story is not Lincoln, but Andrew Johnson, who had been in office only six weeks before he pulled the rug out from under Reconstruction by issuing waves of pardons to former Confederates and restoring to their white owners plantation lands that the federal government had seized during the war (for non-payment of taxes) and sold to freed slaves. The loss of access to land meant the loss of any possibility of economic clout for the freed slaves. And while Foner finds blacks successfully building families, churches, and political mobilization after 1865, the failure to acquire control over the means of economic production through land ownership meant that “blacks must remain a dependent plantation labor force in a situation not very different from slavery.”
Still, the political accomplishments of blacks in Reconstruction flew far higher than Rhodes or Randall ever dreamt. Based on the work he did in assembling Freedom’s Lawmakers: A Directory of Black Officeholders during Reconstruction in 1993, Foner can point to over 2,000 black officeholders during Reconstruction, “from justice of the peace to governor and United States senator,” the vast majority of whom confound the field-hand stereotype of blacks in Reconstruction offices. “By the early 1870s, biracial democratic government, something unknown in American history, was functioning effectively in many parts of the South.”
But blacks were never a majority either of officeholders or voters in any but a few pockets of the South. Even in majority-black states, “every Reconstruction governor was white.” Black officeholders depended instead on establishing alliances with white Unionists (the much-demeaned Scalawags) and émigré Northern Republicans (the even-more demeaned Carpetbaggers)—and that, to a large degree, was their undoing. Blacks never ceased to agitate for the restoration of the lands they had occupied during and after the war (and which they had farmed as slaves). But the political ideology of the Republican party was chilly to agrarianism in any form, and glorified the “free labor” of workers who entered the market for wages, accumulated capital through savings, and then turned business-owner and employer themselves.4
Asking for the confiscation of rebel land might have seemed obvious to Southern blacks, but it grated on the sensibilities of white Republicans, who saw confiscation as the substitution of government favors for individual enterprise and a threat to property rights. And given that the generation following the Civil War was rocked by waves of revolutionary labor violence (from the Paris Commune in 1871 to the Pullman Strike in 1894), the Southern black demand for land re-distribution gradually alienated even their most fervent Northern white supporters.5 That, in turn, encouraged unreconstructed Southern white Democrats to turn to violence, and then to national political blackmail in the Hayes-Tilden election of 1876, to bring Reconstruction down. It did not help, either, that the hallmark of post-Civil War government in the North was corruption; Northerners who were sickened by the graft and bribery of the Gilded Age and wanted the power of government restrained were not disposed to listen favorably to Southern black pleas for increased government intervention to protect them or advance their cause.
Much of Foner’s book is straight-up political history, and Forever Free gives as good an account of the politics of Reconstruction, from the First Reconstruction Act to the Supreme Court decisions of the 1880s which overturned much of the Reconstruction legislation, as one could hope for. In the process of rendering this account concise, though, a good deal of the lower-level politics gets missed. There is nary a glimpse of how Reconstruction’s “bi-racial democracy” actually worked bi-racially, despite substantial evidence of whites and blacks who did join forces to promote democracy; and there is only a passing acknowledgement of how persistently Gilded Age Republican administrations—Garfield, Harrison, McKinley, and Roosevelt—tried to shield Southern blacks from the full fury of Southern Democratic backlash after 1877 by distributing federal patronage to blacks, sponsoring federal anti-lynching legislation (which Southern bloc votes routinely defeated), and offering symbolic pokes to Southern white supremacist blather (Theodore Roosevelt’s dinner invitation to Booker T. Washington being a case in point).
Nor does Foner give much attention to Southern blacks who did, in fact, use the free-labor ideology to win real political victories from their former plantation masters. In the southern Louisiana sugar parishes, free black wage laborers leveraged the complex skills and processes of sugar production as weapons to extort a substantial degree of economic and political independence from some of the most deeply-entrenched plantation elites of the Old South.6 It was not the blacks’ appeal to government intervention, but the white planters’ use of government-sponsored violence in the 1880s, that destroyed even these pockets of black resistance.
It is here that a cloud of ambiguity hangs suspended over Foner’s narrative, since politics both made and unmade Reconstruction. As much as Foner deplores Booker T. Washington’s 1895 injunction to blacks to “cast down your buckets where you are” and cultivate economic independence rather than political power, the single most dramatic difference between Reconstruction and the civil rights movement was the role played in the latter by a nascent black middle class, which used its economic heft to force concessions from white supremacists and finally brought the whole hideous political edifice of segregation crashing, Dagon-like, into the dust.
Ultimately, the question which Reconstruction holds for Americans today may be whether we see economic life as prior to political life, or vice versa. I suspect that, in a genuine democracy, the answer looks less like yes or no and more like a moving target, sometimes yes and sometimes no, responding to the demands made by each generation’s circumstances and crises. This is less ideologically absolute, and maybe less emotionally satisfying to both the Left and the Right, but it is by no means as guilty of looking for the “happy endings” as Foner thinks.
Allen C. Guelzo is the Henry R. Luce Professor of the Civil War Era and director of the Civil War Era Studies program at Gettysburg College. A two-time winner of the Lincoln Prize for Abraham Lincoln: Redeemer President (2000) and Lincoln’s Emancipation Proclamation: The End of Slavery in America (2004), he recently received the National Society Daughters of the American Revolution Medal of Honor.
1. Kenneth Stampp, The Era of Reconstruction, 1865–1877 (Knopf, 1965), pp. 10, 168, 170, 214.
2. Leon Litwack, Been in the Storm So Long: The Aftermath of Slavery (Knopf, 1979), pp. 410, 488, 446, 588.
3. Eric Foner, Reconstruction: America’s Unfinished Revolution, 1863–1877 (Harper & Row, 1988), p. 602.
4. And no one knows the intellectual geography of the Civil War-era Republicans better than Foner, whose first book, Free Soil, Free Labor, Free Men: The Ideology of the Republican Party before the Civil War (Oxford Univ. Press, 1970) remains one of my desert-island favorites.
5. Heather Cox Richardson, The Death of Reconstruction: Race, Labor, and Politics in the Post-Civil War North, 1865–1901 (Harvard Univ. Press, 2001), p. 217.
6. John C. Rodrigue, Reconstruction in the Cane Fields: From Slavery to Free Labor in Louisiana’s Sugar Parishes, 1862–1880 (LSU Press, 2001), pp. 2–6, 139–140.