Friday, March 31, 2017

Makers and Takers: The Rise of Finance and the Fall of American Business by Rana Foroohar, Crown Business



How Wall Street is choking our economy and how to fix it

A couple of weeks ago, a poll conducted by the Harvard Institute of Politics found something startling: only 19% of Americans ages 18 to 29 identified themselves as “capitalists.” In the richest and most market-oriented country in the world, only 42% of that group said they “supported capitalism.” The numbers were higher among older people; still, only 26% considered themselves capitalists. A little over half supported the system as a whole.

This represents more than just millennials not minding the label “socialist” or disaffected middle-aged Americans tiring of an anemic recovery. This is a majority of citizens being uncomfortable with the country’s economic foundation—a system that over hundreds of years turned a fledgling society of farmers and prospectors into the most prosperous nation in human history. To be sure, polls measure feelings, not hard market data. But public sentiment reflects day-to-day economic reality. And the data (more on that later) shows Americans have plenty of concrete reasons to question their system.

This crisis of faith has had no more severe expression than the 2016 presidential campaign, which has turned on the questions of who, exactly, the system is working for and against, as well as why, eight years and several trillion dollars of stimulus on from the financial crisis, the economy is still growing so slowly. All the candidates have prescriptions: Sanders talks of breaking up big banks; Trump says hedge funders should pay higher taxes; Clinton wants to strengthen existing financial regulation. In Congress, Republican House Speaker Paul Ryan remains committed to less regulation.

All of them are missing the point. America’s economic problems go far beyond rich bankers, too-big-to-fail financial institutions, hedge-fund billionaires, offshore tax avoidance or any particular outrage of the moment. In fact, each of these is symptomatic of a more nefarious condition that threatens, in equal measure, the very well-off and the very poor, the red and the blue. The U.S. system of market capitalism itself is broken. That problem, and what to do about it, is at the center of my book Makers and Takers: The Rise of Finance and the Fall of American Business, a three-year research and reporting effort from which this piece is adapted.

To understand how we got here, you have to understand the relationship between capital markets—meaning the financial system—and businesses. From the creation of a unified national bond and banking system in the U.S. in the early 1790s to the early 1970s, finance took individual and corporate savings and funneled them into productive enterprises, creating new jobs, new wealth and, ultimately, economic growth. Of course, there were plenty of blips along the way (most memorably the speculation leading up to the Great Depression, which was later curbed by regulation). But for the most part, finance—which today includes everything from banks and hedge funds to mutual funds, insurance firms, trading houses and such—essentially served business. It was a vital organ but not, for the most part, the central one.

Over the past few decades, finance has turned away from this traditional role. Academic research shows that only a fraction of all the money washing around the financial markets these days actually makes it to Main Street businesses. “The intermediation of household savings for productive investment in the business sector—the textbook description of the financial sector—constitutes only a minor share of the business of banking today,” according to academics Oscar Jorda, Alan Taylor and Moritz Schularick, who’ve studied the issue in detail. By their estimates and others, around 15% of capital coming from financial institutions today is used to fund business investments, whereas it would have been the majority of what banks did earlier in the 20th century.

“The trend varies slightly country by country, but the broad direction is clear,” says Adair Turner, a former British banking regulator and now chairman of the Institute for New Economic Thinking, a think tank backed by George Soros, among others. “Across all advanced economies, and the United States and the U.K. in particular, the role of the capital markets and the banking sector in funding new investment is decreasing.” Most of the money in the system is being used for lending against existing assets such as housing, stocks and bonds.

To get a sense of the size of this shift, consider that the financial sector now represents around 7% of the U.S. economy, up from about 4% in 1980. Despite currently taking around 25% of all corporate profits, it creates a mere 4% of all jobs. Trouble is, research by numerous academics as well as institutions like the Bank for International Settlements and the International Monetary Fund shows that when finance gets that big, it starts to suck the economic air out of the room. In fact, finance starts having this adverse effect when it’s only half the size that it currently is in the U.S. Thanks to these changes, our economy is gradually becoming “a zero-sum game between financial wealth holders and the rest of America,” says former Goldman Sachs banker Wallace Turbeville, who runs a multiyear project on the rise of finance at the New York City-based nonprofit Demos.

It’s not just an American problem, either. Most of the world’s leading market economies are grappling with aspects of the same disease. Globally, free-market capitalism is coming under fire, as countries across Europe question its merits and emerging markets like Brazil, China and Singapore run their own forms of state-directed capitalism. An ideologically broad range of financiers and elite business managers—Warren Buffett, BlackRock’s Larry Fink, Vanguard’s John Bogle, McKinsey’s Dominic Barton, Allianz’s Mohamed El-Erian and others—have started to speak out publicly about the need for a new and more inclusive type of capitalism, one that also helps businesses make better long-term decisions rather than focusing only on the next quarter. The Pope has become a vocal critic of modern market capitalism, lambasting the “idolatry of money and the dictatorship of an impersonal economy” in which “man is reduced to one of his needs alone: consumption.”

Over my 23 years in business and economic journalism, I’ve often wondered why our market system doesn’t serve companies, workers and consumers better than it does. For some time now, finance has been widely regarded as the very top of the economic hierarchy, the most aspirational part of an advanced service economy that graduated from agriculture and manufacturing. But research shows just how the unintended consequences of this misguided belief have endangered the very system America has prided itself on exporting around the world.

America’s economic illness has a name: financialization. It’s an academic term for the trend by which Wall Street and its methods have come to reign supreme in America, permeating not just the financial industry but also much of American business. It includes everything from the growth in size and scope of finance and financial activity in the economy; to the rise of debt-fueled speculation over productive lending; to the ascendancy of shareholder value as the sole model for corporate governance; to the proliferation of risky, selfish thinking in both the private and public sectors; to the increasing political power of financiers and the CEOs they enrich; to the way in which a “markets know best” ideology remains the status quo. Financialization is a big, unfriendly word with broad, disconcerting implications.

University of Michigan professor Gerald Davis, one of the pre-eminent scholars of the trend, likens financialization to a “Copernican revolution” in which business has reoriented its orbit around the financial sector. This revolution is often blamed on bankers. But it was facilitated by shifts in public policy, from both sides of the aisle, and crafted by the government leaders, policymakers and regulators entrusted with keeping markets operating smoothly. Greta Krippner, another University of Michigan scholar, who has written one of the most comprehensive books on financialization, believes this was the case when financialization began its fastest growth, in the decades from the late 1970s onward. According to Krippner, that shift encompasses Reagan-era deregulation, the unleashing of Wall Street and the rise of the so-called ownership society that promoted owning property and further tied individual health care and retirement to the stock market.

The changes were driven by the fact that in the 1970s, the growth that America had enjoyed following World War II began to slow. Rather than make tough decisions about how to bolster it (which would inevitably mean choosing among various interest groups), politicians decided to pass that responsibility to the financial markets. Little by little, the Depression-era regulation that had served America so well was rolled back, and finance grew to become the dominant force that it is today. The shifts were bipartisan, and to be fair they often seemed like good ideas at the time; but they also came with unintended consequences. The Carter-era deregulation of interest rates—something that was, in an echo of today’s overlapping left- and right-wing populism, supported by an assortment of odd political bedfellows from Ralph Nader to Walter Wriston, then head of Citibank—opened the door to a spate of financial “innovations” and a shift in bank function from lending to trading. Reaganomics famously led to a number of other economic policies that favored Wall Street. Clinton-era deregulation, which seemed a path out of the economic doldrums of the late 1980s, continued the trend. Loose monetary policy from the Alan Greenspan era onward created an environment in which easy money papered over underlying problems in the economy, so much so that it is now chronically dependent on near-zero interest rates to keep from falling back into recession.

This sickness, not so much the product of venal interests as of a complex and long-term web of changes in government and private industry, now manifests itself in myriad ways: a housing market that is bifurcated and dependent on government life support, a retirement system that has left millions insecure in their old age, a tax code that favors debt over equity. Debt is the lifeblood of finance; with the rise of the securities-and-trading portion of the industry came a rise in debt of all kinds, public and private. That’s bad news, since a wide range of academic research shows that rising debt and credit levels stoke financial instability. And yet, as finance has captured a greater and greater piece of the national pie, it has, perversely, all but ensured that debt is indispensable to maintaining any growth at all in an advanced economy like the U.S., where 70% of output is consumer spending. Debt-fueled finance has become a saccharine substitute for the real thing, an addiction that just gets worse. (The amount of credit offered to American consumers has doubled in real dollars since the 1980s, as have the fees they pay to their banks.)

As the economist Raghuram Rajan, one of the most prescient seers of the 2008 financial crisis, argues, credit has become a palliative to address the deeper anxieties of downward mobility in the middle class. In his words, “let them eat credit” could well summarize the mantra of the go-go years before the economic meltdown. And things have only deteriorated since, with global debt levels $57 trillion higher than they were in 2007.

The rise of finance has also distorted local economies. It’s the reason rents are rising in some communities where unemployment is still high. America’s housing market now favors cash buyers, since banks are still more interested in making profits by trading than in the traditional business of lending out our savings to people and businesses looking to make long-term investments (like buying a house), which helps keep younger people off the housing ladder. One perverse result: Blackstone, a private-equity firm, is currently the largest single-family-home landlord in America, since it had the money to buy up properties cheaply in bulk following the financial crisis. The rise of finance is also at the heart of retirement insecurity, since fees from actively managed mutual funds “are likely to confiscate as much as 65% or more of the wealth that … investors could otherwise easily earn,” as Vanguard founder Bogle testified to Congress in 2014.
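
Bogle’s startling figure is, at bottom, an exercise in compound interest: a small-looking annual fee, charged on the whole balance every year, consumes most of the potential gain over an investing lifetime. Here is a minimal Python sketch of that arithmetic; the 7% gross return, 2% all-in annual cost, and 60-year horizon are illustrative assumptions, not Bogle’s own inputs.

# A minimal sketch of the fee-drag arithmetic behind Bogle's point.
# The inputs below are illustrative assumptions, not Bogle's figures.

def terminal_wealth(annual_return, years, principal=1.0):
    """Compound a principal at a constant annual return."""
    return principal * (1 + annual_return) ** years

gross = terminal_wealth(0.07, 60)        # market return, no costs
net = terminal_wealth(0.07 - 0.02, 60)   # same market, 2% annual costs
lost_to_fees = 1 - net / gross

print(f"Share of potential wealth consumed by fees: {lost_to_fees:.0%}")
# Prints roughly 68%: about two-thirds of the potential wealth goes
# to fees rather than to the investor.

The striking share comes entirely from compounding: the fee manager is paid on the full balance every year, while the investor keeps only what remains to compound.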

It’s even the reason companies in industries from autos to airlines are trying to move into the business of finance themselves. American companies across every sector today earn five times the revenue from financial activities—investing, hedging, tax optimizing and offering financial services, for example—that they did before 1980. Traditional hedging by energy and transport firms, for example, has been overtaken by profit-boosting speculation in oil futures, a shift that actually undermines their core business by creating more price volatility. Big tech companies have begun underwriting corporate bonds the way Goldman Sachs does. And top M.B.A. programs would likely encourage them to do just that; finance has become the center of all business education.

Washington, too, is so deeply tied to the ambassadors of the capital markets—six of the 10 biggest individual political donors this year are hedge-fund barons—that even well-meaning politicians and regulators don’t see how deep the problems are. When I asked one former high-level Obama Administration Treasury official back in 2013 why more stakeholders aside from bankers hadn’t been consulted about crafting the particulars of Dodd-Frank financial reform (93% of consultations on the Volcker Rule, for example, were conducted with the financial industry itself), he said, “Who else should we have talked to?” The answer—to anybody not profoundly influenced by the way finance thinks—might have been the people banks are supposed to lend to, or the scholars who study the capital markets, or the civic leaders in communities decimated by the financial crisis.

Of course, there are other elements to the story of America’s slow-growth economy, including familiar trends from globalization to technology-related job destruction. These are clearly massive challenges in their own right. But the single biggest unexplored reason for long-term slower growth is that the financial system has stopped serving the real economy and now serves mainly itself. A lack of real fiscal action on the part of politicians forced the Fed to pump $4.5 trillion in monetary stimulus into the economy after 2008. This shows just how broken the model is, since the central bank’s best efforts have resulted in record stock prices (which enrich mainly the wealthiest 10% of the population that owns more than 80% of all stocks) but also a lackluster 2% economy with almost no income growth.

Now, as many top economists and investors predict an era of much lower asset-price returns over the next 30 years, America’s ability to offer up even the appearance of growth—via financially oriented strategies like low interest rates, more and more consumer credit, tax-deferred debt financing for businesses, and asset bubbles that make people feel richer than they really are, until they burst—is at an end.

This pinch is particularly evident in the tumult many American businesses face. Lending to small business has fallen particularly sharply, as has the number of startup firms. In the early 1980s, new companies made up half of all U.S. businesses. For all the talk of Silicon Valley startups, the number of new firms as a share of all businesses has actually shrunk: from 1978 to 2012 it declined by 44%, a trend that numerous researchers and even many investors and businesspeople link to the financial industry’s change in focus from lending to speculation. This decline in entrepreneurship means less economic vibrancy, given that new businesses are the nation’s foremost source of job creation and GDP growth. Buffett summed it up in his folksy way: “You’ve now got a body of people who’ve decided they’d rather go to the casino than the restaurant” of capitalism.

In lobbying for short-term share-boosting management, finance is also largely responsible for the drastic cutback in research-and-development outlays in corporate America, investments that are seed corn for future prosperity. Take share buybacks, in which a company—usually with some fanfare—goes to the stock market to purchase its own shares, usually at the top of the market, and often as a way of artificially bolstering share prices in order to enrich investors and executives paid largely in stock options. Indeed, if you were to chart the rise in money spent on share buybacks and the fall in corporate spending on productive investments like R&D, the two lines make a perfect X. The former has been going up since the 1980s, with S&P 500 firms now spending $1 trillion a year on buybacks and dividends—equal to about 95% of their net earnings—rather than investing that money back into research, product development or anything that could contribute to long-term company growth. No sector has been immune, not even the ones we think of as the most innovative. Many tech firms, for example, spend far more on share-price boosting than on R&D as a whole. The markets penalize them when they don’t. One case in point: back in March 2006, Microsoft announced major new technology investments, and its stock fell for two months. But in July of that same year, it embarked on $20 billion worth of stock buying, and the share price promptly rose by 7%. This kind of twisted incentive for CEOs and corporate officers has only grown since.

As a result, business dynamism, which is at the root of economic growth, has suffered. The number of new initial public offerings (IPOs) is about a third of what it was 20 years ago. True, the dollar value of IPOs in 2014 was $74.4 billion, up from $47.1 billion in 1996. (The median IPO rose to $96 million from $30 million during the same period.) This may show investors want to make only the surest of bets, which is not necessarily the sign of a vibrant market. But there’s another, more disturbing reason: firms simply don’t want to go public, lest their work become dominated by playing by Wall Street’s rules rather than creating real value.

An IPO—a mechanism that once meant raising capital to fund new investment—is likely today to mark not the beginning of a new company’s greatness, but the end of it. According to a Stanford University study, innovation tails off by 40% at tech companies after they go public, often because of Wall Street pressure to keep jacking up the stock price, even if it means curbing the entrepreneurial verve that made the company hot in the first place.

A flat stock price can spell doom. It can get CEOs canned and turn companies into acquisition fodder, which often saps once innovative firms. Little wonder, then, that business optimism, as well as business creation, is lower than it was 30 years ago, or that wages are flat and inequality growing. Executives who receive as much as 82% of their compensation in stock naturally make shorter-term business decisions that might undermine growth in their companies even as they raise the value of their own options.

It’s no accident that corporate stock buybacks, corporate pay and the wealth gap have risen concurrently over the past four decades. There are any number of studies that illustrate this type of intersection between financialization and inequality. One of the most striking was by economists James Galbraith and Travis Hale, who showed how during the late 1990s, changing income inequality tracked the go-go Nasdaq stock index to a remarkable degree.

Recently, this pattern has become evident at a number of well-known U.S. companies. Take Apple, one of the most successful companies of the past 50 years. Apple has around $200 billion sitting in the bank, yet over the past several years it has borrowed billions of dollars cheaply, thanks to super-low interest rates (themselves a response to the financial crisis), to return cash to investors and bolster its share price. Why borrow? In part because it’s cheaper than repatriating cash and paying U.S. taxes. All the financial engineering helped boost the California firm’s share price for a while. But it didn’t stop activist investor Carl Icahn, who had manically advocated for borrowing and buybacks, from dumping the stock the minute revenue growth took a turn for the worse in late April.

It is perhaps the ultimate irony that large, rich companies like Apple are most involved with financial markets at times when they don’t need any financing. Top-tier U.S. businesses have never enjoyed greater financial resources. They have a record $2 trillion in cash on their balance sheets—enough money combined to make them the 10th largest economy in the world. Yet in the bizarre order that finance has created, they are also taking on record amounts of debt to buy back their own stock, creating what may be the next debt bubble to burst.

You and I, whether we recognize it or not, are also part of a dysfunctional ecosystem that fuels short-term thinking in business. The people who manage our retirement money—fund managers working for asset-management firms—are typically compensated for delivering returns over a year or less. That means they use their financial clout (which is really our financial clout in aggregate) to push companies to produce quick-hit results rather than execute long-term strategies. Sometimes pension funds even invest with the activists who are buying up the companies we might work for—and those same activists look for quick cost cuts and potentially demand layoffs.

It’s a depressing state of affairs, no doubt. Yet America faces an opportunity right now: a rare second chance to do the work of refocusing and right-sizing the financial sector that should have been done in the years immediately following the 2008 crisis. And there are bright spots on the horizon.

Despite the lobbying power of the financial industry and the vested interests both in Washington and on Wall Street, there’s a growing push to put the financial system back in its rightful place, as a servant of business rather than its master. Surveys show that the majority of Americans would like to see the tax system reformed and the government take more direct action on job creation and poverty reduction, and address inequality in a meaningful way. Each candidate is crafting a message around this, which will keep the issue front and center through November.

The American public understands just how deeply and profoundly the economic order isn’t working for the majority of people. The key to reforming the U.S. system is comprehending why it isn’t working.

Remooring finance in the real economy isn’t as simple as splitting up the biggest banks (although that would be a good start). It’s about dismantling the hold of finance-oriented thinking in every corner of corporate America. It’s about reforming business education, which is still permeated with academics who resist challenges to the gospel of efficient markets in the same way that medieval clergy dismissed scientific evidence that might challenge the existence of God. It’s about changing a tax system that treats one-year investment gains the same as longer-term ones, and induces financial institutions to push overconsumption and speculation rather than healthy lending to small businesses and job creators. It’s about rethinking retirement, crafting smarter housing policy and restraining a money culture filled with lobbyists who violate America’s essential economic principles.

It’s also about starting a bigger conversation about all this, with a broader group of stakeholders. The structure of American capital markets and whether or not they are serving business is a topic that has traditionally been the sole domain of “experts”—the financiers and policymakers who often have a self-interested perspective to push, and who do so in complicated language that keeps outsiders out of the debate. When it comes to finance, as with so many issues in a democratic society, complexity breeds exclusion.

Finding solutions won’t be easy. There are no silver bullets, and nobody really knows the perfect model for a high-functioning, advanced market system in the 21st century. But capitalism’s legacy is too long, and the well-being of too many people is at stake, to do nothing in the face of our broken status quo. Neatly packaged technocratic tweaks cannot fix it. What is required now is lifesaving intervention.

Crises of faith like the one American capitalism is currently suffering can be a good thing if they lead to re-examination and reaffirmation of first principles. The right question here is in fact the simplest one: Are financial institutions doing things that provide a clear, measurable benefit to the real economy? Sadly, the answer at the moment is mostly no. But we can change things. Our system of market capitalism wasn’t handed down, in perfect form, on stone tablets. We wrote the rules. We broke them. And we can fix them.

King Solomon's Table: A Culinary Exploration of Jewish Cooking from Around the World by Joan Nathan, Knopf



Joan Nathan, the “doyenne of Jewish cuisine,” is at it again. The James Beard Award-winning author has just published a new cookbook—her 11th: King Solomon’s Table: A Culinary Exploration of Jewish Cooking from Around the World.

Thumbing through the book’s recipes and bright, color-drenched photographs feels like a dizzying, exhilarating ride through the myriad kitchens and countries that collectively tell the story of global Jewish cuisine. Inside, Nathan travels from ancient Babylon to the present day, introducing readers to the sweet-and-sour stuffed grape leaves made by a Persian woman living in Los Angeles, the spicy chocolate rugelach baked by a Mexican-Jewish chef with Ashkenazi roots, and a fragrant carrot salad cooked on an Israeli moshav by Jews originally hailing from India. Even more so than in any of her previous works, Nathan also delves into the recipes’ histories, seeking, as she writes in the introduction, “to discover what makes Jewish cooking unique.”

What she finds along the way is that Jewish cuisine is, and has always been, a two-way street of influence. “As a wandering people, Jews have influenced many different local cuisines as they carried their foods to new lands via Jewish trade routes, while fleeing prejudice in search of safer lands, or while migrating in search of new opportunities,” she writes. Meanwhile, Jews have been equally affected by the places they traveled and settled, incorporating local ingredients and dishes into their culinary repertoires while making substitutions as necessary to ensure the dishes followed the laws of kashrut and the customs of the Sabbath and holidays.



Nathan first tapped into the global nature of Jewish cuisine while living in Israel in the 1970s. “It was and is a country of immigrants, really, and so are we in America,” she told me. She was years ahead of her time in terms of recognizing the power and meaning behind Jewish food. Along with Claudia Roden, the late Gil Marks, and a few others, she pioneered the field of modern Jewish food scholarship. While many Americans were distancing themselves from their culinary heritage, she dug deeper, certain there was a world worth exploring and sharing.

After so many years and so many books (not to mention the countless articles on Jewish cooking she’s published for The New York Times and Tablet, among other places), one might think Nathan would run out of subjects. But that idea is antithetical to her very nature. “When I was working on my France book, I realized just how deeply you could go into the story of Jewish food there,” she told me. “That is true of so many places if you are willing to dig.”

And Nathan has always been willing to dig. A tireless traveler and culinary anthropologist, she has a knack for finding her way into the kitchens of chefs and home cooks alike, ready with a notebook and an endless series of questions. “Joan is an unstoppable force,” writes her close friend Alice Waters (herself a culinary icon) in the book’s foreword.

Nathan’s work is also propelled by the people she meets and the stories they have to share. “People are always coming up to me with a family recipe that means so much to them and they want to uncover the history of,” she said. And with that tidbit, off she goes. No single family’s kitchen can tell the complete story of Jewish food, but Nathan knows that each family’s lived experience—the tears they shed, the joys they celebrate, and the bread they break together—completes another piece of the puzzle.

During the six years she worked on King Solomon’s Table, Nathan traveled extensively, including trips to India and Greece, Italy and Canada. “I went to these places for the recipes and the people,” she said. In one case, she traveled to Cuba on just two days’ notice. “I had a wild weekend,” she said. While there, she discovered a sweet-and-sour cabbage dish made by a woman whose family, of Turkish descent, settled in Cuba in the 1920s. In El Salvador, a place with an isolated Jewish community that she describes as having “a kind of culinary lag with regard to holiday foods,” she uncovered Schokoladenwurst, an old-fashioned chocolate “sausage” made with coffee and marzipan that traces its roots back to Germany.

“You have to have a bit of imagination when writing about food and history,” Nathan told me. “You follow the clues that remain from what otherwise might be a completely lost community, and then help connect the dots.”

A few of the recipes in King Solomon’s Table are only very tangentially Jewish. Take the chilaquiles that iconic Los Angeles food critic Jonathan Gold makes at home for his family. On the one hand, the jumble of eggs and fried tortillas cooked in salsa has no real connection to Jewish tradition. But for Gold, someone who relates to his heritage primarily through his stomach, they offer a gateway of connection. “We call it Mexican matzo brei,” he is quoted as saying in King Solomon’s Table. By including the recipe, Nathan asserts just how expansive and ever-evolving Jewish cuisine can be. Unlike most other cuisines, the Jewish food canon is not bound by geographical borders. What makes a food “Jewish,” her book argues, is the meaning that Jewish people give to it by cooking, serving, and loving it.

In the end, what makes King Solomon’s Table truly great is the same thing that has made all of Joan Nathan’s previous cookbooks great: Nathan herself. Her zest for exploration and her passion for bringing people together through their stories are unrivaled. “More than anything,” she said, “I love the challenge of uncovering what lies beneath.”
***

Thursday, March 30, 2017

Zealot: The Life and Times of Jesus of Nazareth by Reza Aslan, Random House, 337 pp., $27





Within an hour of its online debut, the number of viewers of Reza Aslan’s now notorious interview with Fox News’ Lauren Green had far exceeded the number of Israelites who crossed the Red Sea under the leadership of the father of all Jewish nationalist zealots, Moses. Aslan was being interviewed on the occasion of the appearance of his book that places Jesus of Nazareth at the top of a long list of subsequent, rabidly nationalist messianic Jewish zealots.

By now, Aslan’s Zealot: The Life and Times of Jesus of Nazareth has come to dominate every book sales index in America, and his fifteen Fox-minutes of fame have been viewed by millions. Sales for this “scholarly” book have broken every record in the category of religious studies; the number of stops on Aslan’s current speaking tour is nothing short of staggering; and it will surely require a crack team of number-crunching accountants to calculate the tax debt on Aslan’s royalties and speaking fees.



Reza Aslan on Fox News.

Given such overwhelming statistics, it is perhaps understandable that Aslan himself appears to be experiencing some problems keeping his numbers, and his facts, straight. That he is numerically challenged was apparent during the Fox interview. No sooner had Green posed the first of her series of preposterous questions, all pondering what might motivate, or justify, a Muslim to publish such a provocative book about her Lord and Savior, than Aslan, with rapid-fire confidence, listed his many alleged scholarly credentials, as a “scholar of religions with four degrees, including one in the New Testament, and a PhD in the History of Religions . . . who has been studying the origins of Christianity for two decades.” He then went on to claim that he was a professor of religious history; that he had been working assiduously on his Jesus book for twenty years; that it contains more than “one hundred pages of endnotes”; and, finally, that Zealot is the fruit of research based on his “study of around 1,000 scholarly books.”

That last greatly exaggerated claim is as good, or awkward, a place as any to begin an assessment of the credibility of the man and his work. Zealot does indeed provide a respectable, if spotty, bibliography. But it lists one hundred and fifty-four books, not one thousand. Given his evident talent at self-promotion, it is hard to imagine that Aslan was holding back out of an abundance of humility. His claim regarding his extensive endnotes is also plainly false, since there is not a single footnote or conventional endnote to be found anywhere in Zealot. The book’s chapters are, rather, followed by bibliographic essays loosely related to their respective general themes. A cursory review of Aslan’s own biography and bibliography also renders impossible his repeated claims that Zealot is the product of twenty years of “assiduous scholarly research about the origins of Christianity.”

To be sure, Aslan, 41, has been very hard at work since graduating from college on a dazzling array of projects—mostly having to do with Islamic religion, culture, and literature as well as Middle Eastern politics—none of which has anything to do with his quest for the historical Jesus. He is, to quote his own website, “the founder of AslanMedia.com, an online journal for news and entertainment about the Middle East and the world, and co-founder and Chief Creative Officer of BoomGen Studios, the premier entertainment brand for creative content from and about the Greater Middle East,” including comic books. Of his three graduate degrees, one is from the University of Iowa, where he studied creative writing (the subject he actually teaches at the University of California, Riverside); the second was a two-year master’s degree at the Harvard Divinity School, where he apparently concentrated on Islam; and his doctorate was not, as he indignantly told the hapless Green, in “the history of religions.” Rather, he wrote an exceedingly brief sociological study of “Global Jihadism as a Transnational Movement” at UC Santa Barbara.

Speaking on CNN in the wake of his Fox interview, Aslan ruefully observed, “There's nothing more embarrassing than an academic having to trot out his credentials. I mean, you really come off as a jerk.” Actually, there is something significantly more embarrassing, and that is when the academic trots out a long list of exaggerated claims and inflated credentials.

Perhaps it is Aslan’s general fondness for breathless, and often reckless, exaggeration that explains his problems with the basic digits and facts about his own work and life. Such hyperbole, alas, pervades Zealot. Depicting the religious mood of first-century Palestine early on in the book, Aslan asserts that there were “countless messianic pretenders” among the Jews (there were no more than an eminently countable half-dozen). Among his most glaring overestimations is Aslan’s problematic insistence that the foundational Christian belief about Jesus, namely that he was both human and divine, is “anathema to five thousand years of Jewish scripture, thought and theology.” The vast chronological amplification aside, Judaism’s doctrine about this matter is not nearly so simple, as Peter Schäfer demonstrated exhaustively in his very important study, The Jewish Jesus, and as Daniel Boyarin has argued even more forcefully in his latest book, The Jewish Gospels. Boyarin and Schäfer are just two of the many serious scholars whose works Aslan has clearly failed to consult.

This combination of overly confident and simplistic assertions on exceedingly complex theological matters, with stretching of truths—numerical, historical, theological, and personal—permeates Aslan’s bestseller. And yet, precisely because Zealot is generating such frenzied controversy, this is all serving Aslan very well. But as it would be wrong to judge Aslan’s book by its coverage, let us turn to its text.

The core thesis of Zealot is that the “real” Jesus of Nazareth was an illiterate peasant from the Galilee who zealously, indeed monomaniacally, aspired to depose the Roman governor of Palestine and become the King of Israel. Aslan’s essentially political portrayal of Jesus thus hardly, if at all, resembles the depiction of the spiritual giant, indeed God incarnate, found in the Gospels and the letters of Paul. While Aslan spills much ink arguing his thesis, nothing he has to say is at all new or original. The scholarly quest for the historical Jesus, or the “Jewish Jesus,” has been engaged by hundreds of academics for the past quarter millennium and has produced a mountain of books and a vast body of serious scholarly debate. The only novelty in Aslan’s book is his relentlessly reductionist, simplistic, one-sided and often harshly polemical portrayal of Jesus as a radical, zealously nationalistic, and purely political figure. Anything beyond this that is reported by his apostles is, according to Aslan, Christological mythology, not history.

Aslan is, to be sure, a gifted writer. The book’s Prologue is both titillating and bizarre. Entitled “A Different Sort of Sacrifice,” it opens with a breezy depiction of the rites of the Jerusalem Temple, but very quickly descends to its ominously dark denouement: the assassination of the High Priest, Jonathan ben Ananus, on the Day of Atonement, 56 C.E., more than two decades after Jesus’s death:

The assassin elbows through the crowd, pushing close enough to Jonathan to reach out an invisible hand, to grasp the sacred vestments, to pull him away from the Temple guards and hold him in place just for an instant, long enough to unsheathe a short dagger and slide it across his throat. A different sort of sacrifice.

There follows a vivid narration of the political tumult that had gripped Roman-occupied Palestine during the mid-first century, which Aslan employs to great effect in introducing readers to the bands of Jewish zealots who wreaked terror and havoc throughout Judea for almost a century. It seems like an odd way to open a book about the historical Jesus, who was crucified long before the Zealot party ever came into existence, until one catches on to what Aslan is attempting. The Prologue effectively associates Jesus, albeit as precursor, with that chillingly bloody murder by one of the many anonymous Jewish Zealots of first-century Palestine.

To address the obvious problem that the Jesus depicted in Christian Scriptures is the antithesis of a zealously political, let alone ignorant and illiterate, peasant rebel and bandit, Aslan deploys a rich arsenal of insults to dismiss any New Testament narrative that runs counter to his image of Jesus as a guerilla leader, who gathered and led a “corps” of fellow “bandits” through the back roads of the Galilee on their way to mount a surprise insurrection against Rome and its Priestly lackeys in Jerusalem. Any Gospel verse that might complicate, let alone undermine, Aslan’s amazing account, he insolently dismisses as “ridiculous,” “absurd,” “preposterous,” “fanciful,” “fictional,” “fabulous concoction,” or just “patently impossible.”

Aslan’s entire book is, as it turns out, an ambitious and single-minded polemical counter-narrative to what he imagines is the New Testament’s portrayal of Jesus Christ. The strawman Jesus against whom he is arguing, however, is a purely heavenly creature, far closer to the solely and absolutely unearthly Christ of the 2nd-century heretic Marcion than to the exceedingly complex man/God depicted by the Evangelists and painstakingly developed in the theological works of the early Church Fathers.

Aslan dismisses just about all of the New Testament’s accounts of the early life and teachings of Jesus prior to his “storming” of Jerusalem and his subsequent arrest and crucifixion. He goes so far as to insist that Jesus’s zealous assault on the Jerusalem Temple is the “singular fact that should color everything we read in the Gospels about the Messiah known as Jesus of Nazareth.” Everything! Aslan goes on to assert that the very fact of his crucifixion for the crime of sedition against the Roman state is “all one has to know about the historical Jesus.” Still, as the New Testament constitutes the principal primary source for these facts as well as for anything else we can know about the “life and times of Jesus,” Aslan has little choice but to rely rather heavily on certain, carefully selected New Testament narratives.

The persistent problem permeating Aslan’s narrative is that he never provides his readers with so much as a hint of any method for separating fact from fiction in the Gospels, a challenge that has engaged actual scholars of the New Testament for the last two centuries. Nowhere does he explain, given his overall distrust of the Gospels as contrived at best and deliberately fictitious at worst, why he trusts anything at all recorded in the New Testament. But one needn’t struggle too hard to discern Aslan’s selection process: Whichever verses fit the central argument of his book, he accepts as historically valid. Everything else is summarily dismissed as apologetic theological rubbish of absolutely no historical worth.

So, for example, after recounting the Romans’ declaration of Jesus’s guilt, he writes:

As with every criminal who hangs on a cross, Jesus is given a plaque, or titulus, detailing the crime for which he is being crucified. Jesus’s titulus reads KING OF THE JEWS. His crime: striving for kingly rule, sedition. And so, like every bandit and revolutionary, every rabble-rousing zealot and apocalyptic prophet who came before or after him— like Hezekiah and Judas, Theudas and Athronges, the Egyptian [sic] and the Samaritan [sic], Simon son of Giora and Simon son of Kochba, Jesus is executed for daring to claim the mantle of king and messiah.

(Lest the words of the titulus be mistaken for mockery, Aslan informs us that the Romans had no sense of humor, which will come as a surprise to classicists.)

Aslan is particularly fond of assembling such lists of Jesus’s seditious predecessors, peers, and successors. Elsewhere, he compares Jesus’s mission to Elijah’s, which ended in his slaughter of the four hundred and fifty prophets of Baal; on another occasion he sets Jesus alongside Judas Maccabeus, who waged a long and bloody war against the Greek Seleucids. And Jesus is made out to be a direct forerunner of the militant rebel of the second century, Simon bar Kokhba, who battled the Romans to the goriest of ends. And so on.

The crucial distinction that Aslan fails to acknowledge is that what clearly sets Jesus so radically apart from all of these figures is his adamant rejection of violence, to say nothing of the pervasively peaceful and loving content of his teachings and parables, which Aslan willfully misconstrues and at one point revealingly describes as so “abstruse and enigmatic” as to be “nearly impossible to understand.”

Aslan is insistently oblivious even of the powerfully resonant climax to the single act of violence on the part of any of the twelve apostles recorded in the Gospels, which occurred during the tumult surrounding Jesus’s arrest by the minions of the Jewish High Priest, Caiaphas:

While he was still speaking, suddenly a crowd came, and the one called Judas, one of the twelve, was leading them. He approached Jesus to kiss him; but Jesus said to him, “Judas, is it with a kiss that you are betraying the Son of Man?” When those who were around him saw what was coming, they asked, “Lord, should we strike with the sword?” Then one of them struck the slave of the high priest and cut off his right ear. But Jesus said, “No more of this!” And he touched his ear and healed him. (Luke 22:47-51)

Matthew’s version of the same episode ends with Jesus’s stern and powerful admonition against any sort of violence:

Suddenly, one of those with Jesus put his hand on his sword, drew it, and struck the slave of the high priest, cutting off his ear. Then Jesus said to him, “Put your sword back into its place; for all who take the sword will perish by the sword.” (Matthew 26:51-52)

And in Mark’s version of the story, Jesus protests his peaceful intentions to those who come to seize him violently, crying out “Have you come out with swords and clubs to arrest me as though I were a bandit?” (Mark 14:48).

Unsurprisingly, Aslan dismisses these passages as pure invention. One problem with this is that Aslan sometimes justifies his own selective acceptance of certain New Testament narratives by pointing to their appearance in all three of the synoptic Gospels which, he argues, lends them a degree of historical credibility.

Another problem is that one of the key texts Aslan uses to buttress his thesis that the proto-Zealot Jesus was planning some kind of apocalyptic showdown with his enemies is taken from the very same chapter in Luke:

He said to them, “But now, the one who has a purse must take it, and likewise a bag. And the one who has no sword must sell his cloak and buy one.” . . . They said, “Lord, look, here are two swords.” He replied, “It is enough.” (Luke 22:36, 38)

The Jesus actually depicted here, strictly limiting his followers’ arsenal to two obviously symbolic swords, is hardly prepping a “band of zealots” for a rebellion against Roman rule.



Why would the Evangelists deliberately engage in so much wanton fabrication? Aslan offers a simple explanation:

With the Temple in ruins and the Jewish religion made pariah, the Jews who followed Jesus as messiah had an easy decision to make: they could either maintain their cultic connection to their parent religion and thus share in [sic] Rome’s enmity, or they could divorce themselves from Judaism and transform their messiah from a fierce Jewish nationalist into a pacifistic preacher of good works whose kingdom was not of this world.

Allergic to ambiguities or complexities of any kind that might interfere with his Manichean dichotomy between the historical Jesus of Nazareth and the mythical Jesus Christ of the Gospels, Aslan perceives everything as an either/or proposition—either the zealous, radical, and purely political Jesus of history, or the entirely fictional moral teacher and pacifistic Jesus of Christology. He takes the same approach to the Jews of Jesus’s era: there existed either the violent apocalyptic Jewish bandits who mounted one rebellion after another against the Romans, or the corrupt quisling Priests, such as Caiaphas, who suppressed all such activity. The passive, scholarly Pharisees, who opposed both these postures, are simply ignored.

The only passage I could find in Aslan’s entire book where he argues for a more nuanced approach to anything pertains to Jesus’s “views on the use of violence,” which he insists have been widely misunderstood:

To be clear, Jesus was not a member of the zealot party that launched the war with Rome because no such party could be said to exist for another thirty years after his death. Nor was Jesus a violent revolutionary bent on armed rebellion, though his views on violence were far more complex than it is often assumed.

And yet elsewhere Aslan insists that, being “no fool,” Jesus “understood what every other claimant to the mantle of messiah understood: God’s sovereignty could not be established except through force.” And it is this latter characterization that is central to Zealot.

To take account of the fact that even at the moment of Jesus’s maximal zeal, when he stormed the Temple, he was also interpreting Hebrew Scriptures would seriously undermine Aslan’s insistence on Jesus’s illiteracy, so he ignores it. The same goes for the numerous times he is addressed, both by his disciples as well as by the Pharisees and the Romans, as “teacher” and “rabbi.”

There is not so much as an allusion to be found in Zealot to the fascinating debates between Jesus and the Pharisees about the specifics of Jewish law, such as the permissibility of divorce, the proper observance of the Sabbath, the requirement to wash one’s hands before eating, the dietary laws, and—most fascinating and repercussive of all—the correct understanding of the concept of resurrection, in response to a challenge by the Sadducees who rejected that doctrine tout-court.

Aslan is intent on portraying Jesus as a faithful, Torah-abiding Jew for obvious reasons: intent on being crowned King of Israel, and as such a candidate for the highest Jewish political office, how could he be anything less than a “Torah-true” Jew? So Aslan takes at face value the Gospel’s report (Matthew 5:17) of Jesus’s insistence that he has not come to undermine a single law of the Torah, but rather to affirm its every ordinance.

In this connection, alas, Aslan offers a most unflattering and skewed stereotype of Jesus as a typical Jew of his era: an intolerant, ethnocentric nationalist, prone to violence toward Gentiles, whose charity and love extend only to other Jews:

When it comes to the heart and soul of the Jewish faith—the Law of Moses—Jesus insisted that his mission was not to abolish the law but to fulfill it (Matthew 5:17). That law made a clear distinction between relations among Jews and between Jews and foreigners. The oft-repeated [sic] commandment to “love your neighbor as yourself” was originally given strictly in the context of internal relations within Israel . . . To the Israelites, as well as to Jesus’s community in first-century Palestine, “neighbor” meant one’s fellow Jews. With regard to the treatment of foreigners and outsiders, oppressors and occupiers, the Torah could not be clearer: “You shall drive them out before you. You shall make no covenant with them and their gods. They shall not live in your land” (Exodus, 23:31-33)

As in his highly selective misuse of the Gospels, Aslan is here distorting the Hebrew Scriptures, conflating different categories of “foreigners,” and erasing the crucial distinction between the righteous ger, or foreigner, and the pernicious idolator, as well as the radically different treatments the Torah commands towards each. He mischievously omits the Torah’s many and insistent prohibitions against “taunting the stranger, for you know the soul of the stranger having been strangers in the land of Egypt,” and “cheating the foreigner in your gate,” and, most powerfully, the injunction to “love the stranger as yourself.” (See, inter alia, Exodus 22:20 & 23:9, Leviticus 19:34 and Deuteronomy 24:14.)

Aslan achieves the two central goals of his book with this distorted, and terribly unflattering, depiction of the treatment he alleges Jewish law demands of the foreigner. It hardens his argument about Jesus’s “fierce nationalism” while at the same time creating an image of the Jews as a hateful bunch, profoundly intolerant of the mere presence of others in their land. It is difficult when reading this, and many similar blatant distortions, to suppress all suspicion of a political agenda lying just beneath the surface of Aslan’s narrative.

What will prove most shocking, at least to those with some very basic Jewish education, are Aslan’s many distorted, or plainly ignorant, portrayals of both the Jews and their religion in Jesus’s day. Aside from his apparent unfamiliarity with the critically important recent works of Schäfer and Boyarin, Aslan seems oblivious of more than a century of scholarship on the exceedingly complex theological relationship between the earliest disciples of Jesus and the early rabbis. The foundational work of R. Travers Herford in Christianity in Talmud and Midrash (1903) and, three-quarters of a century later, Alan Segal’s Two Powers in Heaven: Early Rabbinic Reports about Christianity and Gnosticism (1977) are just two of the hundreds of vitally important books missing from his bibliography.

This ignorance—if that is what it is—has serious consequences. For it is not only the Gospels’ accounts of Jesus citing Hebrew Scriptures while discussing Jewish law that belie Aslan’s portrait of Jesus and his apostles as an uncouth band of Galilean peasants. As Schäfer has richly documented, Rabbinic sources contain numerous references to the original biblical exegesis of Jesus and his disciples, including accounts of rabbis who were attracted to these interpretations, even as they came ultimately to regret and repent of their “heresy.” That Aslan has not read Schäfer is made most painfully clear in his pat dismissal of the second-century pagan philosopher Celsus’s report of having overheard a Jew declare that Jesus’s real father was not the Jew, Joseph, but rather a Roman centurion named Panthera. Aslan says that this is too scurrilous to be taken seriously. While it would be unfair to expect him to be familiar with the common Yiddish designation of Jesus as Yoshke-Pandre (Yeshua, son of Panthera), one might expect him to have read the fascinating chapter devoted to this very familiar and well-attested theme in rabbinic sources, in Schäfer’s Jesus in the Talmud.

On the other hand, Aslan weirdly accepts at face value, and even embellishes, the dramatic accounts in the Gospels of Jesus’s entry into Jerusalem allegedly just before the Passover, as the Jewish crowds wave palm branches and chant hosannas. But were he familiar with the basic rituals of the Sukkot festival, Aslan might somewhere have acknowledged the skepticism expressed by many scholars about the Gospels’ contrived timing of this dramatic event to coincide with Passover.

I will spare readers a long list of Aslan’s blatant and egregious errors regarding Judaism, from his misunderstanding of the rabbinic epithet Am Ha-Aretz (the rabbinic description, fittingly enough, of an ignoramus) to his truly shocking assertion that rabbinic sources attest to Judaism’s practice of crucifixion. Aslan does explain well why the Romans employed crucifixion to such great effect: the horrific public spectacle of the corpses of those condemned for their sedition to this most agonizing of deaths was a powerful deterrent to would-be insurrectionists. However, he seems not to understand how particularly offensive this was to the Jews, whose Torah demanded the immediate burial of executed criminals (who were to be hanged, never crucified), prohibiting their corpses to linger “even unto the morning,” as this was considered a desecration of the divine image in which all men were created.

Finally, there is Aslan’s description of the fate of the Jews and Judaism in the wake of the destruction of the Temple. In his account, all of the Jews were exiled from Judea, and not so much as a trace of Judaism was allowed to survive in the Holy Land after 70 C.E. Astonishingly enough, Aslan says not a word about the tremendously important armistice that the pacifistic party of Jewish moderates, led by Yochanan ben Zakai, arranged with Rome, or about the academy ben Zakai established at Yavneh (Jabne, or Jamnia), some forty miles northwest of Jerusalem, which flourished for more than a half-century, breathing new life and vitality into rabbinic Judaism in the immediate aftermath of the destruction of Jerusalem.

Readers with no background in the history of rabbinic Judaism will be misled by Aslan to believe that its pacification was the result of the Jews’ total defeat and expulsion from Palestine. Aslan seems to think that rabbinic Judaism is entirely a product of diaspora Jews who, only many decades after the Temple’s destruction, began to develop a less virulently nationalist and racist version of the Jewish religion, centered on Torah study. Can it be that this self-professed “historian of religions” is entirely ignorant of the Sages of the Land of Israel who flourished in the wake of the destruction of the Temple, and whose teachings are recorded in the Mishnah and, later, the Jerusalem Talmud? Or is Aslan, here again, choosing deliberately to ignore inconvenient historical truths? And none is less convenient than the fact that a significant, and ultimately dominant, faction of Jews of first-century Palestine, far from being nationalist zealots, were pacifists whose accord with Vespasian gave birth to the religion we today recognize as Judaism.

Which brings us back to Aslan’s awful interview on Fox. Lauren Green’s questioning of Aslan’s right, as a Muslim, to write his book was absolutely out of bounds, but perhaps she was, quite unwittingly, onto something about his agenda. While the form taken by Green’s questions was unacceptable and made Aslan look like the victim of an intolerant right-wing ambush, might it not be the case that it was Aslan who very deftly set her up? He prefaces the book with an “Author’s Note,” which is a lengthy and deeply personal confession of faith. Here Aslan recounts his early years in America as an essentially secular Iranian émigré of Islamic origins with no serious attachment to his ancestral faith, his subsequent teenage conversion to evangelical Christianity and finally his return to a more intense commitment to Islam. Aslan ends this intimately personal preface by proudly declaring:

Today, I can confidently say that two decades of rigorous academic research into the origins of Christianity have made me a more genuinely committed disciple of Jesus of Nazareth than I ever was of Jesus Christ.

This unsubtle suggestion that Evangelical Christians’ discipleship and knowledge of Jesus are inferior to his own makes it rather harder to sympathize with him as an entirely innocent victim of unprovoked, ad hominem challenges regarding his book’s possibly Islamist agenda. Aslan had to know that opening a book that portrays Jesus as an illiterate zealot, and that repeatedly demeans the Gospels, with a spiritual autobiography that concludes by belittling his earlier faith as an Evangelical Christian would prove deeply insulting to believing Christians.

And yet, if there is one thing Aslan must have learned during his years among the Evangelicals, it is that even the most rapturous among them, who pray fervently for the final apocalypse, reject even the suggestion that their messianic dream ought to be pursued through insurrection or war. In a word, they have, pace Aslan, responded appropriately to the question “what would Jesus do?”

Finally, is Aslan’s insistence on the essential “Jewishness” of both Jesus and his zealous political program not also a way of suggesting that Judaism and Jesus, no less than Islam and Mohammed, are religions and prophets that share a similarly sordid history of political violence; that the messianic peasant-zealot from Nazareth was a man no more literate and no less violent than the prophet Mohammed?

The Iran Wars: Spy Games, Bank Battles, and the Secret Deals that Reshaped the Middle East by Jay Solomon, Random House, 352 pp., $28

In April 2009, a young Iranian, Shahram Amiri, disappeared in Medina, Saudi Arabia. Ostensibly there to perform the hajj, Amiri had in fact brokered a deal with the CIA to provide information on Iran’s nuclear program. Leaving his wife and child behind in Iran and a shaving kit in an empty Saudi hotel room, Amiri fled to America, received asylum, pocketed $5 million, and resettled in Arizona. Formerly a scientist at Malek Ashtar University, one of several institutes harboring Iran’s nuclear endeavors, Amiri conveyed the structure of the program and intelligence about a number of key research sites, including the secret facility at Fordow.

The story might have ended there. But according to Jay Solomon, chief foreign affairs correspondent for the Wall Street Journal and author of The Iran Wars, what happened next “emerged as one of the strangest episodes in modern American espionage.” A year after Amiri defected, he appeared on YouTube, claiming that the CIA had drugged and kidnapped him. In fact, Iranian intelligence had begun threatening his family through its assets in the United States. Buckling under that pressure, Amiri demanded to re-defect. In July 2010, he returned to a raucous welcome in Tehran, claimed he had been working for Iran all along, and reunited with his son. Of course this was not the end of the story. Amiri soon disappeared, and in August 2016, shortly after Solomon’s book was published, he was hanged.

Amiri’s saga exemplifies the kinds of stories that Solomon tells throughout his account of the U.S. struggle with Iran. Amiri was just the latest victim in 30 years of spy games between the United States and Iran, a conflict, Solomon writes, “played out covertly, in the shadows, and in ways most Americans never saw or comprehended.” The Iran Wars draws on a decade of Solomon’s Middle East reporting to trace that history, urging readers who are caught up in centrifuge counts and sanctions relief to view Iran’s nuclear build-up as part of its broader clash with the United States—and sometimes between the United States and itself.

Solomon’s book appeared in the waning days of Obama’s presidency, as a first-take history of the American–Iranian nuclear negotiations. Yet in the wake of Donald Trump’s surprising ascension to the presidency, it’s newly relevant—now not as a summation but as a starting point. Few within the GOP support the Iran deal, but intra-party debates have emerged on how to move forward. Many critics of the deal eagerly await its demise, while others grudgingly insist on honoring it with strict enforcement. These differing viewpoints appear to exist within the Trump administration itself.

By distilling years of play-by-play news into a coherent narrative, sprinkled with vignettes from Solomon’s extraordinary reporting in the Wall Street Journal, the book offers a basis for reestablishing common facts and chronology as a new administration confronts the long-bedeviling dilemma of Iran. In that sense, it is a blueprint for the Trump team’s reevaluation of the Iran deal—whether to preserve it, and if so, how.



Iranian nuclear scientist Shahram Amiri addresses journalists upon his arrival at Imam Khomeini Airport, Tehran, July 15, 2010. (Atta Kenare/AFP/Getty Images.)

Solomon begins his chronicle in the weeks and months following September 11, 2001. As the United States launched its campaign to root out al-Qaeda from Afghanistan, it quickly discovered the footprints of foreign powers—not just India and Pakistan, but Iran. CIA operatives working with the Northern Alliance, which had been founded with the aid of Iran’s Revolutionary Guards (IRGC), were told to keep a low profile as they neared the front lines, to avoid detection by Iranian observers. For several months, Washington and Tehran felt their way toward some form of détente in Afghanistan. But as Solomon reports, early confidence-building measures collapsed under long-standing suspicions and duplicitous Iranian behavior, which extended to harboring al-Qaeda fugitives, including Osama bin Laden’s son.

The same pattern repeated in Iraq. Initial engagement with Iranian envoys, including the future foreign minister, Javad Zarif, soon dissolved into mutual misgivings. Solomon criticizes the Bush administration for failing to understand the level of existing Iranian influence in Iraq, and for failing to prepare for the complex religious and tribal schisms that Tehran would exploit. Even before the war, Iran began drawing on its local influence, “lying in wait,” according to Solomon, for the United States to fall into its trap. While many U.S. officials hoped that the Iraq war would weaken the Islamic Republic, its leadership launched a bid for hegemony so rapid that few recognized it and so ambitious that both sides of Iraq’s civil war received its support. Iranian meddling deepened Iraq’s endemic dysfunction and proved deadly for U.S. forces.

Iran has deployed this strategy of sectarian gamesmanship across the Middle East. Over the last decade, its agents have seemed to be everywhere: in Lebanon, its powerful proxy, Hezbollah, assassinated Prime Minister Rafik Hariri and crushed the Cedar Revolution, and in Syria, as Solomon puts it, Damascus “became the forward operating base for Iran’s Axis of Resistance.” Throughout, Tehran continued to bewitch Washington with a brew of cooperation and competition. Even as Iran undermined U.S. interests across the region, many in Washington continued to see it as a potential ally.

Those forces within the American foreign policy community had little to stand on while Iranian-supplied explosives maimed U.S. troops in Iraq. But if they couldn’t reach the ayatollah, they’d aim for Bashar al-Assad instead. They hoped that they could pull the Syrian dictator out of Iran’s orbit, potentially generating a series of foreign policy achievements. In 2006, then-Senator John Kerry defied the Bush administration and traveled to Damascus to meet with Assad. The next year, Nancy Pelosi followed Kerry to Syria’s capital. When President Obama took office, Solomon writes, “Assad emerged as a central target of Obama’s new diplomacy,” with Senator Kerry spearheading the outreach.

The attempt to wean Syria away from Iran foreshadowed Obama’s later attempt to wean Iran away from itself. Solomon largely holds his rhetorical fire in The Iran Wars, but he reserves special scorn for Kerry. The senator, he writes, dined at five-star Damascus restaurants with the Assads and returned home to proclaim Bashar a “progressive” ruler who understood “his young population’s desire for jobs and access to the Internet.” In another sign of things to come, Kerry was particularly taken in by the smooth-talking Syrian ambassador to the United States, Imad Moustapha. The ambassador “engaged in a crafty diplomatic dance” in Washington, lambasting Israeli policies but regularly meeting with Jewish figures and singing songs of Syrian moderation. Even as Assad began butchering Syrian civilians in 2011, Kerry praised Moustapha publicly and declared his continued belief that “Syria will change”—showing, in Solomon’s words, “a troubling lack of judgment” that “raised questions about his ability to read the intentions of world leaders.”

Kerry and others would later trade in Assad for Iranian president Hassan Rouhani, and Moustapha for Rouhani’s foreign minister, Javad Zarif. In the meantime, however, the regional struggle between the United States and the Islamic Republic increasingly centered on Iran itself—and especially on its burgeoning nuclear program. In the months before the Berlin Wall fell, the Iranian government had already begun preparing the way for its nuclearization. Solomon retells that story by focusing on the key figures, such as Ali Akbar Salehi, who would later play a major role in securing Iran’s nuclear future by negotiating with the United States. In one of the several spy-novel twists featured throughout The Iran Wars, Solomon recounts how Iran recruited a former Soviet nuclear scientist, Vyacheslav Danilenko, who was an expert in designing nuclear detonators. Over the course of their six-year partnership, Danilenko helped the Iranians design their explosives testing facility at Parchin—ground zero for Iran’s attempts to weaponize its nuclear operation.

As Iran put the final touches on Parchin in 2002, an Iranian opposition group, the Mujahedin-e-Khalq, held a press conference in Washington revealing the scope of Iran’s nuclear ambitions. In an episode that Solomon curiously glosses over, European powers, with tepid American acquiescence, soon started negotiations with Iran to freeze its nuclear development. In the wake of the lightning U.S. victory in Iraq, Iran agreed to suspend its nuclear enrichment, the only time it agreed to do so in a dozen years of diplomacy. But by January 2006, at the nadir of American involvement in Iraq, Iran had broken its commitment.

From that point forward, the Bush administration adopted steadily intensifying financial measures to compel Iran to choose between having an economy and having a nuclear program. Orchestrated by Stuart Levey, the undersecretary for terrorism and financial intelligence at the Treasury Department, the effort at first focused on blocking tainted Iranian financial institutions—those facilitating terrorist and nuclear activities—from obtaining access to the U.S. dollar. Solomon describes Levey’s “global road shows” to Arab, Asian, and European capitals to persuade banks that conducting business with Iranian firms would harm their own access to the U.S. financial system. Over the next three years, the Treasury Department wove together a boa-like blockade of the Iranian financial system that increasingly suffocated the Islamic Republic’s economy.




Iranian Foreign Minister Mohammad Javad Zarif and U.S. Secretary of State John Kerry stand between Chinese Foreign Minister Wang Yi and French Foreign Minister Laurent Fabius, at meetings in Geneva, November 24, 2013. (Fabrice Coffrini/AFP/Getty Images.)

When the Obama administration took office, however, it slowed Levey’s offensive in order to initiate an aggressive diplomatic outreach. Just months into his term, the new president dispatched two letters to Ayatollah Khamenei that immediately took regime change off the table; he also slashed funding for democracy-promotion initiatives in Iran. His commitment to finding a path forward with Iran’s ruler remained so strong that, rather than back the millions-strong Green Revolt that arose in 2009, Obama remained silent. Only when a subsequent round of nuclear negotiations collapsed did Obama resume the sanctions push, and only then reluctantly—spurred in part, as the book details, by the fear of a unilateral Israeli attack. Despite the administration’s attempts to water down subsequent sanctions legislation, Solomon admiringly reports that Treasury’s war “proved singularly lethal,” amassing unprecedented financial leverage against the Islamic Republic.

By 2013, the pressure became so severe that Iran’s new president, Hassan Rouhani, knew he needed access to the more than $100 billion in frozen oil revenues sitting in banks abroad. The only way to get it was through negotiations. Yet just as Iran began to buckle, the Obama administration seemed to become even more eager to deal. Solomon devotes the final portion of The Iran Wars to the two-year diplomatic dance that followed, eventually resulting in the Joint Comprehensive Plan of Action (JCPOA).

Public negotiations between Iran and the P5+1 (the permanent members of the U.N. Security Council, plus Germany) resumed amidst the palaces and hotels of old Europe; the French begged for a harder line, while Ayatollah Khamenei demanded that Iran retain an industrial-sized nuclear enrichment program. The real work, however, took place backstage. Quiet feelers through an Omani back channel that commenced in 2009 blossomed into full-fledged secret diplomacy once Rouhani took office. The United States and Iran fleshed out the terms of the deal, largely imposing them on the rest of the parties. Along the way, Solomon writes, the United States angered its allies, issued empty threats to walk out of the talks, and “started significantly departing” from its original strategy while “weakening its terms.” Only a last-minute series of cave-ins secured the final agreement, which, at best, delayed Iran’s nuclearization by 15 years.

At what cost? Solomon devotes much of his book to searching for an answer. The deal’s constraints, he grants, “offer an opportunity to calm the Middle East.” But the deal also leaves open a path to nuclearization that risks “unleashing an even larger nuclear cascade” across the region. And, most crucially for Solomon, it does nothing to address the issues that produced the Iran wars in the first place—namely, the violent, anti-American aggression of Tehran’s theocrats. In hot pursuit of an arms accord, the Obama administration downplayed or accommodated crises generated by Iran, arguing that it needed to prioritize the nuclear issue. At least some within the administration seemed to hope that the accord would provide the solutions to those very crises. This wishful expectation, if not outright contradiction, could, Solomon believes, lead to a “much bigger and broader war” bubbling up between Iran, its adversaries, and likely the United States.

Solomon’s insights are piercing, but unfortunately he clouds them through the way he chose to structure The Iran Wars. Rather than covering the many facets of the U.S.–Iranian struggle chronologically, he divides them into discrete sections—Afghanistan, the Iraq war, the Arab Spring—frequently rewinding the clock to detail the events. He also devotes a great deal of his attention to areas outside of the nuclear negotiations, so much so that he doesn’t turn to the nuclear program until a third of the way through the book. Although this approach succeeds in recontextualizing the relationship between Washington and Tehran, it sometimes decontextualizes the nuclear negotiations themselves. Readers rarely see, for example, how an Iranian action in Iraq might have influenced a U.S. negotiating position in Switzerland. Solomon makes a strong case for the need to comprehend the relationship between the broader U.S.–Iranian battle and the nuclear talks, but he sometimes obscures the very connections he has done so much to reveal.

Nonetheless, Solomon has excavated many of the deeper patterns that underlay the nuclear diplomacy. In seeking to empty Iran’s centrifuges of enriched uranium, the Obama White House loaded them with psychological meaning, and the impulses that drove Washington’s misbegotten entreaties to Damascus reemerged. John Kerry, who figures in The Iran Wars as a kind of diplomatic Don Quixote, dashed around the region, proposing to visit Tehran in 2009 and floating massive U.S. concessions without full White House approval. The new Imad Moustapha in his life, Iranian foreign minister Javad Zarif, responded by tweeting “Happy Rosh Hashanah,” an act of ecumenism that reportedly astonished Obama staffers. A sweet greeting here, a moderate move there; the Islamic Republic’s rhetorical morsels fed an insatiable American appetite for fantasies of a Tehran transformed.



Yet those fantasies weren’t simply about Iran. They were also about redefining America’s role in the world. For key figures in the administration, the nuclear talks represented something much more than preventing the spread of nuclear weapons. “What was interesting about Iran is that lots of things intersected,” one of Obama’s chief aides, Ben Rhodes, told Solomon. From nonproliferation to the Iraq war, several currents of American foreign policy “converged” around Iran, making it not only a “big issue in its own right, but a battleground in terms of American foreign policy.” President Obama seemed to believe that history was trending in America’s direction and that the best approach was to avoid needless confrontations that could interrupt that process. If the goal was for the United States to get out of history’s way, the greatest threat to that project was the Iranian nuclear crisis. The possibility of war, after all, meant the possibility of America imposing itself once again in the Middle East and on the globe.

The prospect of American action against Iran was just as likely to materialize as a result of a unilateral strike by Israel, which would almost certainly draw in U.S. forces. This explains why the Obama administration so feared Israeli action—a fear that in some ways defined American diplomacy with Iran. Solomon refers to Washington’s anxieties about Jerusalem several times throughout his story, but The Iran Wars may underemphasize Israel’s role in the moral and political calculus behind the Iran deal. One senior policymaker indicated its centrality when, after much progress in the talks, he famously described Benjamin Netanyahu to Atlantic writer Jeffrey Goldberg as a “chickenshit.” “He’s got no guts,” the official said, while another called Netanyahu a “coward” who had succumbed to the administration’s pressure and failed to bomb Iran when he could. Central members of the president’s team seem to have gone out of their way to taunt the prime minister of a close ally in the heat of sensitive, multiparty nuclear negotiations.

Whether calculated strategy or revenge-fueled gloating, the leak signified that Obama’s nuclear diplomacy was almost as much a contest of wills with Israel as it was with Iran. For generations, a strong U.S.–Israel relationship embodied and necessitated American leadership in the region; downgraded ties signaled a reduced regional role for the United States, with the added benefit, for some administration officials, of weakening the pro-Israel lobby in the United States. As much as, in Solomon’s words, Obama “wagered” on eventual Iranian reform, he also wagered that the deal would prevent the “blob,” the long-standing foreign policy experts of both parties, from reversing the administration’s new course in the region.

The question moving forward, of course, is whether President Obama succeeded, particularly with regard to Iran. During the campaign, now-President Trump consistently criticized the Iran nuclear agreement, calling it a “terrible deal” and warning that Iran would no longer get away with humiliating U.S. forces in the Gulf. He repeated those criticisms after the election and once in office, telling Bill O’Reilly that the nuclear accord was “the worst deal I’ve ever seen negotiated” and arguing that it had “emboldened” Iran. “They follow our planes, they circle our ships with their little boats, and they lost respect because they can’t believe anybody could be so stupid as to make a deal like that,” he said in February. The administration quickly backed that rhetoric with action. Following an Iranian ballistic missile test and an attack on a Saudi warship by Iranian-backed Houthi rebels in Yemen, then-National Security Adviser Michael Flynn put the Islamic Republic “on notice,” and the White House imposed a new round of sanctions on Iranian individuals and entities.

At the same time, Trump has pulled back from his threat to tear up the agreement, telling O’Reilly that “we’re going to see what happens.” And those who might have advocated for such an approach are no longer in Trump’s direct orbit. Flynn, whom many saw as the most hawkish force on Iran within the administration, resigned in February, leaving behind those seemingly more inclined to honor the deal with strict enforcement, including Secretary of Defense James Mattis and Secretary of State Rex Tillerson. Meanwhile, Israeli Prime Minister Benjamin Netanyahu, long the most vociferous foreign critic of the agreement, appears to seek a more muscular U.S. posture as opposed to full cancellation. Although it remains too early to say how all of the internal and external dynamics shaping the administration’s Iran policy will develop, Trump certainly harbors no emotional or political attachment to the nuclear deal.

Yet the larger question posed by the JCPOA is its overall effect on U.S. foreign policy. The Obama administration’s negotiations with Iran, and its broader quest to solve several global problems at once, are part of a recurring pattern in U.S. diplomatic history. From Cairo to Baghdad, Democratic and Republican presidents alike have looked to foreign capitals for salvation-sized solutions to regional crises, hoping that winning over a leader or signing a peace agreement could touch off cascading successes. Such expectations have generally led to disappointment. The Iran Wars is relevant now not just as a valuable synthesis of the history of U.S.–Iranian relations, but also as a warning against the risks of such domino diplomacy, no matter how seductive.

The Refusal of Work: The Theory and Practice of Resistance to Work by David Frayne, Zed Books, 2015
Inventing the Future: Postcapitalism and a World Without Work by Nick Srnicek and Alex Williams, Verso, 2015

There is today a preoccupation with the idea of ‘the end of work’: whether in chronicles of the decline of permanent careers in the age of precarity, in dystopian warnings that digitisation is about to bring a new age of mass unemployment, or in utopian demands that we seize the opportunity to radically redefine our relationship to work. The most practical proposal of the latter is the campaign for a Universal Basic Income, an amount paid to every citizen without means-testing, the cost of which, it is argued, could be defrayed by a combination of savings in welfare bureaucracy and reductions in health spending. Until recently, UBI was spoken of as a ‘thought experiment’, but numerous countries are now witnessing campaigns and trials dedicated to making the policy a reality. In Britain, the Green Party, Labour and the SNP have expressed varying levels of openness to it, and there are also advocates on the libertarian Right. The basic annual amount of £3,692 proposed by UK campaigners may not keep the majority from pursuing further income, but such practical questions are less significant than the change that UBI represents to the very ontology of work. For the first time since the Garden of Eden, work would no longer be the condition of being, and all of us could live without paid labour if we chose.

Two recent books have offered coordinates for thinking through this ‘end of work’ moment. In The Refusal of Work, David Frayne makes a case against the culture of over-working we see at all points on the social scale; and in Inventing the Future, Nick Srnicek and Alex Williams offer a radical programme for bringing about a ‘post-work’ future in which everyone is sustained by the fruits of digital technologies and a UBI. Both books are rigorous in their arguments for the desirability of ending – or radically reducing – the work we do, and searching in their analyses of how this might be achieved. They also have another interesting thing in common: both come up against the question of what we are to do when we no longer depend on paid work. This, we want to suggest, should become a more central question for the post-work debate, all the more so because it leads to the question of how we judge what we are without paid work.

Central to The Refusal of Work are case studies of people in Britain who have reduced the amount of paid work they do. Whether they have other sources of income, live on benefits or find ways to make money outside regular employment, they each do so in order to pursue more fulfilling activities. Anne leaves a stressful job in the media to focus on photography; Matthew abandons administrative work to study and volunteer for the Royal Society for the Protection of Birds; and Rhys aspires to leave his job as a computer programmer to develop his allotment. ‘There is a limit to the extent to which people will tolerate the contradiction between ethical ideals and daily realities’, Frayne remarks, which leads to an ‘opposing tradition of insurgency and rebellion, arising wherever people have refused to internalise the idea that to work is good, healthy and normal’. Most of Frayne’s participants find modern labour to be as damaging and unhealthy as unemployment is generally alleged to be. Part of the problem of work is the dreadful gap between the unhappy experience many have of it and its status as a high ethical demand. Even those among his interviewees who are not benefits claimants and live by their own means say they feel stigmatised for not doing paid work.
Image Credit: (Cliff James CC 2.0)

The work ethic’s moral injunction, however, is often met by an equivalent moralism of the refusal (or the pose of a refusal) to participate in capitalist culture, meaning that work and non-work both tend to be justified in moralising terms. As Frayne puts it, ‘a lot of popular anti-capitalist polemic […] tells people (often in a rather pious fashion) that they will be happier if they choose to work less and moderate their spending’. Frayne is careful to question any critique of work that places its stress on the ‘brainwashing or moral laxity’ or acquisitiveness of those who fail to resist work, and he takes pains not to idealise the often trying lives of his interviewees. And yet, whenever he discusses what is beneficial in their lives, Frayne’s discourse cannot entirely extricate itself from its own kind of moral ‘piety’.

Take, for example, the fact that so many of Frayne’s interviewees find that, because working full-time generates expenses, giving up work actually gives them a head-start on making up for lost income. ‘Given the extent to which many modern commodities – from pre-prepared meals to high-caffeine drinks, car washes, repair services, care services, personal trainers, dating agencies and so on – are capitalising on our lack of free-time’, Frayne says, ‘it is not surprising that many of the people I met found that working less was allowing them to save money. They were able to do more for themselves’ (180). However, when we ‘do more for ourselves’, we are still performing work; only it is work of an implicitly idealised kind.

What is at stake in the ideal of ‘doing more for ourselves’ comes to the fore in an early response to the argument that liberation from certain kinds of work can give rise to personal autonomy and creativity. Anticipating later feminist arguments for domestic and reproductive labour to be recognised as work, Virginia Woolf argued that much of the intellectual potential of women has historically been stifled by the demands of the everyday household: ‘daughters of educated men have always done their thinking from hand to mouth […] They have thought while they stirred the pot, while they rocked the cradle’. Frayne’s comments, however, have more in common with one of Woolf’s harshest detractors. In a review of Woolf’s Three Guineas (1938), literary critic Q.D. Leavis remarks:

I feel bound to disagree with Mrs Woolf’s assumption that running a household and family unaided necessarily hinders or weakens thinking. One’s own kitchen and nursery, and not the drawing-room and dinner-table [that is, the spaces of upper class coterie conversation]… is the realm where living takes place, and I see no profit in letting our servants live for us.

Leavis’s objections are inflected by class, explicitly contrasting her tough-minded petit-bourgeois background to Woolf’s suspiciously louche aristocratic one. (In one of the better jokes in literary criticism, she wonders whether someone of Woolf’s breeding would ‘know which end of the cradle to stir’). For Leavis, intellectual creation and self-fulfilment are not to be abstracted from the tasks of ordinary life, but draw their strength precisely from them (though whether men are expected to participate in intellectually rewarding cradle-stirring remains unanswered here).

This position is consistent with what Raymond Williams called the ‘culture and society’ tradition in middle-class radical thought since the end of the 1700s, which opposes the deadening alienation of modern ‘society’ with an idealised, unalienated form of lived ‘culture’. A residue of this idealism remains implicit in the arguments of many post-work exponents. For instance, in a discussion of André Gorz, Frayne describes ‘the injustice in a society where one section of the population buys their free-time by offloading their chores on to the other’ (40). This remains a poignant observation, particularly with the rise of platforms such as TaskRabbit and Amazon Mechanical Turk that encourage the most casualised labourers to compete to undercut each other to perform household work for those who can afford them. Where the Leavisian moralism creeps in, however, is when Frayne’s discussion suggests that it is not merely unjust that many of us offload our unpleasant but regrettably necessary day-to-day tasks, but that we are missing out when our hectic work lives oblige us to do so. At best, the situation of Frayne’s interviewees is that for as long as they can abandon work, they are no longer servants; in being freed to grow their own vegetables, plan their own exercise regimes and raise their own children, they are no longer ‘letting servants live for them’. These activities, in this analysis, are no straightforward kind of work, but the privileged space of ‘culture’ as such. We were duped into thinking work was life, and now we learn that life is staying at home.

This idealisation does not come down to any theoretical neglect on Frayne’s part; rather, it is a problem structural to the post-work idea. While it aims to rescue us from the faulty moralism that keeps us in jobs we hate, it only justifies itself by proposing a more grounded, fulfilling and self-sufficient life, which it falls to post-work theorists to define. This comes to the surface whenever the literature on post-work describes what everyone is specifically projected to do once they have stopped working for a wage. For Frayne, ‘shorter working hours would open up more space for political engagement, for cultural creation and appreciation, and for the development of a range of voluntary and self-defined activities outside work’ (36). In their own ambitious manifesto for a post-work future, Srnicek and Williams offer analogous statements: ‘leisure should not be confused with idleness, as many things we enjoy most involve immense amounts of effort. Learning a musical instrument, reading literature, socialising with friends and playing sports all involve varying degrees of effort – but these are things we freely choose to do’ (140, 85).




For these authors, work in the sense of effort is tolerable if we pursue it voluntarily and towards some meaningful project. In both books, the vital statement of this ethos comes from Karl Marx, in a remark in The German Ideology that can be seen as totemic for the post-work movement. Marx proposes that under ideal circumstances, it would be possible ‘to hunt in the morning, fish in the afternoon, rear cattle in the evening, criticise after dinner, just as I have a mind’. This famous passage about self-determined pursuit is often linked to Marx’s demand for a shortened work-week as the only way to human freedom; what is less noted is that Marx is still implicitly prescribing how we should use our newfound freedom. It is not incidental that this ideal of leisure is made up entirely of what we might call ‘productive enjoyments’: it is not about viewing waxworks and magic lanterns in the morning, reading penny novels in the afternoon and drinking gin all evening. It seems that the argument for post-work has always been rhetorically dependent on making the case not only that we should do away with work because it is unjust, unnecessary, damaging and so on, but also that the alternative activities we would be freed to do would be actively good for us. Marx, like those of us attracted by his vision of self-determination, falls into the idealism held out by the hope for ‘unalienated’ labour.

For many people – academics and activists included – the prospect of living between the allotment and the Hegel discussion group might seem idyllic. But there are plenty of others for whom this post-work utopia could sound like hard work: a mix of newly idealised domestic labour on the one hand, and a haughty obligation to constantly improve ourselves on the other. The accusation is anticipated in the slogan coined by new media activist Aaron Bastani. ‘Fully Automated Luxury Communism’ is a provocative and maybe partly tongue-in-cheek phrase gaining traction as a means of summarising a future where most waged labour has been automated, everyone is sustained by UBI and advanced Green technologies keep all in a state of ecologically-sound plenty. While it does not include the phrase, Srnicek and Williams’s book is the outstanding analysis of how to bring about a future of this sort: Inventing the Future, in its call for a ‘synthetic freedom’ to bring the potential of today’s machines on the side of a common good life, leans towards a more luxurious post-work future than hitherto imagined.

Through this link between technology, a work-free future and pleasure, ‘Fully Automated Luxury Communism’ promises to rescue Left utopianism from the dour, ascetic and pious reputation Frayne warns about: whether mental images of Soviet bread queues or of Jeremy Corbyn allegedly eating cold baked beans after a long day’s leafleting. If a new popular movement is to emerge, the communist prospect is going to need to sound like something people might actually like. This is where the promise of Green technologies that can produce virtually anything – that is, a post-scarcity economy – comes in. In this analysis, it is no longer a choice between saving the planet and ending inequality on the one hand, and pursuing our desires on the other. An accelerated digital future claims to deliver both: shared ownership of the means of production, along with whatever luxury watches, cars (electric, of course) and other commodities we covet.

The post-work prospect appears to be justified according to two not entirely reconcilable ideals of what it is we are to do ‘after work’. On the one hand, the considerable effort of overturning society demands the traditional justification that work is keeping us from reaching our full human potential, hence the promise that we could all become craft brewers, learn ballet and hold three PhDs. On the other, for those concerned that most post-work definitions of the good life are rooted in class-specific productive pleasures, are overly domestic or are simply a bit worthy, there is the second promise: that one can still enjoy the hedonistic joys of capitalism, which automated communism will produce all the more plentifully.


One of the virtues of Inventing the Future is that, despite the seismic reimagining of society it describes, its pragmatic emphasis makes bringing it about sound a fairly no-nonsense affair. All the Left need do is abandon its love affair with the virtue-signalling ‘folk politics’ of protest and its nostalgia for the alleged sweet spot of post-war welfare capitalism, and aim to gain control of hegemony-creating institutions. (Whatever one’s assessment of Corbyn, Bernie Sanders, Syriza and Podemos individually, their successes in recent years suggest that the last is not an unattainable prospect.) This pragmatism extends to the authors’ concession that while such a change in society would have unanticipated consequences, it is still worth trying. Whatever unknowable things people might want to do with their freedom, it has to be better than the lack of autonomy that exists now. Theirs is ‘a humanism that is not defined in advance’, a humanism of ‘synthetic freedom’ that even anticipates the dangers of essentialism:


[Synthetic freedom] is constructed rather than natural, a collective historical achievement rather than the result of simply letting people be. Emancipation is thus not about detaching from the world and liberating a free soul, but instead a matter of constructing and cultivating the right attachments.

As such, Srnicek and Williams are off the hook as far as the humanist essentialism of the ‘culture and society’ tradition is concerned. They acknowledge that whatever freedom there can be, it will be the product of social and power relations. But if we’ve abandoned the old humanist faith that people can be vaguely trusted to sort themselves out in a congenial manner once liberated from their alienation, who or what will shape our lives in the post-capitalist future? If there is no human essence that can guarantee the form of the transformation, and yet we continue to hope that it will ‘cultivate the right attachments’, then this new kind of human subject has to be scrupulously promoted, created and maintained in a manner that seems irreconcilable with the impish consumer easygoingness promised by Bastani. In the book’s closing remarks, the unbuttoned ‘luxury’ of ‘Fully Automated Luxury Communism’ falls by the wayside:


Such a project demands a subjective transformation in the process – it potentiates the conditions for a broader transformation from the selfish individuals formed by capitalism to communal and creative forms of social expression liberated by the end of work.

Here, we have returned to something akin to Frayne’s ideal of ‘political engagement’, ‘cultural creation and appreciation’ and ‘the development of a range of voluntary and self-defined activities outside work’: activities enormously attractive to certain kinds of people, not inevitably so to others. While there are strategic and theoretical reasons for smudging the point a little, there seems to be no post-work prospect that is not ultimately prescriptive in this way.

One tentative conclusion might be that these admirable steps towards a positive programme for the future of work, which acknowledge that ‘subjective transformation’ is required, might also explicitly acknowledge that work could never simply be ‘driven by our own desires’ in the manner hoped for in Srnicek and Williams’s conclusion. In what is virtually the sole reference to the Russian Revolution in his works, Sigmund Freud remarks in Civilization and its Discontents (1929) that while the end of economic inequality might be desirable, it could never result in everyone peacefully undertaking ‘whatever work is necessary’ in a state of happiness. Freud’s book grapples with the reason for this: the myriad ways in which our drives and desires are diverted, thwarted, harnessed and given shape by human society and culture, beyond economics alone. We do not end with this reference in the spirit of conservative pessimism, but rather to suggest that while these books make a post-work future look attractive and even achievable, the ideological, economic and political measures they present do not take account of the problem of desire. Our desires are never simply our own, and therefore work can never be driven simply by ‘our own desires’. If, as Srnicek and Williams demand, the Left is to rediscover itself as a force of utopian optimism, then what forms desire might take when it is no longer stymied by alienated labour is a question we must continue to address.