Literature, Philosophy, Science

Structuralism, Poststructuralism, and the Decline of the Literary Humanities

It seems hard to believe, from our current vantage point in which the academic study of literature appears to be in a state of perpetual crisis, that there was a time, not so long ago, when the literary humanities reigned over an expanding scholarly empire — one that was not unlike the empire of the quantitative social sciences, and especially economics, today. Instead of literary academics feeling tempted or obligated to apply quantitative methods to the study of literature — as, for example, Franco Moretti has done, with results of (predictably, it seems to me) real but limited value — non-literary scholars felt tempted or obligated to become conversant in literary theory.

I was reminded of this while reading some essays by Jerome Bruner, an academic psychologist who died in 2016. In works like “Life as Narrative” (1987), Bruner found it useful to draw on literary theory about the structure of narratives as a source of ideas for understanding his own field, and even for designing empirical experiments. He cites Vladimir Propp, Frank Kermode, and Paul de Man, among many others.

Who outside of literary academia reads the works of literary academics today? What happened?

I would like to propose, a little controversially, that the literary humanities finds itself in its current state of isolation in part because of its rejection of structuralism. By “structuralism,” I do not mean only what Lévi-Strauss meant when he introduced the term. I mean something broader: arguments that attempt to reduce complex, unwieldy human phenomena to relatively simple structures that can then be used to make predictions. The kind of models that the structuralist anthropologist Mary Douglas developed, for example. In its turn to poststructuralism, American literary academia developed a profound antipathy toward this kind of thought — an antipathy, I would argue, that has discouraged literary scholars from developing insights and models that might be of use outside of academic literary studies.

When literary scholarship turned against structuralism, it also implicitly turned against modeling. But models are a large part of what we use to make sense of our worlds, and they are one of the primary ways that ideas move between academic disciplines. To reject the search for predictively useful models is to invite the kind of intellectual isolation in which literary academia currently finds itself.

Economics, Law, Philosophy, Politics, Religion, Science

Max Weber and Political Ethics

I hadn’t read anything by Max Weber until very recently, but finally made my way through “Politics as a Vocation,” his late lecture delivered shortly after the end of the First World War and the start of the German Revolution.

Weber seems to be primarily known today for several largely logically independent ideas scattered across the social sciences and humanities — especially: the idea that a Protestant work ethic played a role in the rise of capitalism, the importance of charisma to politics, the centrality of bureaucracy in the modern state, and the definition of the state as “a human community that (successfully) claims the monopoly of the legitimate use of physical force within a given territory” (where “legitimate” only means “accepted as legitimate,” to the apparent consternation of many normative political theorists).

I had heard that “Politics as a Vocation,” where this definition of the state appears, was one of the places where Weber approached political theorizing, and I was predisposed to sympathize with the lecture by some positive remarks that the legal scholar Duncan Kennedy had made about Weber and the “ethic of responsibility.” I’ve also always believed that political theorists tend to pay too little attention to empirical knowledge from history and political science, so I was hopeful that a broadly historically and empirically informed social scientist like Weber might offer a valuable perspective.

To my surprise, however, the lecture as a whole turns out to be remarkably parochial, and in parts, dangerously misguided.

Law, Philosophy, Politics

When to support a war: consequentialist + deontological justification

I’ve been meaning to write a quick post about the question of when a nation should go to war, and when it should not — and in particular, under what conditions the United States should use large-scale military force against another country. I don’t mean the question of whether a war is legal under the body of international law governing jus ad bellum. I mean the question of when large-scale military engagement is a good idea, something that the public should support. It’s not inconceivable that there are situations when military force is a good idea even though the legal basis is unclear or lacking — such as Kosovo in 1999, or maybe Libya in 2011 — and there are also, certainly, situations when the legal grounds for a war exist, but going to war would be unwise — such as attacking Russia in response to its annexation of Crimea last year.

Based on the armed conflicts involving the United States during my lifetime, it sometimes seems as though the wisdom of entering or not entering an armed conflict gets determined in retrospect, based on how the war turned out — which doesn’t seem like a useful or fair standard for judging wisdom. No one seems particularly bothered about Desert Storm, looking back, although many progressives at the time (including, for example, Joe Biden) opposed military intervention. On the other hand, many people seem to feel that the United States should have intervened in Rwanda to stop the genocide, although there was no great progressive push to do so at the time. It’s hard to avoid the conclusion, looking at attitudes toward U.S. uses of force over the last few decades, that we tend to treat decisions about wars as good decisions when they turn out well, and treat them as bad decisions when they don’t. But we often can’t know in advance how a war, or the choice not to go to war, will turn out — wars are notoriously unpredictable, and often develop their own momentum, and motivations and expectations frequently change — so how are we supposed to decide what to support beforehand?

The idea I’ve been meaning to post is an answer to this question. It’s a fairly simple one, and it may already appear somewhere in the literature on just war. But I’ve never come across it before.

Philosophy, Politics

Recognition in the Hierarchy of Political Needs

This is another post in the series exploring the idea of a hierarchy of political needs. Can we better understand political change — and, in a democracy, voting behavior — by thinking of voters as a kind of “body politic” motivated by a relatively stable hierarchy of concerns, with national security above the economy, and the economy above largely altruistic concerns such as responding to the risks of climate change?

After writing the first post, it occurred to me that there might be a political concern that trumps even national security: roughly speaking, what Thucydides called “honor,” what Hegel called “recognition,” and what is sometimes discussed today using terms like “cultural identity” and “dignity.”

Economics, Philosophy, Politics, Science

The media at the hinge of political history

I’m beginning to wonder whether the media is in some sense the most crucial actor in understanding political change in a democracy.

The more you read in political science, the more you find grounds for skepticism that various ostensibly powerful actors can bring about change through their own actions. The presidency, for example, doesn’t appear to be all that it’s cracked up to be. Despite our desire for a President who will use the “bully pulpit” to sway the public, the evidence suggests that Presidents rarely succeed in changing public opinion. At most, their public statements can help shape the agenda, forcing the public to have an opinion on an issue — by influencing what the media talks about.

The courts rarely depart significantly from public opinion, despite the myth of the Supreme Court as the last refuge of liberty and equality in times of crisis and stress. In theory, the Supreme Court might be able to bring about political change by decree, ordering the government to do this or that radically unpopular thing. But that almost never happens in practice.

I suppose someone could argue that Congress is a driving force for political change. Maybe they’d point to the Senate’s ostensible deliberative golden age in the antebellum era. But I don’t imagine many people would seriously suggest that Congress today is leading much of anything, or that it influences more than it is influenced.

The public itself is remarkably uninformed, and seems likely to remain uninformed despite the dreams of theorists of deliberative democracy for “deliberation days” and so on. To the extent that some portions of the public are informed, they’re largely informed by the mass media — and, perhaps, social media, to the extent that the two are different.

How about grassroots activists? There’s no doubt that activists can be a real force for political change — on those rare occasions when their decades of Sisyphean efforts bear fruit. But, when this happens, it is usually in part because they have succeeded in getting favorable coverage by the media. Or because they have made their own favorable media, for example by creating a popular, muckraking documentary film.

Economics, Philosophy, Politics, Science

Civil Disobedience: the Poor Man’s Lobbying

So let’s assume, as I considered in an earlier post, that there’s a relatively stable hierarchy of political needs among voters in democracies like the United States — a rough ranking of concerns that tend to determine voting behavior, especially in presidential elections. And let’s assume that in this ranking, “national security trumps economic policy, and economic policy trumps other issues, such as civil liberties, or campaign finance reform, or more altruistic goals like saving future generations from the consequences of severe climate change…”

That means that voters will tend to vote based on how they perceive the economy to be doing — unless there is a perceived threat to national security, in which case voters will tend to support the candidate or party perceived as strongest on national security, or at least will only support candidates perceived as sufficiently strong. Maybe there’s something that trumps even national security — something like honor, identity, or recognition — but I’ll set that aside for the moment.

What if you, the engaged citizen, want to bring about change on some issue that is beneath the economy on the hierarchy of political needs? What if, for example, you want to see the federal government change its policies on carbon emissions? Is this a hopeless dream?

It seems to me that there are several mechanisms in our democracy for getting around the hierarchy of political needs.

Economics, Philosophy, Politics

Thucydides and the Social Sciences (Autobiographical)

This post offers a little piece of intellectual autobiography that I hope will place some other posts in a clearer light — especially the posts related to the later Wittgenstein, and the posts on economics. For me, it’s a chance to sort out some of my current thinking by considering what preceded it.

There was a time, shortly after my first exposure to the history of economic ideas, following years of being focused almost exclusively on the humanities, when I thought that what the scholarly world really needed was a kind of new grand unified theory of the social sciences. All I look for from a social science — from any science — is an increase in the power to predict and control nature in ways that serve our purposes, whatever they are. The intellectual run-up to the global financial crisis seemed to show that orthodox economics, as practiced by the world’s leading economists, was failing by this standard.

And economics appeared to be at the vanguard of the social sciences. If economics was driven by “physics envy” — the scientistic desire to emulate the mathematico-deductive rigor of theoretical physics — then other social sciences, such as political science, seemed to be afflicted with “economics envy.” But the global financial crisis called into question whether the emperor was wearing any clothes. Under such circumstances, it seemed to me, wasn’t it worth questioning the reigning assumptions? Might it not be time for some revolutionary science?

Once I began reading about the history of economic ideas, along with critiques of contemporary economic thought, my enthusiasm for this idea grew. To begin with, the secondary literature on economic thought is full of persuasive critiques of the intellectual underpinnings of a great deal of contemporary academic economics, especially the kind practiced in “freshwater” economics departments and by business school professors teaching finance. The more one reads about rational choice theory and the assumptions of quasi-omniscient, hyper-mathematical rationality that dominate so much of mainstream academic economics, the more the field seems ripe for a paradigm shift based on a skeptical rethinking of the basic phenomena under investigation.

In fact, it occurred to me that the predictive successes of modern economics, such as they are, might be largely attributable to the fact that when one is investigating human behavior related to money and closely related subjects — the core focus of economics as a subject matter — the single most important factor in human behavior is calculated self-interest, or, as economists sometimes call it, “rationality.” When making money, people will generally try to make as much as they can with as little effort as possible; when spending money, people will generally try to spend as little as they can for the greatest possible return; and so on. If you’re trying to predict money-related human behavior using as simple a model as possible, a model based on the assumption that individual actors are more or less rational agents (in the economic sense of rationality) is probably your best bet.

But even if you achieve good predictive results with this model in the context of money-related activity, this success obviously does not imply that rationality will always be the most useful model for predicting human behavior, especially in contexts less directly related to money, or where we have good reason to believe that non-pecuniary concerns may trump pecuniary ones.

For example, when we try to imagine what contemporary American political life would look like if all the political actors behaved purely based on calculated self-interest — without gaming the results ahead of time by redefining “self-interest” to include all sorts of ad hoc preferences and motivations that we would not ordinarily view as “self-interested” — the thought experiment leads to absurd results. Do we live in a world with no voters, where politicians run for office without any ideological commitments, tribal affiliations and moral commitments play no role, and officials attract the public’s support by offering generous populist benefits, such as lavish infrastructure and a guaranteed minimum income, with no concern for the deficit? Not at all. Many of the central features of our political life are phenomena that one would not expect to see if the relevant actors were behaving purely as rational actors — unless, again, the idea of rationality is transformed beyond recognition or usefulness.

So, when one discovers that the rational choice methodologies of economics have expanded, perhaps based partly on economics’ scientistic allure, to other domains in the social sciences, the case for a new grand unified theory of the social sciences seems even stronger. If the use of rational choice theory in economics invites skeptical questioning, the use of rational choice theory in, for example, political science — in so-called “public choice theory” — can sometimes seem not only absurd but useless. What unexpected predictive successes can public choice theory claim, against the countless instances where its models would lead us astray? The same could be asked of many rational-choice-based forays into sociology, such as the study of family life.

Certainly, focusing on calculated self-interest may help to dispel comforting illusions about human behavior — for example, if anyone thinks that crime results from some kind of mental pathology, it could certainly be useful to show the contexts in which rationality helps explain crime. But how many comforting illusions are there left to dispel today? Hasn’t the Machiavellian assumption of cold, calculating rationality as the driving force in all human behavior become our own dominant illusion — comforting us not by flattering our moral characters, but by flattering our cold-eyed realism, our courageous perceptiveness and freedom from childish illusions — even where an equally tractable alternative model might yield superior predictions?

With these thoughts in mind, I asked myself: why doesn’t someone develop a better alternative to rational choice theory that can displace its imperialistic role within the social sciences? Why, for example, doesn’t someone follow the lead of Thucydides, who recognized the great importance of self-interest to human behavior, but saw self-interest as only one of human beings’ three central motivations — the other two being fear and honor?
