Aristotle and Ainslie: An Empirical Basis for Virtue Ethics

Jennifer Baker

“Is it not clear that there are several concepts that need investigating simply as part of the philosophy of psychology and, as I should recommend—banishing ethics totally from our minds? Namely—to begin with: ‘action,’ ‘intention,’ ‘pleasure,’ ‘wanting.’ More will probably turn up if we start with these. Eventually it might be possible to advance to considering the concept ‘virtue;’ with which, I suppose, we should be beginning some sort of a study of ethics.” Elizabeth Anscombe, “Modern Moral Philosophy,” 1958

When the question under consideration is which virtues accompany science, it might seem to be coming at things the wrong way round to ponder the extent to which science has come to support an account of virtue.[1] But failing to ask science to check our understanding of virtue, when it has been invited over and is on hand, is a bit like keeping the car hood shut while the mechanic is visiting. This is not a request for a redesign, however. There are already ample examples of virtue ethics (and ethical proposals more generally)[2] being redone with a focus on information gleaned from, for example, social psychology[3] or personality psychology.[4]

But I don’t want a new car. Not yet. Like others, I do not even recognize classical virtue ethics in the descriptions of those offering upsells.[5] Imagine instead that I am only looking for the visiting mechanic to inventory parts of the engine in order to confirm that the classic car in the driveway should run. This is all that I hope to do in this paper: emphasize (show off) the psychological capacities that “motor” classical virtue ethics, those crucial to its various and modern forms.[6] I want to suggest that these components can be recognized by contemporary science, and are actually described in behavioral science to the extent that we virtue ethicists could simply adopt these newest of formulations. I think this could be a useful way to ward off skepticism (warranted enough, no doubt) about how plausible ancient accounts of our moral psychology could still be. It is also a way to move forward, as the ongoing research could sync up with the continued development of modern approaches to virtue. I conclude with an example of this.

There are all sorts of virtue ethics, of course. Some may be non-naturalistic, so that overlaps with science are of limited use. But as Julia Annas explains, the original, eudaimonist accounts of virtue are naturalistic.[7] I mean to discuss this type of virtue ethic. Contemporary accounts following this model should not be derived from science, nor reduced to it, and yet are “weakened if the best contemporary science conflicts with its claims or makes it hard to see how they could be true.”[8] Contemporary eudaimonistic approaches take happiness to be a final end and practical rationality to be necessary to virtue.[9] A eudaimonistic virtue ethic will also include, in some version, accounts of the following: (1) self-generated reward (SGR) and (2) self-imposed behavioral norms (SIBN). (We will soon see that behavioral scientists represent these as “personal rules,” or verbal commitments that we generate ourselves. I give an example of their equivalence to SIBN.)[10] If we look for support for these features, we will single out virtue ethics, as other accounts of ethics do not depend on SGR or emphasize SIBN at all. And we will also be running a rather efficient test, as classical and neo-classical virtue ethics take SGR and SIBN to result in the better-known features of the view (resultant automaticity, akrasia, enkrasia, changes in what is pleasurable, acting ethically “for its own sake,” improved agential efficacy). If there is no scientific basis for SGR and SIBN, virtue ethics would not be able to work as promised. The mechanic could walk away from the engine before checking anything else. There would seem to be no point in taking a neutral stand between ethical theories, comparing (or compiling!) features of virtue with other discovered propensities toward acting ethically. Not if it turns out virtue ethics won’t run.

If I have not sustained the car-in-the-driveway metaphor for too long: cars might have bodies that withstand wind pressure, drivers might tend to obey stop signs, theorists have lots of ideas about what counts as ethical. Science can track such things. But we want to know about this model of car. And so I propose we turn to behavioral science to do this check, rather than to the scientific fields that have already generated accounts of moral behavior or moral judgment, even if the accounts seem somewhat friendly to the idea of virtue.[11] This involves ignoring some extant support for SGR and SIBN, but there are several benefits to taking up one specific explanatory framework and applying it to eudaimonistic virtue ethics as described.[12] George Ainslie’s approach has been demonstrated to mesh well with counter-proposals from other behavioral scientists as well as with related fields such as neuroscience.[13]

Ainslie’s proposals are mature and well known, and have been vetted by those working in other fields.[14] He developed his approach on the basis of animal studies, experiments with human subjects (including controlled experiments), neuroanatomical data, and clinical observations. He is best known for his account of hyperbolic discounting, which overturned the idea that our preference curves are exponential, a matter of consistent preference over some specified time. Instead, animals and humans alike prefer smaller rewards sooner, and these behavioral choices are best represented on a hyperbolic curve. Ainslie’s was a seminal contribution, influencing not only his own field but also economics, animal science, and neuroeconomics. He has also applied his theory very broadly, making it much easier for non-scientists to apprehend. He has used it to explain common phenomena like procrastination and addiction, for example.[15]
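The contrast between the two curves can be made concrete. In the minimal sketch below, the amounts, delays, and discount parameters are chosen for convenience of illustration, not drawn from Ainslie’s data: a hyperbolic discounter prefers a larger-later reward when both options are distant, then reverses to the smaller-sooner reward as it draws near, while an exponential discounter’s preference never reverses, since the ratio of the two discounted values stays constant over time.

```python
# Illustrative sketch of hyperbolic vs. exponential discounting.
# The rewards (50 vs. 100), delays, and parameters k and delta are
# assumed values for demonstration, not fitted data.

def hyperbolic_value(amount, delay, k=1.0):
    # Ainslie-style hyperbolic discounting: value falls as 1 / (1 + k * delay).
    return amount / (1 + k * delay)

def exponential_value(amount, delay, delta=0.8):
    # Classical exponential discounting: value shrinks by a constant
    # factor per unit of delay, so relative preferences never change.
    return amount * delta ** delay

# A smaller-sooner reward (50 at delay 1) vs. a larger-later one (100 at delay 5).
# Up close, the hyperbolic discounter grabs the smaller-sooner reward...
assert hyperbolic_value(50, 1) > hyperbolic_value(100, 5)
# ...but with both delays pushed back 9 units, the same discounter prefers
# the larger-later reward: a preference reversal.
assert hyperbolic_value(50, 10) < hyperbolic_value(100, 14)

# The exponential discounter's preference is the same near or far.
assert exponential_value(50, 1) > exponential_value(100, 5)
assert exponential_value(50, 10) > exponential_value(100, 14)
```

It is this predictable reversal of preference as a reward draws near, found in animals and humans alike, that exponential models cannot accommodate.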

Invoking one specific, robustly developed and articulated scientific framework is a way to limit the freedom philosophers have to interpret data for themselves. This is to heed warnings philosophers have developed through recent trial and error. As John Doris puts it, philosophers should no longer lean “too heavily on any one study, or one series of studies, in theory construction.”[16] Ainslie’s framework is the constructed theory with which philosophers would need to contend. Assessing theory is what philosophers are trained in, and it seems more appropriate for philosophical ethicists to do this rather than generate interpretations of data that are ad hoc. Finally, since Ainslie limits his investigations to general motivation, we virtue ethicists are assured that his own toolbox was not designed to test assumptions about ethics with which a virtue ethicist would disagree. Ainslie is not assuming, for example, that ethics can be gauged through observation, or that any moral judgment must be rational or somehow maximizing. (He is not testing this car by whether it travels on some particular route.) Since he has not operationalized ethics for his work, it begs fewer questions. And often, from the perspective of a virtue ethicist, surprising basic claims about “what a good person would be expected to do” can still be at odds with the account.[17]

Right off the bat, Ainslie’s approach lends support to one of the most basic assumptions in ancient virtue ethics. Plato tells us our soul has three parts that vie for control over our behavior. Annas describes these aspects of our moral psychology as focused on immediate, midrange, and longer-term goals.[18] Ainslie gives us an empirical basis for agreeing that agency is divided, a matter of coordinating various parts and perspectives. Animal and human experiments alike, he writes, demonstrate “that we have successive motivational states that regularly conflict, and in a way that prevents durable resolution.”[19] Ainslie describes the “internal bargaining” between these constituent “states” as being what prevents us from acting like children given free rein in a candy store. We develop methods to “avoid or forestall” decisions that would reflect only our shortest-term interests. Addiction is particularly revealing of these types of internal conflicts. Addicts, for example, may report an interest in exercising restraint at the same time as they are seeking or actually taking drugs.[20]

We see this possibility in Aristotle’s descriptions of akrasia, or the inability to do what we think is best. Some approaches that have been used to supplement ethics—such as rational choice theory—fail to problematize motivation enough to acknowledge how often we are akratic. Ainslie, in contrast, sees this as the question we ought to set out to answer: “what motivates someone to repeatedly choose what she herself often sees as a poorer option, even if she is trying to stop choosing it?”[21] What akratic behavior reveals about us is simply missed if we treat it as merely irrational, glossing it: “when someone is seduced by a fudge sundae or cocaine high, she chooses immediate consumption in one modality despite larger, later losses in others—health, wealth, safety.”[22] Ainslie instead accounts for akrasia by identifying the appeal of the “lesser” immediate reward and the ubiquity of “inconsistent propositions.” We simply do want to have our cake and eat it too. This observation about human nature is what ancient philosophers are notorious for telling us. (Recall Plato’s description of us as leaky jars in the Gorgias.) Ainslie’s investigations into how we avoid short-term diversions are the same work Plato did, without the benefits of modern science.[23]

And like Aristotle, Ainslie also emphasizes that we should never think of choice in terms of a single moment in time. Choice, in order to be explained, must be regarded diachronically, rather than synchronically. This is to understand that our agency, our values and self-understandings, are involved in the creation of reward. Martha Nussbaum sees this insight from classical virtue ethics as having been roundly ignored: “most philosophers who have written about the appetites have treated hunger, thirst, and sexual desire as human universals, stemming from our shared animal nature. Aristotle himself was already more sophisticated, since he insisted that the object of appetite is ‘the apparent good’ and that appetite is therefore something interpretative and selective, a kind of intentional awareness.”[24]

If we unduly isolate “choice” from this context, we’d mistakenly assume that an agent might choose in the same way over time, unless she got some new information. But neither animals nor humans do this. And we are often receiving new information at a rapid pace. What if we changed our choices accordingly? We would behave like distracted children in that candy store. To make it even more obvious that our preferences are not static and cannot be modeled as such: even chosen rewards are perishable. We are, Ainslie explains, designed to tire of them.

The complexities of this motivational system emerged, Ainslie argues, to encourage us to “explore our environment, both when we are young and inept and when we have become master problem solvers.”[25] He explains that if our reward mechanisms operated in strict proportionality to how much of some external stimulus we could get, then a reward rate sufficient to shape our behavior when we were beginners would lead us to rest on our laurels once we had become adept. But instead, as we learn an activity, the reward it generates increases only at first. It then decreases again because our appetite does not build as much before it is satisfied.

Ainslie proposes that we have developed a few methods for coping with this phenomenon. For one, we consider choices in terms of the way one choice will affect later choices. The pleasures of an addict aren’t shared by most of us, because most of us can effectively imagine how things might go as a result. We also test out how we might feel after taking an option. There has come to be a lot of agreement on the basic neural mechanics of choice, and it suggests that “we try out scenes before entering them.”[26] Multiple studies of monkeys entertaining choices in their intraparietal cortex indicate they are engaging in “vicarious trial and error.”[27] When we, too, do this, we are not just considering the route to greater rewards, we “bring up a memory so as to relive a scene, or a plan so as to anticipate one, or another person’s experience so as to model one, and may stay engaged with any of them for a considerable time without necessarily being moved to any actual behavior.”[28]

So this isn’t simple math we are doing. Ainslie explains that “a scenario competes for our engagement against alternatives such as preparing coffee, taking a nap, or imagining something else.” We entertain prospective rewards: dessert now, or the feeling of self-control later. This places a very heavy emphasis on the role of our imagination, and marks a rejection of accounts that describe us in terms of first- and second-order desires. Aristotle similarly saw imagination as key to explaining our behavior, as Jessica Moss’s recent interpretative work carefully points out. She argues that not only “non-rational character virtue” but also practical rationality depends, for Aristotle, on past pleasurable perception.[29]

But this may not yet explain why we do not always respond in predictable ways to external and somehow pre-set rewards. It is also due to what Ainslie has come to recognize as the nature of reward. He has identified a type that cannot be traced back to what is on offer in some external way. It’s a type of reward “which does not strictly depend on events outside of the mind, or on the promise of such events.” He has termed this type of reward “endogenous” in his article “Grasping the Impalpable: The Role of Endogenous Reward in Choices, Including Process Addictions,” where he explains that such rewards are “a class of incentives that do not depend on the prediction of physically privileged environmental events.”[30] Though we may begin with instrumental motives, we learn to cultivate endogenous rewards, “coining” them for ourselves, engaging in a kind of “hedonic management.” A student put it well when she offered that it feels much better to do well on a test than to not study. Her satisfaction at her self-image would be its own “endogenous” reward. Virtue ethicists surely find this story familiar. The notion of endogenous rewards lends support to the idea that we can begin to pursue being nice without finding it particularly easy, or for instrumental reasons, but later find ourselves regarding even our unsuccessful efforts to be nice pleasurable. “Endogenous” reward is key to explaining behavior when it in no way matches outside, assumed standards. It is in play when we follow our own prescriptions, confounding those noting only what we are missing out on. Ainslie thinks this category of rewards is of great explanatory significance. It is a “hypothesis about an area of human economy that has eluded systemic study, and perhaps for that reason has not been recognized by conventional utility theory, even to the extent of being a blank terra incognita.”[31]

Virtue ethicists can use “endogenous reward” to explain how agents can come to take pleasure in doing the right thing, making them less tempted to stray from good behavior than other agents. This has been considered an implausible feature of virtue ethics; we know this because those updating the view sometimes remove it, to better fit assumptions about moral activity being a matter of sacrifice or duty. But even for virtue ethicists, it can be difficult to explain the pleasure of right action in comparison to plain, simple pleasure. And, of course, if behavioral science can explain that reward is something we determine, and so does not simply “come at us” at various levels, perhaps the pleasure of right action can be connected to reductions in the temptation to stray from behavior we consider good. Ainslie does not take up the topic of “right action,” of course, but let us consider whether his framework nevertheless allows us to recognize how we might personally commit to a goal like being nice. This has us turn to the role self-conscious commitments have been discovered to play in our choices, as behavioral scientists study them.

Ainslie’s third suggestion, when it comes to tactics we use to delay our response to immediate rewards, is the use of what he calls “verbalized commitments,” “principles,” or “self-rules.” I want to suggest that these are included under the category contemporary virtue ethicists refer to (and translate) as “rules” or “norms.” (I prefer specifying that they are “endorsed norms,” leaving unendorsed norms to be things we follow without much awareness.) But let me acknowledge that we are not always accustomed to associating virtue ethics with rules, self-generated or not. As Dan Russell explains, “it is a mistake to think that good ethical reasoning can be codified and broken into rules which one can grasp and apply correctly, regardless of one’s particular character and if only one is a quick enough study.”[32] So the self-generated rules (or principles or commitments or norms or standards) that we do discover through practical rationality are not the sort that can be handed to us, ready for use. Rosalind Hursthouse uses the term “v-rules” (virtue rules) to distinguish between these understandings of rules.[33] Nor can they explain ethics on their own. But virtue ethics does take advantage of what contemporary virtue ethicist Lawrence Becker describes as our proclivity for thinking and acting “consistently,” in, as he puts it, an “informal,” “unsystematic,” and “serviceable” sort of way.[34] We identify and support “normative propositions” in these efforts. Ainslie describes, it seems to me, a subcategory of these norms, as he is always talking about their being self-authored. Eudaimonistic virtue ethicists seem to invoke a broader category, leaving room for “orphan” or “unendorsed norms” that still have effects on us, even when we have not identified them.[35]

On the other hand, Ainslie’s terminology can seem a little loose. Ethicists are not very accustomed to thinking of a diet as itself a principle, but in Breakdown of Will Ainslie describes efforts to stop eating “ad lib” as a matter of making a resolution to “decide according to principle.” A consciously endorsed diet would serve as “criteria for deciding which choices constitute lapses,” and so we see that it would work as a personal rule or a principle or a standard, and that neither Ainslie nor the virtue ethicists depend on any over-formalizing of the nature of a rule.[36] The terms (verbalized commitment, principle, norm, rule) are currently interchangeable, as there are no proposals concerning how these might function differently in practical reasoning. We represent these “self-rules” to ourselves in intractable ways. And like the virtue ethicists, Ainslie recognizes no a priori or even unusual motivational force in the personalized rules we develop. They work instead by giving us a way to put our own credibility up as a stake when contracting with ourselves. For example, when we consider dessert, we can also consider being the kind of person who now eats dessert before dinner. This is so even if we weren’t particularly committed to any rule about dessert, or even to being a person of one sort or another. As the research on norms has shown, it can be enough that you see that others have these commitments.[37] For Ainslie, such matters factor into how we’d feel about a choice after the fact, by providing a practical-motivational basis for self-blame. This self-prediction process is recursive: each estimate of future self-control is fed back into the estimating process, thus forming part of the incentive for each choice.

So, though it may already be clear: if you catch yourself “violating your diet,” the cost can’t be thought of only in terms of extra calories. The cost is psychic. As Ainslie puts it: “there are no external sanctions for this contract you’ve violated with yourself” and yet “you have lost credibility with yourself, making you fearful of future risks, and it is natural that you begin to look for reasons to keep the diet violation from actually counting as one.”[38] Add to this some evidence that our personalized rules will likely concern general topics and behaviors, rather than very small concerns: our recursive self-prediction functions not just with respect to some singular choice but also with respect to “bundles” of choices that present themselves to us. Given that these bundles involve rewards that will accrue at various times, any personal rules are going to be designed to apply at some level of generality, to bundle A versus bundle B, and they will encompass the span of time over which the rewards from these bundles arrive.[39]
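The effect of bundling can also be sketched numerically, again under assumed, illustrative amounts and delays: an isolated near-term choice favors the smaller-sooner reward, but summing the hyperbolically discounted values over a recurring series of the same choice tips the preference toward the larger-later option. This is why a personal rule framed at the level of the whole bundle can have motivational force that no single choice supplies.

```python
# Sketch of Ainslie's "bundling": under hyperbolic discounting, evaluating
# a whole series of repeated choices at once can reverse the preference
# shown by any single near-term choice. The amounts (50 vs. 100), delays,
# weekly interval, and k are assumed for illustration.

def hyperbolic_value(amount, delay, k=1.0):
    return amount / (1 + k * delay)

def bundle_value(amount, first_delay, interval, n_choices, k=1.0):
    # Sum the discounted values of the same reward recurring every
    # `interval` time units, starting at `first_delay`.
    return sum(hyperbolic_value(amount, first_delay + interval * i, k)
               for i in range(n_choices))

# One isolated choice: 50 at delay 1 beats 100 at delay 4.
assert hyperbolic_value(50, 1) > hyperbolic_value(100, 4)

# The same choice recurring weekly, bundled four times over: the
# larger-later series is now worth more, so a rule covering the whole
# bundle supports the patient option.
assert bundle_value(50, 1, 7, 4) < bundle_value(100, 4, 7, 4)
```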

This might be enough to suggest that we generate behavioral norms for ourselves, but what makes them stick? How do they come to have any force at all, when they do?

Ainslie argues that the way we commit to certain behaviors is by putting up “pledges” to ourselves to get us to follow personal rules of this sort. The type of self-credibility that works as a pledge in every situation is a lot like the commitments to ethical norms that virtue ethics takes to be so consequential. Ainslie writes: “the more you believe that you will keep (the pledge) the more you can keep it and the more you will subsequently believe; the less you believe you will keep it the less you will keep it, and so on.”[40] As the stakes get higher, you have to “throw in more collateral,” such as the credibility of your intentions generally. Ainslie’s explanation seems to capture the conscious commitment that virtue ethics recommends. But how would this type of self-commitment result in the increased automaticity and fluidity of behavior that classical virtue ethics describes as the result of being successfully committed?

Ainslie explains that “mental processes are learned to the extent that they are rewarded. Hyperbolic discount curves predict that mental processes based on incompatible rewards available at different delays do not simply win or lose acceptance, but interact over time. Processes that are congenial to each other cohere into the same process. Contradictory processes treat each other as strategic enemies. Ineffective ones cease to compete at all.”[41] If we engage in the match between a highlighted, endorsed norm and our motivation, the result could be that our efforts cohere. For example, we intend to be nice and all our efforts to do so become coordinated in our minds. We will not only associate this commitment with ourselves, but with any pleasure from any success we’ve had with it. (More on this pleasure in a moment.) We will begin to think we are successfully nice. This then should make it even easier to continue being nice.

These possibilities are promising to virtue ethics. If personal rules can begin as instrumental, then become pleasurable to follow for their “own sake,” we see, potentially, how virtue becomes its own reward. This, even more than the idea that we can come to take pleasure in good behavior, is the claim that distinguishes classical virtue ethics from all manner of alternative takes on ethics.[42] Once again, even friends of the ancient approach can find the “for its own sake” standard implausible and seem to wave it aside in a bit of second-hand embarrassment. And yet Ainslie’s research provides support to this major component of the “engine” of classical virtue ethics.

Finally, eudaimonistic ethics does not merely expect that we can recognize, internalize, and personalize standards for ourselves, becoming motivated by these standards rather than some further reward, but that we also experience negative psychological feedback when we violate these. Eudaimonistic virtue ethics needs us to be able to recognize when we are acting improperly by our own lights. It should be stressful when we fail to keep our commitments, and traditional virtue ethics anticipates a psychological “kick” if we do. As Aristotle made clear in the Rhetoric, agents are themselves the best test of our norms, standards, and claims, because we experience tensions in these as a form of felt “distress.”[43]

To this account, Ainslie adds some familiar observations: not all people wield responsibility well, and some of us too readily excuse our own failures. If so, do these people then fail even to notice their failures, removing some of the consequences classical virtue ethics anticipates? Even so, Ainslie sees them as experiencing negative feedback, a form of willpower failure in which we lower our expectations for doing what we intend. The “distortions of planning” we engage in to kid ourselves about the various self-rules we’ve evaded seem to be, on Ainslie’s account, as harmful as classical virtue ethics warned, as they reduce our trust in ourselves.[44]

Though he does not take up ethics explicitly, Ainslie seems to leave room for the idea that merely paying attention to virtue can be usefully incorporated into our agency. He writes that “a given train of imagination” might be “instrumentally useful.” A “plan or hypothesis about environmental contingencies, or some other mental process that is not rewarding in its own right”[45] can help us to resist the breaking of personal rules. Suppose you experiment with trying to be nice, and set yourself a personal standard to that effect. Ainslie suggests that having such a rule about your behavior might result in multiple effects. The rule “I will be nice” can curb (if not eliminate) cravings simply because it becomes appealing to keep to it. If you accept this rule, you might begin to see temptations to be cruel or gossip as not just acts isolated from anything else, but instead as capable of setting a precedent. Someone might urge you on by saying an incident of “non-nice” behavior is “harmless,” but even if you recognize that the action has no other bad consequences, you know you will see its relevance as an instance of breaking a rule you accept. Thus rules can aid self-control by silencing opportunities to do something harmless and tempting but inconsistent with how we want to think about ourselves.

If I have suggested that contemporary social and behavioral science might inspect the “engine” of classical virtue ethics and nod in recognition at its Greek-named parts, that is good. But we can also see that, from the perspective of behavioral science, some components of classical virtue ethics will still seem untested and without support. Let me end with this challenge, then, noting that this would be a productive focus for those developing theories of classical virtue ethics. Such theories depend on virtue having a certain appeal as a goal, one that differs from other goals in terms of permanence.

Becker has defended this proposal by suggesting that it is the nature of a commitment to the development of one’s agency that explains this.[46] Ainslie recognizes no such features, and his recent work has focused on the idea that we quickly tire of the stimulus of a repeated experience (unless we avoid this through addiction). To keep ourselves from boredom with a reward, we must continually stoke our appetites. Ainslie points out that there are several ways to do this. It just isn’t clear that any is compatible with virtue, which involves unwavering commitment to doing the right thing.

One way to outsmart ourselves, to not tire ourselves out with any chosen goal, is to pace ourselves so that we create an opportunity for a novel or surprising treat. This “incentive to restrict premature consumption” builds appetite “by using adequately rare occasions as cues for consumption.”[47] This does not readily map onto our descriptions of virtuous behavior. Situations might be novel, but the norms we apply have grown familiar from use. Virtue ethics does not, for example, recommend novel experiences as a remedy for exhaustion at being good.

Another method Ainslie describes is stoking our appetites by breaking a rule or creating some loss. The risk of losing money while gambling might reinvigorate one’s appetite for the regular rewards of self-control with one’s money. Aristotle writes of the mistakes of our youth being crucial, and perhaps the regrets we develop from bad behavior then work in the way Ainslie suggests.[48] But mostly, virtue ethics recommends a steady and regular commitment to doing the right thing. It does not suggest we will grow tired of being good. This may represent the current limits of scientific support for philosophical theories of virtue.

Yet I hope we are leaving classical virtue ethics in a more plausible position than it has often been assumed to be in. This would be good because there are always going to be things that we need ethics, and not just science, for. For example: the justification of good behavior. Without begging questions about what ethics amounts to, will science be able to distinguish between the successful clever smoker (constantly refreshing his appetite and motivating himself thereby) and the successful other-directed and very good friend? At least, I cannot yet foresee how science will pick up norms we follow and analyze them apart from our particular psychologies, testing them against our own behaviors and any norms we find worthy and in conflict. This also isn’t work we do alone, in our minds. This is work we do together, as ethical theorists.

JENNIFER BAKER is Professor of Philosophy at the College of Charleston. Her focus is on updating ancient virtue ethics for use today. Her published articles include “Who is Afraid of a Final End? The Omission of Traditional Practical Rationality from Contemporary Virtue Ethics;” “Virtue Ethics and Practical Guidance;” and “Virtue and Behavior.” She is co-editor of the anthology Virtue and Economics (Oxford, 2016) and has a forthcoming monograph on Stoicism.

Bibliography

  • Ainslie, George. Breakdown of Will. Cambridge: Cambridge University Press, 2001.
  • ——. “Emotion: The Gaping Hole in Economic Theory,” in Economics and the Mind, edited by B. Montero and Mark White, 11–28. London: Routledge, 2006.
  • ——. “Free Will as Recursive Self-Prediction: Does a Deterministic Mechanism Reduce Responsibility?” In Addiction and Responsibility, edited by George Graham and Jeffrey Poland, 55–88. Cambridge, MA: MIT Press, 2011.
  • ——. “Grasping the Impalpable: The Role of Endogenous Reward in Choices, Including Process Addictions.” Inquiry 56.5 (2013): 446–69.
  • ——. “Money as MacGuffin: A Factor in Gambling and Other Process Addiction.” In The Mechanisms of Self-Control: Lessons from Addiction, edited by Neil Levy, 16–37. Oxford: Oxford University Press, 2013.
  • ——. “Palpating the Elephant: Current Theories of Addiction in Light of Hyperbolic Delay Discounting.” In Addiction and Choice: Rethinking the Relationship, edited by Nick Heather and Gabriel Segal, 227–44. Oxford: Oxford University Press, 2016.
  • ——. “Procrastination: The Basic Impulse.” In The Thief of Time: Philosophical Essays on Procrastination, edited by Chrisoula Andreou and Mark White, 11–27. Oxford: Oxford University Press, 2010.
  • ——. “Selfish Goals Must Compete for the Common Currency of Reward.” Behavioral and Brain Sciences 37.1 (2014): 135–36.
  • Alfano, Mark. Character as Moral Fiction. Cambridge: Cambridge University Press, 2013.
  • Annas, Julia. The Morality of Happiness. Oxford: Oxford University Press, 1993.
  • ——. Intelligent Virtue. Oxford: Oxford University Press, 2011.
  • ——. “Virtue Ethics.” In The Oxford Handbook of Ethical Theory, edited by David Copp, 515–36. Oxford: Oxford University Press, 2005.
  • Anscombe, G. E. M. “Modern Moral Philosophy.” Philosophy 33 (1958): 1–19.
  • Aristotle. Nicomachean Ethics. Translated by W. D. Ross. In The Works of Aristotle, edited by W. D. Ross and J. A. Smith. Oxford: Clarendon Press, 1908.
  • Baker, Jennifer. “Virtue and Behavior.” Review of Social Economy 67.1 (2009): 3–24.
  • Bazerman, M. H. “In Favor of Clear Thinking: Incorporating Moral Rules into Wise Cost-Benefit Analysis.” Perspectives on Psychological Science 5.2 (2010): 209–12.
  • Becker, Lawrence. A New Stoicism. Princeton, NJ: Princeton University Press, 2009.
  • Besser-Jones, Lorraine. Eudaimonic Ethics: The Philosophy and Psychology of Living Well. London: Routledge, 2014.
  • Bicchieri, Cristina. “Norms, Conventions, and the Power of Expectations.” In Philosophy of Social Science, edited by Nancy Cartwright and Eleonora Montuschi, 208–31. Oxford: Oxford University Press, 2012.
  • Doris, John. Talking to Our Selves: Reflection, Ignorance, and Agency. Oxford: Oxford University Press, 2015.
  • Doucet, Mathieu, and Rachel MacKinnon. “This Paper Took Too Long to Write: A Puzzle About Overcoming Weakness of Will.” Philosophical Psychology 28.1 (2013): 49–69.
  • Greene, Joshua, and Amitai Shenhav. “Moral Judgments Recruit Domain-general Valuation Mechanisms to Integrate Representations of Probability and Magnitude.” Neuron 67 (2010): 667–77.
  • Hursthouse, Rosalind. On Virtue Ethics. Oxford: Oxford University Press, 2001.
  • ——. “What Does the Aristotelian Phronimos Know?” In Perfecting Virtue: New Essays on Kantian Ethics and Virtue Ethics, edited by Lawrence Jost and Julian Wuerth, 38–57. Cambridge: Cambridge University Press, 2011.
  • ——. “Human Nature and Aristotelian Virtue Ethics.” In Human Nature, edited by Constantine Sandis and M. J. Cain, 169–88. Cambridge: Cambridge University Press, 2012.
  • LeBar, Mark. The Value in Living Well. Oxford: Oxford University Press, 2013.
  • Miller, Christian. Moral Character: An Empirical Theory. Oxford: Oxford University Press, 2013.
  • ——. “The Real Challenge to Virtue Ethics.” In The Philosophy and Psychology of Character and Happiness, edited by Nancy E. Snow and Franco V. Trivigno, 15–34. London: Routledge, 2014.
  • Moss, Jessica. Aristotle on the Apparent Good: Perception, Phantasia, Thought and Desire. Oxford: Oxford University Press, 2012.
  • Nussbaum, Martha. “Non-Relative Virtues: An Aristotelian Approach.” Midwest Studies in Philosophy 13.1 (1987): 32–53.
  • Paxton, J.M. “Patterns of Neural Activity Associated with Honest and Dishonest Moral Decisions.” Proceedings of the National Academy of Sciences USA 106.30 (2009): 12506–11.
  • Prinz, Jesse. “The Normativity Challenge: Cultural Psychology Provides the Real Threat to Virtue Ethics.” Journal of Ethics 13.2–3 (2009): 117–44.
  • Ross, Don. “The Relationship Between Addiction and Reward Bundling: An Experiment Comparing Smokers and Non-smokers.” Addiction 106.2 (2010): 402–9.
  • Russell, Daniel. Practical Intelligence and the Virtues. Oxford: Oxford University Press, 2009.
  • Snow, Nancy. Virtue as Social Intelligence: An Empirically Grounded Theory. London: Routledge, 2010.

  1. Thanks to the students in Chris Surprenant’s class at the University of New Orleans for letting me try out these ideas with them. Thanks also to the Society for Philosophy and Psychology for letting me get feedback on this as a poster. And thank you to my interlocutors at the Notre Dame London conference.
  2. John Doris, Talking to Our Selves: Reflection, Ignorance, and Agency (Oxford: Oxford University Press, 2015).
  3. Christian Miller, Moral Character: An Empirical Theory (Oxford: Oxford University Press, 2013).
  4. Nancy Snow, Virtue as Social Intelligence: An Empirically Grounded Theory (New York: Routledge, 2010).
  5. See Mark Alfano, Character as Moral Fiction (Cambridge: Cambridge University Press, 2013) or Jesse Prinz, “The Normativity Challenge: Cultural Psychology Provides the Real Threat to Virtue Ethics,” Journal of Ethics 13.2–3 (2009): 117–44.
  6. Contemporary eudaimonist accounts of virtue include: Julia Annas, Intelligent Virtue (Oxford: Oxford University Press, 2011); Rosalind Hursthouse, On Virtue Ethics (Oxford: Oxford University Press, 2001); Mark LeBar, The Value in Living Well (Oxford: Oxford University Press, 2013); Daniel Russell, Practical Intelligence and the Virtues (Oxford: Oxford University Press, 2009). There are Stoic accounts as well, for example, Lawrence Becker, A New Stoicism (Princeton, NJ: Princeton University Press, 2011).
  7. Julia Annas, The Morality of Happiness (Oxford: Oxford University Press, 1993).
  8. Julia Annas, “Virtue Ethics,” in The Oxford Handbook of Ethical Theory, edited by David Copp (Oxford: Oxford University Press, 2005), 526.
  9. Lorraine Besser-Jones. Eudaimonic Ethics: The Philosophy and Psychology of Living Well (London: Routledge, 2014).
  10. Annas, The Morality of Happiness.
  11. There are many examples of studies about discrete phenomena that could be used to inform or support virtue ethics. See, for example, M. H. Bazerman, “In Favor of Clear Thinking: Incorporating Moral Rules into Wise Cost-Benefit Analysis,” Perspectives on Psychological Science 5.2 (2010): 209–12, and J. M. Paxton, “Patterns of Neural Activity Associated with Honest and Dishonest Moral Decisions,” Proceedings of the National Academy of Sciences USA 106.30 (2009): 12506–11. Even studies that do not purport to support virtue ethics might be accommodated fairly easily, as soon as the ethical assumptions at odds with virtue ethics are set aside. See Joshua Greene and Amitai Shenhav, “Moral Judgments Recruit Domain-general Valuation Mechanisms to Integrate Representations of Probability and Magnitude,” Neuron 67 (2010): 667–77.
  12. I want to point out a different approach with similar aims. Rosalind Hursthouse has provided a list of empirical claims necessary to the viability of a eudaimonistic virtue ethic. Her focus, however, is on how there is no evidence yet (not in evolutionary biology or even in armchair speculations) to show that our human nature is incompatible with virtue. See “Human Nature and Aristotelian Virtue Ethics,” in Human Nature, edited by Constantine Sandis and M. J. Cain (Cambridge: Cambridge University Press, 2012), 169–88.
  13. George Ainslie, “De Gustibus Disputare: Hyperbolic Delay Discounting Integrates Five Approaches to Impulsive Choice,” Journal of Economic Methodology 24.2 (2017): 166–89. See also George Ainslie, “Palpating the Elephant: Current Theories of Addiction in Light of Hyperbolic Delay Discounting,” in Addiction and Choice: Rethinking the Relationship, edited by Nick Heather and Gabriel Segal (Oxford: Oxford University Press, 2016), 227–44.
  14. His influential book Break-Down of Will (Cambridge: Cambridge University Press) was published in 2001. As just one example of his influence in philosophy, see Mathieu Doucet and Rachel MacKinnon, “This Paper Took Too Long to Write: A Puzzle About Overcoming Weakness of Will,” Philosophical Psychology 28.1 (2013): 49–69.
  15. George Ainslie, “Procrastination: The Basic Impulse,” in The Thief of Time: Philosophical Essays on Procrastination, edited by Chrisoula Andreou and Mark White (Oxford: Oxford University Press, 2010), 11–27.
  16. Doris, Talking to Our Selves, 49.
  17. It may seem pedantic to insist on this, but the ancient virtue ethicists were loath to specify examples of good behavior precisely because the theory always had to be consulted. Perhaps more vividly, Aristotle was only willing to give a few examples of behavior that is reliably wrong (murder, theft, adultery). See Christian Miller, “The Real Challenge to Virtue Ethics,” in The Philosophy and Psychology of Character and Happiness, edited by Nancy E. Snow and Franco V. Trivigno (London: Routledge, 2014), 19.
  18. Annas, Intelligent Virtue, 122.
  19. George Ainslie, “Money as MacGuffin: A Factor in Gambling and Other Process Addictions,” in The Mechanisms of Self-Control: Lessons from Addiction, edited by Neil Levy (Oxford: Oxford University Press, 2013), 20.
  20. Ibid.
  21. Ibid., 17.
  22. Ibid.
  23. George Ainslie, “Selfish Goals Must Compete for the Common Currency of Reward,” Behavioral and Brain Sciences 37.2 (2014): 135–36.
  24. Martha Nussbaum, “Non-Relative Virtues: An Aristotelian Approach,” Midwest Studies in Philosophy 13.1 (1987): 32–53.
  25. Ainslie, “Money as MacGuffin,” 22.
  26. Jessica Moss argues for the Aristotelian emphasis on the role “phantasia” or imagination plays in our practical rationality. I mention her work again a bit later. See Ainslie, “Money as MacGuffin,” 26.
  27. Ibid., 16–37.
  28. Ibid., 26.
  29. Jessica Moss, Aristotle on the Apparent Good: Perception, Phantasia, Thought and Desire (Oxford: Oxford University Press, 2012), 235.
  30. George Ainslie, “Grasping the Impalpable: The Role of Endogenous Reward in Choices, Including Process Addictions,” Inquiry 56.5 (2013): 446–69. 
  31. Ainslie, “Money as MacGuffin.”
  32. Daniel Russell, Practical Intelligence and the Virtues (Oxford: Oxford University Press, 2009), 23.
  33. Rosalind Hursthouse, “What Does the Aristotelian Phronimos Know?,” in Perfecting Virtue: New Essays on Kantian Ethics and Virtue Ethics, edited by Lawrence Jost and Julian Wuerth (Cambridge: Cambridge University Press, 2011), 47.
  34. Becker, A New Stoicism, 64, 111.
  35. Lawrence Becker slowly walks us through the way norms internal to one’s endeavors can (or cannot) be made compatible with other norms we self-consciously endorse. See A New Stoicism, 84, and 128–31 for the argument for virtue made with reference to the role of norms.
  36. Ainslie, Break-Down of Will, 88. At 1164b30-1165a5 of the Nicomachean Ethics, Aristotle seems to describe decision-making with the use of some personalized rules, and if we think of the eudaimonistic tradition as including the Stoics, they make us even more comfortable with the notion that practical rationality incorporates rules. See Becker, A New Stoicism, 56–59, and Annas, The Morality of Happiness, 107.
  37. Cristina Bicchieri, “Norms, Conventions, and the Power of Expectations,” in Philosophy of Social Science: A New Introduction, edited by Nancy Cartwright and Eleonora Montuschi (New York: Oxford University Press, 2012), 208–31.
  38. Ainslie, “Money as MacGuffin.”
  39. These claims about bundling are currently being tested. See also Don Ross, “The Relationship Between Addiction and Reward Bundling: An Experiment Comparing Smokers and Non-smokers,” Addiction 106.2 (2010): 402–9.
  40. George Ainslie, “Emotion: The Gaping Hole in Economic Theory,” in Economics and the Mind, edited by B. Montero and Mark White (London: Routledge, 2006), 26.
  41. George Ainslie, “Free Will as Recursive Self-Prediction: Does a Deterministic Mechanism Reduce Responsibility?,” in Addiction and Responsibility, edited by George Graham and Jeffrey Poland (Cambridge, MA: MIT Press, 2011), 64.
  42. Jennifer Baker, “Virtue and Behavior,” Review of Social Economy 67.1 (2009): 3–24.
  43. Moss, Aristotle on the Apparent Good, 76.
  44. Ainslie, “Money as MacGuffin,” 27.
  45. Ibid., 26.
  46. Becker, A New Stoicism, 130.
  47. Ibid.
  48. A reviewer helpfully points out that there is substantial support for this claim in the literature on the “sociology of failure.”